AutoPrompt and promptolution

Intent-based prompt calibration and modular prompt optimization represent competing approaches to the same problem—automating prompt engineering—with AutoPrompt focusing on calibration-driven tuning while Promptolution emphasizes modularity and framework flexibility.

Metric           AutoPrompt        promptolution
Overall score    51 (Established)  46 (Emerging)
Maintenance      6/25              10/25
Adoption         10/25             9/25
Maturity         16/25             16/25
Community        19/25             11/25
Stars            2,947             114
Forks            261               8
Downloads        (none listed)     (none listed)
Commits (30d)    0                 0
Language         Python            Python
License          Apache-2.0        Apache-2.0
Package          none published    none published
Dependents       none              none

About AutoPrompt

Eladlev/AutoPrompt

A framework for prompt tuning using Intent-based Prompt Calibration

Implements Intent-based Prompt Calibration through iterative synthetic data generation and LLM-driven annotation to identify edge cases and refine prompts. Integrates with LangChain, Weights & Biases, and Argilla for human-in-the-loop feedback, supporting classification, generation, and moderation tasks with configurable budget limits (typically <$1 with GPT-4 Turbo).
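The calibration loop described above can be sketched as follows. This is a minimal illustration of the iterate-generate-annotate-refine idea, not AutoPrompt's actual API: the `llm_*` helpers are hypothetical stubs standing in for real LLM calls (or Argilla human feedback), and all names are invented for the example.

```python
# Hypothetical sketch of an Intent-based Prompt Calibration loop.
# The llm_* helpers below are stubs; a real system would call an LLM.

def llm_generate_cases(prompt, n):
    # Stub: a real system would ask an LLM for challenging synthetic inputs.
    return [f"synthetic case {i} for task: {prompt}" for i in range(n)]

def llm_annotate(case):
    # Stub: a real system would label cases via an LLM or human annotators.
    return "fail" if "3" in case else "pass"

def llm_refine(prompt, failures):
    # Stub: a real system would ask an LLM to rewrite the prompt so the
    # observed edge cases are handled correctly.
    return prompt + f" [refined for {len(failures)} edge case(s)]"

def calibrate(prompt, iterations=3, cases_per_round=5):
    """Iteratively generate synthetic cases, find failures, refine the prompt."""
    history = [prompt]
    for _ in range(iterations):
        cases = llm_generate_cases(prompt, cases_per_round)
        failures = [c for c in cases if llm_annotate(c) == "fail"]
        if not failures:  # converged: no edge cases found this round
            break
        prompt = llm_refine(prompt, failures)
        history.append(prompt)
    return prompt, history

final_prompt, history = calibrate("Classify the sentiment of the review.")
```

A configurable budget limit, as in AutoPrompt, would amount to capping the number of iterations or total LLM calls in this loop.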

About promptolution

automl/promptolution

A unified, modular framework for prompt optimization

Supports multiple state-of-the-art prompt optimization algorithms (CAPO, EvoPrompt, OPRO) with a unified LLM backend spanning API-based models, local inference via vLLM/transformers, and cluster deployments. Built-in response caching, parallelized inference, and detailed token tracking enable cost-efficient, reproducible large-scale experiments. Decomposes optimization into modular components—Task, Predictor, LLM, and Optimizer—allowing researchers to customize any stage without rigid abstractions.
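The Task/Predictor/LLM/Optimizer decomposition can be sketched as below. This is an illustrative mock of the design idea, assuming invented class names and signatures; it is not promptolution's real API, and the `MockLLM` stands in for its API/vLLM backends.

```python
# Hypothetical sketch of a modular prompt-optimization pipeline:
# Task (data + metric), Predictor (prompt -> predictions via an LLM backend),
# LLM (backend with response caching), Optimizer (search over prompts).
from dataclasses import dataclass

class MockLLM:
    """Stand-in for an API/vLLM backend, with the response caching
    mentioned above so repeated evaluations are free."""
    def __init__(self):
        self.cache = {}
        self.calls = 0
    def complete(self, text):
        if text not in self.cache:
            self.calls += 1
            self.cache[text] = "positive" if "good" in text else "negative"
        return self.cache[text]

@dataclass
class Task:
    examples: list  # (input, label) pairs
    def score(self, predictions):
        labels = [y for _, y in self.examples]
        return sum(p == y for p, y in zip(predictions, labels)) / len(labels)

@dataclass
class Predictor:
    llm: MockLLM
    def predict(self, prompt, task):
        return [self.llm.complete(f"{prompt}\n{x}") for x, _ in task.examples]

class GreedyOptimizer:
    """Toy optimizer: keep the best-scoring candidate prompt. A real
    optimizer (CAPO, EvoPrompt, OPRO) would also propose new candidates."""
    def optimize(self, candidates, predictor, task):
        return max(candidates,
                   key=lambda p: task.score(predictor.predict(p, task)))

task = Task(examples=[("good movie", "positive"), ("bad plot", "negative")])
llm = MockLLM()
best = GreedyOptimizer().optimize(
    ["Classify sentiment:", "Label the review:"], Predictor(llm), task)
```

Because each stage is a separate component, swapping in a different optimizer, metric, or backend means replacing one class rather than rewriting the pipeline, which is the design point the description makes.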

Scores updated daily from GitHub, PyPI, and npm data.