AutoPrompt and promptolution
AutoPrompt and promptolution take different approaches to the same problem of automating prompt engineering: AutoPrompt centers on calibration-driven tuning via intent-based prompt calibration, while promptolution emphasizes modularity and framework flexibility.
About AutoPrompt
Eladlev/AutoPrompt
A framework for prompt tuning using Intent-based Prompt Calibration
Implements Intent-based Prompt Calibration through iterative synthetic data generation and LLM-driven annotation to identify edge cases and refine prompts. Integrates with LangChain, Weights & Biases, and Argilla for human-in-the-loop feedback, supporting classification, generation, and moderation tasks with configurable budget limits (typically <$1 with GPT-4 Turbo).
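The calibration loop described above can be sketched in plain Python. This is not AutoPrompt's actual API; all function names (`generate_edge_cases`, `annotate`, `refine`) are hypothetical stand-ins for the LLM calls the framework would make:

```python
def generate_edge_cases(prompt: str, n: int) -> list[str]:
    # Hypothetical: an LLM synthesizes inputs likely to break the current prompt.
    return [f"edge case {i} targeting: {prompt!r}" for i in range(n)]

def annotate(sample: str) -> str:
    # Hypothetical: an LLM (or a human via a tool like Argilla) labels each sample.
    return "fail" if "0" in sample else "pass"

def refine(prompt: str, failures: list[str]) -> str:
    # Hypothetical: an LLM rewrites the prompt to handle the observed failures.
    return prompt + f" (handle {len(failures)} edge cases)"

def calibrate(prompt: str, iterations: int = 3, batch: int = 4) -> str:
    """Iteratively generate synthetic data, annotate it, and refine the prompt."""
    for _ in range(iterations):
        samples = generate_edge_cases(prompt, batch)
        failures = [s for s in samples if annotate(s) == "fail"]
        if not failures:
            break  # the prompt already handles all synthetic edge cases
        prompt = refine(prompt, failures)
    return prompt

print(calibrate("Classify the message as spam or not spam."))
```

Each iteration only costs a handful of model calls, which is how a run can stay under a small budget cap.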
About promptolution
automl/promptolution
A unified, modular framework for prompt optimization
Supports multiple state-of-the-art prompt optimization algorithms (CAPO, EvoPrompt, OPRO) with a unified LLM backend spanning API-based models, local inference via vLLM/transformers, and cluster deployments. Built-in response caching, parallelized inference, and detailed token tracking enable cost-efficient, reproducible large-scale experiments. Decomposes optimization into modular components—Task, Predictor, LLM, and Optimizer—allowing researchers to customize any stage without rigid abstractions.
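The Task/Predictor/LLM/Optimizer decomposition can be illustrated with a minimal sketch. This is not promptolution's real API; the class shapes, the toy deterministic `toy_llm`, and the accuracy-based selection are all assumptions made for illustration:

```python
class Task:
    """Holds (input, expected output) pairs used to score prompts."""
    def __init__(self, examples: list[tuple[str, str]]):
        self.examples = examples

class Predictor:
    """Combines a prompt with an input and queries the LLM backend."""
    def __init__(self, llm):
        self.llm = llm
    def predict(self, prompt: str, x: str) -> str:
        return self.llm(f"{prompt}\n{x}")

class Optimizer:
    """Selects the candidate prompt with the best task accuracy."""
    def __init__(self, task: Task, predictor: Predictor):
        self.task, self.predictor = task, predictor
    def score(self, prompt: str) -> float:
        hits = sum(self.predictor.predict(prompt, x) == y
                   for x, y in self.task.examples)
        return hits / len(self.task.examples)
    def optimize(self, candidates: list[str]) -> str:
        return max(candidates, key=self.score)

def toy_llm(text: str) -> str:
    # Hypothetical deterministic stand-in for an API or vLLM backend.
    label = "positive" if "good" in text else "negative"
    return label.upper() if "uppercase" in text.lower() else label

task = Task([("good day", "POSITIVE"), ("bad day", "NEGATIVE")])
optimizer = Optimizer(task, Predictor(toy_llm))
best = optimizer.optimize([
    "Classify sentiment:",
    "Classify sentiment, answer in UPPERCASE:",
])
print(best)
```

Because each stage is a separate object, any one of them (e.g. the LLM backend or the candidate-selection strategy) can be swapped without touching the others, which is the point of avoiding rigid abstractions.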