AutoPrompt and PromptAgent
PromptAgent and AutoPrompt are competing automated prompt optimization tools that take different approaches: PromptAgent uses strategic planning with language models, while AutoPrompt relies on intent-based prompt calibration.
About AutoPrompt
Eladlev/AutoPrompt
A framework for prompt tuning using Intent-based Prompt Calibration
Implements Intent-based Prompt Calibration through iterative synthetic data generation and LLM-driven annotation to identify edge cases and refine prompts. Integrates with LangChain, Weights & Biases, and Argilla for human-in-the-loop feedback, supporting classification, generation, and moderation tasks with configurable budget limits (typically <$1 with GPT-4 Turbo).
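The calibration loop described above can be sketched in miniature. This is a hypothetical illustration, not AutoPrompt's actual API: every LLM call (synthetic case generation, annotation, prompt refinement) is replaced by a deterministic Python stub, and all function names are invented for the example.

```python
def generate_synthetic_cases(prompt, n=4):
    # Stub: a real system would ask an LLM for challenging inputs
    # targeting the current prompt.
    return [f"synthetic input {i} for: {prompt[:30]}" for i in range(n)]

def annotate(case):
    # Stub annotator: a deterministic parity rule stands in for an
    # LLM-driven (or Argilla human-in-the-loop) label.
    return "positive" if sum(map(ord, case)) % 2 == 0 else "negative"

def predict(prompt, case):
    # Stub predictor: always answers "positive", so odd-parity cases
    # become the edge cases the loop surfaces.
    return "positive"

def refine(prompt, errors):
    # Stub refiner: a real system would ask an LLM to rewrite the
    # prompt so it handles the misclassified edge cases.
    return prompt + f" (calibrated against {len(errors)} edge cases)"

def calibrate(prompt, iterations=3):
    # Iteratively: generate cases, annotate, collect disagreements,
    # and refine the prompt on the resulting edge cases.
    for _ in range(iterations):
        cases = generate_synthetic_cases(prompt)
        errors = [c for c in cases if predict(prompt, c) != annotate(c)]
        if not errors:
            break
        prompt = refine(prompt, errors)
    return prompt
```

In the real framework each stub is a budgeted LLM call, which is why total optimization cost can stay under about $1 with GPT-4 Turbo.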
About PromptAgent
maitrix-org/PromptAgent
This is the official repo for "PromptAgent: Strategic Planning with Language Models Enables Expert-level Prompt Optimization". PromptAgent is a novel automatic prompt optimization method that autonomously crafts expert-level prompts, i.e., prompts comparable in quality to those handcrafted by experts.
Employs Monte Carlo Tree Search (MCTS) to strategically sample model errors and iteratively refine prompts through reward simulation, unifying prompt sampling and evaluation in a single principled framework. Supports diverse model backends including OpenAI APIs, PaLM, Hugging Face text generation models, and vLLM for local inference, with YAML-based configuration for flexible experimentation. Integrates with BIG-bench tasks and the LLM Reasoners library, enabling optimization across reasoning and knowledge-intensive domains.
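To make the MCTS framing concrete, here is a minimal, self-contained sketch of tree search over prompts. It is not PromptAgent's implementation: the reward function and the error-driven expansion step (which in the real system sample model errors and ask an LLM for corrective edits) are replaced by stubs, and all names are hypothetical.

```python
import math
import random

class Node:
    """A tree node holding a candidate prompt and MCTS statistics."""
    def __init__(self, prompt, parent=None):
        self.prompt = prompt
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0

def reward(prompt):
    # Stub reward: a real system would score the prompt's accuracy on a
    # held-out set with the target LLM; here, longer prompts score higher.
    return min(1.0, len(prompt) / 100)

def expand(node):
    # Stub expansion: a real system samples model errors under this prompt
    # and asks an LLM for error-correcting edits.
    for i in range(2):
        node.children.append(Node(node.prompt + f" Hint {i}.", node))

def uct(child, parent, c=1.4):
    # Upper Confidence bound for Trees: balance exploitation vs. exploration.
    if child.visits == 0:
        return float("inf")
    return (child.value / child.visits
            + c * math.sqrt(math.log(parent.visits) / child.visits))

def search(root_prompt, iterations=20):
    root = Node(root_prompt)
    for _ in range(iterations):
        node = root
        # Selection: descend by UCT until reaching a leaf.
        while node.children:
            node = max(node.children, key=lambda ch: uct(ch, node))
        # Expansion and simulation on the selected leaf.
        expand(node)
        leaf = random.choice(node.children)
        r = reward(leaf.prompt)
        # Backpropagation: update statistics along the path to the root.
        while leaf:
            leaf.visits += 1
            leaf.value += r
            leaf = leaf.parent
    # Return the most-visited first-level edit as the refined prompt.
    best = max(root.children, key=lambda ch: ch.visits)
    return best.prompt
```

The unifying idea is that prompt sampling (expansion) and prompt evaluation (reward simulation) live in one search procedure, so promising edit paths get more of the evaluation budget.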