Promptify and ppromptor
These are **complements**: Promptify provides a framework for versioning and structuring prompts with type-safe output parsing, while Prompt-Promptor automates prompt generation itself. One manages prompts you write, the other generates them for you, so they could be used together in a pipeline.
About Promptify
promptslab/Promptify
Prompt Engineering | Prompt Versioning | Use GPT or other prompt-based models to get structured output.
Provides task-specific NLP classes (NER, classification, QA, relation extraction) that return type-safe Pydantic models instead of raw text, eliminating parsing brittleness. Abstracts away LLM provider differences through LiteLLM, allowing seamless switching between OpenAI, Anthropic, Ollama, and 100+ other backends with a single model string. Includes built-in evaluation metrics (precision, recall, F1) and cost tracking, plus batch/async processing for production workloads.
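To make the structured-output idea concrete, here is a minimal, self-contained sketch of the pattern described above: validating raw LLM text into typed records instead of passing strings around. The `Entity` class and `parse_ner_output` helper are illustrative names, not Promptify's actual API (which wraps this behind its task-specific classes).

```python
# Hypothetical sketch of type-safe structured output from an NER-style task.
# Entity and parse_ner_output are illustrative, not Promptify's real API.
from dataclasses import dataclass
from typing import List
import json

@dataclass
class Entity:
    text: str
    label: str

def parse_ner_output(raw: str) -> List[Entity]:
    """Validate raw LLM JSON into typed Entity records; raises on malformed output."""
    items = json.loads(raw)
    return [Entity(text=item["text"], label=item["label"]) for item in items]

# A well-formed model response parses into typed objects...
raw = '[{"text": "Paris", "label": "LOC"}, {"text": "Marie Curie", "label": "PER"}]'
entities = parse_ner_output(raw)

# ...while malformed output fails loudly at the boundary instead of
# silently propagating a bad string downstream.
```

The point of the pattern is that parsing brittleness is handled once, at the boundary, rather than in every downstream consumer.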
About ppromptor
pikho/ppromptor
Prompt-Promptor is a Python library for automatically generating prompts using LLMs
Employs a three-agent architecture (Proposer, Evaluator, Analyzer) that iteratively refines prompts through collaborative feedback loops and human expert input. Supports both proprietary APIs (OpenAI) and open-source models (LLaMA, WizardLM), enabling weaker models to be guided by stronger LLMs. Provides a Streamlit-based web interface with experiment tracking and side-by-side prompt comparison for managing the prompt engineering workflow.
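The three-agent loop described above can be sketched as follows. All agent internals here are stubs with made-up heuristics; in ppromptor each role would be backed by an LLM call, and the function names are illustrative, not the library's API.

```python
# Minimal sketch of a Proposer / Evaluator / Analyzer refinement loop.
# All three roles are stubbed; in ppromptor each is an LLM-backed agent.

def proposer(feedback: str) -> str:
    # Stub: would ask an LLM to draft a candidate prompt, conditioned on feedback.
    if feedback:
        return f"Summarize the text concisely. Hint: {feedback}"
    return "Summarize the text."

def evaluator(prompt: str) -> float:
    # Stub: would score the candidate against a labeled dev set.
    return min(1.0, 0.5 + 0.1 * prompt.count("concisely"))

def analyzer(prompt: str, score: float) -> str:
    # Stub: would ask an LLM to explain the score and suggest a revision.
    return "be concise" if score < 0.6 else "keep current style"

best_prompt, best_score, feedback = "", 0.0, ""
for _ in range(3):  # iterative refinement rounds
    candidate = proposer(feedback)
    score = evaluator(candidate)
    feedback = analyzer(candidate, score)
    if score > best_score:
        best_prompt, best_score = candidate, score
```

The design point is the feedback loop itself: the Analyzer's critique feeds the next Proposer round, so weaker candidate prompts are progressively revised rather than regenerated from scratch.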