promptulate and PromptAgent

The two projects are complementary: Promptulate provides a general LLM automation framework for building agent applications, while PromptAgent offers a specialized prompt-optimization technique that could be integrated into Promptulate workflows to improve prompt quality during agent development.

|                | promptulate       | PromptAgent                        |
|----------------|-------------------|------------------------------------|
| Overall score  | 65 (Established)  | 47 (Emerging)                      |
| Maintenance    | 10/25             | 2/25                               |
| Adoption       | 16/25             | 10/25                              |
| Maturity       | 25/25             | 16/25                              |
| Community      | 14/25             | 19/25                              |
| Stars          | 592               | 351                                |
| Forks          | 39                | 46                                 |
| Downloads      | 310               | —                                  |
| Commits (30d)  | 0                 | 0                                  |
| Language       | Python            | Python                             |
| License        | Apache-2.0        | Apache-2.0                         |
| Risk flags     | None              | Stale 6m, No Package, No Dependents |

About promptulate

Undertone0809/promptulate

🚀 Lightweight large language model automation and autonomous language agent development framework. Build your LLM agent application in a Pythonic way!

Leverages litellm for unified model abstraction, supporting 25+ providers (OpenAI, Anthropic, Gemini, local Ollama, etc.) through a single `pne.chat()` interface. Provides specialized agent types (WebAgent, ToolAgent, CodeAgent) with atomized planners and lifecycle hooks for custom code injection, and converts plain Python functions directly into tools without wrapper boilerplate. Integrates LangChain tools, and includes prompt caching, streaming/async support, and Streamlit components for rapid prototyping.
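A hedged sketch of what that unified interface looks like in practice. The `pne.chat()` call below reflects promptulate's documented entry point, but the specific model string and keyword usage are assumptions; the helper names (`get_weather`, `ask_with_tool`) are illustrative, not part of the library:

```python
# Sketch of promptulate's single-entry-point chat interface.
# Assumptions: `pip install promptulate`, a provider API key in the
# environment (e.g. OPENAI_API_KEY), and litellm-style model strings.

def get_weather(city: str) -> str:
    """A plain Python function: promptulate can pass it as a tool directly,
    with no wrapper boilerplate."""
    return f"The weather in {city} is sunny."

def ask_with_tool(question: str) -> str:
    """Route a question through pne.chat(), letting the agent call the tool.

    The model string follows litellm conventions, so "gpt-4o" could be
    swapped for "claude-3-5-sonnet-20240620", "ollama/llama3", and so on,
    without changing any other code.
    """
    import promptulate as pne  # imported here so the sketch loads without the package

    return pne.chat(
        messages=question,
        model="gpt-4o",        # any of the 25+ litellm-backed providers
        tools=[get_weather],   # plain function handed over as a tool
    )
```

The design point is that swapping providers is a one-string change, since litellm normalizes the per-provider APIs behind `pne.chat()`.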

About PromptAgent

maitrix-org/PromptAgent

This is the official repo for "PromptAgent: Strategic Planning with Language Models Enables Expert-level Prompt Optimization". PromptAgent is a novel automatic prompt optimization method that autonomously crafts prompts equivalent in quality to those handcrafted by experts, i.e., expert-level prompts.

Employs Monte Carlo Tree Search (MCTS) to strategically sample model errors and iteratively refine prompts through reward simulation, unifying prompt sampling and evaluation in a single principled framework. Supports diverse model backends including OpenAI APIs, PaLM, Hugging Face text generation models, and vLLM for local inference, with YAML-based configuration for flexible experimentation. Integrates with BIG-bench tasks and the LLM Reasoners library, enabling optimization across reasoning and knowledge-intensive domains.
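To make the search-over-refinements idea concrete, here is a small self-contained sketch. PromptAgent's actual method uses MCTS with UCT selection and LLM-generated error feedback; this stand-in substitutes a greedy beam expansion and a stub keyword-based reward purely to illustrate the loop of "expand candidate prompts from observed gaps, score them, keep the best." All function names (`evaluate`, `refine`, `tree_search`) and the fix list are illustrative, not PromptAgent's API:

```python
# Toy stand-in for PromptAgent-style prompt search: nodes are candidate
# prompts, children are error-driven refinements, rewards come from a
# (stubbed) evaluator. Real PromptAgent uses MCTS + an LLM judge instead.

def evaluate(prompt: str) -> float:
    """Stub reward: fraction of desired instruction keywords present."""
    p = prompt.lower()
    keywords = ("step by step", "cite", "concise")
    return sum(k in p for k in keywords) / len(keywords)

def refine(prompt: str) -> list[str]:
    """Stub expansion: propose children by appending fixes not yet applied."""
    fixes = [" Think step by step.", " Cite your sources.", " Be concise."]
    return [prompt + f for f in fixes
            if f.strip(" .").lower() not in prompt.lower()]

def tree_search(root: str, depth: int = 3) -> tuple[str, float]:
    """Greedy beam search standing in for MCTS: expand, score, keep top-2."""
    best_prompt, best_reward = root, evaluate(root)
    frontier = [root]
    for _ in range(depth):
        children = [c for p in frontier for c in refine(p)]
        if not children:
            break
        children.sort(key=evaluate, reverse=True)
        frontier = children[:2]  # beam of 2, in place of UCT selection
        if evaluate(frontier[0]) > best_reward:
            best_prompt, best_reward = frontier[0], evaluate(frontier[0])
    return best_prompt, best_reward
```

Starting from a bare prompt, the search accumulates the missing instructions over successive levels until the reward saturates; in the real system, the reward is simulated from model performance on held-out task examples rather than keyword matching.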

Scores updated daily from GitHub, PyPI, and npm data.