Promptify and Prompture

The two libraries are complementary: Promptify focuses on prompt engineering and versioning workflows, while Prompture specializes in structured output validation and comparative model testing — capabilities that would naturally be used together in a production LLM pipeline.

Promptify — Score: 74 (Verified) · Prompture — Score: 46 (Emerging)

| Metric | Promptify | Prompture |
| --- | --- | --- |
| Maintenance | 16/25 | 13/25 |
| Adoption | 14/25 | 15/25 |
| Maturity | 25/25 | 18/25 |
| Community | 19/25 | 0/25 |
| Stars | 4,572 | 9 |
| Forks | 361 | — |
| Downloads | 62 | 3,845 |
| Commits (30d) | 2 | 0 |
| Language | Python | Python |
| License | Apache-2.0 | MIT |

No dependents · No risk flags

About Promptify

promptslab/Promptify

Prompt Engineering | Prompt Versioning | Use GPT or other prompt-based models to get structured output. Join the project's Discord for prompt engineering, LLMs, and other recent research.

Provides task-specific NLP classes (NER, classification, QA, relation extraction) that return type-safe Pydantic models instead of raw text, eliminating parsing brittleness. Abstracts away LLM provider differences through LiteLLM, allowing seamless switching between OpenAI, Anthropic, Ollama, and 100+ other backends with a single model string. Includes built-in evaluation metrics (precision, recall, F1) and cost tracking, plus batch/async processing for production workloads.
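The structured-output pattern described above can be sketched in plain Python. Everything in this snippet is illustrative: `Entity`, `parse_ner_output`, and the stubbed model reply are hypothetical stand-ins for the idea of returning typed objects instead of raw text, not Promptify's actual API.

```python
import json
from dataclasses import dataclass


@dataclass(frozen=True)
class Entity:
    """Typed container for one extracted entity (hypothetical stand-in for a Pydantic model)."""
    text: str
    label: str


def parse_ner_output(raw: str) -> list[Entity]:
    """Parse an LLM's JSON reply into typed objects, failing loudly instead of returning raw text."""
    records = json.loads(raw)  # raises ValueError if the model replied with non-JSON
    return [Entity(text=r["text"], label=r["label"]) for r in records]


# Stubbed model reply, as if produced by a task-specific NER prompt.
reply = '[{"text": "Paris", "label": "LOC"}, {"text": "Marie Curie", "label": "PER"}]'
entities = parse_ner_output(reply)
```

Pushing parsing through a typed boundary like this is what removes the "parsing brittleness" the blurb mentions: a malformed reply raises at the boundary rather than leaking free-form text into downstream code.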

About Prompture

jhd3197/Prompture

Prompture is an API-first library for requesting structured JSON (or any other structured format) from LLMs, validating it against a schema, and running comparative tests between models.

Built on Pydantic models with native schema validation, Prompture supports 12 LLM providers (OpenAI, Claude, Groq, Ollama, etc.) through a unified `provider/model` routing system. It includes TOON input conversion for 45-60% token savings, per-field stepwise extraction with smart type coercion, caching backends (memory/SQLite/Redis), and automatic JSON repair via secondary LLM passes. The tool use layer simulates function calling for providers without native support, while batch testing enables side-by-side model comparison with integrated usage tracking and cost calculation.
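Two of the mechanisms described above — the `provider/model` routing string and the JSON-repair fallback — can be sketched in a few lines. This is a conceptual sketch only: `split_route` and `repair_json` are hypothetical helpers, and the crude regex repair stands in for what Prompture does with a secondary LLM pass.

```python
import json
import re


def split_route(route: str) -> tuple[str, str]:
    """Split a 'provider/model' routing string into its two parts."""
    provider, _, model = route.partition("/")
    if not provider or not model:
        raise ValueError(f"expected 'provider/model', got {route!r}")
    return provider, model


def repair_json(raw: str) -> dict:
    """Parse model output; on failure, run a repair pass.

    Here the repair is a crude regex that extracts the first {...} span;
    per the description, Prompture instead repairs via a secondary LLM call.
    """
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        match = re.search(r"\{.*\}", raw, re.DOTALL)
        if match is None:
            raise
        return json.loads(match.group(0))


provider, model = split_route("ollama/llama3")
data = repair_json('Sure! Here is the JSON: {"name": "Ada", "age": 36}')
```

The repair fallback matters in practice because many models wrap JSON in conversational filler; validating-then-repairing keeps the pipeline's contract (a schema-conformant dict) intact without retrying the whole request.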

Scores updated daily from GitHub, PyPI, and npm data.