JacobHuang91/prompt-refiner

🚀 Lightweight Python library for building production LLM applications with smart context management and automatic token optimization. Save 10-20% on API costs while fitting RAG docs, chat history, and prompts into your token budget.

Quality score: 30 / 100 (Emerging)

Provides modular pipelines for cleaning, compressing, and scrubbing text data—including HTML stripping, deduplication, and PII redaction—with composable operations via pipe syntax. Includes specialized modules for AI agent workflows: `SchemaCompressor` achieves 57% average token reduction on function definitions, and `ResponseCompressor` optimizes tool outputs, targeting frameworks like OpenAI's function calling while integrating seamlessly with completion and chat APIs.
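The pipe-composition pattern described above can be illustrated with a minimal, self-contained sketch. The `Op` class and the operation names below are hypothetical stand-ins for illustration only, not prompt-refiner's actual API, which isn't shown on this page:

```python
import re

class Op:
    """A composable text operation; `|` chains ops left to right."""
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Compose: apply self first, then the right-hand op.
        return Op(lambda text: other.fn(self.fn(text)))

    def __call__(self, text):
        return self.fn(text)

# Hypothetical operations mirroring the features described above.
strip_html = Op(lambda t: re.sub(r"<[^>]+>", "", t))           # HTML stripping
dedupe_lines = Op(lambda t: "\n".join(dict.fromkeys(t.splitlines())))  # deduplication
redact_emails = Op(lambda t: re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[EMAIL]", t))  # PII redaction

pipeline = strip_html | dedupe_lines | redact_emails

raw = "<p>Contact: alice@example.com</p>\n<p>Contact: alice@example.com</p>"
print(pipeline(raw))  # → Contact: [EMAIL]
```

The `__or__` overload is what makes `a | b | c` read as a left-to-right cleaning pipeline; each stage stays a plain callable that can also be used on its own.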

No package published · No dependents

Maintenance: 6 / 25
Adoption: 7 / 25
Maturity: 9 / 25
Community: 8 / 25


Stars: 36
Forks: 3
Language: Python
License: MIT
Last pushed: Dec 23, 2025
Commits (30d): 0

Get this data via API

```shell
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/JacobHuang91/prompt-refiner"
```

Open to everyone: 100 requests/day with no API key. A free key raises the limit to 1,000 requests/day.
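For programmatic access, the same endpoint can be called from Python with the standard library. The URL path layout is taken from the curl example above; the helper name and the assumption that the response body is JSON are mine:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build a quality-API URL following the curl example's path layout."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("prompt-engineering", "JacobHuang91", "prompt-refiner")
print(url)

# Uncomment to fetch (100 requests/day without a key, per the note above);
# the JSON shape of the response is not documented on this page.
# with urllib.request.urlopen(url, timeout=10) as resp:
#     data = json.load(resp)
```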