samuelfaj/distill

Distill large CLI outputs into small answers for LLMs and save tokens!

Quality score: 37 / 100 (Emerging)

Pipes stdin through an LLM to extract structured answers, supporting multiple providers (Ollama, OpenAI, LM Studio, LocalAI, vLLM, llama.cpp, and others) with configurable models and timeouts. Built as a CLI filter for agentic workflows: it passes interactive prompts through unchanged while compressing verbose outputs such as logs, diffs, and test results into explicit user-defined output formats, minimizing token consumption in agent loops.
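The filter pattern described above (pass short interactive prompts through untouched, compress verbose output via an LLM) can be sketched roughly as follows. This is an illustrative sketch, not distill's actual code: the size threshold, model name, and prompt wording are assumptions, and the endpoint shown is the OpenAI-compatible chat API that providers like Ollama, LM Studio, and vLLM expose.

```typescript
// Illustrative stdin -> LLM -> stdout filter (NOT distill's real implementation).
// Assumption: an OpenAI-compatible /v1/chat/completions endpoint on localhost.

const PASS_THROUGH_LIMIT = 500; // chars; assumed heuristic for "interactive prompt"

function buildPrompt(output: string, format: string): string {
  // The caller supplies an explicit output format so the model returns only
  // the structured answer, keeping tokens in the agent loop to a minimum.
  return `Summarize the following CLI output.\nAnswer format: ${format}\n\n${output}`;
}

async function distillSketch(input: string, format: string): Promise<string> {
  if (input.length <= PASS_THROUGH_LIMIT) return input; // pass through unchanged

  const res = await fetch("http://localhost:11434/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3", // assumed model name; configurable in practice
      messages: [{ role: "user", content: buildPrompt(input, format) }],
    }),
  });
  const data = (await res.json()) as { choices: { message: { content: string } }[] };
  return data.choices[0].message.content;
}
```

In an agent loop this would sit at the end of a pipe, so a 50 KB test log collapses to whatever the requested format allows (for example, "one line: PASS or FAIL plus failing test names").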


No license · no package · no dependents

Maintenance: 13 / 25
Adoption: 10 / 25
Maturity: 3 / 25
Community: 11 / 25
(13 + 10 + 3 + 11 = 37, the overall score)


Stars: 262
Forks: 16
Language: TypeScript
License: none
Last pushed: Mar 12, 2026
Commits (30d): 0

Get this data via the API:

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/samuelfaj/distill"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
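The endpoint above appears to follow a `/api/v1/quality/<category>/<owner>/<repo>` path pattern; a small helper can build the URL and fetch the data. The pattern is inferred from the single example shown, and the JSON response shape is not documented here, so both should be treated as assumptions.

```typescript
// Build the quality-API URL for a repo. The path pattern is inferred from the
// one documented example; treat it as an assumption, not a stable contract.
function qualityUrl(category: string, owner: string, repo: string): string {
  return `https://pt-edge.onrender.com/api/v1/quality/${category}/${owner}/${repo}`;
}

// Fetch the data (Node 18+ global fetch). The response schema is undocumented
// here, so the result is returned as unknown for the caller to inspect.
async function fetchQuality(category: string, owner: string, repo: string): Promise<unknown> {
  const res = await fetch(qualityUrl(category, owner, repo));
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}
```

For example, `fetchQuality("llm-tools", "samuelfaj", "distill")` requests the same URL as the curl command above.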