samuelfaj/distill
Distill large CLI outputs into small answers for LLMs and save tokens!
A CLI filter that pipes stdin through an LLM to extract structured answers, supporting multiple providers (Ollama, OpenAI, LM Studio, LocalAI, vLLM, llama.cpp, and others) with configurable models and timeouts. Designed for agentic workflows: it passes interactive prompts through unchanged while compressing verbose outputs such as logs, diffs, and test results into explicit, user-defined output formats, minimizing token consumption in agent loops.
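The core pattern described above can be sketched in TypeScript: read all of stdin, wrap it in a prompt that pins the user-defined output format, and send it to a provider endpoint. This is an illustrative sketch, not distill's actual implementation; the prompt wording, default model name, and Ollama `/api/generate` endpoint are assumptions.

```typescript
// Sketch of a stdin-to-LLM distilling filter (assumptions noted inline).

export function buildPrompt(input: string, format: string): string {
  // Pin the answer shape so the model returns only the distilled form.
  return [
    "Distill the following CLI output.",
    `Answer ONLY in this format: ${format}`,
    "---",
    input,
  ].join("\n");
}

export async function distill(
  input: string,
  format: string,
  model = "llama3.2", // assumed default; distill makes this configurable
  endpoint = "http://localhost:11434/api/generate", // Ollama's generate API
): Promise<string> {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model,
      prompt: buildPrompt(input, format),
      stream: false, // Ollama returns a single { response } object
    }),
  });
  const data = (await res.json()) as { response: string };
  return data.response.trim();
}
```

In a pipeline this would sit where distill does, e.g. `npm test 2>&1 | distill "pass/fail plus first failing test name"`, so the agent sees one short line instead of the full test log.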
Stars: 262
Forks: 16
Language: TypeScript
License: —
Category:
Last pushed: Mar 12, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/samuelfaj/distill"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
mangiucugna/json_repair
A python module to repair invalid JSON from LLMs
antfu/shiki-stream
Streaming highlighting with Shiki. Useful for highlighting text streams like LLM outputs.
iw4p/partialjson
+1M Downloads! Repair invalid LLM JSON, commonly used to parse the output of LLMs — Parsing...
yokingma/fetch-sse
An easy API for making Event Source requests, with all the features of fetch(), Supports...
kaptinlin/jsonrepair
A high-performance Golang library for easily repairing invalid JSON documents. Designed to fix...