avilum/minrlm

Token-efficient Recursive Language Model. 3.6x fewer tokens than vanilla LLMs. Data never enters the prompt.

Quality score: 55 / 100 (Established)

Implements a REPL-based execution model: the LLM generates Python code that queries the data directly, so raw context never enters the prompt. Entropy profiling via zlib compression identifies the relevant sections, and task-specific routing selects optimized code patterns for structured data, search, math, and code-retrieval tasks. Execution is wrapped in a DockerREPL sandbox (seccomp, stdlib-only), and smaller sub-tasks can optionally be delegated to a secondary LLM over filtered evidence. Reports a 30-percentage-point accuracy gain over vanilla prompting on frontier models, with token cost that stays flat regardless of document size.
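The entropy-profiling step mentioned above can be sketched roughly as follows. This is a minimal illustration, assuming the tool scores text chunks by their zlib compression ratio (poorly compressible text is more information-dense); the function name and chunking scheme here are hypothetical, not minrlm's actual API:

```python
import zlib

def entropy_profile(text: str, chunk_size: int = 200) -> list[tuple[int, float]]:
    """Score each chunk by its zlib compression ratio.

    A higher ratio (compressed size / raw size) means the chunk is less
    compressible, i.e. more information-dense, and thus more likely relevant.
    """
    scores = []
    for i in range(0, len(text), chunk_size):
        chunk = text[i:i + chunk_size].encode("utf-8")
        ratio = len(zlib.compress(chunk)) / max(len(chunk), 1)
        scores.append((i, ratio))
    return scores

# Repetitive filler compresses well; the unique sentence in the middle does not.
doc = "aaaa" * 100 + "The quarterly revenue grew 14% to $3.2M in Q3." + "bbbb" * 100
profile = entropy_profile(doc)
top_offset, top_ratio = max(profile, key=lambda s: s[1])
```

A real pipeline would then feed only the top-scoring offsets to the code-generating LLM, rather than the whole document.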

Used by 1 other package. Available on PyPI.

Maintenance: 13 / 25
Adoption: 13 / 25
Maturity: 20 / 25
Community: 9 / 25
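The four subscores above account for the overall figure. A trivial check, assuming the 100-point score is simply the sum of the four 25-point components (an inference from the displayed numbers, not documented scoring logic):

```python
# Assumed scoring model: overall = sum of four 25-point subscores.
subscores = {"Maintenance": 13, "Adoption": 13, "Maturity": 20, "Community": 9}
overall = sum(subscores.values())
print(overall)  # 55, matching the displayed 55 / 100
```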


Stars: 31
Forks: 3
Language: Python
License: MIT
Last pushed: Mar 18, 2026
Monthly downloads: 202
Commits (30d): 0
Dependencies: 1
Reverse dependents: 1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/avilum/minrlm"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
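The curl call above can also be made from Python with only the standard library. A minimal sketch; note that the `Authorization` header name and the JSON response fields shown are assumptions based on this page, not documented API details:

```python
import json
from typing import Optional
from urllib import request

# Endpoint taken from the curl example on this page.
API = "https://pt-edge.onrender.com/api/v1/quality/transformers/avilum/minrlm"

def fetch_quality(url: str = API, api_key: Optional[str] = None) -> dict:
    """Fetch the quality record; pass a key for the higher rate limit."""
    req = request.Request(url)
    if api_key:
        # Header name is an assumption; check the API docs for the real one.
        req.add_header("Authorization", f"Bearer {api_key}")
    with request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read())

# Hypothetical response shape, inferred from the fields shown on this page.
sample = json.loads('{"score": 55, "tier": "Established", "stars": 31}')
```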