ggml-org/llama.vim

Vim plugin for LLM-assisted code/text completion

Quality score: 48 / 100 (Emerging)

Supports fill-in-the-middle (FIM) completions with real-time auto-suggestions, as well as instruction-based editing for refactoring tasks. Communicates with a local llama.cpp server instance, enabling efficient inference on consumer hardware through smart context reuse across multiple file buffers. Includes configurable keybindings, performance metrics, and compatibility with FIM-specialized models such as Qwen2.5-Coder across various parameter sizes.
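The FIM workflow described above can be sketched as a plain HTTP request. llama.cpp's server does expose an /infill endpoint that takes input_prefix and input_suffix fields, but the port, the token cap, and the helper names below are assumptions for illustration, not the plugin's actual implementation:

```python
import json
import urllib.request

# Assumption: a llama.cpp server with a FIM-capable model is
# listening locally; 8012 is an illustrative port, not a default.
SERVER = "http://127.0.0.1:8012/infill"

def build_fim_payload(prefix: str, suffix: str, n_predict: int = 64) -> dict:
    """Assemble a fill-in-the-middle request body."""
    return {
        "input_prefix": prefix,   # text before the cursor
        "input_suffix": suffix,   # text after the cursor
        "n_predict": n_predict,   # cap on generated tokens
    }

def request_completion(prefix: str, suffix: str) -> str:
    """POST the payload and return the generated infill text."""
    payload = json.dumps(build_fim_payload(prefix, suffix)).encode()
    req = urllib.request.Request(
        SERVER, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["content"]
```

With a server running, something like `request_completion("def add(a, b):\n    return ", "\n")` would return the suggested middle; the plugin performs this round trip on each keystroke and reuses cached context to keep it fast.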

1,913 stars. Actively maintained with 3 commits in the last 30 days.

No package · No dependents
Maintenance 13 / 25
Adoption 10 / 25
Maturity 9 / 25
Community 16 / 25


Stars: 1,913
Forks: 95
Language: Vim Script
License: MIT
Last pushed: Jan 31, 2026
Commits (30d): 3

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/ggml-org/llama.vim"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
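The curl command above can be wrapped in a few lines of Python. The URL shape is taken from that example; the response field names ("score", "tier") are assumptions based on the metrics shown on this page, since the schema is not documented here:

```python
import json
from urllib.parse import quote

# Base endpoint, copied from the curl example on this page.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a repo (hypothetical helper)."""
    return f"{BASE}/{quote(ecosystem)}/{quote(owner)}/{quote(repo)}"

def summarize(payload: str) -> str:
    """Format a response; 'score' and 'tier' are assumed field names."""
    data = json.loads(payload)
    return f"{data['score']}/100 ({data['tier']})"

# Sample payload mirroring the numbers shown on this page.
sample = '{"score": 48, "tier": "Emerging"}'
print(summarize(sample))  # prints: 48/100 (Emerging)
```

Fetching `quality_url("transformers", "ggml-org", "llama.vim")` with any HTTP client reproduces the curl request above; add an API key once you pass the free daily quota.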