ggml-org/llama.vim
Vim plugin for LLM-assisted code/text completion
Supports both fill-in-the-middle (FIM) completion with real-time auto-suggestions and instruction-based editing for refactoring tasks. Communicates with a local llama.cpp server instance, enabling efficient inference on consumer hardware through smart context reuse across multiple file buffers. Includes configurable keybindings, performance metrics, and compatibility with FIM-specialized models such as Qwen2.5-Coder across various parameter sizes.
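The plugin expects a llama.cpp server running locally. A minimal setup sketch, assuming vim-plug for installation and a FIM-capable GGUF model already on disk; the model filename is a placeholder, and the exact flags and default port are documented in the llama.vim and llama.cpp READMEs:

```shell
# 1. Install the plugin with vim-plug (add to ~/.vimrc, then run :PlugInstall):
#      Plug 'ggml-org/llama.vim'
#
# 2. Start a local llama.cpp server with a FIM-specialized model.
#    The model path is illustrative; --cache-reuse enables the context
#    reuse across buffers that the description above refers to.
llama-server \
    -m qwen2.5-coder-1.5b-q8_0.gguf \
    --port 8012 \
    --cache-reuse 256
```

With the server up, opening a file in Vim should start producing inline suggestions; the plugin's settings (endpoint URL, keybindings, context sizes) are adjustable through its configuration dictionary.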
1,913 stars. Actively maintained with 3 commits in the last 30 days.
Stars: 1,913
Forks: 95
Language: Vim Script
License: MIT
Last pushed: Jan 31, 2026
Commits (30d): 3
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/ggml-org/llama.vim"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
Higher-rated alternatives
ggml-org/llama.vscode
VS Code extension for LLM-assisted code/text completion
10Nates/ollama-autocoder
A simple-to-use Ollama autocompletion engine with exposed options and streaming support
DmitryNekrasov/ai-code-completion-idea-plugin
Implementation of IntelliJ IDEA code completion plugin using a local LLM.
xNul/code-llama-for-vscode
Use Code Llama with Visual Studio Code and the Continue extension. A local LLM alternative to...
Wells-the-Doctor/leaxer
🌟 Build and deploy local AI models with Leaxer for real-time interaction, streamlined document...