pushinspektor856/GitHub-Copilot-Free
An ultra-fast C++ daemon proxy that replaces the official GitHub Copilot endpoint, letting you use free local or open-source LLMs inside VS Code and JetBrains IDEs. Advanced AI code completion with zero subscription fees and sub-millisecond latency.
Intercepts Copilot extension requests via an asynchronous C++20 proxy, reformats AST syntax context on the fly, and routes them to OpenAI-compatible APIs or local LLMs (Ollama, DeepSeek Coder, Llama 3) while stripping telemetry payloads. Works transparently with VS Code and JetBrains IDEs (including WebStorm) through spoofed authentication tokens. Achieves sub-0.5 ms latency with a minimal 4 MB footprint by handling token substitution synchronously at the network protocol layer.
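The request-rewriting step described above (strip telemetry, re-shape the completion context for an OpenAI-compatible backend) can be sketched as follows. This is an illustrative Python sketch, not the daemon's actual C++ implementation; the field names (`prompt`, `suffix`, `telemetry`, `machine_id`) are assumptions about the Copilot-style wire format, which the listing does not document.

```python
# Hypothetical sketch of the translation layer the description outlines:
# drop telemetry fields from a Copilot-style completion request and rewrite
# it as an OpenAI-compatible chat payload. All field names are illustrative
# assumptions, not the daemon's real wire format.

# Keys treated as telemetry and removed before forwarding (assumed names).
TELEMETRY_KEYS = {"telemetry", "machine_id", "session_id", "tracking_id"}

def to_openai_payload(copilot_request: dict, model: str = "deepseek-coder") -> dict:
    """Strip telemetry keys and re-shape the request for an OpenAI-compatible API."""
    cleaned = {k: v for k, v in copilot_request.items() if k not in TELEMETRY_KEYS}
    prompt = cleaned.get("prompt", "")
    suffix = cleaned.get("suffix", "")
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "Complete the code between prefix and suffix."},
            {"role": "user", "content": f"{prompt}<CURSOR>{suffix}"},
        ],
        "max_tokens": cleaned.get("max_tokens", 128),
        "stream": True,
    }

request = {
    "prompt": "def add(a, b):\n    return ",
    "suffix": "\n",
    "max_tokens": 64,
    "telemetry": {"event": "completion_requested"},
    "machine_id": "abc123",
}
payload = to_openai_payload(request)
```

A real proxy would then POST `payload` to the configured backend (e.g. an Ollama or OpenAI-compatible `/v1/chat/completions` endpoint) and stream the completion back to the IDE.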
Stars: 75
Forks: —
Language: C++
License: MIT
Category:
Last pushed: Mar 21, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ai-coding/pushinspektor856/GitHub-Copilot-Free"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
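The curl call above can also be made from Python. The URL pattern (owner and repository appended to the `/api/v1/quality/ai-coding/` path) is inferred from the single documented request and is assumed to generalize to other repositories; the response schema is not documented here, so the raw JSON is returned as-is.

```python
# Minimal Python equivalent of the curl example. The owner/repo path pattern
# is an assumption inferred from the one documented request.
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality/ai-coding"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-data URL for a given GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record; the free tier needs no API key."""
    with urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

url = quality_url("pushinspektor856", "GitHub-Copilot-Free")
```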
Higher-rated alternatives
CNSeniorious000/oai2ollama
OpenAI API -> Ollama API (original idea:...
theblixguy/xcode-copilot-server
GitHub Copilot proxy for Xcode with support for Claude Agent and Codex Agent.
yuchanns/copilot-openai-api
A FastAPI proxy server that seamlessly turns GitHub Copilot's chat completion/embeddings...
hankchiutw/copilot-proxy
A simple HTTP proxy that exposes your GitHub Copilot free quota as an OpenAI-compatible API
srikanth235/privy
An open-source alternative to GitHub copilot that runs locally.