andrewnguonly/Lumos
A RAG LLM co-pilot for browsing the web, powered by local LLMs
Implements retrieval-augmented generation (RAG): webpage content is chunked and embedded into a vector store, which is then queried against local Ollama models for embeddings and inference. Runs as a Chrome extension that communicates with a local Ollama server via HTTP, with configurable content parsers for domain-specific extraction and customizable embedding/inference models to tune performance across different hardware setups.
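The RAG flow described above can be sketched as follows. This is a minimal illustration, not Lumos's actual implementation: the helper names are hypothetical, and in the real extension each chunk's embedding would come from the local Ollama server's HTTP API rather than being supplied directly.

```typescript
// Split page text into fixed-size chunks with overlap, so context
// isn't lost at chunk boundaries (sizes here are illustrative).
function chunkText(text: string, size: number, overlap: number): string[] {
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break;
  }
  return chunks;
}

// Cosine similarity between two embedding vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank chunk embeddings against a query embedding and return the
// indices of the k closest chunks; those chunks would then be passed
// as context to the inference model.
function topK(queryEmb: number[], chunkEmbs: number[][], k: number): number[] {
  return chunkEmbs
    .map((emb, i) => ({ i, score: cosineSimilarity(queryEmb, emb) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((x) => x.i);
}
```

In practice the embedding step would be an HTTP call to the local Ollama server, and the top-k chunks would be concatenated into the prompt sent to the inference model.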
1,516 stars. No commits in the last 6 months.
Stars
1,516
Forks
111
Language
TypeScript
License
MIT
Category
Last pushed
Jan 26, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/andrewnguonly/Lumos"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
jatinkrmalik/LLMFeeder
Browser extension to convert web pages to clean Markdown and copy them to the clipboard so you can feed it...
Warma10032/VideoAdGuard
Bilibili browser extension: uses large language models to detect embedded ads in Bilibili videos, with one-click skipping of embedded/spoken ad segments.
jose-mdz/groq-chrome-ext
Chrome extension that interacts with content using Groq
Aschen/coding-challenge-ia-solver
A Chrome extension using LLM and DevTools to automatically solve coding challenge by...
idosal/WebextLLM
Web extension that embeds LLMs in your browser to power AI in web apps