pinecone-io/canopy
Retrieval Augmented Generation (RAG) framework and context engine powered by Pinecone
Archived. Implements end-to-end RAG workflows through three core components: `ChatEngine` for multi-turn conversations, `ContextEngine` for semantic retrieval and prompt engineering, and `KnowledgeBase` for automatic document chunking/embedding into Pinecone or Qdrant vector stores. Provides both a FastAPI-based production server with Swagger UI and a CLI tool for interactive testing and RAG vs. non-RAG response comparison. Supports pluggable LLM providers (OpenAI, Cohere, Anyscale) and alternative vector backends beyond Pinecone.
1,029 stars. No commits in the last 6 months.
Stars: 1,029
Forks: 126
Language: Python
License: Apache-2.0
Category: Vector DB
Last pushed: Nov 13, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/vector-db/pinecone-io/canopy"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
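A minimal Python sketch of calling this endpoint with only the standard library. The URL pattern comes from the curl example above; the shape of the JSON response is not documented on this page, so the sketch decodes the payload without assuming any field names, and the `quality_url`/`fetch_quality` helpers are illustrative names, not part of any official client.

```python
import json
import urllib.request

# Base URL taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (keyless tier: 100 requests/day)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Same request as the curl example on this page.
    url = quality_url("vector-db", "pinecone-io", "canopy")
    print(url)
    # → https://pt-edge.onrender.com/api/v1/quality/vector-db/pinecone-io/canopy
```

Keeping the network call behind `fetch_quality` makes it easy to respect the daily rate limit by caching results locally.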
Higher-rated alternatives
notadev-iamaura/OneRAG
Production-ready RAG Framework (Python/FastAPI). 1-line config swaps: 6 Vector DBs (Weaviate,...
teilomillet/raggo
A lightweight, production-ready RAG (Retrieval Augmented Generation) library in Go.
electricpipelines/barq
Dabarqus is incredibly fast RAG that runs everywhere.
MERakram/Advanced-RAG-monorepo
🚀 Production-ready modular RAG monorepo: Local LLM inference (vLLM) • Hybrid retrieval with...
balavenkatesh3322/rag-doctor
🩺 Agentic RAG pipeline failure diagnosis tool. Tells you why your RAG failed — chunk...