timescale/private-rag-example

Private RAG app sample using Llama3, Ollama and PostgreSQL

Quality score: 27 / 100 (Experimental)

Implements vector embeddings and semantic search within PostgreSQL using the pgai and pgvector extensions, enabling efficient document retrieval without external embedding services. The pipeline orchestrates local LLM inference through Ollama, document chunking, and vector storage entirely within a containerized environment. Supports swappable models (Llama3.2, Mistral) and includes pgai for in-database AI operations, eliminating the dependency on cloud-based RAG platforms.
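
The snippet below is a minimal sketch of the embed, store, and retrieve loop this description outlines, not code taken from the repository. It assumes a local Ollama server, the ollama, psycopg, and pgvector Python packages, a hypothetical "docs" table, and the 4096-dimension embedding size of Llama3; all of those names and sizes are illustrative assumptions.

import numpy as np
import ollama
import psycopg
from pgvector.psycopg import register_vector

# Connection string is an assumption; adjust for your container setup.
conn = psycopg.connect("postgresql://postgres:postgres@localhost:5432/postgres")
conn.execute("CREATE EXTENSION IF NOT EXISTS vector")
register_vector(conn)

conn.execute("""
    CREATE TABLE IF NOT EXISTS docs (
        id bigserial PRIMARY KEY,
        chunk text,
        embedding vector(4096)
    )
""")

def embed(text: str) -> np.ndarray:
    # Embed text locally with the Llama3 model served by Ollama.
    return np.array(ollama.embeddings(model="llama3", prompt=text)["embedding"])

# Store a couple of document chunks together with their embeddings.
for chunk in ["pgvector adds a vector column type to PostgreSQL.",
              "pgai exposes model calls such as embedding as SQL functions."]:
    conn.execute("INSERT INTO docs (chunk, embedding) VALUES (%s, %s)",
                 (chunk, embed(chunk)))
conn.commit()

# Semantic search: nearest chunks by cosine distance (the <=> operator),
# then answer the question with the retrieved chunks as context.
question = "How do I store embeddings in Postgres?"
rows = conn.execute(
    "SELECT chunk FROM docs ORDER BY embedding <=> %s LIMIT 3",
    (embed(question),),
).fetchall()
context = "\n".join(r[0] for r in rows)

reply = ollama.chat(model="llama3", messages=[{
    "role": "user",
    "content": f"Answer using this context:\n{context}\n\nQuestion: {question}",
}])
print(reply["message"]["content"])

Swapping the model is just a matter of changing the model name passed to Ollama, which mirrors the "swappable models" point above.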

No commits in the last 6 months.

No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 1 / 25
Community 18 / 25


Stars: 62
Forks: 15
Language: Jupyter Notebook
License: None
Last pushed: Nov 06, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/timescale/private-rag-example"

Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.
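
For a programmatic equivalent of the curl call above, a minimal Python sketch follows. It assumes only the keyless public tier and the requests package, and it prints the raw JSON rather than assuming any particular response fields.

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/rag/timescale/private-rag-example"
resp = requests.get(url, timeout=30)
resp.raise_for_status()
print(resp.json())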