tyrell/llm-ollama-llamaindex-bootstrap-ui

This is a LlamaIndex project bootstrapped with create-llama that acts as a full-stack UI to accompany the Retrieval-Augmented Generation (RAG) Bootstrap Application.

Score: 33 / 100 (Emerging)

Loads pre-built vector store indexes from a separate RAG pipeline and queries them with streaming enabled, keeping the indexing and query layers separate. The full-stack architecture pairs a Python backend (FastAPI/LlamaIndex) that handles index loading and retrieval with a Node.js/React frontend served on port 3000, allowing real-time conversational interaction with indexed documents. It is designed to work with locally hosted LLMs via Ollama while leveraging LlamaIndex's data connectors and query abstractions.
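As a rough illustration of the query flow, the sketch below shows how the React frontend might stream a chat response from the Python backend. The endpoint path, port, and request payload are assumptions modelled on typical create-llama scaffolding, not details confirmed by this repository.

```typescript
// Hypothetical frontend helper that streams a chat answer from the backend.
// The URL, port, and payload shape are assumptions based on common
// create-llama setups; check the repository's own routes before relying on them.
async function streamChat(question: string): Promise<void> {
  const response = await fetch("http://localhost:8000/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ messages: [{ role: "user", content: question }] }),
  });

  if (!response.ok || !response.body) {
    throw new Error(`Backend request failed: ${response.status}`);
  }

  // Consume the streamed body chunk by chunk as tokens arrive.
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    process.stdout.write(decoder.decode(value, { stream: true }));
  }
}

streamChat("Summarise the indexed documents.").catch(console.error);
```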

No commits in the last 6 months.

No License · Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 7 / 25
Maturity 8 / 25
Community 18 / 25

How are scores calculated? The four category scores, each out of 25, sum to the overall score: 0 + 7 + 8 + 18 = 33 / 100.

Stars: 32
Forks: 14
Language: TypeScript
License: None
Last pushed: Feb 23, 2024
Commits (30d): 0

Get this data via the API:

curl "https://pt-edge.onrender.com/api/v1/quality/generative-ai/tyrell/llm-ollama-llamaindex-bootstrap-ui"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
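For programmatic use, a minimal TypeScript (Node.js 18+) equivalent of the curl call might look like the sketch below; the response field names in the interface are assumptions, so adjust them to the actual payload.

```typescript
// Hypothetical client for the quality endpoint shown above (Node.js 18+,
// where fetch is built in). Field names in QualityData are assumptions.
interface QualityData {
  score?: number;
  maintenance?: number;
  adoption?: number;
  maturity?: number;
  community?: number;
}

const url =
  "https://pt-edge.onrender.com/api/v1/quality/generative-ai/tyrell/llm-ollama-llamaindex-bootstrap-ui";

async function fetchQuality(): Promise<QualityData> {
  const res = await fetch(url);
  if (!res.ok) {
    throw new Error(`Request failed with status ${res.status}`);
  }
  return (await res.json()) as QualityData;
}

fetchQuality().then((data) => console.log(data)).catch(console.error);
```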