llama_index and llama-index-vector-search-javascript

LlamaIndex is the core RAG framework, while the Azure sample is a reference implementation demonstrating how to deploy LlamaIndex with Azure-specific services for vector search and LLM inference. They are complementary projects designed to be used together.

| Metric | llama_index | llama-index-vector-search-javascript |
| --- | --- | --- |
| Maintenance | 25/25 | 10/25 |
| Adoption | 25/25 | 5/25 |
| Maturity | 25/25 | 9/25 |
| Community | 22/25 | 15/25 |
| Stars | 47,631 | 14 |
| Forks | 7,002 | 4 |
| Downloads | 10,276,119 | |
| Commits (30d) | 136 | 0 |
| Language | Python | TypeScript |
| License | MIT | MIT |
| Risk flags | None | No Package, No Dependents |

About llama_index

run-llama/llama_index

LlamaIndex is the leading framework for building LLM-powered agents over your data

Provides a modular architecture with 300+ integration packages through LlamaHub, enabling flexible composition of LLM, embedding, and vector store providers without vendor lock-in. Core capabilities include agentic workflow orchestration, structured data extraction, and multi-format document parsing (130+ formats) via LlamaParse. A namespace pattern cleanly separates core abstractions from provider-specific implementations.

About llama-index-vector-search-javascript

Azure-Samples/llama-index-vector-search-javascript

A sample app for the Retrieval-Augmented Generation pattern built with LlamaIndex.ts and running on Azure. It uses Azure AI Search for retrieval and Azure OpenAI large language models to power ChatGPT-style and Q&A experiences over your own data.

Implements document ingestion with LlamaIndex's data connectors to populate Azure AI Search indexes, enabling semantic search over custom documents. The full-stack JavaScript solution deploys to Azure Container Apps with infrastructure-as-code (Bicep) via Azure Developer CLI, including built-in monitoring through Azure Monitor for production observability.

Scores updated daily from GitHub, PyPI, and npm data.