serverless-chat-langchainjs and langchainjs-quickstart-demo

These ecosystem siblings demonstrate different deployment patterns for the same stack: one is a complete serverless Retrieval-Augmented Generation (RAG) architecture on Azure, while the other is an introductory quickstart for building LangChain.js applications locally and migrating them to Azure.

serverless-chat-langchainjs
Maintenance: 16/25 | Adoption: 10/25 | Maturity: 16/25 | Community: 25/25
Stars: 856 | Forks: 483 | Commits (30d): 5 | Language: TypeScript | License: MIT
No package published | No dependents

langchainjs-quickstart-demo
Maintenance: 2/25 | Adoption: 9/25 | Maturity: 16/25 | Community: 21/25
Stars: 75 | Forks: 33 | Commits (30d): 0 | Language: JavaScript | License: MIT
Stale (6 months) | No package published | No dependents

About serverless-chat-langchainjs

Azure-Samples/serverless-chat-langchainjs

Build your own serverless AI Chat with Retrieval-Augmented-Generation using LangChain.js, TypeScript and Azure

Implements a full-stack RAG pipeline with Azure Cosmos DB vector storage and LangChain.js for document ingestion, paired with a Lit-based web-component frontend on Azure Static Web Apps and an Azure Functions backend. It supports local development with Ollama for cost-free testing, maintains per-user chat session history, and follows the standard HTTP protocol for AI chat apps.

About langchainjs-quickstart-demo

Azure-Samples/langchainjs-quickstart-demo

Build a generative AI application using LangChain.js, from local to Azure

Implements a RAG-based Q&A system that ingests YouTube video transcripts and offers two deployment paths: local, using FAISS and Ollama (Llama 3), or on Azure, using AI Search and GPT-4 Turbo. Both versions can run as Azure Functions with HTTP streaming support, enabling a move from prototype to production without code changes.

Scores updated daily from GitHub, PyPI, and npm data.