serverless-chat-langchainjs and langchainjs-quickstart-demo
These two Azure samples are ecosystem siblings demonstrating different deployment patterns for the same stack: one is a complete serverless RAG architecture on Azure, while the other is an introductory quickstart for building a LangChain.js application locally and migrating it to Azure.
About serverless-chat-langchainjs
Azure-Samples/serverless-chat-langchainjs
Build your own serverless AI Chat with Retrieval-Augmented Generation using LangChain.js, TypeScript, and Azure
Implements a full-stack RAG pipeline that uses LangChain.js for document ingestion and Azure Cosmos DB for vector storage, paired with a Lit-based web component frontend on Azure Static Web Apps and an Azure Functions backend. Supports local development with Ollama for cost-free testing, maintains per-user chat session history, and follows the standard HTTP protocol for AI chat apps.
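The retrieval step at the heart of this pipeline is vector similarity search over stored document embeddings. As a minimal sketch of that core operation, independent of Cosmos DB or LangChain.js, and using hypothetical toy documents and embeddings:

```typescript
// Minimal in-memory stand-in for a vector store: documents are stored with
// precomputed embeddings and retrieved by cosine similarity, the same basic
// operation a Cosmos DB vector index performs at scale.
type StoredDoc = { id: string; text: string; embedding: number[] };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the top-k documents most similar to the query embedding.
function retrieve(store: StoredDoc[], query: number[], k: number): StoredDoc[] {
  return [...store]
    .sort((x, y) =>
      cosineSimilarity(y.embedding, query) - cosineSimilarity(x.embedding, query))
    .slice(0, k);
}

// Hypothetical toy embeddings; a real pipeline would call an embedding model.
const store: StoredDoc[] = [
  { id: "a", text: "support policy", embedding: [1, 0, 0] },
  { id: "b", text: "pricing table", embedding: [0, 1, 0] },
  { id: "c", text: "refund policy", embedding: [0.9, 0.1, 0] },
];

console.log(retrieve(store, [1, 0, 0], 2).map(d => d.id)); // → [ 'a', 'c' ]
```

The retrieved passages are then injected into the chat prompt, which is what makes the generation "retrieval-augmented".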
About langchainjs-quickstart-demo
Azure-Samples/langchainjs-quickstart-demo
Build a generative AI application using LangChain.js, from local to Azure
Implements a RAG-based Q&A system that ingests YouTube transcripts and offers dual deployment paths: locally using FAISS + Ollama (Llama 3), or on Azure using Azure AI Search + GPT-4 Turbo. Both versions can run as Azure Functions with HTTP streaming support, enabling seamless scaling from prototype to production without code changes.
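Switching between the local stack and the Azure stack without code changes is typically driven by configuration. A hedged sketch of that pattern (the variable names and endpoints here are hypothetical, not the demo's actual configuration):

```typescript
// Hypothetical backend selector: pick the local stack (FAISS + Ollama) or the
// Azure stack (AI Search + GPT-4 Turbo) from environment variables, so the
// rest of the application is identical in both deployments.
type BackendConfig = {
  vectorStore: "faiss" | "azure-ai-search";
  model: string;
  endpoint: string;
};

function selectBackend(env: Record<string, string | undefined>): BackendConfig {
  // If an Azure AI Search endpoint is configured, use the cloud stack.
  if (env.AZURE_AISEARCH_ENDPOINT) {
    return {
      vectorStore: "azure-ai-search",
      model: "gpt-4-turbo",
      endpoint: env.AZURE_AISEARCH_ENDPOINT,
    };
  }
  // Otherwise fall back to the cost-free local stack.
  return {
    vectorStore: "faiss",
    model: "llama3",
    endpoint: env.OLLAMA_URL ?? "http://localhost:11434",
  };
}

console.log(selectBackend({}).vectorStore); // → faiss
console.log(
  selectBackend({ AZURE_AISEARCH_ENDPOINT: "https://example.search.windows.net" }).model
); // → gpt-4-turbo
```

Because both stacks are exposed behind the same interface, promotion from prototype to production is a deployment-time decision rather than a code change.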