NVIDIA/nim-anywhere
Accelerate your Gen AI with NVIDIA NIM and NVIDIA AI Workbench
Provides a complete RAG pipeline framework using NVIDIA NIM microservices (LLM, embedding, and reranker models) that scales from local development to production environments. Combines a Python-based chain server backend with a web frontend for knowledge base integration and chat inference, supporting flexible configuration via files or environment variables. Designed for AI Workbench but operates standalone, enabling enterprises to build private RAG systems that keep sensitive data local while leveraging optimized NVIDIA models.
209 stars. No commits in the last 6 months.
Stars: 209
Forks: 98
Language: Python
License: Apache-2.0
Category:
Last pushed: May 01, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/generative-ai/NVIDIA/nim-anywhere"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
Related tools
NVIDIA/GenerativeAIExamples
Generative AI reference workflows optimized for accelerated infrastructure and microservice architecture.
opea-project/GenAIExamples
Generative AI Examples is a collection of GenAI examples such as ChatQnA, Copilot, which...
fw-ai/cookbook
Recipes and resources for building, deploying, and fine-tuning generative AI with Fireworks.
SAP-samples/btp-genai-starter-kit
This repo aims to help developers to get into the genAI topic quicker by automating AI Core and...
codecentric/c4-genai-suite
c4 GenAI Suite