IBJunior/local-agent-docker-model-runner

A flexible, extensible AI agent backend built with NestJS—designed for running local, open-source LLMs (Llama, Gemma, Qwen, DeepSeek, etc.) via Docker Model Runner. Real-time streaming, Redis messaging, web search, and Postgres memory out of the box. No cloud APIs required!

Score: 13 / 100 (Experimental)

This project provides a backend system for developers to quickly set up and run AI agents using local, open-source language models like Llama or Gemma, without needing cloud services. It accepts your messages and model configuration, streams AI responses in real time, maintains conversation history, and can augment answers with web search. It's designed for developers building custom AI applications, chatbots, or intelligent workflows where data privacy or cost control matters.
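For orientation, here is a minimal sketch of what talking to a locally running model looks like. It assumes Docker Model Runner's OpenAI-compatible endpoint is exposed on localhost:12434 and that a model such as ai/gemma3 has already been pulled; this is not this project's own API, which wraps such calls behind its NestJS backend.

// Minimal sketch: calling Docker Model Runner's OpenAI-compatible endpoint directly.
// Assumes host-side TCP access is enabled (default port 12434) and a model such as
// "ai/gemma3" has been pulled -- adjust the base URL and model name to your setup.
const BASE_URL = "http://localhost:12434/engines/v1";

async function chat(prompt: string): Promise<string> {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "ai/gemma3",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`Model Runner request failed: ${res.status}`);
  const data = await res.json();
  // OpenAI-compatible response shape: the reply text lives in choices[0].message.content.
  return data.choices[0].message.content;
}

chat("Summarize what a local LLM agent backend does.").then(console.log);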

No commits in the last 6 months.

Use this if you are a developer building an AI application and want a flexible, self-hosted backend to run open-source Large Language Models (LLMs) locally with features like conversation memory and web search.

Not ideal if you are a non-developer looking for a ready-to-use AI tool or if you primarily rely on cloud-based LLM APIs (like OpenAI, Anthropic, or Google Cloud AI).

Tags: AI application development, local LLMs, backend development, chatbot infrastructure, open-source AI
No License, Stale (6m), No Package, No Dependents
Maintenance 2 / 25
Adoption 4 / 25
Maturity 7 / 25
Community 0 / 25


Stars: 6
Forks:
Language: TypeScript
License: none
Last pushed: Jun 22, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/IBJunior/local-agent-docker-model-runner"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
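The same report can also be fetched programmatically. Below is a minimal sketch using Node 18+'s built-in fetch; the response is logged as-is, since its exact shape is not documented here.

// Minimal sketch: fetching the quality report from Node/TypeScript (Node 18+ fetch).
const url =
  "https://pt-edge.onrender.com/api/v1/quality/agents/IBJunior/local-agent-docker-model-runner";

async function getQualityReport(): Promise<unknown> {
  // No key is required below the free limit of 100 requests/day.
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json();
}

getQualityReport().then((report) => console.log(report));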