biraj21/llm-server-from-scratch
FastAPI server for locally serving Gemma 3 270M & OpenAI Whisper with batched inference and streaming support.
This project helps developers experiment with deploying and serving large language models (LLMs) and speech-to-text models locally. It takes text prompts or audio inputs and returns generated text or audio transcriptions. It is designed for software developers and machine learning engineers who want to understand model-serving fundamentals, rather than for deploying production systems.
No commits in the last 6 months.
Use this if you are a developer learning about serving LLMs or speech models and want to experiment with features like batched inference and streaming locally.
Not ideal if you need a robust, production-ready solution for deploying AI models or if you are not comfortable with command-line tools and Python development.
Stars: 8
Forks: —
Language: HTML
License: —
Category:
Last pushed: Sep 25, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/biraj21/llm-server-from-scratch"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
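For scripting, the same endpoint can be called from Python. A minimal sketch using only the standard library, assuming the endpoint returns JSON (the response fields are not documented here, so the result is treated as an opaque dict):

```python
# Sketch: fetch the quality data for an owner/repo pair from the API above.
# Assumes a JSON response; no API key is needed up to 100 requests/day.
import json
import urllib.request
from urllib.parse import quote

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the API URL for a given GitHub owner/repo pair."""
    return f"{API_BASE}/{quote(owner)}/{quote(repo)}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON response for one repository."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Equivalent of the curl command above:
    print(quality_url("biraj21", "llm-server-from-scratch"))
```

With a free key (1,000 requests/day), the key would presumably be passed as a header or query parameter; check the API's own docs for the exact mechanism, as it is not specified here.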
Higher-rated alternatives
vllm-project/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
sgl-project/sglang
SGLang is a high-performance serving framework for large language models and multimodal models.
alibaba/MNN
MNN: A blazing-fast, lightweight inference engine battle-tested by Alibaba, powering...
xorbitsai/inference
Swap GPT for any LLM by changing a single line of code. Xinference lets you run open-source,...
tensorzero/tensorzero
TensorZero is an open-source stack for industrial-grade LLM applications. It unifies an LLM...