LLMSystems/TensorrtServer
A high-performance deep learning inference server based on TensorRT, supporting fast inference for Embedding, Reranker, and NLI models.
Stars: 5
Forks: —
Language: Python
License: MIT
Category:
Last pushed: Mar 18, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/LLMSystems/TensorrtServer"
Open to everyone: 100 requests/day, no key needed. A free key raises the limit to 1,000/day.
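For programmatic access, here is a minimal Python sketch of the same request using the requests library. The response schema is not documented on this page, so the code prints the raw payload rather than assuming field names; how an API key would be supplied is also undocumented and is left as a comment.

import requests

# Endpoint copied from the curl example above.
URL = "https://pt-edge.onrender.com/api/v1/quality/embeddings/LLMSystems/TensorrtServer"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()  # surfaces rate-limit (HTTP 429) and server errors
data = resp.json()
print(data)  # inspect the payload; field names are not documented here

# An API key raises the limit to 1,000 requests/day, but the page does not
# say how to pass it (header vs. query parameter), so that step is omitted.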
Higher-rated alternatives
byte5ai/palaia: Palaia — Local, crash-safe memory for AI agents. Semantic vector search...
ddickmann/vllm-factory: Production inference for encoder models - ColBERT, GLiNER, ColPali, embeddings etc. - as vLLM...
j33pguy/magi: MAGI — Multi-Agent Graph Intelligence. Universal memory server for AI agents. MCP + gRPC + REST...
abdullah85398/embedding-server: A high-performance, self-hosted, model-agnostic embedding service designed for LLM applications,...
thetenzinwoser/recall-mcp: Local semantic search MCP server for markdown docs and Granola meeting transcripts. No API keys,...