IlyasMoutawwakil/py-txi
A Python wrapper around Hugging Face's TGI (text-generation-inference) and TEI (text-embeddings-inference) servers.
Manages TGI/TEI Docker containers programmatically via `docker-py` with automatic lifecycle management, port allocation, and container cleanup tied to the Python process. Supports batched inference for both text generation and embeddings with real-time log streaming for debugging. Designed as a drop-in replacement for the Transformers API, allowing researchers and developers to leverage optimized inference servers without manual container orchestration.
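A minimal usage sketch of the lifecycle described above, assuming the `TEI`/`TEIConfig` and `TGI`/`TGIConfig` interface shown in the project README; a running Docker daemon is required, and the model IDs are illustrative:

```python
# Sketch of py-txi usage; requires a running Docker daemon and the
# py-txi package (pip install py-txi). Class and parameter names
# follow the project README; treat them as assumptions, not a spec.
from py_txi import TEI, TEIConfig, TGI, TGIConfig

# Embeddings: starts a TEI container, runs batched encoding,
# then tears the container down.
embed = TEI(config=TEIConfig(model_id="BAAI/bge-base-en-v1.5"))
embeddings = embed.encode(["first sentence", "second sentence"])
embed.close()

# Text generation: same lifecycle with a TGI container.
llm = TGI(config=TGIConfig(model_id="gpt2"))
outputs = llm.generate(["Once upon a time"])
llm.close()
```

Because cleanup is tied to the Python process, the containers are also removed automatically if the script exits without calling `close()`.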
No commits in the last 6 months. Available on PyPI.
Stars: 32
Forks: 6
Language: Python
License: Apache-2.0
Category:
Last pushed: Sep 19, 2025
Monthly downloads: 77
Commits (30d): 0
Dependencies: 4
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/IlyasMoutawwakil/py-txi"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
Related tools
FlagOpen/FlagEmbedding
Retrieval and Retrieval-augmented LLMs
Blaizzy/mlx-embeddings
MLX-Embeddings is the best package for running Vision and Language Embedding models locally on...
qdrant/fastembed
Fast, Accurate, Lightweight Python library to make State of the Art Embedding
Merck/Sapiens
Sapiens is a human antibody language model based on BERT.
amansrivastava17/embedding-as-service
One-Stop Solution to encode sentence to fixed length vectors from various embedding techniques