mddunlap924/LLM-Inference-Serving

This repository demonstrates LLM execution on CPUs using packages such as llamafile, emphasizing low latency, high throughput, and cost effectiveness for inference and serving.

Score: 14 / 100 (Experimental)

No commits in the last 6 months.

No License · Stale 6m · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 1 / 25
Community 8 / 25

How are scores calculated?
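The overall figure appears to be the sum of the four category scores: 0 + 5 + 1 + 8 = 14 out of a possible 100 (25 per category). The exact rubric behind each category is not documented on this page.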

Stars: 9
Forks: 1
Language: Jupyter Notebook
License: None
Last pushed: Dec 04, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/mddunlap924/LLM-Inference-Serving"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
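For programmatic access, here is a minimal Python sketch built on the curl example above. It assumes the endpoint returns a JSON object; no response schema is documented here, so the script simply prints whatever fields come back rather than relying on specific field names.

import requests  # third-party HTTP client (pip install requests)

# Same endpoint as the curl example above
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "llm-tools/mddunlap924/LLM-Inference-Serving")

resp = requests.get(URL, timeout=10)
resp.raise_for_status()  # surface HTTP errors, e.g. hitting the daily rate limit

data = resp.json()  # assumption: the body is a JSON object of quality metrics
for key, value in sorted(data.items()):
    print(f"{key}: {value}")

Keeping the script schema-agnostic avoids breakage if the API adds or renames fields.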