antgroup/glake

GLake: optimizing GPU memory management and IO transmission.

Archived · Score: 42 / 100 · Emerging
Implements virtual memory stitching and multi-path concurrent I/O to defragment GPU memory and accelerate CPU-GPU transfers without requiring code changes. Built as a pluggable layer beneath PyTorch that provides global memory pooling across GPUs, automatic deduplication for inference workloads, and specialized KV-cache optimization for LLM serving. Achieves 4× training throughput improvement and 3-12× I/O acceleration through memory defragmentation, tiering, and parallel transfer channels.
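The core idea behind virtual memory stitching is that a logically contiguous buffer can be backed by non-contiguous physical fragments, so fragmented free space becomes usable without copying. GLake does this on the GPU via CUDA's virtual-memory APIs; the toy CPU sketch below only illustrates the mapping concept and is not GLake's actual code (the class and its methods are invented for illustration).

```python
class StitchedBuffer:
    """Presents fragmented physical blocks as one contiguous logical span.

    Toy model of virtual memory stitching: a small "page table" translates
    logical offsets into offsets inside a shared physical pool.
    """

    def __init__(self, physical_pool: bytearray, free_blocks: list[tuple[int, int]]):
        # free_blocks: (offset, size) fragments inside physical_pool.
        self.pool = physical_pool
        self.map = []          # entries: (logical_start, physical_offset, size)
        logical = 0
        for off, size in free_blocks:
            self.map.append((logical, off, size))
            logical += size
        self.size = logical    # total stitched capacity

    def _translate(self, logical_addr: int) -> int:
        # Find the backing fragment (a real allocator would binary-search).
        for lstart, poff, size in self.map:
            if lstart <= logical_addr < lstart + size:
                return poff + (logical_addr - lstart)
        raise IndexError("logical address out of range")

    def write(self, logical_addr: int, data: bytes) -> None:
        for i, b in enumerate(data):
            self.pool[self._translate(logical_addr + i)] = b

    def read(self, logical_addr: int, n: int) -> bytes:
        return bytes(self.pool[self._translate(logical_addr + i)] for i in range(n))


# Two 4-byte fragments at offsets 2 and 10 behave as one 8-byte buffer.
pool = bytearray(16)
buf = StitchedBuffer(pool, [(2, 4), (10, 4)])
buf.write(0, b"ABCDEFGH")      # spans both fragments transparently
assert buf.read(0, 8) == b"ABCDEFGH"
```

On a GPU the translation step is done once at mapping time by the driver rather than per access, which is why stitching can defragment memory with no steady-state overhead.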

499 stars. No commits in the last 6 months.

Archived · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 16 / 25


Stars: 499
Forks: 45
Language: Python
License: Apache-2.0
Last pushed: Mar 24, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/antgroup/glake"

Open to everyone: 100 requests/day with no key; a free key raises the limit to 1,000/day.
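The same endpoint can be called from Python with the standard library. The response schema is not documented here, so this sketch (the helper names are my own) only builds the request URL and returns the raw decoded JSON rather than assuming particular fields.

```python
import json
import urllib.request

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def build_url(owner: str, repo: str) -> str:
    """Build the quality-data URL for a given owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record; unauthenticated use is limited to 100 requests/day."""
    with urllib.request.urlopen(build_url(owner, repo)) as resp:
        return json.load(resp)


# fetch_quality("antgroup", "glake") would return this page's data as JSON.
```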