antgroup/glake
GLake: optimizing GPU memory management and IO transmission.
Archived. Implements virtual memory stitching and multi-path concurrent I/O to defragment GPU memory and accelerate CPU-GPU transfers without requiring code changes. Built as a pluggable layer beneath PyTorch that provides global memory pooling across GPUs, automatic deduplication for inference workloads, and specialized KV-cache optimization for LLM serving. Achieves 4× training throughput improvement and 3-12× I/O acceleration through memory defragmentation, tiering, and parallel transfer channels.
499 stars. No commits in the last 6 months.
Stars: 499
Forks: 45
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 24, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/antgroup/glake"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
Tencent/AngelSlim
Model compression toolkit engineered for enhanced usability, comprehensiveness, and efficiency.
nebuly-ai/optimate
A collection of libraries to optimise AI model performances
liyucheng09/Selective_Context
Compress your input to ChatGPT or other LLMs, to let them process 2x more content and save 40%...
kyo-takano/chinchilla
A toolkit for scaling law research ⚖
microsoft/only_train_once
OTOv1-v3, NeurIPS, ICLR, TMLR, DNN Training, Compression, Structured Pruning, Erasing Operators,...