luyug/GradCache

Run Effective Large Batch Contrastive Learning Beyond GPU/TPU Memory Constraint

Quality score: 51 / 100 (Established)

Implements gradient caching to decouple batch size from memory constraints: inputs are processed in smaller chunks while full-batch gradient semantics are preserved. Supports both PyTorch and JAX/TPU backends with a flexible API that handles various input formats (tensors, dicts, lists) and integrates with Hugging Face Transformers models. A customizable loss-function interface enables tied encoders and distributed training, making contrastive training runs that previously required high-memory hardware cost-effective on single GPUs or low-RAM systems.
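A minimal end-to-end sketch of a cached training step follows, based on the usage shown in the project README. The GradCache constructor arguments (models, chunk_sizes, loss_fn) and the callable step come from that README; the linear stand-in encoders, the random batch, the chunk size of 8, and the contrastive_loss helper are placeholders invented for illustration, not part of the library.

import torch
import torch.nn as nn
import torch.nn.functional as F
from grad_cache import GradCache  # import path as documented in the README

# Stand-in encoders; in practice these would be Hugging Face Transformer models.
query_encoder = nn.Linear(128, 64)
passage_encoder = nn.Linear(128, 64)
optimizer = torch.optim.AdamW(
    list(query_encoder.parameters()) + list(passage_encoder.parameters()), lr=1e-4
)

def contrastive_loss(query_reps, passage_reps):
    # In-batch negatives: the matching passage for query i sits at index i.
    scores = query_reps @ passage_reps.T
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)

gc = GradCache(
    models=[query_encoder, passage_encoder],
    chunk_sizes=8,             # each sub-batch forward holds only 8 examples in memory
    loss_fn=contrastive_loss,  # evaluated once over the full batch of representations
)

# One step over a full batch of 64 pairs: forwards run chunk by chunk with the
# representations cached, the loss and representation gradients are computed on
# the whole batch, then a second chunked pass back-propagates through each encoder.
queries = torch.randn(64, 128)
passages = torch.randn(64, 128)
loss = gc(queries, passages)  # per the README usage, the step runs backward internally
optimizer.step()
optimizer.zero_grad()

The point of the chunked step is that only chunk_sizes examples occupy activation memory at any moment, so the effective contrastive batch (64 here) can grow well past what a single forward/backward pass would fit.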

429 stars and 22 monthly downloads. No commits in the last 6 months. Available on PyPI.

Flags: Stale (6 months), No Dependents
Maintenance: 0 / 25
Adoption: 13 / 25
Maturity: 25 / 25
Community: 13 / 25


Stars: 429
Forks: 27
Language: Python
License: Apache-2.0
Last pushed: Mar 26, 2024
Monthly downloads: 22
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/vector-db/luyug/GradCache"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.