NVIDIA/bionemo-framework

BioNeMo Framework: For building and adapting AI models in drug discovery at scale

Score: 58/100 (Established)

Provides pre-optimized training recipes for biological models (ESM2, CodonFM, Llama3), leveraging NVIDIA TransformerEngine for low-precision training (FP8/MXFP8) along with throughput optimizations such as Megatron-FSDP and sequence packing. Integrates with PyTorch, Hugging Face, and NVIDIA's distributed training stack to enable efficient multi-GPU scaling, with benchmarked throughput (e.g., 2,367 TFLOPS/GPU on ESM2 15B).
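
The FP8 path mentioned above is TransformerEngine's fp8_autocast context. Below is a minimal sketch of that pattern in PyTorch, not BioNeMo's actual recipe code; the layer sizes and recipe settings are illustrative assumptions.

import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# Delayed-scaling FP8 recipe; HYBRID uses E4M3 for forward tensors
# and E5M2 for gradients.
fp8_recipe = recipe.DelayedScaling(
    margin=0,
    fp8_format=recipe.Format.HYBRID,
    amax_history_len=16,
    amax_compute_algo="max",
)

# Illustrative layer and batch sizes, not ESM2's real dimensions.
layer = te.Linear(1024, 1024, bias=True).cuda()
x = torch.randn(16, 1024, device="cuda", dtype=torch.bfloat16, requires_grad=True)

# Matmuls inside this context execute in FP8 with dynamically
# maintained scaling factors.
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    y = layer(x)
y.sum().backward()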

679 stars. Actively maintained with 35 commits in the last 30 days.

No License · No Package · No Dependents
Maintenance: 23/25
Adoption: 10/25
Maturity: 1/25
Community: 24/25

Stars: 679
Forks: 126
Language: Jupyter Notebook
License: none
Last pushed: Mar 13, 2026
Commits (30d): 35

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/NVIDIA/bionemo-framework"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
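
For programmatic use, here is a quick sketch of calling the same endpoint from Python with only the standard library; the response schema is not documented here, so the example simply prints whatever JSON comes back.

import json
import urllib.request

# Same endpoint as the curl example above; no key needed up to 100 requests/day.
url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/NVIDIA/bionemo-framework"

with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

# Pretty-print the payload; field names are whatever the API returns.
print(json.dumps(data, indent=2))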