litgpt and jam-gpt

Lightning-AI/litgpt, a high-performance, production-ready framework for LLMs, and loke-x/jam-gpt, an experimental reimplementation for research and development, are ecosystem siblings: they represent distinct phases, and distinct approaches, within the LLM implementation lifecycle.

|                | litgpt        | jam-gpt                             |
|----------------|---------------|-------------------------------------|
| Score          | 78 (Verified) | 26 (Experimental)                   |
| Maintenance    | 13/25         | 0/25                                |
| Adoption       | 20/25         | 6/25                                |
| Maturity       | 25/25         | 9/25                                |
| Community      | 20/25         | 11/25                               |
| Stars          | 13,225        | 21                                  |
| Forks          | 1,409         | 3                                   |
| Downloads      | 15,196        | n/a                                 |
| Commits (30d)  | 5             | 0                                   |
| Language       | Python        | Jupyter Notebook                    |
| License        | Apache-2.0    | MIT                                 |
| Risk flags     | None          | Stale 6m, No Package, No Dependents |

About litgpt

Lightning-AI/litgpt

20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale.

litgpt implements each model from scratch, without abstraction layers, and combines Flash Attention with FSDP for distributed training on anywhere from 1 to 1,000+ GPUs or TPUs. It supports parameter-efficient finetuning via LoRA and QLoRA, with quantization and mixed precision (fp4/8/16/32) to reduce GPU memory requirements, and it integrates with PyTorch Lightning and Lightning Cloud infrastructure for end-to-end pretraining, finetuning, and deployment workflows driven by declarative YAML recipes.
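To make the LoRA idea behind those finetuning recipes concrete, here is a minimal sketch of the math LoRA relies on: instead of updating a full frozen weight matrix W, you train a low-rank update B @ A, and the effective weight becomes W + (alpha / r) * (B @ A). This is an illustration in pure Python, not litgpt's implementation (litgpt uses PyTorch modules); the function names here are hypothetical.

```python
def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def lora_weight(W, A, B, alpha, r):
    """Effective weight W + (alpha / r) * (B @ A) -- the LoRA update.

    W is frozen during finetuning; only the small A (r x in_features)
    and B (out_features x r) matrices are trained.
    """
    delta = matmul(B, A)
    scale = alpha / r
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# Frozen 2x2 base weight with rank-1 adapters (r = 1).
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 1.0]]            # r x in_features
B = [[0.5], [0.5]]          # out_features x r
W_eff = lora_weight(W, A, B, alpha=2.0, r=1)
# scale = alpha / r = 2, so delta = [[1.0, 1.0], [1.0, 1.0]]
# and W_eff = [[2.0, 1.0], [1.0, 2.0]]
```

The memory savings come from the shapes: for a d x d weight, full finetuning trains d*d parameters, while rank-r LoRA trains only 2*d*r, which is tiny when r << d.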

About jam-gpt

loke-x/jam-gpt

An experimental reimplementation of LLM models for the research and development process.

Scores updated daily from GitHub, PyPI, and npm data.