SpecForge and TorchSpec

SpecForge and TorchSpec are competing libraries for training speculative decoding models: SpecForge offers tight integration with the SGLang serving ecosystem, while TorchSpec provides a PyTorch-native alternative.

                 SpecForge        TorchSpec
Overall score    76 (Verified)    38 (Emerging)
Maintenance      23/25            13/25
Adoption         10/25            7/25
Maturity         18/25            9/25
Community        25/25            9/25
Stars            729              32
Forks            179              3
Downloads        —                —
Commits (30d)    28               0
Language         Python           Python
License          MIT              MIT
Dependents       none listed      no package, no dependents

About SpecForge

sgl-project/SpecForge

Train speculative decoding models effortlessly and port them smoothly to SGLang serving.

About TorchSpec

torchspec-project/TorchSpec

A PyTorch native library for training speculative decoding models

Decouples inference and training via a disaggregated pipeline that streams hidden states from vLLM or SGLang inference engines to distributed training workers through Mooncake's in-memory store, enabling independent scaling of each component. Integrates directly with PyTorch FSDP for distributed training, uses vLLM's Worker Extension API to avoid RPC serialization overhead, and supports vocabulary pruning with HuggingFace checkpoint conversion. Includes production examples for Qwen3, Kimi-K2.5, and MiniMax-M2.5 models with configurable training modes for resuming interrupted runs or continual training from existing weights.
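The disaggregated pattern described above can be sketched in plain Python: an inference side streams hidden states into a shared in-memory store while a separately scaled training side consumes them. This is an illustrative sketch only, with a queue standing in for Mooncake's in-memory store; the names `inference_engine`, `training_worker`, and `store` are hypothetical and not TorchSpec's actual API.

```python
import queue
import threading

# Stand-in for Mooncake's in-memory store; real deployments would use a
# networked KV store so inference and training scale independently.
store = queue.Queue(maxsize=8)

def inference_engine(num_batches):
    # Produce "hidden states" (placeholder lists, not real tensors) as the
    # target model runs, and stream them into the store.
    for step in range(num_batches):
        hidden = [0.1 * step] * 4
        store.put(("batch-%d" % step, hidden))
    store.put(None)  # sentinel: no more batches

def training_worker(consumed):
    # Consume streamed hidden states to train the draft model; here we just
    # record which batches arrived, in order.
    while True:
        item = store.get()
        if item is None:
            break
        key, _hidden = item
        consumed.append(key)

consumed = []
producer = threading.Thread(target=inference_engine, args=(3,))
consumer = threading.Thread(target=training_worker, args=(consumed,))
producer.start(); consumer.start()
producer.join(); consumer.join()
print(consumed)  # → ['batch-0', 'batch-1', 'batch-2']
```

Because the two sides communicate only through the store, each can be restarted or scaled without touching the other, which is the point of the disaggregated design.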

Scores updated daily from GitHub, PyPI, and npm data.