bytedance/lightseq

LightSeq: A High Performance Library for Sequence Processing and Generation

Archived
Score: 46 / 100 (Emerging)
Built on CUDA with custom fused kernels optimized for Transformer architectures, it supports fp16 and int8 mixed-precision training and inference for BERT, GPT, ViT, and other sequence models. It integrates with Fairseq, Hugging Face, and DeepSpeed, provides a TensorRT Inference Server backend for production deployment, and includes decoding algorithms such as beam search and CRF.

3,304 stars. No commits in the last 6 months.

Archived · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 20 / 25

How are scores calculated?

Stars: 3,304
Forks: 335
Language: C++
License:
Last pushed: May 16, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/bytedance/lightseq"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
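For scripted access, the curl command above can be reproduced with Python's standard library. This is a minimal sketch: the endpoint URL is taken from this page, but the JSON field names in the response are not documented here, so the decoded dict's keys should be inspected rather than assumed. How an API key is passed (header vs. query parameter) is also unspecified on this page, so the sketch uses only keyless access.

```python
import json
import urllib.request

# Base endpoint copied from the page's curl example.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-report URL for a given GitHub owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and JSON-decode a quality report (keyless: 100 requests/day).

    The shape of the returned dict is an assumption -- inspect it before
    relying on specific keys.
    """
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)

# Example: fetch_quality("bytedance", "lightseq")
```

Raising the rate limit to 1,000 requests/day requires a free key; consult the API's own documentation for how to attach it to the request.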