rukmini-17/scalable-sequence-modeling
Comparative analysis of Mamba and Transformers trained from scratch, benchmarking Mamba's linear O(N) scaling and constant-time per-token inference against the quadratic cost of self-attention.
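The complexity claim is easy to see at the level of a decode loop. The sketch below is illustrative only, not the repository's code: the toy linear SSM, the weights, and the sizes are all assumptions. Attention must revisit a growing KV cache at every step (O(t) work at step t, O(N^2) over a sequence), while an SSM-style recurrence carries a fixed-size state (O(1) per step, O(N) total).

import numpy as np

# Toy sizes, purely illustrative.
d = 8    # model width
N = 16   # sequence length

rng = np.random.default_rng(0)
x = rng.standard_normal((N, d))

# Attention-style decoding: the KV cache grows with the step index t,
# so step t does O(t) work and a full pass costs O(N^2).
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
kv_cache = []
attn_out = []
for t in range(N):
    q, k, v = x[t] @ Wq, x[t] @ Wk, x[t] @ Wv
    kv_cache.append((k, v))                              # memory grows with t
    scores = np.array([q @ ck for ck, _ in kv_cache]) / np.sqrt(d)
    w = np.exp(scores - scores.max())                    # softmax over t+1 entries
    w /= w.sum()
    attn_out.append(sum(wi * cv for wi, (_, cv) in zip(w, kv_cache)))

# SSM-style decoding (the recurrence behind Mamba-like models): a
# fixed-size state h replaces the cache, so every step is O(1) in
# sequence length and a full pass costs O(N).
A = 0.9 * np.eye(d)                 # toy fixed transition; Mamba's is input-dependent
B = rng.standard_normal((d, d))
C = rng.standard_normal((d, d))
h = np.zeros(d)
ssm_out = []
for t in range(N):
    h = A @ h + B @ x[t]            # constant time and memory per token
    ssm_out.append(C @ h)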
Stars: —
Forks: —
Language: Jupyter Notebook
License: MIT
Category: transformers
Last pushed: Dec 19, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/rukmini-17/scalable-sequence-modeling"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
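For programmatic access from Python, a minimal sketch using the requests library is shown below. Only the URL and the rate limits come from this page; the response shape and the key header are assumptions.

import requests

# Call the endpoint shown above. The JSON schema is not documented
# here, so printing the raw payload is all this sketch assumes.
url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/rukmini-17/scalable-sequence-modeling")
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())

# With a free key (the Authorization bearer scheme is an assumption;
# check the service's docs for the actual header):
# resp = requests.get(url, headers={"Authorization": "Bearer YOUR_KEY"}, timeout=10)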
Higher-rated alternatives
NVlabs/MambaVision
[CVPR 2025] Official PyTorch Implementation of MambaVision: A Hybrid Mamba-Transformer Vision Backbone
sign-language-translator/sign-language-translator
Python library & framework to build custom translators for the hearing-impaired and translate...
kyegomez/Jamba
PyTorch Implementation of Jamba: "Jamba: A Hybrid Transformer-Mamba Language Model"
fashn-AI/fashn-human-parser
Human parsing model for fashion and virtual try-on applications
autonomousvision/transfuser
[PAMI'23] TransFuser: Imitation with Transformer-Based Sensor Fusion for Autonomous Driving;...