QwenLM/ParScale
Parallel Scaling Law for Language Model — Beyond Parameter and Inference Time Scaling
Overall score: 32 / 100 (Emerging)
476 stars. No commits in the last 6 months.
Flags: No License, Stale (6 months), No Package, No Dependents
Maintenance: 2 / 25
Adoption: 10 / 25
Maturity: 7 / 25
Community: 13 / 25
Stars: 476
Forks: 24
Language: Python
License: —
Category: —
Last pushed: May 17, 2025
Commits (last 30 days): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/QwenLM/ParScale"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
Higher-rated alternatives
jncraton/languagemodels (score 67) - Explore large language models in 512MB of RAM
microsoft/unilm (score 57) - Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
haizelabs/verdict (score 55) - Inference-time scaling for LLMs-as-a-judge.
bytedance/Sa2VA (score 54) - Official Repo For Pixel-LLM Codebase
albertan017/LLM4Decompile (score 54) - Reverse Engineering: Decompiling Binary Code with Large Language Models