PRIME-RL/PRIME

Scalable RL solution for advanced reasoning of language models

Score: 43/100 (Emerging)

Implements online RL with implicit process reward models (PRMs) that learn dense, token-level rewards directly from outcome labels, without requiring step-level annotations. The approach jointly trains a policy and a PRM initialized from the same SFT model, using RLOO advantage estimation to combine outcome and process rewards for PPO updates. Integrated with the veRL framework and optimized for math and coding reasoning tasks.
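The core advantage computation described above can be sketched as follows. This is a minimal illustration of RLOO (leave-one-out) advantage estimation combining a sparse outcome reward with summed token-level process rewards; the function names, the additive combination, and the `beta` weight are assumptions for illustration, not taken from the PRIME codebase.

```python
from typing import List

def rloo_advantages(rewards: List[float]) -> List[float]:
    """RLOO: each sample's advantage is its reward minus the mean
    reward of the other K-1 samples drawn for the same prompt."""
    k = len(rewards)
    total = sum(rewards)
    return [r - (total - r) / (k - 1) for r in rewards]

def combined_rewards(outcome: List[float],
                     process_tokens: List[List[float]],
                     beta: float = 1.0) -> List[float]:
    """Combine a per-response outcome reward with summed token-level
    process rewards (the additive form and beta are hypothetical)."""
    return [o + beta * sum(p) for o, p in zip(outcome, process_tokens)]

# Four sampled responses to one prompt: binary outcome labels plus
# dense per-token process rewards from an implicit PRM.
outcome = [1.0, 0.0, 0.0, 1.0]
process = [[0.2, 0.1], [-0.3], [0.0, -0.1], [0.4, 0.2]]
adv = rloo_advantages(combined_rewards(outcome, process))
```

Note that RLOO advantages for a group always sum to zero, since each sample's baseline is the mean of the others.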

1,813 stars. No commits in the last 6 months.

Flags: Stale (6 months), No Package, No Dependents
Maintenance: 0/25
Adoption: 10/25
Maturity: 16/25
Community: 17/25


Stars: 1,813
Forks: 104
Language: Python
License: Apache-2.0
Last pushed: Mar 18, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/PRIME-RL/PRIME"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.