arxiv:2503.24290

Open-Reasoner-Zero: An Open Source Approach to Scaling Up Reinforcement Learning on the Base Model

Published on Mar 31, 2025 · Submitted by AK on Apr 1, 2025 · #3 Paper of the day
Project page: https://huggingface.co/Open-Reasoner-Zero
Code: https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero
Authors: Jingcheng Hu, Yinmin Zhang, Qi Han, Daxin Jiang, Xiangyu Zhang, Heung-Yeung Shum

Abstract

Open-Reasoner-Zero achieves superior performance on reasoning benchmarks using a minimalist PPO approach, requiring fewer training steps than DeepSeek-R1-Zero.

AI-generated summary

We introduce Open-Reasoner-Zero, the first open source implementation of large-scale reasoning-oriented RL training, focused on scalability, simplicity, and accessibility. Through extensive experiments, we demonstrate that a minimalist approach, vanilla PPO with GAE (lambda=1, gamma=1) and straightforward rule-based rewards, without any KL regularization, is sufficient to scale up both response length and benchmark performance, similar to the phenomenon observed in DeepSeek-R1-Zero. Using the same base model as DeepSeek-R1-Zero-Qwen-32B, our implementation achieves superior performance on AIME2024, MATH500, and the GPQA Diamond benchmark while demonstrating remarkable efficiency, requiring only a tenth of the training steps compared to the DeepSeek-R1-Zero pipeline. In the spirit of open source, we release our source code, parameter settings, training data, and model weights across various sizes.
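To make the recipe concrete, below is a minimal Python sketch (not the released implementation) of the two ingredients the abstract names: GAE advantages with gamma = lambda = 1, which collapse to the undiscounted return-to-go minus the value baseline, and a simple rule-based correctness reward. The function names and the answer-extraction logic are illustrative placeholders; the actual reward parsing and PPO training loop live in the released repository.

```python
# Illustrative sketch only; not the Open-Reasoner-Zero codebase.
from typing import List


def rule_based_reward(response: str, reference_answer: str) -> float:
    """Toy rule-based reward: 1.0 if the text after an 'Answer:' marker matches
    the reference exactly, else 0.0. Real parsing (e.g. \\boxed{} extraction)
    would be more robust."""
    final = response.split("Answer:")[-1].strip()
    return 1.0 if final == reference_answer.strip() else 0.0


def compute_gae(rewards: List[float], values: List[float],
                gamma: float = 1.0, lam: float = 1.0) -> List[float]:
    """Generalized Advantage Estimation. With gamma = lam = 1 (as in the paper),
    the advantage reduces to the undiscounted return-to-go minus the value
    baseline at each step (terminal bootstrap value taken as 0)."""
    advantages = [0.0] * len(rewards)
    gae = 0.0
    next_value = 0.0  # value after the terminal step
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * next_value - values[t]
        gae = delta + gamma * lam * gae
        advantages[t] = gae
        next_value = values[t]
    return advantages


# Example: sparse terminal reward of 1.0 for a correct final answer.
rewards = [0.0, 0.0, 1.0]
values = [0.4, 0.6, 0.8]
print(compute_gae(rewards, values))  # [0.6, 0.4, 0.2] = return-to-go minus value
```

With gamma = lambda = 1 there is no discounting and no bootstrap-induced bias, which matches the minimalist, sparse-reward setup the abstract describes.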

Community

Paper submitter

[Attached screenshot: Screenshot 2025-04-01 at 1.14.39 AM.png]

Awesome

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

* Logic-RL: Unleashing LLM Reasoning with Rule-Based Reinforcement Learning (2025) https://huggingface.co/papers/2502.14768
* R1-Zero's "Aha Moment" in Visual Reasoning on a 2B Non-SFT Model (2025) https://huggingface.co/papers/2503.05132
* DAPO: An Open-Source LLM Reinforcement Learning System at Scale (2025) https://huggingface.co/papers/2503.14476
* Understanding R1-Zero-Like Training: A Critical Perspective (2025) https://huggingface.co/papers/2503.20783
* Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn't (2025) https://huggingface.co/papers/2503.16219
* Light-R1: Curriculum SFT, DPO and RL for Long COT from Scratch and Beyond (2025) https://huggingface.co/papers/2503.10460
* Vision-R1: Incentivizing Reasoning Capability in Multimodal Large Language Models (2025) https://huggingface.co/papers/2503.06749

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out the recommend_similar_papers Space: https://huggingface.co/spaces/librarian-bots/recommend_similar_papers

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper: 8

Datasets citing this paper: 8

Spaces citing this paper: 0

No Spaces link this paper yet.

Cite arxiv.org/abs/2503.24290 in a Space README.md to link it from this page.

Collections including this paper: 23