Paper page - On the Optimal Reasoning Length for RL-Trained Language Models

Papers
arxiv:2602.09591

On the Optimal Reasoning Length for RL-Trained Language Models

Published on Feb 10 · Submitted by Daisuke Nohara on Feb 11
Authors: Daisuke Nohara, Taishi Nakamura, Rio Yokota

Abstract

Length control methods in reinforcement learning-trained language models affect reasoning performance and computational efficiency, with optimal output lengths balancing these factors.

AI-generated summary

Reinforcement learning substantially improves reasoning in large language models, but it also tends to lengthen chain-of-thought outputs and increase computational cost during both training and inference. Although length control methods have been proposed, it remains unclear what output length optimally balances efficiency and performance. In this work, we compare several length control methods on two models, Qwen3-1.7B Base and DeepSeek-R1-Distill-Qwen-1.5B. Our results indicate that length penalties may hinder reasoning acquisition, while properly tuned length control can improve efficiency for models with strong prior reasoning. By extending prior work to RL-trained policies, we identify two failure modes: (1) long outputs increase dispersion, and (2) short outputs lead to under-thinking.
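
To give a concrete sense of the kind of length control the paper compares, below is a minimal sketch of a length-penalized RL reward: a correctness reward minus a penalty for exceeding a token budget. The penalty shape and the `target_len` and `alpha` values are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a length-penalized RL reward. The penalty form and
# coefficients are assumptions for illustration, not the paper's method.

def length_penalized_reward(is_correct: bool,
                            num_tokens: int,
                            target_len: int = 2048,
                            alpha: float = 0.2) -> float:
    """Correctness reward minus a penalty for exceeding a target length.

    is_correct: whether the sampled chain of thought reaches the right answer.
    num_tokens: length of the generated output in tokens.
    target_len: desired output-length budget (assumed value).
    alpha:      strength of the length penalty (assumed value).
    """
    correctness = 1.0 if is_correct else 0.0
    # Penalize only the portion of the output beyond the budget, normalized by
    # the budget so the penalty stays comparable to the correctness reward.
    overflow = max(0, num_tokens - target_len) / target_len
    return correctness - alpha * overflow


# Example: a correct 3,000-token answer under a 2,048-token budget.
print(length_penalized_reward(True, 3000))   # ~0.907
print(length_penalized_reward(False, 1000))  # 0.0
```

A penalty of this kind trades accuracy against length: set `alpha` too high and the policy may never acquire long-form reasoning in the first place, which is consistent with the paper's observation that length penalties can hinder reasoning acquisition.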

Community

Paper author · Paper submitter

RL-trained reasoning models often produce longer CoT, increasing test-time cost. We compare several length-control methods on Qwen3-1.7B-Base and DeepSeek-R1-Distill-Qwen-1.5B, and characterize when length penalties hurt reasoning acquisition vs when tuned control improves efficiency. We also highlight two failure modes: overly long outputs increase dispersion, while overly short outputs cause under-thinking.
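
One way to probe the two failure modes described above is to sweep a maximum-token budget at evaluation time and record both accuracy and its per-problem spread, as in the sketch below. The `generate` and `grade` callables, the budget values, and the sample count are placeholders, not the authors' evaluation setup.

```python
# Hypothetical evaluation sketch for the two failure modes: dispersion at long
# lengths and under-thinking at short lengths. Helpers are placeholders.

from statistics import pstdev

def sweep_length_budgets(problems, generate, grade,
                         budgets=(512, 1024, 2048, 4096, 8192),
                         samples_per_problem=8):
    results = {}
    for budget in budgets:
        accuracies, lengths = [], []
        for problem in problems:
            # Draw several samples per problem so per-problem spread is visible.
            outputs = [generate(problem, max_new_tokens=budget)
                       for _ in range(samples_per_problem)]
            scores = [grade(problem, out) for out in outputs]  # 1.0 or 0.0
            accuracies.append(sum(scores) / len(scores))
            # Word count as a rough proxy for token length.
            lengths.extend(len(out.split()) for out in outputs)
        results[budget] = {
            "accuracy": sum(accuracies) / len(accuracies),
            # High spread at large budgets would point to the "long outputs
            # increase dispersion" mode; a sharp accuracy drop at small
            # budgets would point to under-thinking.
            "accuracy_dispersion": pstdev(accuracies),
            "mean_length": sum(lengths) / len(lengths),
        }
    return results
```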


Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2602.09591 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2602.09591 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2602.09591 in a Space README.md to link it from this page.

Collections including this paper 0

No Collection including this paper

Add this paper to a collection to link it from this page.