Paper page - InftyThink+: Effective and Efficient Infinite-Horizon Reasoning via Reinforcement Learning
\n","updatedAt":"2026-02-10T01:40:19.161Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.7409842014312744},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2602.06960","authors":[{"_id":"69894ae3beecc443208d25b4","user":{"_id":"64098738342c26884c792c93","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/64098738342c26884c792c93/SxBUd-wLrl-PjQsrVYJte.jpeg","isPro":false,"fullname":"Yuchen Yan","user":"yanyc","type":"user"},"name":"Yuchen Yan","status":"claimed_verified","statusLastChangedAt":"2026-02-09T08:29:57.465Z","hidden":false},{"_id":"69894ae3beecc443208d25b5","name":"Liang Jiang","hidden":false},{"_id":"69894ae3beecc443208d25b6","name":"Jin Jiang","hidden":false},{"_id":"69894ae3beecc443208d25b7","name":"Shuaicheng Li","hidden":false},{"_id":"69894ae3beecc443208d25b8","name":"Zujie Wen","hidden":false},{"_id":"69894ae3beecc443208d25b9","name":"Zhiqiang Zhang","hidden":false},{"_id":"69894ae3beecc443208d25ba","name":"Jun Zhou","hidden":false},{"_id":"69894ae3beecc443208d25bb","name":"Jian Shao","hidden":false},{"_id":"69894ae3beecc443208d25bc","name":"Yueting Zhuang","hidden":false},{"_id":"69894ae3beecc443208d25bd","name":"Yongliang Shen","hidden":false}],"publishedAt":"2026-02-06T18:59:27.000Z","submittedOnDailyAt":"2026-02-09T00:18:12.522Z","title":"InftyThink+: Effective and Efficient Infinite-Horizon Reasoning via Reinforcement Learning","submittedOnDailyBy":{"_id":"6039478ab3ecf716b1a5fd4d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6039478ab3ecf716b1a5fd4d/_Thy4E7taiSYBLKxEKJbT.jpeg","isPro":true,"fullname":"taesiri","user":"taesiri","type":"user"},"summary":"Large reasoning models achieve strong performance by scaling inference-time chain-of-thought, but this paradigm suffers from quadratic cost, context length limits, and degraded reasoning due to lost-in-the-middle effects. Iterative reasoning mitigates these issues by periodically summarizing intermediate thoughts, yet existing methods rely on supervised learning or fixed heuristics and fail to optimize when to summarize, what to preserve, and how to resume reasoning. We propose InftyThink+, an end-to-end reinforcement learning framework that optimizes the entire iterative reasoning trajectory, building on model-controlled iteration boundaries and explicit summarization. InftyThink+ adopts a two-stage training scheme with supervised cold-start followed by trajectory-level reinforcement learning, enabling the model to learn strategic summarization and continuation decisions. Experiments on DeepSeek-R1-Distill-Qwen-1.5B show that InftyThink+ improves accuracy by 21% on AIME24 and outperforms conventional long chain-of-thought reinforcement learning by a clear margin, while also generalizing better to out-of-distribution benchmarks. 
Moreover, InftyThink+ significantly reduces inference latency and accelerates reinforcement learning training, demonstrating improved reasoning efficiency alongside stronger performance.","upvotes":12,"discussionId":"69894ae3beecc443208d25be","projectPage":"https://zju-real.github.io/InftyThink-Plus/","githubRepo":"https://github.com/ZJU-REAL/InftyThink-Plus","githubRepoAddedBy":"user","ai_summary":"InftyThink+ uses reinforcement learning to optimize iterative reasoning processes, improving accuracy and efficiency in large language models.","ai_keywords":["chain-of-thought","iterative reasoning","reinforcement learning","trajectory-level reinforcement learning","summarization","reasoning efficiency","inference latency"],"githubStars":24,"organization":{"_id":"61bac2af530e5c78d7b99667","name":"zju","fullname":"Zhejiang University","avatar":"https://cdn-uploads.huggingface.co/production/uploads/5e1058e9fcf41d740b69966d/7G1xjlxwCdMEmKcxNR0n5.png"}},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"64098738342c26884c792c93","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/64098738342c26884c792c93/SxBUd-wLrl-PjQsrVYJte.jpeg","isPro":false,"fullname":"Yuchen Yan","user":"yanyc","type":"user"},{"_id":"622474f38dc6b0b64f5e903d","avatarUrl":"/avatars/d6b60a014277a8ec7d564163c5f644aa.svg","isPro":false,"fullname":"Yuxin Zuo","user":"yuxinzuo","type":"user"},{"_id":"682c14409f1aeba16e13af66","avatarUrl":"/avatars/57660cd718e390b91134f8494ebefd3e.svg","isPro":false,"fullname":"Hai","user":"fiowhahf","type":"user"},{"_id":"66d8512c54209e9101811e8e","avatarUrl":"/avatars/62dfd8e6261108f2508efe678d5a2a57.svg","isPro":false,"fullname":"M Saad Salman","user":"MSS444","type":"user"},{"_id":"6434b6619bd5a84b5dcfa4de","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6434b6619bd5a84b5dcfa4de/h8Q6kPNjFNc03wmdboHzq.jpeg","isPro":true,"fullname":"Young-Jun Lee","user":"passing2961","type":"user"},{"_id":"5e1058e9fcf41d740b69966d","avatarUrl":"/avatars/ce74839ba871f2b54313a670a233ba82.svg","isPro":false,"fullname":"Yongliang Shen","user":"tricktreat","type":"user"},{"_id":"62cca92ca3157f8b4155c8bb","avatarUrl":"/avatars/fb3c38e7f5a4db3ca49cc6c75f4d5eae.svg","isPro":false,"fullname":"Jason","user":"Duplets","type":"user"},{"_id":"6270324ebecab9e2dcf245de","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6270324ebecab9e2dcf245de/cMbtWSasyNlYc9hvsEEzt.jpeg","isPro":false,"fullname":"Kye Gomez","user":"kye","type":"user"},{"_id":"620783f24e28382272337ba4","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/620783f24e28382272337ba4/zkUveQPNiDfYjgGhuFErj.jpeg","isPro":false,"fullname":"GuoLiangTang","user":"Tommy930","type":"user"},{"_id":"6197f5213619d373ad154f73","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6197f5213619d373ad154f73/PzMlx-n984x03Ldy34Xdi.jpeg","isPro":false,"fullname":"Milad Aghajohari","user":"miladink","type":"user"},{"_id":"66ab85440e1b938d84ee2b11","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/66ab85440e1b938d84ee2b11/8UnubTbO-vrOu2uG4TuUL.jpeg","isPro":false,"fullname":"Tarl","user":"Y-Tarl","type":"user"},{"_id":"65ef2d78e26bcf263dc7a806","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/65ef2d78e26bcf263dc7a806/3QSx6Yk_thl7YARek5sx4.png","isPro":false,"fullname":"Fan 
Yuan","user":"Leoyfan","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0,"organization":{"_id":"61bac2af530e5c78d7b99667","name":"zju","fullname":"Zhejiang University","avatar":"https://cdn-uploads.huggingface.co/production/uploads/5e1058e9fcf41d740b69966d/7G1xjlxwCdMEmKcxNR0n5.png"}}">
InftyThink+ uses reinforcement learning to optimize iterative reasoning processes, improving accuracy and efficiency in large language models.
AI-generated summary
Large reasoning models achieve strong performance by scaling inference-time chain-of-thought, but this paradigm suffers from quadratic cost, context length limits, and degraded reasoning due to lost-in-the-middle effects. Iterative reasoning mitigates these issues by periodically summarizing intermediate thoughts, yet existing methods rely on supervised learning or fixed heuristics and fail to optimize when to summarize, what to preserve, and how to resume reasoning. We propose InftyThink+, an end-to-end reinforcement learning framework that optimizes the entire iterative reasoning trajectory, building on model-controlled iteration boundaries and explicit summarization. InftyThink+ adopts a two-stage training scheme with supervised cold-start followed by trajectory-level reinforcement learning, enabling the model to learn strategic summarization and continuation decisions. Experiments on DeepSeek-R1-Distill-Qwen-1.5B show that InftyThink+ improves accuracy by 21% on AIME24 and outperforms conventional long chain-of-thought reinforcement learning by a clear margin, while also generalizing better to out-of-distribution benchmarks. Moreover, InftyThink+ significantly reduces inference latency and accelerates reinforcement learning training, demonstrating improved reasoning efficiency alongside stronger performance.
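To make the iterative paradigm concrete, here is a minimal sketch of a summarize-and-continue inference loop in the spirit of the abstract. The `model.generate` interface, the `<final>` marker, and the round and token budgets are illustrative assumptions for this sketch, not the authors' actual implementation or API.

```python
# Hypothetical sketch of model-controlled iterative reasoning:
# each round sees only the problem plus a compact summary of prior
# rounds, never the full accumulated chain-of-thought.

def iterative_reasoning(problem: str, model, max_rounds: int = 8,
                        segment_budget: int = 4096) -> str:
    """Run bounded reasoning rounds, carrying forward only a summary."""
    summary = ""  # compact state passed between rounds
    for _ in range(max_rounds):
        prompt = f"Problem: {problem}\nPrevious summary: {summary}\n"
        segment = model.generate(prompt, max_new_tokens=segment_budget)

        # Model-controlled boundary: the model itself signals when
        # reasoning is complete (assumed <final> marker).
        if "<final>" in segment:
            return segment.split("<final>")[-1].strip()

        # Otherwise the model emits an explicit summary of what to
        # preserve, and the next round resumes from that summary alone.
        summary = model.generate(
            prompt + segment + "\nSummarize the progress so far:",
            max_new_tokens=512,
        )
    return summary  # fallback if no final answer was produced
```

Under this view, the trajectory-level reinforcement learning described in the abstract would assign reward to the whole sequence of rounds, so the model is optimized over its summarization and continuation decisions rather than over a single long generation.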