BoostStep: Boosting mathematical capability of Large Language Models via improved single-step reasoning
Authors: Beichen Zhang, Yuhong Liu, Xiaoyi Dong, Yuhang Zang, Pan Zhang, Haodong Duan, Yuhang Cao, Dahua Lin, Jiaqi Wang
Code: https://github.com/beichenzbc/BoostStep
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

* Beyond Examples: High-level Automated Reasoning Paradigm in In-Context Learning via MCTS (2024) - https://huggingface.co/papers/2411.18478
* RARE: Retrieval-Augmented Reasoning Enhancement for Large Language Models (2024) - https://huggingface.co/papers/2412.02830
* SRA-MCTS: Self-driven Reasoning Augmentation with Monte Carlo Tree Search for Code Generation (2024) - https://huggingface.co/papers/2411.11053
* Enhancing the Reasoning Capabilities of Small Language Models via Solution Guidance Fine-Tuning (2024) - https://huggingface.co/papers/2412.09906
* AtomThink: A Slow Thinking Framework for Multimodal Mathematical Reasoning (2024) - https://huggingface.co/papers/2411.11930
* BPP-Search: Enhancing Tree of Thought Reasoning for Mathematical Modeling Problem Solving (2024) - https://huggingface.co/papers/2411.17404
* Enhancing Reasoning through Process Supervision with Monte Carlo Tree Search (2025) - https://huggingface.co/papers/2501.01478

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space: https://huggingface.co/spaces/librarian-bots/recommend_similar_papers

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`
reasoning","submittedOnDailyBy":{"_id":"64b4eec4faa3181a5eab9c46","avatarUrl":"/avatars/bcc9bf5cbf67546ad2b4c9ec8b96ac96.svg","isPro":true,"fullname":"Jiaqi Wang","user":"myownskyW7","type":"user"},"summary":"Cutting-edge large language models (LLMs) demonstrate promising performance\nin solving complex math problems with a divide-and-conquer pipeline and the\nassistance of in-context learning (ICL) examples. However, their potential for\nimprovement is limited by two critical problems within their ICL examples:\ngranularity-mismatch and the ensuing negative-effect noise problem.\nSpecifically, the LLMs are capable of the dividing process yet mostly failed by\ninaccurate reasoning within a few conquer steps, while the ICL examples\nretrieved in question-grained sometimes lack relevant steps for a specific\nchallenging reasoning step. Further, this disconnect may hinder the correct\nreasoning due to its irrelevance. To this end, we focus on improving the\nreasoning quality within each step and present BoostStep. BoostStep aligns the\ngranularity between the retrieving and reasoning on step grained, and provides\nhighly related ICL examples for each reasoning step with a novel `first-try'\nstrategy. BoostStep provides more relevant examples than the coarse\nquestion-grained strategy, enhancing the model reasoning quality within each\nstep steadily. BoostStep is a general and robust reasoning-enhancing method\nthat not only improves standalone reasoning performance but also integrates\nseamlessly with Monte Carlo Tree Search methods (MCTS) to refine both candidate\ngeneration and decision-making. Quantitatively, it improves GPT-4o and\nQwen2.5-Math-72B by 3.6\\% and 2.0\\% respectively on various mathematical\nbenchmarks, and 7.5\\% gain combined with MCTS.","upvotes":43,"discussionId":"677cdcd60604b68871999e7b","githubRepo":"https://github.com/beichenzbc/booststep","githubRepoAddedBy":"auto","ai_summary":"BoostStep improves large language models' reasoning quality in math problems by aligning granularity and providing relevant in-context learning examples, enhancing performance and integrating with Monte Carlo Tree Search methods.","ai_keywords":["large language models","divide-and-conquer pipeline","in-context learning","granularity-mismatch","negative-effect noise","conquering steps","question-grained","step-grained","reasoning quality","first-try strategy","Monte Carlo Tree Search","GPT-4o","Qwen2.5-Math-72B"],"githubStars":37},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"656f1b21b075b63c90ba02ee","avatarUrl":"/avatars/d6856815ef06261394178161e4d511b4.svg","isPro":false,"fullname":"Huang Qidong","user":"shikiw","type":"user"},{"_id":"65ab5332043d53781a115475","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/65ab5332043d53781a115475/UaxSFDWteYsByzx7G_KKy.jpeg","isPro":false,"fullname":"Zhixiong Zhang (SII)","user":"rookiexiong","type":"user"},{"_id":"64e5f0c23e220d8f697d1ab0","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/64e5f0c23e220d8f697d1ab0/qD-Egzxs-ZvJT5GXeqE3d.jpeg","isPro":false,"fullname":"Jinsong 
Li","user":"Jinsong-Li","type":"user"},{"_id":"64b51fd8bcfd8542d6473d9a","avatarUrl":"/avatars/ceaa73b79f448996187f07733d96b800.svg","isPro":false,"fullname":"yujie","user":"yujieouo","type":"user"},{"_id":"632a80706813868fa4a649e3","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/632a80706813868fa4a649e3/MbTsAYGadNwS3-G5vxEnm.jpeg","isPro":true,"fullname":"Zhibing LI","user":"lizb6626","type":"user"},{"_id":"6444f0a8b272430bdbf11785","avatarUrl":"/avatars/5135f817e638e97b280a28ba90d4381c.svg","isPro":false,"fullname":"laolao","user":"laolao77","type":"user"},{"_id":"63fda3fced9eead590ff6918","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1677566802735-noauth.jpeg","isPro":false,"fullname":"Zeyi Sun","user":"Zery","type":"user"},{"_id":"63859cf3b2906edaf83af9f0","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/63859cf3b2906edaf83af9f0/kajwuVzd4pDucSPlwghxo.png","isPro":true,"fullname":"Yuhang Zang","user":"yuhangzang","type":"user"},{"_id":"63ee1379190ddd6214efd73a","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1676546883247-noauth.png","isPro":false,"fullname":"HAODONG DUAN","user":"KennyUTC","type":"user"},{"_id":"64adfeac4beffa272dfaef21","avatarUrl":"/avatars/883f6ba38b993476115dfafcef9ce3c1.svg","isPro":false,"fullname":"Yifei Li","user":"JoeLeelyf","type":"user"},{"_id":"64b4eec4faa3181a5eab9c46","avatarUrl":"/avatars/bcc9bf5cbf67546ad2b4c9ec8b96ac96.svg","isPro":true,"fullname":"Jiaqi Wang","user":"myownskyW7","type":"user"},{"_id":"6433dc0aa4c9c55871a53027","avatarUrl":"/avatars/91c5c0ab09726d4f648d1e27417a3a95.svg","isPro":false,"fullname":"Yang Lin","user":"Yang18","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":3}">
AI-generated summary

BoostStep improves large language models' reasoning quality in math problems by aligning granularity and providing relevant in-context learning examples, enhancing performance and integrating with Monte Carlo Tree Search methods.
Abstract

Cutting-edge large language models (LLMs) demonstrate promising performance in solving complex math problems with a divide-and-conquer pipeline and the assistance of in-context learning (ICL) examples. However, their potential for improvement is limited by two critical problems within their ICL examples: granularity mismatch and the ensuing negative-effect noise problem. Specifically, LLMs are capable of the dividing process yet often fail through inaccurate reasoning within a few conquer steps, while ICL examples retrieved at question granularity sometimes lack the steps relevant to a specific challenging reasoning step; worse, such irrelevant examples can actively hinder correct reasoning. To this end, we focus on improving the reasoning quality within each step and present BoostStep. BoostStep aligns the granularity of retrieval and reasoning at the step level, and provides highly related ICL examples for each reasoning step with a novel 'first-try' strategy. BoostStep provides more relevant examples than the coarse question-grained strategy, steadily enhancing the model's reasoning quality within each step. BoostStep is a general and robust reasoning-enhancing method that not only improves standalone reasoning performance but also integrates seamlessly with Monte Carlo Tree Search (MCTS) to refine both candidate generation and decision-making. Quantitatively, it improves GPT-4o and Qwen2.5-Math-72B by 3.6% and 2.0%, respectively, on various mathematical benchmarks, and yields a 7.5% gain when combined with MCTS.
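The abstract describes the core loop only at a high level, so here is a minimal Python sketch of how step-grained retrieval with a 'first-try' strategy could work: draft the next step unaided, use that draft (rather than the whole question) as the retrieval query against a step-level example bank, then regenerate the step with the retrieved worked step in context. Everything here is an illustrative assumption: `StepBank`, `embed`, and `llm_generate` are hypothetical placeholders, not the authors' API; see the linked repository for the actual implementation.

```python
# Minimal sketch of BoostStep-style step-grained ICL with a "first-try"
# strategy. All names here (StepBank, embed, llm_generate) are hypothetical
# placeholders for illustration, not the authors' actual implementation.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy embedding; swap in a real sentence encoder."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)

class StepBank:
    """Example bank of (step_description, worked_step) pairs mined from
    reference solutions, indexed at *step* rather than question granularity."""
    def __init__(self, steps: list[tuple[str, str]]):
        self.steps = steps
        self.vecs = np.stack([embed(desc) for desc, _ in steps])

    def retrieve(self, query: str, k: int = 1) -> list[tuple[str, str]]:
        scores = self.vecs @ embed(query)
        top = np.argsort(scores)[::-1][:k]
        return [self.steps[i] for i in top]

def llm_generate(prompt: str) -> str:
    """Stub so the sketch runs end to end; replace with a real model call."""
    return "Final answer: <model call goes here>"

def boost_step_solve(problem: str, bank: StepBank, max_steps: int = 10) -> str:
    solution: list[str] = []
    for _ in range(max_steps):
        context = problem + "\n" + "\n".join(solution)
        # 1) "First try": draft the next step unaided, so the draft (not the
        #    whole question) determines what gets retrieved.
        draft = llm_generate(context + "\nNext step:")
        # 2) Retrieve examples at step granularity using the draft as query.
        shots = "\n".join(f"Similar step: {d}\nWorked example: {w}"
                          for d, w in bank.retrieve(draft, k=1))
        # 3) Regenerate the step conditioned on the retrieved examples.
        step = llm_generate(shots + "\n" + context + "\nNext step:")
        solution.append(step)
        if "final answer" in step.lower():
            break
    return "\n".join(solution)

if __name__ == "__main__":
    bank = StepBank([("apply the quadratic formula",
                      "x = (-b +/- sqrt(b^2 - 4ac)) / (2a)")])
    print(boost_step_solve("Solve x^2 - 5x + 6 = 0.", bank))
```

Under this reading, the MCTS integration mentioned in the abstract would invoke the same per-step boosting when expanding candidate steps and when scoring them, which is where the reported additional gain over standalone reasoning is measured.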