https://github.com/yale-nlp/MCTS-RAG
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:

* Graph-Augmented Reasoning: Evolving Step-by-Step Knowledge Graph Retrieval for LLM Reasoning (https://huggingface.co/papers/2503.01642) (2025)
* RAG-Gym: Optimizing Reasoning and Search Agents with Process Supervision (https://huggingface.co/papers/2502.13957) (2025)
* Vendi-RAG: Adaptively Trading-Off Diversity And Quality Significantly Improves Retrieval Augmented Generation With LLMs (https://huggingface.co/papers/2502.11228) (2025)
* Human Cognition Inspired RAG with Knowledge Graph for Complex Problem Solving (https://huggingface.co/papers/2503.06567) (2025)
* KiRAG: Knowledge-Driven Iterative Retriever for Enhancing Retrieval-Augmented Generation (https://huggingface.co/papers/2502.18397) (2025)
* A Survey on Knowledge-Oriented Retrieval-Augmented Generation (https://huggingface.co/papers/2503.10677) (2025)
* Search-R1: Training LLMs to Reason and Leverage Search Engines with Reinforcement Learning (https://huggingface.co/papers/2503.09516) (2025)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space: https://huggingface.co/spaces/librarian-bots/recommend_similar_papers

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend
\n","updatedAt":"2025-03-28T01:35:00.865Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.7223686575889587},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2503.20757","authors":[{"_id":"67e4c08fd9b7021d4a600fa4","user":{"_id":"662b4e3bc709a61df840fda1","avatarUrl":"/avatars/fc73c63a4e1f8fbb084ec43ec9af0af0.svg","isPro":false,"fullname":"Hu Yunhai","user":"AlexCCtop","type":"user"},"name":"Yunhai Hu","status":"admin_assigned","statusLastChangedAt":"2025-03-27T09:57:51.052Z","hidden":false},{"_id":"67e4c08fd9b7021d4a600fa5","user":{"_id":"62f662bcc58915315c4eccea","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/62f662bcc58915315c4eccea/zOAQLONfMP88zr70sxHK-.jpeg","isPro":true,"fullname":"Yilun Zhao","user":"yilunzhao","type":"user"},"name":"Yilun Zhao","status":"extracted_confirmed","statusLastChangedAt":"2025-03-31T03:40:49.669Z","hidden":false},{"_id":"67e4c08fd9b7021d4a600fa6","user":{"_id":"660103ec4ae78d4ded4633fc","avatarUrl":"/avatars/efce106d70f5d092bf44d0638aa49984.svg","isPro":false,"fullname":"CHEN Zhao","user":"chenzhao","type":"user"},"name":"Chen Zhao","status":"admin_assigned","statusLastChangedAt":"2025-03-27T09:58:04.608Z","hidden":false},{"_id":"67e4c08fd9b7021d4a600fa7","user":{"_id":"5f5ba21188f57f65f951f255","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1599840760465-noauth.png","isPro":false,"fullname":"Arman Cohan","user":"armanc","type":"user"},"name":"Arman Cohan","status":"admin_assigned","statusLastChangedAt":"2025-03-27T09:57:57.092Z","hidden":false}],"publishedAt":"2025-03-26T17:46:08.000Z","submittedOnDailyAt":"2025-03-27T01:36:43.674Z","title":"MCTS-RAG: Enhancing Retrieval-Augmented Generation with Monte Carlo Tree\n Search","submittedOnDailyBy":{"_id":"62f662bcc58915315c4eccea","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/62f662bcc58915315c4eccea/zOAQLONfMP88zr70sxHK-.jpeg","isPro":true,"fullname":"Yilun Zhao","user":"yilunzhao","type":"user"},"summary":"We introduce MCTS-RAG, a novel approach that enhances the reasoning\ncapabilities of small language models on knowledge-intensive tasks by\nleveraging retrieval-augmented generation (RAG) to provide relevant context and\nMonte Carlo Tree Search (MCTS) to refine reasoning paths. MCTS-RAG dynamically\nintegrates retrieval and reasoning through an iterative decision-making\nprocess. Unlike standard RAG methods, which typically retrieve information\nindependently from reasoning and thus integrate knowledge suboptimally, or\nconventional MCTS reasoning, which depends solely on internal model knowledge\nwithout external facts, MCTS-RAG combines structured reasoning with adaptive\nretrieval. 
This integrated approach enhances decision-making, reduces\nhallucinations, and ensures improved factual accuracy and response consistency.\nThe experimental results on multiple reasoning and knowledge-intensive datasets\ndatasets (i.e., ComplexWebQA, GPQA, and FoolMeTwice) show that our method\nenables small-scale LMs to achieve performance comparable to frontier LLMs like\nGPT-4o by effectively scaling inference-time compute, setting a new standard\nfor reasoning in small-scale models.","upvotes":11,"discussionId":"67e4c092d9b7021d4a60108b","githubRepo":"https://github.com/yale-nlp/mcts-rag","githubRepoAddedBy":"auto","ai_summary":"MCTS-RAG improves small language models' reasoning by integrating retrieval-augmented generation and Monte Carlo Tree Search, leading to performance comparable to large models on knowledge-intensive tasks.","ai_keywords":["retrieve-augmented generation","MCTS","Monte Carlo Tree Search","reasoning paths","decision-making process","ComplexWebQA","GPQA","FoolMeTwice","LMs","GPT-4o","scaling inference-time compute"],"githubStars":89},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"62f662bcc58915315c4eccea","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/62f662bcc58915315c4eccea/zOAQLONfMP88zr70sxHK-.jpeg","isPro":true,"fullname":"Yilun Zhao","user":"yilunzhao","type":"user"},{"_id":"620783f24e28382272337ba4","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/620783f24e28382272337ba4/zkUveQPNiDfYjgGhuFErj.jpeg","isPro":false,"fullname":"GuoLiangTang","user":"Tommy930","type":"user"},{"_id":"66ef0dc40c00b14702f90be8","avatarUrl":"/avatars/22bdb060a476ce53b9942cb6951d83e4.svg","isPro":false,"fullname":"Hu","user":"Alexhu1999","type":"user"},{"_id":"6270324ebecab9e2dcf245de","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6270324ebecab9e2dcf245de/cMbtWSasyNlYc9hvsEEzt.jpeg","isPro":false,"fullname":"Kye Gomez","user":"kye","type":"user"},{"_id":"67917b0f2da0d4ed3f9128f0","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/s9Yi2cT7zxPdGoqpsV0Sg.png","isPro":false,"fullname":"John Schaefer","user":"johnschaefer","type":"user"},{"_id":"679185119afe88fb031405e1","avatarUrl":"/avatars/aac8d1a818bfa9ee09cf982cf1d724b3.svg","isPro":false,"fullname":"Lily","user":"chenyingli","type":"user"},{"_id":"651c80a26ba9ab9b9582c273","avatarUrl":"/avatars/e963452eafd21f517d800f2e58e0f918.svg","isPro":false,"fullname":"siyeng feng","user":"siyengfeng","type":"user"},{"_id":"65decc75beffeb39ba679eba","avatarUrl":"/avatars/735b678bd5863a0c1b1bdd3bbf8858fa.svg","isPro":true,"fullname":"r","user":"oceansweep","type":"user"},{"_id":"665b133508d536a8ac804f7d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/Uwi0OnANdTbRbHHQvGqvR.png","isPro":false,"fullname":"Paulson","user":"Pnaomi","type":"user"},{"_id":"663ccbff3a74a20189d4aa2e","avatarUrl":"/avatars/83a54455e0157480f65c498cd9057cf2.svg","isPro":false,"fullname":"Nguyen Van Thanh","user":"NguyenVanThanhHust","type":"user"},{"_id":"64bbe9b236eb058cd9d6a5b9","avatarUrl":"/avatars/c7c01a3fa8809e73800392679abff6d5.svg","isPro":false,"fullname":"Kai Zuberbühler","user":"kaizuberbuehler","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0}">
MCTS-RAG: Enhancing Retrieval-Augmented Generation with Monte Carlo Tree Search
Yunhai Hu, Yilun Zhao, Chen Zhao, Arman Cohan
Published on Mar 26, 2025
Abstract
MCTS-RAG improves small language models' reasoning by integrating retrieval-augmented generation and Monte Carlo Tree Search, leading to performance comparable to large models on knowledge-intensive tasks.
We introduce MCTS-RAG, a novel approach that enhances the reasoning
capabilities of small language models on knowledge-intensive tasks by
leveraging retrieval-augmented generation (RAG) to provide relevant context and
Monte Carlo Tree Search (MCTS) to refine reasoning paths. MCTS-RAG dynamically
integrates retrieval and reasoning through an iterative decision-making
process. Unlike standard RAG methods, which typically retrieve information
independently of reasoning and thus integrate knowledge suboptimally, or
conventional MCTS reasoning, which depends solely on internal model knowledge
without external facts, MCTS-RAG combines structured reasoning with adaptive
retrieval. This integrated approach enhances decision-making, reduces
hallucinations, and improves factual accuracy and response consistency.
Experimental results on multiple reasoning and knowledge-intensive
datasets (i.e., ComplexWebQA, GPQA, and FoolMeTwice) show that our method
enables small-scale LMs to achieve performance comparable to frontier LLMs like
GPT-4o by effectively scaling inference-time compute, setting a new standard
for reasoning in small-scale models.
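
To make the interleaving of retrieval and reasoning concrete, below is a minimal, illustrative Python sketch of an MCTS loop whose action space mixes reasoning steps with retrieval steps. This is not the authors' implementation (see the linked repository for that): the function names generate_step, retrieve_passages, and score_answer are hypothetical placeholders, and the reward here is random simply so the sketch runs end to end, whereas the actual method scores candidate answers with the language model.

```python
import math
import random

# Hypothetical placeholders for the LM, retriever, and reward signal.
def generate_step(state, action):
    """Placeholder LM call that extends the reasoning trace by one step."""
    return state + [f"{action}: ..."]

def retrieve_passages(state):
    """Placeholder retriever returning passages relevant to the current trace."""
    return ["retrieved passage ..."]

def score_answer(state):
    """Placeholder reward; the real method uses answer accuracy/consistency signals."""
    return random.random()

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value = [], 0, 0.0

    def uct(self, c=1.4):
        # Unvisited nodes are explored first; otherwise use the UCT score.
        if self.visits == 0:
            return float("inf")
        return self.value / self.visits + c * math.sqrt(
            math.log(self.parent.visits) / self.visits
        )

ACTIONS = ["reason", "retrieve_then_reason", "answer"]

def expand(node):
    for action in ACTIONS:
        state = node.state
        if action.startswith("retrieve"):
            # Inject external facts into the trace before the next reasoning step.
            state = state + retrieve_passages(state)
        node.children.append(Node(generate_step(state, action), parent=node))

def mcts_rag(question, iterations=32):
    root = Node([question])
    for _ in range(iterations):
        # Selection: descend by UCT until reaching a leaf.
        node = root
        while node.children:
            node = max(node.children, key=Node.uct)
        # Expansion and a one-step rollout scored directly.
        expand(node)
        leaf = random.choice(node.children)
        reward = score_answer(leaf.state)
        # Backpropagation of the reward along the visited path.
        while leaf is not None:
            leaf.visits += 1
            leaf.value += reward
            leaf = leaf.parent
    # Return the most-visited first action's trace as the final reasoning path.
    return max(root.children, key=lambda n: n.visits).state

print(mcts_rag("Which enzyme is inhibited by allopurinol?"))
```

The key departure from plain MCTS reasoning in this sketch is the retrieve_then_reason action, which adds retrieved passages to the search state before generating the next step, so retrieval decisions are made inside the tree search rather than once up front as in standard RAG pipelines.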