Papers
arxiv:2503.19470

ReSearch: Learning to Reason with Search for LLMs via Reinforcement Learning

Published on Mar 25, 2025
· Submitted by AK on Mar 26, 2025
Authors: Mingyang Chen, Tianpeng Li, Haoze Sun, Yijie Zhou, Chenzheng Zhu, Fan Yang, Zenan Zhou, Weipeng Chen, Haofen Wang, Jeff Z. Pan, Wen Zhang, Huajun Chen

Abstract

ReSearch is a framework that trains LLMs to integrate reasoning with search using reinforcement learning, enhancing capabilities for complex multi-hop questions.

AI-generated summary

Large Language Models (LLMs) have shown remarkable capabilities in reasoning, exemplified by the success of OpenAI-o1 and DeepSeek-R1. However, integrating reasoning with external search processes remains challenging, especially for complex multi-hop questions requiring multiple retrieval steps. We propose ReSearch, a novel framework that trains LLMs to Reason with Search via reinforcement learning without using any supervised data on reasoning steps. Our approach treats search operations as integral components of the reasoning chain, where when and how to perform searches is guided by text-based thinking, and search results subsequently influence further reasoning. We train ReSearch on Qwen2.5-7B(-Instruct) and Qwen2.5-32B(-Instruct) models and conduct extensive experiments. Despite being trained on only one dataset, our models demonstrate strong generalizability across various benchmarks. Analysis reveals that ReSearch naturally elicits advanced reasoning capabilities such as reflection and self-correction during the reinforcement learning process.
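The abstract describes an interleaved loop in which text-based thinking decides when and how to search, and retrieved results feed back into further reasoning. A minimal sketch of such a rollout is given below; the tag names (`<think>`, `<search>`, `<result>`, `<answer>`) and the stub model and retriever are illustrative assumptions for exposition, not the paper's actual interface.

```python
import re

# Matches a search query the model emits inside its reasoning chain.
SEARCH_TAG = re.compile(r"<search>(.*?)</search>", re.DOTALL)

def stub_llm(prompt: str) -> str:
    """Stand-in for the policy LLM: issues a search on the first call,
    then produces a final answer once a <result> block is present."""
    if "<result>" not in prompt:
        return ("<think>I need to look this up.</think>"
                "<search>capital of France</search>")
    return "<think>The retrieved text says Paris.</think><answer>Paris</answer>"

def stub_retriever(query: str) -> str:
    """Stand-in for the external search engine."""
    corpus = {"capital of France": "Paris is the capital of France."}
    return corpus.get(query, "No documents found.")

def rollout(question: str, max_turns: int = 4) -> str:
    """Generate until an <answer> appears, executing any <search> the model
    emits and appending the retrieved text as a <result> block."""
    trajectory = question
    for _ in range(max_turns):
        completion = stub_llm(trajectory)
        trajectory += completion
        if "<answer>" in completion:
            break
        match = SEARCH_TAG.search(completion)
        if match:
            docs = stub_retriever(match.group(1).strip())
            trajectory += f"<result>{docs}</result>"
    return trajectory
```

In the reinforcement-learning setting the paper describes, trajectories like this would be scored by an outcome reward (no supervision on the intermediate reasoning steps), so the policy learns when searching helps rather than imitating annotated search traces.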

Community


This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

- R1-Searcher: Incentivizing the Search Capability in LLMs via Reinforcement Learning (2025)
- Search-R1: Training LLMs to Reason and Leverage Search Engines with Reinforcement Learning (2025)
- Vision-R1: Incentivizing Reasoning Capability in Multimodal Large Language Models (2025)
- Reinforcement Learning Outperforms Supervised Fine-Tuning: A Case Study on Audio Question Answering (2025)
- Learning from Failures in Multi-Attempt Reinforcement Learning (2025)
- RAG-Gym: Optimizing Reasoning and Search Agents with Process Supervision (2025)
- Reinforcement Learning is all You Need (2025)

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space.

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper 4

Datasets citing this paper 0


Spaces citing this paper 0


Collections including this paper 9