arxiv:2501.05366

Search-o1: Agentic Search-Enhanced Large Reasoning Models

Published on Jan 9, 2025 · Submitted by KABI on Jan 9, 2025
#2 Paper of the day

Abstract

Search-o1 enhances large reasoning models with an agentic retrieval-augmented generation mechanism and a Reason-in-Documents module to improve performance on complex reasoning tasks.

AI-generated summary

Large reasoning models (LRMs) like OpenAI-o1 have demonstrated impressive long stepwise reasoning capabilities through large-scale reinforcement learning. However, their extended reasoning processes often suffer from knowledge insufficiency, leading to frequent uncertainties and potential errors. To address this limitation, we introduce Search-o1, a framework that enhances LRMs with an agentic retrieval-augmented generation (RAG) mechanism and a Reason-in-Documents module for refining retrieved documents. Search-o1 integrates an agentic search workflow into the reasoning process, enabling dynamic retrieval of external knowledge when LRMs encounter uncertain knowledge points. Additionally, due to the verbose nature of retrieved documents, we design a separate Reason-in-Documents module to deeply analyze the retrieved information before injecting it into the reasoning chain, minimizing noise and preserving coherent reasoning flow. Extensive experiments on complex reasoning tasks in science, mathematics, and coding, as well as six open-domain QA benchmarks, demonstrate the strong performance of Search-o1. This approach enhances the trustworthiness and applicability of LRMs in complex reasoning tasks, paving the way for more reliable and versatile intelligent systems. The code is available at https://github.com/sunnynexus/Search-o1.

Community

Paper author Paper submitter
•
edited Jan 10, 2025

The contributions of Search-o1 are as follows:

  1. We propose Search-o1, the first framework that integrates an agentic search workflow into the
    o1-like reasoning process of LRMs, enabling autonomous knowledge supplementation.

  2. To effectively integrate external knowledge during reasoning, Search-o1 combines the reasoning
    process with an agentic RAG mechanism and a knowledge refinement module. This design enables
    the LRM to retrieve external knowledge on demand, seamlessly incorporating it into the reasoning
    chain while maintaining the original logical flow.

  3. Across five complex reasoning domains and six open-domain QA benchmarks, we demonstrate that
    Search-o1 achieves strong performance on reasoning tasks while maintaining substantial
    improvements in general knowledge. Further quantitative analysis confirms its efficiency and
    scalability, offering practical guidance for trustworthy reasoning in LRMs.
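The interaction between the reasoning chain, on-demand retrieval, and knowledge refinement described in contribution 2 can be sketched as a simple loop. This is a minimal illustration, not the paper's implementation: the `generate`, `search`, and `refine` callables and the `<search>`/`<result>` tags are all hypothetical stand-ins for the LRM, the retriever, and a Reason-in-Documents-style refinement step.

```python
import re

def agentic_reasoning(question, generate, search, refine, max_turns=5):
    """Sketch of an agentic search loop: the model reasons until it signals
    uncertainty with a <search>...</search> query, documents are retrieved
    and condensed by a refinement step, and the distilled knowledge is
    injected back into the chain before reasoning resumes."""
    context = question
    for _ in range(max_turns):
        step = generate(context)                 # LRM continues the chain
        match = re.search(r"<search>(.*?)</search>", step)
        if not match:                            # no uncertainty signalled:
            return context + step                # the reasoning is complete
        query = match.group(1)
        docs = search(query)                     # retrieve external knowledge
        knowledge = refine(query, docs)          # condense verbose documents
        context += step + f"\n<result>{knowledge}</result>\n"
    return context                               # turn budget exhausted
```

The key design point this sketch captures is that raw documents never enter the reasoning chain directly; only the refined summary does, which is what keeps the chain coherent and low-noise.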

Paper author Paper submitter

Our Search-o1 Framework:

[Figure: overview of the Search-o1 framework]

Our experimental results:

[Figure: main experimental results]

[Figure: additional experimental results]


Hi @dongguanting , it is interesting to see Search-o1 (Table 2) outperforming human experts in physics and biology!
I wonder how the human experts were recruited. Are they PhD students in each field?

Paper author Paper submitter

Hi @JihyukKim ! As mentioned in the caption of Table 2, the scores of human experts are derived from the GPQA paper.

Paper: GPQA: A Graduate-Level Google-Proof Q&A Benchmark

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

  * RAG-Star: Enhancing Deliberative Reasoning with Retrieval Augmented Verification and Refinement (2024)
  * TinyThinker: Distilling Reasoning through Coarse-to-Fine Knowledge Internalization with Self-Reflection (2024)
  * Scaling of Search and Learning: A Roadmap to Reproduce o1 from Reinforcement Learning Perspective (2024)
  * Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models (2024)
  * Progressive Multimodal Reasoning via Active Retrieval (2024)
  * Review-Then-Refine: A Dynamic Framework for Multi-Hop Question Answering with Temporal Adaptability (2024)
  * REL: Working out is all you need (2024)

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend
