
\n","updatedAt":"2026-01-13T11:28:58.792Z","author":{"_id":"6960eca92f7ad9b043b5cbe0","avatarUrl":"/avatars/e68dcc7fd04f143d849d40414866e633.svg","fullname":"Noah","name":"noahml","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.8073317408561707},"editors":["noahml"],"editorAvatarUrls":["/avatars/e68dcc7fd04f143d849d40414866e633.svg"],"reactions":[{"reaction":"๐Ÿ‘","users":["cristiano28","Yu2020","Huacan-Wang","POTATO66","potatoto888"],"count":5}],"isReport":false}},{"id":"6966f414851dd5274801f740","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false},"createdAt":"2026-01-14T01:40:36.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Video-BrowseComp: Benchmarking Agentic Video Research on Open Web](https://huggingface.co/papers/2512.23044) (2025)\n* [CrossVid: A Comprehensive Benchmark for Evaluating Cross-Video Reasoning in Multimodal Large Language Models](https://huggingface.co/papers/2511.12263) (2025)\n* [JointAVBench: A Benchmark for Joint Audio-Visual Reasoning Evaluation](https://huggingface.co/papers/2512.12772) (2025)\n* [LongVideoAgent: Multi-Agent Reasoning with Long Videos](https://huggingface.co/papers/2512.20618) (2025)\n* [A Benchmark and Agentic Framework for Omni-Modal Reasoning and Tool Use in Long Videos](https://huggingface.co/papers/2512.16978) (2025)\n* [Active Video Perception: Iterative Evidence Seeking for Agentic Long Video Understanding](https://huggingface.co/papers/2512.05774) (2025)\n* [Skywork-R1V4: Toward Agentic Multimodal Intelligence through Interleaved Thinking with Images and DeepResearch](https://huggingface.co/papers/2512.02395) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`","html":"

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

\n

The following papers were recommended by the Semantic Scholar API

\n\n

Please give a thumbs up to this comment if you found it helpful!

\n

If you want recommendations for any Paper on Hugging Face checkout this Space

\n

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: \n\n@librarian-bot\n\t recommend

\n","updatedAt":"2026-01-14T01:40:36.165Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.6941177248954773},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}},{"id":"6968ed2106c6e11b7684999d","author":{"_id":"6966415575a7cc5f08189a9f","avatarUrl":"/avatars/2847456d0cc4d97cf35580da24f6b8f2.svg","fullname":"zero","name":"potatoto888","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"isUserFollowing":false},"createdAt":"2026-01-15T13:35:29.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"This paper is both interesting and practically significant, and it can indeed promote further expansion in the field of video understanding. I look forward to your team's next work.","html":"

This paper is both interesting and practically significant, and it can indeed promote further expansion in the field of video understanding. I look forward to your team's next work.

\n","updatedAt":"2026-01-15T13:35:29.129Z","author":{"_id":"6966415575a7cc5f08189a9f","avatarUrl":"/avatars/2847456d0cc4d97cf35580da24f6b8f2.svg","fullname":"zero","name":"potatoto888","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.9682532548904419},"editors":["potatoto888"],"editorAvatarUrls":["/avatars/2847456d0cc4d97cf35580da24f6b8f2.svg"],"reactions":[],"isReport":false}},{"id":"696b8aa1e7a76925b936fa45","author":{"_id":"65243980050781c16f234f1f","avatarUrl":"/avatars/743a009681d5d554c27e04300db9f267.svg","fullname":"Avi","name":"avahal","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":3,"isUserFollowing":false},"createdAt":"2026-01-17T13:12:01.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"arXivlens breakdown of this paper ๐Ÿ‘‰ https://arxivlens.com/PaperView/Details/watching-reasoning-and-searching-a-video-deep-research-benchmark-on-open-web-for-agentic-video-reasoning-9505-80c82ba2\n\n- Executive Summary\n- Detailed Breakdown\n- Practical Applications","html":"

arXivlens breakdown of this paper ๐Ÿ‘‰ https://arxivlens.com/PaperView/Details/watching-reasoning-and-searching-a-video-deep-research-benchmark-on-open-web-for-agentic-video-reasoning-9505-80c82ba2

\n
    \n
  • Executive Summary
  • \n
  • Detailed Breakdown
  • \n
  • Practical Applications
  • \n
\n","updatedAt":"2026-01-17T13:12:01.790Z","author":{"_id":"65243980050781c16f234f1f","avatarUrl":"/avatars/743a009681d5d554c27e04300db9f267.svg","fullname":"Avi","name":"avahal","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":3,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.5872972011566162},"editors":["avahal"],"editorAvatarUrls":["/avatars/743a009681d5d554c27e04300db9f267.svg"],"reactions":[{"reaction":"๐Ÿ‘","users":["HJH2CMD","potatoto888"],"count":2}],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2601.06943","authors":[{"_id":"6965babdfc8c4ecc02c7f8f5","user":{"_id":"6965e8d162405ba787fc50b2","avatarUrl":"/avatars/52858daa454e710712c8a29307e0fe30.svg","isPro":false,"fullname":"Chengwen Liu","user":"POTATO66","type":"user"},"name":"Chengwen Liu","status":"admin_assigned","statusLastChangedAt":"2026-01-13T15:46:54.096Z","hidden":false},{"_id":"6965babdfc8c4ecc02c7f8f6","user":{"_id":"64084fa192033c150738e4f2","avatarUrl":"/avatars/dfff2216eb235c635e5abe6fda3084f0.svg","isPro":false,"fullname":"Yu_xm","user":"Yu2020","type":"user"},"name":"Xiaomin Yu","status":"admin_assigned","statusLastChangedAt":"2026-01-13T15:46:34.064Z","hidden":false},{"_id":"6965babdfc8c4ecc02c7f8f7","name":"Zhuoyue Chang","hidden":false},{"_id":"6965babdfc8c4ecc02c7f8f8","name":"Zhe Huang","hidden":false},{"_id":"6965babdfc8c4ecc02c7f8f9","user":{"_id":"65562edfb7bad186e877c724","avatarUrl":"/avatars/bb91f42b102e113208bbe3238916a015.svg","isPro":false,"fullname":"zhangshuo","user":"mcflurryshuoz","type":"user"},"name":"Shuo Zhang","status":"claimed_verified","statusLastChangedAt":"2026-01-15T15:06:11.587Z","hidden":false},{"_id":"6965babdfc8c4ecc02c7f8fa","name":"Heng Lian","hidden":false},{"_id":"6965babdfc8c4ecc02c7f8fb","name":"Kunyi Wang","hidden":false},{"_id":"6965babdfc8c4ecc02c7f8fc","name":"Rui Xu","hidden":false},{"_id":"6965babdfc8c4ecc02c7f8fd","name":"Sen Hu","hidden":false},{"_id":"6965babdfc8c4ecc02c7f8fe","user":{"_id":"65e459ef400c626ca0968db7","avatarUrl":"/avatars/23177b73ba6e4a9db1165d0b7036a4b7.svg","isPro":false,"fullname":"Jaden (Jianheng) Hou","user":"HJH2CMD","type":"user"},"name":"Jianheng Hou","status":"claimed_verified","statusLastChangedAt":"2026-01-13T15:45:36.919Z","hidden":false},{"_id":"6965babdfc8c4ecc02c7f8ff","name":"Hao Peng","hidden":false},{"_id":"6965babdfc8c4ecc02c7f900","name":"Chengwei Qin","hidden":false},{"_id":"6965babdfc8c4ecc02c7f901","name":"Xiaobin Hu","hidden":false},{"_id":"6965babdfc8c4ecc02c7f902","name":"Hong Peng","hidden":false},{"_id":"6965babdfc8c4ecc02c7f903","name":"Ronghao Chen","hidden":false},{"_id":"6965babdfc8c4ecc02c7f904","user":{"_id":"6603d56ab4344a2b07cd6d21","avatarUrl":"/avatars/1569bb60166532317c85e80da722ba1c.svg","isPro":false,"fullname":"Huacan Wang","user":"Huacan-Wang","type":"user"},"name":"Huacan Wang","status":"claimed_verified","statusLastChangedAt":"2026-01-15T15:06:15.770Z","hidden":false}],"publishedAt":"2026-01-11T15:07:37.000Z","submittedOnDailyAt":"2026-01-13T01:12:08.706Z","title":"Watching, Reasoning, and Searching: A Video Deep Research Benchmark on Open Web for Agentic Video Reasoning","submittedOnDailyBy":{"_id":"64084fa192033c150738e4f2","avatarUrl":"/avatars/dfff2216eb235c635e5abe6fda3084f0.svg","isPro":false,"fullname":"Yu_xm","user":"Yu2020","type":"user"},"summary":"In real-world video question answering scenarios, videos often provide only localized visual cues, while verifiable answers are distributed across 
the open web; models therefore need to jointly perform cross-frame clue extraction, iterative retrieval, and multi-hop reasoning-based verification. To bridge this gap, we construct the first video deep research benchmark, VideoDR. VideoDR centers on video-conditioned open-domain video question answering, requiring cross-frame visual anchor extraction, interactive web retrieval, and multi-hop reasoning over joint video-web evidence; through rigorous human annotation and quality control, we obtain high-quality video deep research samples spanning six semantic domains. We evaluate multiple closed-source and open-source multimodal large language models under both the Workflow and Agentic paradigms, and the results show that Agentic is not consistently superior to Workflow: its gains depend on a model's ability to maintain the initial video anchors over long retrieval chains. Further analysis indicates that goal drift and long-horizon consistency are the core bottlenecks. In sum, VideoDR provides a systematic benchmark for studying video agents in open-web settings and reveals the key challenges for next-generation video deep research agents.","upvotes":212,"discussionId":"6965babdfc8c4ecc02c7f905","projectPage":"https://videodr-benchmark.github.io/#/home","githubRepo":"https://github.com/QuantaAlpha/VideoDR-Benchmark","githubRepoAddedBy":"user","ai_summary":"VideoDR benchmark enables video question answering by combining cross-frame visual extraction, web retrieval, and multi-hop reasoning in open-domain settings.","ai_keywords":["video question answering","cross-frame visual anchor extraction","interactive web retrieval","multi-hop reasoning","multimodal large language models","Workflow paradigm","Agentic paradigm","goal drift","long-horizon consistency"],"githubStars":143,"organization":{"_id":"68b33ab6a9ed99140481cf44","name":"QuantaAlpha","fullname":"QuantaAlpha","avatar":"https://cdn-uploads.huggingface.co/production/uploads/63f7767fbd28622c9b9915e9/DRN8PvmnpKmn2MSLQ7qhF.jpeg"}},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"6603d56ab4344a2b07cd6d21","avatarUrl":"/avatars/1569bb60166532317c85e80da722ba1c.svg","isPro":false,"fullname":"Huacan Wang","user":"Huacan-Wang","type":"user"},{"_id":"68922133959d7fc7272ce5d3","avatarUrl":"/avatars/c325334c042c293a760ce4d1955e1224.svg","isPro":false,"fullname":"WeiQuan Huang","user":"Quansir","type":"user"},{"_id":"64084fa192033c150738e4f2","avatarUrl":"/avatars/dfff2216eb235c635e5abe6fda3084f0.svg","isPro":false,"fullname":"Yu_xm","user":"Yu2020","type":"user"},{"_id":"6039478ab3ecf716b1a5fd4d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6039478ab3ecf716b1a5fd4d/_Thy4E7taiSYBLKxEKJbT.jpeg","isPro":true,"fullname":"taesiri","user":"taesiri","type":"user"},{"_id":"65562edfb7bad186e877c724","avatarUrl":"/avatars/bb91f42b102e113208bbe3238916a015.svg","isPro":false,"fullname":"zhangshuo","user":"mcflurryshuoz","type":"user"},{"_id":"68e5cd2af7b5b87f951fdb13","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/Cuf7wio5ENpxWys6fNa3W.png","isPro":false,"fullname":"CHENG ZIMING","user":"HarrytheOrange2","type":"user"},{"_id":"66979b5426ca6beeee7b9ad3","avatarUrl":"/avatars/183ec6d196649b8d17fe2bd35dded8e5.svg","isPro":false,"fullname":"Chengxiang Huang","user":"Chengxiang1122","type":"user"},{"_id":"65f40e83653c231cbaf7defe","avatarUrl":"/avatars/afa5ce72324112739e539865c9aee26b.svg","isPro":false,"fullname":"Jiayi 
Zhang","user":"didiforhugface","type":"user"},{"_id":"64b77c0f4a9cafaab5a57954","avatarUrl":"/avatars/53dbb4d679e6bf62dee085496673bf32.svg","isPro":false,"fullname":"wei","user":"smallwei","type":"user"},{"_id":"67eced92cf3e57ee31806ea9","avatarUrl":"/avatars/88d9026b7505596477bbda5ee9fa0972.svg","isPro":false,"fullname":"Gong Chuanzheng","user":"linkkkkkk","type":"user"},{"_id":"62dbeaf3d36b2070f922747f","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1671954059773-62dbeaf3d36b2070f922747f.jpeg","isPro":false,"fullname":"Junyao Hu","user":"hujunyao","type":"user"},{"_id":"66015e8aa4d296af07de538e","avatarUrl":"/avatars/a1295c631cc2646282c545859975ce4c.svg","isPro":false,"fullname":"Owen","user":"Owen777","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":1,"organization":{"_id":"68b33ab6a9ed99140481cf44","name":"QuantaAlpha","fullname":"QuantaAlpha","avatar":"https://cdn-uploads.huggingface.co/production/uploads/63f7767fbd28622c9b9915e9/DRN8PvmnpKmn2MSLQ7qhF.jpeg"}}">
arxiv:2601.06943

Watching, Reasoning, and Searching: A Video Deep Research Benchmark on Open Web for Agentic Video Reasoning

Published on Jan 11, 2026 · Submitted by Yu_xm on Jan 13, 2026
#1 Paper of the day
Authors: Chengwen Liu, Xiaomin Yu, Zhuoyue Chang, Zhe Huang, Shuo Zhang, Heng Lian, Kunyi Wang, Rui Xu, Sen Hu, Jianheng Hou, Hao Peng, Chengwei Qin, Xiaobin Hu, Hong Peng, Ronghao Chen, Huacan Wang
Abstract

The VideoDR benchmark enables video question answering by combining cross-frame visual extraction, web retrieval, and multi-hop reasoning in open-domain settings.

AI-generated summary

In real-world video question answering scenarios, videos often provide only localized visual cues, while verifiable answers are distributed across the open web; models therefore need to jointly perform cross-frame clue extraction, iterative retrieval, and multi-hop reasoning-based verification. To bridge this gap, we construct the first video deep research benchmark, VideoDR. VideoDR centers on video-conditioned open-domain video question answering, requiring cross-frame visual anchor extraction, interactive web retrieval, and multi-hop reasoning over joint video-web evidence; through rigorous human annotation and quality control, we obtain high-quality video deep research samples spanning six semantic domains. We evaluate multiple closed-source and open-source multimodal large language models under both the Workflow and Agentic paradigms, and the results show that Agentic is not consistently superior to Workflow: its gains depend on a model's ability to maintain the initial video anchors over long retrieval chains. Further analysis indicates that goal drift and long-horizon consistency are the core bottlenecks. In sum, VideoDR provides a systematic benchmark for studying video agents in open-web settings and reveals the key challenges for next-generation video deep research agents.
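For readers who want a concrete picture of the Workflow and Agentic setups contrasted in the abstract, below is a minimal, hypothetical Python sketch of the two paradigms. It is not the authors' evaluation harness: every function (extract_anchors, web_search, answer) is an illustrative stub, and the agentic stopping rule is invented purely to make the contrast runnable.

# Hypothetical sketch of "Workflow" vs. "Agentic" video deep research.
# All components are illustrative stubs; VideoDR's actual harness may differ.

from dataclasses import dataclass, field

@dataclass
class Evidence:
    anchors: list = field(default_factory=list)    # cross-frame visual clues
    web_facts: list = field(default_factory=list)  # retrieved open-web snippets

def extract_anchors(video_frames):
    """Stub: pull visual anchors (entities, text, landmarks) from sampled frames."""
    return [f"anchor_from_{f}" for f in video_frames]

def web_search(query):
    """Stub: one round of open-web retrieval."""
    return [f"snippet for '{query}'"]

def answer(question, evidence):
    """Stub: final multi-hop reasoning over joint video-web evidence."""
    return f"answer({question!r}, {len(evidence.anchors)} anchors, {len(evidence.web_facts)} facts)"

def workflow_pipeline(video_frames, question):
    # Fixed order: watch once, search once per anchor, then answer.
    ev = Evidence(anchors=extract_anchors(video_frames))
    for a in ev.anchors:
        ev.web_facts += web_search(f"{question} {a}")
    return answer(question, ev)

def agentic_loop(video_frames, question, max_steps=4):
    # The agent iterates: each step issues a new query that must stay tied to
    # the initial video anchors; losing them over a long chain is the
    # "goal drift" failure mode the paper identifies.
    ev = Evidence(anchors=extract_anchors(video_frames))
    for step in range(max_steps):
        query = f"{question} given {ev.anchors[step % len(ev.anchors)]}"
        ev.web_facts += web_search(query)
        if len(ev.web_facts) >= 3:  # stub stopping criterion
            break
    return answer(question, ev)

if __name__ == "__main__":
    frames = ["frame_010", "frame_120", "frame_450"]
    q = "Which year was the landmark shown in the video completed?"
    print(workflow_pipeline(frames, q))
    print(agentic_loop(frames, q))

The point of the sketch is only the structural difference: the workflow runs a fixed watch-search-answer sequence, while the agentic loop re-queries around the original anchors at every step, which is where the paper reports goal drift and long-horizon consistency problems once retrieval chains get long.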

Community

Paper author · Paper submitter

First video deep research benchmark.

good paper!

No other paper is better than this one 👍

I've created a podcast that explains the key concepts:
https://researchpod-share.vercel.app/episode/def8ab9d-82b2-44a3-847d-77135741a278

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

  • Video-BrowseComp: Benchmarking Agentic Video Research on Open Web (2025, https://huggingface.co/papers/2512.23044)
  • CrossVid: A Comprehensive Benchmark for Evaluating Cross-Video Reasoning in Multimodal Large Language Models (2025, https://huggingface.co/papers/2511.12263)
  • JointAVBench: A Benchmark for Joint Audio-Visual Reasoning Evaluation (2025, https://huggingface.co/papers/2512.12772)
  • LongVideoAgent: Multi-Agent Reasoning with Long Videos (2025, https://huggingface.co/papers/2512.20618)
  • A Benchmark and Agentic Framework for Omni-Modal Reasoning and Tool Use in Long Videos (2025, https://huggingface.co/papers/2512.16978)
  • Active Video Perception: Iterative Evidence Seeking for Agentic Long Video Understanding (2025, https://huggingface.co/papers/2512.05774)
  • Skywork-R1V4: Toward Agentic Multimodal Intelligence through Interleaved Thinking with Images and DeepResearch (2025, https://huggingface.co/papers/2512.02395)

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend

This paper is both interesting and practically significant, and it can indeed promote further expansion in the field of video understanding. I look forward to your team's next work.

arXivlens breakdown of this paper ๐Ÿ‘‰ https://arxivlens.com/PaperView/Details/watching-reasoning-and-searching-a-video-deep-research-benchmark-on-open-web-for-agentic-video-reasoning-9505-80c82ba2

  • Executive Summary
  • Detailed Breakdown
  • Practical Applications


Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2601.06943 in a model README.md to link it from this page.

Datasets citing this paper 1

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2601.06943 in a Space README.md to link it from this page.

Collections including this paper 4