BrowseComp-V^3: A Visual, Vertical, and Verifiable Benchmark for Multimodal Browsing Agents
\n","updatedAt":"2026-02-18T01:40:36.335Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.7085984349250793},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2602.12876","authors":[{"_id":"6993e3db50fb2c0be4783d2d","name":"Huanyao Zhang","hidden":false},{"_id":"6993e3db50fb2c0be4783d2e","name":"Jiepeng Zhou","hidden":false},{"_id":"6993e3db50fb2c0be4783d2f","name":"Bo Li","hidden":false},{"_id":"6993e3db50fb2c0be4783d30","name":"Bowen Zhou","hidden":false},{"_id":"6993e3db50fb2c0be4783d31","name":"Yanzhe Dan","hidden":false},{"_id":"6993e3db50fb2c0be4783d32","name":"Haishan Lu","hidden":false},{"_id":"6993e3db50fb2c0be4783d33","name":"Zhiyong Cao","hidden":false},{"_id":"6993e3db50fb2c0be4783d34","name":"Jiaoyang Chen","hidden":false},{"_id":"6993e3db50fb2c0be4783d35","name":"Yuqian Han","hidden":false},{"_id":"6993e3db50fb2c0be4783d36","name":"Zinan Sheng","hidden":false},{"_id":"6993e3db50fb2c0be4783d37","name":"Zhengwei Tao","hidden":false},{"_id":"6993e3db50fb2c0be4783d38","name":"Hao Liang","hidden":false},{"_id":"6993e3db50fb2c0be4783d39","user":{"_id":"644a4fbc2166258fccc664bc","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/8k3b44MbhQiWuo6i8BnYl.jpeg","isPro":false,"fullname":"Jialong Wu","user":"callanwu","type":"user"},"name":"Jialong Wu","status":"claimed_verified","statusLastChangedAt":"2026-02-20T08:37:43.360Z","hidden":false},{"_id":"6993e3db50fb2c0be4783d3a","user":{"_id":"673c7319d11b1c2e246ead9c","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/673c7319d11b1c2e246ead9c/IjFIO--N7Hm_BOEafhEQv.jpeg","isPro":false,"fullname":"Yang Shi","user":"DogNeverSleep","type":"user"},"name":"Yang Shi","status":"claimed_verified","statusLastChangedAt":"2026-02-18T09:07:01.848Z","hidden":false},{"_id":"6993e3db50fb2c0be4783d3b","name":"Yuanpeng He","hidden":false},{"_id":"6993e3db50fb2c0be4783d3c","name":"Jiaye Lin","hidden":false},{"_id":"6993e3db50fb2c0be4783d3d","name":"Qintong Zhang","hidden":false},{"_id":"6993e3db50fb2c0be4783d3e","name":"Guochen Yan","hidden":false},{"_id":"6993e3db50fb2c0be4783d3f","name":"Runhao Zhao","hidden":false},{"_id":"6993e3db50fb2c0be4783d40","name":"Zhengpin Li","hidden":false},{"_id":"6993e3db50fb2c0be4783d41","name":"Xiaohan Yu","hidden":false},{"_id":"6993e3db50fb2c0be4783d42","name":"Lang Mei","hidden":false},{"_id":"6993e3db50fb2c0be4783d43","name":"Chong Chen","hidden":false},{"_id":"6993e3db50fb2c0be4783d44","name":"Wentao Zhang","hidden":false},{"_id":"6993e3db50fb2c0be4783d45","name":"Bin Cui","hidden":false}],"publishedAt":"2026-02-13T12:25:13.000Z","submittedOnDailyAt":"2026-02-17T01:13:32.734Z","title":"BrowseComp-V^3: A Visual, Vertical, and Verifiable Benchmark for Multimodal Browsing Agents","submittedOnDailyBy":{"_id":"6039478ab3ecf716b1a5fd4d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6039478ab3ecf716b1a5fd4d/_Thy4E7taiSYBLKxEKJbT.jpeg","isPro":true,"fullname":"taesiri","user":"taesiri","type":"user"},"summary":"Multimodal large language 
models (MLLMs), equipped with increasingly advanced planning and tool-use capabilities, are evolving into autonomous agents capable of performing multimodal web browsing and deep search in open-world environments. However, existing benchmarks for multimodal browsing remain limited in task complexity, evidence accessibility, and evaluation granularity, hindering comprehensive and reproducible assessments of deep search capabilities. To address these limitations, we introduce BrowseComp-V^3, a novel benchmark consisting of 300 carefully curated and challenging questions spanning diverse domains. The benchmark emphasizes deep, multi-level, and cross-modal multi-hop reasoning, where critical evidence is interleaved across textual and visual modalities within and across web pages. All supporting evidence is strictly required to be publicly searchable, ensuring fairness and reproducibility. Beyond final-answer accuracy, we incorporate an expert-validated, subgoal-driven process evaluation mechanism that enables fine-grained analysis of intermediate reasoning behaviors and systematic characterization of capability boundaries. In addition, we propose OmniSeeker, a unified multimodal browsing agent framework integrating diverse web search and visual perception tools. Comprehensive experiments demonstrate that even state-of-the-art models achieve only 36% accuracy on our benchmark, revealing critical bottlenecks in multimodal information integration and fine-grained perception. Our results highlight a fundamental gap between current model capabilities and robust multimodal deep search in real-world settings.","upvotes":6,"discussionId":"6993e3db50fb2c0be4783d46","ai_summary":"A new benchmark called BrowseComp-V3 challenges multimodal large language models with complex, multi-hop reasoning tasks requiring deep search across text and visual modalities, revealing significant gaps in current capabilities.","ai_keywords":["multimodal large language models","multimodal browsing","deep search","web browsing","multimodal information integration","fine-grained perception","multimodal browsing agent framework","web search","visual perception"]},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"620783f24e28382272337ba4","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/620783f24e28382272337ba4/zkUveQPNiDfYjgGhuFErj.jpeg","isPro":false,"fullname":"GuoLiangTang","user":"Tommy930","type":"user"},{"_id":"6434b6619bd5a84b5dcfa4de","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6434b6619bd5a84b5dcfa4de/h8Q6kPNjFNc03wmdboHzq.jpeg","isPro":true,"fullname":"Young-Jun Lee","user":"passing2961","type":"user"},{"_id":"684d57f26e04c265777ead3f","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/cuOj-bQqukSZreXgUJlfm.png","isPro":false,"fullname":"Joakim Lee","user":"Reinforcement4All","type":"user"},{"_id":"66935bdc5489e4f73c76bc7b","avatarUrl":"/avatars/129d1e86bbaf764b507501f4feb177db.svg","isPro":false,"fullname":"Abidoye Aanuoluwapo","user":"Aanuoluwapo65","type":"user"},{"_id":"662f733dc3a82e9f11192c4f","avatarUrl":"/avatars/29729889de22e437760c4814eee781f5.svg","isPro":false,"fullname":"Zhensong Zhang","user":"JasonCU","type":"user"},{"_id":"673c7319d11b1c2e246ead9c","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/673c7319d11b1c2e246ead9c/IjFIO--N7Hm_BOEafhEQv.jpeg","isPro":false,"fullname":"Yang 
Shi","user":"DogNeverSleep","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0}">
AI-generated summary
A new benchmark called BrowseComp-V^3 challenges multimodal large language models with complex, multi-hop reasoning tasks requiring deep search across text and visual modalities, revealing significant gaps in current capabilities.
Abstract
Multimodal large language models (MLLMs), equipped with increasingly advanced planning and tool-use capabilities, are evolving into autonomous agents capable of performing multimodal web browsing and deep search in open-world environments. However, existing benchmarks for multimodal browsing remain limited in task complexity, evidence accessibility, and evaluation granularity, hindering comprehensive and reproducible assessments of deep search capabilities. To address these limitations, we introduce BrowseComp-V^3, a novel benchmark consisting of 300 carefully curated and challenging questions spanning diverse domains. The benchmark emphasizes deep, multi-level, and cross-modal multi-hop reasoning, where critical evidence is interleaved across textual and visual modalities within and across web pages. All supporting evidence is strictly required to be publicly searchable, ensuring fairness and reproducibility. Beyond final-answer accuracy, we incorporate an expert-validated, subgoal-driven process evaluation mechanism that enables fine-grained analysis of intermediate reasoning behaviors and systematic characterization of capability boundaries. In addition, we propose OmniSeeker, a unified multimodal browsing agent framework integrating diverse web search and visual perception tools. Comprehensive experiments demonstrate that even state-of-the-art models achieve only 36% accuracy on our benchmark, revealing critical bottlenecks in multimodal information integration and fine-grained perception. Our results highlight a fundamental gap between current model capabilities and robust multimodal deep search in real-world settings.
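To make the subgoal-driven process evaluation concrete, the sketch below scores an agent trajectory against expert-annotated subgoals in addition to final-answer accuracy. It is a minimal illustration only: the Subgoal structure, the keyword-matching heuristic, and the function names are assumptions for this sketch, not the paper's actual grading protocol, which in practice would rely on expert- or model-based judging rather than string matching.

    # Hypothetical sketch of subgoal-driven process evaluation. Each question is
    # assumed to ship with expert-annotated subgoals; the agent trajectory is a
    # list of free-text step records. All names here are illustrative.
    from dataclasses import dataclass

    @dataclass
    class Subgoal:
        description: str          # e.g. "identify the museum shown in the photograph"
        keywords: list[str]       # minimal evidence phrases a grader checks for

    def subgoal_hit(subgoal: Subgoal, trajectory: list[str]) -> bool:
        """A subgoal counts as satisfied if any step mentions all its key phrases."""
        return any(
            all(kw.lower() in step.lower() for kw in subgoal.keywords)
            for step in trajectory
        )

    def evaluate(trajectory: list[str], final_answer: str,
                 subgoals: list[Subgoal], gold_answer: str) -> dict:
        """Report outcome accuracy and process-level subgoal coverage side by side."""
        coverage = sum(subgoal_hit(sg, trajectory) for sg in subgoals) / len(subgoals)
        correct = final_answer.strip().lower() == gold_answer.strip().lower()
        return {"answer_correct": correct, "subgoal_coverage": coverage}

Reporting coverage separately from correctness is what allows the fine-grained analysis the abstract describes: partial progress on a failed question still surfaces in the results.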
Introduces BrowseComp-V^3, a 300-question multimodal web-browsing benchmark, together with OmniSeeker, a unified agent framework, to evaluate deep, cross-modal reasoning and evidence-driven search in LLM-based agents.
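For readers unfamiliar with tool-integrating browsing agents, the skeleton below shows one way such a loop can be wired: a planner repeatedly chooses among web-search and visual-perception tools until it commits to an answer. The planner interface, tool names, and stopping rule are assumptions made for this sketch and do not describe OmniSeeker's actual architecture or API.

    # Minimal plan-act-observe loop in the spirit of a unified multimodal
    # browsing agent. The planner and tools are passed in by the caller.
    from typing import Callable

    Tool = Callable[[str], str]  # each tool maps a text query to a text observation

    def run_agent(question: str,
                  planner: Callable[[str, list[str]], dict],
                  tools: dict[str, Tool],
                  max_steps: int = 20) -> str:
        """Iteratively plan, call a tool, and stop when the planner emits an answer."""
        history: list[str] = []
        for _ in range(max_steps):
            # The (hypothetical) planner returns either a tool call or a final answer,
            # e.g. {"tool": "image_search", "input": "red brick clock tower"}.
            action = planner(question, history)
            if "final_answer" in action:
                return action["final_answer"]
            observation = tools[action["tool"]](action["input"])
            history.append(f"{action['tool']}({action['input']}) -> {observation}")
        return "no answer within step budget"

In such a design, the tool dictionary could mix text search, page fetching, and image-analysis utilities, which is what lets a single loop interleave textual and visual evidence gathering across web pages.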