Web-Shepherd: Advancing PRMs for Reinforcing Web Agents
\n","updatedAt":"2025-05-23T03:40:15.311Z","author":{"_id":"6813ee19c9b224a738fea856","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/g1uPHIKEgWe1ftHGHbo_U.png","fullname":"YJ","name":"yjh415","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"isUserFollowing":false}},"numEdits":1,"identifiedLanguage":{"language":"ru","probability":0.1375044584274292},"editors":["yjh415"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/g1uPHIKEgWe1ftHGHbo_U.png"],"reactions":[],"isReport":false}},{"id":"682fd107bf762029ddc80016","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false},"createdAt":"2025-05-23T01:36:07.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [GenPRM: Scaling Test-Time Compute of Process Reward Models via Generative Reasoning](https://huggingface.co/papers/2504.00891) (2025)\n* [ViLBench: A Suite for Vision-Language Process Reward Modeling](https://huggingface.co/papers/2503.20271) (2025)\n* [R-PRM: Reasoning-Driven Process Reward Modeling](https://huggingface.co/papers/2503.21295) (2025)\n* [General-Reasoner: Advancing LLM Reasoning Across All Domains](https://huggingface.co/papers/2505.14652) (2025)\n* [AgentRewardBench: Evaluating Automatic Evaluations of Web Agent Trajectories](https://huggingface.co/papers/2504.08942) (2025)\n* [Efficient Process Reward Model Training via Active Learning](https://huggingface.co/papers/2504.10559) (2025)\n* [Reward Reasoning Model](https://huggingface.co/papers/2505.14674) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`","html":"
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
\n
The following papers were recommended by the Semantic Scholar API
Please give a thumbs up to this comment if you found it helpful!
\n
If you want recommendations for any Paper on Hugging Face checkout this Space
\n
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: \n\n@librarian-bot\n\t recommend
\n","updatedAt":"2025-05-23T01:36:07.712Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.6849531531333923},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2505.15277","authors":[{"_id":"682e854551706f69070aca6b","user":{"_id":"64c8f4cec547ed5243ebd0a8","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/64c8f4cec547ed5243ebd0a8/MiOH5YbMg8Gh9KYlQsLmX.jpeg","isPro":false,"fullname":"Hyungjoo Chae","user":"hyungjoochae","type":"user"},"name":"Hyungjoo Chae","status":"claimed_verified","statusLastChangedAt":"2025-05-22T07:16:37.301Z","hidden":false},{"_id":"682e854551706f69070aca6c","user":{"_id":"646a0897c37ca1e12308b026","avatarUrl":"/avatars/6d720a9e366db9bec15c8c10878c0c75.svg","isPro":false,"fullname":"Sunghwan Kim","user":"KimSHine","type":"user"},"name":"Sunghwan Kim","status":"claimed_verified","statusLastChangedAt":"2025-05-22T07:16:32.322Z","hidden":false},{"_id":"682e854551706f69070aca6d","name":"Junhee Cho","hidden":false},{"_id":"682e854551706f69070aca6e","user":{"_id":"6469949654873f0043b09c22","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6469949654873f0043b09c22/Lk7IJAR16Wa_sGJ2g81AQ.jpeg","isPro":true,"fullname":"Seungone Kim","user":"seungone","type":"user"},"name":"Seungone Kim","status":"claimed_verified","statusLastChangedAt":"2025-05-27T07:55:27.225Z","hidden":false},{"_id":"682e854551706f69070aca6f","name":"Seungjun Moon","hidden":false},{"_id":"682e854551706f69070aca70","name":"Gyeom Hwangbo","hidden":false},{"_id":"682e854551706f69070aca71","user":{"_id":"6683b8680b72be136701de35","avatarUrl":"/avatars/0c135e570b16b81ee2fb81ad65b01ba8.svg","isPro":false,"fullname":"Dongha Lim","user":"donghalim","type":"user"},"name":"Dongha Lim","status":"claimed_verified","statusLastChangedAt":"2025-05-22T07:16:34.741Z","hidden":false},{"_id":"682e854551706f69070aca72","name":"Minjin Kim","hidden":false},{"_id":"682e854551706f69070aca73","name":"Yeonjun Hwang","hidden":false},{"_id":"682e854551706f69070aca74","name":"Minju Gwak","hidden":false},{"_id":"682e854551706f69070aca75","user":{"_id":"654c263fbe11400417c93d9f","avatarUrl":"/avatars/eb5778e28091200efee2c6b68589a1a2.svg","isPro":false,"fullname":"choi dongwook","user":"dongwookchoi","type":"user"},"name":"Dongwook Choi","status":"claimed_verified","statusLastChangedAt":"2025-06-16T13:54:39.381Z","hidden":false},{"_id":"682e854551706f69070aca76","name":"Minseok Kang","hidden":false},{"_id":"682e854551706f69070aca77","name":"Gwanhoon Im","hidden":false},{"_id":"682e854551706f69070aca78","name":"ByeongUng Cho","hidden":false},{"_id":"682e854551706f69070aca79","name":"Hyojun Kim","hidden":false},{"_id":"682e854551706f69070aca7a","name":"Jun Hee Han","hidden":false},{"_id":"682e854551706f69070aca7b","user":{"_id":"636b529ef796304dd67d139c","avatarUrl":"/avatars/7a64d5095fcb1da558b52ad48177ad76.svg","isPro":false,"fullname":"Taeyoon Kwon","user":"Connoriginal","type":"user"},"name":"Taeyoon 
Kwon","status":"claimed_verified","statusLastChangedAt":"2025-05-27T07:55:29.445Z","hidden":false},{"_id":"682e854551706f69070aca7c","name":"Minju Kim","hidden":false},{"_id":"682e854551706f69070aca7d","name":"Beong-woo Kwak","hidden":false},{"_id":"682e854551706f69070aca7e","name":"Dongjin Kang","hidden":false},{"_id":"682e854551706f69070aca7f","name":"Jinyoung Yeo","hidden":false}],"mediaUrls":["https://cdn-uploads.huggingface.co/production/uploads/64c8f4cec547ed5243ebd0a8/hXqaaoJTvW35xMW1lPVv0.png"],"publishedAt":"2025-05-21T08:56:55.000Z","submittedOnDailyAt":"2025-05-22T00:31:53.858Z","title":"Web-Shepherd: Advancing PRMs for Reinforcing Web Agents","submittedOnDailyBy":{"_id":"64c8f4cec547ed5243ebd0a8","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/64c8f4cec547ed5243ebd0a8/MiOH5YbMg8Gh9KYlQsLmX.jpeg","isPro":false,"fullname":"Hyungjoo Chae","user":"hyungjoochae","type":"user"},"summary":"Web navigation is a unique domain that can automate many repetitive real-life\ntasks and is challenging as it requires long-horizon sequential decision making\nbeyond typical multimodal large language model (MLLM) tasks. Yet, specialized\nreward models for web navigation that can be utilized during both training and\ntest-time have been absent until now. Despite the importance of speed and\ncost-effectiveness, prior works have utilized MLLMs as reward models, which\nposes significant constraints for real-world deployment. To address this, in\nthis work, we propose the first process reward model (PRM) called Web-Shepherd\nwhich could assess web navigation trajectories in a step-level. To achieve\nthis, we first construct the WebPRM Collection, a large-scale dataset with 40K\nstep-level preference pairs and annotated checklists spanning diverse domains\nand difficulty levels. Next, we also introduce the WebRewardBench, the first\nmeta-evaluation benchmark for evaluating PRMs. In our experiments, we observe\nthat our Web-Shepherd achieves about 30 points better accuracy compared to\nusing GPT-4o on WebRewardBench. Furthermore, when testing on WebArena-lite by\nusing GPT-4o-mini as the policy and Web-Shepherd as the verifier, we achieve\n10.9 points better performance, in 10 less cost compared to using GPT-4o-mini\nas the verifier. 
Our model, dataset, and code are publicly available at LINK.","upvotes":104,"discussionId":"682e854951706f69070acbf0","githubRepo":"https://github.com/kyle8581/Web-Shepherd","githubRepoAddedBy":"user","ai_summary":"The paper introduces Web-Shepherd, a process reward model for web navigation, which improves accuracy and cost-effectiveness in step-level trajectory assessment compared to existing multimodal large language models.","ai_keywords":["multimodal large language model","process reward model","web navigation","webPRM collection","webrewardbench","long-horizon sequential decision making","preference pairs","annotated checklists","step-level assessment","webarena-lite","policy","verifier"],"githubStars":53},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"64c8f4cec547ed5243ebd0a8","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/64c8f4cec547ed5243ebd0a8/MiOH5YbMg8Gh9KYlQsLmX.jpeg","isPro":false,"fullname":"Hyungjoo Chae","user":"hyungjoochae","type":"user"},{"_id":"651e51f5560445b25b78c160","avatarUrl":"/avatars/f210a43baf0d50246dc14ec89738896d.svg","isPro":false,"fullname":"dongje yoo","user":"foryui","type":"user"},{"_id":"642d4f092320338df27ceb15","avatarUrl":"/avatars/b522613d2b3ddfcc2fa3a5a1a2500c86.svg","isPro":false,"fullname":"Wing.D","user":"Wingu","type":"user"},{"_id":"64184d05db24526c7c9cbef5","avatarUrl":"/avatars/b71e28a09290ae0929888187485b296a.svg","isPro":false,"fullname":"vive kang","user":"Vive-kang","type":"user"},{"_id":"6819b8eff0afe451dd8b714b","avatarUrl":"/avatars/f05a6b8dc680544c4545fe9cded2954d.svg","isPro":false,"fullname":"SoohyunOh","user":"oceann010315","type":"user"},{"_id":"6479896eed10250626fb92b4","avatarUrl":"/avatars/298274bcb7fcace462124f3dc93aff9e.svg","isPro":false,"fullname":"kim","user":"yujin731","type":"user"},{"_id":"64a5275af96e2100458d8d35","avatarUrl":"/avatars/e7e6426289d67c3ad3b50fe4710aa1b8.svg","isPro":false,"fullname":"Haeju Park","user":"haejuu","type":"user"},{"_id":"6549662a899653d1d1263321","avatarUrl":"/avatars/829eaf35fde853acd98a7a0203d9a5ec.svg","isPro":false,"fullname":"hyunjinCho","user":"Merenova","type":"user"},{"_id":"660371123de17851b8d04608","avatarUrl":"/avatars/03daa07bed18859061406278ce6eafa0.svg","isPro":false,"fullname":"Web-Shepherd","user":"Coffee-Gym","type":"user"},{"_id":"622316ac2d6c7fd64778b796","avatarUrl":"/avatars/cee583cf95dfef4da3142d4bd2b0eba1.svg","isPro":false,"fullname":"Hyungjoo Chae","user":"kyle8581","type":"user"},{"_id":"654c263fbe11400417c93d9f","avatarUrl":"/avatars/eb5778e28091200efee2c6b68589a1a2.svg","isPro":false,"fullname":"choi dongwook","user":"dongwookchoi","type":"user"},{"_id":"6635a672b0a5f86a2aeacd59","avatarUrl":"/avatars/371529d2d5a858d1c26858494ca9722e.svg","isPro":false,"fullname":"Minju Gwak","user":"talzoomanzoo","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0}">
AI-generated summary

The paper introduces Web-Shepherd, a process reward model for web navigation, which improves accuracy and cost-effectiveness in step-level trajectory assessment compared to existing multimodal large language models.

Abstract
Web navigation is a unique domain that can automate many repetitive real-life
tasks and is challenging as it requires long-horizon sequential decision making
beyond typical multimodal large language model (MLLM) tasks. Yet, specialized
reward models for web navigation that can be utilized during both training and
test time have been absent until now. Despite the importance of speed and
cost-effectiveness, prior works have utilized MLLMs as reward models, which
poses significant constraints for real-world deployment. To address this, in
this work, we propose Web-Shepherd, the first process reward model (PRM) that
can assess web navigation trajectories at the step level. To achieve
this, we first construct the WebPRM Collection, a large-scale dataset with 40K
step-level preference pairs and annotated checklists spanning diverse domains
and difficulty levels. Next, we also introduce the WebRewardBench, the first
meta-evaluation benchmark for evaluating PRMs. In our experiments, we observe
that our Web-Shepherd achieves about 30 points better accuracy compared to
using GPT-4o on WebRewardBench. Furthermore, when testing on WebArena-lite by
using GPT-4o-mini as the policy and Web-Shepherd as the verifier, we achieve
10.9 points better performance at 10 times lower cost compared to using GPT-4o-mini
as the verifier. Our model, dataset, and code are publicly available at LINK.
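
The abstract describes the WebPRM Collection as 40K step-level preference pairs with annotated checklists. Below is a minimal sketch of what one such record could look like; the class and field names (`StepPreferencePair`, `ChecklistItem`, etc.) are illustrative assumptions, not the released dataset schema.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical schema for one step-level preference pair, loosely following the
# abstract's description (step-level chosen/rejected actions plus a checklist).
# Field names are assumptions for illustration only.
@dataclass
class ChecklistItem:
    description: str          # sub-goal the agent should satisfy at this point
    satisfied: bool = False   # whether the current step satisfies it

@dataclass
class StepPreferencePair:
    instruction: str                      # the user's web task
    observation: str                      # current page state (e.g., accessibility tree snippet)
    chosen_action: str                    # action preferred by annotators at this step
    rejected_action: str                  # dispreferred alternative at the same step
    checklist: List[ChecklistItem] = field(default_factory=list)

# Example record (contents invented for illustration).
pair = StepPreferencePair(
    instruction="Find the cheapest flight to Tokyo",
    observation="<flight search form with empty destination field>",
    chosen_action="type('destination', 'Tokyo')",
    rejected_action="click('Book now')",
    checklist=[ChecklistItem("Destination field is filled with 'Tokyo'")],
)
print(pair.chosen_action)
```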
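
The WebArena-lite result uses GPT-4o-mini as the policy and Web-Shepherd as the verifier. A minimal sketch of that test-time setup is shown below as best-of-n reranking: the policy proposes several candidate actions and the PRM scores each one, with the highest-scoring action executed. `propose_actions` and `prm_score` are placeholders standing in for the actual model calls, which the abstract does not specify.

```python
import random
from typing import List

def propose_actions(instruction: str, observation: str, n: int = 4) -> List[str]:
    # Placeholder policy: in practice, sample n candidate actions from the agent model.
    return [f"candidate_action_{i}" for i in range(n)]

def prm_score(instruction: str, observation: str, action: str) -> float:
    # Placeholder reward: in practice, the PRM returns a step-level score for the
    # candidate action, e.g. based on how well it advances the annotated checklist.
    return random.random()

def select_action(instruction: str, observation: str, n: int = 4) -> str:
    # Best-of-n reranking: keep the candidate with the highest PRM score.
    candidates = propose_actions(instruction, observation, n)
    return max(candidates, key=lambda a: prm_score(instruction, observation, a))

print(select_action("Find the cheapest flight to Tokyo", "<search form>"))
```

Because the verifier only scores a handful of short candidate actions per step, a small specialized PRM can keep the per-step overhead low, which is consistent with the cost advantage the abstract reports over using a general-purpose MLLM as the verifier.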