arxiv:2504.08942

AgentRewardBench: Evaluating Automatic Evaluations of Web Agent Trajectories

Xing Han Lù, Amirhossein Kazemnejad, Nicholas Meade, Arkil Patel, Dongchan Shin, Alejandra Zambrano, Karolina Stańczak, Peter Shaw, Christopher J. Pal, Siva Reddy

Published on Apr 11, 2025 · Submitted by Xing Han Lù on Apr 15, 2025

Abstract

AgentRewardBench is a benchmark evaluating the effectiveness of LLMs in assessing web agent performance, showing that rule-based methods may underreport success.

AI-generated summary

Web agents enable users to perform tasks on web browsers through natural language interaction. Evaluating web agent trajectories is an important problem, since it helps us determine whether the agent successfully completed the tasks. Rule-based methods are widely used for this purpose, but they are challenging to extend to new tasks and may not always recognize successful trajectories. We may achieve higher accuracy through human evaluation, but the process would be substantially slower and more expensive. Automatic evaluations with LLMs may avoid the challenges of designing new rules and manually annotating trajectories, enabling faster and more cost-effective evaluation. However, it is unclear how effective they are at evaluating web agents. To this end, we propose AgentRewardBench, the first benchmark to assess the effectiveness of LLM judges for evaluating web agents. AgentRewardBench contains 1302 trajectories across 5 benchmarks and 4 LLMs. Each trajectory in AgentRewardBench is reviewed by an expert, who answers questions pertaining to the success, side effects, and repetitiveness of the agent. Using our benchmark, we evaluate 12 LLM judges and find that no single LLM excels across all benchmarks. We also find that the rule-based evaluation used by common benchmarks tends to underreport the success rate of web agents, highlighting a key weakness of rule-based evaluation and the need to develop more flexible automatic evaluations. We release the benchmark at: https://agent-reward-bench.github.io
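As an illustration of the judge setup the abstract describes, here is a minimal sketch of an LLM judge scoring a single trajectory. The prompt wording, the model choice, and the `judge_trajectory` helper are assumptions made for illustration, not the paper's actual implementation; see the released benchmark for that.

```python
# Minimal sketch of an LLM judge for a web agent trajectory.
# Prompt wording, model choice, and output parsing are illustrative
# assumptions, not the paper's actual implementation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def judge_trajectory(goal: str, steps: list[str]) -> bool:
    """Ask an LLM judge whether the agent completed the user's task."""
    transcript = "\n".join(f"Step {i + 1}: {s}" for i, s in enumerate(steps))
    prompt = (
        f"Task: {goal}\n\nAgent trajectory:\n{transcript}\n\n"
        "Did the agent successfully complete the task? Answer YES or NO."
    )
    resp = client.chat.completions.create(
        model="gpt-4o",  # any candidate judge; the paper compares 12 of them
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip().upper().startswith("YES")
```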

Community

Paper author · Paper submitter

AgentRewardBench

AgentRewardBench is a benchmark for assessing the effectiveness of automatic evaluation methods (such as LLM judges) for web agent trajectories. By comparing against human annotations across 5 web benchmarks, AgentRewardBench can be used to determine which LLM is the most capable at evaluating web agents.
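To make "comparing against human annotations" concrete, a scoring loop for one judge might look like the sketch below. The function name, field names, and choice of metrics are assumptions for illustration, not the dataset's actual schema.

```python
# Illustrative sketch: compare a judge's success verdicts against
# expert annotations, trajectory by trajectory. Names and metric
# choices are assumptions, not the dataset's actual schema.
from sklearn.metrics import precision_score, recall_score

def score_judge(expert_labels: list[bool], judge_verdicts: list[bool]) -> dict:
    """Precision/recall of a judge's 'success' calls vs. human annotation."""
    return {
        "precision": precision_score(expert_labels, judge_verdicts),
        "recall": recall_score(expert_labels, judge_verdicts),
    }

# Toy example: the judge misses one trajectory the expert marked successful.
print(score_judge(
    expert_labels=[True, True, False, True],
    judge_verdicts=[True, True, False, False],
))
```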

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

- [À la recherche du sens perdu: your favourite LLM might have more to say than you can understand](https://huggingface.co/papers/2503.00224) (2025)
- [LLM-Enhanced Dialogue Management for Full-Duplex Spoken Dialogue Systems](https://huggingface.co/papers/2502.14145) (2025)
- [Inner Thinking Transformer: Leveraging Dynamic Depth Scaling to Foster Adaptive Internal Thinking](https://huggingface.co/papers/2502.13842) (2025)
- [Evaluating Language Models on Grooming Risk Estimation Using Fuzzy Theory](https://huggingface.co/papers/2502.12563) (2025)
- ["Nuclear Deployed!": Analyzing Catastrophic Risks in Decision-making of Autonomous LLM Agents](https://huggingface.co/papers/2502.11355) (2025)
- [Jointly Assigning Processes to Machines and Generating Plans for Autonomous Mobile Robots in a Smart Factory](https://huggingface.co/papers/2502.21101) (2025)
- [Strength Estimation and Human-Like Strength Adjustment in Games](https://huggingface.co/papers/2502.17109) (2025)

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out [this Space](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers).

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper (0)

No model links this paper yet. Cite arxiv.org/abs/2504.08942 in a model README.md to link it from this page.

Datasets citing this paper (1)

Spaces citing this paper (2)

Collections including this paper (9)