
\"스크린샷

\n","updatedAt":"2024-10-21T06:11:07.441Z","author":{"_id":"64c8f4cec547ed5243ebd0a8","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/64c8f4cec547ed5243ebd0a8/MiOH5YbMg8Gh9KYlQsLmX.jpeg","fullname":"Hyungjoo Chae","name":"hyungjoochae","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":15,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.3862727880477905},"editors":["hyungjoochae"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/64c8f4cec547ed5243ebd0a8/MiOH5YbMg8Gh9KYlQsLmX.jpeg"],"reactions":[{"reaction":"👍","users":["yolay","linsa11"],"count":2},{"reaction":"😎","users":["vikasrajpootkogo"],"count":1}],"isReport":false}},{"id":"67170112fd698e5b2a89fb7c","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false},"createdAt":"2024-10-22T01:34:10.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. 
\n\nThe following papers were recommended by the Semantic Scholar API \n\n* [ExACT: Teaching AI Agents to Explore with Reflective-MCTS and Exploratory Learning](https://huggingface.co/papers/2410.02052) (2024)\n* [AgentOccam: A Simple Yet Strong Baseline for LLM-Based Web Agents](https://huggingface.co/papers/2410.13825) (2024)\n* [Enhancing Decision-Making for LLM Agents via Step-Level Q-Value Models](https://huggingface.co/papers/2409.09345) (2024)\n* [NNetscape Navigator: Complex Demonstrations for Web Agents Without a Demonstrator](https://huggingface.co/papers/2410.02907) (2024)\n* [Agent Workflow Memory](https://huggingface.co/papers/2409.07429) (2024)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`","html":"

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

\n

The following papers were recommended by the Semantic Scholar API

\n\n

Please give a thumbs up to this comment if you found it helpful!

\n

If you want recommendations for any Paper on Hugging Face checkout this Space

\n

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: \n\n@librarian-bot\n\t recommend

\n","updatedAt":"2024-10-22T01:34:10.721Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.7178288102149963},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2410.13232","authors":[{"_id":"6715f0338cb36b73b1dbf48d","user":{"_id":"64c8f4cec547ed5243ebd0a8","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/64c8f4cec547ed5243ebd0a8/MiOH5YbMg8Gh9KYlQsLmX.jpeg","isPro":false,"fullname":"Hyungjoo Chae","user":"hyungjoochae","type":"user"},"name":"Hyungjoo Chae","status":"admin_assigned","statusLastChangedAt":"2024-10-21T10:26:53.304Z","hidden":false},{"_id":"6715f0338cb36b73b1dbf48e","user":{"_id":"6631e2b9fb55832ca41d78b1","avatarUrl":"/avatars/6115ac6dfb3cc6c8e6e94729f3d5a8c5.svg","isPro":false,"fullname":"Namyoung Kim","user":"kimnamssya","type":"user"},"name":"Namyoung Kim","status":"admin_assigned","statusLastChangedAt":"2024-10-21T11:45:10.010Z","hidden":false},{"_id":"6715f0338cb36b73b1dbf48f","user":{"_id":"640ec2fd2f9c7b364d182735","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/640ec2fd2f9c7b364d182735/G5WxQXYEOaVL39rAoOI3Y.jpeg","isPro":false,"fullname":"kai tzu-iunn ong","user":"ktio","type":"user"},"name":"Kai Tzu-iunn Ong","status":"admin_assigned","statusLastChangedAt":"2024-10-21T11:43:52.128Z","hidden":false},{"_id":"6715f0338cb36b73b1dbf490","user":{"_id":"6635a672b0a5f86a2aeacd59","avatarUrl":"/avatars/371529d2d5a858d1c26858494ca9722e.svg","isPro":false,"fullname":"Minju 
Gwak","user":"talzoomanzoo","type":"user"},"name":"Minju Gwak","status":"claimed_verified","statusLastChangedAt":"2024-10-23T07:35:16.372Z","hidden":false},{"_id":"6715f0338cb36b73b1dbf491","user":{"_id":"668d512d3d0bc7c4758a73cc","avatarUrl":"/avatars/054e4ed3c06f0464fb06b88fa7cec7f3.svg","isPro":false,"fullname":"Gwanwoo Song","user":"Gwanwoo","type":"user"},"name":"Gwanwoo Song","status":"admin_assigned","statusLastChangedAt":"2024-10-21T11:43:39.177Z","hidden":false},{"_id":"6715f0338cb36b73b1dbf492","user":{"_id":"66429ceee4a7476619a5ab64","avatarUrl":"/avatars/43cb4f087f4f9dc972196047fdb339f4.svg","isPro":false,"fullname":"Jihoon Kim","user":"jihoonkim25","type":"user"},"name":"Jihoon Kim","status":"admin_assigned","statusLastChangedAt":"2024-10-21T11:44:42.881Z","hidden":false},{"_id":"6715f0338cb36b73b1dbf493","user":{"_id":"646a0897c37ca1e12308b026","avatarUrl":"/avatars/6d720a9e366db9bec15c8c10878c0c75.svg","isPro":false,"fullname":"Sunghwan Kim","user":"KimSHine","type":"user"},"name":"Sunghwan Kim","status":"claimed_verified","statusLastChangedAt":"2024-10-23T15:06:54.954Z","hidden":false},{"_id":"6715f0338cb36b73b1dbf494","user":{"_id":"61d4bb8a34cff1c8e64b6a2a","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/61d4bb8a34cff1c8e64b6a2a/eEXKbsXB9VK1sR8QqNPi7.png","isPro":false,"fullname":"Dongha Lee","user":"donalee","type":"user"},"name":"Dongha Lee","status":"claimed_verified","statusLastChangedAt":"2024-10-21T12:14:10.468Z","hidden":false},{"_id":"6715f0338cb36b73b1dbf495","user":{"_id":"6419ad74060a651c415eec62","avatarUrl":"/avatars/e7b060d0a45e3af1c7439948dbd60e34.svg","isPro":false,"fullname":"Jinyoung Yeom","user":"geen02","type":"user"},"name":"Jinyoung 
Yeo","status":"admin_assigned","statusLastChangedAt":"2024-10-21T11:43:28.091Z","hidden":false}],"mediaUrls":["https://cdn-uploads.huggingface.co/production/uploads/64c8f4cec547ed5243ebd0a8/Q3cNBL09-ITpyL-jk5lHQ.png"],"publishedAt":"2024-10-17T05:37:00.000Z","submittedOnDailyAt":"2024-10-21T04:41:07.414Z","title":"Web Agents with World Models: Learning and Leveraging Environment\n Dynamics in Web Navigation","submittedOnDailyBy":{"_id":"64c8f4cec547ed5243ebd0a8","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/64c8f4cec547ed5243ebd0a8/MiOH5YbMg8Gh9KYlQsLmX.jpeg","isPro":false,"fullname":"Hyungjoo Chae","user":"hyungjoochae","type":"user"},"summary":"Large language models (LLMs) have recently gained much attention in building\nautonomous agents. However, the performance of current LLM-based web agents in\nlong-horizon tasks is far from optimal, often yielding errors such as\nrepeatedly buying a non-refundable flight ticket. By contrast, humans can avoid\nsuch an irreversible mistake, as we have an awareness of the potential outcomes\n(e.g., losing money) of our actions, also known as the \"world model\". Motivated\nby this, our study first starts with preliminary analyses, confirming the\nabsence of world models in current LLMs (e.g., GPT-4o, Claude-3.5-Sonnet,\netc.). Then, we present a World-model-augmented (WMA) web agent, which\nsimulates the outcomes of its actions for better decision-making. To overcome\nthe challenges in training LLMs as world models predicting next observations,\nsuch as repeated elements across observations and long HTML inputs, we propose\na transition-focused observation abstraction, where the prediction objectives\nare free-form natural language descriptions exclusively highlighting important\nstate differences between time steps. 
Experiments on WebArena and Mind2Web show\nthat our world models improve agents' policy selection without training and\ndemonstrate our agents' cost- and time-efficiency compared to recent\ntree-search-based agents.","upvotes":44,"discussionId":"6715f0358cb36b73b1dbf519","ai_summary":"A World-model-augmented web agent improves decision-making by simulating action outcomes, overcoming training challenges with free-form natural language descriptions of state differences.","ai_keywords":["world model","LLM-based web agents","long-horizon tasks","web agents","GPT-4o","Claude-3.5-Sonnet","World-model-augmented (WMA) web agent","transition-focused observation abstraction","WebArena","Mind2Web","cost-efficiency","time-efficiency","tree-search-based agents"]},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"64c8f4cec547ed5243ebd0a8","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/64c8f4cec547ed5243ebd0a8/MiOH5YbMg8Gh9KYlQsLmX.jpeg","isPro":false,"fullname":"Hyungjoo Chae","user":"hyungjoochae","type":"user"},{"_id":"642f7b0daa1dd0ebdf3f2e56","avatarUrl":"/avatars/930977928950fa951e68983a212b7d63.svg","isPro":false,"fullname":"suhcrates","user":"suhcrates","type":"user"},{"_id":"64184d05db24526c7c9cbef5","avatarUrl":"/avatars/b71e28a09290ae0929888187485b296a.svg","isPro":false,"fullname":"vive kang","user":"Vive-kang","type":"user"},{"_id":"636b529ef796304dd67d139c","avatarUrl":"/avatars/7a64d5095fcb1da558b52ad48177ad76.svg","isPro":false,"fullname":"Taeyoon Kwon","user":"Connoriginal","type":"user"},{"_id":"63be1cd13b0665ad51d29c37","avatarUrl":"/avatars/5acc9b9bbecac3d567e927e2d8667b00.svg","isPro":false,"fullname":"Seungwon 
Lim","user":"sngwon","type":"user"},{"_id":"6494f3aad7f81a0554f3b6bb","avatarUrl":"/avatars/3f5af82d96f7bc6a86582e0ae776f17e.svg","isPro":false,"fullname":"dahyun.lee","user":"leedhn","type":"user"},{"_id":"63dcc93b043d6c11093d3446","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/DOCIESh_YfgGwwfozMyrx.png","isPro":false,"fullname":"im9route","user":"im9route","type":"user"},{"_id":"655c44752205aab35222aca3","avatarUrl":"/avatars/57900539952382de0ce6892faf50b401.svg","isPro":false,"fullname":"Jaehyun Jeon","user":"jeochris","type":"user"},{"_id":"618ada0729415f4b68c5ee83","avatarUrl":"/avatars/c09b0caa30090420c3d1a8ceedfe8500.svg","isPro":false,"fullname":"yejinchoi","user":"yejinc","type":"user"},{"_id":"646aecb04c1cd18b497a50ee","avatarUrl":"/avatars/de15c724056f36a41cb4f375d05ed836.svg","isPro":false,"fullname":"Junhyeok Kim","user":"kjunh","type":"user"},{"_id":"641b754d1911d3be6745cce9","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/641b754d1911d3be6745cce9/Ydjcjd4VuNUGj5Cd4QHdB.png","isPro":false,"fullname":"atayloraerospace","user":"Taylor658","type":"user"},{"_id":"62bc34d69d8c509c4b3be55b","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/62bc34d69d8c509c4b3be55b/PlBj1eHj5gavE0Y3BqyHE.jpeg","isPro":false,"fullname":"Fredric Cliver","user":"Fredric","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":2}">
Papers
arxiv:2410.13232

Web Agents with World Models: Learning and Leveraging Environment Dynamics in Web Navigation

Authors: Hyungjoo Chae, Namyoung Kim, Kai Tzu-iunn Ong, Minju Gwak, Gwanwoo Song, Jihoon Kim, Sunghwan Kim, Dongha Lee, Jinyoung Yeo

Published on Oct 17, 2024 · Submitted by Hyungjoo Chae on Oct 21, 2024
#2 Paper of the day

Abstract

A World-model-augmented web agent improves decision-making by simulating action outcomes, overcoming training challenges with free-form natural language descriptions of state differences.

AI-generated summary

Large language models (LLMs) have recently gained much attention in building autonomous agents. However, the performance of current LLM-based web agents in long-horizon tasks is far from optimal, often yielding errors such as repeatedly buying a non-refundable flight ticket. By contrast, humans can avoid such irreversible mistakes because we are aware of the potential outcomes of our actions (e.g., losing money), an awareness also known as a "world model". Motivated by this, our study begins with preliminary analyses confirming the absence of world models in current LLMs (e.g., GPT-4o, Claude-3.5-Sonnet). We then present a World-model-augmented (WMA) web agent, which simulates the outcomes of its actions for better decision-making. To overcome the challenges in training LLMs as world models that predict next observations, such as repeated elements across observations and long HTML inputs, we propose a transition-focused observation abstraction, in which the prediction targets are free-form natural-language descriptions that exclusively highlight important state differences between time steps. Experiments on WebArena and Mind2Web show that our world models improve agents' policy selection without training and demonstrate our agents' cost- and time-efficiency compared to recent tree-search-based agents.
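The decision loop the abstract describes (propose candidate actions, simulate each one's outcome with the world model, score the simulated outcomes, act on the best) can be sketched as below. This is a minimal illustration, not the paper's implementation: `propose_actions`, `world_model`, and `value_fn` are hypothetical stand-ins for what would be LLM calls, with toy rules so the example runs.

```python
# Minimal sketch of a World-Model-Augmented (WMA) agent step.
# All three components are hypothetical stubs standing in for LLM calls.

def propose_actions(observation):
    """Policy step: sample candidate actions for the current page (stub)."""
    return ["click('Buy ticket')", "click('View refund policy')"]

def world_model(observation, action):
    """Transition-focused observation abstraction: predict a free-form
    natural-language description of the STATE DIFFERENCES the action would
    cause, rather than the full (long, repetitive) next HTML page (stub)."""
    if "Buy" in action:
        return "A non-refundable purchase is confirmed; funds are deducted."
    return "The refund policy page is shown; no irreversible change occurs."

def value_fn(goal, predicted_outcome):
    """Score how well the simulated outcome serves the goal (stub):
    here we simply penalize predicted irreversible outcomes."""
    return 0.0 if "non-refundable" in predicted_outcome else 1.0

def wma_step(goal, observation):
    """Simulate each candidate's outcome and pick the highest-scoring action."""
    candidates = propose_actions(observation)
    scored = [(value_fn(goal, world_model(observation, a)), a)
              for a in candidates]
    return max(scored)[1]

best = wma_step("check refund terms before buying", "<html>flight page</html>")
```

Because the world model predicts only a short description of what changes, the agent can evaluate several candidate actions in one pass each, which is where the claimed cost and time advantage over tree-search agents would come from.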

Community

Paper author · Paper submitter

[Screenshot attached: "Screenshot 2024-10-21 3.10.41 PM.png"]

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2410.13232 in a model README.md to link it from this page.
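For example, a model card could link this paper by mentioning the arXiv ID in its README.md; the repository name and wording below are illustrative:

```markdown
# my-web-agent-model (hypothetical repository)

This model builds on the WMA web agent described in
[Web Agents with World Models](https://arxiv.org/abs/2410.13232).
```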

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2410.13232 in a dataset README.md to link it from this page.

Spaces citing this paper 1

Collections including this paper 7