arxiv:2401.01614

GPT-4V(ision) is a Generalist Web Agent, if Grounded

Published on Jan 3, 2024 · Submitted by AK on Jan 3, 2024
#1 Paper of the day
Authors: Boyuan Zheng, Boyu Gou, Jihyung Kil, Huan Sun, Yu Su

AI-generated summary

LMMs like GPT-4V demonstrate potential as generalist web agents by following natural language instructions to complete tasks on live websites, outperforming text-only LLMs and smaller fine-tuned models when grounding combines HTML text and visuals, though grounding remains a challenge.

Abstract

The recent development of large multimodal models (LMMs), especially GPT-4V(ision) and Gemini, has been quickly expanding the capability boundaries of multimodal models beyond traditional tasks like image captioning and visual question answering. In this work, we explore the potential of LMMs like GPT-4V as a generalist web agent that can follow natural language instructions to complete tasks on any given website. We propose SEEACT, a generalist web agent that harnesses the power of LMMs for integrated visual understanding and acting on the web. We evaluate it on the recent MIND2WEB benchmark. In addition to standard offline evaluation on cached websites, we enable a new online evaluation setting by developing a tool that allows running web agents on live websites. We show that GPT-4V presents great potential for web agents: it can successfully complete 50% of the tasks on live websites if we manually ground its textual plans into actions on the websites. This substantially outperforms text-only LLMs like GPT-4 and smaller models (FLAN-T5 and BLIP-2) specifically fine-tuned for web agents. However, grounding remains a major challenge. Existing LMM grounding strategies like set-of-mark prompting turn out to be ineffective for web agents, and the best grounding strategy we develop in this paper leverages both the HTML text and visuals. Yet, there is still a substantial gap from oracle grounding, leaving ample room for further improvement.
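
The abstract describes a two-stage loop: the LMM first generates a textual plan for the next action from the task and a screenshot, and a separate grounding step then maps that plan onto a concrete HTML element and operation. Below is a minimal Python sketch of that loop under stated assumptions; the helper `query_lmm`, the `Candidate` structure, and the answer format are hypothetical placeholders for illustration, not the paper's actual SEEACT API.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    elem_id: str  # id of an interactable HTML element
    text: str     # its visible text / accessibility label

def query_lmm(prompt: str, screenshot: bytes) -> str:
    """Hypothetical stand-in for a GPT-4V-style multimodal API call."""
    raise NotImplementedError("wire up a real LMM endpoint here")

def seeact_step(task: str, screenshot: bytes,
                candidates: list[Candidate]) -> tuple[str, str]:
    # Stage 1: action generation -- describe the next action in free text.
    plan = query_lmm(
        f"Task: {task}\nDescribe the next action to take on this page.",
        screenshot,
    )
    # Stage 2: grounding -- pick one element, using HTML text plus the
    # screenshot together (the strategy the paper finds works best).
    listing = "\n".join(f"{c.elem_id}: {c.text}" for c in candidates)
    reply = query_lmm(
        f"Plan: {plan}\nCandidate elements:\n{listing}\n"
        "Answer as '<element id>|<CLICK, TYPE, or SELECT>'.",
        screenshot,
    )
    elem_id, operation = reply.split("|", 1)
    return elem_id.strip(), operation.strip()
```

Set-of-mark prompting, by contrast, overlays visual marks on the screenshot and asks the model to name a mark directly; per the abstract, that strategy turns out to be ineffective for web agents compared with grounding on HTML text and visuals together.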

Community

Unleashing GPT-4V(ision): Revolutionizing Web Agents with Visual Grounding

Links 🔗:

👉 Subscribe: https://www.youtube.com/@Arxflix
👉 Twitter: https://x.com/arxflix
👉 LMNT (Partner): https://lmnt.com/

By Arxflix


Models citing this paper 5

Datasets citing this paper 5

Spaces citing this paper 4

Collections including this paper 9