arxiv:2502.09560

EmbodiedBench: Comprehensive Benchmarking Multi-modal Large Language Models for Vision-Driven Embodied Agents

Published on Feb 13, 2025 · Submitted by Rui Yang on Feb 14, 2025

Authors: Rui Yang, Hanyang Chen, Junyu Zhang, Mark Zhao, Cheng Qian, Kangrui Wang, Qineng Wang, Teja Venkat Koripella, Marziyeh Movahedi, Manling Li, Heng Ji, Huan Zhang, Tong Zhang
Abstract

AI-generated summary: EmbodiedBench is a benchmark comprising diverse tasks to evaluate MLLMs in embodied agents, revealing strengths and weaknesses across high-level and low-level capabilities.

Leveraging Multi-modal Large Language Models (MLLMs) to create embodied agents offers a promising avenue for tackling real-world tasks. While language-centric embodied agents have garnered substantial attention, MLLM-based embodied agents remain underexplored due to the lack of comprehensive evaluation frameworks. To bridge this gap, we introduce EmbodiedBench, an extensive benchmark designed to evaluate vision-driven embodied agents. EmbodiedBench features: (1) a diverse set of 1,128 testing tasks across four environments, ranging from high-level semantic tasks (e.g., household) to low-level tasks involving atomic actions (e.g., navigation and manipulation); and (2) six meticulously curated subsets evaluating essential agent capabilities like commonsense reasoning, complex instruction understanding, spatial awareness, visual perception, and long-term planning. Through extensive experiments, we evaluated 13 leading proprietary and open-source MLLMs within EmbodiedBench. Our findings reveal that MLLMs excel at high-level tasks but struggle with low-level manipulation, with the best model, GPT-4o, scoring only 28.9% on average. EmbodiedBench provides a multifaceted standardized evaluation platform that not only highlights existing challenges but also offers valuable insights to advance MLLM-based embodied agents. Our code is available at https://embodiedbench.github.io.
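
To make the evaluation setup concrete, here is a minimal, self-contained sketch of the kind of observation-to-action loop that a vision-driven embodied agent runs in a benchmark like this. Everything below is hypothetical: `StubEnvironment`, `stub_mllm_policy`, and the success criterion are illustrative stand-ins, not the EmbodiedBench API or any real MLLM interface.

```python
# Hypothetical sketch only: these classes and functions are illustrative stand-ins,
# not part of the EmbodiedBench codebase or any real MLLM API.
import random
from dataclasses import dataclass, field


@dataclass
class Observation:
    image: bytes       # egocentric RGB frame (left empty in this stub)
    instruction: str   # natural-language task, e.g. "put the mug in the sink"


@dataclass
class StubEnvironment:
    """Toy stand-in for one EmbodiedBench-style environment (household, navigation, ...)."""
    max_steps: int = 20
    _steps_taken: int = field(default=0, init=False)

    def reset(self, instruction: str) -> Observation:
        self._steps_taken = 0
        return Observation(image=b"", instruction=instruction)

    def step(self, action: str) -> tuple[Observation, bool]:
        """Apply one action; return the next observation and whether the episode ended."""
        self._steps_taken += 1
        done = action == "stop" or self._steps_taken >= self.max_steps
        return Observation(image=b"", instruction=""), done


def stub_mllm_policy(obs: Observation) -> str:
    """Placeholder for an MLLM call mapping (image, instruction) to an action string."""
    return random.choice(["move_forward", "pick_up", "put_down", "stop"])


def run_episode(env: StubEnvironment, instruction: str) -> bool:
    """Run one task episode; the success check here is purely illustrative."""
    obs = env.reset(instruction)
    while True:
        action = stub_mllm_policy(obs)
        obs, done = env.step(action)
        if done:
            return action == "stop"


if __name__ == "__main__":
    tasks = ["put the mug in the sink", "navigate to the red chair"]
    successes = [run_episode(StubEnvironment(), task) for task in tasks]
    # Benchmark scores such as the 28.9% cited in the abstract are averages of
    # per-task success over a (much larger) task set.
    print(f"success rate: {sum(successes) / len(successes):.1%}")
```

The real benchmark of course uses much richer environments, action spaces, and success checks; the sketch only shows the control flow being evaluated.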

Community

Paper author · Paper submitter (edited Feb 19, 2025)

This paper introduces a comprehensive benchmark, EmbodiedBench, to evaluate Multi-modal Large Language Models (MLLMs) as embodied agents. It not only reveals key challenges in embodied AI but also offers actionable insights to advance MLLM-driven embodied agents.

The code is here: https://github.com/EmbodiedBench/EmbodiedBench

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

- ECBench: Can Multi-modal Foundation Models Understand the Egocentric World? A Holistic Embodied Cognition Benchmark (https://huggingface.co/papers/2501.05031) (2025)
- PhysBench: Benchmarking and Enhancing Vision-Language Models for Physical World Understanding (https://huggingface.co/papers/2501.16411) (2025)
- EmbodiedEval: Evaluate Multimodal LLMs as Embodied Agents (https://huggingface.co/papers/2501.11858) (2025)
- UP-VLA: A Unified Understanding and Prediction Model for Embodied Agent (https://huggingface.co/papers/2501.18867) (2025)
- Long-VITA: Scaling Large Multi-modal Models to 1 Million Tokens with Leading Short-Context Accuracy (https://huggingface.co/papers/2502.05177) (2025)
- iVISPAR -- An Interactive Visual-Spatial Reasoning Benchmark for VLMs (https://huggingface.co/papers/2502.03214) (2025)
- MINDSTORES: Memory-Informed Neural Decision Synthesis for Task-Oriented Reinforcement in Embodied Systems (https://huggingface.co/papers/2501.19318) (2025)

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space: https://huggingface.co/spaces/librarian-bots/recommend_similar_papers

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper: 0

Datasets citing this paper: 6

Spaces citing this paper: 0

Collections including this paper: 9