arxiv:2602.16705

Learning Humanoid End-Effector Control for Open-Vocabulary Visual Loco-Manipulation

Published on Feb 18 · Submitted by Runpei Dong on Feb 19
Authors: Runpei Dong, Ziyan Li, Xialin He, Saurabh Gupta
Abstract

HERO enables humanoid robots to perform object manipulation in diverse real-world environments by combining accurate end-effector control with open-vocabulary vision models for generalizable scene understanding.

AI-generated summary

Visual loco-manipulation of arbitrary objects in the wild with humanoid robots requires accurate end-effector (EE) control and a generalizable understanding of the scene via visual inputs (e.g., RGB-D images). Existing approaches are based on real-world imitation learning and exhibit limited generalization due to the difficulty of collecting large-scale training datasets. This paper presents a new paradigm, HERO, for object loco-manipulation with humanoid robots that combines the strong generalization and open-vocabulary understanding of large vision models with strong control performance from simulated training. We achieve this by designing an accurate residual-aware EE tracking policy that combines classical robotics with machine learning. It uses a) inverse kinematics to convert residual end-effector targets into reference trajectories, b) a learned neural forward model for accurate forward kinematics, c) goal adjustment, and d) replanning. Together, these innovations cut the end-effector tracking error by 3.2x. We use this accurate end-effector tracker to build a modular system for loco-manipulation, in which open-vocabulary large vision models provide strong visual generalization. Our system operates in diverse real-world environments, from offices to coffee shops, where the robot reliably manipulates various everyday objects (e.g., mugs, apples, toys) on surfaces ranging from 43 cm to 92 cm in height. Systematic modular and end-to-end tests in simulation and the real world demonstrate the effectiveness of our proposed design. We believe the advances in this paper can open up new ways of training humanoid robots to interact with daily objects.
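To make the control recipe above concrete, here is a minimal Python sketch of the residual-aware tracking loop as the abstract describes it: inverse kinematics converts the (possibly adjusted) EE target into a joint reference, a learned forward model predicts where the low-level policy will actually land, and the commanded goal is shifted by the predicted residual before replanning. This is an illustrative sketch, not the authors' code; all names here (track_ee_goal, ik_solve, forward_model, execute) are hypothetical stand-ins.

```python
# Hypothetical sketch of the residual-aware EE tracking loop described in
# the abstract. ik_solve, forward_model, and execute are assumed stand-ins
# for the paper's IK solver, learned neural forward model, and low-level
# whole-body policy; none of these names come from the paper's code.
import numpy as np


def track_ee_goal(goal, ik_solve, forward_model, execute,
                  max_replans=10, tol=0.02):
    """Drive the end-effector toward `goal` (3D position, meters).

    a) IK turns the (adjusted) EE target into a joint reference.
    b) The learned forward model predicts the EE pose the low-level
       policy will actually reach for that reference (learned FK).
    c) Goal adjustment shifts the command by the predicted residual.
    d) Replanning repeats a)-c) until the predicted error is within tol.
    """
    goal = np.asarray(goal, dtype=float)
    adjusted_goal = goal.copy()
    for _ in range(max_replans):
        q_ref = ik_solve(adjusted_goal)                    # a) IK -> reference
        predicted = np.asarray(forward_model(q_ref))       # b) learned FK
        residual = goal - predicted                        # predicted miss
        if np.linalg.norm(residual) < tol:
            execute(q_ref)                                 # track the reference
            return True
        adjusted_goal = adjusted_goal + residual           # c) goal adjustment
    return False                                           # d) replan budget spent


if __name__ == "__main__":
    # Toy stand-ins: identity "IK" and a forward model with a constant bias,
    # mimicking a low-level controller that systematically undershoots.
    bias = np.array([0.03, 0.0, -0.02])
    ok = track_ee_goal(
        goal=[0.4, 0.1, 0.9],
        ik_solve=lambda g: g,                  # joint ref = target (toy)
        forward_model=lambda q: q + bias,      # policy lands with a bias
        execute=lambda q: print("executing reference:", q),
    )
    print("converged:", ok)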

Community

Paper author · Paper submitter

"HERO: Learning Humanoid End-Effector ContROl for Open-Vocabulary Visual Loco-Manipulation"
Project page: https://hero-humanoid.github.io/

arXivLens breakdown of this paper 👉 https://arxivlens.com/PaperView/Details/learning-humanoid-end-effector-control-for-open-vocabulary-visual-loco-manipulation-7624-028ee103

  • Executive Summary
  • Detailed Breakdown
  • Practical Applications

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

  • Humanoid Manipulation Interface: Humanoid Whole-Body Manipulation from Robot-Free Demonstrations (2026): https://huggingface.co/papers/2602.06643
  • Coordinated Humanoid Manipulation with Choice Policies (2025): https://huggingface.co/papers/2512.25072
  • EgoHumanoid: Unlocking In-the-Wild Loco-Manipulation with Robot-Free Egocentric Demonstration (2026): https://huggingface.co/papers/2602.10106
  • AdaptManip: Learning Adaptive Whole-Body Object Lifting and Delivery with Online Recurrent State Estimation (2026): https://huggingface.co/papers/2602.14363
  • PILOT: A Perceptive Integrated Low-level Controller for Loco-manipulation over Unstructured Scenes (2026): https://huggingface.co/papers/2601.17440
  • Dexterous Manipulation Policies from RGB Human Videos via 3D Hand-Object Trajectory Reconstruction (2026): https://huggingface.co/papers/2602.09013
  • GR-Dexter Technical Report (2025): https://huggingface.co/papers/2512.24210

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space: https://huggingface.co/spaces/librarian-bots/recommend_similar_papers

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2602.16705 in a model README.md to link it from this page.
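For example, any sentence in the README.md that contains the paper's arXiv URL should be enough to create the link; the wording below is a hypothetical model-card sentence, not prescribed text:

    This model implements the HERO end-effector controller introduced in
    https://arxiv.org/abs/2602.16705.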

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2602.16705 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2602.16705 in a Space README.md to link it from this page.

Collections including this paper 2