Paper page - InterPrior: Scaling Generative Control for Physics-Based Human-Object Interactions
\n","updatedAt":"2026-02-06T21:10:27.769Z","author":{"_id":"65243980050781c16f234f1f","avatarUrl":"/avatars/743a009681d5d554c27e04300db9f267.svg","fullname":"Avi","name":"avahal","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":3,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.7414215207099915},"editors":["avahal"],"editorAvatarUrls":["/avatars/743a009681d5d554c27e04300db9f267.svg"],"reactions":[],"isReport":false}},{"id":"69869854df8c75f6dfbd47f5","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false},"createdAt":"2026-02-07T01:41:40.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [HoRD: Robust Humanoid Control via History-Conditioned Reinforcement Learning and Online Distillation](https://huggingface.co/papers/2602.04412) (2026)\n* [Decoupled Generative Modeling for Human-Object Interaction Synthesis](https://huggingface.co/papers/2512.19049) (2025)\n* [HumanX: Toward Agile and Generalizable Humanoid Interaction Skills from Human Videos](https://huggingface.co/papers/2602.02473) (2026)\n* [Learning Generalizable Hand-Object Tracking from Synthetic Demonstrations](https://huggingface.co/papers/2512.19583) (2025)\n* [ZEST: Zero-shot Embodied Skill Transfer for Athletic Robot Control](https://huggingface.co/papers/2602.00401) (2026)\n* [Learning Whole-Body Human-Humanoid Interaction from Human-Human Demonstrations](https://huggingface.co/papers/2601.09518) (2026)\n* [Robust and Generalized Humanoid Motion Tracking](https://huggingface.co/papers/2601.23080) (2026)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`","html":"
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
\n
The following papers were recommended by the Semantic Scholar API
Please give a thumbs up to this comment if you found it helpful!
\n
If you want recommendations for any Paper on Hugging Face checkout this Space
\n
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: \n\n@librarian-bot\n\t recommend
\n","updatedAt":"2026-02-07T01:41:40.920Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.7005541324615479},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2602.06035","authors":[{"_id":"69855ee24ad556f294b7eb7c","user":{"_id":"64f17c31b4344f592fb2821e","avatarUrl":"/avatars/11c8edb0967491822277a8a0d3ff3d31.svg","isPro":false,"fullname":"Sirui Xu","user":"xusirui","type":"user"},"name":"Sirui Xu","status":"claimed_verified","statusLastChangedAt":"2026-02-06T18:51:52.134Z","hidden":false},{"_id":"69855ee24ad556f294b7eb7d","name":"Samuel Schulter","hidden":false},{"_id":"69855ee24ad556f294b7eb7e","name":"Morteza Ziyadi","hidden":false},{"_id":"69855ee24ad556f294b7eb7f","name":"Xialin He","hidden":false},{"_id":"69855ee24ad556f294b7eb80","name":"Xiaohan Fei","hidden":false},{"_id":"69855ee24ad556f294b7eb81","name":"Yu-Xiong Wang","hidden":false},{"_id":"69855ee24ad556f294b7eb82","name":"Liangyan Gui","hidden":false}],"mediaUrls":["https://cdn-uploads.huggingface.co/production/uploads/64f17c31b4344f592fb2821e/oRCakC7Qp41hukkXM-IHZ.qt"],"publishedAt":"2026-02-05T18:59:27.000Z","submittedOnDailyAt":"2026-02-06T02:16:24.979Z","title":"InterPrior: Scaling Generative Control for Physics-Based Human-Object Interactions","submittedOnDailyBy":{"_id":"64f17c31b4344f592fb2821e","avatarUrl":"/avatars/11c8edb0967491822277a8a0d3ff3d31.svg","isPro":false,"fullname":"Sirui Xu","user":"xusirui","type":"user"},"summary":"Humans rarely plan whole-body interactions with objects at the level of explicit whole-body movements. High-level intentions, such as affordance, define the goal, while coordinated balance, contact, and manipulation can emerge naturally from underlying physical and motor priors. Scaling such priors is key to enabling humanoids to compose and generalize loco-manipulation skills across diverse contexts while maintaining physically coherent whole-body coordination. To this end, we introduce InterPrior, a scalable framework that learns a unified generative controller through large-scale imitation pretraining and post-training by reinforcement learning. InterPrior first distills a full-reference imitation expert into a versatile, goal-conditioned variational policy that reconstructs motion from multimodal observations and high-level intent. While the distilled policy reconstructs training behaviors, it does not generalize reliably due to the vast configuration space of large-scale human-object interactions. To address this, we apply data augmentation with physical perturbations, and then perform reinforcement learning finetuning to improve competence on unseen goals and initializations. Together, these steps consolidate the reconstructed latent skills into a valid manifold, yielding a motion prior that generalizes beyond the training data, e.g., it can incorporate new behaviors such as interactions with unseen objects. 
We further demonstrate its effectiveness for user-interactive control and its potential for real robot deployment.","upvotes":23,"discussionId":"69855ee24ad556f294b7eb83","projectPage":"https://sirui-xu.github.io/InterPrior/","ai_summary":"A scalable framework called InterPrior learns a unified generative controller through imitation learning and reinforcement learning to enable humanoids to generalize loco-manipulation skills across diverse contexts while maintaining physically coherent whole-body coordination.","ai_keywords":["variational policy","imitation learning","reinforcement learning","motion prior","latent skills","physical perturbations","goal-conditioned policy","multimodal observations","whole-body coordination","loco-manipulation skills"]},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"64f005d8f87b014e81716593","avatarUrl":"/avatars/cfef589b87bd424e333b2a25063528f8.svg","isPro":false,"fullname":"QIUSI ZHAN","user":"qiusizhan","type":"user"},{"_id":"64f17c31b4344f592fb2821e","avatarUrl":"/avatars/11c8edb0967491822277a8a0d3ff3d31.svg","isPro":false,"fullname":"Sirui Xu","user":"xusirui","type":"user"},{"_id":"64cd28311b981b2e0c841d80","avatarUrl":"/avatars/4ca4b6a97c8647773ba943a5346af00c.svg","isPro":false,"fullname":"JiangshanGong","user":"Frank-Gong123","type":"user"},{"_id":"653560efb3852ed1ceded22c","avatarUrl":"/avatars/06b7340ea54c1319143afd954c1f83b7.svg","isPro":false,"fullname":"Xiyan","user":"Ajatar","type":"user"},{"_id":"67b4208f28631e51d09a3d91","avatarUrl":"/avatars/181c2c395100f49d6147fc06fc43e32b.svg","isPro":false,"fullname":"Xialin He","user":"xialinhe2","type":"user"},{"_id":"67b420f84c100b5106a244d0","avatarUrl":"/avatars/67f767907af72427e0ef5e86b72f4a20.svg","isPro":false,"fullname":"Xialin He","user":"xialinhe3","type":"user"},{"_id":"67b42039cca7aff798979e80","avatarUrl":"/avatars/d410b3617395cd7e2a9c0c89ff12f23d.svg","isPro":false,"fullname":"Xialin He","user":"XialinHe","type":"user"},{"_id":"6332b75df0b2af7f685810da","avatarUrl":"/avatars/078f582f555a9dee3f7ab4d155c0a65c.svg","isPro":false,"fullname":"Xialin He","user":"MuLinjiu","type":"user"},{"_id":"67d92f5216d94730da55fc78","avatarUrl":"/avatars/cbdb2faded9fc2ba92fb56471a9286df.svg","isPro":false,"fullname":"YUCHENG Zhang","user":"siris756","type":"user"},{"_id":"620783f24e28382272337ba4","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/620783f24e28382272337ba4/zkUveQPNiDfYjgGhuFErj.jpeg","isPro":false,"fullname":"GuoLiangTang","user":"Tommy930","type":"user"},{"_id":"679c795d6da6fbd8df013009","avatarUrl":"/avatars/b20cba296c2faeab954f986ca1417ab4.svg","isPro":false,"fullname":"Ziyin Wang","user":"zwasasdsaas","type":"user"},{"_id":"61e52be53d6dbb1da842316a","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/61e52be53d6dbb1da842316a/gx0WGPcOCClXPymoKglc4.jpeg","isPro":false,"fullname":"Börje Karlsson","user":"tellarin","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0}">
AI-generated summary

A scalable framework called InterPrior learns a unified generative controller through imitation learning and reinforcement learning to enable humanoids to generalize loco-manipulation skills across diverse contexts while maintaining physically coherent whole-body coordination.
Abstract

Humans rarely plan whole-body interactions with objects at the level of explicit whole-body movements. High-level intentions, such as affordance, define the goal, while coordinated balance, contact, and manipulation can emerge naturally from underlying physical and motor priors. Scaling such priors is key to enabling humanoids to compose and generalize loco-manipulation skills across diverse contexts while maintaining physically coherent whole-body coordination. To this end, we introduce InterPrior, a scalable framework that learns a unified generative controller through large-scale imitation pretraining and post-training by reinforcement learning. InterPrior first distills a full-reference imitation expert into a versatile, goal-conditioned variational policy that reconstructs motion from multimodal observations and high-level intent. While the distilled policy reconstructs training behaviors, it does not generalize reliably due to the vast configuration space of large-scale human-object interactions. To address this, we apply data augmentation with physical perturbations, and then perform reinforcement learning fine-tuning to improve competence on unseen goals and initializations. Together, these steps consolidate the reconstructed latent skills into a valid manifold, yielding a motion prior that generalizes beyond the training data, e.g., it can incorporate new behaviors such as interactions with unseen objects. We further demonstrate its effectiveness for user-interactive control and its potential for real robot deployment.
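The pipeline in the abstract (distill a full-reference expert into a goal-conditioned variational policy, then fine-tune with RL) can be made concrete with a small sketch. Below is a minimal PyTorch illustration of the distillation stage, assuming a behavior-cloning loss against the frozen expert and a proprioception-conditioned prior; all module names, shapes, and the loss weighting are hypothetical, not the paper's implementation.

```python
# Minimal sketch of the distillation stage: a goal-conditioned variational
# policy imitating a full-reference expert. All dimensions, module names, and
# the DAgger-style loss below are illustrative assumptions.
import torch
import torch.nn as nn

class GoalConditionedVAEPolicy(nn.Module):
    def __init__(self, obs_dim, goal_dim, act_dim, latent_dim=64):
        super().__init__()
        # Encoder sees the goal/reference signal during distillation.
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim + goal_dim, 256), nn.ELU(),
            nn.Linear(256, 2 * latent_dim),          # mean and log-variance
        )
        # Prior is conditioned on proprioception only, so latents can be
        # sampled at test time without the reference motion.
        self.prior = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ELU(),
            nn.Linear(256, 2 * latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(obs_dim + latent_dim, 256), nn.ELU(),
            nn.Linear(256, act_dim),
        )

    def forward(self, obs, goal):
        mu, logvar = self.encoder(torch.cat([obs, goal], -1)).chunk(2, -1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        action = self.decoder(torch.cat([obs, z], -1))
        return action, (mu, logvar)

def distill_step(policy, expert_action, obs, goal, beta=1e-3):
    """One behavior-cloning step against the frozen imitation expert."""
    action, (mu, logvar) = policy(obs, goal)
    prior_mu, prior_logvar = policy.prior(obs).chunk(2, -1)
    bc_loss = (action - expert_action).pow(2).mean()
    # KL(posterior || conditional prior) between diagonal Gaussians.
    kl = 0.5 * (prior_logvar - logvar
                + (logvar.exp() + (mu - prior_mu).pow(2)) / prior_logvar.exp()
                - 1).mean()
    return bc_loss + beta * kl
```

In this sketch, conditioning the prior on proprioception alone is what would let the latent be sampled at test time without the reference motion; the KL term keeps the encoder's posterior close to that prior, so sampled latents land on usable skills.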
Distillation reconstructs motor skills, while RL fine-tuning interpolates and consolidates the latent space into a coherent skill manifold for versatile whole-body loco-manipulation.
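As a companion to the caption above, here is a hedged sketch of RL fine-tuning in the latent space: a small high-level policy proposes latent skills while the distilled decoder stays frozen, so optimization reshapes and consolidates the latent manifold rather than relearning joint-level control. The environment API, network sizes, and return handling are assumptions for illustration only.

```python
# Hedged sketch: RL fine-tuning acts in the latent space of the frozen,
# distilled decoder. Names, shapes, and the simulator API are illustrative
# assumptions, not the paper's actual interfaces.
import torch
import torch.nn as nn

class LatentSkillPolicy(nn.Module):
    """High-level policy: maps observation + goal to a latent skill z."""
    def __init__(self, obs_dim, goal_dim, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + goal_dim, 256), nn.ELU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, obs, goal):
        return self.net(torch.cat([obs, goal], -1))

def rollout_step(env, hi_policy, decoder, obs, goal):
    """One control step of the hierarchical controller.

    `env` is a hypothetical simulator whose step() takes joint actions;
    `decoder` is the distilled low-level decoder with frozen weights.
    """
    with torch.no_grad():
        z = hi_policy(obs, goal)                     # latent skill proposal
        action = decoder(torch.cat([obs, z], -1))    # frozen skill decoder
    next_obs, reward, done, info = env.step(action)  # assumed 4-tuple API
    return next_obs, reward, done, info
```

Freezing the decoder is the design choice that would let RL interpolate between reconstructed skills without overwriting them; only the mapping from observations and goals to latents is updated.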