World Action Models are Zero-shot Policies
\n","updatedAt":"2026-02-20T01:39:53.076Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.7030633091926575},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2602.15922","authors":[{"_id":"6996793f1268a6b79e0d028d","name":"Seonghyeon Ye","hidden":false},{"_id":"6996793f1268a6b79e0d028e","name":"Yunhao Ge","hidden":false},{"_id":"6996793f1268a6b79e0d028f","name":"Kaiyuan Zheng","hidden":false},{"_id":"6996793f1268a6b79e0d0290","name":"Shenyuan Gao","hidden":false},{"_id":"6996793f1268a6b79e0d0291","name":"Sihyun Yu","hidden":false},{"_id":"6996793f1268a6b79e0d0292","name":"George Kurian","hidden":false},{"_id":"6996793f1268a6b79e0d0293","name":"Suneel Indupuru","hidden":false},{"_id":"6996793f1268a6b79e0d0294","name":"You Liang Tan","hidden":false},{"_id":"6996793f1268a6b79e0d0295","name":"Chuning Zhu","hidden":false},{"_id":"6996793f1268a6b79e0d0296","name":"Jiannan Xiang","hidden":false},{"_id":"6996793f1268a6b79e0d0297","name":"Ayaan Malik","hidden":false},{"_id":"6996793f1268a6b79e0d0298","name":"Kyungmin Lee","hidden":false},{"_id":"6996793f1268a6b79e0d0299","name":"William Liang","hidden":false},{"_id":"6996793f1268a6b79e0d029a","name":"Nadun Ranawaka","hidden":false},{"_id":"6996793f1268a6b79e0d029b","name":"Jiasheng Gu","hidden":false},{"_id":"6996793f1268a6b79e0d029c","name":"Yinzhen Xu","hidden":false},{"_id":"6996793f1268a6b79e0d029d","name":"Guanzhi Wang","hidden":false},{"_id":"6996793f1268a6b79e0d029e","name":"Fengyuan Hu","hidden":false},{"_id":"6996793f1268a6b79e0d029f","name":"Avnish Narayan","hidden":false},{"_id":"6996793f1268a6b79e0d02a0","name":"Johan Bjorck","hidden":false},{"_id":"6996793f1268a6b79e0d02a1","name":"Jing Wang","hidden":false},{"_id":"6996793f1268a6b79e0d02a2","name":"Gwanghyun Kim","hidden":false},{"_id":"6996793f1268a6b79e0d02a3","name":"Dantong Niu","hidden":false},{"_id":"6996793f1268a6b79e0d02a4","name":"Ruijie Zheng","hidden":false},{"_id":"6996793f1268a6b79e0d02a5","name":"Yuqi Xie","hidden":false},{"_id":"6996793f1268a6b79e0d02a6","name":"Jimmy Wu","hidden":false},{"_id":"6996793f1268a6b79e0d02a7","name":"Qi Wang","hidden":false},{"_id":"6996793f1268a6b79e0d02a8","name":"Ryan Julian","hidden":false},{"_id":"6996793f1268a6b79e0d02a9","name":"Danfei Xu","hidden":false},{"_id":"6996793f1268a6b79e0d02aa","name":"Yilun Du","hidden":false},{"_id":"6996793f1268a6b79e0d02ab","name":"Yevgen Chebotar","hidden":false},{"_id":"6996793f1268a6b79e0d02ac","name":"Scott Reed","hidden":false},{"_id":"6996793f1268a6b79e0d02ad","name":"Jan Kautz","hidden":false},{"_id":"6996793f1268a6b79e0d02ae","name":"Yuke Zhu","hidden":false},{"_id":"6996793f1268a6b79e0d02af","name":"Linxi \"Jim\" Fan","hidden":false},{"_id":"6996793f1268a6b79e0d02b0","name":"Joel Jang","hidden":false}],"mediaUrls":["https://cdn-uploads.huggingface.co/production/uploads/6039478ab3ecf716b1a5fd4d/aeAgRW6Fq1wOWJzNGPdUm.mp4"],"publishedAt":"2026-02-17T15:04:02.000Z","submittedOnDailyAt":"2026-02-19T00:17:09.596Z","title":"World Action Models are Zero-shot 
Policies","submittedOnDailyBy":{"_id":"6039478ab3ecf716b1a5fd4d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6039478ab3ecf716b1a5fd4d/_Thy4E7taiSYBLKxEKJbT.jpeg","isPro":true,"fullname":"taesiri","user":"taesiri","type":"user"},"summary":"State-of-the-art Vision-Language-Action (VLA) models excel at semantic generalization but struggle to generalize to unseen physical motions in novel environments. We introduce DreamZero, a World Action Model (WAM) built upon a pretrained video diffusion backbone. Unlike VLAs, WAMs learn physical dynamics by predicting future world states and actions, using video as a dense representation of how the world evolves. By jointly modeling video and action, DreamZero learns diverse skills effectively from heterogeneous robot data without relying on repetitive demonstrations. This results in over 2x improvement in generalization to new tasks and environments compared to state-of-the-art VLAs in real robot experiments. Crucially, through model and system optimizations, we enable a 14B autoregressive video diffusion model to perform real-time closed-loop control at 7Hz. Finally, we demonstrate two forms of cross-embodiment transfer: video-only demonstrations from other robots or humans yield a relative improvement of over 42% on unseen task performance with just 10-20 minutes of data. More surprisingly, DreamZero enables few-shot embodiment adaptation, transferring to a new embodiment with only 30 minutes of play data while retaining zero-shot generalization.","upvotes":9,"discussionId":"699679401268a6b79e0d02b1","projectPage":"https://dreamzero0.github.io/","githubRepo":"https://github.com/dreamzero0/dreamzero","githubRepoAddedBy":"user","ai_summary":"DreamZero is a World Action Model that leverages video diffusion to enable better generalization of physical motions across novel environments and embodiments compared to vision-language-action models.","ai_keywords":["World Action Model","video diffusion","video backbone","physical dynamics","autoregressive video diffusion model","closed-loop control","cross-embodiment transfer","few-shot embodiment adaptation"],"githubStars":758,"organization":{"_id":"676eec5d639faf44bc95708b","name":"NVIDIA-DIR","fullname":"NVIDIA Deep Imagination Research"}},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"6039478ab3ecf716b1a5fd4d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6039478ab3ecf716b1a5fd4d/_Thy4E7taiSYBLKxEKJbT.jpeg","isPro":true,"fullname":"taesiri","user":"taesiri","type":"user"},{"_id":"620783f24e28382272337ba4","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/620783f24e28382272337ba4/zkUveQPNiDfYjgGhuFErj.jpeg","isPro":false,"fullname":"GuoLiangTang","user":"Tommy930","type":"user"},{"_id":"66935bdc5489e4f73c76bc7b","avatarUrl":"/avatars/129d1e86bbaf764b507501f4feb177db.svg","isPro":false,"fullname":"Abidoye Aanuoluwapo","user":"Aanuoluwapo65","type":"user"},{"_id":"6270324ebecab9e2dcf245de","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6270324ebecab9e2dcf245de/cMbtWSasyNlYc9hvsEEzt.jpeg","isPro":false,"fullname":"Kye Gomez","user":"kye","type":"user"},{"_id":"67beefc6a0cc5bf3e7ca7321","avatarUrl":"/avatars/7f74f7bc5311ac4243a7ec90c22cf6be.svg","isPro":false,"fullname":"Dmitrii 
Karanov","user":"karanov","type":"user"},{"_id":"62551f7767f0b85962624047","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1664552038624-62551f7767f0b85962624047.png","isPro":false,"fullname":"Seonghyeon Ye","user":"seonghyeonye","type":"user"},{"_id":"613e1a9267835521a6816b04","avatarUrl":"/avatars/49edaa425bbce04dff92bbfb12a6b41c.svg","isPro":false,"fullname":"Joel Jang","user":"wkddydpf","type":"user"},{"_id":"697bdfb63c83952398aff058","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/697bdfb63c83952398aff058/fpFaR1p48UgCrYt-dymaV.png","isPro":true,"fullname":"Dreaming Zebra","user":"GEAR-Dreams","type":"user"},{"_id":"684d57f26e04c265777ead3f","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/cuOj-bQqukSZreXgUJlfm.png","isPro":false,"fullname":"Joakim Lee","user":"Reinforcement4All","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0,"organization":{"_id":"676eec5d639faf44bc95708b","name":"NVIDIA-DIR","fullname":"NVIDIA Deep Imagination Research"}}">
AI-generated summary: DreamZero is a World Action Model that leverages video diffusion to generalize physical motions across novel environments and embodiments better than vision-language-action models.

Abstract
State-of-the-art Vision-Language-Action (VLA) models excel at semantic generalization but struggle to generalize to unseen physical motions in novel environments. We introduce DreamZero, a World Action Model (WAM) built upon a pretrained video diffusion backbone. Unlike VLAs, WAMs learn physical dynamics by predicting future world states and actions, using video as a dense representation of how the world evolves. By jointly modeling video and action, DreamZero learns diverse skills effectively from heterogeneous robot data without relying on repetitive demonstrations. This yields a more than 2x improvement in generalization to new tasks and environments over state-of-the-art VLAs in real robot experiments. Crucially, through model and system optimizations, we enable a 14B autoregressive video diffusion model to perform real-time closed-loop control at 7 Hz. Finally, we demonstrate two forms of cross-embodiment transfer: video-only demonstrations from other robots or humans yield a relative improvement of over 42% on unseen task performance with just 10-20 minutes of data. More surprisingly, DreamZero enables few-shot embodiment adaptation, transferring to a new embodiment with only 30 minutes of play data while retaining zero-shot generalization.
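The closed-loop setup the abstract describes can be made concrete with a short sketch. The Python below is purely illustrative: the WorldActionModel and DummyRobot interfaces, the 8-step action chunk, and the 7-DoF action space are assumptions for exposition, not DreamZero's actual API; only the 7 Hz closed-loop rate comes from the abstract.

```python
"""Minimal sketch: closed-loop control with a world action model (WAM).

Hypothetical interfaces throughout; not DreamZero's real API.
"""
import time
import numpy as np

CONTROL_HZ = 7   # closed-loop control rate reported in the abstract
CHUNK_LEN = 8    # assumed number of actions predicted per model call


class WorldActionModel:
    """Stand-in for a jointly trained video+action diffusion model."""

    def predict(self, frames, instruction):
        # A real WAM would denoise future video latents and an action chunk
        # together; video prediction is the dense learning signal, while
        # only the actions are consumed at control time. Placeholder output:
        future_video = None
        actions = np.zeros((CHUNK_LEN, 7))  # e.g. 7-DoF end-effector deltas
        return future_video, actions


class DummyRobot:
    """Illustrative robot interface (assumed, for a runnable example)."""

    def get_recent_frames(self):
        return np.zeros((4, 224, 224, 3))  # recent RGB frames as context

    def apply_action(self, action):
        pass  # would send the command to the low-level controller


def control_loop(robot, model, instruction, steps=56):
    period = 1.0 / CONTROL_HZ
    for _ in range(0, steps, CHUNK_LEN):
        frames = robot.get_recent_frames()               # observe
        _, actions = model.predict(frames, instruction)  # re-plan a chunk
        for a in actions:                                # execute the chunk
            t0 = time.time()
            robot.apply_action(a)
            time.sleep(max(0.0, period - (time.time() - t0)))


control_loop(DummyRobot(), WorldActionModel(), "pick up the red block")
```

The design point the sketch mirrors is that the model predicts future video and actions jointly, but the control loop only executes the action chunk, re-planning from fresh observations after each chunk to stay closed-loop at the target rate.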