Olaf-World: Orienting Latent Actions for Video World Modeling
\n","updatedAt":"2026-02-12T01:41:38.155Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.7223671078681946},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2602.10104","authors":[{"_id":"698bfbeb6052d3bed9630ae1","user":{"_id":"64e84d40d50f3979be9afcbb","avatarUrl":"/avatars/6a706a4916132c1f1cda63d11dc46b87.svg","isPro":false,"fullname":"Jiang Yuxin","user":"YuxinJ","type":"user"},"name":"Yuxin Jiang","status":"claimed_verified","statusLastChangedAt":"2026-02-11T11:14:13.438Z","hidden":false},{"_id":"698bfbeb6052d3bed9630ae2","name":"Yuchao Gu","hidden":false},{"_id":"698bfbeb6052d3bed9630ae3","name":"Ivor W. Tsang","hidden":false},{"_id":"698bfbeb6052d3bed9630ae4","name":"Mike Zheng Shou","hidden":false}],"publishedAt":"2026-02-10T18:58:41.000Z","submittedOnDailyAt":"2026-02-11T01:40:37.554Z","title":"Olaf-World: Orienting Latent Actions for Video World Modeling","submittedOnDailyBy":{"_id":"6039478ab3ecf716b1a5fd4d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6039478ab3ecf716b1a5fd4d/_Thy4E7taiSYBLKxEKJbT.jpeg","isPro":true,"fullname":"taesiri","user":"taesiri","type":"user"},"summary":"Scaling action-controllable world models is limited by the scarcity of action labels. While latent action learning promises to extract control interfaces from unlabeled video, learned latents often fail to transfer across contexts: they entangle scene-specific cues and lack a shared coordinate system. This occurs because standard objectives operate only within each clip, providing no mechanism to align action semantics across contexts. Our key insight is that although actions are unobserved, their semantic effects are observable and can serve as a shared reference. We introduce SeqΔ-REPA, a sequence-level control-effect alignment objective that anchors integrated latent action to temporal feature differences from a frozen, self-supervised video encoder. Building on this, we present Olaf-World, a pipeline that pretrains action-conditioned video world models from large-scale passive video. 
Extensive experiments demonstrate that our method learns a more structured latent action space, leading to stronger zero-shot action transfer and more data-efficient adaptation to new control interfaces than state-of-the-art baselines.","upvotes":27,"discussionId":"698bfbeb6052d3bed9630ae5","projectPage":"https://showlab.github.io/Olaf-World/","githubRepo":"https://github.com/showlab/Olaf-World","githubRepoAddedBy":"user","ai_summary":"Sequence-level control-effect alignment enables structured latent action space learning for zero-shot action transfer in video world models.","ai_keywords":["action-controllable world models","latent action learning","temporal feature differences","self-supervised video encoder","sequence-level control-effect alignment","action-conditioned video world models","zero-shot action transfer","data-efficient adaptation"],"githubStars":72},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"6039478ab3ecf716b1a5fd4d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6039478ab3ecf716b1a5fd4d/_Thy4E7taiSYBLKxEKJbT.jpeg","isPro":true,"fullname":"taesiri","user":"taesiri","type":"user"},{"_id":"68fc3ddcdc9e5cbf49cbc716","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/68fc3ddcdc9e5cbf49cbc716/gzvksq-XgWnekB6Xl25pw.jpeg","isPro":false,"fullname":"EasonYe","user":"EasonUwU","type":"user"},{"_id":"677272184d148b904333e874","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/5dUau7gxLk4Wm1TiiJJri.jpeg","isPro":false,"fullname":"Efstathios Karypidis","user":"Sta8is","type":"user"},{"_id":"620783f24e28382272337ba4","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/620783f24e28382272337ba4/zkUveQPNiDfYjgGhuFErj.jpeg","isPro":false,"fullname":"GuoLiangTang","user":"Tommy930","type":"user"},{"_id":"6345a93afe134dfd7a0cfabd","avatarUrl":"/avatars/65130ce06b1c72ab1066678419731d88.svg","isPro":false,"fullname":"wu weijia","user":"weijiawu","type":"user"},{"_id":"647896de5bf35e70ab5da887","avatarUrl":"/avatars/50a874a0048047e51f25746c5fbe85bb.svg","isPro":false,"fullname":"Liu Hengyu","user":"Piang","type":"user"},{"_id":"64e84d40d50f3979be9afcbb","avatarUrl":"/avatars/6a706a4916132c1f1cda63d11dc46b87.svg","isPro":false,"fullname":"Jiang Yuxin","user":"YuxinJ","type":"user"},{"_id":"66fcfa6e05638227c44233a9","avatarUrl":"/avatars/4a88765c7f5c5ca77da6d21eb01f73e0.svg","isPro":false,"fullname":"Haiyang Mei","user":"meihaiyang","type":"user"},{"_id":"63021630a35b21bd8a53305a","avatarUrl":"/avatars/7a7e8b39749eda61e57d8a1908726558.svg","isPro":true,"fullname":"Gu Yuchao","user":"guyuchao","type":"user"},{"_id":"634e2217c1ce28f1de921708","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/634e2217c1ce28f1de921708/XTMB6alYUM0KAUptM98kP.jpeg","isPro":false,"fullname":"yyyang404","user":"yyyang","type":"user"},{"_id":"670a5c886f31d354bc8c1cd1","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/670a5c886f31d354bc8c1cd1/D2Mueg9eQy4fzJhOmCrnu.jpeg","isPro":false,"fullname":"Wenzheng Zeng","user":"wenzhengzeng","type":"user"},{"_id":"66c45954ab8f09b10b7ab6a8","avatarUrl":"/avatars/f9946c775c4d70b8e044865ac34ef121.svg","isPro":false,"fullname":"Zhu","user":"ZaynZhu","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0}">
AI-generated summary
Sequence-level control-effect alignment enables structured latent action space learning for zero-shot action transfer in video world models.

Abstract
Scaling action-controllable world models is limited by the scarcity of action labels. While latent action learning promises to extract control interfaces from unlabeled video, the learned latents often fail to transfer across contexts: they entangle scene-specific cues and lack a shared coordinate system. This occurs because standard objectives operate only within each clip, providing no mechanism to align action semantics across contexts. Our key insight is that although actions are unobserved, their semantic effects are observable and can serve as a shared reference. We introduce SeqΔ-REPA, a sequence-level control-effect alignment objective that anchors the integrated latent action to temporal feature differences from a frozen, self-supervised video encoder. Building on this, we present Olaf-World, a pipeline that pretrains action-conditioned video world models from large-scale passive video. Extensive experiments demonstrate that our method learns a more structured latent action space, leading to stronger zero-shot action transfer and more data-efficient adaptation to new control interfaces than state-of-the-art baselines.
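The abstract describes SeqΔ-REPA as anchoring the integrated latent action of a clip to temporal feature differences from a frozen, self-supervised video encoder. The paper page does not spell out the loss, so the snippet below is only a minimal PyTorch sketch of what such a sequence-level alignment term could look like: the module name SeqDeltaAlignmentLoss, the summation over per-step latent actions, the first-to-last-frame feature difference, and the cosine objective are all illustrative assumptions, not the authors' implementation (see the paper and GitHub repo above for the actual method).

```python
# Hypothetical sketch of a SeqDelta-REPA-style alignment loss. Assumed inputs:
#   latent_actions: per-step latent actions inferred from a clip, shape (B, T-1, D_a)
#   frozen_feats:   features from a frozen self-supervised video encoder, shape (B, T, D_f)
# Shapes, names, and the aggregation scheme are guesses for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SeqDeltaAlignmentLoss(nn.Module):
    """Anchor the integrated latent action of a clip to the sequence-level
    feature change measured by a frozen video encoder."""

    def __init__(self, action_dim: int, feat_dim: int):
        super().__init__()
        # Small projector mapping the integrated action into the encoder's feature space.
        self.proj = nn.Linear(action_dim, feat_dim)

    def forward(self, latent_actions: torch.Tensor, frozen_feats: torch.Tensor) -> torch.Tensor:
        # Integrate (sum) per-step latent actions over the sequence: (B, D_a).
        integrated = latent_actions.sum(dim=1)
        # Sequence-level control effect: feature change from first to last frame: (B, D_f).
        delta = frozen_feats[:, -1] - frozen_feats[:, 0]
        # Cosine alignment between the projected action and the observed feature change.
        pred = F.normalize(self.proj(integrated), dim=-1)
        target = F.normalize(delta, dim=-1)
        return 1.0 - (pred * target).sum(dim=-1).mean()


# Usage with random tensors standing in for a real clip.
if __name__ == "__main__":
    B, T, D_a, D_f = 2, 9, 32, 768
    loss_fn = SeqDeltaAlignmentLoss(D_a, D_f)
    actions = torch.randn(B, T - 1, D_a)
    feats = torch.randn(B, T, D_f)
    print(loss_fn(actions, feats).item())
```

In a full pipeline this term would be added to the world model's reconstruction or prediction objective, so that the latent action interface is regularized by observable control effects rather than learned purely within each clip.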