Paper page - Learning Video Generation for Robotic Manipulation with Collaborative Trajectory Control

  • Project page: https://fuxiao0719.github.io/projects/robomaster/
  • Code: https://github.com/KwaiVGI/RoboMaster
  • \n\n","updatedAt":"2025-06-03T03:27:30.525Z","author":{"_id":"63aef2cafcca84593e6682db","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1672409763337-noauth.jpeg","fullname":"Xiao Fu","name":"lemonaddie","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":17,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.6119757890701294},"editors":["lemonaddie"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1672409763337-noauth.jpeg"],"reactions":[],"isReport":false}},{"id":"6840f60559fa0307aa7ac2db","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false},"createdAt":"2025-06-05T01:42:29.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [ATI: Any Trajectory Instruction for Controllable Video Generation](https://huggingface.co/papers/2505.22944) (2025)\n* [ManipDreamer: Boosting Robotic Manipulation World Model with Action Tree and Visual Guidance](https://huggingface.co/papers/2504.16464) (2025)\n* [RoboTransfer: Geometry-Consistent Video Diffusion for Robotic Visual Policy Transfer](https://huggingface.co/papers/2505.23171) (2025)\n* [MotionPro: A Precise Motion Controller for Image-to-Video Generation](https://huggingface.co/papers/2505.20287) (2025)\n* [TokenMotion: Decoupled Motion Control via Token Disentanglement for Human-centric Video Generation](https://huggingface.co/papers/2504.08181) (2025)\n* [SViMo: Synchronized Diffusion for Video and Motion Generation in Hand-object Interaction Scenarios](https://huggingface.co/papers/2506.02444) (2025)\n* [ReVision: High-Quality, Low-Cost Video Generation with Explicit 3D Physics Modeling for Complex Motion and Interaction](https://huggingface.co/papers/2504.21855) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`","html":"

    arxiv:2506.01943

    Learning Video Generation for Robotic Manipulation with Collaborative Trajectory Control

    Published on Jun 2, 2025
    Submitted by Xiao Fu on Jun 3, 2025
    Authors: Xiao Fu, Xintao Wang, Xian Liu, Jianhong Bai, Runsen Xu, Pengfei Wan, Di Zhang, Dahua Lin

    Abstract

    AI-generated summary

    A novel framework, RoboMaster, enhances trajectory-controlled video generation for robotic manipulation by modeling inter-object dynamics through a collaborative trajectory formulation, achieving state-of-the-art performance on the Bridge V2 dataset.

    Recent advances in video diffusion models have demonstrated strong potential for generating robotic decision-making data, with trajectory conditions further enabling fine-grained control. However, existing trajectory-based methods primarily focus on individual object motion and struggle to capture the multi-object interactions crucial to complex robotic manipulation. This limitation arises from multi-feature entanglement in overlapping regions, which degrades visual fidelity. To address this, we present RoboMaster, a novel framework that models inter-object dynamics through a collaborative trajectory formulation. Unlike prior methods that decompose objects, our core idea is to decompose the interaction process into three sub-stages: pre-interaction, interaction, and post-interaction. Each stage is modeled using the feature of the dominant object, specifically the robotic arm in the pre- and post-interaction phases and the manipulated object during interaction, thereby mitigating the drawback of multi-object feature fusion during interaction in prior work. To further ensure subject semantic consistency throughout the video, we incorporate appearance- and shape-aware latent representations for objects. Extensive experiments on the challenging Bridge V2 dataset, as well as in-the-wild evaluation, demonstrate that our method outperforms existing approaches, establishing new state-of-the-art performance in trajectory-controlled video generation for robotic manipulation.
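
    To make the collaborative trajectory formulation more concrete, the sketch below illustrates one plausible way to realize the sub-stage decomposition described in the abstract: each frame's conditioning feature is taken from whichever object dominates that stage (the robotic arm before and after the interaction, the manipulated object during it). The function name, tensor shapes, and the grasp/release frame inputs are illustrative assumptions for this sketch, not the paper's actual interface or released implementation.

```python
import torch

def collaborative_trajectory_features(
    arm_latent: torch.Tensor,     # [T, C] per-frame latent of the robotic arm (assumed shape)
    object_latent: torch.Tensor,  # [T, C] per-frame latent of the manipulated object (assumed shape)
    grasp_frame: int,             # frame at which the arm first contacts the object (hypothetical input)
    release_frame: int,           # frame at which the arm releases the object (hypothetical input)
) -> torch.Tensor:
    """Select the dominant object's latent per frame across the three sub-stages:
    pre-interaction -> arm, interaction -> manipulated object, post-interaction -> arm.
    Schematic sketch only; not the authors' implementation."""
    T = arm_latent.shape[0]
    frame_idx = torch.arange(T)
    in_interaction = (frame_idx >= grasp_frame) & (frame_idx < release_frame)  # [T] boolean mask
    # Broadcast the mask over the channel dimension and pick per-frame features.
    return torch.where(in_interaction[:, None], object_latent, arm_latent)

# Illustrative usage: a 49-frame clip with grasp at frame 12 and release at frame 38.
arm = torch.randn(49, 16)
obj = torch.randn(49, 16)
cond = collaborative_trajectory_features(arm, obj, grasp_frame=12, release_frame=38)
print(cond.shape)  # torch.Size([49, 16])
```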

    Community

    This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

    The following papers were recommended by the Semantic Scholar API:

    • ATI: Any Trajectory Instruction for Controllable Video Generation (2025) https://huggingface.co/papers/2505.22944
    • ManipDreamer: Boosting Robotic Manipulation World Model with Action Tree and Visual Guidance (2025) https://huggingface.co/papers/2504.16464
    • RoboTransfer: Geometry-Consistent Video Diffusion for Robotic Visual Policy Transfer (2025) https://huggingface.co/papers/2505.23171
    • MotionPro: A Precise Motion Controller for Image-to-Video Generation (2025) https://huggingface.co/papers/2505.20287
    • TokenMotion: Decoupled Motion Control via Token Disentanglement for Human-centric Video Generation (2025) https://huggingface.co/papers/2504.08181
    • SViMo: Synchronized Diffusion for Video and Motion Generation in Hand-object Interaction Scenarios (2025) https://huggingface.co/papers/2506.02444
    • ReVision: High-Quality, Low-Cost Video Generation with Explicit 3D Physics Modeling for Complex Motion and Interaction (2025) https://huggingface.co/papers/2504.21855

    Please give a thumbs up to this comment if you found it helpful!

    If you want recommendations for any Paper on Hugging Face checkout this Space

    You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


    Models citing this paper 0

    No model linking this paper

    Cite arxiv.org/abs/2506.01943 in a model README.md to link it from this page.

    Datasets citing this paper 0

    No dataset linking this paper

    Cite arxiv.org/abs/2506.01943 in a dataset README.md to link it from this page.

    Spaces citing this paper 0

    No Space linking this paper

    Cite arxiv.org/abs/2506.01943 in a Space README.md to link it from this page.

    Collections including this paper 2