
arxiv:2509.02460 · Project page: https://gencompositor.github.io/ · Code: https://github.com/TencentARC/GenCompositor

GenCompositor: Generative Video Compositing with Diffusion Transformer

Published on Sep 2, 2025 · Submitted by Xiaoyu Li on Sep 3, 2025
Authors: Shuzhou Yang, Xiaoyu Li, Xiaodong Cun, Guangzhi Wang, Lingen Li, Ying Shan, Jian Zhang

AI-generated summary

A novel Diffusion Transformer pipeline automates video compositing by adaptively injecting identity and motion information, maintaining consistency and enabling user customization.

Abstract

Video compositing combines live-action footage to create video productions, serving as a crucial technique in video creation and film production. Traditional pipelines require intensive labor and expert collaboration, resulting in lengthy production cycles and high manpower costs. To address this issue, we automate the process with generative models, a task we call generative video compositing. This new task strives to adaptively inject the identity and motion information of a foreground video into a target video in an interactive manner, allowing users to customize the size, motion trajectory, and other attributes of the dynamic elements added to the final video. Specifically, we designed a novel Diffusion Transformer (DiT) pipeline based on its intrinsic properties. To maintain consistency of the target video before and after editing, we devised a lightweight DiT-based background preservation branch with masked token injection. To inherit dynamic elements from other sources, we propose a DiT fusion block using full self-attention, along with a simple yet effective foreground augmentation for training. In addition, to fuse background and foreground videos with different layouts under user control, we developed a novel position embedding named Extended Rotary Position Embedding (ERoPE). Finally, we curated a dataset of 61K video sets for our new task, called VideoComp; it includes complete dynamic elements and high-quality target videos. Experiments demonstrate that our method effectively realizes generative video compositing, outperforming existing alternative solutions in fidelity and consistency.
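
The fusion block and ERoPE are only sketched in the abstract, so the snippet below is a minimal, speculative illustration rather than the paper's implementation. It assumes one plausible reading of ERoPE: foreground tokens receive rotary positions offset past the end of the background sequence, so the two layouts never share a position when the concatenated tokens pass through full self-attention. All function names (`rope_angles`, `apply_rope`, `extended_positions`) are hypothetical.

```python
import torch

def rope_angles(positions: torch.Tensor, dim: int, base: float = 10000.0):
    # Standard 1-D RoPE: one rotation angle per (position, frequency) pair.
    freqs = base ** (-torch.arange(0, dim, 2, dtype=torch.float32) / dim)
    return positions.float()[:, None] * freqs[None, :]       # (seq, dim/2)

def apply_rope(x: torch.Tensor, angles: torch.Tensor):
    # Rotate consecutive channel pairs of x (..., seq, dim) by the angles.
    x1, x2 = x[..., 0::2], x[..., 1::2]
    cos, sin = angles.cos(), angles.sin()
    return torch.stack((x1 * cos - x2 * sin,
                        x1 * sin + x2 * cos), dim=-1).flatten(-2)

def extended_positions(n_bg: int, n_fg: int):
    # Assumed ERoPE behaviour: background tokens keep positions 0..n_bg-1,
    # foreground tokens are shifted past the end of the background sequence.
    return torch.cat([torch.arange(n_bg), torch.arange(n_fg) + n_bg])

# Joint full self-attention over the concatenated [bg; fg] token sequence.
n_bg, n_fg, dim = 16, 8, 64
tokens = torch.randn(1, n_bg + n_fg, dim)
angles = rope_angles(extended_positions(n_bg, n_fg), dim)
q = apply_rope(tokens, angles)
k = apply_rope(tokens, angles)
attn = torch.softmax(q @ k.transpose(-2, -1) / dim ** 0.5, dim=-1)
fused = attn @ tokens                                        # (1, 24, 64)
```

Offsetting positions rather than sharing them is one way to let attention distinguish which video a token comes from without extra learned embeddings; the paper's actual ERoPE formulation may differ.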

Community

Paper author · Paper submitter

GenCompositor is capable of effortlessly compositing different videos guided by user-specified trajectories and scales.
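
As a rough illustration of the interaction described above (not the authors' code), the hypothetical helper below turns sparse user clicks plus a scale factor into per-frame square placement masks, the kind of conditioning signal that could localize the injected dynamic element:

```python
import numpy as np

def trajectory_to_masks(points, scale, frame_hw, n_frames):
    """Hypothetical helper: interpolate sparse user clicks (x, y) into one
    square placement mask per frame; `scale` sets the element size as a
    fraction of the shorter frame side."""
    h, w = frame_hw
    pts = np.asarray(points, dtype=np.float32)
    t_in = np.linspace(0.0, 1.0, len(pts))       # parameterize the clicks
    t_out = np.linspace(0.0, 1.0, n_frames)      # one sample per frame
    xs = np.interp(t_out, t_in, pts[:, 0])
    ys = np.interp(t_out, t_in, pts[:, 1])
    half = int(scale * min(h, w) / 2)
    masks = np.zeros((n_frames, h, w), dtype=np.float32)
    for i, (cx, cy) in enumerate(zip(xs, ys)):
        y0, y1 = max(int(cy) - half, 0), min(int(cy) + half, h)
        x0, x1 = max(int(cx) - half, 0), min(int(cx) + half, w)
        masks[i, y0:y1, x0:x1] = 1.0             # region to composite into
    return masks

# Example: a three-click diagonal path across a 480x832 clip of 49 frames.
masks = trajectory_to_masks([(100, 100), (400, 240), (700, 400)],
                            scale=0.25, frame_hw=(480, 832), n_frames=49)
```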

Paper author

GenCompositor is the pioneering work on generative video compositing.

Hi @Xiaoyu521 - Thanks for sharing! Feel free to claim the paper with your HF account by clicking your name on the page. 🤗

·
Paper author

Done! Thanks for the reminder.


Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2509.02460 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2509.02460 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2509.02460 in a Space README.md to link it from this page.

Collections including this paper 5