
\"Screenshot

\n","updatedAt":"2025-04-04T02:03:57.009Z","author":{"_id":"60f1abe7544c2adfd699860c","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg","fullname":"AK","name":"akhaliq","type":"user","isPro":false,"isHf":true,"isHfAdmin":false,"isMod":false,"followerCount":9180,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.3687865436077118},"editors":["akhaliq"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg"],"reactions":[],"isReport":false}},{"id":"67ef63b2fb6b2bf56be2b6e0","author":{"_id":"63b4147f7af2e415f2599659","avatarUrl":"/avatars/7d8989ddefab16d31b377870e56e0550.svg","fullname":"hakkyu kim","name":"HAKKYU","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":2,"isUserFollowing":false},"createdAt":"2025-04-04T04:44:34.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"project page - METHOD:\n - The descriptions of the spatial feature branch and semantic feature branch are swapped. Image and description mismatch.\n","html":"

project page - METHOD:

\n
    \n
  • The descriptions of the spatial feature branch and semantic feature branch are swapped. Image and description mismatch.
  • \n
\n","updatedAt":"2025-04-04T04:44:34.657Z","author":{"_id":"63b4147f7af2e415f2599659","avatarUrl":"/avatars/7d8989ddefab16d31b377870e56e0550.svg","fullname":"hakkyu kim","name":"HAKKYU","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":2,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.8226158022880554},"editors":["HAKKYU"],"editorAvatarUrls":["/avatars/7d8989ddefab16d31b377870e56e0550.svg"],"reactions":[],"isReport":false}},{"id":"67f088dea6f1a1183fcb0741","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false},"createdAt":"2025-04-05T01:35:26.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [CINEMA: Coherent Multi-Subject Video Generation via MLLM-Based Guidance](https://huggingface.co/papers/2503.10391) (2025)\n* [CustomVideoX: 3D Reference Attention Driven Dynamic Adaptation for Zero-Shot Customized Video Diffusion Transformers](https://huggingface.co/papers/2502.06527) (2025)\n* [Goku: Flow Based Video Generative Foundation Models](https://huggingface.co/papers/2502.04896) (2025)\n* [Phantom: Subject-consistent video generation via cross-modal alignment](https://huggingface.co/papers/2502.11079) (2025)\n* [Raccoon: Multi-stage Diffusion Training with Coarse-to-Fine Curating Videos](https://huggingface.co/papers/2502.21314) (2025)\n* [RealGeneral: Unifying Visual Generation via Temporal In-Context Learning with Video Models](https://huggingface.co/papers/2503.10406) (2025)\n* [Get In Video: Add Anything You Want to the Video](https://huggingface.co/papers/2503.06268) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`","html":"

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

\n

The following papers were recommended by the Semantic Scholar API

\n\n

Please give a thumbs up to this comment if you found it helpful!

\n

If you want recommendations for any Paper on Hugging Face checkout this Space

\n

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: \n\n@librarian-bot\n\t recommend

\n","updatedAt":"2025-04-05T01:35:26.910Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.6765714287757874},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2504.02436","authors":[{"_id":"67ef3dfae8b932ae7a832950","user":{"_id":"617ba1820e4237bd1731b867","avatarUrl":"/avatars/f9de06363e64bddd7dc977e96e85df8a.svg","isPro":false,"fullname":"zhengcong fei","user":"onion","type":"user"},"name":"Zhengcong Fei","status":"admin_assigned","statusLastChangedAt":"2025-04-04T07:19:16.548Z","hidden":false},{"_id":"67ef3dfae8b932ae7a832951","user":{"_id":"65dc3a850af7e21ba40e939f","avatarUrl":"/avatars/e129c64617675edd05d4317d39604318.svg","isPro":false,"fullname":"Li","user":"Debang","type":"user"},"name":"Debang Li","status":"admin_assigned","statusLastChangedAt":"2025-04-04T07:19:27.042Z","hidden":false},{"_id":"67ef3dfae8b932ae7a832952","user":{"_id":"65bef422fdb8d33cefeaccc3","avatarUrl":"/avatars/d40b0d7dda21fa1a68c291d11bc357ec.svg","isPro":false,"fullname":"Qiu Di","user":"diqiu7","type":"user"},"name":"Di Qiu","status":"admin_assigned","statusLastChangedAt":"2025-04-04T07:19:41.458Z","hidden":false},{"_id":"67ef3dfae8b932ae7a832953","name":"Jiahua Wang","hidden":false},{"_id":"67ef3dfae8b932ae7a832954","name":"Yikun Dou","hidden":false},{"_id":"67ef3dfae8b932ae7a832955","user":{"_id":"62e0f1314db2175cd270ad08","avatarUrl":"/avatars/1d3d6af6c63557f4abf0484e028fa942.svg","isPro":false,"fullname":"Rui Wang","user":"ruiwang","type":"user"},"name":"Rui Wang","status":"admin_assigned","statusLastChangedAt":"2025-04-04T07:20:11.206Z","hidden":false},{"_id":"67ef3dfae8b932ae7a832956","user":{"_id":"666a674967c686801acf25bb","avatarUrl":"/avatars/c1f3edd63fd378dfb555e6413a966932.svg","isPro":false,"fullname":"jingtao xu","user":"raul678","type":"user"},"name":"Jingtao Xu","status":"admin_assigned","statusLastChangedAt":"2025-04-04T07:20:20.880Z","hidden":false},{"_id":"67ef3dfae8b932ae7a832957","user":{"_id":"634672bfb7b4e71c7f45360f","avatarUrl":"/avatars/4b646fc3e271be90b9ec619d42ce3e99.svg","isPro":false,"fullname":"Fan Mingyuan","user":"MichaelFan","type":"user"},"name":"Mingyuan Fan","status":"admin_assigned","statusLastChangedAt":"2025-04-04T07:20:32.597Z","hidden":false},{"_id":"67ef3dfae8b932ae7a832958","name":"Guibin Chen","hidden":false},{"_id":"67ef3dfae8b932ae7a832959","name":"Yang Li","hidden":false},{"_id":"67ef3dfae8b932ae7a83295a","name":"Yahui Zhou","hidden":false}],"publishedAt":"2025-04-03T09:50:50.000Z","submittedOnDailyAt":"2025-04-04T00:33:57.000Z","title":"SkyReels-A2: Compose Anything in Video Diffusion Transformers","submittedOnDailyBy":{"_id":"60f1abe7544c2adfd699860c","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg","isPro":false,"fullname":"AK","user":"akhaliq","type":"user"},"summary":"This paper presents SkyReels-A2, a controllable video generation framework\ncapable of assembling arbitrary visual elements (e.g., characters, objects,\nbackgrounds) into synthesized videos based on 
textual prompts while maintaining\nstrict consistency with reference images for each element. We term this task\nelements-to-video (E2V), whose primary challenges lie in preserving the\nfidelity of each reference element, ensuring coherent composition of the scene,\nand achieving natural outputs. To address these, we first design a\ncomprehensive data pipeline to construct prompt-reference-video triplets for\nmodel training. Next, we propose a novel image-text joint embedding model to\ninject multi-element representations into the generative process, balancing\nelement-specific consistency with global coherence and text alignment. We also\noptimize the inference pipeline for both speed and output stability. Moreover,\nwe introduce a carefully curated benchmark for systematic evaluation, i.e, A2\nBench. Experiments demonstrate that our framework can generate diverse,\nhigh-quality videos with precise element control. SkyReels-A2 is the first\nopen-source commercial grade model for the generation of E2V, performing\nfavorably against advanced closed-source commercial models. We anticipate\nSkyReels-A2 will advance creative applications such as drama and virtual\ne-commerce, pushing the boundaries of controllable video generation.","upvotes":39,"discussionId":"67ef3dfee8b932ae7a832a97","githubRepo":"https://github.com/SkyworkAI/SkyReels-A2","githubRepoAddedBy":"auto","ai_summary":"SkyReels-A2, an open-source framework, generates high-quality, element-controlled videos from textual prompts using a novel image-text embedding model, optimized inference pipeline, and A2 Bench for systematic evaluation.","ai_keywords":["elements-to-video (E2V)","image-text joint embedding","generative process","prompt-reference-video triplets","output stability","A2 Bench"],"githubStars":701},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"63468720dd6d90d82ccf3450","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/63468720dd6d90d82ccf3450/tVBFlmZNz8FRMkOrDaDID.jpeg","isPro":false,"fullname":"YSH","user":"BestWishYsh","type":"user"},{"_id":"63ddc7b80f6d2d6c3efe3600","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/63ddc7b80f6d2d6c3efe3600/RX5q9T80Jl3tn6z03ls0l.jpeg","isPro":false,"fullname":"J","user":"dashfunnydashdash","type":"user"},{"_id":"635964636a61954080850e1d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/635964636a61954080850e1d/0bfExuDTrHTtm8c-40cDM.png","isPro":false,"fullname":"William Lamkin","user":"phanes","type":"user"},{"_id":"6683fc5344a65be1aab25dc0","avatarUrl":"/avatars/e13cde3f87b59e418838d702807df3b5.svg","isPro":false,"fullname":"hjkim","user":"hojie11","type":"user"},{"_id":"620783f24e28382272337ba4","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/620783f24e28382272337ba4/zkUveQPNiDfYjgGhuFErj.jpeg","isPro":false,"fullname":"GuoLiangTang","user":"Tommy930","type":"user"},{"_id":"66f71a048ad02d06fb0123de","avatarUrl":"/avatars/423328834946bc1fdbcdf741b4baa06b.svg","isPro":false,"fullname":"Phil Quist","user":"philquist","type":"user"},{"_id":"67ef88d818ee7ec5982c644c","avatarUrl":"/avatars/15a053accd604159b5f25bd6ac903585.svg","isPro":false,"fullname":"Steph Moreland","user":"smoreland","type":"user"},{"_id":"67ef92467c9803451be0bef5","avatarUrl":"/avatars/9d5fe1a9465a0e220544c5af08923918.svg","isPro":false,"fullname":"libin 
xiong","user":"lbxiong","type":"user"},{"_id":"641d9c125b4c7eb277d1f29d","avatarUrl":"/avatars/4c24f0a6a2d0466386574a560bce920e.svg","isPro":false,"fullname":"Gharbali","user":"AliG62","type":"user"},{"_id":"6528a57bf0042c8301d217dc","avatarUrl":"/avatars/b7e1398aec545a0342c05c67c5493c8b.svg","isPro":false,"fullname":"HanSaem Kim","user":"kensaem","type":"user"},{"_id":"64f15d2662a7109a6e72be2a","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/64f15d2662a7109a6e72be2a/NDM5sKZ4eN3FSeOIW3U1I.jpeg","isPro":false,"fullname":"luokai","user":"iamluokai","type":"user"},{"_id":"65089ae54afcb7378d1e3fcb","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/65089ae54afcb7378d1e3fcb/jau-lPKCnry75SBJ0mjjc.jpeg","isPro":false,"fullname":"Bugrahan","user":"nuwandaa","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0}">
arxiv:2504.02436

SkyReels-A2: Compose Anything in Video Diffusion Transformers

Published on Apr 3, 2025
· Submitted by AK on Apr 4, 2025
Authors:
Zhengcong Fei, Debang Li, Di Qiu, Jiahua Wang, Yikun Dou, Rui Wang, Jingtao Xu, Mingyuan Fan, Guibin Chen, Yang Li, Yahui Zhou

Abstract

SkyReels-A2, an open-source framework, generates high-quality, element-controlled videos from textual prompts using a novel image-text embedding model, optimized inference pipeline, and A2 Bench for systematic evaluation.

AI-generated summary

This paper presents SkyReels-A2, a controllable video generation framework capable of assembling arbitrary visual elements (e.g., characters, objects, backgrounds) into synthesized videos based on textual prompts while maintaining strict consistency with reference images for each element. We term this task elements-to-video (E2V); its primary challenges lie in preserving the fidelity of each reference element, ensuring coherent composition of the scene, and achieving natural outputs. To address these, we first design a comprehensive data pipeline to construct prompt-reference-video triplets for model training. Next, we propose a novel image-text joint embedding model to inject multi-element representations into the generative process, balancing element-specific consistency with global coherence and text alignment. We also optimize the inference pipeline for both speed and output stability. Moreover, we introduce a carefully curated benchmark, A2 Bench, for systematic evaluation. Experiments demonstrate that our framework can generate diverse, high-quality videos with precise element control. SkyReels-A2 is the first open-source commercial-grade model for E2V generation, performing favorably against advanced closed-source commercial models. We anticipate SkyReels-A2 will advance creative applications such as drama and virtual e-commerce, pushing the boundaries of controllable video generation.
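To make the joint-embedding idea concrete, here is a minimal sketch (module names, dimensions, and the interface are illustrative assumptions, not the paper's published API): per-element reference-image tokens are projected into the same space as the prompt tokens and concatenated into one conditioning sequence, so the video diffusion transformer's cross-attention can attend to element-specific identity and the global text prompt jointly.

```python
# Illustrative sketch only: names and dimensions are hypothetical,
# not SkyReels-A2's actual implementation.
import torch
import torch.nn as nn

class JointImageTextConditioner(nn.Module):
    """Projects per-element image tokens and prompt tokens into one
    shared conditioning sequence for a diffusion transformer."""
    def __init__(self, img_dim=1024, txt_dim=4096, cond_dim=1536):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, cond_dim)  # reference-element tokens
        self.txt_proj = nn.Linear(txt_dim, cond_dim)  # text prompt tokens

    def forward(self, element_feats, text_feats):
        # element_feats: (batch, n_elements, n_img_tokens, img_dim)
        # text_feats:    (batch, n_txt_tokens, txt_dim)
        b, n, t, _ = element_feats.shape
        img_tokens = self.img_proj(element_feats).reshape(b, n * t, -1)
        txt_tokens = self.txt_proj(text_feats)
        # One sequence: cross-attention sees element identity and prompt together.
        return torch.cat([txt_tokens, img_tokens], dim=1)

cond = JointImageTextConditioner()
elems = torch.randn(2, 3, 257, 1024)  # 3 reference elements, CLIP-style tokens
text = torch.randn(2, 77, 4096)       # prompt embedding
tokens = cond(elems, text)            # (2, 77 + 3*257, 1536)
```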

Community

Paper submitter

[Screenshot attached: Screenshot 2025-04-03 at 10.03.45 PM.png]

Project page - METHOD:

  • The descriptions of the spatial feature branch and the semantic feature branch are swapped; the image and its description do not match. (A sketch of the two branches' intended roles follows below.)
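For anyone else confused by the swap, here is a rough sketch of the two branches' intended roles, under my own reading of the paper (tensor names and shapes are hypothetical, not the authors' code):

```python
# Rough sketch, not the authors' code: the semantic branch supplies coarse
# identity tokens (CLIP-style) consumed via cross-attention, while the spatial
# branch injects VAE-encoded reference latents alongside the noise latents
# so fine per-pixel detail is preserved.
import torch

def build_dit_conditioning(noise_latents, spatial_ref_latents, semantic_ref_tokens):
    # noise_latents:       (b, f, c, h, w) video noise latents
    # spatial_ref_latents: (b, f_ref, c, h, w) VAE-encoded reference images
    # semantic_ref_tokens: (b, n, c_sem) CLIP-style image tokens
    # Spatial branch: concatenate along the frame axis so the transformer
    # attends to raw spatial reference detail.
    dit_input = torch.cat([noise_latents, spatial_ref_latents], dim=1)
    # Semantic branch: returned separately; injected via cross-attention.
    return dit_input, semantic_ref_tokens

x = torch.randn(1, 16, 4, 32, 32)    # noise latents for 16 latent frames
ref = torch.randn(1, 3, 4, 32, 32)   # three reference elements, VAE-encoded
sem = torch.randn(1, 3 * 257, 1024)  # CLIP tokens for the same elements
dit_in, ctx = build_dit_conditioning(x, ref, sem)  # (1, 19, 4, 32, 32), ctx
```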

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

  • CINEMA: Coherent Multi-Subject Video Generation via MLLM-Based Guidance (2025) - https://huggingface.co/papers/2503.10391
  • CustomVideoX: 3D Reference Attention Driven Dynamic Adaptation for Zero-Shot Customized Video Diffusion Transformers (2025) - https://huggingface.co/papers/2502.06527
  • Goku: Flow Based Video Generative Foundation Models (2025) - https://huggingface.co/papers/2502.04896
  • Phantom: Subject-consistent video generation via cross-modal alignment (2025) - https://huggingface.co/papers/2502.11079
  • Raccoon: Multi-stage Diffusion Training with Coarse-to-Fine Curating Videos (2025) - https://huggingface.co/papers/2502.21314
  • RealGeneral: Unifying Visual Generation via Temporal In-Context Learning with Video Models (2025) - https://huggingface.co/papers/2503.10406
  • Get In Video: Add Anything You Want to the Video (2025) - https://huggingface.co/papers/2503.06268

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper 1

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2504.02436 in a dataset README.md to link it from this page.

Spaces citing this paper 3

Collections including this paper 7