arxiv:2506.09229

Cross-Frame Representation Alignment for Fine-Tuning Video Diffusion Models

Published on Jun 10, 2025 · Submitted by Sungwon Hwang on Jun 12, 2025
Authors: Sungwon Hwang, Hyojin Jang, Kinam Kim, Minho Park, Jaegul Choo

Project page: https://crepavideo.github.io · Code: https://github.com/deepshwang/crepa

Abstract

AI-generated summary: Cross-frame Representation Alignment (CREPA) enhances video diffusion model fine-tuning by improving visual fidelity and semantic coherence across frames using parameter-efficient methods.

Fine-tuning Video Diffusion Models (VDMs) at the user level to generate videos that reflect specific attributes of training data presents notable challenges, yet remains underexplored despite its practical importance. Meanwhile, recent work such as Representation Alignment (REPA) has shown promise in improving the convergence and quality of DiT-based image diffusion models by aligning, or assimilating, their internal hidden states with external pretrained visual features, suggesting its potential for VDM fine-tuning. In this work, we first propose a straightforward adaptation of REPA for VDMs and empirically show that, while effective for convergence, it is suboptimal in preserving semantic consistency across frames. To address this limitation, we introduce Cross-frame Representation Alignment (CREPA), a novel regularization technique that aligns the hidden states of a frame with external features from neighboring frames. Empirical evaluations on large-scale VDMs, including CogVideoX-5B and Hunyuan Video, demonstrate that CREPA improves both visual fidelity and cross-frame semantic coherence when fine-tuned with parameter-efficient methods such as LoRA. We further validate CREPA across diverse datasets with varying attributes, confirming its broad applicability. Project page: https://crepavideo.github.io
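The abstract describes the mechanism (aligning a frame's hidden states with pretrained features of neighboring frames) but not its exact formulation. Below is a minimal sketch of what a REPA-style cross-frame alignment term could look like, assuming per-frame DiT hidden states, frozen external features (e.g., from DINOv2) per frame, a trainable projection head, and illustrative neighbor offsets of ±1; the function name, shapes, and offsets are assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def crepa_loss(hidden_states, ext_features, proj, offsets=(-1, 1)):
    """Hypothetical cross-frame alignment regularizer (CREPA-style sketch).

    hidden_states: (B, T, N, D)      per-frame DiT hidden states
    ext_features:  (B, T, N, D_ext)  frozen pretrained features per frame
    proj:          trainable module mapping D -> D_ext (e.g., a small MLP)
    offsets:       neighbor frame offsets to align with (illustrative)
    """
    T = hidden_states.shape[1]
    z = proj(hidden_states)  # project hidden states into the feature space
    loss = hidden_states.new_zeros(())
    for dt in offsets:
        # Index neighboring frames, clamping at the clip boundaries.
        idx = (torch.arange(T, device=z.device) + dt).clamp(0, T - 1)
        target = ext_features[:, idx].detach()  # stop-grad on frozen features
        # Maximize patch-wise cosine similarity with the neighbor's features.
        cos = F.cosine_similarity(z, target, dim=-1)
        loss = loss + (1.0 - cos).mean()
    return loss / len(offsets)
```

In training, such a term would be added to the usual denoising objective, e.g. `loss = diffusion_loss + lam * crepa_loss(...)`, with the projection head and LoRA adapters as the only trainable parameters while the base VDM stays frozen.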

Community


This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper: 2

Datasets citing this paper: 0

No dataset linking this paper

Cite arxiv.org/abs/2506.09229 in a dataset README.md to link it from this page.
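As an illustration, a hypothetical line in a dataset card that would create such a link (the link text is arbitrary; only the arXiv URL matters):

```markdown
Training followed [CREPA (arXiv:2506.09229)](https://arxiv.org/abs/2506.09229).
```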

Spaces citing this paper: 0

No Space linking this paper

Cite arxiv.org/abs/2506.09229 in a Space README.md to link it from this page.

Collections including this paper: 1