arxiv:2501.11325

CatV2TON: Taming Diffusion Transformers for Vision-Based Virtual Try-On with Temporal Concatenation

Published on Jan 20, 2025 · Submitted by Jun on Jan 27, 2025

Authors: Zheng Chong, Wenqing Zhang, Shiyue Zhang, Jun Zheng, Xiao Dong, Haoxiang Li, Yiling Wu, Dongmei Jiang, Xiaodan Liang

Code: https://github.com/zheng-chong/catv2ton
Abstract

AI-generated summary

CatV2TON, a vision-based virtual try-on method using a diffusion transformer model, achieves high-quality results for both image and video try-on tasks, including efficient long-video generation through overlapping clip-based inference and adaptive clip normalization.

Virtual try-on (VTON) technology has gained attention due to its potential to transform online retail by enabling realistic clothing visualization in images and videos. However, most existing methods struggle to achieve high-quality results across image and video try-on tasks, especially in long video scenarios. In this work, we introduce CatV2TON, a simple and effective vision-based virtual try-on (V2TON) method that supports both image and video try-on tasks with a single diffusion transformer model. By temporally concatenating garment and person inputs and training on a mix of image and video datasets, CatV2TON achieves robust try-on performance across static and dynamic settings. For efficient long-video generation, we propose an overlapping clip-based inference strategy that uses sequential frame guidance and Adaptive Clip Normalization (AdaCN) to maintain temporal consistency with reduced resource demands. We also present ViViD-S, a refined video try-on dataset constructed by filtering out back-facing frames and applying 3D mask smoothing for enhanced temporal consistency. Comprehensive experiments demonstrate that CatV2TON outperforms existing methods in both image and video try-on tasks, offering a versatile and reliable solution for realistic virtual try-on across diverse scenarios.
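
As a reading aid, the sketch below (PyTorch) illustrates how the two mechanisms described in the abstract could fit together: temporal concatenation of garment and person latents for a single diffusion transformer, and overlapping clip-based inference that reuses the last generated frames as guidance and applies an AdaCN-style statistic alignment. This is a minimal sketch under assumed tensor shapes; `denoise_clip` is a stub standing in for the actual model, and the mean/std matching is an assumed form of AdaCN, not the paper's exact formulation.

```python
# Illustrative sketch only (not the authors' implementation). Assumed here:
# latent tensors shaped (batch, frames, channels, height, width), a stub
# `denoise_clip` in place of the real diffusion transformer, and simple
# mean/std matching as a stand-in for Adaptive Clip Normalization (AdaCN).
import torch


def concat_garment_person(garment_latent, person_latent):
    """Temporal concatenation: garment frames and masked person frames form one
    sequence along the frame axis, so a single DiT can attend over both."""
    return torch.cat([garment_latent, person_latent], dim=1)


def adaptive_clip_norm(clip, ref):
    """AdaCN-style correction (assumed form): shift/scale the current clip so its
    per-channel statistics match the guidance frames shared with the previous clip."""
    mu_c = clip.mean(dim=(1, 3, 4), keepdim=True)
    std_c = clip.std(dim=(1, 3, 4), keepdim=True)
    mu_r = ref.mean(dim=(1, 3, 4), keepdim=True)
    std_r = ref.std(dim=(1, 3, 4), keepdim=True)
    return (clip - mu_c) / (std_c + 1e-6) * std_r + mu_r


def denoise_clip(person_clip, garment_latent, guide_frames=None):
    """Stub for the diffusion transformer: the real model would denoise the person
    frames conditioned on the concatenated garment tokens (and, for later clips,
    on guide_frames). Here we simply return the person part of the sequence."""
    tokens = concat_garment_person(garment_latent, person_clip)
    return tokens[:, garment_latent.shape[1]:]


def long_video_tryon(person_latents, garment_latent, clip_len=16, overlap=4):
    """Overlapping clip-based inference: each new clip re-includes the last
    `overlap` generated frames, uses them as guidance, aligns statistics with
    the AdaCN-style correction, and keeps only the newly generated frames."""
    total_frames = person_latents.shape[1]
    generated = None
    while generated is None or generated.shape[1] < total_frames:
        start = 0 if generated is None else generated.shape[1] - overlap
        end = min(start + clip_len, total_frames)
        guide = None if generated is None else generated[:, -overlap:]
        clip_out = denoise_clip(person_latents[:, start:end], garment_latent, guide)
        if guide is not None:
            clip_out = adaptive_clip_norm(clip_out, guide)
            clip_out = clip_out[:, overlap:]  # drop frames shared with the previous clip
        generated = clip_out if generated is None else torch.cat([generated, clip_out], dim=1)
    return generated


if __name__ == "__main__":
    # Toy shapes: one garment frame and 40 person frames of 4x(64x48) latents.
    garment = torch.randn(1, 1, 4, 64, 48)
    person = torch.randn(1, 40, 4, 64, 48)
    video = long_video_tryon(person, garment, clip_len=16, overlap=4)
    print(video.shape)  # torch.Size([1, 40, 4, 64, 48])
```

In this toy run, each 16-frame window re-includes the last 4 generated frames, so the statistic alignment is computed on exactly the shared frames before the new frames are appended to the output video.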

Community


This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

* [Dynamic Try-On: Taming Video Virtual Try-on with Dynamic Attention Mechanism](https://huggingface.co/papers/2412.09822) (2024)
* [VividFace: A Diffusion-Based Hybrid Framework for High-Fidelity Video Face Swapping](https://huggingface.co/papers/2412.11279) (2024)
* [Learning Implicit Features with Flow Infused Attention for Realistic Virtual Try-On](https://huggingface.co/papers/2412.11435) (2024)
* [MC-VTON: Minimal Control Virtual Try-On Diffusion Transformer](https://huggingface.co/papers/2501.03630) (2025)
* [Hallo3: Highly Dynamic and Realistic Portrait Image Animation with Diffusion Transformer Networks](https://huggingface.co/papers/2412.00733) (2024)
* [SwiftTry: Fast and Consistent Video Virtual Try-On with Diffusion Models](https://huggingface.co/papers/2412.10178) (2024)
* [CPA: Camera-pose-awareness Diffusion Transformer for Video Generation](https://huggingface.co/papers/2412.01429) (2024)

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any Paper on Hugging Face checkout this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend

Excellent paper! Are there plans for releasing this as open source?


Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2501.11325 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2501.11325 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2501.11325 in a Space README.md to link it from this page.

Collections including this paper 1