Paper page - DreamMatcher: Appearance Matching Self-Attention for Semantically-Consistent Text-to-Image Personalization

arxiv:2402.09812

DreamMatcher: Appearance Matching Self-Attention for Semantically-Consistent Text-to-Image Personalization

Published on Feb 15, 2024
· Submitted by AK on Feb 16, 2024
Authors: Jisu Nam, Heesu Kim, DongJae Lee, Siyoon Jin, Seungryong Kim, Seunggyu Chang

Abstract

DreamMatcher personalizes text-to-image (T2I) models by semantically matching reference concepts to target prompts, improving accuracy and diversity without altering the model's structure.

AI-generated summary

The objective of text-to-image (T2I) personalization is to customize a diffusion model to a user-provided reference concept, generating diverse images of the concept aligned with the target prompts. Conventional methods that represent the reference concept with a unique text embedding often fail to accurately mimic the appearance of the reference. One solution is to explicitly condition the reference images on the target denoising process, known as key-value replacement; however, prior works are constrained to local editing because they disrupt the structure path of the pre-trained T2I model. To overcome this, we propose a novel plug-in method, called DreamMatcher, which reformulates T2I personalization as semantic matching. Specifically, DreamMatcher replaces the target values with reference values aligned by semantic matching, while leaving the structure path unchanged to preserve the versatile capability of pre-trained T2I models for generating diverse structures. We also introduce a semantic-consistent masking strategy to isolate the personalized concept from irrelevant regions introduced by the target prompts. Compatible with existing T2I models, DreamMatcher shows significant improvements in complex scenarios. Intensive analyses demonstrate the effectiveness of our approach.
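To make the value-replacement idea above concrete, here is a minimal PyTorch-style sketch. It is not the authors' implementation: the function name, tensor shapes, the cosine-similarity matcher, and the argmax alignment are illustrative assumptions, and the real method operates inside the denoising U-Net's self-attention layers with more robust matching and masking.

```python
# Illustrative sketch only (assumed names/shapes), not the official DreamMatcher code.
import torch
import torch.nn.functional as F

def appearance_matching_attention(q_tgt, k_tgt, v_tgt,
                                  feat_tgt, feat_ref, v_ref, mask=None):
    """Self-attention where target values are swapped for semantically matched
    reference values, while queries/keys (the structure path) stay untouched.

    q_tgt, k_tgt, v_tgt: (N, d) target query/key/value tokens
    feat_tgt, feat_ref:  (N, c) features used for semantic matching
    v_ref:               (N, d) reference value tokens
    mask:                (N,)   optional bool mask isolating the concept region
    """
    # 1) Semantic matching: for each target token, find its most similar reference token.
    sim = F.normalize(feat_tgt, dim=-1) @ F.normalize(feat_ref, dim=-1).T  # (N, N)
    match = sim.argmax(dim=-1)                                             # (N,)

    # 2) Replace target values with the aligned reference values.
    v_matched = v_ref[match]
    if mask is not None:
        # Semantic-consistent masking: only swap values inside the concept region.
        v_matched = torch.where(mask[:, None], v_matched, v_tgt)

    # 3) Standard attention on the UNCHANGED structure path (q_tgt, k_tgt).
    attn = torch.softmax(q_tgt @ k_tgt.T / q_tgt.shape[-1] ** 0.5, dim=-1)
    return attn @ v_matched

# Toy usage with random tensors.
N, d, c = 64, 32, 16
out = appearance_matching_attention(
    torch.randn(N, d), torch.randn(N, d), torch.randn(N, d),
    torch.randn(N, c), torch.randn(N, c), torch.randn(N, d),
    mask=torch.rand(N) > 0.5,
)
print(out.shape)  # torch.Size([64, 32])
```

Because the queries and keys of the target generation are left as-is, the layout produced by the pre-trained T2I model is preserved; only the values, which carry appearance, are drawn from the reference, which is the intuition behind "semantically-consistent" personalization described in the abstract.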

Community

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2402.09812 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2402.09812 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2402.09812 in a Space README.md to link it from this page.

Collections including this paper 6