arxiv:2501.07730

Democratizing Text-to-Image Masked Generative Models with Compact Text-Aware One-Dimensional Tokens

Published on Jan 13, 2025 · Submitted by Ju He on Jan 15, 2025
Authors: Dongwon Kim, Ju He, Qihang Yu, Chenglin Yang, Xiaohui Shen, Suha Kwak, Liang-Chieh Chen

Abstract

AI-generated summary: TA-TiTok, a text-aware transformer-based tokenizer, integrates textual information during decoding, improving training efficiency and enabling MaskGen models to achieve comparable performance using open datasets.

Image tokenizers form the foundation of modern text-to-image generative models but are notoriously difficult to train. Furthermore, most existing text-to-image models rely on large-scale, high-quality private datasets, making them challenging to replicate. In this work, we introduce Text-Aware Transformer-based 1-Dimensional Tokenizer (TA-TiTok), an efficient and powerful image tokenizer that can utilize either discrete or continuous 1-dimensional tokens. TA-TiTok uniquely integrates textual information during the tokenizer decoding stage (i.e., de-tokenization), accelerating convergence and enhancing performance. TA-TiTok also benefits from a simplified, yet effective, one-stage training process, eliminating the need for the complex two-stage distillation used in previous 1-dimensional tokenizers. This design allows for seamless scalability to large datasets. Building on this, we introduce a family of text-to-image Masked Generative Models (MaskGen), trained exclusively on open data while achieving comparable performance to models trained on private data. We aim to release both the efficient, strong TA-TiTok tokenizers and the open-data, open-weight MaskGen models to promote broader access and democratize the field of text-to-image masked generative models.
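
A minimal sketch may help make the core idea concrete: during de-tokenization, the decoder attends over the compact 1D latent tokens together with projected caption embeddings, so the text guides reconstruction. The snippet below is an illustration under assumed names and sizes (a generic Transformer block, 768-dimensional CLIP-like text features, 256 image patches); it is not the paper's actual TA-TiTok implementation.

# Illustrative text-aware de-tokenizer: learnable patch queries attend over
# 1D latent tokens plus projected caption embeddings, then predict pixels.
import torch
import torch.nn as nn

class TextAwareDetokenizer(nn.Module):
    def __init__(self, dim=768, text_dim=768, num_patches=256, patch_dim=3 * 16 * 16):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, dim)                             # project caption features
        self.patch_queries = nn.Parameter(torch.randn(1, num_patches, dim))   # learnable output positions
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=4)
        self.to_pixels = nn.Linear(dim, patch_dim)                            # pixels for each 16x16 patch

    def forward(self, latent_tokens, text_emb):
        # latent_tokens: (B, K, dim) compact 1D tokens from the image encoder
        # text_emb:      (B, T, text_dim) caption embeddings, e.g. from a frozen text encoder
        b = latent_tokens.size(0)
        queries = self.patch_queries.expand(b, -1, -1)
        ctx = torch.cat([queries, latent_tokens, self.text_proj(text_emb)], dim=1)
        out = self.blocks(ctx)[:, : queries.size(1)]                          # keep only the patch positions
        return self.to_pixels(out)                                            # (B, num_patches, patch_dim)

# usage: reconstruct a 256x256 image (256 patches of 16x16x3) from 128 latent tokens and 77 caption tokens
recon = TextAwareDetokenizer()(torch.randn(2, 128, 768), torch.randn(2, 77, 768))
print(recon.shape)  # torch.Size([2, 256, 768])

In this sketch the caption only influences reconstruction on the decoder side, while the encoder and the compact 1D latent space are untouched, which is consistent with the abstract's description of injecting text at the de-tokenization stage.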

Community

Paper author and submitter

We introduce TA-TiTok, a novel text-aware, transformer-based 1D tokenizer capable of processing both discrete and continuous tokens while ensuring accurate alignment between reconstructions and textual descriptions. Building upon TA-TiTok, we present MaskGen, a family of text-to-image masked generative models trained exclusively on open data. MaskGen achieves performance on par with models trained on proprietary datasets, while significantly reducing training costs and delivering substantially faster inference speeds.

Project page: https://tacju.github.io/projects/maskgen.html
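
For readers new to this model family, the loop below sketches the standard iterative parallel decoding used by masked generative transformers (in the style of MaskGIT): start fully masked, predict all tokens, commit the most confident ones, and re-mask the rest on a shrinking schedule. The confidence rule, cosine schedule, token counts, and the model(tokens, text_emb) interface are illustrative assumptions, not MaskGen's exact recipe.

# Generic masked-generation sampling loop over discrete 1D tokens (illustrative only).
import math
import torch

def masked_generate(model, text_emb, num_tokens=128, vocab_size=4096, steps=12, device="cpu"):
    MASK = vocab_size  # reserve one extra id as the [MASK] token
    tokens = torch.full((1, num_tokens), MASK, dtype=torch.long, device=device)
    for step in range(steps):
        logits = model(tokens, text_emb)                     # (1, num_tokens, vocab_size)
        conf, pred = logits.softmax(-1).max(-1)              # per-position confidence and prediction
        conf = torch.where(tokens == MASK, conf, torch.ones_like(conf))  # never re-mask committed tokens
        tokens = torch.where(tokens == MASK, pred, tokens)   # fill masked slots with predictions
        # cosine schedule: how many positions stay masked after this step
        keep_masked = math.floor(num_tokens * math.cos(math.pi / 2 * (step + 1) / steps))
        if keep_masked > 0:
            remask = conf.topk(keep_masked, largest=False).indices  # lowest-confidence positions
            tokens[0, remask[0]] = MASK
    return tokens  # discrete token ids, to be de-tokenized into pixels

# usage with a stand-in model that returns random logits
dummy = lambda toks, txt: torch.randn(1, toks.size(1), 4096)
ids = masked_generate(dummy, text_emb=None)
print(ids.shape)  # torch.Size([1, 128])

The sampled ids would then be passed through the text-aware de-tokenizer to produce the image; generating a small, fixed number of compact 1D tokens in a handful of parallel steps is what underlies the fast inference the authors highlight.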

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

- SoftVQ-VAE: Efficient 1-Dimensional Continuous Tokenizer (2024): https://huggingface.co/papers/2412.10958
- Factorized Visual Tokenization and Generation (2024): https://huggingface.co/papers/2411.16681
- MuLan: Adapting Multilingual Diffusion Models for Hundreds of Languages with Negligible Cost (2024): https://huggingface.co/papers/2412.01271
- Language-Guided Image Tokenization for Generation (2024): https://huggingface.co/papers/2412.05796
- CAT: Content-Adaptive Image Tokenization (2025): https://huggingface.co/papers/2501.03120
- Liquid: Language Models are Scalable Multi-modal Generators (2024): https://huggingface.co/papers/2412.04332
- Hierarchical Vision-Language Alignment for Text-to-Image Generation via Diffusion Models (2025): https://huggingface.co/papers/2501.00917

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space: https://huggingface.co/spaces/librarian-bots/recommend_similar_papers

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend

Check out a detailed walkthrough of the paper: https://gyanendradas.substack.com/p/ta-titok-paper-explained


Models citing this paper: 10

Browse 10 models citing this paper

Datasets citing this paper: 0

No dataset linking this paper

Cite arxiv.org/abs/2501.07730 in a dataset README.md to link it from this page.

Spaces citing this paper: 0

No Space linking this paper

Cite arxiv.org/abs/2501.07730 in a Space README.md to link it from this page.

Collections including this paper: 3