\n","updatedAt":"2025-03-28T02:24:19.621Z","author":{"_id":"60f1abe7544c2adfd699860c","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg","fullname":"AK","name":"akhaliq","type":"user","isPro":false,"isHf":true,"isHfAdmin":false,"isMod":false,"followerCount":9179,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.3263329565525055},"editors":["akhaliq"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg"],"reactions":[],"isReport":false}},{"id":"67e74e3f048f3bf1ddb2308b","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false},"createdAt":"2025-03-29T01:34:55.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. 
\n\nThe following papers were recommended by the Semantic Scholar API \n\n* [DesignDiffusion: High-Quality Text-to-Design Image Generation with Diffusion Models](https://huggingface.co/papers/2503.01645) (2025)\n* [LDGen: Enhancing Text-to-Image Synthesis via Large Language Model-Driven Language Representation](https://huggingface.co/papers/2502.18302) (2025)\n* [TextInVision: Text and Prompt Complexity Driven Visual Text Generation Benchmark](https://huggingface.co/papers/2503.13730) (2025)\n* [REAL: Realism Evaluation of Text-to-Image Generation Models for Effective Data Augmentation](https://huggingface.co/papers/2502.10663) (2025)\n* [POSTA: A Go-to Framework for Customized Artistic Poster Generation](https://huggingface.co/papers/2503.14908) (2025)\n* [Beyond Words: Advancing Long-Text Image Generation via Multimodal Autoregressive Models](https://huggingface.co/papers/2503.20198) (2025)\n* [Text-driven 3D Human Generation via Contrastive Preference Optimization](https://huggingface.co/papers/2502.08977) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`","html":"

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

\n

The following papers were recommended by the Semantic Scholar API

\n\n

Please give a thumbs up to this comment if you found it helpful!

\n

If you want recommendations for any Paper on Hugging Face checkout this Space

\n

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: \n\n@librarian-bot\n\t recommend

\n","updatedAt":"2025-03-29T01:34:55.858Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.6568891406059265},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2503.21749","authors":[{"_id":"67e6041d9a97e46f3102f7cc","user":{"_id":"62c66504031996c36c86976a","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/62c66504031996c36c86976a/wIq0YJhkWnEhlzsh-TGYO.png","isPro":false,"fullname":"steve z","user":"stzhao","type":"user"},"name":"Shitian Zhao","status":"claimed_verified","statusLastChangedAt":"2025-03-28T08:37:47.519Z","hidden":false},{"_id":"67e6041d9a97e46f3102f7cd","user":{"_id":"64379d79fac5ea753f1c10f3","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/64379d79fac5ea753f1c10f3/clfjIaMTVDTG9K04dRud_.png","isPro":false,"fullname":"Jerry Wu","user":"QJerry","type":"user"},"name":"Qilong Wu","status":"claimed_verified","statusLastChangedAt":"2025-03-28T08:37:55.109Z","hidden":false},{"_id":"67e6041d9a97e46f3102f7ce","user":{"_id":"66aba287b0f0b7411f511a47","avatarUrl":"/avatars/1450f182c38e80066ae5ea5df4fa218f.svg","isPro":false,"fullname":"Xinyue Li","user":"Xxxy13","type":"user"},"name":"Xinyue Li","status":"claimed_verified","statusLastChangedAt":"2025-03-28T08:37:52.434Z","hidden":false},{"_id":"67e6041d9a97e46f3102f7cf","user":{"_id":"643dfd235aafbdca3a5792c0","avatarUrl":"/avatars/ce8553cf5936012c692e08054ee27937.svg","isPro":false,"fullname":"Bo Zhang","user":"BoZhang","type":"user"},"name":"Bo 
Zhang","status":"claimed_verified","statusLastChangedAt":"2025-03-31T08:14:50.759Z","hidden":false},{"_id":"67e6041d9a97e46f3102f7d0","user":{"_id":"6794cd79b72b1721ea69f4f2","avatarUrl":"/avatars/4e4fb9e9e127a0c031131ace705687cd.svg","isPro":false,"fullname":"Ming Li","user":"afdsafas","type":"user"},"name":"Ming Li","status":"claimed_verified","statusLastChangedAt":"2025-03-28T08:37:49.525Z","hidden":false},{"_id":"67e6041d9a97e46f3102f7d1","user":{"_id":"66bb136002fd8eb58bc84ffb","avatarUrl":"/avatars/122cb8f59c502392768099b3c2afe043.svg","isPro":false,"fullname":"qinqi","user":"Dakerqi","type":"user"},"name":"Qi Qin","status":"claimed_verified","statusLastChangedAt":"2025-03-28T08:58:27.757Z","hidden":false},{"_id":"67e6041d9a97e46f3102f7d2","user":{"_id":"646f1bef075e11ca78da3bb7","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/646f1bef075e11ca78da3bb7/gNS-ikyZXYeMrf4a7HTQE.jpeg","isPro":false,"fullname":"Dongyang Liu (Chris Liu)","user":"Cxxs","type":"user"},"name":"Dongyang Liu","status":"admin_assigned","statusLastChangedAt":"2025-03-28T08:49:23.924Z","hidden":false},{"_id":"67e6041d9a97e46f3102f7d3","user":{"_id":"63527f4e7d071f23d085ad45","avatarUrl":"/avatars/99a51adef5673b3ac1a8c02eb47759c4.svg","isPro":false,"fullname":"KAIPENG ZHANG","user":"kpzhang","type":"user"},"name":"Kaipeng Zhang","status":"admin_assigned","statusLastChangedAt":"2025-03-28T08:49:30.453Z","hidden":false},{"_id":"67e6041d9a97e46f3102f7d4","user":{"_id":"65c04e9c27a5fdca81abcbd9","avatarUrl":"/avatars/12a155683c824fa23da4a9e2bed4f64e.svg","isPro":false,"fullname":"Hongsheng LI","user":"hsli-cuhk","type":"user"},"name":"Hongsheng Li","status":"admin_assigned","statusLastChangedAt":"2025-03-28T08:49:37.542Z","hidden":false},{"_id":"67e6041d9a97e46f3102f7d5","name":"Yu 
Qiao","hidden":false},{"_id":"67e6041d9a97e46f3102f7d6","user":{"_id":"67b299cc6f6dc4376d9e6c76","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/UniMpmfOUlyiSOrf47wuT.png","isPro":false,"fullname":"Peng Gao","user":"cosumosu25","type":"user"},"name":"Peng Gao","status":"admin_assigned","statusLastChangedAt":"2025-03-28T08:49:44.411Z","hidden":false},{"_id":"67e6041d9a97e46f3102f7d7","name":"Bin Fu","hidden":false},{"_id":"67e6041d9a97e46f3102f7d8","user":{"_id":"6285a9133ab6642179158944","avatarUrl":"/avatars/6e10fa07c94141fcdbe0cab02bb731ca.svg","isPro":false,"fullname":"Zhen Li","user":"Paper99","type":"user"},"name":"Zhen Li","status":"claimed_verified","statusLastChangedAt":"2025-03-28T08:58:29.550Z","hidden":false}],"publishedAt":"2025-03-27T17:56:15.000Z","submittedOnDailyAt":"2025-03-28T00:54:19.590Z","title":"LeX-Art: Rethinking Text Generation via Scalable High-Quality Data\n Synthesis","submittedOnDailyBy":{"_id":"60f1abe7544c2adfd699860c","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg","isPro":false,"fullname":"AK","user":"akhaliq","type":"user"},"summary":"We introduce LeX-Art, a comprehensive suite for high-quality text-image\nsynthesis that systematically bridges the gap between prompt expressiveness and\ntext rendering fidelity. Our approach follows a data-centric paradigm,\nconstructing a high-quality data synthesis pipeline based on Deepseek-R1 to\ncurate LeX-10K, a dataset of 10K high-resolution, aesthetically refined\n1024times1024 images. Beyond dataset construction, we develop LeX-Enhancer,\na robust prompt enrichment model, and train two text-to-image models, LeX-FLUX\nand LeX-Lumina, achieving state-of-the-art text rendering performance. 
To\nsystematically evaluate visual text generation, we introduce LeX-Bench, a\nbenchmark that assesses fidelity, aesthetics, and alignment, complemented by\nPairwise Normalized Edit Distance (PNED), a novel metric for robust text\naccuracy evaluation. Experiments demonstrate significant improvements, with\nLeX-Lumina achieving a 79.81% PNED gain on CreateBench, and LeX-FLUX\noutperforming baselines in color (+3.18%), positional (+4.45%), and font\naccuracy (+3.81%). Our codes, models, datasets, and demo are publicly\navailable.","upvotes":26,"discussionId":"67e6041f9a97e46f3102f89b","projectPage":"https://zhaoshitian.github.io/lexart/","githubRepo":"https://github.com/zhaoshitian/LeX-Art","githubRepoAddedBy":"user","ai_summary":"A suite called LeX-Art for high-quality text-image synthesis includes data-centric pipeline, prompt enrichment, and text-to-image models, achieving state-of-the-art performance with a new benchmark and metric.","ai_keywords":["Deepseek-R1","LeX-10K","LeX-Enhancer","LeX-FLUX","LeX-Lumina","LeX-Bench","Pairwise Normalized Edit Distance (PNED)","CreateBench"],"githubStars":78},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"64379d79fac5ea753f1c10f3","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/64379d79fac5ea753f1c10f3/clfjIaMTVDTG9K04dRud_.png","isPro":false,"fullname":"Jerry Wu","user":"QJerry","type":"user"},{"_id":"66bb136002fd8eb58bc84ffb","avatarUrl":"/avatars/122cb8f59c502392768099b3c2afe043.svg","isPro":false,"fullname":"qinqi","user":"Dakerqi","type":"user"},{"_id":"6285a9133ab6642179158944","avatarUrl":"/avatars/6e10fa07c94141fcdbe0cab02bb731ca.svg","isPro":false,"fullname":"Zhen Li","user":"Paper99","type":"user"},{"_id":"66aba287b0f0b7411f511a47","avatarUrl":"/avatars/1450f182c38e80066ae5ea5df4fa218f.svg","isPro":false,"fullname":"Xinyue 
Li","user":"Xxxy13","type":"user"},{"_id":"64296a5c8136224fee066141","avatarUrl":"/avatars/5fa93970bbb2cbb32159c4ad102204bc.svg","isPro":false,"fullname":"Hongcheng Duan","user":"Stuprosur","type":"user"},{"_id":"67235cfc515f82f8184243ef","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/67235cfc515f82f8184243ef/zaweKA1WFv0vlcnqfUSAQ.png","isPro":false,"fullname":"Hao Wang","user":"haowang","type":"user"},{"_id":"6794cd79b72b1721ea69f4f2","avatarUrl":"/avatars/4e4fb9e9e127a0c031131ace705687cd.svg","isPro":false,"fullname":"Ming Li","user":"afdsafas","type":"user"},{"_id":"6355eaf660c1b72f6269bc64","avatarUrl":"/avatars/dd176b9d6db2ac63c19c3170566a3f35.svg","isPro":false,"fullname":"Jiaming Li","user":"Geaming","type":"user"},{"_id":"620783f24e28382272337ba4","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/620783f24e28382272337ba4/zkUveQPNiDfYjgGhuFErj.jpeg","isPro":false,"fullname":"GuoLiangTang","user":"Tommy930","type":"user"},{"_id":"67e624621bd3274255e7e0fe","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/-ZFZDLHFOZj2c3iz60hG1.jpeg","isPro":false,"fullname":"Yanuar Saputra","user":"Yansa","type":"user"},{"_id":"66fbe619ac1c8e2672036e21","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/jBRPeY2W10nKE1FSkw_2g.jpeg","isPro":false,"fullname":"Kaiyuan Yang","user":"Garfield-Kaiyuan","type":"user"},{"_id":"642d3a8284bf892b8fa921c9","avatarUrl":"/avatars/d1e9debcb02177c42094d3aaba42c13f.svg","isPro":false,"fullname":"wuuu","user":"tivon","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0}">
Papers
arxiv:2503.21749

LeX-Art: Rethinking Text Generation via Scalable High-Quality Data Synthesis

Published on Mar 27, 2025
· Submitted by AK on Mar 28, 2025

Abstract

LeX-Art is a suite for high-quality text-image synthesis that includes a data-centric pipeline, a prompt enrichment model, and text-to-image models, achieving state-of-the-art performance together with a new benchmark and metric.

AI-generated summary

We introduce LeX-Art, a comprehensive suite for high-quality text-image synthesis that systematically bridges the gap between prompt expressiveness and text rendering fidelity. Our approach follows a data-centric paradigm, constructing a high-quality data synthesis pipeline based on Deepseek-R1 to curate LeX-10K, a dataset of 10K high-resolution, aesthetically refined 1024×1024 images. Beyond dataset construction, we develop LeX-Enhancer, a robust prompt enrichment model, and train two text-to-image models, LeX-FLUX and LeX-Lumina, achieving state-of-the-art text rendering performance. To systematically evaluate visual text generation, we introduce LeX-Bench, a benchmark that assesses fidelity, aesthetics, and alignment, complemented by Pairwise Normalized Edit Distance (PNED), a novel metric for robust text accuracy evaluation. Experiments demonstrate significant improvements, with LeX-Lumina achieving a 79.81% PNED gain on CreateBench, and LeX-FLUX outperforming baselines in color (+3.18%), positional (+4.45%), and font accuracy (+3.81%). Our codes, models, datasets, and demo are publicly available.
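The abstract does not spell out how PNED is computed, but it is described as building on normalized edit distance between rendered and target text. As a rough illustration only, here is a minimal sketch of a plain normalized (Levenshtein) edit distance, the building block such a metric would presumably use; the paper's actual PNED, including its pairwise matching of text regions, may differ.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(
                prev[j] + 1,                 # deletion
                cur[j - 1] + 1,              # insertion
                prev[j - 1] + (ca != cb),    # substitution (0 if chars match)
            ))
        prev = cur
    return prev[-1]


def normalized_edit_distance(pred: str, target: str) -> float:
    """Edit distance normalized by the longer string, so the score lies in [0, 1]."""
    if not pred and not target:
        return 0.0
    return levenshtein(pred, target) / max(len(pred), len(target))
```

Lower is better: 0.0 means the rendered text matches the target exactly, and 1.0 means no characters align at all.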

Community

Paper submitter

[Screenshot attached]

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any Paper on Hugging Face, check out this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper 3

Datasets citing this paper 5


Spaces citing this paper 3

Collections including this paper 8