arxiv:2602.14178

UniWeTok: An Unified Binary Tokenizer with Codebook Size 2^{128} for Unified Multimodal Large Language Model

Published on Feb 15 · Submitted by taesiri on Feb 17
Authors: Shaobin Zhuang, Yuang Ai, Jiaming Han, Weijia Mao, Xiaohui Li, Fangyikang Wang, Xiao Wang, Yan Li, Shanchuan Lin, Kun Xu, Zhenheng Yang, Huaibo Huang, Xiangyu Yue, Hao Chen, Yali Wang
AI-generated summary

UniWeTok introduces a unified discrete tokenizer with a massive binary codebook and novel training techniques to achieve superior performance in image generation and multimodal tasks while reducing computational requirements.
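
For scale, the stated codebook size corresponds to 128 binary latent dimensions; the arithmetic below is our gloss, not a figure from the paper:

```latex
% 128 independent binary dimensions give an implicit codebook of
% 2^128 entries; each visual token is therefore a 128-bit (16-byte) code.
|\mathcal{C}| = 2^{d}, \quad d = 128
  \;\Longrightarrow\; |\mathcal{C}| = 2^{128} \approx 3.4 \times 10^{38}.
```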

Abstract

Unified Multimodal Large Language Models (MLLMs) require a visual representation that simultaneously supports high-fidelity reconstruction, complex semantic extraction, and generative suitability. However, existing visual tokenizers typically struggle to satisfy these conflicting objectives within a single framework. In this paper, we introduce UniWeTok, a unified discrete tokenizer designed to bridge this gap using a massive binary codebook (2^{128}). For the training framework, we introduce Pre-Post Distillation and a Generative-Aware Prior to enhance the semantic extraction and generative prior of the discrete tokens. In terms of model architecture, we propose a convolution-attention hybrid architecture with the SigLu activation function. SigLu activation not only bounds the encoder output and stabilizes the semantic distillation process but also effectively addresses the optimization conflict between the token entropy loss and the commitment loss. We further propose a three-stage training framework designed to enhance UniWeTok's adaptability across various image resolutions and perception-sensitive scenarios, such as those involving human faces and textual content. On ImageNet, UniWeTok achieves state-of-the-art image generation performance (FID: UniWeTok 1.38 vs. REPA 1.42) while requiring remarkably little training compute (Training Tokens: UniWeTok 33B vs. REPA 262B). On general-domain benchmarks, UniWeTok demonstrates highly competitive capabilities across a broad range of tasks, including multimodal understanding, image generation (DPG Score: UniWeTok 86.63 vs. FLUX.1 [Dev] 83.84), and editing (GEdit Overall Score: UniWeTok 5.09 vs. OmniGen 5.06). We release code and models to facilitate community exploration of unified tokenizers and MLLMs.
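
To make the mechanism concrete, here is a minimal sketch of lookup-free binary quantization in PyTorch, the general technique behind implicit 2^d codebooks. This is our illustration, not the paper's released code: tanh stands in for the paper's SigLu bounding activation (whose exact form is not given here), and the entropy and commitment terms follow the standard LFQ/VQ-VAE formulations rather than UniWeTok's specific losses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinaryQuantizer(nn.Module):
    """Lookup-free binary quantization: each of d latent dimensions is
    snapped to {-1, +1}, giving an implicit codebook of size 2**d.
    With d = 128 this matches the 2^128 codebook size in the paper."""

    def __init__(self, dim: int = 128):
        super().__init__()
        self.dim = dim

    def forward(self, z: torch.Tensor):
        # Bound the encoder output before quantization. The paper uses its
        # SigLu activation here; tanh is a generic bounded stand-in.
        z = torch.tanh(z)

        # Hard binarization, with a straight-through estimator so gradients
        # bypass the non-differentiable thresholding.
        hard = torch.where(z >= 0, 1.0, -1.0)
        z_q = z + (hard - z).detach()

        # Commitment loss (VQ-VAE style): pull the bounded encoder outputs
        # toward their quantized values.
        commit_loss = F.mse_loss(z, hard.detach())

        # Entropy regularizer (LFQ style): soft probability that each bit is
        # +1, averaged over the batch; minimizing the negative entropy pushes
        # per-bit usage toward 50%, so the whole codebook gets exercised.
        p = ((z + 1.0) / 2.0).mean(dim=0).clamp(1e-6, 1.0 - 1e-6)
        entropy_loss = (p * p.log() + (1 - p) * (1 - p).log()).mean()

        return z_q, commit_loss, entropy_loss

if __name__ == "__main__":
    quantizer = BinaryQuantizer(dim=128)
    latents = torch.randn(4, 128, requires_grad=True)  # 4 encoder outputs
    z_q, commit, entropy = quantizer(latents)
    (commit + entropy).backward()                      # both terms train
    print(z_q.shape, commit.item(), entropy.item())
```

Note the tension the abstract refers to: the commitment term pulls each bounded latent toward ±1, while the entropy term favors balanced bit usage across the batch. The paper's SigLu activation is designed to reconcile these objectives; this sketch only exhibits the conflict, not the fix.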

Community

Paper submitter

Code and models: https://github.com/shallowdream204/BitDance

Librarian Bot (Bot)

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

* NextFlow: Unified Sequential Modeling Activates Multimodal Understanding and Generation (2026): https://huggingface.co/papers/2601.02204
* Improving Flexible Image Tokenizers for Autoregressive Image Generation (2026): https://huggingface.co/papers/2601.01535
* MOSS-Audio-Tokenizer: Scaling Audio Tokenizers for Future Audio Foundation Models (2026): https://huggingface.co/papers/2602.10934
* Language-Guided Transformer Tokenizer for Human Motion Generation (2026): https://huggingface.co/papers/2602.08337
* Kelix Technical Report (2026): https://huggingface.co/papers/2602.09843
* PyraTok: Language-Aligned Pyramidal Tokenizer for Video Understanding and Generation (2026): https://huggingface.co/papers/2601.16210
* STACodec: Semantic Token Assignment for Balancing Acoustic Fidelity and Semantic Information in Audio Codecs (2026): https://huggingface.co/papers/2602.06180

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space: https://huggingface.co/spaces/librarian-bots/recommend_similar_papers

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2602.14178 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2602.14178 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2602.14178 in a Space README.md to link it from this page.

Collections including this paper 1