TokenPacker: Efficient Visual Projector for Multimodal LLM
Abstract
A novel visual projector using a coarse-to-fine scheme reduces visual token redundancy and improves visual reasoning in Multimodal LLMs.
The visual projector serves as an essential bridge between the visual encoder and the Large Language Model (LLM) in a Multimodal LLM (MLLM). Typically, MLLMs adopt a simple MLP to preserve all visual contexts via a one-to-one transformation. However, the visual tokens are redundant and can increase considerably when dealing with high-resolution images, significantly impairing the efficiency of MLLMs. Some recent works have introduced resamplers or abstractors to reduce the number of resulting visual tokens. Unfortunately, they fail to capture finer details and undermine the visual reasoning capabilities of MLLMs. In this work, we propose a novel visual projector, which adopts a coarse-to-fine scheme to inject enriched characteristics into the condensed visual tokens. Specifically, we first interpolate the visual features into low-resolution point queries, providing the overall visual representation as the foundation. Then, we introduce a region-to-point injection module that utilizes high-resolution, multi-level region-based cues as fine-grained reference keys and values, allowing them to be fully absorbed within the corresponding local context region. This step effectively updates the coarse point queries, transforming them into enriched ones for the subsequent LLM reasoning. Extensive experiments demonstrate that our approach compresses the visual tokens by 75%~89%, while achieving comparable or even better performance across diverse benchmarks with significantly higher efficiency. The source codes can be found at https://github.com/CircleRadon/TokenPacker.
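To make the coarse-to-fine idea from the abstract concrete, here is a minimal PyTorch sketch, not the official TokenPacker implementation (see the linked repo for that): visual features are first interpolated into a low-resolution grid of point queries, and each query then attends only to the high-resolution features inside its own local region. All module and parameter names below (e.g. `RegionToPointInjection`, `downsample`) are illustrative assumptions.

```python
# Hypothetical sketch of a coarse-to-fine visual projector; names and shapes are
# assumptions for illustration, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegionToPointInjection(nn.Module):
    """Enrich each coarse point query with fine-grained cues from its local region."""
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, queries, region_feats):
        # queries:      (B*Nq, 1,   C)  one coarse query per local region
        # region_feats: (B*Nq, k*k, C)  high-resolution features inside that region
        out, _ = self.attn(queries, region_feats, region_feats)
        return self.norm(queries + out)

class CoarseToFineProjector(nn.Module):
    def __init__(self, vis_dim: int, llm_dim: int, downsample: int = 2):
        super().__init__()
        self.downsample = downsample                  # 2 -> keep 1/4 of the tokens
        self.inject = RegionToPointInjection(vis_dim)
        self.to_llm = nn.Linear(vis_dim, llm_dim)     # map into the LLM embedding space

    def forward(self, feats):
        # feats: (B, H, W, C) high-resolution visual features from the encoder
        B, H, W, C = feats.shape
        k = self.downsample
        h, w = H // k, W // k

        # 1) Coarse point queries: interpolate the feature map down to (h, w).
        coarse = F.interpolate(
            feats.permute(0, 3, 1, 2), size=(h, w),
            mode="bilinear", align_corners=False,
        ).permute(0, 2, 3, 1)                          # (B, h, w, C)

        # 2) Group high-res features into k x k local regions, one per query.
        regions = feats.view(B, h, k, w, k, C).permute(0, 1, 3, 2, 4, 5)
        regions = regions.reshape(B * h * w, k * k, C)
        queries = coarse.reshape(B * h * w, 1, C)

        # 3) Region-to-point injection: each query attends only to its own region.
        enriched = self.inject(queries, regions)       # (B*Nq, 1, C)

        # 4) Flatten back into a short token sequence for the LLM.
        tokens = enriched.reshape(B, h * w, C)
        return self.to_llm(tokens)                     # (B, h*w, llm_dim)

# Example: a 24x24 encoder grid (576 tokens) -> 12x12 = 144 tokens (~75% compression).
projector = CoarseToFineProjector(vis_dim=1024, llm_dim=4096, downsample=2)
visual_tokens = projector(torch.randn(1, 24, 24, 1024))
print(visual_tokens.shape)  # torch.Size([1, 144, 4096])
```

With `downsample=2` the token count drops by 75%, and `downsample=3` would drop it by roughly 89%, matching the compression range quoted in the abstract; the actual TokenPacker design additionally uses multi-level region-based cues, which this sketch omits.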
Community
I think the performance gain may be considered too marginal. I would like to know the gain on more challenging datasets such as MM-Vet, MMStar, and LLaVA-Bench-in-the-Wild. Do you have any plan? As with the human sensory system, the more image tokens, the more benefit.
I think that even if it doesn't improve much, the 75% compression of the tokens alone makes it a really cool approach to projecting the visual tokens.
please release the checkpoints :)
Great
Models citing this paper 0
No model linking this paper
Datasets citing this paper 0
No dataset linking this paper
Spaces citing this paper 0
No Space linking this paper