arxiv:2502.01341

AlignVLM: Bridging Vision and Language Latent Spaces for Multimodal Understanding

Published on Feb 3, 2025 · Submitted by Ahmed Masry on Feb 4, 2025

Abstract

AI-generated summary: A new vision-text alignment method, AlignVLM, effectively maps visual features to LLM embeddings, improving performance in document understanding and robustness to noise.

Aligning visual features with language embeddings is a key challenge in vision-language models (VLMs). The performance of such models hinges on having a good connector that maps visual features generated by a vision encoder to a shared embedding space with the LLM while preserving semantic similarity. Existing connectors, such as multilayer perceptrons (MLPs), often produce out-of-distribution or noisy inputs, leading to misalignment between the modalities. In this work, we propose a novel vision-text alignment method, AlignVLM, that maps visual features to a weighted average of LLM text embeddings. Our approach leverages the linguistic priors encoded by the LLM to ensure that visual features are mapped to regions of the space that the LLM can effectively interpret. AlignVLM is particularly effective for document understanding tasks, where scanned document images must be accurately mapped to their textual content. Our extensive experiments show that AlignVLM achieves state-of-the-art performance compared to prior alignment methods. We provide further analysis demonstrating improved vision-text feature alignment and robustness to noise.
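
To make the core idea concrete, here is a minimal, self-contained sketch of a connector that maps visual features to a softmax-weighted (convex) combination of LLM token embeddings, as described in the abstract. All module names, dimensions, and the toy embedding table below are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch (not the official code): project visual features to a probability
# distribution over the LLM vocabulary, then output the corresponding weighted
# average of the LLM's token embeddings. Names and sizes are illustrative.
import torch
import torch.nn as nn


class AlignConnectorSketch(nn.Module):
    def __init__(self, vision_dim: int, embed_matrix: torch.Tensor):
        super().__init__()
        vocab_size, _ = embed_matrix.shape
        # Stand-in for the LLM's input embedding table E (vocab_size x text_dim).
        self.register_buffer("embed_matrix", embed_matrix)
        # Maps each visual feature to logits over the vocabulary.
        self.to_vocab_logits = nn.Linear(vision_dim, vocab_size)

    def forward(self, visual_feats: torch.Tensor) -> torch.Tensor:
        # visual_feats: (batch, num_patches, vision_dim)
        logits = self.to_vocab_logits(visual_feats)      # (B, P, vocab_size)
        weights = torch.softmax(logits, dim=-1)          # non-negative, sum to 1
        # A convex combination of embedding rows stays inside the convex hull of
        # the LLM's text embedding space.
        return weights @ self.embed_matrix               # (B, P, text_dim)


# Toy usage with random tensors standing in for a vision encoder and an LLM.
E = torch.randn(1000, 256)                               # fake embedding table
connector = AlignConnectorSketch(vision_dim=64, embed_matrix=E)
out = connector(torch.randn(2, 16, 64))
print(out.shape)                                         # torch.Size([2, 16, 256])
```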

Community

Paper author · Paper submitter

Happy to announce AlignVLM📏: a novel approach to bridging vision and language latent spaces for multimodal understanding in VLMs! 🌍📄🖼️

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend

I'm curious about using the AlignVLM strategy for non-text use cases, e.g. general VQA instead of text/chart/OCR-related VQA. I'm wondering why the paper evaluates mainly on text/document/OCR-based image understanding?

Hi Guys

I have created a PoC using the ALIGN module in SmolVLM and trained it on roughly 300k images. IMO the eval results are impressive. Here is the link: https://github.com/mvish7/AlignVLM/tree/main

Paper author · Paper submitter

Hi Guys,

Thank you for your interest in our work! We’ve received several questions about the effectiveness of the Align Connector compared to the MLP Connector on general vision-language tasks. To address this, we conducted an additional experiment where we trained the models on the Mammoth-VL dataset, a general vision-language instruction dataset, instead of BigDocs. We then evaluated them on a range of benchmarks such as MMLU, MMVet, SeedBench, POPE, and GQA.

The results in the table below show that ALIGN outperforms the MLP connector across all the benchmarks.
We’re currently in the process of releasing the official codebase and plan to update the arXiv paper soon with the latest results and findings. But I am also glad and excited to see open-source reimplementations of our Align connector from the community!

[Results table: ALIGN vs. MLP connector on MMLU, MMVet, SeedBench, POPE, and GQA]

Hello, great work on the paper! I have a question about the experiment on noise injection into the visual features and AlignVLM's robustness. Since the Gaussian noise is rather large, I would expect the visual features to lose all semantic information, yet your results show resistance to this. I understand that constraining the mapped visual features to the convex hull of the token embedding space adds resistance to small perturbations, but how is any visual information carried through the Align module with such a large amount of noise? Are the visual embeddings to which the noise is added not normalized? Does the Align module somehow recover the original signal in some non-linear way in order to retain the relevant visual information?
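
For intuition on the convex-hull point raised in this question, here is a small standalone numerical illustration (all names, sizes, and the noise scale are made-up stand-ins, not the paper's setup): because the output is a softmax-weighted average of embedding rows, its norm can never exceed the largest embedding norm, no matter how large the added Gaussian noise is.

```python
# Illustrative only: a stand-in linear projection followed by a softmax-weighted
# average over a fake embedding table, evaluated on heavily noised features.
import torch

torch.manual_seed(0)
vocab, text_dim, vision_dim = 1000, 256, 64
E = torch.randn(vocab, text_dim)               # stand-in LLM embedding table
W = torch.randn(vision_dim, vocab) * 0.1       # stand-in projection weights

def align(x: torch.Tensor) -> torch.Tensor:
    # Convex combination of the rows of E, so the result lies in their convex hull.
    return torch.softmax(x @ W, dim=-1) @ E

feats = torch.randn(8, vision_dim)
noisy = feats + 3.0 * torch.randn_like(feats)  # large additive Gaussian noise

print("mean noisy feature norm: ", noisy.norm(dim=-1).mean().item())
print("mean aligned output norm:", align(noisy).norm(dim=-1).mean().item())
print("max embedding row norm:  ", E.norm(dim=-1).max().item())
# The aligned output norm is bounded by the largest embedding norm regardless of
# the perturbation size; whether semantic content survives is a separate question
# that the authors' noise-injection experiments address.
```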
