SynerGen-VL: Towards Synergistic Image Understanding and Generation with Vision Experts and Token Folding
Hao Li, Changyao Tian, Jie Shao, Xizhou Zhu, Zhaokai Wang, Jinguo Zhu, Wenhan Dou, Xiaogang Wang, Hongsheng Li, Lewei Lu, Jifeng Dai
Published on Dec 12, 2024
#3 Paper of the day
Abstract
The remarkable success of Large Language Models (LLMs) has extended to the multimodal domain, achieving outstanding performance in image understanding and generation. Recent efforts to develop unified Multimodal Large Language Models (MLLMs) that integrate these capabilities have shown promising results. However, existing approaches often involve complex designs in model architecture or training pipeline, increasing the difficulty of model training and scaling. In this paper, we propose SynerGen-VL, a simple yet powerful encoder-free MLLM capable of both image understanding and generation. To address challenges identified in existing encoder-free unified MLLMs, we introduce the token folding mechanism and the vision-expert-based progressive alignment pretraining strategy, which effectively support high-resolution image understanding while reducing training complexity. After being trained on large-scale mixed image-text data with a unified next-token prediction objective, SynerGen-VL achieves or surpasses the performance of existing encoder-free unified MLLMs with comparable or smaller parameter sizes, and narrows the gap with task-specific state-of-the-art models, highlighting a promising path toward future unified MLLMs. Our code and models shall be released.
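
The abstract describes token folding only at a high level. One plausible reading is a pixel-unshuffle-style compression, where each small neighbourhood of visual tokens is folded into a single wider token so that high-resolution images occupy far fewer positions in the LLM context. The sketch below illustrates that idea; the function name, tensor shapes, and fold factor are illustrative assumptions, not the paper's implementation.

```python
import torch

def fold_tokens(x: torch.Tensor, h: int, w: int, fold: int = 2) -> torch.Tensor:
    """Fold each (fold x fold) neighbourhood of visual tokens into one token
    by concatenating their features, shrinking the sequence length by fold**2.

    x: (batch, h * w, dim) flattened visual token sequence.
    Returns: (batch, (h // fold) * (w // fold), dim * fold**2).
    """
    b, n, d = x.shape
    assert n == h * w and h % fold == 0 and w % fold == 0
    # Split the token grid into (h/fold, fold, w/fold, fold) blocks.
    x = x.view(b, h // fold, fold, w // fold, fold, d)
    # Bring the two fold axes next to the feature axis, then merge them into it.
    x = x.permute(0, 1, 3, 2, 4, 5)
    return x.reshape(b, (h // fold) * (w // fold), fold * fold * d)

# Example: a 32x32 grid (1024 tokens) folds into 256 tokens of 4x the width,
# so a high-resolution image costs far fewer positions in the LLM context.
tokens = torch.randn(1, 1024, 768)
print(fold_tokens(tokens, h=32, w=32).shape)  # torch.Size([1, 256, 3072])
```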
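The "vision experts" in the pretraining strategy suggest modality-routed feed-forward blocks: image tokens pass through dedicated expert weights while text tokens keep the original LLM pathway, so visual capacity is added without disturbing pretrained language knowledge. A minimal sketch of that routing pattern follows; the class and parameter names are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ModalityRoutedFFN(nn.Module):
    """Sketch of a vision-expert FFN: image tokens use a dedicated expert,
    text tokens use the original LLM FFN (names are hypothetical)."""

    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.text_ffn = nn.Sequential(
            nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
        self.vision_ffn = nn.Sequential(
            nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))

    def forward(self, x: torch.Tensor, is_image: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim); is_image: (batch, seq) boolean modality mask.
        return torch.where(
            is_image.unsqueeze(-1), self.vision_ffn(x), self.text_ffn(x))
```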
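Finally, the unified next-token prediction objective implies that discrete image tokens and text tokens share one vocabulary and one autoregressive loss, so understanding and generation are trained identically. A toy sketch under that assumption (all token ids and vocabulary sizes below are made up):

```python
import torch
import torch.nn.functional as F

TEXT_VOCAB, IMAGE_VOCAB = 50000, 16384   # hypothetical vocabulary sizes

# A mixed sequence: text token ids followed by discrete image token ids,
# the latter offset into the shared vocabulary.
text_ids = torch.tensor([11, 523, 7042])
image_ids = torch.tensor([101, 2047, 900]) + TEXT_VOCAB
seq = torch.cat([text_ids, image_ids])

# Stand-in for the model's logits at each position over the shared vocabulary.
logits = torch.randn(len(seq) - 1, TEXT_VOCAB + IMAGE_VOCAB)

# One cross-entropy covers both modalities: predicting the next text token
# (understanding) and the next image token (generation) is the same loss.
loss = F.cross_entropy(logits, seq[1:])
print(loss.item())
```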