Paper page - UniT: Unified Multimodal Chain-of-Thought Test-time Scaling
\n","updatedAt":"2026-02-19T01:38:31.303Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.7369149923324585},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2602.12279","authors":[{"_id":"69947c8fd2ea89ac106cf9af","user":{"_id":"62b67da0f56de4396ca9e44b","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1658586059273-62b67da0f56de4396ca9e44b.jpeg","isPro":false,"fullname":"Liangyu Chen","user":"liangyuch","type":"user"},"name":"Leon Liangyu Chen","status":"admin_assigned","statusLastChangedAt":"2026-02-18T13:38:47.995Z","hidden":false},{"_id":"69947c8fd2ea89ac106cf9b0","user":{"_id":"650a8979c19e5b4c8a6ff062","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/650a8979c19e5b4c8a6ff062/64_JuECX_k_-uK7m7nlua.jpeg","isPro":false,"fullname":"Haoyu Ma","user":"haoyum1997","type":"user"},"name":"Haoyu Ma","status":"admin_assigned","statusLastChangedAt":"2026-02-18T13:38:57.190Z","hidden":false},{"_id":"69947c8fd2ea89ac106cf9b1","user":{"_id":"65f09aefcaf237b0a2d4d3ff","avatarUrl":"/avatars/5b03ea49e878058efa3c88e53a6e6a9b.svg","isPro":false,"fullname":"Zhipeng Fan","user":"Jetp","type":"user"},"name":"Zhipeng Fan","status":"admin_assigned","statusLastChangedAt":"2026-02-18T13:39:04.149Z","hidden":false},{"_id":"69947c8fd2ea89ac106cf9b2","user":{"_id":"60efe7fa0d920bc7805cada5","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/60efe7fa0d920bc7805cada5/2LBrJBjSCOP5ilZIpWLHl.png","isPro":false,"fullname":"Ziqi Huang","user":"Ziqi","type":"user"},"name":"Ziqi Huang","status":"admin_assigned","statusLastChangedAt":"2026-02-18T13:39:13.973Z","hidden":false},{"_id":"69947c8fd2ea89ac106cf9b3","name":"Animesh Sinha","hidden":false},{"_id":"69947c8fd2ea89ac106cf9b4","user":{"_id":"6549417b3ce45eb764faf993","avatarUrl":"/avatars/d310f475d0697f5f13b3d4141ea0ccaf.svg","isPro":false,"fullname":"Xiaoliang Dai","user":"daixl1992","type":"user"},"name":"Xiaoliang Dai","status":"admin_assigned","statusLastChangedAt":"2026-02-18T13:39:20.967Z","hidden":false},{"_id":"69947c8fd2ea89ac106cf9b5","name":"Jialiang Wang","hidden":false},{"_id":"69947c8fd2ea89ac106cf9b6","name":"Zecheng He","hidden":false},{"_id":"69947c8fd2ea89ac106cf9b7","name":"Jianwei Yang","hidden":false},{"_id":"69947c8fd2ea89ac106cf9b8","user":{"_id":"62aba526cae4462c0c6caa0f","avatarUrl":"/avatars/430560ec2c2547f819225769ab432f30.svg","isPro":false,"fullname":"Chunyuan Li","user":"Chunyuan24","type":"user"},"name":"Chunyuan Li","status":"admin_assigned","statusLastChangedAt":"2026-02-18T13:39:39.037Z","hidden":false},{"_id":"69947c8fd2ea89ac106cf9b9","user":{"_id":"697563dbce43f259ee32d7ed","avatarUrl":"/avatars/ac626eb596216c6e87f13fc52ba3fa11.svg","isPro":false,"fullname":"Junzhe Sun","user":"junzhesun","type":"user"},"name":"Junzhe Sun","status":"claimed_verified","statusLastChangedAt":"2026-02-19T09:53:39.459Z","hidden":false},{"_id":"69947c8fd2ea89ac106cf9ba","name":"Chu 
Wang","hidden":false},{"_id":"69947c8fd2ea89ac106cf9bb","user":{"_id":"677c8b2e92550a07fcad0f50","avatarUrl":"/avatars/2be26e8f25e98cfe5b1d227ee0409cd0.svg","isPro":false,"fullname":"Serena Yeung-Levy","user":"yeunglevy","type":"user"},"name":"Serena Yeung-Levy","status":"admin_assigned","statusLastChangedAt":"2026-02-18T13:39:53.861Z","hidden":false},{"_id":"69947c8fd2ea89ac106cf9bc","user":{"_id":"6417cf37dce1e4c0229f17b1","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6417cf37dce1e4c0229f17b1/7h-ZCB5f4wif7TsnF-B1M.jpeg","isPro":false,"fullname":"Felix Xu","user":"katanaxu","type":"user"},"name":"Felix Juefei-Xu","status":"claimed_verified","statusLastChangedAt":"2026-02-19T09:53:41.555Z","hidden":false}],"publishedAt":"2026-02-12T18:59:49.000Z","submittedOnDailyAt":"2026-02-18T06:59:40.830Z","title":"UniT: Unified Multimodal Chain-of-Thought Test-time Scaling","submittedOnDailyBy":{"_id":"62b67da0f56de4396ca9e44b","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1658586059273-62b67da0f56de4396ca9e44b.jpeg","isPro":false,"fullname":"Liangyu Chen","user":"liangyuch","type":"user"},"summary":"Unified models can handle both multimodal understanding and generation within a single architecture, yet they typically operate in a single pass without iteratively refining their outputs. Many multimodal tasks, especially those involving complex spatial compositions, multiple interacting objects, or evolving instructions, require decomposing instructions, verifying intermediate results, and making iterative corrections. While test-time scaling (TTS) has demonstrated that allocating additional inference compute for iterative reasoning substantially improves language model performance, extending this paradigm to unified multimodal models remains an open challenge. We introduce UniT, a framework for multimodal chain-of-thought test-time scaling that enables a single unified model to reason, verify, and refine across multiple rounds. UniT combines agentic data synthesis, unified model training, and flexible test-time inference to elicit cognitive behaviors including verification, subgoal decomposition, and content memory. Our key findings are: (1) unified models trained on short reasoning trajectories generalize to longer inference chains at test time; (2) sequential chain-of-thought reasoning provides a more scalable and compute-efficient TTS strategy than parallel sampling; (3) training on generation and editing trajectories improves out-of-distribution visual reasoning. 
These results establish multimodal test-time scaling as an effective paradigm for advancing both generation and understanding in unified models.","upvotes":19,"discussionId":"69947c90d2ea89ac106cf9bd","ai_summary":"UniT framework enables unified multimodal models to perform iterative reasoning and refinement through chain-of-thought test-time scaling, improving both generation and understanding capabilities.","ai_keywords":["unified models","multimodal understanding","multimodal generation","test-time scaling","chain-of-thought reasoning","agentic data synthesis","unified model training","test-time inference","cognitive behaviors","visual reasoning"]},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"62b67da0f56de4396ca9e44b","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1658586059273-62b67da0f56de4396ca9e44b.jpeg","isPro":false,"fullname":"Liangyu Chen","user":"liangyuch","type":"user"},{"_id":"646e1ef5075bbcc48ddf21e8","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/_vJC0zeVOIvaNV2R6toqg.jpeg","isPro":false,"fullname":"Pu Fanyi","user":"pufanyi","type":"user"},{"_id":"66915a572c1a3a8edcc977b4","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/66915a572c1a3a8edcc977b4/2tANTgj48VQMgCcEcdkwE.jpeg","isPro":false,"fullname":"Yuwei Niu","user":"Yuwei-Niu","type":"user"},{"_id":"6400ba2b261cfa61f3a00555","avatarUrl":"/avatars/1311e0b5e21b1c94d73fcaf455d3c7f7.svg","isPro":false,"fullname":"Kairui","user":"KairuiHu","type":"user"},{"_id":"6478679d7b370854241b2ad8","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6478679d7b370854241b2ad8/dBczWYYdfEt9tQcnVGhQk.jpeg","isPro":false,"fullname":"xiangan","user":"xiangan","type":"user"},{"_id":"649aa367c6cf3cc95bc1b7f6","avatarUrl":"/avatars/4bf5446c261eab08fc06caebf4c5779a.svg","isPro":false,"fullname":"Yifei Shen","user":"yshenaw","type":"user"},{"_id":"64b4a717aa03b6520839e9b8","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/64b4a717aa03b6520839e9b8/Rt3ERG-6BVEA4hAwOz0_I.jpeg","isPro":false,"fullname":"Haiwen Diao","user":"Paranioar","type":"user"},{"_id":"673e025a1b559505fc8d9ac8","avatarUrl":"/avatars/5e4d3d63358bc82e763ff9dfce22d1a1.svg","isPro":false,"fullname":"Kyu Song","user":"kyunocap","type":"user"},{"_id":"60efe7fa0d920bc7805cada5","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/60efe7fa0d920bc7805cada5/2LBrJBjSCOP5ilZIpWLHl.png","isPro":false,"fullname":"Ziqi Huang","user":"Ziqi","type":"user"},{"_id":"6417cf37dce1e4c0229f17b1","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6417cf37dce1e4c0229f17b1/7h-ZCB5f4wif7TsnF-B1M.jpeg","isPro":false,"fullname":"Felix Xu","user":"katanaxu","type":"user"},{"_id":"684d57f26e04c265777ead3f","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/cuOj-bQqukSZreXgUJlfm.png","isPro":false,"fullname":"Joakim Lee","user":"Reinforcement4All","type":"user"},{"_id":"62aba526cae4462c0c6caa0f","avatarUrl":"/avatars/430560ec2c2547f819225769ab432f30.svg","isPro":false,"fullname":"Chunyuan Li","user":"Chunyuan24","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0}">
AI-generated summary

The UniT framework enables unified multimodal models to reason, verify, and refine iteratively through chain-of-thought test-time scaling, improving both generation and understanding.

Abstract
Unified models can handle both multimodal understanding and generation within a single architecture, yet they typically operate in a single pass without iteratively refining their outputs. Many multimodal tasks, especially those involving complex spatial compositions, multiple interacting objects, or evolving instructions, require decomposing instructions, verifying intermediate results, and making iterative corrections. While test-time scaling (TTS) has demonstrated that allocating additional inference compute for iterative reasoning substantially improves language model performance, extending this paradigm to unified multimodal models remains an open challenge. We introduce UniT, a framework for multimodal chain-of-thought test-time scaling that enables a single unified model to reason, verify, and refine across multiple rounds. UniT combines agentic data synthesis, unified model training, and flexible test-time inference to elicit cognitive behaviors including verification, subgoal decomposition, and content memory. Our key findings are: (1) unified models trained on short reasoning trajectories generalize to longer inference chains at test time; (2) sequential chain-of-thought reasoning provides a more scalable and compute-efficient TTS strategy than parallel sampling; (3) training on generation and editing trajectories improves out-of-distribution visual reasoning. These results establish multimodal test-time scaling as an effective paradigm for advancing both generation and understanding in unified models.
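The multi-round reason, verify, and refine loop described in the abstract can be pictured with a short sketch. This is a hypothetical illustration, not the authors' implementation: the model interface (`reason`, `generate`, `verify`), the `memory` structure, and the round budget are all assumptions made for clarity.

```python
# Hypothetical sketch of sequential chain-of-thought test-time scaling
# for a unified multimodal model. The interface below (reason, generate,
# verify) is assumed for illustration; it is NOT the paper's actual API.

def sequential_tts(model, instruction, max_rounds=4):
    """Reason -> generate -> verify -> refine across multiple rounds,
    carrying forward prior plans and critiques (content memory)."""
    image, memory = None, []
    for _ in range(max_rounds):
        # Decompose the instruction and plan the next step (subgoal decomposition).
        plan = model.reason(instruction, image=image, memory=memory)
        # Produce or refine the image conditioned on the plan.
        image = model.generate(plan, image=image)
        # Self-verify the intermediate result against the instruction.
        verdict = model.verify(instruction, image)
        memory.append((plan, verdict))
        if verdict.satisfied:  # stop early once the check passes
            break
    return image
```

Under this reading, scaling test-time compute means raising `max_rounds` in a single dependent chain rather than drawing many independent samples in parallel, which matches the paper's finding (2) that sequential chain-of-thought reasoning is the more scalable and compute-efficient TTS strategy.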