Chain of Mindset: Reasoning with Adaptive Cognitive Modes
Authors: Tianyi Jiang, Arctanx An, Hengyi Feng, Naixin Zhai, Haodong Li, Xiaomin Yu, Jiahui Liu, Hanwen Du, Shuo Zhang, Zhi Yang, Jie Huang, Yuhua Li, Yongxin Ni, Huacan Wang, Ronghao Chen
Published: 2026-02-10 · QuantaAlpha
Code: https://github.com/QuantaAlpha/chain-of-mindset
AI-generated summary
A novel training-free framework called Chain of Mindset enables step-level adaptive mindset orchestration for large language models by integrating spatial, convergent, divergent, and algorithmic reasoning approaches.
Human problem-solving is never the repetition of a single mindset, by which we mean a distinct mode of cognitive processing. When tackling a specific task, we do not rely on a single mindset; instead, we integrate multiple mindsets within the single solution process. However, existing LLM reasoning methods fall into a common trap: they apply the same fixed mindset across all steps, overlooking that different stages of solving the same problem require fundamentally different mindsets. This single-minded assumption prevents models from reaching the next level of intelligence. To address this limitation, we propose Chain of Mindset (CoM), a training-free agentic framework that enables step-level adaptive mindset orchestration. CoM decomposes reasoning into four functionally heterogeneous mindsets: Spatial, Convergent, Divergent, and Algorithmic. A Meta-Agent dynamically selects the optimal mindset based on the evolving reasoning state, while a bidirectional Context Gate filters cross-module information flow to maintain effectiveness and efficiency. Experiments across six challenging benchmarks spanning mathematics, code generation, scientific QA, and spatial reasoning demonstrate that CoM achieves state-of-the-art performance, outperforming the strongest baseline by 4.96% and 4.72% in overall accuracy on Qwen3-VL-32B-Instruct and Gemini-2.0-Flash, while balancing reasoning efficiency. Our code is publicly available at https://github.com/QuantaAlpha/chain-of-mindset.
CoM is a training-free agentic framework that dynamically orchestrates four step-level mindsets (Spatial, Convergent, Divergent, Algorithmic) via a Meta-Agent and a Context Gate, avoiding one-size-fits-all reasoning and improving accuracy and efficiency across diverse benchmarks.
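The control flow described above (a Meta-Agent choosing a mindset per step, with a Context Gate filtering what each module sees) can be sketched as a simple loop. This is a hypothetical illustration, not the authors' implementation: the names `meta_agent_select`, `context_gate`, and `run_mindset` are invented, the selection heuristic is a stub standing in for an LLM call, and termination is a fixed step budget.

```python
# Hypothetical sketch of a CoM-style orchestration loop. All names and
# heuristics here are illustrative stand-ins, not the paper's actual API.
from dataclasses import dataclass, field

MINDSETS = ("spatial", "convergent", "divergent", "algorithmic")

@dataclass
class ReasoningState:
    problem: str
    steps: list = field(default_factory=list)  # (mindset, output) per step
    done: bool = False

def meta_agent_select(state: ReasoningState) -> str:
    """Stub Meta-Agent: pick the next mindset from the evolving state.
    A real system would query an LLM; we rotate for illustration."""
    return MINDSETS[len(state.steps) % len(MINDSETS)]

def context_gate(state: ReasoningState, mindset: str) -> list:
    """Stub bidirectional Context Gate: pass only a filtered slice of the
    trace (here, the last two steps) into the selected module."""
    return state.steps[-2:]

def run_mindset(mindset: str, problem: str, context: list) -> str:
    """Placeholder for a specialized reasoning module (e.g., an LLM prompt)."""
    return f"{mindset} step over {len(context)} context items"

def chain_of_mindset(problem: str, max_steps: int = 4) -> ReasoningState:
    state = ReasoningState(problem)
    while not state.done and len(state.steps) < max_steps:
        mindset = meta_agent_select(state)       # step-level mindset selection
        context = context_gate(state, mindset)   # filtered information flow
        state.steps.append((mindset, run_mindset(mindset, problem, context)))
        state.done = len(state.steps) >= max_steps  # stub termination check
    return state

state = chain_of_mindset("toy problem")
print([m for m, _ in state.steps])
```

The key structural point the sketch captures is that mindset selection happens per step rather than once per problem, so different stages of the same solution can run under different cognitive modes.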