Paper page - Think Right: Learning to Mitigate Under-Over Thinking via Adaptive, Attentive Compression
Code: https://github.com/joykirat18/TRAAC
Authors: Joykirat Singh, Justin Chih-Yao Chen, Archiki Prasad, Elias Stengel-Eskin, Akshay Nambi, Mohit Bansal

Paper: https://huggingface.co/papers/2510.01581 (published 2025-10-02)

Similar papers recommended by the Librarian Bot via the Semantic Scholar API:

* [Your Models Have Thought Enough: Training Large Reasoning Models to Stop Overthinking](https://huggingface.co/papers/2509.23392) (2025)
* [Train Long, Think Short: Curriculum Learning for Efficient Reasoning](https://huggingface.co/papers/2508.08940) (2025)
* [Sample More to Think Less: Group Filtered Policy Optimization for Concise Reasoning](https://huggingface.co/papers/2508.09726) (2025)
* [BudgetThinker: Empowering Budget-aware LLM Reasoning with Control Tokens](https://huggingface.co/papers/2508.17196) (2025)
* [Less is More Tokens: Efficient Math Reasoning via Difficulty-Aware Chain-of-Thought Distillation](https://huggingface.co/papers/2509.05226) (2025)
* [Promoting Efficient Reasoning with Verifiable Stepwise Reward](https://huggingface.co/papers/2508.10293) (2025)
* [Aware First, Think Less: Dynamic Boundary Self-Awareness Drives Extreme Reasoning Efficiency in Large Language Models](https://huggingface.co/papers/2508.11582) (2025)
AI-generated summary: TRAAC, an online post-training RL method, improves model accuracy and efficiency by adaptively adjusting reasoning steps based on task difficulty using self-attention.

Abstract
Recent thinking models solve complex reasoning tasks by scaling test-time
compute, but this scaling must be allocated in line with task difficulty. On
the one hand, short reasoning (underthinking) leads to errors on harder
problems that require extended reasoning steps; on the other hand, excessively
long reasoning (overthinking) can be token-inefficient, generating unnecessary steps even
after reaching a correct intermediate solution. We refer to this as
under-adaptivity, where the model fails to modulate its response length
appropriately given problems of varying difficulty. To address under-adaptivity
and strike a balance between under- and overthinking, we propose TRAAC (Think
Right with Adaptive, Attentive Compression), an online post-training RL method
that leverages the model's self-attention over a long reasoning trajectory to
identify important steps and prune redundant ones. TRAAC also estimates
difficulty and incorporates it into training rewards, thereby learning to
allocate reasoning budget commensurate with example difficulty. Our approach
improves accuracy, reduces reasoning steps, and enables adaptive thinking
compared to base models and other RL baselines. Across a variety of tasks
(AIME, AMC, GPQA-D, BBEH), TRAAC (Qwen3-4B) achieves an average absolute
accuracy gain of 8.4% with a relative reduction in reasoning length of 36.8%
compared to the base model, and a 7.9% accuracy gain paired with a 29.4% length
drop compared to the best RL baseline. TRAAC also shows strong generalization:
although our models are trained on math datasets, they show accuracy and
efficiency gains on out-of-distribution non-math datasets like GPQA-D, BBEH,
and OptimalThinkingBench. Our analysis further verifies that TRAAC provides
fine-grained adjustments to thinking budget based on difficulty and that a
combination of task-difficulty calibration and attention-based compression
yields gains across diverse tasks.
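
The abstract describes two mechanisms: attention-based compression of a long reasoning trajectory, and a difficulty-calibrated reward that scales the reasoning budget. The sketch below is a minimal illustration of those ideas, not the paper's implementation: the helper names (`attention_step_scores`, `compress_trajectory`, `difficulty_aware_reward`), the attention aggregation, and the reward form are assumptions made here; the exact formulas are defined in the paper and the TRAAC repository.

```python
import numpy as np

def attention_step_scores(attn, step_spans):
    """Score each reasoning step by the attention mass its tokens receive.

    attn: (seq_len, seq_len) self-attention matrix averaged over heads/layers
          (hypothetical input; in practice it would be read out of the model).
    step_spans: list of (start, end) token index ranges, one per reasoning step.
    """
    received = attn.sum(axis=0)  # total attention flowing into each token (column sums)
    return np.array([received[s:e].mean() for s, e in step_spans])

def compress_trajectory(steps, scores, keep_ratio=0.6):
    """Keep the highest-scoring steps, in order; a stand-in for attentive pruning."""
    k = max(1, int(round(keep_ratio * len(steps))))
    keep = set(np.argsort(scores)[-k:].tolist())
    return [s for i, s in enumerate(steps) if i in keep]

def difficulty_aware_reward(correct, length, difficulty, max_len=4096):
    """Toy reward: correctness minus a length penalty that relaxes with difficulty.

    `difficulty` in [0, 1] could, for example, be estimated from the fraction of
    rollouts in a group that fail the problem (an assumption, not TRAAC's formula).
    """
    budget = max_len * (0.25 + 0.75 * difficulty)  # harder problems get a larger budget
    overuse = max(0.0, (length - budget) / max_len)
    return (1.0 if correct else 0.0) - 0.5 * overuse

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    attn = rng.random((12, 12))                        # fake attention for 12 tokens
    steps = ["setup", "case split", "redundant recheck", "final answer"]
    spans = [(0, 3), (3, 6), (6, 9), (9, 12)]          # token span of each step
    scores = attention_step_scores(attn, spans)
    print(compress_trajectory(steps, scores, keep_ratio=0.5))
    print(difficulty_aware_reward(correct=True, length=3000, difficulty=0.2))
```

Estimating difficulty from a group's empirical solve rate mirrors the spirit of group-based online RL post-training, but the concrete estimator, compression rule, and reward used by TRAAC may differ from this toy version.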