Paper page - Rewarding the Rare: Uniqueness-Aware RL for Creative Problem Solving in LLMs



arXiv explained breakdown of this paper 👉 https://arxivexplained.com/papers/rewarding-the-rare-uniqueness-aware-rl-for-creative-problem-solving-in-llms

Authors: Zhiyuan Hu, Yucheng Wang, Yufei He, Jiaying Wu, Yilun Zhao, See-Kiong Ng, Cynthia Breazeal, Anh Tuan Luu, Hae Won Park, Bryan Hooi (Massachusetts Institute of Technology)

GitHub: https://github.com/zhiyuanhubj/Uniqueness-Aware-RL
arxiv:2601.08763

Rewarding the Rare: Uniqueness-Aware RL for Creative Problem Solving in LLMs

Published on Jan 13
· Submitted by
Zhiyuan Hu
on Jan 16
#2 Paper of the day

Abstract

Reinforcement learning for large language models is enhanced by a rollout-level objective that rewards rare high-level reasoning strategies, improving diverse solution discovery without sacrificing initial performance.
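The summary's claim of "improving diverse solution discovery without sacrificing initial performance" is measured via pass@1 versus pass@k. For reference, a minimal sketch of the standard unbiased pass@k estimator, plus one simple convention for AUC@K (averaging pass@k over k = 1..K; the paper's exact definition may differ):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples drawn without replacement from n rollouts (c correct)
    is correct."""
    if n - c < k:
        return 1.0  # every size-k subset must contain a correct rollout
    return 1.0 - comb(n - c, k) / comb(n, k)

def auc_at_k(n: int, c: int, K: int) -> float:
    """Area under the pass@k curve, approximated as the mean of
    pass@k over k = 1..K (one simple convention)."""
    return sum(pass_at_k(n, c, k) for k in range(1, K + 1)) / K
```

A method that preserves pass@1 while raising pass@k for large k will show a larger AUC@K, which is why the curve's area is a natural single-number summary of diversity gains.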

AI-generated summary

Reinforcement learning (RL) has become a central paradigm for post-training large language models (LLMs), particularly for complex reasoning tasks, yet it often suffers from exploration collapse: policies prematurely concentrate on a small set of dominant reasoning patterns, improving pass@1 while limiting rollout-level diversity and gains in pass@k. We argue that this failure stems from regularizing local token behavior rather than diversity over sets of solutions. To address this, we propose Uniqueness-Aware Reinforcement Learning, a rollout-level objective that explicitly rewards correct solutions that exhibit rare high-level strategies. Our method uses an LLM-based judge to cluster rollouts for the same problem according to their high-level solution strategies, ignoring superficial variations, and reweights policy advantages inversely with cluster size. As a result, correct but novel strategies receive higher rewards than redundant ones. Across mathematics, physics, and medical reasoning benchmarks, our approach consistently improves pass@k across large sampling budgets and increases the area under the pass@k curve (AUC@K) without sacrificing pass@1, while sustaining exploration and uncovering more diverse solution strategies at scale.
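The core mechanism in the abstract, reweighting policy advantages inversely with strategy-cluster size, can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: it assumes a GRPO-style group-mean baseline, and the normalization `n / (size * num_clusters)` is an invented choice (it averages to 1 when clusters are equal-sized). In the actual method the cluster assignments come from an LLM-based judge grouping rollouts by high-level strategy; here they are simply given as input.

```python
from collections import Counter

def uniqueness_aware_advantages(rewards, cluster_ids):
    """Sketch: boost the advantage of correct rollouts whose strategy
    cluster is rare, so novel correct strategies outscore redundant ones.

    rewards     -- per-rollout correctness rewards for one problem (0/1)
    cluster_ids -- per-rollout strategy-cluster labels (e.g. from an
                   LLM judge); superficial variations share a label
    """
    n = len(rewards)
    mean_r = sum(rewards) / n          # group baseline (GRPO-style)
    sizes = Counter(cluster_ids)       # rollouts per strategy cluster
    advantages = []
    for r, cid in zip(rewards, cluster_ids):
        adv = r - mean_r
        if r > 0:
            # Hypothetical inverse-cluster-size weight, normalized so
            # equal-sized clusters leave advantages unchanged.
            adv *= n / (sizes[cid] * len(sizes))
        advantages.append(adv)
    return advantages
```

With rewards `[1, 1, 1, 0]` and clusters `[0, 0, 1, 2]`, the lone correct rollout in cluster 1 receives a larger advantage than either member of the two-rollout cluster 0, which is the qualitative behavior the abstract describes.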

Community

Paper author · Paper submitter

Rewarding the Rare: Uniqueness-Aware RL for Creative Problem Solving in LLMs

Great paper!

I'll take a closer look when I have time 罒▽罒

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any Paper on Hugging Face checkout this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2601.08763 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2601.08763 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2601.08763 in a Space README.md to link it from this page.

Collections including this paper 15