Paper page - ASPO: Asymmetric Importance Sampling Policy Optimization

Code: https://github.com/wizard-III/Archer2.0
Models & Data: https://huggingface.co/collections/Fate-Zero/archer20-68b945c878768a27941fd7b6
Zhihu: https://zhuanlan.zhihu.com/p/1950985242098799047

arxiv:2510.06062

ASPO: Asymmetric Importance Sampling Policy Optimization

Published on Oct 7, 2025 · Submitted by Runze Liu on Oct 8, 2025
Authors: Jiakang Wang, Runze Liu, Lei Lin, Wenping Hu, Xiu Li, Fuzheng Zhang, Guorui Zhou, Kun Gai

Abstract

ASPO addresses the imbalance in token weighting during OSRL by flipping Importance Sampling ratios and incorporating a soft dual-clipping mechanism, improving training stability and performance in LLMs.

AI-generated summary

Recent Large Language Model (LLM) post-training methods rely on token-level clipping mechanisms during Reinforcement Learning (RL). However, we identify a fundamental flaw in this Outcome-Supervised RL (OSRL) paradigm: the Importance Sampling (IS) ratios of positive-advantage tokens are mismatched, leading to unbalanced token weighting for positive and negative tokens. This mismatch suppresses the update of low-probability tokens while over-amplifying already high-probability ones. To address this, we propose Asymmetric Importance Sampling Policy Optimization (ASPO), which uses a simple yet effective strategy that flips the IS ratios of positive-advantage tokens, aligning their update direction with the learning dynamics of negative ones. ASPO further incorporates a soft dual-clipping mechanism to stabilize extreme updates while maintaining gradient flow. Comprehensive experiments on coding and mathematical reasoning benchmarks demonstrate that ASPO significantly mitigates premature convergence, improves training stability, and enhances final performance over strong GRPO-based baselines. Our analysis provides new insights into the role of token-level weighting in OSRL and highlights the critical importance of correcting IS in LLM RL. The code and models of ASPO are available at https://github.com/wizard-III/Archer2.0.
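
For readers skimming the abstract, the "token-level clipping" it refers to is the standard PPO/GRPO-style clipped surrogate over token-level importance-sampling (IS) ratios; the notation below is the usual textbook form rather than anything copied from the paper:

$$
\mathcal{L}_{\mathrm{clip}}(\theta)=\mathbb{E}_t\!\left[\min\!\Big(r_t(\theta)\,\hat{A}_t,\ \mathrm{clip}\big(r_t(\theta),\,1-\epsilon,\,1+\epsilon\big)\,\hat{A}_t\Big)\right],
\qquad
r_t(\theta)=\frac{\pi_\theta(y_t\mid y_{<t},x)}{\pi_{\theta_{\mathrm{old}}}(y_t\mid y_{<t},x)}
$$

The abstract's claim is that applying this ratio symmetrically weights positive- and negative-advantage tokens inconsistently, and that ASPO addresses this by flipping the ratio for positive-advantage tokens and softly dual-clipping extreme values. The exact formulation is in the paper and the Archer2.0 repository; the snippet below is only a minimal PyTorch sketch of how those two ideas could be combined in a GRPO-style token-level loss. All names, the reciprocal flip, and the tanh-based soft bound are illustrative assumptions, not the authors' implementation.

```python
import torch


def aspo_style_loss(logp_new, logp_old, advantages, clip_eps=0.2, dual_clip=3.0):
    """Minimal sketch of an asymmetric-IS token loss in the spirit of ASPO.

    Illustrates only the two ideas named in the abstract:
      1. flip the IS ratio for positive-advantage tokens, and
      2. softly bound extreme ratios so gradients shrink instead of vanishing.
    Shapes are (batch, seq_len); every name and the tanh squashing are assumptions.
    """
    # Standard token-level importance-sampling ratio pi_theta / pi_theta_old.
    ratio = torch.exp(logp_new - logp_old)

    # 1) Asymmetric IS: positive-advantage tokens use the reciprocal ratio, so a
    #    token the current policy still assigns low probability gets a larger
    #    weight (mirroring negative-advantage tokens) instead of a smaller one.
    positive = advantages > 0
    weight = torch.where(positive, 1.0 / ratio.clamp_min(1e-8), ratio)

    # 2) Soft dual clipping (an assumption standing in for the paper's exact
    #    mechanism): ratios above `dual_clip` are squashed smoothly with tanh,
    #    damping extreme updates without zeroing their gradients.
    excess = (weight - dual_clip).clamp_min(0.0)
    weight = torch.minimum(weight, dual_clip + torch.tanh(excess))

    # PPO/GRPO-style pessimistic surrogate on the re-weighted ratio.
    unclipped = weight * advantages
    clipped = torch.clamp(weight, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.minimum(unclipped, clipped).mean()


if __name__ == "__main__":
    # Tiny smoke test with random token log-probs and token-level advantages.
    b, t = 2, 8
    logp_old = torch.log_softmax(torch.randn(b, t), dim=-1)
    logp_new = (logp_old + 0.1 * torch.randn(b, t)).requires_grad_(True)
    advantages = torch.randn(b, t)
    loss = aspo_style_loss(logp_new, logp_old, advantages)
    loss.backward()
    print(float(loss), logp_new.grad.shape)
```

In this sketch the only departures from the vanilla clipped objective are the `torch.where` flip and the smooth upper bound: positive-advantage tokens that the policy still assigns low probability are up-weighted rather than suppressed, and ratios beyond `dual_clip` are damped while keeping a nonzero gradient.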

Community

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

* CE-GPPO: Coordinating Entropy via Gradient-Preserving Clipping Policy Optimization in Reinforcement Learning (https://huggingface.co/papers/2509.20712) (2025)
* DCPO: Dynamic Clipping Policy Optimization (https://huggingface.co/papers/2509.02333) (2025)
* Prosperity before Collapse: How Far Can Off-Policy RL Reach with Stale Data on LLMs? (https://huggingface.co/papers/2510.01161) (2025)
* EEPO: Exploration-Enhanced Policy Optimization via Sample-Then-Forget (https://huggingface.co/papers/2510.05837) (2025)
* Attention as a Compass: Efficient Exploration for Process-Supervised RL in Reasoning Models (https://huggingface.co/papers/2509.26628) (2025)
* Improving Sampling Efficiency in RLVR through Adaptive Rollout and Response Reuse (https://huggingface.co/papers/2509.25808) (2025)
* From Uniform to Heterogeneous: Tailoring Policy Optimization to Every Token's Nature (https://huggingface.co/papers/2509.16591) (2025)

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space: https://huggingface.co/spaces/librarian-bots/recommend_similar_papers

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper 1

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2510.06062 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2510.06062 in a Space README.md to link it from this page.

Collections including this paper 1