ReMiT: RL-Guided Mid-Training for Iterative LLM Evolution
\n","updatedAt":"2026-02-09T15:13:35.088Z","author":{"_id":"65243980050781c16f234f1f","avatarUrl":"/avatars/743a009681d5d554c27e04300db9f267.svg","fullname":"Avi","name":"avahal","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":3,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.6964873671531677},"editors":["avahal"],"editorAvatarUrls":["/avatars/743a009681d5d554c27e04300db9f267.svg"],"reactions":[],"isReport":false}},{"id":"698a8d49221e6eff5092ed0d","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false},"createdAt":"2026-02-10T01:43:37.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [GIFT: Unlocking Global Optimality in Post-Training via Finite-Temperature Gibbs Initialization](https://huggingface.co/papers/2601.09233) (2026)\n* [Diversity or Precision? A Deep Dive into Next Token Prediction](https://huggingface.co/papers/2512.22955) (2025)\n* [CORD: Bridging the Audio-Text Reasoning Gap via Weighted On-policy Cross-modal Distillation](https://huggingface.co/papers/2601.16547) (2026)\n* [Video-OPD: Efficient Post-Training of Multimodal Large Language Models for Temporal Video Grounding via On-Policy Distillation](https://huggingface.co/papers/2602.02994) (2026)\n* [Reinforcement Learning with Promising Tokens for Large Language Models](https://huggingface.co/papers/2602.03195) (2026)\n* [AIR: Post-training Data Selection for Reasoning via Attention Head Influence](https://huggingface.co/papers/2512.13279) (2025)\n* [Entropy-Adaptive Fine-Tuning: Resolving Confident Conflicts to Mitigate Forgetting](https://huggingface.co/papers/2601.02151) (2026)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`","html":"
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
\n
The following papers were recommended by the Semantic Scholar API
Please give a thumbs up to this comment if you found it helpful!
\n
If you want recommendations for any Paper on Hugging Face checkout this Space
\n
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: \n\n@librarian-bot\n\t recommend
\n","updatedAt":"2026-02-10T01:43:37.976Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.7410725951194763},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2602.03075","authors":[{"_id":"6989beb3beecc443208d282f","name":"Junjie Huang","hidden":false},{"_id":"6989beb3beecc443208d2830","user":{"_id":"64181d03edc5a69a66959b8a","avatarUrl":"/avatars/ca1748bc8fe0742158d836302c4292c7.svg","isPro":false,"fullname":"JR QIN","user":"qinjr","type":"user"},"name":"Jiarui Qin","status":"claimed_verified","statusLastChangedAt":"2026-02-11T11:19:20.097Z","hidden":false},{"_id":"6989beb3beecc443208d2831","user":{"_id":"63fc75f9b9db84750cea9c5c","avatarUrl":"/avatars/2c5bf9685e0cfc4b5785a4a86c34e0db.svg","isPro":false,"fullname":"DI YIN","user":"DIYIN","type":"user"},"name":"Di Yin","status":"claimed_verified","statusLastChangedAt":"2026-02-09T14:31:17.616Z","hidden":false},{"_id":"6989beb3beecc443208d2832","name":"Weiwen Liu","hidden":false},{"_id":"6989beb3beecc443208d2833","name":"Yong Yu","hidden":false},{"_id":"6989beb3beecc443208d2834","name":"Xing Sun","hidden":false},{"_id":"6989beb3beecc443208d2835","name":"Weinan Zhang","hidden":false}],"publishedAt":"2026-02-03T04:04:41.000Z","submittedOnDailyAt":"2026-02-09T08:34:30.813Z","title":"ReMiT: RL-Guided Mid-Training for Iterative LLM Evolution","submittedOnDailyBy":{"_id":"63fc75f9b9db84750cea9c5c","avatarUrl":"/avatars/2c5bf9685e0cfc4b5785a4a86c34e0db.svg","isPro":false,"fullname":"DI YIN","user":"DIYIN","type":"user"},"summary":"Standard training pipelines for large language models (LLMs) are typically unidirectional, progressing from pre-training to post-training. However, the potential for a bidirectional process--where insights from post-training retroactively improve the pre-trained foundation--remains unexplored. We aim to establish a self-reinforcing flywheel: a cycle in which reinforcement learning (RL)-tuned model strengthens the base model, which in turn enhances subsequent post-training performance, requiring no specially trained teacher or reference model. To realize this, we analyze training dynamics and identify the mid-training (annealing) phase as a critical turning point for model capabilities. This phase typically occurs at the end of pre-training, utilizing high-quality corpora under a rapidly decaying learning rate. Building upon this insight, we introduce ReMiT (Reinforcement Learning-Guided Mid-Training). Specifically, ReMiT leverages the reasoning priors of RL-tuned models to dynamically reweight tokens during the mid-training phase, prioritizing those pivotal for reasoning. Empirically, ReMiT achieves an average improvement of 3\\% on 10 pre-training benchmarks, spanning math, code, and general reasoning, and sustains these gains by over 2\\% throughout the post-training pipeline. 
These results validate an iterative feedback loop, enabling continuous and self-reinforcing evolution of LLMs.","upvotes":6,"discussionId":"6989beb3beecc443208d2836","ai_summary":"ReMiT introduces a bidirectional training approach where reinforcement learning-guided mid-training token reweighting improves large language model pre-training and post-training performance through an iterative feedback loop.","ai_keywords":["large language models","reinforcement learning","mid-training phase","token reweighting","reasoning priors","pre-training","post-training","iterative feedback loop"],"organization":{"_id":"66543b6e420092799d2f625c","name":"tencent","fullname":"Tencent","avatar":"https://cdn-uploads.huggingface.co/production/uploads/5dd96eb166059660ed1ee413/Lp3m-XLpjQGwBItlvn69q.png"}},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"63fc75f9b9db84750cea9c5c","avatarUrl":"/avatars/2c5bf9685e0cfc4b5785a4a86c34e0db.svg","isPro":false,"fullname":"DI YIN","user":"DIYIN","type":"user"},{"_id":"63c1699e40a26dd2db32400d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/63c1699e40a26dd2db32400d/3N0-Zp8igv8-52mXAdiiq.jpeg","isPro":false,"fullname":"Chroma","user":"Chroma111","type":"user"},{"_id":"661ab1f1fa3b144a381fa454","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/661ab1f1fa3b144a381fa454/IlpZBb9NCjo7ntFwMIH53.png","isPro":false,"fullname":"Urro","user":"urroxyz","type":"user"},{"_id":"6270324ebecab9e2dcf245de","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6270324ebecab9e2dcf245de/cMbtWSasyNlYc9hvsEEzt.jpeg","isPro":false,"fullname":"Kye Gomez","user":"kye","type":"user"},{"_id":"64181d03edc5a69a66959b8a","avatarUrl":"/avatars/ca1748bc8fe0742158d836302c4292c7.svg","isPro":false,"fullname":"JR QIN","user":"qinjr","type":"user"},{"_id":"689c2b8f0b10f771fb6c5167","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/BTuu5EkBPNtjmi13HoEl4.png","isPro":false,"fullname":"Charlotte Taylor​​","user":"Charlotte0163","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0,"organization":{"_id":"66543b6e420092799d2f625c","name":"tencent","fullname":"Tencent","avatar":"https://cdn-uploads.huggingface.co/production/uploads/5dd96eb166059660ed1ee413/Lp3m-XLpjQGwBItlvn69q.png"}}">
AI-generated summary

ReMiT introduces a bidirectional training approach in which reinforcement-learning-guided token reweighting during mid-training improves large language model pre-training and post-training performance through an iterative feedback loop.
Abstract

Standard training pipelines for large language models (LLMs) are typically unidirectional, progressing from pre-training to post-training. However, the potential for a bidirectional process, where insights from post-training retroactively improve the pre-trained foundation, remains unexplored. We aim to establish a self-reinforcing flywheel: a cycle in which a reinforcement learning (RL)-tuned model strengthens the base model, which in turn enhances subsequent post-training performance, requiring no specially trained teacher or reference model. To realize this, we analyze training dynamics and identify the mid-training (annealing) phase as a critical turning point for model capabilities. This phase typically occurs at the end of pre-training, utilizing high-quality corpora under a rapidly decaying learning rate. Building upon this insight, we introduce ReMiT (Reinforcement Learning-Guided Mid-Training). Specifically, ReMiT leverages the reasoning priors of RL-tuned models to dynamically reweight tokens during the mid-training phase, prioritizing those pivotal for reasoning. Empirically, ReMiT achieves an average improvement of 3% on 10 pre-training benchmarks spanning math, code, and general reasoning, and sustains gains of over 2% throughout the post-training pipeline. These results validate an iterative feedback loop, enabling continuous and self-reinforcing evolution of LLMs.
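The abstract states only that ReMiT uses the reasoning priors of an RL-tuned model to reweight tokens during mid-training; the exact weighting rule is not given here. The sketch below is a minimal illustration under assumed choices: per-token weights derived from the log-likelihood gap between the RL-tuned model and the base model, normalized with a softmax over each sequence and scaled by a `temperature` hyperparameter. The function name `reweighted_mid_training_loss`, the weighting rule, and both model arguments are hypothetical placeholders for any Hugging Face-style causal LM that returns `.logits`; they are not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F


def reweighted_mid_training_loss(base_model, rl_model, input_ids, attention_mask, temperature=1.0):
    """Next-token cross-entropy with per-token weights from an RL-tuned model (illustrative sketch)."""
    labels = input_ids[:, 1:]                 # next-token targets, shape [B, T-1]
    mask = attention_mask[:, 1:].float()      # 1 for real tokens, 0 for padding

    # Forward pass of the base model being mid-trained (gradients flow here).
    logits = base_model(input_ids, attention_mask=attention_mask).logits[:, :-1]

    with torch.no_grad():
        # The RL-tuned model only scores tokens; it is never updated.
        rl_logits = rl_model(input_ids, attention_mask=attention_mask).logits[:, :-1]
        rl_logp = F.log_softmax(rl_logits, dim=-1).gather(-1, labels.unsqueeze(-1)).squeeze(-1)
        base_logp = F.log_softmax(logits.detach(), dim=-1).gather(-1, labels.unsqueeze(-1)).squeeze(-1)

        # Assumed weighting rule: tokens to which the RL-tuned model assigns relatively
        # higher probability than the base model are upweighted.
        gap = (rl_logp - base_logp) / temperature
        weights = torch.softmax(gap.masked_fill(mask == 0, float("-inf")), dim=-1)
        weights = weights * mask.sum(dim=-1, keepdim=True)  # average weight ~1 per valid token

    # Standard next-token cross-entropy, reweighted token by token.
    token_ce = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)), labels.reshape(-1), reduction="none"
    ).view_as(labels)
    return (weights * token_ce * mask).sum() / mask.sum()
```

In this reading, tokens that the RL-tuned model prefers relative to the base model receive larger weights in the mid-training loss, which is one plausible way to "prioritize tokens pivotal for reasoning"; the criterion used in the paper may differ.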