arxiv:2602.05940

Self-Improving Multilingual Long Reasoning via Translation-Reasoning Integrated Training

Published on Feb 5 · Submitted by LiuJunxiao on Feb 9
Authors: Junxiao Liu, Zhijun Wang, Yixiao Li, Zhejian Lai, Liqian Huang, Xin Huang, Xue Han, Junlan Feng, Shujian Huang

Abstract

AI-generated summary: The TRIT framework improves multilingual reasoning by jointly training translation and reasoning components, enhancing question understanding and response generation across languages.

Long reasoning models often struggle in multilingual settings: they tend to reason in English for non-English questions, and when constrained to reason in the question language, their accuracy drops substantially. This struggle stems from limited ability in both multilingual question understanding and multilingual reasoning. To address both problems, we propose TRIT (Translation-Reasoning Integrated Training), a self-improving framework that integrates translation training into multilingual reasoning. Without external feedback or additional multilingual data, our method jointly enhances multilingual question understanding and response generation. On MMATH, our method outperforms multiple baselines by an average of 7 percentage points, improving both answer correctness and language consistency. Further analysis reveals that integrating translation training improves cross-lingual question alignment by over 10 percentage points and enhances translation quality for both mathematical questions and general-domain text, with gains of up to 8.4 COMET points on FLORES-200.
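The abstract describes the loop at a high level but not its mechanics. Below is a minimal sketch of what one TRIT-style self-improving round could look like, assuming the model both translates each non-English question and reasons in the question language, and that self-generated traces are filtered by cross-lingual answer agreement before fine-tuning. The `model.translate`, `model.reason`, and `model.finetune` interfaces are hypothetical stand-ins for illustration, not the paper's published API.

```python
# Minimal sketch of a TRIT-style self-improving round, under the
# assumptions stated above. `model.translate`, `model.reason`, and
# `model.finetune` are hypothetical stand-ins; the paper does not
# publish this interface.
from dataclasses import dataclass


@dataclass
class Trace:
    question: str     # original non-English question
    translation: str  # the model's own English translation
    reasoning: str    # chain of thought in the question language
    answer: str       # final answer extracted from the reasoning


def generate_trace(model, question: str) -> Trace:
    """One integrated rollout: translate the question, then reason on it."""
    translation = model.translate(question, target_lang="en")
    reasoning, answer = model.reason(question, language="question")
    return Trace(question, translation, reasoning, answer)


def cross_lingually_consistent(model, trace: Trace) -> bool:
    """Self-supervision without external feedback: keep a trace only if
    reasoning over the model's own English translation yields the same
    final answer as reasoning in the question language."""
    _, en_answer = model.reason(trace.translation, language="en")
    return en_answer == trace.answer


def trit_round(model, questions, k: int = 4):
    """One self-improvement round: sample k traces per question, keep the
    consistent ones, and fine-tune translation and reasoning jointly."""
    kept = [
        trace
        for q in questions
        for trace in (generate_trace(model, q) for _ in range(k))
        if cross_lingually_consistent(model, trace)
    ]
    model.finetune(
        translation_pairs=[(t.question, t.translation) for t in kept],
        reasoning_traces=[(t.question, t.reasoning, t.answer) for t in kept],
    )
    return model
```

The design point this sketch tries to capture, per the abstract, is that translation and reasoning share one self-generated training signal, so no external judge and no additional multilingual data are required.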

Community

Paper author · Paper submitter · edited 12 days ago

We propose TRIT, a self-improving framework that integrates translation and reasoning to enhance multilingual long reasoning without external feedback.

arXivLens breakdown of this paper 👉 https://arxivlens.com/PaperView/Details/self-improving-multilingual-long-reasoning-via-translation-reasoning-integrated-training-4613-db1460ec

  • Executive Summary
  • Detailed Breakdown
  • Practical Applications

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

  • Align to the Pivot: Dual Alignment with Self-Feedback for Multilingual Math Reasoning (https://huggingface.co/papers/2601.17671) (2026)
  • Gained in Translation: Privileged Pairwise Judges Enhance Multilingual Reasoning (https://huggingface.co/papers/2601.18722) (2026)
  • CURE-Med: Curriculum-Informed Reinforcement Learning for Multilingual Medical Reasoning (https://huggingface.co/papers/2601.13262) (2026)
  • Structured Reasoning for Large Language Models (https://huggingface.co/papers/2601.07180) (2026)
  • Do LLMs Need Inherent Reasoning Before Reinforcement Learning? A Study in Korean Self-Correction (https://huggingface.co/papers/2601.05459) (2026)
  • Language-Coupled Reinforcement Learning for Multilingual Retrieval-Augmented Generation (https://huggingface.co/papers/2601.14896) (2026)
  • Med-CoReasoner: Reducing Language Disparities in Medical Reasoning via Language-Informed Co-Reasoning (https://huggingface.co/papers/2601.08267) (2026)

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space: https://huggingface.co/spaces/librarian-bots/recommend_similar_papers

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper: 0

No model links this paper.

Cite arxiv.org/abs/2602.05940 in a model README.md to link it from this page.

Datasets citing this paper: 0

No dataset links this paper.

Cite arxiv.org/abs/2602.05940 in a dataset README.md to link it from this page.

Spaces citing this paper: 0

No Space links this paper.

Cite arxiv.org/abs/2602.05940 in a Space README.md to link it from this page.

Collections including this paper: 1