TranslateGemma Technical Report
\n","updatedAt":"2026-01-16T01:38:24.703Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":317,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.7295631170272827},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2601.09012","authors":[{"_id":"696869da0ac10a06522f6a98","name":"Mara Finkelstein","hidden":false},{"_id":"696869da0ac10a06522f6a99","name":"Isaac Caswell","hidden":false},{"_id":"696869da0ac10a06522f6a9a","name":"Tobias Domhan","hidden":false},{"_id":"696869da0ac10a06522f6a9b","name":"Jan-Thorsten Peter","hidden":false},{"_id":"696869da0ac10a06522f6a9c","name":"Juraj Juraska","hidden":false},{"_id":"696869da0ac10a06522f6a9d","name":"Parker Riley","hidden":false},{"_id":"696869da0ac10a06522f6a9e","name":"Daniel Deutsch","hidden":false},{"_id":"696869da0ac10a06522f6a9f","name":"Cole Dilanni","hidden":false},{"_id":"696869da0ac10a06522f6aa0","name":"Colin Cherry","hidden":false},{"_id":"696869da0ac10a06522f6aa1","name":"Eleftheria Briakou","hidden":false},{"_id":"696869da0ac10a06522f6aa2","name":"Elizabeth Nielsen","hidden":false},{"_id":"696869da0ac10a06522f6aa3","name":"Jiaming Luo","hidden":false},{"_id":"696869da0ac10a06522f6aa4","name":"Kat Black","hidden":false},{"_id":"696869da0ac10a06522f6aa5","user":{"_id":"65a402a53522df7a27125823","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/65a402a53522df7a27125823/tVR6lZlKmjPqi7dTZBGoS.jpeg","isPro":false,"fullname":"Ryan Mullins","user":"RyanMullins","type":"user"},"name":"Ryan Mullins","status":"claimed_verified","statusLastChangedAt":"2026-01-15T15:02:50.659Z","hidden":false},{"_id":"696869da0ac10a06522f6aa6","name":"Sweta Agrawal","hidden":false},{"_id":"696869da0ac10a06522f6aa7","name":"Wenda Xu","hidden":false},{"_id":"696869da0ac10a06522f6aa8","name":"Erin Kats","hidden":false},{"_id":"696869da0ac10a06522f6aa9","name":"Stephane Jaskiewicz","hidden":false},{"_id":"696869da0ac10a06522f6aaa","name":"Markus Freitag","hidden":false},{"_id":"696869da0ac10a06522f6aab","name":"David Vilar","hidden":false}],"publishedAt":"2026-01-13T22:23:24.000Z","submittedOnDailyAt":"2026-01-15T01:45:34.692Z","title":"TranslateGemma Technical Report","submittedOnDailyBy":{"_id":"6039478ab3ecf716b1a5fd4d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6039478ab3ecf716b1a5fd4d/_Thy4E7taiSYBLKxEKJbT.jpeg","isPro":true,"fullname":"taesiri","user":"taesiri","type":"user"},"summary":"We present TranslateGemma, a suite of open machine translation models based on the Gemma 3 foundation models. To enhance the inherent multilingual capabilities of Gemma 3 for the translation task, we employ a two-stage fine-tuning process. First, supervised fine-tuning is performed using a rich mixture of high-quality large-scale synthetic parallel data generated via state-of-the-art models and human-translated parallel data. This is followed by a reinforcement learning phase, where we optimize translation quality using an ensemble of reward models, including MetricX-QE and AutoMQM, targeting translation quality. 
We demonstrate the effectiveness of TranslateGemma with human evaluation on the WMT25 test set across 10 language pairs and with automatic evaluation on the WMT24++ benchmark across 55 language pairs. Automatic metrics show consistent and substantial gains over the baseline Gemma 3 models across all sizes. Notably, smaller TranslateGemma models often achieve performance comparable to larger baseline models, offering improved efficiency. We also show that TranslateGemma models retain strong multimodal capabilities, with enhanced performance on the Vistra image translation benchmark. The release of the open TranslateGemma models aims to provide the research community with powerful and adaptable tools for machine translation.","upvotes":19,"discussionId":"696869da0ac10a06522f6aac","ai_summary":"TranslateGemma enhances Gemma 3's multilingual capabilities through two-stage fine-tuning with synthetic and human-translated data, achieving superior translation quality with improved efficiency.","ai_keywords":["machine translation","Gemma 3","two-stage fine-tuning","supervised fine-tuning","reinforcement learning","reward models","MetricX-QE","AutoMQM","WMT25","WMT24++","Vistra"],"organization":{"_id":"5e6aca39878b8b2bf9806447","name":"google","fullname":"Google","avatar":"https://cdn-uploads.huggingface.co/production/uploads/5dd96eb166059660ed1ee413/WtA3YYitedOr9n02eHfJe.png"}},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"610a70f35a40a8bfebfbf09b","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1659922312540-610a70f35a40a8bfebfbf09b.jpeg","isPro":true,"fullname":"Daniel Bourke","user":"mrdbourke","type":"user"},{"_id":"63198263475d4fe409ae9020","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/63198263475d4fe409ae9020/UtrTAlMAPdssxZ6g0znq8.jpeg","isPro":false,"fullname":"Egor Lebedev","user":"codemurt","type":"user"},{"_id":"660e324e75d8e9403f76273d","avatarUrl":"/avatars/2c03aa6c6759c181e6de9a35cc296d7d.svg","isPro":false,"fullname":"Ian Jung","user":"Iann","type":"user"},{"_id":"5e6a3d4ea9afd5125d9ec064","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1584020801691-noauth.jpeg","isPro":true,"fullname":"Stefan Schweter","user":"stefan-it","type":"user"},{"_id":"63c1699e40a26dd2db32400d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/63c1699e40a26dd2db32400d/3N0-Zp8igv8-52mXAdiiq.jpeg","isPro":false,"fullname":"Chroma","user":"Chroma111","type":"user"},{"_id":"65a402a53522df7a27125823","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/65a402a53522df7a27125823/tVR6lZlKmjPqi7dTZBGoS.jpeg","isPro":false,"fullname":"Ryan Mullins","user":"RyanMullins","type":"user"},{"_id":"64c2a8642266ed6b44943d27","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/ZewjgvvDkujPIvbpMoM5-.jpeg","isPro":false,"fullname":"Mykola Haltiuk","user":"Goader","type":"user"},{"_id":"68e51a8638b8f9a087c947df","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/hhdqC4T5ADq57z2cH-vLI.png","isPro":false,"fullname":"David 
Vilar","user":"davvil","type":"user"},{"_id":"64ac0a67c275702c3c9728ff","avatarUrl":"/avatars/d59d02bf3b1cf66473681a307374454c.svg","isPro":false,"fullname":"adrian","user":"adriandj3","type":"user"},{"_id":"6317233cc92fd6fee317e030","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6317233cc92fd6fee317e030/cJHSvvimr1kqgQfHOjO5n.png","isPro":false,"fullname":"Tom Aarsen","user":"tomaarsen","type":"user"},{"_id":"620783f24e28382272337ba4","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/620783f24e28382272337ba4/zkUveQPNiDfYjgGhuFErj.jpeg","isPro":false,"fullname":"GuoLiangTang","user":"Tommy930","type":"user"},{"_id":"63732ebbbd81fae2b3aaf3fb","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1669551186189-63732ebbbd81fae2b3aaf3fb.jpeg","isPro":false,"fullname":"Knut Jägersberg","user":"KnutJaegersberg","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0,"organization":{"_id":"5e6aca39878b8b2bf9806447","name":"google","fullname":"Google","avatar":"https://cdn-uploads.huggingface.co/production/uploads/5dd96eb166059660ed1ee413/WtA3YYitedOr9n02eHfJe.png"}}">
AI-generated summary
TranslateGemma enhances Gemma 3's multilingual capabilities through two-stage fine-tuning with synthetic and human-translated data, achieving superior translation quality with improved efficiency.
Abstract
We present TranslateGemma, a suite of open machine translation models based on the Gemma 3 foundation models. To enhance the inherent multilingual capabilities of Gemma 3 for the translation task, we employ a two-stage fine-tuning process. First, supervised fine-tuning is performed using a rich mixture of high-quality, large-scale synthetic parallel data generated by state-of-the-art models and human-translated parallel data. This is followed by a reinforcement learning phase, in which we optimize translation quality using an ensemble of reward models, including MetricX-QE and AutoMQM. We demonstrate the effectiveness of TranslateGemma with human evaluation on the WMT25 test set across 10 language pairs and with automatic evaluation on the WMT24++ benchmark across 55 language pairs. Automatic metrics show consistent and substantial gains over the baseline Gemma 3 models across all sizes. Notably, smaller TranslateGemma models often match the performance of larger baseline models, offering improved efficiency. We also show that TranslateGemma models retain strong multimodal capabilities, with enhanced performance on the Vistra image translation benchmark. By releasing the open TranslateGemma models, we aim to provide the research community with powerful and adaptable tools for machine translation.
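The abstract names an ensemble of reward models (MetricX-QE and AutoMQM) as the RL optimization target but gives no implementation details here. The sketch below is an illustrative guess at how such an ensemble could be collapsed into a single scalar reward; the scorer functions and equal weighting are placeholder assumptions, not the authors' method.

```python
# Illustrative sketch (not the authors' code): combining an ensemble of
# quality-estimation scores into one scalar reward for RL fine-tuning.
# metricx_qe_stub and automqm_stub are placeholders standing in for the
# real MetricX-QE and AutoMQM models named in the abstract.

from typing import Callable, List

def metricx_qe_stub(source: str, hypothesis: str) -> float:
    """Placeholder for MetricX-QE: returns an error score (lower is better)."""
    return 25.0 * (1.0 - min(len(hypothesis), len(source))
                   / max(len(hypothesis), len(source), 1))

def automqm_stub(source: str, hypothesis: str) -> float:
    """Placeholder for AutoMQM: returns an MQM-style penalty (lower is better)."""
    return 5.0 if not hypothesis.strip() else 1.0

def ensemble_reward(source: str, hypothesis: str,
                    scorers: List[Callable[[str, str], float]],
                    weights: List[float]) -> float:
    """Negate and weight the error scores so higher reward = better translation."""
    assert len(scorers) == len(weights)
    return sum(-w * s(source, hypothesis) for s, w in zip(scorers, weights))

# Example: rank candidate translations by the ensemble reward during RL.
src = "Das Wetter ist heute schön."
for hyp in ["The weather is nice today.", ""]:
    r = ensemble_reward(src, hyp, [metricx_qe_stub, automqm_stub], [0.5, 0.5])
    print(f"{r:8.3f}  {hyp!r}")
```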
TranslateGemma extends Gemma 3 with two-stage fine-tuning (supervised then RL) for multilingual translation, achieving strong WMT performance and multimodal capabilities.
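Since the models are released openly, a natural way to try them would be through the Hugging Face transformers library. Everything below is a hedged sketch: the checkpoint ID and the prompt wording are guesses for illustration, and the model card on the Hub defines the real released names, auto classes, and recommended prompt format.

```python
# Hedged usage sketch: "google/translategemma-4b-it" is a hypothetical
# checkpoint ID; consult the Hugging Face Hub for the actual released models.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/translategemma-4b-it"  # placeholder ID, not confirmed
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The prompt format is assumed; the report / model card specifies the real one.
messages = [{"role": "user",
             "content": "Translate from English to German: The weather is nice today."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                       return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```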