
arxiv:2504.15431

Trillion 7B Technical Report

Published on Apr 21, 2025 · Submitted by Juyoung Suk on Apr 24, 2025
Authors: Sungjun Han, Juyoung Suk, Suyeong An, Hyungguk Kim, Kyuseok Kim, Wonsuk Yang, Seungtaek Choi, Jamin Shin

Abstract

We introduce Trillion-7B, the most token-efficient Korean-centric multilingual LLM available. Our novel Cross-lingual Document Attention (XLDA) mechanism enables highly efficient and effective knowledge transfer from English to target languages like Korean and Japanese. Combined with optimized data mixtures, language-specific filtering, and tailored tokenizer construction, Trillion-7B achieves competitive performance while dedicating only 10% of its 2T training tokens to multilingual data and requiring just 59.4K H100 GPU hours ($148K) for full training. Comprehensive evaluations across 27 benchmarks in four languages demonstrate Trillion-7B's robust multilingual performance and exceptional cross-lingual consistency.

AI-generated summary

Trillion-7B is a highly efficient multilingual LLM leveraging Cross-lingual Document Attention (XLDA) for knowledge transfer and achieving competitive performance with minimal multilingual training data.

Community

Paper author · Paper submitter

Technical report for Trillion-7B, Trillion Lab's latest large language model designed to push the boundaries of multilingual scalability and performance.

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper (2)

Datasets citing this paper (0)

No datasets link this paper yet. Cite arxiv.org/abs/2504.15431 in a dataset README.md to link it from this page.

Spaces citing this paper (2)

Collections including this paper (5)