arxiv:2401.01055

LLaMA Beyond English: An Empirical Study on Language Capability Transfer

Published on Jan 2, 2024 · Submitted by AK on Jan 2, 2024
Authors: Jun Zhao, Zhihao Zhang, Qi Zhang, Tao Gui, Xuanjing Huang

Abstract

AI-generated summary: A comprehensive study on transferring English-dominant LLMs to non-English languages using LLaMA demonstrates that comparable performance can be achieved with minimal pretraining data across various language capabilities and response quality metrics.

In recent times, substantial advancements have been witnessed in large language models (LLMs), exemplified by ChatGPT, showcasing remarkable proficiency across a range of complex tasks. However, many mainstream LLMs (e.g., LLaMA) are pretrained on English-dominant corpora, which limits their performance in non-English languages. In this paper, we focus on how to effectively transfer the capabilities of language generation and instruction following to a non-English language. To answer this question, we conduct an extensive empirical investigation based on LLaMA, accumulating over 1440 GPU hours. We analyze the impact of key factors such as vocabulary extension, further pretraining, and instruction tuning on transfer. To accurately assess the model's level of knowledge, we employ four widely used standardized testing benchmarks: C-Eval, MMLU, AGI-Eval, and GAOKAO-Bench. Furthermore, a comprehensive evaluation of the model's response quality is conducted, considering aspects such as accuracy, fluency, informativeness, logical coherence, and harmlessness, based on LLM-Eval, a benchmark consisting of instruction tasks from 17 diverse categories. Our evaluation results demonstrate that performance comparable to state-of-the-art transfer models can be achieved with less than 1% of the pretraining data, both in terms of knowledge alignment and response quality. Furthermore, the experimental outcomes across thirteen low-resource languages exhibit similar trends. We anticipate that the conclusions revealed by the experiments will aid the community in developing non-English LLMs.
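To make the vocabulary-extension step mentioned in the abstract concrete, here is a minimal sketch using the Hugging Face transformers API. It illustrates the general technique only, not the authors' implementation; the checkpoint name and the added tokens are placeholders.

```python
# Minimal sketch: extend a LLaMA-style tokenizer with target-language tokens
# and grow the embedding matrix accordingly. The new embedding rows are
# randomly initialized; further pretraining / instruction tuning then learns
# useful values for them.
from transformers import AutoTokenizer, AutoModelForCausalLM

base_model = "meta-llama/Llama-2-7b-hf"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# Placeholder tokens for the target language (e.g. frequent Chinese subwords
# produced by a separately trained tokenizer).
new_tokens = ["你好", "世界"]
num_added = tokenizer.add_tokens(new_tokens)

# Resize so the new token ids map to rows in the embedding and LM-head matrices.
model.resize_token_embeddings(len(tokenizer))
print(f"Added {num_added} tokens; new vocab size = {len(tokenizer)}")
```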

Community

Trying to apply this to CLIP.

Any idea of how to actually fine tune it? Would you use axolotl or something else? Thanks

I'd have to add to the tokenizer, then I would use open_clip or a tool made with open_clip most likely. I might have to write custom code, but I'm sure it'll work out of the box since all I need to do custom is the tokenizer.
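For reference, a minimal sketch of the tokenizer-extension step described in that reply, using the Hugging Face CLIP classes as a stand-in for open_clip; the checkpoint and the added tokens are placeholders, and the resized embeddings would still need training on target-language image-text pairs.

```python
# Sketch: extend a CLIP text tokenizer and resize the text tower's embeddings.
# Illustration only; checkpoint name and tokens are placeholders.
from transformers import CLIPTokenizerFast, CLIPTextModel

ckpt = "openai/clip-vit-base-patch32"  # placeholder checkpoint
tokenizer = CLIPTokenizerFast.from_pretrained(ckpt)
text_encoder = CLIPTextModel.from_pretrained(ckpt)

num_added = tokenizer.add_tokens(["你好", "世界"])  # placeholder target-language tokens
text_encoder.resize_token_embeddings(len(tokenizer))  # grows the token embedding table
print(f"Added {num_added} tokens; text vocab size = {len(tokenizer)}")
```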

Cracking the Code: How LLaMA is Revolutionizing Non-English AI

Links 🔗:

👉 Subscribe: https://www.youtube.com/@Arxflix
👉 Twitter: https://x.com/arxflix
👉 LMNT (Partner): https://lmnt.com/

By Arxflix


Models citing this paper 0


Datasets citing this paper 1

Spaces citing this paper 0


Collections including this paper 22