Papers
arxiv:2503.05500

EuroBERT: Scaling Multilingual Encoders for European Languages

Published on Mar 7, 2025 · Submitted by Nicolas-BZRD on Mar 10, 2025
#3 Paper of the day

Abstract

General-purpose multilingual vector representations, used in retrieval, regression and classification, are traditionally obtained from bidirectional encoder models. Despite their wide applicability, encoders have been recently overshadowed by advances in generative decoder-only models. However, many innovations driving this progress are not inherently tied to decoders. In this paper, we revisit the development of multilingual encoders through the lens of these advances, and introduce EuroBERT, a family of multilingual encoders covering European and widely spoken global languages. Our models outperform existing alternatives across a diverse range of tasks, spanning multilingual capabilities, mathematics, and coding, and natively supporting sequences of up to 8,192 tokens. We also examine the design decisions behind EuroBERT, offering insights into our dataset composition and training pipeline. We publicly release the EuroBERT models, including intermediate training checkpoints, together with our training framework.

AI-generated summary

EuroBERT, a family of multilingual encoders covering European and global languages, outperforms existing models across various tasks and supports long sequences, surpassing traditional bidirectional encoders.
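Since EuroBERT is an encoder, it emits one vector per token; a common way to turn that into a single sentence embedding for retrieval or classification is attention-mask-aware mean pooling. A minimal NumPy sketch of just the pooling step (toy arrays stand in for real model outputs; all names are illustrative, not part of the EuroBERT release):

```python
import numpy as np

def mean_pool(token_embeddings: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Average token vectors while ignoring padding positions.

    token_embeddings: (batch, seq_len, hidden)
    attention_mask:   (batch, seq_len), 1 for real tokens, 0 for padding
    """
    mask = attention_mask[..., None].astype(token_embeddings.dtype)  # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(axis=1)                   # (batch, hidden)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)                   # guard against all-pad rows
    return summed / counts

# Toy batch: 2 sequences, 4 tokens, hidden size 3; the second is padded after 2 tokens.
emb = np.arange(24, dtype=np.float64).reshape(2, 4, 3)
mask = np.array([[1, 1, 1, 1], [1, 1, 0, 0]])
pooled = mean_pool(emb, mask)
print(pooled.shape)  # (2, 3)
```

With a real checkpoint, `token_embeddings` would be the encoder's last hidden state and `attention_mask` the tokenizer's mask; the pooling itself is model-agnostic.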

Community


Very interesting that ModernBERT, NeoBERT, and now EuroBERT do not present results on token classification tasks.

I ran them for ModernBERT and NeoBERT and the results are pretty bad, so I'm wondering when we'll see papers tackling this. I have some ideas why, and I'm curious to see EuroBERT evaluated on the CoNLL-2002 and CoNLL-2003 family.
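For context, token classification on CoNLL-style data hinges on aligning word-level NER labels to subword tokens, typically labelling only a word's first subword and masking the rest with -100 so the loss skips them. A minimal, tokenizer-agnostic sketch (the `word_ids` list mimics what Hugging Face fast tokenizers return; the tokens and label ids are made up for illustration):

```python
def align_labels(word_ids, word_labels, ignore_index=-100):
    """Map word-level NER labels onto subword tokens.

    word_ids: per-token word index, None for special tokens ([CLS], [SEP], padding)
    word_labels: one label id per word
    Only the first subword of each word keeps the label; continuation subwords
    and special tokens get ignore_index so the loss ignores them.
    """
    aligned, previous = [], None
    for wid in word_ids:
        if wid is None:                     # special token
            aligned.append(ignore_index)
        elif wid != previous:               # first subword of a new word
            aligned.append(word_labels[wid])
        else:                               # continuation subword
            aligned.append(ignore_index)
        previous = wid
    return aligned

# "EU rejects German call": EU=3 (B-ORG), rejects=0 (O), German=7 (B-MISC), call=0 (O)
word_ids = [None, 0, 1, 2, 2, 3, None]      # [CLS] EU rejects Ger ##man call [SEP]
labels = align_labels(word_ids, [3, 0, 7, 0])
print(labels)  # [-100, 3, 0, 7, -100, 0, -100]
```

Subtle differences in this alignment (and in how special tokens are handled) are one place where encoder comparisons on NER can quietly diverge.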

Paper author Paper submitter

@stefan-it We are currently running experiments on NER. They will come with a v1.5 update of the paper for our conference submission 👌

We can discuss it offline @stefan-it. Nicolas is currently skiing in the Alps :D but we could get in touch if you wish.


Happy ⛷️ @Nicolas-BZRD! Many thanks @PierreColombo, I would be highly interested in that. Could you please write me a message on LinkedIn? I am really looking forward to it!

Awesome model, can't wait to see what the community does with it!

Would you consider adding the results on the Massive Text Embedding Benchmark (MTEB)?


As a heads up, the 3 EuroBERT models released today are very much "base" models, i.e. they're not yet finetuned for specific tasks like retrieval.
For evaluation, the authors simply reran the same training script with several of these base models to show that finetuned EuroBERT is generally stronger than e.g. finetuned XLM-RoBERTa.

I really hope that some of the excellent labs/companies that finetune embedding models (Nomic, BAAI, Mixedbread, Jina, Alibaba, Snowflake, IBM, etc.) will pick this up and release a strong embedding model for retrieval.
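For anyone curious what that finetuning step involves: retrieval embedders are typically trained with an in-batch contrastive (InfoNCE-style) objective, where each query's matching document is the positive and the rest of the batch serves as negatives (this is the idea behind sentence-transformers' MultipleNegativesRankingLoss). A minimal NumPy sketch of that loss (unit-vector toy inputs stand in for real encoder outputs; names and the scale value are illustrative):

```python
import numpy as np

def in_batch_contrastive_loss(queries: np.ndarray, docs: np.ndarray, scale: float = 20.0) -> float:
    """InfoNCE over in-batch negatives: row i of `docs` is the positive for
    query i; every other row acts as a negative. Inputs: (batch, dim)."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    d = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    logits = scale * q @ d.T                       # (batch, batch) scaled cosine similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))     # cross-entropy with target = diagonal

q = np.eye(4, 8)                                   # 4 orthonormal "query" embeddings
loss_good = in_batch_contrastive_loss(q, q)                      # positives on the diagonal
loss_bad = in_batch_contrastive_loss(q, np.roll(q, 1, axis=0))   # positives misaligned
print(loss_good < 1e-6 < loss_bad)  # True: aligned pairs give near-zero loss
```

In practice both `queries` and `docs` come from the same (or paired) encoders and gradients flow through them; the loss pushes matching pairs together and in-batch negatives apart.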

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

* Training Sparse Mixture Of Experts Text Embedding Models (https://huggingface.co/papers/2502.07972) (2025)
* DRAMA: Diverse Augmentation from Large Language Models to Smaller Dense Retrievers (https://huggingface.co/papers/2502.18460) (2025)
* Adapting Decoder-Based Language Models for Diverse Encoder Downstream Tasks (https://huggingface.co/papers/2503.02656) (2025)
* Granite Embedding Models (https://huggingface.co/papers/2502.20204) (2025)
* LayAlign: Enhancing Multilingual Reasoning in Large Language Models via Layer-Wise Adaptive Fusion and Alignment Strategy (https://huggingface.co/papers/2502.11405) (2025)
* One Model to Train them All: Hierarchical Self-Distillation for Enhanced Early Layer Embeddings (https://huggingface.co/papers/2503.03008) (2025)
* Multilingual Language Model Pretraining using Machine-translated Data (https://huggingface.co/papers/2502.13252) (2025)

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space: https://huggingface.co/spaces/librarian-bots/recommend_similar_papers

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper 10


Datasets citing this paper 2

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2503.05500 in a Space README.md to link it from this page.

Collections including this paper 10