

\n","updatedAt":"2024-11-27T01:34:20.793Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.7026968598365784},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2411.16508","authors":[{"_id":"6745a57662eb3714377eef30","name":"Ashmal Vayani","hidden":false},{"_id":"6745a57662eb3714377eef31","user":{"_id":"6639cd849662bb58d5fe5793","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6639cd849662bb58d5fe5793/PHUa38L5_llipN52ngajY.png","isPro":false,"fullname":"Dissanayake","user":"Dinura","type":"user"},"name":"Dinura Dissanayake","status":"claimed_verified","statusLastChangedAt":"2025-02-04T09:40:00.685Z","hidden":false},{"_id":"6745a57662eb3714377eef32","name":"Hasindri Watawana","hidden":false},{"_id":"6745a57662eb3714377eef33","name":"Noor Ahsan","hidden":false},{"_id":"6745a57662eb3714377eef34","name":"Nevasini Sasikumar","hidden":false},{"_id":"6745a57662eb3714377eef35","user":{"_id":"64b7d2ad8c632fbca9507431","avatarUrl":"/avatars/76c31ea218108cf6c3715269f7605404.svg","isPro":false,"fullname":"Omkar Thawakar","user":"omkarthawakar","type":"user"},"name":"Omkar Thawakar","status":"claimed_verified","statusLastChangedAt":"2025-11-03T21:05:45.798Z","hidden":false},{"_id":"6745a57662eb3714377eef36","user":{"_id":"62c488530a06c03931610fe0","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/62c488530a06c03931610fe0/Zi5mTvE-R0bHOjuUEl56k.jpeg","isPro":false,"fullname":"Henok Ademtew","user":"Henok","type":"user"},"name":"Henok Biadglign Ademtew","status":"claimed_verified","statusLastChangedAt":"2025-02-12T09:19:13.428Z","hidden":false},{"_id":"6745a57662eb3714377eef37","name":"Yahya Hmaiti","hidden":false},{"_id":"6745a57662eb3714377eef38","name":"Amandeep Kumar","hidden":false},{"_id":"6745a57662eb3714377eef39","user":{"_id":"64f3114bd818f024d9950642","avatarUrl":"/avatars/9d0129d85264592251a2e8fab05e02cf.svg","isPro":false,"fullname":"kartik kuckreja","user":"kartik060702","type":"user"},"name":"Kartik Kuckreja","status":"claimed_verified","statusLastChangedAt":"2025-06-24T08:15:16.392Z","hidden":false},{"_id":"6745a57662eb3714377eef3a","name":"Mykola Maslych","hidden":false},{"_id":"6745a57662eb3714377eef3b","name":"Wafa Al Ghallabi","hidden":false},{"_id":"6745a57662eb3714377eef3c","name":"Mihail Mihaylov","hidden":false},{"_id":"6745a57662eb3714377eef3d","name":"Chao Qin","hidden":false},{"_id":"6745a57662eb3714377eef3e","name":"Abdelrahman M Shaker","hidden":false},{"_id":"6745a57662eb3714377eef3f","user":{"_id":"60d33fbbd7b174177faabd4f","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/60d33fbbd7b174177faabd4f/pfyv_xj2B2m2N4F4sT9zJ.jpeg","isPro":true,"fullname":"Mike Zhang","user":"jjzha","type":"user"},"name":"Mike Zhang","status":"claimed_verified","statusLastChangedAt":"2024-11-28T21:15:26.163Z","hidden":false},{"_id":"6745a57662eb3714377eef40","name":"Mahardika Krisna Ihsani","hidden":false},{"_id":"6745a57662eb3714377eef41","name":"Amiel 
Esplana","hidden":false},{"_id":"6745a57662eb3714377eef42","name":"Monil Gokani","hidden":false},{"_id":"6745a57662eb3714377eef43","name":"Shachar Mirkin","hidden":false},{"_id":"6745a57662eb3714377eef44","name":"Harsh Singh","hidden":false},{"_id":"6745a57662eb3714377eef45","name":"Ashay Srivastava","hidden":false},{"_id":"6745a57662eb3714377eef46","name":"Endre Hamerlik","hidden":false},{"_id":"6745a57662eb3714377eef47","name":"Fathinah Asma Izzati","hidden":false},{"_id":"6745a57662eb3714377eef48","name":"Fadillah Adamsyah Maani","hidden":false},{"_id":"6745a57662eb3714377eef49","name":"Sebastian Cavada","hidden":false},{"_id":"6745a57662eb3714377eef4a","name":"Jenny Chim","hidden":false},{"_id":"6745a57662eb3714377eef4b","name":"Rohit Gupta","hidden":false},{"_id":"6745a57662eb3714377eef4c","name":"Sanjay Manjunath","hidden":false},{"_id":"6745a57662eb3714377eef4d","name":"Kamila Zhumakhanova","hidden":false},{"_id":"6745a57662eb3714377eef4e","name":"Feno Heriniaina Rabevohitra","hidden":false},{"_id":"6745a57662eb3714377eef4f","name":"Azril Amirudin","hidden":false},{"_id":"6745a57662eb3714377eef50","name":"Muhammad Ridzuan","hidden":false},{"_id":"6745a57662eb3714377eef51","name":"Daniya Kareem","hidden":false},{"_id":"6745a57662eb3714377eef52","name":"Ketan More","hidden":false},{"_id":"6745a57662eb3714377eef53","name":"Kunyang Li","hidden":false},{"_id":"6745a57662eb3714377eef54","name":"Pramesh Shakya","hidden":false},{"_id":"6745a57662eb3714377eef55","name":"Muhammad Saad","hidden":false},{"_id":"6745a57662eb3714377eef56","name":"Amirpouya Ghasemaghaei","hidden":false},{"_id":"6745a57662eb3714377eef57","user":{"_id":"63791fc77df2fefdcaf17c65","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/63791fc77df2fefdcaf17c65/kdq8k_xc28VhFsSMPHbV0.jpeg","isPro":false,"fullname":"Amir Djanibekov","user":"amupd","type":"user"},"name":"Amirbek Djanibekov","status":"claimed_verified","statusLastChangedAt":"2025-09-21T13:14:17.683Z","hidden":false},{"_id":"6745a57662eb3714377eef58","name":"Dilshod Azizov","hidden":false},{"_id":"6745a57662eb3714377eef59","name":"Branislava Jankovic","hidden":false},{"_id":"6745a57662eb3714377eef5a","name":"Naman Bhatia","hidden":false},{"_id":"6745a57662eb3714377eef5b","name":"Alvaro Cabrera","hidden":false},{"_id":"6745a57662eb3714377eef5c","user":{"_id":"6478f30ea68454566353ef95","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6478f30ea68454566353ef95/hX5NiVPyQbK8TIBKnT4J1.jpeg","isPro":false,"fullname":"Johan Samir Obando Ceron","user":"johanobandoc","type":"user"},"name":"Johan Obando-Ceron","status":"claimed_verified","statusLastChangedAt":"2025-11-11T19:57:45.035Z","hidden":false},{"_id":"6745a57662eb3714377eef5d","name":"Olympiah Otieno","hidden":false},{"_id":"6745a57662eb3714377eef5e","user":{"_id":"649edc8ca74a8d2dc44f117e","avatarUrl":"/avatars/33064f5dd5c8029278cdaa01cd16d4ab.svg","isPro":false,"fullname":"Fabian Farestam","user":"northern-64bit","type":"user"},"name":"Fabian Farestam","status":"claimed_verified","statusLastChangedAt":"2025-04-01T15:59:53.780Z","hidden":false},{"_id":"6745a57662eb3714377eef5f","name":"Muztoba Rabbani","hidden":false},{"_id":"6745a57662eb3714377eef60","name":"Sanoojan Baliah","hidden":false},{"_id":"6745a57662eb3714377eef61","name":"Santosh Sanjeev","hidden":false},{"_id":"6745a57662eb3714377eef62","name":"Abduragim Shtanchaev","hidden":false},{"_id":"6745a57662eb3714377eef63","name":"Maheen Fatima","hidden":false},{"_id":"6745a57662eb3714377eef64","name":"Thao 
Nguyen","hidden":false},{"_id":"6745a57662eb3714377eef65","name":"Amrin Kareem","hidden":false},{"_id":"6745a57662eb3714377eef66","name":"Toluwani Aremu","hidden":false},{"_id":"6745a57662eb3714377eef67","name":"Nathan Xavier","hidden":false},{"_id":"6745a57662eb3714377eef68","name":"Amit Bhatkal","hidden":false},{"_id":"6745a57662eb3714377eef69","user":{"_id":"64457216b272430bdbf3f7df","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/64457216b272430bdbf3f7df/Dpz8pnPMMn83PxjNR5Bpf.jpeg","isPro":false,"fullname":"Hawau Olamide Toyin","user":"herwoww","type":"user"},"name":"Hawau Toyin","status":"claimed_verified","statusLastChangedAt":"2025-06-10T09:31:40.562Z","hidden":false},{"_id":"6745a57662eb3714377eef6a","user":{"_id":"63a4754927f1f64ed7238dac","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/63a4754927f1f64ed7238dac/aH-eJF-31g4vof9jv2gmI.jpeg","isPro":false,"fullname":"Aman Chadha","user":"amanchadha","type":"user"},"name":"Aman Chadha","status":"claimed_verified","statusLastChangedAt":"2024-11-27T22:20:43.573Z","hidden":false},{"_id":"6745a57662eb3714377eef6b","name":"Hisham Cholakkal","hidden":false},{"_id":"6745a57662eb3714377eef6c","name":"Rao Muhammad Anwer","hidden":false},{"_id":"6745a57662eb3714377eef6d","name":"Michael Felsberg","hidden":false},{"_id":"6745a57662eb3714377eef6e","name":"Jorma Laaksonen","hidden":false},{"_id":"6745a57662eb3714377eef6f","name":"Thamar Solorio","hidden":false},{"_id":"6745a57662eb3714377eef70","name":"Monojit Choudhury","hidden":false},{"_id":"6745a57662eb3714377eef71","name":"Ivan Laptev","hidden":false},{"_id":"6745a57662eb3714377eef72","name":"Mubarak Shah","hidden":false},{"_id":"6745a57662eb3714377eef73","name":"Salman Khan","hidden":false},{"_id":"6745a57662eb3714377eef74","name":"Fahad Khan","hidden":false}],"publishedAt":"2024-11-25T15:44:42.000Z","submittedOnDailyAt":"2024-11-26T21:29:34.313Z","title":"All Languages Matter: Evaluating LMMs on Culturally Diverse 100\n Languages","submittedOnDailyBy":{"_id":"63a4754927f1f64ed7238dac","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/63a4754927f1f64ed7238dac/aH-eJF-31g4vof9jv2gmI.jpeg","isPro":false,"fullname":"Aman Chadha","user":"amanchadha","type":"user"},"summary":"Existing Large Multimodal Models (LMMs) generally focus on only a few regions\nand languages. As LMMs continue to improve, it is increasingly important to\nensure they understand cultural contexts, respect local sensitivities, and\nsupport low-resource languages, all while effectively integrating corresponding\nvisual cues. In pursuit of culturally diverse global multimodal models, our\nproposed All Languages Matter Benchmark (ALM-bench) represents the largest and\nmost comprehensive effort to date for evaluating LMMs across 100 languages.\nALM-bench challenges existing models by testing their ability to understand and\nreason about culturally diverse images paired with text in various languages,\nincluding many low-resource languages traditionally underrepresented in LMM\nresearch. The benchmark offers a robust and nuanced evaluation framework\nfeaturing various question formats, including true/false, multiple choice, and\nopen-ended questions, which are further divided into short and long-answer\ncategories. ALM-bench design ensures a comprehensive assessment of a model's\nability to handle varied levels of difficulty in visual and linguistic\nreasoning. 
To capture the rich tapestry of global cultures, ALM-bench carefully\ncurates content from 13 distinct cultural aspects, ranging from traditions and\nrituals to famous personalities and celebrations. Through this, ALM-bench not\nonly provides a rigorous testing ground for state-of-the-art open and\nclosed-source LMMs but also highlights the importance of cultural and\nlinguistic inclusivity, encouraging the development of models that can serve\ndiverse global populations effectively. Our benchmark is publicly available.","upvotes":10,"discussionId":"6745a57a62eb3714377ef0f4","githubRepo":"https://github.com/mbzuai-oryx/ALM-Bench","githubRepoAddedBy":"auto","ai_summary":"ALM-bench provides a comprehensive evaluation of Large Multimodal Models across 100 languages, focusing on cultural and linguistic inclusivity.","ai_keywords":["Large Multimodal Models","LMMs","ALM-bench","culturally diverse images","low-resource languages","visual and linguistic reasoning","true/false questions","multiple choice","open-ended questions","short and long-answer categories","cultural aspects","traditions","rituals","famous personalities","celebrations","global cultures","cultural and linguistic inclusivity"],"githubStars":47},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"63a4754927f1f64ed7238dac","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/63a4754927f1f64ed7238dac/aH-eJF-31g4vof9jv2gmI.jpeg","isPro":false,"fullname":"Aman Chadha","user":"amanchadha","type":"user"},{"_id":"674661b90ba8b132df388878","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/ooiQvmlA7_9Tv2JrUdEQM.png","isPro":false,"fullname":"Peter Shaw","user":"TigerML","type":"user"},{"_id":"63082bb7bc0a2a5ee2253523","avatarUrl":"/avatars/6cf8d12d16d15db1070fbea89b5b3967.svg","isPro":false,"fullname":"Kuo-Hsin Tu","user":"dapumptu","type":"user"},{"_id":"64d4615cf8082bf19b916492","avatarUrl":"/avatars/8e1b59565ec5e4b31090cf1b911781b9.svg","isPro":false,"fullname":"wongyukim","user":"wongyukim","type":"user"},{"_id":"644e1b1d9b4e87c31bab0a14","avatarUrl":"/avatars/88bb4c4a67dc8958069e9014f5e73a0b.svg","isPro":false,"fullname":"Michael Barry","user":"MichaelBarryUK","type":"user"},{"_id":"62a740afafe38c48674729d2","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1655128233198-noauth.jpeg","isPro":false,"fullname":"Yegor Sviridov","user":"Rexschwert","type":"user"},{"_id":"65248f618c2691925ddbe13f","avatarUrl":"/avatars/7d485f6582090c79d3e5de7b2209deac.svg","isPro":false,"fullname":"Aayush Srivastava","user":"aayushsrivastava","type":"user"},{"_id":"60d33fbbd7b174177faabd4f","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/60d33fbbd7b174177faabd4f/pfyv_xj2B2m2N4F4sT9zJ.jpeg","isPro":true,"fullname":"Mike Zhang","user":"jjzha","type":"user"},{"_id":"6639cd849662bb58d5fe5793","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6639cd849662bb58d5fe5793/PHUa38L5_llipN52ngajY.png","isPro":false,"fullname":"Dissanayake","user":"Dinura","type":"user"},{"_id":"649edc8ca74a8d2dc44f117e","avatarUrl":"/avatars/33064f5dd5c8029278cdaa01cd16d4ab.svg","isPro":false,"fullname":"Fabian Farestam","user":"northern-64bit","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0}">
Papers
arxiv:2411.16508

All Languages Matter: Evaluating LMMs on Culturally Diverse 100 Languages

Published on Nov 25, 2024 · Submitted by Aman Chadha on Nov 26, 2024
Authors: Ashmal Vayani, Dinura Dissanayake, Hasindri Watawana, Noor Ahsan, Nevasini Sasikumar, Omkar Thawakar, Henok Biadglign Ademtew, Yahya Hmaiti, Amandeep Kumar, Kartik Kuckreja, Mykola Maslych, Wafa Al Ghallabi, Mihail Mihaylov, Chao Qin, Abdelrahman M Shaker, Mike Zhang, +53 authors

Abstract

AI-generated summary: ALM-bench provides a comprehensive evaluation of Large Multimodal Models across 100 languages, focusing on cultural and linguistic inclusivity.

Existing Large Multimodal Models (LMMs) generally focus on only a few regions and languages. As LMMs continue to improve, it is increasingly important to ensure they understand cultural contexts, respect local sensitivities, and support low-resource languages, all while effectively integrating corresponding visual cues. In pursuit of culturally diverse global multimodal models, our proposed All Languages Matter Benchmark (ALM-bench) represents the largest and most comprehensive effort to date for evaluating LMMs across 100 languages. ALM-bench challenges existing models by testing their ability to understand and reason about culturally diverse images paired with text in various languages, including many low-resource languages traditionally underrepresented in LMM research. The benchmark offers a robust and nuanced evaluation framework featuring various question formats, including true/false, multiple choice, and open-ended questions, which are further divided into short and long-answer categories. The ALM-bench design ensures a comprehensive assessment of a model's ability to handle varied levels of difficulty in visual and linguistic reasoning. To capture the rich tapestry of global cultures, ALM-bench carefully curates content from 13 distinct cultural aspects, ranging from traditions and rituals to famous personalities and celebrations. Through this, ALM-bench not only provides a rigorous testing ground for state-of-the-art open and closed-source LMMs but also highlights the importance of cultural and linguistic inclusivity, encouraging the development of models that can serve diverse global populations effectively. Our benchmark is publicly available.
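As a concrete illustration of how such a mixed-format benchmark can be scored, here is a minimal Python sketch of an evaluation loop that routes each sample by question type. The field names (language, question_type, answer), the model.generate interface, and the matching rules are illustrative assumptions rather than the paper's official protocol; see https://github.com/mbzuai-oryx/ALM-Bench for the actual evaluation code.

```python
# Hypothetical sketch of an ALM-bench-style evaluation loop.
# Field names and the model interface are assumptions for illustration.
from collections import defaultdict

def evaluate(model, samples):
    """Score a model on QA samples, grouped by (language, question type)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for s in samples:
        # Each sample pairs an image with a question in one of 100 languages.
        key = (s["language"], s["question_type"])  # e.g. ("Urdu", "mcq")
        pred = model.generate(image=s["image"], prompt=s["question"])
        if s["question_type"] in ("true_false", "mcq"):
            # Closed-form questions: exact match after normalization.
            ok = pred.strip().lower() == s["answer"].strip().lower()
        else:
            # Open-ended (short/long) answers: crude substring stand-in.
            ok = s["answer"].strip().lower() in pred.strip().lower()
        correct[key] += ok
        total[key] += 1
    return {k: correct[k] / total[k] for k in total}
```

Benchmarks of this kind usually score open-ended answers with an LLM judge rather than the crude substring check above; the sketch is only meant to show how the four question formats lead to different scoring paths.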

Community

Paper author · Paper submitter

🚀 Introducing All Languages Matter: Evaluating LMMs on Culturally Diverse 100 Languages (ALM-Bench): a culturally diverse multilingual and multimodal VQA benchmark covering 100 languages with 22.7K question-answer pairs. ALM-Bench encompasses 19 generic and culture-specific domains for each language, enriched with four diverse question types.

๐ŸŒ With over 800 hours of human annotations, ALM-Bench is meticulously curated and verified with native-language experts to assess the next generation of massively multilingual multimodal models in a standardized way, pushing the boundaries of LMMs towards better cultural understanding and inclusivity.


Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2411.16508 in a model README.md to link it from this page.
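For example, a model card containing a line like the following would create the link (the wording is hypothetical; per the note above, it is the arXiv reference itself that matters):

    This model was evaluated on ALM-Bench (https://arxiv.org/abs/2411.16508).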

Datasets citing this paper 1

Spaces citing this paper 1

Collections including this paper 2