Paper page - Omni-MATH: A Universal Olympiad Level Mathematic Benchmark For Large Language Models

Code: https://github.com/KbsdJames/Omni-MATH
Project Page: https://omni-math.github.io/
HF Dataset: https://huggingface.co/datasets/KbsdJames/Omni-MATH
Omni-Judge: https://huggingface.co/KbsdJames/Omni-Judge
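For readers who want to poke at the benchmark directly, here is a minimal sketch that loads the HF dataset with the `datasets` library and tallies problems per difficulty level. The split name and the field names (`difficulty` in particular) are assumptions based on the dataset description, not confirmed schema; check the dataset card for the actual columns.

```python
# Minimal sketch: load Omni-MATH from the Hugging Face Hub and inspect it.
# Requires the `datasets` library (pip install datasets).
# The split name ("test") and field name ("difficulty") are assumptions
# taken from the paper's description; verify against the dataset card.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("KbsdJames/Omni-MATH", split="test")
print(f"{len(ds)} problems")  # the paper reports 4428 problems in total

# Tally problems per (assumed) difficulty level.
by_difficulty = Counter(ex.get("difficulty") for ex in ds)
for level, count in sorted(by_difficulty.items(), key=lambda kv: str(kv[0])):
    print(f"difficulty {level}: {count} problems")
```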

\n","updatedAt":"2024-10-15T01:39:10.115Z","author":{"_id":"65ae21adabf6d1ccb795e9a4","avatarUrl":"/avatars/b5dced62c6a3564095a8fa0959bc06cb.svg","fullname":"Bofei Gao","name":"KbsdJames","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":16,"isUserFollowing":false}},"numEdits":1,"editors":["KbsdJames"],"editorAvatarUrls":["/avatars/b5dced62c6a3564095a8fa0959bc06cb.svg"],"reactions":[],"isReport":false}},{"id":"670f181adb1a6bcfe849fe0b","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false},"createdAt":"2024-10-16T01:34:18.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [TableBench: A Comprehensive and Complex Benchmark for Table Question Answering](https://huggingface.co/papers/2408.09174) (2024)\n* [CMM-Math: A Chinese Multimodal Math Dataset To Evaluate and Enhance the Mathematics Reasoning of Large Multimodal Models](https://huggingface.co/papers/2409.02834) (2024)\n* [ErrorRadar: Benchmarking Complex Mathematical Reasoning of Multimodal Large Language Models Via Error Detection](https://huggingface.co/papers/2410.04509) (2024)\n* [MathHay: An Automated Benchmark for Long-Context Mathematical Reasoning in LLMs](https://huggingface.co/papers/2410.04698) (2024)\n* [OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data](https://huggingface.co/papers/2410.01560) (2024)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`","html":"

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

\n

The following papers were recommended by the Semantic Scholar API

\n\n

Please give a thumbs up to this comment if you found it helpful!

\n

If you want recommendations for any Paper on Hugging Face checkout this Space

\n

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: \n\n@librarian-bot\n\t recommend

\n","updatedAt":"2024-10-16T01:34:18.868Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false}},"numEdits":0,"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2410.07985","authors":[{"_id":"6709313569a07aa53e3f5e88","user":{"_id":"65ae21adabf6d1ccb795e9a4","avatarUrl":"/avatars/b5dced62c6a3564095a8fa0959bc06cb.svg","isPro":false,"fullname":"Bofei Gao","user":"KbsdJames","type":"user"},"name":"Bofei Gao","status":"admin_assigned","statusLastChangedAt":"2024-10-15T13:58:13.843Z","hidden":false},{"_id":"6709313569a07aa53e3f5e89","user":{"_id":"6447ca6ca478b20f1755b294","avatarUrl":"/avatars/5049856b5ed1b74533fff902e14b4c7c.svg","isPro":false,"fullname":"Feifan Song","user":"songff","type":"user"},"name":"Feifan Song","status":"admin_assigned","statusLastChangedAt":"2024-10-15T14:00:23.150Z","hidden":false},{"_id":"6709313569a07aa53e3f5e8a","name":"Zhe Yang","hidden":false},{"_id":"6709313569a07aa53e3f5e8b","user":{"_id":"64b15284372d4340772a3dca","avatarUrl":"/avatars/417d5f1bc1bcb5e4d5de6169673c2cf7.svg","isPro":false,"fullname":"Zefan Cai","user":"ZefanCai","type":"user"},"name":"Zefan Cai","status":"admin_assigned","statusLastChangedAt":"2024-10-15T14:01:36.446Z","hidden":false},{"_id":"6709313569a07aa53e3f5e8c","user":{"_id":"64a139c098fad0c8a5a627a4","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/64a139c098fad0c8a5a627a4/MgWortQS64cJOZ3pL4WyH.jpeg","isPro":false,"fullname":"Yibo Miao","user":"instro","type":"user"},"name":"Yibo Miao","status":"admin_assigned","statusLastChangedAt":"2024-10-15T14:01:42.392Z","hidden":false},{"_id":"6709313569a07aa53e3f5e8d","user":{"_id":"670740744341dcee459fb990","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/66UkZvrAk7fQr5YCylEFk.png","isPro":false,"fullname":"Rosy24","user":"Rsy24","type":"user"},"name":"Qingxiu Dong","status":"admin_assigned","statusLastChangedAt":"2024-10-15T14:01:48.414Z","hidden":false},{"_id":"6709313569a07aa53e3f5e8e","user":{"_id":"6038d6d0612f5eef3cc05ea9","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6038d6d0612f5eef3cc05ea9/ryhvAX5djQpD5OrIlZQ1f.jpeg","isPro":false,"fullname":"Lei Li","user":"tobiaslee","type":"user"},"name":"Lei Li","status":"claimed_verified","statusLastChangedAt":"2024-10-15T09:11:16.735Z","hidden":false},{"_id":"6709313569a07aa53e3f5e8f","user":{"_id":"66cd868be2ed0c6657eefeb7","avatarUrl":"/avatars/72c33b6c17a4f69c67c76cdde15f54eb.svg","isPro":false,"fullname":"mch","user":"mch0115","type":"user"},"name":"Chenghao Ma","status":"claimed_verified","statusLastChangedAt":"2024-10-15T09:11:18.805Z","hidden":false},{"_id":"6709313569a07aa53e3f5e90","user":{"_id":"658c481dd1c8b106727a8b73","avatarUrl":"/avatars/d34a7a62c3a524e5fdd2d5994348db58.svg","isPro":false,"fullname":"Liang Chen","user":"liangchen-ms","type":"user"},"name":"Liang Chen","status":"admin_assigned","statusLastChangedAt":"2024-10-15T14:02:05.684Z","hidden":false},{"_id":"6709313569a07aa53e3f5e91","name":"Runxin 
Xu","hidden":false},{"_id":"6709313569a07aa53e3f5e92","user":{"_id":"64912976b95c3f0a1e6233cb","avatarUrl":"/avatars/3e338c5eef2514055ed98ae6141a5d1a.svg","isPro":false,"fullname":"Zhengyang Tang","user":"tangzhy","type":"user"},"name":"Zhengyang Tang","status":"admin_assigned","statusLastChangedAt":"2024-10-15T14:02:20.963Z","hidden":false},{"_id":"6709313569a07aa53e3f5e93","user":{"_id":"637c6703ca8542a0ba900ccb","avatarUrl":"/avatars/288ed63a1efa566c3f01e850c6ba5dd5.svg","isPro":false,"fullname":"Wang","user":"Benyou","type":"user"},"name":"Benyou Wang","status":"admin_assigned","statusLastChangedAt":"2024-10-15T14:02:30.698Z","hidden":false},{"_id":"6709313569a07aa53e3f5e94","user":{"_id":"61527edf8b55dbdae72874fa","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/61527edf8b55dbdae72874fa/ZGWSBf_KSrDof6WyMoDMU.jpeg","isPro":false,"fullname":"Daoguang Zan","user":"Daoguang","type":"user"},"name":"Daoguang Zan","status":"admin_assigned","statusLastChangedAt":"2024-10-15T14:02:36.972Z","hidden":false},{"_id":"6709313569a07aa53e3f5e95","user":{"_id":"64b9954845ce8d7ad607c14d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/64b9954845ce8d7ad607c14d/LZ9yeTOz4J_YKrnGkcmnL.jpeg","isPro":false,"fullname":"Shanghaoran Quan","user":"quanshr","type":"user"},"name":"Shanghaoran Quan","status":"claimed_verified","statusLastChangedAt":"2024-10-25T09:31:19.169Z","hidden":false},{"_id":"6709313569a07aa53e3f5e96","user":{"_id":"638efcf4c67af472d316d424","avatarUrl":"/avatars/97a57859d7d87a3a8f1bb41d32a72bc2.svg","isPro":false,"fullname":"Ge Zhang","user":"zhangysk","type":"user"},"name":"Ge Zhang","status":"admin_assigned","statusLastChangedAt":"2024-10-15T14:02:52.819Z","hidden":false},{"_id":"6709313569a07aa53e3f5e97","name":"Lei Sha","hidden":false},{"_id":"6709313569a07aa53e3f5e98","name":"Yichang Zhang","hidden":false},{"_id":"6709313569a07aa53e3f5e99","name":"Xuancheng Ren","hidden":false},{"_id":"6709313569a07aa53e3f5e9a","name":"Tianyu Liu","hidden":false},{"_id":"6709313569a07aa53e3f5e9b","name":"Baobao Chang","hidden":false}],"publishedAt":"2024-10-10T14:39:33.000Z","submittedOnDailyAt":"2024-10-15T00:06:51.411Z","title":"Omni-MATH: A Universal Olympiad Level Mathematic Benchmark For Large\n Language Models","submittedOnDailyBy":{"_id":"65ae21adabf6d1ccb795e9a4","avatarUrl":"/avatars/b5dced62c6a3564095a8fa0959bc06cb.svg","isPro":false,"fullname":"Bofei Gao","user":"KbsdJames","type":"user"},"summary":"Recent advancements in large language models (LLMs) have led to significant\nbreakthroughs in mathematical reasoning capabilities. However, existing\nbenchmarks like GSM8K or MATH are now being solved with high accuracy (e.g.,\nOpenAI o1 achieves 94.8% on MATH dataset), indicating their inadequacy for\ntruly challenging these models. To bridge this gap, we propose a comprehensive\nand challenging benchmark specifically designed to assess LLMs' mathematical\nreasoning at the Olympiad level. Unlike existing Olympiad-related benchmarks,\nour dataset focuses exclusively on mathematics and comprises a vast collection\nof 4428 competition-level problems with rigorous human annotation. These\nproblems are meticulously categorized into over 33 sub-domains and span more\nthan 10 distinct difficulty levels, enabling a holistic assessment of model\nperformance in Olympiad-mathematical reasoning. Furthermore, we conducted an\nin-depth analysis based on this benchmark. 
Our experimental results show that\neven the most advanced models, OpenAI o1-mini and OpenAI o1-preview, struggle\nwith highly challenging Olympiad-level problems, with 60.54% and 52.55%\naccuracy, highlighting significant challenges in Olympiad-level mathematical\nreasoning.","upvotes":32,"discussionId":"6709313769a07aa53e3f5f50","githubRepo":"https://github.com/kbsdjames/omni-math","githubRepoAddedBy":"auto","ai_summary":"A new benchmark evaluates LLMs on Olympiad-level mathematics, revealing challenges even for advanced models.","ai_keywords":["large language models","LLMs","mathematical reasoning","GSM8K","MATH","OpenAI o1","benchmark","competition-level problems","human annotation","mathematical reasoning at Olympiad level","sub-domains","difficulty levels","OpenAI o1-mini","OpenAI o1-preview","Olympiad-level mathematical reasoning"],"githubStars":93},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"64bbe9b236eb058cd9d6a5b9","avatarUrl":"/avatars/c7c01a3fa8809e73800392679abff6d5.svg","isPro":false,"fullname":"Kai Zuberbühler","user":"kaizuberbuehler","type":"user"},{"_id":"64b15284372d4340772a3dca","avatarUrl":"/avatars/417d5f1bc1bcb5e4d5de6169673c2cf7.svg","isPro":false,"fullname":"Zefan Cai","user":"ZefanCai","type":"user"},{"_id":"61b0a4ce1b3d95b3d1ed9251","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/Wwjr26vdudX5KYVTb8Q0a.png","isPro":false,"fullname":"Liang Chen","user":"leonardPKU","type":"user"},{"_id":"66cd868be2ed0c6657eefeb7","avatarUrl":"/avatars/72c33b6c17a4f69c67c76cdde15f54eb.svg","isPro":false,"fullname":"mch","user":"mch0115","type":"user"},{"_id":"61f0f66a7855b96a04b223dd","avatarUrl":"/avatars/d17e4a4b467ef9019594036ed8f1ca6e.svg","isPro":false,"fullname":"W","user":"Windy","type":"user"},{"_id":"662500613892aa32a84a2fb2","avatarUrl":"/avatars/d7849328926c0add68bad39e9a0c5a81.svg","isPro":false,"fullname":"Young","user":"Thomos","type":"user"},{"_id":"649a46aea0b21c7cef780924","avatarUrl":"/avatars/47341742aaa7b8e4a39e67ceb81b28b0.svg","isPro":false,"fullname":"ddd","user":"ddxxasdf","type":"user"},{"_id":"65ae21adabf6d1ccb795e9a4","avatarUrl":"/avatars/b5dced62c6a3564095a8fa0959bc06cb.svg","isPro":false,"fullname":"Bofei Gao","user":"KbsdJames","type":"user"},{"_id":"655c2ee73d6156ea868bb88d","avatarUrl":"/avatars/302d3508442b74ec4b8d4f44700f8862.svg","isPro":false,"fullname":"Rafel Zhao","user":"elegant823","type":"user"},{"_id":"6038d6d0612f5eef3cc05ea9","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6038d6d0612f5eef3cc05ea9/ryhvAX5djQpD5OrIlZQ1f.jpeg","isPro":false,"fullname":"Lei Li","user":"tobiaslee","type":"user"},{"_id":"630c4a414c0945d20b8dfd4a","avatarUrl":"/avatars/1c8ab92ba383746c42d94f5b98361094.svg","isPro":false,"fullname":"ltl","user":"ltl","type":"user"},{"_id":"6371ad82f0fe906bdc5b15f6","avatarUrl":"/avatars/ddc61e1edae5bd6b19530e1bc5e15d53.svg","isPro":false,"fullname":"Dotanoob7","user":"Dotanoob","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0}">
arxiv:2410.07985

Omni-MATH: A Universal Olympiad Level Mathematic Benchmark For Large Language Models

Published on Oct 10, 2024 · Submitted by Bofei Gao on Oct 15, 2024

Authors: Bofei Gao, Feifan Song, Zhe Yang, Zefan Cai, Yibo Miao, Qingxiu Dong, Lei Li, Chenghao Ma, Liang Chen, Runxin Xu, Zhengyang Tang, Benyou Wang, Daoguang Zan, Shanghaoran Quan, Ge Zhang, Lei Sha, Yichang Zhang, Xuancheng Ren, Tianyu Liu, Baobao Chang

Abstract

A new benchmark evaluates LLMs on Olympiad-level mathematics, revealing challenges even for advanced models.

AI-generated summary

Recent advancements in large language models (LLMs) have led to significant breakthroughs in mathematical reasoning capabilities. However, existing benchmarks like GSM8K or MATH are now being solved with high accuracy (e.g., OpenAI o1 achieves 94.8% on MATH dataset), indicating their inadequacy for truly challenging these models. To bridge this gap, we propose a comprehensive and challenging benchmark specifically designed to assess LLMs' mathematical reasoning at the Olympiad level. Unlike existing Olympiad-related benchmarks, our dataset focuses exclusively on mathematics and comprises a vast collection of 4428 competition-level problems with rigorous human annotation. These problems are meticulously categorized into over 33 sub-domains and span more than 10 distinct difficulty levels, enabling a holistic assessment of model performance in Olympiad-mathematical reasoning. Furthermore, we conducted an in-depth analysis based on this benchmark. Our experimental results show that even the most advanced models, OpenAI o1-mini and OpenAI o1-preview, struggle with highly challenging Olympiad-level problems, with 60.54% and 52.55% accuracy, highlighting significant challenges in Olympiad-level mathematical reasoning.
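Because Olympiad answers are free-form, exact string match is unreliable for scoring, which is why the authors also release the Omni-Judge model (linked above) to decide whether a candidate answer matches the reference. Below is a rough sketch of calling it with `transformers`; the exact prompt format Omni-Judge expects is defined in its model card, so the plain-text template here is a placeholder assumption, not the official format.

```python
# Hedged sketch: grade a candidate answer against the reference with the
# released Omni-Judge model. The real prompt/chat format is specified in
# the Omni-Judge model card; this plain-text template is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "KbsdJames/Omni-Judge"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Hypothetical problem/answer triple for illustration only.
prompt = (
    "Problem: Find all positive integers n such that n + 1 divides n^2 + 1.\n"
    "Reference answer: n = 1\n"
    "Candidate answer: The only solution is n = 1.\n"
    "Is the candidate answer equivalent to the reference answer?"
)
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)

# Print only the newly generated judgment tokens.
new_tokens = out[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```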

Community


This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

* TableBench: A Comprehensive and Complex Benchmark for Table Question Answering (2024) https://huggingface.co/papers/2408.09174
* CMM-Math: A Chinese Multimodal Math Dataset To Evaluate and Enhance the Mathematics Reasoning of Large Multimodal Models (2024) https://huggingface.co/papers/2409.02834
* ErrorRadar: Benchmarking Complex Mathematical Reasoning of Multimodal Large Language Models Via Error Detection (2024) https://huggingface.co/papers/2410.04509
* MathHay: An Automated Benchmark for Long-Context Mathematical Reasoning in LLMs (2024) https://huggingface.co/papers/2410.04698
* OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data (2024) https://huggingface.co/papers/2410.01560

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space: https://huggingface.co/spaces/librarian-bots/recommend_similar_papers

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper 2

Datasets citing this paper 5


Spaces citing this paper 1

Collections including this paper 3