DSBench: How Far Are Data Science Agents to Becoming Data Science Experts?
Abstract
DSBench evaluates large language and vision-language models on realistic data science tasks, revealing significant performance gaps that indicate the need for improved autonomous agents.
Large Language Models (LLMs) and Large Vision-Language Models (LVLMs) have demonstrated impressive language/vision reasoning abilities, igniting the recent trend of building agents for targeted applications such as shopping assistants or AI software engineers. Recently, many data science benchmarks have been proposed to investigate their performance in the data science domain. However, existing data science benchmarks still fall short when compared to real-world data science applications due to their simplified settings. To bridge this gap, we introduce DSBench, a comprehensive benchmark designed to evaluate data science agents with realistic tasks. This benchmark includes 466 data analysis tasks and 74 data modeling tasks, sourced from Eloquence and Kaggle competitions. DSBench offers a realistic setting by encompassing long contexts, multimodal task backgrounds, reasoning with large data files and multi-table structures, and performing end-to-end data modeling tasks. Our evaluation of state-of-the-art LLMs, LVLMs, and agents shows that they struggle with most tasks, with the best agent solving only 34.12% of data analysis tasks and achieving a 34.74% Relative Performance Gap (RPG). These findings underscore the need for further advancements in developing more practical, intelligent, and autonomous data science agents.
Community
How far are data science agents to becoming data science experts? Our brand new data science benchmark comes with a comprehensive evaluation.
Code and data released at GitHub: https://github.com/LiqiangJing/DSBench
Hi @xywang1, congrats on this work and thanks for making the dataset available on the Hub!
Note that for things like the dataset viewer to work, one would need to follow this guide: https://huggingface.co/docs/datasets/loading, after which you can call dataset.push_to_hub(...) to push it to your repo. This also makes the dataset usable through the Datasets library.
Cheers!
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- BLADE: Benchmarking Language Model Agents for Data-Driven Science (2024)
- MMAU: A Holistic Benchmark of Agent Capabilities Across Diverse Domains (2024)
- PyBench: Evaluating LLM Agent on various real-world coding tasks (2024)
- Structured Event Reasoning with Large Language Models (2024)
- Can We Rely on LLM Agents to Draft Long-Horizon Plans? Let's Take TravelPlanner as an Example (2024)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend
Great paper! It is groundbreaking and pioneering work.