MedAgentsBench: Benchmarking Thinking Models and Agent Frameworks for Complex Medical Reasoning
\n","updatedAt":"2025-03-16T00:11:27.557Z","author":{"_id":"640d3eaa3623f6a56dde856d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1678589663024-640d3eaa3623f6a56dde856d.jpeg","fullname":"vansin","name":"vansin","type":"user","isPro":true,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":40,"isUserFollowing":false}},"numEdits":1,"identifiedLanguage":{"language":"en","probability":0.816970944404602},"editors":["vansin"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1678589663024-640d3eaa3623f6a56dde856d.jpeg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2503.07459","authors":[{"_id":"67cfd1934fed2b7e3e4cbb34","user":{"_id":"63357c608adfa81faf2ac180","avatarUrl":"/avatars/ae0314c644f882251baf59b9134fd36f.svg","isPro":false,"fullname":"Xiangru Tang","user":"RTT1","type":"user"},"name":"Xiangru Tang","status":"extracted_pending","statusLastChangedAt":"2025-03-11T06:00:52.457Z","hidden":false},{"_id":"67cfd1934fed2b7e3e4cbb35","name":"Daniel Shao","hidden":false},{"_id":"67cfd1934fed2b7e3e4cbb36","user":{"_id":"64660e9ecf550af36eb2b774","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/64660e9ecf550af36eb2b774/NKFB8ltszcqPnr6BxhG6V.jpeg","isPro":false,"fullname":"Jiwoong Sohn","user":"jw-sohn","type":"user"},"name":"Jiwoong Sohn","status":"claimed_verified","statusLastChangedAt":"2025-06-16T12:56:37.417Z","hidden":false},{"_id":"67cfd1934fed2b7e3e4cbb37","name":"Jiapeng Chen","hidden":false},{"_id":"67cfd1934fed2b7e3e4cbb38","user":{"_id":"65f40e83653c231cbaf7defe","avatarUrl":"/avatars/afa5ce72324112739e539865c9aee26b.svg","isPro":false,"fullname":"Jiayi Zhang","user":"didiforhugface","type":"user"},"name":"Jiayi Zhang","status":"claimed_verified","statusLastChangedAt":"2025-07-29T07:12:15.352Z","hidden":false},{"_id":"67cfd1934fed2b7e3e4cbb39","user":{"_id":"649ea7106282cb41e77760bc","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/649ea7106282cb41e77760bc/HlWjaqxr03ob93vdKg_LQ.jpeg","isPro":false,"fullname":"Isaac","user":"XiangJinYu","type":"user"},"name":"Jinyu Xiang","status":"claimed_verified","statusLastChangedAt":"2025-04-08T06:57:54.940Z","hidden":false},{"_id":"67cfd1934fed2b7e3e4cbb3a","user":{"_id":"675e0d5cdd3e9eeed6954f5a","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/7oMEoBmaFiCR9K2q9Z_7q.png","isPro":false,"fullname":"Fang Wu","user":"fangwu97","type":"user"},"name":"Fang Wu","status":"claimed_verified","statusLastChangedAt":"2025-07-23T08:38:21.142Z","hidden":false},{"_id":"67cfd1934fed2b7e3e4cbb3b","user":{"_id":"62f662bcc58915315c4eccea","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/62f662bcc58915315c4eccea/zOAQLONfMP88zr70sxHK-.jpeg","isPro":true,"fullname":"Yilun Zhao","user":"yilunzhao","type":"user"},"name":"Yilun Zhao","status":"claimed_verified","statusLastChangedAt":"2025-03-31T08:15:57.888Z","hidden":false},{"_id":"67cfd1934fed2b7e3e4cbb3c","user":{"_id":"64252594a4f3051f54dd4355","avatarUrl":"/avatars/c1f982b9b6be2956f6b6558be70d9ec3.svg","isPro":false,"fullname":"Wu","user":"alexanderwu","type":"user"},"name":"Chenglin Wu","status":"claimed_verified","statusLastChangedAt":"2025-04-08T06:57:57.846Z","hidden":false},{"_id":"67cfd1934fed2b7e3e4cbb3d","user":{"_id":"65cae89119683f9817c049ea","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/65cae89119683f9817c049ea/A0XxjmaJldu28JhFvWmpP.jpeg","isPro":false,"fullname":"Wenqi 
Shi","user":"wshi83","type":"user"},"name":"Wenqi Shi","status":"claimed_verified","statusLastChangedAt":"2025-03-11T08:21:16.321Z","hidden":false},{"_id":"67cfd1934fed2b7e3e4cbb3e","name":"Arman Cohan","hidden":false},{"_id":"67cfd1934fed2b7e3e4cbb3f","name":"Mark Gerstein","hidden":false}],"publishedAt":"2025-03-10T15:38:44.000Z","submittedOnDailyAt":"2025-03-11T04:32:55.865Z","title":"MedAgentsBench: Benchmarking Thinking Models and Agent Frameworks for\n Complex Medical Reasoning","submittedOnDailyBy":{"_id":"63357c608adfa81faf2ac180","avatarUrl":"/avatars/ae0314c644f882251baf59b9134fd36f.svg","isPro":false,"fullname":"Xiangru Tang","user":"RTT1","type":"user"},"summary":"Large Language Models (LLMs) have shown impressive performance on existing\nmedical question-answering benchmarks. This high performance makes it\nincreasingly difficult to meaningfully evaluate and differentiate advanced\nmethods. We present MedAgentsBench, a benchmark that focuses on challenging\nmedical questions requiring multi-step clinical reasoning, diagnosis\nformulation, and treatment planning-scenarios where current models still\nstruggle despite their strong performance on standard tests. Drawing from seven\nestablished medical datasets, our benchmark addresses three key limitations in\nexisting evaluations: (1) the prevalence of straightforward questions where\neven base models achieve high performance, (2) inconsistent sampling and\nevaluation protocols across studies, and (3) lack of systematic analysis of the\ninterplay between performance, cost, and inference time. Through experiments\nwith various base models and reasoning methods, we demonstrate that the latest\nthinking models, DeepSeek R1 and OpenAI o3, exhibit exceptional performance in\ncomplex medical reasoning tasks. Additionally, advanced search-based agent\nmethods offer promising performance-to-cost ratios compared to traditional\napproaches. Our analysis reveals substantial performance gaps between model\nfamilies on complex questions and identifies optimal model selections for\ndifferent computational constraints. 
Our benchmark and evaluation framework are\npublicly available at https://github.com/gersteinlab/medagents-benchmark.","upvotes":16,"discussionId":"67cfd1944fed2b7e3e4cbb81","githubRepo":"https://github.com/gersteinlab/medagents-benchmark","githubRepoAddedBy":"auto","ai_summary":"MedAgentsBench evaluates advanced LLMs in medical reasoning through complex, multi-step clinical scenarios, demonstrating performance gaps and cost-effective search-based agent methods.","ai_keywords":["Large Language Models","MedAgentsBench","multi-step clinical reasoning","diagnosis formulation","treatment planning","DeepSeek R1","OpenAI o3","search-based agent methods"],"githubStars":75},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"63357c608adfa81faf2ac180","avatarUrl":"/avatars/ae0314c644f882251baf59b9134fd36f.svg","isPro":false,"fullname":"Xiangru Tang","user":"RTT1","type":"user"},{"_id":"643e3b1f8d0edf9b4c121061","avatarUrl":"/avatars/7d10356703cd6f13bf42b5275b710458.svg","isPro":false,"fullname":"zhaolingchen","user":"czlll","type":"user"},{"_id":"64660e9ecf550af36eb2b774","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/64660e9ecf550af36eb2b774/NKFB8ltszcqPnr6BxhG6V.jpeg","isPro":false,"fullname":"Jiwoong Sohn","user":"jw-sohn","type":"user"},{"_id":"6471bddd609ae9f56368f132","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6471bddd609ae9f56368f132/G91Q4iCGN2dy3oMaz-LrO.jpeg","isPro":true,"fullname":"Yuchen Zhuang","user":"yczhuang","type":"user"},{"_id":"65cae89119683f9817c049ea","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/65cae89119683f9817c049ea/A0XxjmaJldu28JhFvWmpP.jpeg","isPro":false,"fullname":"Wenqi Shi","user":"wshi83","type":"user"},{"_id":"65f40e83653c231cbaf7defe","avatarUrl":"/avatars/afa5ce72324112739e539865c9aee26b.svg","isPro":false,"fullname":"Jiayi Zhang","user":"didiforhugface","type":"user"},{"_id":"62970df979f193515da13dc0","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/62970df979f193515da13dc0/A-mgKIcgTXRJ54GCHswTq.jpeg","isPro":false,"fullname":"Yanjun Shao","user":"super-dainiu","type":"user"},{"_id":"62f662bcc58915315c4eccea","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/62f662bcc58915315c4eccea/zOAQLONfMP88zr70sxHK-.jpeg","isPro":true,"fullname":"Yilun Zhao","user":"yilunzhao","type":"user"},{"_id":"665b133508d536a8ac804f7d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/Uwi0OnANdTbRbHHQvGqvR.png","isPro":false,"fullname":"Paulson","user":"Pnaomi","type":"user"},{"_id":"675e0d5cdd3e9eeed6954f5a","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/7oMEoBmaFiCR9K2q9Z_7q.png","isPro":false,"fullname":"Fang Wu","user":"fangwu97","type":"user"},{"_id":"637169557a5e5d8efdc3e58e","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1668515232215-637169557a5e5d8efdc3e58e.jpeg","isPro":false,"fullname":"Haowei Zhang","user":"freesky","type":"user"},{"_id":"622474f38dc6b0b64f5e903d","avatarUrl":"/avatars/d6b60a014277a8ec7d564163c5f644aa.svg","isPro":false,"fullname":"Yuxin Zuo","user":"yuxinzuo","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0}">
AI-generated summary
MedAgentsBench evaluates advanced LLMs in medical reasoning through complex, multi-step clinical scenarios, demonstrating performance gaps and cost-effective search-based agent methods.

Abstract
Large Language Models (LLMs) have shown impressive performance on existing medical question-answering benchmarks. This high performance makes it increasingly difficult to meaningfully evaluate and differentiate advanced methods. We present MedAgentsBench, a benchmark that focuses on challenging medical questions requiring multi-step clinical reasoning, diagnosis formulation, and treatment planning: scenarios where current models still struggle despite their strong performance on standard tests. Drawing from seven established medical datasets, our benchmark addresses three key limitations in existing evaluations: (1) the prevalence of straightforward questions where even base models achieve high performance, (2) inconsistent sampling and evaluation protocols across studies, and (3) lack of systematic analysis of the interplay between performance, cost, and inference time. Through experiments with various base models and reasoning methods, we demonstrate that the latest thinking models, DeepSeek R1 and OpenAI o3, exhibit exceptional performance in complex medical reasoning tasks. Additionally, advanced search-based agent methods offer promising performance-to-cost ratios compared to traditional approaches. Our analysis reveals substantial performance gaps between model families on complex questions and identifies optimal model selections for different computational constraints. Our benchmark and evaluation framework are publicly available at https://github.com/gersteinlab/medagents-benchmark.
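The abstract describes a protocol of fixed sampling, per-question model queries, and joint accuracy/cost/latency accounting. A minimal sketch of that protocol is below; it is not the repository's actual API. The file path, field names, and the query_model() stub are hypothetical placeholders — see the GitHub link above for the real framework.

import json
import random

def query_model(model: str, question: str, options: list[str]) -> tuple[int, float, float]:
    """Placeholder: call your LLM and return (answer_index, cost_usd, seconds)."""
    raise NotImplementedError("wire this to your model API")

def sample_hard_questions(path: str, k: int = 100, seed: int = 42) -> list[dict]:
    """Deterministically sample k questions so runs are comparable across studies
    (addresses limitation (2): inconsistent sampling protocols)."""
    with open(path) as f:
        questions = json.load(f)
    rng = random.Random(seed)  # fixed seed -> reproducible sampling
    return rng.sample(questions, k)

def evaluate(model_name: str, questions: list[dict]) -> dict:
    """Accuracy plus rough cost and latency bookkeeping for one model
    (addresses limitation (3): joint performance/cost/time analysis)."""
    correct, total_cost, total_seconds = 0, 0.0, 0.0
    for q in questions:
        answer, cost, seconds = query_model(model_name, q["question"], q["options"])
        correct += int(answer == q["answer_idx"])
        total_cost += cost
        total_seconds += seconds
    return {
        "model": model_name,
        "accuracy": correct / len(questions),
        "cost_usd": total_cost,
        "avg_latency_s": total_seconds / len(questions),
    }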
This paper addresses a critical gap in medical AI evaluation. While current LLMs perform well on standard medical tests, this work shows they still struggle with complex clinical reasoning. The benchmark specifically targets challenging scenarios that require multi-step reasoning, and its analysis of performance trade-offs shows that thinking models such as DeepSeek R1 and OpenAI o3 significantly outperform traditional approaches on complex medical tasks. The systematic treatment of cost-performance-time trade-offs offers practical guidance for researchers and practitioners selecting models for medical applications, as sketched below. This benchmark and methodology would benefit the Hugging Face community by establishing more rigorous standards for evaluating medical AI systems.
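As a concrete illustration of the cost-constrained model selection the paper motivates, the sketch below picks the most accurate model whose per-question cost fits a budget. The result records are invented placeholders, not figures reported by MedAgentsBench.

# Hypothetical illustration: choose the best model under a per-question cost budget.
results = [
    {"model": "model-a", "accuracy": 0.62, "cost_per_q_usd": 0.002},
    {"model": "model-b", "accuracy": 0.71, "cost_per_q_usd": 0.030},
    {"model": "model-c", "accuracy": 0.74, "cost_per_q_usd": 0.210},
]

def best_under_budget(results: list[dict], budget_per_q: float) -> dict | None:
    """Return the most accurate model affordable at the given budget, or None."""
    affordable = [r for r in results if r["cost_per_q_usd"] <= budget_per_q]
    return max(affordable, key=lambda r: r["accuracy"], default=None)

print(best_under_budget(results, budget_per_q=0.05))  # -> model-b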