Paper page - Are large language models superhuman chemists?

arxiv:2404.01475

Are large language models superhuman chemists?

Published on Apr 1, 2024
Submitted by AK on Apr 3, 2024
Authors: Adrian Mirza, Nawaf Alampara, Sreekanth Kunchapu, Benedict Emoekabu, Aswanth Krishnan, Mara Wilhelmi, Macjonathan Okereke, Juliane Eberhardt, Amir Mohammad Elahi, Maximilian Greiner, Caroline T. Holick, Tanya Gupta, Mehrdad Asgari, Christina Glaubitz, Lea C. Klepsch, Yannik Köster, Jakob Meyer, Santiago Miret, Tim Hoffmann, Fabian Alexander Kreth, Michael Ringleb, Nicole Roesner, Ulrich S. Schubert, Leanne M. Stafast, Dinga Wonanke, Michael Pieler, Philippe Schwaller, Kevin Maik Jablonka

Abstract

AI-generated summary

ChemBench, an automated evaluation framework, assesses the chemical reasoning of large language models against human experts, revealing their proficiency and shortcomings in specific tasks.

Large language models (LLMs) have gained widespread interest due to their ability to process human language and perform tasks on which they have not been explicitly trained. This is relevant for the chemical sciences, which face the problem of small and diverse datasets that are frequently in the form of text. LLMs have shown promise in addressing these issues and are increasingly being harnessed to predict chemical properties, optimize reactions, and even design and conduct experiments autonomously. However, we still have only a very limited systematic understanding of the chemical reasoning capabilities of LLMs, which would be required to improve models and mitigate potential harms. Here, we introduce "ChemBench," an automated framework designed to rigorously evaluate the chemical knowledge and reasoning abilities of state-of-the-art LLMs against the expertise of human chemists. We curated more than 7,000 question-answer pairs for a wide array of subfields of the chemical sciences, evaluated leading open and closed-source LLMs, and found that the best models outperformed the best human chemists in our study on average. The models, however, struggle with some chemical reasoning tasks that are easy for human experts and provide overconfident, misleading predictions, such as about chemicals' safety profiles. These findings underscore the dual reality that, although LLMs demonstrate remarkable proficiency in chemical tasks, further research is critical to enhancing their safety and utility in chemical sciences. Our findings also indicate a need for adaptations to chemistry curricula and highlight the importance of continuing to develop evaluation frameworks to improve safe and useful LLMs.
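The core loop such a benchmark performs can be illustrated in a few lines: parse a model's answer for each question and compare it to the curated reference answer. The sketch below is a minimal, hypothetical illustration of exact-match scoring over question-answer pairs, not ChemBench's actual implementation; all identifiers and data are made up for the example.

```python
# Hypothetical sketch of exact-match scoring over curated question-answer
# pairs, in the spirit of an automated benchmark like ChemBench.
# The data and function names are illustrative, not from the ChemBench codebase.

def score(predictions: dict[str, str], references: dict[str, str]) -> float:
    """Fraction of questions answered correctly (case-insensitive exact match)."""
    correct = sum(
        predictions.get(qid, "").strip().lower() == ref.strip().lower()
        for qid, ref in references.items()
    )
    return correct / len(references)

# Made-up multiple-choice answers keyed by question id.
references = {"q1": "B", "q2": "A", "q3": "D"}
predictions = {"q1": "B", "q2": "C", "q3": "d"}  # "d" matches case-insensitively

print(f"accuracy = {score(predictions, references):.2f}")  # → accuracy = 0.67
```

A real framework would add robust answer extraction from free-form model output and per-subfield breakdowns, but the aggregate metric reduces to a comparison like this one.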

Community

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2404.01475 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2404.01475 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2404.01475 in a Space README.md to link it from this page.

Collections including this paper 4