arxiv:2401.12208

CheXagent: Towards a Foundation Model for Chest X-Ray Interpretation

Published on Jan 22, 2024 · Submitted by AK on Jan 23, 2024

Abstract

Chest X-rays (CXRs) are the most frequently performed imaging test in clinical practice. Recent advances in the development of vision-language foundation models (FMs) give rise to the possibility of performing automated CXR interpretation, which can assist physicians with clinical decision-making and improve patient outcomes. However, developing FMs that can accurately interpret CXRs is challenging due to the (1) limited availability of large-scale vision-language datasets in the medical image domain, (2) lack of vision and language encoders that can capture the complexities of medical data, and (3) absence of evaluation frameworks for benchmarking the abilities of FMs on CXR interpretation. In this work, we address these challenges by first introducing CheXinstruct - a large-scale instruction-tuning dataset curated from 28 publicly-available datasets. We then present CheXagent - an instruction-tuned FM capable of analyzing and summarizing CXRs. To build CheXagent, we design a clinical large language model (LLM) for parsing radiology reports, a vision encoder for representing CXR images, and a network to bridge the vision and language modalities. Finally, we introduce CheXbench - a novel benchmark designed to systematically evaluate FMs across 8 clinically-relevant CXR interpretation tasks. Extensive quantitative evaluations and qualitative reviews with five expert radiologists demonstrate that CheXagent outperforms previously-developed general- and medical-domain FMs on CheXbench tasks. Furthermore, in an effort to improve model transparency, we perform a fairness evaluation across factors of sex, race and age to highlight potential performance disparities. Our project is at https://stanford-aimi.github.io/chexagent.html.

AI-generated summary

A large-scale instruction-tuning dataset and an instruction-tuned foundation model with a clinical large language model and vision encoder are introduced to automate CXR interpretation, outperforming existing models on clinical tasks and evaluated for fairness.
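As a practical illustration of the interpretation workflow described above, the sketch below shows how one might query a released CheXagent checkpoint through the Hugging Face transformers API. It assumes the checkpoint is published as StanfordAIMI/CheXagent-8b and ships a custom processor and model loaded via trust_remote_code; the actual repository name, prompt template, and processor interface may differ from what is shown here.

```python
# Minimal sketch: querying a CheXagent-style checkpoint with transformers.
# Assumption: the checkpoint is published as "StanfordAIMI/CheXagent-8b" and
# provides custom processor/model code loaded via trust_remote_code.
import io

import requests
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor, GenerationConfig

model_id = "StanfordAIMI/CheXagent-8b"  # assumed repository name
device, dtype = "cuda", torch.float16

processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
generation_config = GenerationConfig.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=dtype, trust_remote_code=True
).to(device)

# Load a frontal chest X-ray (the URL is a placeholder).
image_url = "https://example.com/frontal_cxr.png"
image = Image.open(io.BytesIO(requests.get(image_url).content)).convert("RGB")

# Instruction-style prompt; the exact template expected by the released
# model may differ.
prompt = "Describe the findings in this chest X-ray."
inputs = processor(
    images=[image], text=f" USER: <s>{prompt} ASSISTANT: <s>", return_tensors="pt"
).to(device=device, dtype=dtype)

output = model.generate(**inputs, generation_config=generation_config)[0]
print(processor.tokenizer.decode(output, skip_special_tokens=True))
```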

Community

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space


Models citing this paper 16

Browse 16 models citing this paper
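The models linked here can also be retrieved programmatically. The short sketch below assumes the Hub tags linked repositories with the paper's arXiv ID (arxiv:2401.12208), which appears to be the convention this page relies on; it requires the huggingface_hub package.

```python
# Sketch: list Hub models linked to this paper.
# Assumption: linked model repos carry the "arxiv:2401.12208" tag.
from huggingface_hub import HfApi

api = HfApi()
for model in api.list_models(filter="arxiv:2401.12208"):
    print(model.id)
```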

Datasets citing this paper 0

No datasets link to this paper yet

Cite arxiv.org/abs/2401.12208 in a dataset README.md to link it from this page.

Spaces citing this paper 14

Collections including this paper 9