arxiv:2506.20331

Biomed-Enriched: A Biomedical Dataset Enriched with LLMs for Pretraining and Extracting Rare and Hidden Content

Published on Jun 25, 2025 · Submitted by Rian Touchent on Jun 26, 2025
Authors: Rian Touchent, Nathan Godey, Eric de la Clergerie

Abstract

Biomed-Enriched is a biomedical text dataset constructed from PubMed via a two-stage annotation process: a large language model labels paragraphs and a fine-tuned small language model propagates those labels, yielding curated subsets for clinical NLP that improve pretraining efficiency and performance.

AI-generated summary

We introduce Biomed-Enriched, a biomedical text dataset constructed from PubMed via a two-stage annotation process. In the first stage, a large language model annotates 400K paragraphs from PubMed scientific articles, assigning scores for their type (review, study, clinical case, other), domain (clinical, biomedical, other), and educational quality. The educational quality score (rated 1 to 5) estimates how useful a paragraph is for college-level learning. These annotations are then used to fine-tune a small language model, which propagates the labels across the full PMC-OA corpus. The resulting metadata allows us to extract refined subsets, including 2M clinical case paragraphs with over 450K high-quality ones from articles with commercial-use licenses, and to construct several variants via quality filtering and domain upsampling. Clinical text is typically difficult to access due to privacy constraints, as hospital records cannot be publicly shared. Hence, our dataset provides an alternative large-scale, openly available collection of clinical cases from PubMed, making it a valuable resource for biomedical and clinical NLP. Preliminary continual-pretraining experiments with OLMo2 suggest these curated subsets enable targeted improvements, with clinical upsampling boosting performance by ~5% on MMLU ProfMed and educational quality filtering improving MedQA and MedMCQA by ~1%. Combinations of these techniques led to faster convergence, reaching the same performance with a third of the training tokens, indicating potential for more efficient and effective biomedical pretraining strategies.
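As a rough illustration of how the paragraph-level metadata described above could be used, here is a minimal sketch that selects high-quality clinical case paragraphs with the Hugging Face `datasets` library. The repository id and column names ("article_type", "domain", "educational_score") are assumptions for illustration only; the actual schema should be checked against the dataset card.

```python
# Minimal sketch: filtering Biomed-Enriched-style metadata for
# high-quality clinical case paragraphs. The repo id and column names
# below are assumptions, not the official schema.
from datasets import load_dataset

# Assumed repository id; replace with the id listed on the dataset card.
ds = load_dataset("rntc/biomed-enriched", split="train")

# Keep paragraphs annotated as clinical cases in the clinical domain
# with an educational quality score of at least 4 (on the 1-5 scale
# described in the abstract).
clinical_cases = ds.filter(
    lambda ex: ex["article_type"] == "clinical_case"
    and ex["domain"] == "clinical"
    and ex["educational_score"] >= 4
)

print(f"{len(clinical_cases)} high-quality clinical case paragraphs")
```

The same kind of filter over the type, domain, and quality columns is presumably how the quality-filtered and domain-upsampled pretraining variants mentioned in the abstract were assembled.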

Community

The paper author and submitter posted two figures: an overview diagram (Artboard 9-4.png) and a combined educational scores plot (combined_educational_scores_1-1.png).


Models citing this paper 0


Datasets citing this paper 2

Spaces citing this paper 0


Collections including this paper 0
