SciCoQA: Quality Assurance for Scientific Paper–Code Alignment
\n\n","updatedAt":"2026-01-21T07:26:18.104Z","author":{"_id":"6113da54d08630d2676c9823","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1663060375254-6113da54d08630d2676c9823.png","fullname":"Tim","name":"timbmg","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":2,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.5331659913063049},"editors":["timbmg"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1663060375254-6113da54d08630d2676c9823.png"],"reactions":[{"reaction":"❤️","users":["timbmg"],"count":1}],"isReport":false}},{"id":"69717f51f4a8ead82f576918","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false},"createdAt":"2026-01-22T01:37:21.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [FLAWS: A Benchmark for Error Identification and Localization in Scientific Papers](https://huggingface.co/papers/2511.21843) (2025)\n* [VeriSciQA: An Auto-Verified Dataset for Scientific Visual Question Answering](https://huggingface.co/papers/2511.19899) (2025)\n* [Sphinx: Benchmarking and Modeling for LLM-Driven Pull Request Review](https://huggingface.co/papers/2601.04252) (2026)\n* [CodeFuse-CommitEval: Towards Benchmarking LLM's Power on Commit Message and Code Change Inconsistency Detection](https://huggingface.co/papers/2511.19875) (2025)\n* [CodeSimpleQA: Scaling Factuality in Code Large Language Models](https://huggingface.co/papers/2512.19424) (2025)\n* [SWE-Bench++: A Framework for the Scalable Generation of Software Engineering Benchmarks from Open-Source Repositories](https://huggingface.co/papers/2512.17419) (2025)\n* [pdfQA: Diverse, Challenging, and Realistic Question Answering over PDFs](https://huggingface.co/papers/2601.02285) (2026)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`","html":"
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
\n
The following papers were recommended by the Semantic Scholar API
Please give a thumbs up to this comment if you found it helpful!
\n
If you want recommendations for any Paper on Hugging Face checkout this Space
\n
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: \n\n@librarian-bot\n\t recommend
\n","updatedAt":"2026-01-22T01:37:21.581Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.6862634420394897},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[{"reaction":"👍","users":["timbmg"],"count":1}],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2601.12910","authors":[{"_id":"69707c12a8be625b19c2b009","user":{"_id":"6113da54d08630d2676c9823","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1663060375254-6113da54d08630d2676c9823.png","isPro":false,"fullname":"Tim","user":"timbmg","type":"user"},"name":"Tim Baumgärtner","status":"claimed_verified","statusLastChangedAt":"2026-01-21T09:18:54.768Z","hidden":false},{"_id":"69707c12a8be625b19c2b00a","name":"Iryna Gurevych","hidden":false}],"mediaUrls":["https://cdn-uploads.huggingface.co/production/uploads/6113da54d08630d2676c9823/L8GSTXOHoPM78LHoD2xQH.png"],"publishedAt":"2026-01-19T10:04:33.000Z","submittedOnDailyAt":"2026-01-21T04:56:18.096Z","title":"SciCoQA: Quality Assurance for Scientific Paper--Code Alignment","submittedOnDailyBy":{"_id":"6113da54d08630d2676c9823","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1663060375254-6113da54d08630d2676c9823.png","isPro":false,"fullname":"Tim","user":"timbmg","type":"user"},"summary":"We present SciCoQA, a dataset for detecting discrepancies between scientific publications and their codebases to ensure faithful implementations. We construct SciCoQA from GitHub issues and reproducibility papers, and to scale our dataset, we propose a synthetic data generation method for constructing paper-code discrepancies. We analyze the paper-code discrepancies in detail and propose discrepancy types and categories to better understand the occurring mismatches. In total, our dataset consists of 611 paper-code discrepancies (81 real, 530 synthetic), spanning diverse computational science disciplines, including AI, Physics, Quantitative Biology, and others. Our evaluation of 21 LLMs highlights the difficulty of SciCoQA, particularly for instances involving omitted paper details, long-context inputs, and data outside the models' pre-training corpus. 
The best performing model in our evaluation, GPT-5, can only detect 45.7\\% of real-world paper-code discrepancies.","upvotes":3,"discussionId":"69707c13a8be625b19c2b00b","projectPage":"https://ukplab.github.io/scicoqa","githubRepo":"https://github.com/ukplab/scicoqa","githubRepoAddedBy":"user","ai_summary":"SciCoQA is a dataset for identifying mismatches between scientific publications and code implementations, containing 611 discrepancies across multiple disciplines and demonstrating the challenge of detecting such issues even for advanced language models.","ai_keywords":["dataset","synthetic data generation","paper-code discrepancies","computational science","language models","reproducibility"],"githubStars":3,"organization":{"_id":"62de69518960b17bb39a263c","name":"UKPLab","fullname":"Ubiquitous Knowledge Processing Lab","avatar":"https://cdn-uploads.huggingface.co/production/uploads/1658743016913-62de689d86220b5cb895acea.png"}},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"6113da54d08630d2676c9823","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1663060375254-6113da54d08630d2676c9823.png","isPro":false,"fullname":"Tim","user":"timbmg","type":"user"},{"_id":"686db5d4af2b856fabbf13aa","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/6BjMv2LVNoqvbX8fQSTPI.png","isPro":false,"fullname":"V bbbb","user":"Bbbbbnnn","type":"user"},{"_id":"6947f69751d7ae7c3c7b6908","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/PuIDZB9XDShHohKhYmdmp.png","isPro":true,"fullname":"Ben Kelly","user":"YellowjacketGames","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0,"organization":{"_id":"62de69518960b17bb39a263c","name":"UKPLab","fullname":"Ubiquitous Knowledge Processing Lab","avatar":"https://cdn-uploads.huggingface.co/production/uploads/1658743016913-62de689d86220b5cb895acea.png"}}">
AI-generated summary
SciCoQA is a dataset for identifying mismatches between scientific publications and code implementations, containing 611 discrepancies across multiple disciplines and demonstrating the challenge of detecting such issues even for advanced language models.

Abstract
We present SciCoQA, a dataset for detecting discrepancies between scientific publications and their codebases to ensure faithful implementations. We construct SciCoQA from GitHub issues and reproducibility papers, and to scale the dataset, we propose a synthetic data generation method for constructing paper-code discrepancies. We analyze the paper-code discrepancies in detail and propose discrepancy types and categories to better understand the mismatches that occur. In total, our dataset consists of 611 paper-code discrepancies (81 real, 530 synthetic), spanning diverse computational science disciplines, including AI, Physics, Quantitative Biology, and others. Our evaluation of 21 LLMs highlights the difficulty of SciCoQA, particularly for instances involving omitted paper details, long-context inputs, and data outside the models' pre-training corpora. The best-performing model in our evaluation, GPT-5, detects only 45.7% of real-world paper-code discrepancies.
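To make the detection task concrete, the sketch below shows one way a paper-code discrepancy instance and the reported detection rate could be framed programmatically. The record fields (paper_excerpt, code_snippet, has_discrepancy), the prompt wording, and the detection_rate helper are illustrative assumptions, not the dataset's actual schema or the paper's evaluation protocol; see https://github.com/ukplab/scicoqa for the released format.

```python
# Hypothetical sketch of a SciCoQA-style instance and its evaluation.
# Field names, prompt wording, and the metric below are assumptions for
# illustration only; they are not taken from the dataset or the paper.
from dataclasses import dataclass
from typing import Callable


@dataclass
class PaperCodeInstance:
    paper_excerpt: str     # passage from the publication
    code_snippet: str      # corresponding implementation from the repository
    has_discrepancy: bool  # gold label (real or synthetically injected mismatch)


def build_prompt(instance: PaperCodeInstance) -> str:
    """Frame discrepancy detection as a yes/no question for an LLM."""
    return (
        "You are checking whether a code implementation matches its paper.\n\n"
        f"Paper excerpt:\n{instance.paper_excerpt}\n\n"
        f"Code:\n{instance.code_snippet}\n\n"
        "Does the code deviate from the paper? Answer 'yes' or 'no'."
    )


def detection_rate(llm: Callable[[str], str], instances: list[PaperCodeInstance]) -> float:
    """Fraction of labeled discrepancies the model flags; a rough analogue of
    the detection rate quoted in the abstract (45.7% for the best model)."""
    positives = [ex for ex in instances if ex.has_discrepancy]
    if not positives:
        return 0.0
    hits = sum(
        1 for ex in positives
        if llm(build_prompt(ex)).strip().lower().startswith("yes")
    )
    return hits / len(positives)


if __name__ == "__main__":
    demo = [
        PaperCodeInstance(
            paper_excerpt="We train with a learning rate of 1e-4.",
            code_snippet="optimizer = Adam(model.parameters(), lr=1e-3)",
            has_discrepancy=True,
        )
    ]
    # Stub "model" that always answers yes, just to exercise the interface;
    # a real run would wrap an actual LLM call here.
    print(detection_rate(lambda prompt: "yes", demo))
```

A real evaluation would replace the stub with an actual model call and load instances from the released dataset rather than the toy example above.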