Can Language Models Replace Programmers? REPOCOD Says 'Not Yet'
Published on Oct 29, 2024
Abstract
REPOCOD, a new code generation benchmark using real-world software development problems, shows that existing LLMs do not perform well and highlights the need for more robust models to assist developers.
Large language models (LLMs) have shown remarkable ability in code generation, achieving more than 90% pass@1 on Python coding problems in HumanEval and MBPP. Such high accuracy raises the question: can LLMs replace human programmers? Existing manually crafted, simple, or single-line code generation benchmarks cannot answer this question because of their gap from real-world software development. To answer it, we propose REPOCOD, a code generation benchmark with 980 problems collected from 11 popular real-world projects, more than 58% of which require file-level or repository-level context information. In addition, REPOCOD has the longest average canonical solution length (331.6 tokens) and the highest average cyclomatic complexity (9.00) among existing benchmarks. In our evaluation of ten LLMs, none achieves more than 30% pass@1 on REPOCOD, revealing the need for stronger LLMs that can help developers in real-world software development.
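
Since the results above are reported as pass@1, here is a minimal sketch of the standard unbiased pass@k estimator (from the Codex/HumanEval evaluation methodology); the function name, the toy numbers, and the one-sample-per-problem setup are illustrative assumptions, not REPOCOD's actual evaluation harness.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples drawn
    (without replacement) from n generations is correct, given that c of
    the n generations pass the tests."""
    if n - c < k:
        return 1.0  # not enough failing samples to fill all k draws
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Illustrative usage (hypothetical numbers, not REPOCOD results):
# with one generation per problem (n=1, k=1), this reduces to the plain pass rate.
per_problem = [(1, 1), (1, 0), (1, 1), (1, 0)]  # (n, c) pairs per problem
score = float(np.mean([pass_at_k(n, c, k=1) for n, c in per_problem]))
print(f"pass@1 = {score:.2%}")  # 50.00% for this toy example
```

The benchmark-level score is simply the mean of the per-problem estimates, so the "more than 30% pass@1" ceiling means that, averaged over REPOCOD's 980 problems, fewer than a third of first attempts pass the project's tests.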