
\"facebook_1770312681764_7425229562377232848\"

\n","updatedAt":"2026-02-08T14:45:38.533Z","author":{"_id":"64213d0d9628d83384af1d10","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/64213d0d9628d83384af1d10/LO1029nNP3EIr7axWYE0U.jpeg","fullname":"alex","name":"jsing","type":"user","isPro":true,"isHf":false,"isHfAdmin":false,"isMod":false,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.3602902293205261},"editors":["jsing"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/64213d0d9628d83384af1d10/LO1029nNP3EIr7axWYE0U.jpeg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2403.07974","authors":[{"_id":"65f8b0781780bc3711323df3","user":{"_id":"6411ed451e42164b9f0f8b24","avatarUrl":"/avatars/1254aa70e1b61044e6aeb67b4fa0ba5b.svg","isPro":false,"fullname":"Naman Jain","user":"StringChaos","type":"user"},"name":"Naman Jain","status":"claimed_verified","statusLastChangedAt":"2024-03-19T08:45:31.044Z","hidden":false},{"_id":"65f8b0781780bc3711323df4","name":"King Han","hidden":false},{"_id":"65f8b0781780bc3711323df5","user":{"_id":"6179abf8ec6ce4dc2e5f2376","avatarUrl":"/avatars/d10c6a1b350146b36949a24220471295.svg","isPro":false,"fullname":"Alex Gu","user":"minimario","type":"user"},"name":"Alex Gu","status":"extracted_confirmed","statusLastChangedAt":"2024-03-18T21:23:55.836Z","hidden":false},{"_id":"65f8b0781780bc3711323df6","user":{"_id":"62e221dfcb1f164f2cb8a66b","avatarUrl":"/avatars/06f05622e232304d3f0b8c291f3263be.svg","isPro":false,"fullname":"Wen-Ding Li","user":"xu3kev","type":"user"},"name":"Wen-Ding Li","status":"claimed_verified","statusLastChangedAt":"2024-06-28T08:03:21.603Z","hidden":false},{"_id":"65f8b0781780bc3711323df7","user":{"_id":"6539c9ca0ba076aa37c37503","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6539c9ca0ba076aa37c37503/kYe2j4iNjl6eCtpNRJjqK.jpeg","isPro":false,"fullname":"Fanjia Yan","user":"FanjiaYan","type":"user"},"name":"Fanjia 
Yan","status":"claimed_verified","statusLastChangedAt":"2025-02-19T09:07:59.053Z","hidden":false},{"_id":"65f8b0781780bc3711323df8","user":{"_id":"6374cd6b6ea8da14f8fef8dc","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6374cd6b6ea8da14f8fef8dc/l13bg0tKDjCnUw3I895QZ.png","isPro":true,"fullname":"Tianjun Zhang","user":"tianjunz","type":"user"},"name":"Tianjun Zhang","status":"claimed_verified","statusLastChangedAt":"2024-08-01T07:01:46.328Z","hidden":false},{"_id":"65f8b0781780bc3711323df9","user":{"_id":"62a9f09f097098dc397e28c2","avatarUrl":"/avatars/f7a09d1aae490ca066eccc4f011edb8a.svg","isPro":false,"fullname":"Sida Wang","user":"sidaw","type":"user"},"name":"Sida Wang","status":"extracted_pending","statusLastChangedAt":"2024-06-13T21:57:58.203Z","hidden":false},{"_id":"65f8b0781780bc3711323dfa","name":"Armando Solar-Lezama","hidden":false},{"_id":"65f8b0781780bc3711323dfb","name":"Koushik Sen","hidden":false},{"_id":"65f8b0781780bc3711323dfc","name":"Ion Stoica","hidden":false}],"publishedAt":"2024-03-12T17:58:04.000Z","title":"LiveCodeBench: Holistic and Contamination Free Evaluation of Large\n Language Models for Code","summary":"Large Language Models (LLMs) applied to code-related applications have\nemerged as a prominent field, attracting significant interest from both\nacademia and industry. However, as new and improved LLMs are developed,\nexisting evaluation benchmarks (e.g., HumanEval, MBPP) are no longer sufficient\nfor assessing their capabilities. In this work, we propose LiveCodeBench, a\ncomprehensive and contamination-free evaluation of LLMs for code, which\ncontinuously collects new problems over time from contests across three\ncompetition platforms, namely LeetCode, AtCoder, and CodeForces. Notably, our\nbenchmark also focuses on a broader range of code related capabilities, such as\nself-repair, code execution, and test output prediction, beyond just code\ngeneration. 
Currently, LiveCodeBench hosts four hundred high-quality coding\nproblems that were published between May 2023 and February 2024. We have\nevaluated 9 base LLMs and 20 instruction-tuned LLMs on LiveCodeBench. We\npresent empirical findings on contamination, holistic performance comparisons,\npotential overfitting in existing benchmarks as well as individual model\ncomparisons. We will release all prompts and model completions for further\ncommunity analysis, along with a general toolkit for adding new scenarios and\nmodel","upvotes":5,"discussionId":"65f8b07a1780bc3711323fe5","ai_summary":"LiveCodeBench is a new evaluation benchmark for LLMs in code-related tasks, focusing on continuous problem collection and assessment of self-repair, code execution, and test output prediction.","ai_keywords":["Large Language Models (LLMs)","LiveCodeBench","code-related applications","HumanEval","MBPP","LeetCode","AtCoder","CodeForces","self-repair","code execution","test output prediction","empirical findings","contamination","holistic performance comparisons","overfitting","general toolkit"]},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"63a7422854f1d0225b075bfc","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/63a7422854f1d0225b075bfc/XGYAcDPZG5ZEsNBWG6guw.jpeg","isPro":true,"fullname":"lhl","user":"leonardlin","type":"user"},{"_id":"63814d392dd1f3e7bf59862f","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/63814d392dd1f3e7bf59862f/XCyKK3NvV-DbIEjR4tUue.jpeg","isPro":false,"fullname":"Charlie Cheng-Jie Ji","user":"CharlieJi","type":"user"},{"_id":"629fd12579726ce6f4c47b63","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1654713461737-629fd12579726ce6f4c47b63.png","isPro":false,"fullname":"Rif 
Hutchings","user":"hutchingsa","type":"user"},{"_id":"696a25cb58714222e17eb822","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/ahH1sc8M1zQu44gagNtU2.png","isPro":false,"fullname":"Md Nasib","user":"Nasib2008","type":"user"},{"_id":"697870b4185288f9650ef72e","avatarUrl":"/avatars/6918d5538abe7422d78111c49e278815.svg","isPro":false,"fullname":"Jeorge Reyes","user":"angmakabagongkatipunero","type":"user"}],"acceptLanguages":["*"]}">
arxiv:2403.07974

LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code

Published on Mar 12, 2024
Authors:
Naman Jain, King Han, Alex Gu, Wen-Ding Li, Fanjia Yan, Tianjun Zhang, Sida Wang, Armando Solar-Lezama, Koushik Sen, Ion Stoica

Abstract

LiveCodeBench is a new evaluation benchmark for LLMs in code-related tasks, focusing on continuous problem collection and assessment of self-repair, code execution, and test output prediction.

AI-generated summary

Large Language Models (LLMs) applied to code-related applications have emerged as a prominent field, attracting significant interest from both academia and industry. However, as new and improved LLMs are developed, existing evaluation benchmarks (e.g., HumanEval, MBPP) are no longer sufficient for assessing their capabilities. In this work, we propose LiveCodeBench, a comprehensive and contamination-free evaluation of LLMs for code, which continuously collects new problems over time from contests across three competition platforms, namely LeetCode, AtCoder, and CodeForces. Notably, our benchmark also focuses on a broader range of code-related capabilities, such as self-repair, code execution, and test output prediction, beyond just code generation. Currently, LiveCodeBench hosts four hundred high-quality coding problems that were published between May 2023 and February 2024. We have evaluated 9 base LLMs and 20 instruction-tuned LLMs on LiveCodeBench. We present empirical findings on contamination, holistic performance comparisons, potential overfitting in existing benchmarks, as well as individual model comparisons. We will release all prompts and model completions for further community analysis, along with a general toolkit for adding new scenarios and models.
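The core contamination-control idea in the abstract is to evaluate each model only on problems released after its training cutoff, so no evaluation problem can have leaked into its training data. A minimal sketch of that date-window filtering step, using hypothetical problem records and a hypothetical `contamination_free_subset` helper (the actual LiveCodeBench toolkit's API may differ):

```python
from datetime import date

# Hypothetical problem records: (problem_id, platform, release_date).
# LiveCodeBench draws problems from LeetCode, AtCoder, and CodeForces.
problems = [
    ("two-sum-variant", "LeetCode", date(2023, 6, 1)),
    ("abc310_c", "AtCoder", date(2023, 7, 15)),
    ("cf-1850-d", "CodeForces", date(2024, 1, 20)),
]

def contamination_free_subset(problems, training_cutoff):
    """Keep only problems released strictly after the model's training
    cutoff, so none of them can appear in its training data."""
    return [p for p in problems if p[2] > training_cutoff]

# A model trained on data up to Sep 2023 is scored only on later problems.
eval_set = contamination_free_subset(problems, date(2023, 9, 30))
print([pid for pid, _, _ in eval_set])  # → ['cf-1850-d']
```

Because new contest problems keep arriving, re-running this filter with each model's cutoff yields a live, per-model evaluation window rather than a fixed static test set.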


Models citing this paper: 120
Datasets citing this paper: 9
Spaces citing this paper: 378
Collections including this paper: 3