
Authors: Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre
Papers
arxiv:2308.07124

OctoPack: Instruction Tuning Code Large Language Models

Published on Aug 14, 2023
· Submitted by
AK
on Aug 15, 2023
#2 Paper of the day

Abstract

AI-generated summary: Instruction tuning on Git commits improves performance on natural-language and coding tasks compared to other code instruction datasets, with models achieving state-of-the-art results on the expanded HumanEvalPack benchmark.

Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack.
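The abstract's core idea is that a Git commit already contains a natural instruction-tuning sample: the commit message is a human-written instruction, and the before/after file contents are the input and target output. A minimal sketch of that pairing is below; the `commit_to_sample` helper and its field names are illustrative assumptions, not CommitPack's actual schema or the authors' pipeline.

```python
# Hedged sketch: turn one Git commit into an instruction-tuning sample,
# in the spirit of CommitPack (commit message = instruction, pre-change
# code = input, post-change code = target). Field names are hypothetical.

def commit_to_sample(message: str, old_code: str, new_code: str) -> dict:
    """Map a single commit to an (instruction, input, output) triple."""
    return {
        "instruction": message.strip(),  # human-written commit message
        "input": old_code,               # file contents before the change
        "output": new_code,              # file contents after the change
    }

# Example: a small bug-fix commit becomes one training sample.
sample = commit_to_sample(
    "Fix off-by-one error in loop bound",
    "for i in range(len(xs) - 1):\n    print(xs[i])\n",
    "for i in range(len(xs)):\n    print(xs[i])\n",
)
print(sample["instruction"])
```

At dataset scale this mapping would be applied across millions of commits and filtered for quality (e.g. meaningful messages, single-file changes), which is how a corpus like the described 4 TB over 350 languages could be assembled.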

Community

OctoPack: Revolutionizing Code LLMs with Git Commit Instructions

Links πŸ”—:

πŸ‘‰ Subscribe: https://www.youtube.com/@Arxflix
πŸ‘‰ Twitter: https://x.com/arxflix
πŸ‘‰ LMNT (Partner): https://lmnt.com/

By Arxflix


Models citing this paper 6


Datasets citing this paper 10


Spaces citing this paper 42

Collections including this paper 6