Hui","user":"huybery","type":"user"},{"_id":"618767e4238063b4615d042b","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1636263880877-noauth.jpeg","isPro":false,"fullname":"Tianbao Xie","user":"tianbaoxiexxx","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":1}">Abstract
The Qwen2.5-Coder series demonstrates state-of-the-art code generation, completion, reasoning, and repair capabilities, using the Qwen2.5 architecture with over 5.5 trillion tokens of training data.
In this report, we introduce the Qwen2.5-Coder series, a significant upgrade from its predecessor, CodeQwen1.5. This series includes two models: Qwen2.5-Coder-1.5B and Qwen2.5-Coder-7B. As a code-specific model series, Qwen2.5-Coder is built upon the Qwen2.5 architecture and continually pretrained on a vast corpus of over 5.5 trillion tokens. Through meticulous data cleaning, scalable synthetic data generation, and balanced data mixing, Qwen2.5-Coder demonstrates impressive code generation capabilities while retaining general versatility. The models have been evaluated on a wide range of code-related tasks, achieving state-of-the-art (SOTA) performance across more than 10 benchmarks, including code generation, completion, reasoning, and repair, consistently outperforming larger models of the same model size. We believe that the release of the Qwen2.5-Coder series will not only push the boundaries of research in code intelligence but also, through its permissive licensing, encourage broader adoption by developers in real-world applications.
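As a quick illustration of the usage the report targets, here is a minimal sketch of prompting one of the released checkpoints with the `transformers` library. The repo id `Qwen/Qwen2.5-Coder-7B-Instruct` and the generation settings below are assumptions for illustration, not details taken from the report:

```python
# Minimal sketch: code generation with a Qwen2.5-Coder checkpoint via transformers.
# The repo id is assumed; check the model card for the exact name.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-Coder-7B-Instruct"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick the checkpoint's native dtype
    device_map="auto",    # requires the accelerate package
)

messages = [
    {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}
]
# Build the chat-formatted prompt and generate a completion.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```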
Community
Qwen2.5 Technical Report
@huybery
Congrats on the release of Qwen2.5-Coder 🔥
It would be great if you could link the models to this page by adding arxiv.org/abs/2409.12186 in a model README.md file.
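For anyone who wants to do that, here is a hedged sketch using the `huggingface_hub` ModelCard API; the repo id below is a placeholder example, and pushing the change requires write access to the repo:

```python
# Sketch: appending the paper link to a model README.md via huggingface_hub.
# "Qwen/Qwen2.5-Coder-7B-Instruct" is a placeholder; any Qwen2.5-Coder model repo works.
from huggingface_hub import ModelCard

repo_id = "Qwen/Qwen2.5-Coder-7B-Instruct"
card = ModelCard.load(repo_id)
if "arxiv.org/abs/2409.12186" not in card.text:
    card.text += "\n\nPaper: https://arxiv.org/abs/2409.12186\n"
    card.push_to_hub(repo_id)  # needs a token with write access
```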
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- To Code, or Not To Code? Exploring Impact of Code in Pre-training (2024)
- XMainframe: A Large Language Model for Mainframe Modernization (2024)
- OriGen: Enhancing RTL Code Generation with Code-to-Code Augmentation and Self-Reflection (2024)
- CodeACT: Code Adaptive Compute-efficient Tuning Framework for Code LLMs (2024)
- DOMAINEVAL: An Auto-Constructed Benchmark for Multi-Domain Code Generation (2024)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`
"outperforming larger models of the same model size" -- how do I interpret this phrase in the abstract?
Coffee vending recipe code in Python
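We did a small writeup about this and a few other papers this week on our blog: https://datta0.substack.com/p/ai-unplugged-20-score-self-correction. Please give it a read and let us know your thoughts :)
very nice blog post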