\n","updatedAt":"2024-01-03T14:06:03.361Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.7543120384216309},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2312.17244","authors":[{"_id":"658e398711f68f12eae6e23e","user":{"_id":"662046a80e8d8b41b2aaeefb","avatarUrl":"/avatars/45568fc404a6bc9bf9bc8b0b8ff5ba49.svg","isPro":true,"fullname":"Tycho van der Ouderaa","user":"tychovdo","type":"user"},"name":"Tycho F. A. van der Ouderaa","status":"claimed_verified","statusLastChangedAt":"2024-04-18T08:11:01.386Z","hidden":false},{"_id":"658e398711f68f12eae6e23f","name":"Markus Nagel","hidden":false},{"_id":"658e398711f68f12eae6e240","name":"Mart van Baalen","hidden":false},{"_id":"658e398711f68f12eae6e241","user":{"_id":"637d21239a5217b88b7549c3","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/637d21239a5217b88b7549c3/LrIGPiva5VGVZG87rTAJz.jpeg","isPro":false,"fullname":"Yuki Asano","user":"yukimasano","type":"user"},"name":"Yuki M. Asano","status":"claimed_verified","statusLastChangedAt":"2024-10-15T09:12:10.015Z","hidden":false},{"_id":"658e398711f68f12eae6e242","name":"Tijmen Blankevoort","hidden":false}],"publishedAt":"2023-12-28T18:59:09.000Z","submittedOnDailyAt":"2023-12-29T00:44:15.904Z","title":"The LLM Surgeon","submittedOnDailyBy":{"_id":"60f1abe7544c2adfd699860c","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg","isPro":false,"fullname":"AK","user":"akhaliq","type":"user"},"summary":"State-of-the-art language models are becoming increasingly large in an effort\nto achieve the highest performance on large corpora of available textual data.\nHowever, the sheer size of the Transformer architectures makes it difficult to\ndeploy models within computational, environmental or device-specific\nconstraints. We explore data-driven compression of existing pretrained models\nas an alternative to training smaller models from scratch. To do so, we scale\nKronecker-factored curvature approximations of the target loss landscape to\nlarge language models. In doing so, we can compute both the dynamic allocation\nof structures that can be removed as well as updates of remaining weights that\naccount for the removal. We provide a general framework for unstructured,\nsemi-structured and structured pruning and improve upon weight updates to\ncapture more correlations between weights, while remaining computationally\nefficient. 
Experimentally, our method can prune rows and columns from a range\nof OPT models and Llamav2-7B by 20%-30%, with a negligible loss in performance,\nand achieve state-of-the-art results in unstructured and semi-structured\npruning of large language models.","upvotes":9,"discussionId":"658e398711f68f12eae6e25d","githubRepo":"https://github.com/qualcomm-ai-research/llm-surgeon","githubRepoAddedBy":"auto","ai_summary":"Data-driven compression using Kronecker-factored curvature approximations efficiently prunes large language models with minimal performance impact.","ai_keywords":["Kronecker-factored curvature approximations","target loss landscape","unstructured pruning","semi-structured pruning","structured pruning","weight updates","OPT models","Llamav2-7B"],"githubStars":35},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"6538119803519fddb4a17e10","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6538119803519fddb4a17e10/ffJMkdx-rM7VvLTCM6ri_.jpeg","isPro":false,"fullname":"samusenps","user":"samusenps","type":"user"},{"_id":"6478cef3bb9a5693c48941da","avatarUrl":"/avatars/8f47dbaed3f305c5e0ee147966a45505.svg","isPro":false,"fullname":"Bajra","user":"Mandur","type":"user"},{"_id":"620783f24e28382272337ba4","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/620783f24e28382272337ba4/zkUveQPNiDfYjgGhuFErj.jpeg","isPro":false,"fullname":"GuoLiangTang","user":"Tommy930","type":"user"},{"_id":"6032802e1f993496bc14d9e3","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6032802e1f993496bc14d9e3/w6hr-DEQot4VVkoyRIBiy.png","isPro":false,"fullname":"Omar Sanseviero","user":"osanseviero","type":"user"},{"_id":"64747f7e33192631bacd8831","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/64747f7e33192631bacd8831/dstkZJ4sHJSeqLesV5cOC.jpeg","isPro":false,"fullname":"Taufiq Dwi Purnomo","user":"taufiqdp","type":"user"},{"_id":"648a210e9da3cc3506961585","avatarUrl":"/avatars/808e9d7ac99837fe79169d0b8d49c366.svg","isPro":false,"fullname":"Ajith V Prabhakar","user":"ajithprabhakar","type":"user"},{"_id":"6101c620900eaa0057c2ce1d","avatarUrl":"/avatars/bd282166c120711c65b5409dc860ac58.svg","isPro":false,"fullname":"Abdel-Dayane Marcos","user":"admarcosai","type":"user"},{"_id":"617296c180f98c89a18948d2","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/617296c180f98c89a18948d2/--_gq5PIhTaI6CRsshn-u.jpeg","isPro":false,"fullname":"Bui Van Hop","user":"hllj","type":"user"},{"_id":"662046a80e8d8b41b2aaeefb","avatarUrl":"/avatars/45568fc404a6bc9bf9bc8b0b8ff5ba49.svg","isPro":true,"fullname":"Tycho van der Ouderaa","user":"tychovdo","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0}">Abstract
Data-driven compression using Kronecker-factored curvature approximations efficiently prunes large language models with minimal performance impact.
State-of-the-art language models are becoming increasingly large in an effort to achieve the highest performance on large corpora of available textual data. However, the sheer size of the Transformer architectures makes it difficult to deploy models within computational, environmental or device-specific constraints. We explore data-driven compression of existing pretrained models as an alternative to training smaller models from scratch. To do so, we scale Kronecker-factored curvature approximations of the target loss landscape to large language models. In doing so, we can compute both the dynamic allocation of structures that can be removed as well as updates of remaining weights that account for the removal. We provide a general framework for unstructured, semi-structured and structured pruning and improve upon weight updates to capture more correlations between weights, while remaining computationally efficient. Experimentally, our method can prune rows and columns from a range of OPT models and Llamav2-7B by 20%-30%, with a negligible loss in performance, and achieve state-of-the-art results in unstructured and semi-structured pruning of large language models.
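The abstract describes the mechanics only at a high level. As a rough illustration of the underlying idea (Optimal Brain Surgeon-style weight removal with a Kronecker-factored curvature approximation), here is a minimal NumPy sketch for a single linear layer. This is not the authors' implementation: the paper additionally covers structured and semi-structured pruning, dynamic allocation of removals across layers, and further correlation-aware weight updates. All names, shapes, and the damping value below are illustrative assumptions.

```python
import numpy as np

def kronecker_factors(acts, grads, damping=1e-4):
    """Estimate Kronecker curvature factors from calibration data.

    acts:  (n_samples, in_features)  inputs to the linear layer
    grads: (n_samples, out_features) gradients w.r.t. the layer outputs
    """
    A = acts.T @ acts / len(acts) + damping * np.eye(acts.shape[1])
    G = grads.T @ grads / len(grads) + damping * np.eye(grads.shape[1])
    return A, G

def prune_one_weight(W, A_inv, G_inv):
    """Zero the weight with the smallest estimated loss increase and apply
    the OBS-style correction to the remaining weights."""
    # Diagonal of H^{-1} = G^{-1} (kron) A^{-1}, reshaped to W's (out, in) layout.
    h_inv_diag = np.outer(np.diag(G_inv), np.diag(A_inv))
    cost = W ** 2 / (2.0 * h_inv_diag)   # estimated loss increase per weight
    i, j = np.unravel_index(np.argmin(cost), W.shape)
    # Update: delta_W = -(W_ij / [H^{-1}]_(ij,ij)) * H^{-1} e_(ij),
    # exploiting the Kronecker structure so the full Hessian is never formed.
    delta = -(W[i, j] / h_inv_diag[i, j]) * np.outer(G_inv[:, i], A_inv[:, j])
    W = W + delta
    W[i, j] = 0.0                        # remove the pruned weight exactly
    return W, (i, j)

# Hypothetical usage, assuming calibration activations/gradients are available:
# A, G = kronecker_factors(acts, grads)
# W, removed = prune_one_weight(W, np.linalg.inv(A), np.linalg.inv(G))
```

In this sketch the per-weight cost and the compensating update both come from the inverse curvature, which is what lets remaining weights absorb the effect of the removal rather than simply zeroing entries by magnitude.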
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Beyond Size: How Gradients Shape Pruning Decisions in Large Language Models (2023)
- PERP: Rethinking the Prune-Retrain Paradigm in the Era of LLMs (2023)
- ComPEFT: Compression for Communicating Parameter Efficient Updates via Sparsification and Quantization (2023)
- How Does Calibration Data Affect the Post-training Pruning and Quantization of Large Language Models? (2023)
- Fluctuation-based Adaptive Structured Pruning for Large Language Models (2023)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space