ZeroQuant(4+2): Redefining LLMs Quantization with a New FP6-Centric Strategy for Diverse Generative Tasks
Abstract
FP6 quantization offers robust performance across diverse tasks in large language models, surpassing INT4 quantization through a novel 4+2 design that matches the latency of state-of-the-art INT4 quantization.
This study examines 4-bit quantization methods like GPTQ in large language models (LLMs), highlighting GPTQ's overfitting and limited improvement on zero-shot tasks. While prior works focus mainly on zero-shot evaluation, we extend the task scope to more generative categories such as code generation and abstractive summarization, where we find that INT4 quantization can significantly underperform. However, simply shifting to higher-precision formats like FP6 has been particularly challenging, and thus overlooked, because of poor performance caused by the lack of sophisticated integration and system acceleration strategies on current AI hardware. Our results show that FP6, even with a coarse-grained quantization scheme, performs robustly across various algorithms and tasks, demonstrating its superiority in accuracy and versatility. Notably, with FP6 quantization, the StarCoder-15B model performs comparably to its FP16 counterpart in code generation, and smaller models such as the 406M model closely match their baselines in summarization; neither result can be achieved with INT4. To better accommodate various AI hardware and achieve the best system performance, we propose a novel 4+2 design for FP6 that achieves latency similar to state-of-the-art INT4 fine-grained quantization. With our design, FP6 can become a promising alternative to the 4-bit quantization methods currently used in LLMs.
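To make the coarse-grained FP6 scheme concrete, here is a minimal NumPy sketch of symmetric, per-tensor FP6 fake-quantization. It assumes an E3M2 layout (1 sign, 3 exponent, 2 mantissa bits, exponent bias 3, no Inf/NaN encodings) and round-to-nearest; the paper's exact format, rounding, and scaling granularity may differ, so treat this as an illustration rather than the authors' implementation.

```python
import numpy as np

# Enumerate the non-negative values representable in FP6, assuming an E3M2 layout
# (1 sign, 3 exponent, 2 mantissa bits, exponent bias 3, no Inf/NaN encodings).
def fp6_e3m2_grid():
    bias, vals = 3, [0.0]
    for e in range(8):            # 3 exponent bits
        for m in range(4):        # 2 mantissa bits
            if e == 0:            # subnormals: 0.m * 2^(1 - bias)
                vals.append((m / 4.0) * 2.0 ** (1 - bias))
            else:                 # normals: 1.m * 2^(e - bias)
                vals.append((1.0 + m / 4.0) * 2.0 ** (e - bias))
    return np.unique(np.array(vals))

def fake_quant_fp6(w):
    """Coarse-grained (per-tensor) symmetric FP6 fake-quantization."""
    grid = fp6_e3m2_grid()                       # largest representable value is grid[-1]
    scale = np.max(np.abs(w)) / grid[-1]         # one scale for the whole tensor
    scaled = np.abs(w) / max(scale, 1e-12)
    idx = np.argmin(np.abs(scaled[..., None] - grid), axis=-1)  # round to nearest grid point
    return np.sign(w) * grid[idx] * scale

w = np.random.randn(512, 512).astype(np.float32)
w_fp6 = fake_quant_fp6(w)
print("mean |w - Q(w)|:", float(np.mean(np.abs(w - w_fp6))))
```

The "4+2" part of the design, as described in the abstract, stores each 6-bit weight as a 4-bit segment plus a 2-bit segment so that memory accesses can stay aligned to power-of-two word sizes. The toy split/merge below only illustrates the bit arithmetic as I understand it from the abstract; the real contribution is a GPU kernel and memory layout, which this sketch does not reproduce.

```python
# Hypothetical 4+2 bit split of one FP6 code word (bits: sign | exp(3) | man(2)).
def split_4_plus_2(code6: int) -> tuple[int, int]:
    assert 0 <= code6 < 64
    return code6 >> 2, code6 & 0b11      # upper 4 bits, lower 2 bits

def merge_4_plus_2(hi4: int, lo2: int) -> int:
    return (hi4 << 2) | lo2              # reassembled before dequantization

# Round-trip check over every FP6 code word.
assert all(merge_4_plus_2(*split_4_plus_2(c)) == c for c in range(64))
```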
Community
I think ExLlamaV2 does sparse quantization, which seems similar to your proposal.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- LLM-FP4: 4-Bit Floating-Point Quantized Transformers (2023)
- Enhancing Computation Efficiency in Large Language Models through Weight and Activation Quantization (2023)
- AFPQ: Asymmetric Floating Point Quantization for LLMs (2023)
- Post-Training Quantization with Low-precision Minifloats and Integers on FPGAs (2023)
- CBQ: Cross-Block Quantization for Large Language Models (2023)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space
Models citing this paper 0
No model linking this paper
Datasets citing this paper 0
No dataset linking this paper
Spaces citing this paper 0
No Space linking this paper