Paper page - A Refined Analysis of Massive Activations in LLMs

arxiv:2503.22329

A Refined Analysis of Massive Activations in LLMs

Authors: Louis Owen, Nilabhra Roy Chowdhury, Abhay Kumar, Fabian Güra

Published on Mar 28, 2025 · Submitted by Louis Owen on Mar 31, 2025

AI-generated summary

An analysis of massive activations across a broad range of LLMs shows that they are not always detrimental (suppressing them does not necessarily blow up perplexity or hurt downstream performance) and proposes hybrid mitigation strategies, such as Target Variance Rescaling paired with Attention KV bias or Dynamic Tanh, that balance mitigation of massive activations with preserved model performance.

Abstract

Motivated in part by their relevance for low-precision training and quantization, massive activations in large language models (LLMs) have recently emerged as a topic of interest. However, existing analyses are limited in scope, and generalizability across architectures is unclear. This paper helps address some of these gaps by conducting an analysis of massive activations across a broad range of LLMs, including both GLU-based and non-GLU-based architectures. Our findings challenge several prior assumptions, most importantly: (1) not all massive activations are detrimental, i.e. suppressing them does not lead to an explosion of perplexity or a collapse in downstream task performance; (2) proposed mitigation strategies such as Attention KV bias are model-specific and ineffective in certain cases. We consequently investigate novel hybrid mitigation strategies; in particular pairing Target Variance Rescaling (TVR) with Attention KV bias or Dynamic Tanh (DyT) successfully balances the mitigation of massive activations with preserved downstream model performance in the scenarios we investigated. Our code is available at: https://github.com/bluorion-com/refine_massive_activations.
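For readers who want to poke at the phenomenon themselves, here is a minimal sketch (not the authors' code) of how one might flag massive activations in a Hugging Face causal LM: it scans each layer's hidden states for entries whose magnitude is both large in absolute terms and far above the layer's median magnitude. The model name, prompt, and threshold values below are illustrative assumptions, not the paper's exact criteria.

```python
# A rough illustration (assumed setup, not the paper's code): detect outlier
# "massive" activations in the hidden states of a small causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any causal LM that exposes hidden states will do
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

inputs = tokenizer("Massive activations are rare, extreme hidden-state values.",
                   return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

for layer_idx, hidden in enumerate(outputs.hidden_states):
    mags = hidden.abs().squeeze(0)  # shape: (seq_len, hidden_dim)
    median_mag = mags.median()
    # Heuristic in the spirit of prior work on massive activations: flag values
    # above an absolute floor AND roughly three orders of magnitude above the
    # layer's median magnitude. Both thresholds here are assumptions.
    mask = (mags > 100.0) & (mags > 1000.0 * median_mag)
    if mask.any():
        pos, dim = torch.nonzero(mask, as_tuple=True)
        val = hidden[0, pos[0], dim[0]].item()
        print(f"layer {layer_idx:2d}: {int(mask.sum())} massive activation(s); "
              f"e.g. token {int(pos[0])}, dim {int(dim[0])}, value {val:.1f}")
```

Suppressing the flagged values (for example, zeroing them with a forward hook) and re-measuring perplexity or downstream accuracy is roughly the kind of intervention the paper uses to argue that not all massive activations are harmful.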

Community

Paper author · Paper submitter (Louis Owen)

A Refined Analysis of Massive Activations in LLMs

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend

Kalle Hilsenbek (Bachstelze)

Do you see a connection between massive activations and attention sinks?


Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2503.22329 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2503.22329 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2503.22329 in a Space README.md to link it from this page.

Collections including this paper 0

No Collection including this paper

Add this paper to a collection to link it from this page.