Sequence can Secretly Tell You What to Discard
Jincheng Dai, Zhuowei Huang, Haiyun Jiang, Chen Chen, Deng Cai, Wei Bi, Shuming Shi
Published on Apr 24, 2024 · arXiv 2404.15949

Similar papers recommended by Librarian Bot via the Semantic Scholar API:

* MiniCache: KV Cache Compression in Depth Dimension for Large Language Models (https://huggingface.co/papers/2405.14366) (2024)
* SnapKV: LLM Knows What You are Looking for Before Generation (https://huggingface.co/papers/2404.14469) (2024)
* Efficient LLM Inference with Kcache (https://huggingface.co/papers/2404.18057) (2024)
* TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding (https://huggingface.co/papers/2404.11912) (2024)
* PyramidInfer: Pyramid KV Cache Compression for High-throughput LLM Inference (https://huggingface.co/papers/2405.12532) (2024)
AI-generated summary

A novel KV cache optimization technique, CORM, dynamically retains key-value pairs in Large Language Models, reducing inference memory usage by up to 70% without performance loss.
Abstract

Large Language Models (LLMs), despite their impressive performance on a wide range of tasks, require significant GPU memory and consume substantial computational resources. In addition to model weights, the memory occupied by the KV cache increases linearly with sequence length, becoming a main bottleneck for inference. In this paper, we introduce a novel approach for optimizing the KV cache that significantly reduces its memory footprint. Through a comprehensive investigation, we find that on LLaMA2-series models, (i) the similarity between adjacent tokens' query vectors is remarkably high, and (ii) the current query's attention calculation can rely solely on the attention information of a small portion of the preceding queries. Based on these observations, we propose CORM, a KV cache eviction policy that dynamically retains important key-value pairs for inference without finetuning the model. We validate that CORM reduces the inference memory usage of the KV cache by up to 70% without noticeable performance degradation across six tasks in LongBench.
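Observation (ii) is what makes a training-free eviction policy possible, and the idea is easy to sketch. The snippet below is a rough, unofficial illustration rather than the paper's actual algorithm: it assumes a single attention head and introduces two hypothetical parameters, recent_window and threshold, that are not taken from the paper. A cached key-value pair is kept only if at least one of the most recent queries gives it non-negligible attention weight, and adjacent_query_cosine shows how observation (i), the similarity of neighboring query vectors, could be measured.

```python
# Unofficial sketch of a CORM-style KV cache eviction heuristic.
# Hypothetical simplifications: a single attention head, NumPy instead of a real
# LLM runtime, and made-up knobs `recent_window` and `threshold` that are NOT
# taken from the paper.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def adjacent_query_cosine(queries):
    """Cosine similarity between each query vector and the next (observation (i))."""
    a, b = queries[:-1], queries[1:]
    num = (a * b).sum(axis=1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    return num / den

def corm_style_keep_mask(queries, keys, recent_window=4, threshold=0.02):
    """Boolean mask over cached keys: True = keep, False = evict.

    A cached key-value pair is kept if at least one of the last `recent_window`
    queries assigns it attention weight above `threshold` (observation (ii)).
    For brevity the recent queries are treated as attending over all cached
    keys, ignoring their slightly different causal masks.
    """
    t, d = keys.shape
    recent_q = queries[-recent_window:]               # (r, d)
    scores = recent_q @ keys.T / np.sqrt(d)           # (r, t)
    probs = softmax(scores, axis=-1)
    keep = (probs > threshold).any(axis=0)            # (t,)
    keep[-recent_window:] = True                      # never evict the newest entries
    return keep

# Toy usage with random vectors. Real LLM queries, unlike these, tend to be
# highly similar at adjacent positions, which is what makes the heuristic work.
rng = np.random.default_rng(0)
Q = rng.standard_normal((32, 64))
K = rng.standard_normal((32, 64))
print("mean adjacent-query cosine:", adjacent_query_cosine(Q).mean())
mask = corm_style_keep_mask(Q, K)
print(f"kept {mask.sum()} of {mask.size} cached key-value pairs")
```

In an actual decoding loop, the keep mask would be recomputed (or updated incrementally) as new tokens arrive, and the evicted rows of the key and value caches would simply be dropped before the next attention step; the 70% memory reduction reported above comes from how aggressively such a policy can prune while preserving LongBench accuracy.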