Paper page - Token Sparse Attention: Efficient Long-Context Inference with Interleaved Token Selection

https://arxivexplained.com/papers/token-sparse-attention-efficient-long-context-inference-with-interleaved-token-selection

\n","updatedAt":"2026-02-05T00:07:43.198Z","author":{"_id":"65d9fc2a0e6ad24551d87a1e","avatarUrl":"/avatars/3aedb9522cc3cd08349d654f523fd792.svg","fullname":"Grant Singleton","name":"grantsing","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":4,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.6153251528739929},"editors":["grantsing"],"editorAvatarUrls":["/avatars/3aedb9522cc3cd08349d654f523fd792.svg"],"reactions":[],"isReport":false}},{"id":"6983f4c1da28eecdae9a5874","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false},"createdAt":"2026-02-05T01:39:13.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [A Unified Sparse Attention via Multi-Granularity Compression](https://huggingface.co/papers/2512.14082) (2025)\n* [Training-free Context-adaptive Attention for Efficient Long Context Modeling](https://huggingface.co/papers/2512.09238) (2025)\n* [BLASST: Dynamic BLocked Attention Sparsity via Softmax Thresholding](https://huggingface.co/papers/2512.12087) (2025)\n* [HyLRA: Hybrid Layer Reuse Attention for Efficient Long-Context Inference](https://huggingface.co/papers/2602.00777) (2026)\n* [Focus-dLLM: Accelerating Long-Context Diffusion LLM Inference via Confidence-Guided Context Focusing](https://huggingface.co/papers/2602.02159) (2026)\n* [KV Admission: Learning What to Write for Efficient Long-Context Inference](https://huggingface.co/papers/2512.17452) (2025)\n* [Block Sparse Flash Attention](https://huggingface.co/papers/2512.07011) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`","html":"

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

\n

The following papers were recommended by the Semantic Scholar API

\n\n

Please give a thumbs up to this comment if you found it helpful!

\n

If you want recommendations for any Paper on Hugging Face checkout this Space

\n

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: \n\n@librarian-bot\n\t recommend

\n","updatedAt":"2026-02-05T01:39:13.218Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.6870947480201721},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}},{"id":"6987846b28d2264727b00c59","author":{"_id":"65243980050781c16f234f1f","avatarUrl":"/avatars/743a009681d5d554c27e04300db9f267.svg","fullname":"Avi","name":"avahal","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":3,"isUserFollowing":false},"createdAt":"2026-02-07T18:28:59.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"arXivLens breakdown of this paper ๐Ÿ‘‰ https://arxivlens.com/PaperView/Details/token-sparse-attention-efficient-long-context-inference-with-interleaved-token-selection-8983-9be724a0\n- Executive Summary\n- Detailed Breakdown\n- Practical Applications","html":"

arXivLens breakdown of this paper ๐Ÿ‘‰ https://arxivlens.com/PaperView/Details/token-sparse-attention-efficient-long-context-inference-with-interleaved-token-selection-8983-9be724a0

\n
    \n
  • Executive Summary
  • \n
  • Detailed Breakdown
  • \n
  • Practical Applications
  • \n
\n","updatedAt":"2026-02-07T18:28:59.192Z","author":{"_id":"65243980050781c16f234f1f","avatarUrl":"/avatars/743a009681d5d554c27e04300db9f267.svg","fullname":"Avi","name":"avahal","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":3,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.5945194363594055},"editors":["avahal"],"editorAvatarUrls":["/avatars/743a009681d5d554c27e04300db9f267.svg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2602.03216","authors":[{"_id":"6982e8cd9084cb4f0ecb5880","user":{"_id":"639ffbc6beb95d698de9640d","avatarUrl":"/avatars/7ef1aaadd5b378d00e17dc548e42cb7e.svg","isPro":false,"fullname":"Dongwon Jo","user":"dongwonjo","type":"user"},"name":"Dongwon Jo","status":"claimed_verified","statusLastChangedAt":"2026-02-04T12:27:22.258Z","hidden":false},{"_id":"6982e8cd9084cb4f0ecb5881","name":"Beomseok Kang","hidden":false},{"_id":"6982e8cd9084cb4f0ecb5882","user":{"_id":"662672eaebdfec5cfdf1d034","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/662672eaebdfec5cfdf1d034/RhsKly3KvbtPkDuVnEdWb.jpeg","isPro":false,"fullname":"Jiwon Song","user":"jiwonsong","type":"user"},"name":"Jiwon Song","status":"claimed_verified","statusLastChangedAt":"2026-02-05T10:55:16.539Z","hidden":false},{"_id":"6982e8cd9084cb4f0ecb5883","name":"Jae-Joon Kim","hidden":false}],"publishedAt":"2026-02-03T07:31:14.000Z","submittedOnDailyAt":"2026-02-04T04:14:17.125Z","title":"Token Sparse Attention: Efficient Long-Context Inference with Interleaved Token Selection","submittedOnDailyBy":{"_id":"639ffbc6beb95d698de9640d","avatarUrl":"/avatars/7ef1aaadd5b378d00e17dc548e42cb7e.svg","isPro":false,"fullname":"Dongwon Jo","user":"dongwonjo","type":"user"},"summary":"The quadratic complexity of attention remains the central bottleneck in long-context inference for large language models. Prior acceleration methods either sparsify the attention map with structured patterns or permanently evict tokens at specific layers, which can retain irrelevant tokens or rely on irreversible early decisions despite the layer-/head-wise dynamics of token importance. In this paper, we propose Token Sparse Attention, a lightweight and dynamic token-level sparsification mechanism that compresses per-head Q, K, V to a reduced token set during attention and then decompresses the output back to the original sequence, enabling token information to be reconsidered in subsequent layers. Furthermore, Token Sparse Attention exposes a new design point at the intersection of token selection and sparse attention. Our approach is fully compatible with dense attention implementations, including Flash Attention, and can be seamlessly composed with existing sparse attention kernels. Experimental results show that Token Sparse Attention consistently improves accuracy-latency trade-off, achieving up to times3.23 attention speedup at 128K context with less than 1% accuracy degradation. 
These results demonstrate that dynamic and interleaved token-level sparsification is a complementary and effective strategy for scalable long-context inference.","upvotes":12,"discussionId":"6982e8ce9084cb4f0ecb5884","githubRepo":"https://github.com/dongwonjo/Token-Sparse-Attention","githubRepoAddedBy":"user","ai_summary":"Token Sparse Attention enables efficient long-context inference by dynamically compressing and decompressing attention tensors at the token level, achieving significant speedup with minimal accuracy loss.","ai_keywords":["attention","token-level sparsification","QKV","Flash Attention","attention speedup","long-context inference","token selection","sparse attention"],"githubStars":4,"organization":{"_id":"698422913a080cd2873577a4","name":"SNU-VLSI","fullname":"Seoul National University VLSI Lab","avatar":"https://cdn-uploads.huggingface.co/production/uploads/662672eaebdfec5cfdf1d034/7kyeWE2-6lCuFC2PG3xhz.png"}},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"639ffbc6beb95d698de9640d","avatarUrl":"/avatars/7ef1aaadd5b378d00e17dc548e42cb7e.svg","isPro":false,"fullname":"Dongwon Jo","user":"dongwonjo","type":"user"},{"_id":"6400208acafc9d549863af59","avatarUrl":"/avatars/6c383c810a038ce61e803f1d75132471.svg","isPro":false,"fullname":"Hyesung Jeon","user":"hjeon2k","type":"user"},{"_id":"67a1aba68b6584b24ffb5d28","avatarUrl":"/avatars/712cce125d9834023d62d85aec9dc601.svg","isPro":false,"fullname":"Beomseok Kang","user":"beomseokg","type":"user"},{"_id":"6757ab225ce1ff3af158f149","avatarUrl":"/avatars/cd0c94a389ed9e50c73abc30ef043b2d.svg","isPro":false,"fullname":"Hyeongju Ha","user":"Hyeongju97","type":"user"},{"_id":"662672eaebdfec5cfdf1d034","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/662672eaebdfec5cfdf1d034/RhsKly3KvbtPkDuVnEdWb.jpeg","isPro":false,"fullname":"Jiwon Song","user":"jiwonsong","type":"user"},{"_id":"65a8cdfe91ec5d1ec6f4b26c","avatarUrl":"/avatars/dee7115938f40da9457720153ea7a69e.svg","isPro":false,"fullname":"Son Donghwee","user":"Sonny0402","type":"user"},{"_id":"66d8512c54209e9101811e8e","avatarUrl":"/avatars/62dfd8e6261108f2508efe678d5a2a57.svg","isPro":false,"fullname":"M Saad Salman","user":"MSS444","type":"user"},{"_id":"63c68fad83ce71db8edb341e","avatarUrl":"/avatars/878a2747738a1d1c626c4084ae6c22aa.svg","isPro":false,"fullname":"Juchan","user":"praisechan","type":"user"},{"_id":"69831982d3f1224304db49df","avatarUrl":"/avatars/11cdd7704e32430fd523168cbd7562a7.svg","isPro":false,"fullname":"junyhyeok lee","user":"Junhye0k","type":"user"},{"_id":"6351e5bb3734c6e8a5c1bec1","avatarUrl":"/avatars/a784a51b369b197398575c3afbd5ceab.svg","isPro":false,"fullname":"Han-Bit Kang","user":"hbkang","type":"user"},{"_id":"668657d664da708c0f7f64f2","avatarUrl":"/avatars/44a139431b087ba199e468a6ae74d1d4.svg","isPro":false,"fullname":"Munyeol Park","user":"opendoor99","type":"user"},{"_id":"6676179e4b1e661916d0c654","avatarUrl":"/avatars/a074b2c7baa49de9324329c752b49dfd.svg","isPro":false,"fullname":"Thomas Katraouras","user":"Tomk187","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0,"organization":{"_id":"698422913a080cd2873577a4","name":"SNU-VLSI","fullname":"Seoul National University VLSI Lab","avatar":"https://cdn-uploads.huggingface.co/production/uploads/662672eaebdfec5cfdf1d034/7kyeWE2-6lCuFC2PG3xhz.png"}}">
arxiv:2602.03216

Token Sparse Attention: Efficient Long-Context Inference with Interleaved Token Selection

Published on Feb 3
· Submitted by Dongwon Jo on Feb 4

Authors: Dongwon Jo, Beomseok Kang, Jiwon Song, Jae-Joon Kim

AI-generated summary

Token Sparse Attention enables efficient long-context inference by dynamically compressing and decompressing attention tensors at the token level, achieving significant speedup with minimal accuracy loss.

Abstract

The quadratic complexity of attention remains the central bottleneck in long-context inference for large language models. Prior acceleration methods either sparsify the attention map with structured patterns or permanently evict tokens at specific layers, which can retain irrelevant tokens or rely on irreversible early decisions despite the layer-/head-wise dynamics of token importance. In this paper, we propose Token Sparse Attention, a lightweight and dynamic token-level sparsification mechanism that compresses per-head Q, K, V to a reduced token set during attention and then decompresses the output back to the original sequence, enabling token information to be reconsidered in subsequent layers. Furthermore, Token Sparse Attention exposes a new design point at the intersection of token selection and sparse attention. Our approach is fully compatible with dense attention implementations, including Flash Attention, and can be seamlessly composed with existing sparse attention kernels. Experimental results show that Token Sparse Attention consistently improves the accuracy-latency trade-off, achieving up to 3.23× attention speedup at 128K context with less than 1% accuracy degradation. These results demonstrate that dynamic and interleaved token-level sparsification is a complementary and effective strategy for scalable long-context inference.
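To make the mechanism above concrete, here is a minimal PyTorch sketch of the compress-attend-decompress pattern the abstract describes. It is an illustration only, not the paper's implementation: the function name, the probe-based token-scoring heuristic, the fixed per-head budget `keep`, and the zero-fill handling of dropped positions are all assumptions made for the sake of a runnable example.

```python
import torch
import torch.nn.functional as F

def token_sparse_attention(q, k, v, keep: int, n_probe: int = 16):
    """Hedged sketch: compress per-head Q/K/V to `keep` tokens, attend,
    then decompress (scatter) the output back to the full sequence.

    q, k, v: [batch, heads, seq, dim] (already projected per head)
    keep:    per-head token budget
    n_probe: number of recent queries used to score token importance
             (an illustrative heuristic; the paper's scoring rule may differ)
    """
    b, h, s, d = q.shape
    if keep >= s:
        return F.scaled_dot_product_attention(q, k, v, is_causal=True)

    # 1. Score tokens per head with a cheap probe: how strongly do the
    #    most recent queries attend to each key?
    probe = q[:, :, -n_probe:, :]                              # [b, h, n_probe, d]
    scores = (probe @ k.transpose(-1, -2)) / d ** 0.5          # [b, h, n_probe, s]
    importance = scores.softmax(dim=-1).mean(dim=2)            # [b, h, s]

    # 2. Compress: keep the top-`keep` tokens per head, in original order,
    #    and gather the reduced Q, K, V.
    idx = importance.topk(keep, dim=-1).indices.sort(-1).values    # [b, h, keep]
    gidx = idx.unsqueeze(-1).expand(-1, -1, -1, d)                 # [b, h, keep, d]
    q_s, k_s, v_s = (t.gather(2, gidx) for t in (q, k, v))

    # 3. Attend on the reduced token set with any dense kernel; this is
    #    ordinary attention over a shorter sequence, so a FlashAttention-
    #    backed SDPA call works unchanged.  (The causal mask over selected
    #    positions is approximated here for brevity.)
    out_s = F.scaled_dot_product_attention(q_s, k_s, v_s, is_causal=True)

    # 4. Decompress: scatter the reduced output back to the original length.
    #    Dropped positions are left at zero so a surrounding residual
    #    connection would pass them through unchanged (an assumption about
    #    how skipped tokens are handled, not a claim about the paper).
    out = torch.zeros_like(q)
    out.scatter_(2, gidx, out_s)
    return out                                                 # [b, h, s, d]
```

Because the attention on the reduced set is plain dense attention over a shorter sequence, any existing dense or sparse attention kernel can be dropped in at step 3, which is the compatibility point the abstract emphasizes.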

Community

Paper author Paper submitter

Token Sparse Attention is a complementary approach to efficient sparse attention that dynamically performs token-level compression during attention and reversibly decompresses the representations afterward.
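As a toy illustration of this reversibility (building on the hypothetical `token_sparse_attention` sketch shown under the abstract above, with made-up shapes and per-layer budgets): because each layer decompresses its output back to the full sequence length, a token ignored by one layer remains in the residual stream and can be re-selected by a later layer.

```python
import torch

# Builds on the illustrative `token_sparse_attention` sketch above
# (hypothetical, not the official implementation).
b, h, s, d = 1, 8, 4096, 64
hidden = torch.randn(b, h, s, d)

# Each "layer" picks its own token budget and its own token subset.
for keep in (1024, 512, 2048):                      # made-up per-layer budgets
    out = token_sparse_attention(hidden, hidden, hidden, keep=keep)
    assert out.shape == (b, h, s, d)                # full sequence length preserved
    hidden = hidden + out                           # residual keeps dropped tokens alive
```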

Code release is in progress; a cleaned and documented implementation will be released soon.

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

  • A Unified Sparse Attention via Multi-Granularity Compression (2025): https://huggingface.co/papers/2512.14082
  • Training-free Context-adaptive Attention for Efficient Long Context Modeling (2025): https://huggingface.co/papers/2512.09238
  • BLASST: Dynamic BLocked Attention Sparsity via Softmax Thresholding (2025): https://huggingface.co/papers/2512.12087
  • HyLRA: Hybrid Layer Reuse Attention for Efficient Long-Context Inference (2026): https://huggingface.co/papers/2602.00777
  • Focus-dLLM: Accelerating Long-Context Diffusion LLM Inference via Confidence-Guided Context Focusing (2026): https://huggingface.co/papers/2602.02159
  • KV Admission: Learning What to Write for Efficient Long-Context Inference (2025): https://huggingface.co/papers/2512.17452
  • Block Sparse Flash Attention (2025): https://huggingface.co/papers/2512.07011

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out the librarian-bots/recommend_similar_papers Space: https://huggingface.co/spaces/librarian-bots/recommend_similar_papers

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend

arXivLens breakdown of this paper 👉 https://arxivlens.com/PaperView/Details/token-sparse-attention-efficient-long-context-inference-with-interleaved-token-selection-8983-9be724a0

  • Executive Summary
  • Detailed Breakdown
  • Practical Applications


Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2602.03216 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2602.03216 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2602.03216 in a Space README.md to link it from this page.

Collections including this paper 1