Papers
arxiv:2502.11089

Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention

Published on Feb 16, 2025 · Submitted by Chunjiang Ge on Feb 18, 2025
#1 Paper of the day · deepseek-ai (DeepSeek)
Authors: Jingyang Yuan, Huazuo Gao, Damai Dai, Junyu Luo, Liang Zhao, Zhengyan Zhang, Zhenda Xie, Y. X. Wei, Lean Wang, Zhiping Xiao, Yuqing Wang, Chong Ruan, Ming Zhang, Wenfeng Liang, Wangding Zeng

Abstract

AI-generated summary: NSA, a trainable sparse attention mechanism, enhances long-context modeling efficiency without sacrificing performance, achieving improvements in speed and accuracy over full attention models.

Long-context modeling is crucial for next-generation language models, yet the high computational cost of standard attention mechanisms poses significant computational challenges. Sparse attention offers a promising direction for improving efficiency while maintaining model capabilities. We present NSA, a Natively trainable Sparse Attention mechanism that integrates algorithmic innovations with hardware-aligned optimizations to achieve efficient long-context modeling. NSA employs a dynamic hierarchical sparse strategy, combining coarse-grained token compression with fine-grained token selection to preserve both global context awareness and local precision. Our approach advances sparse attention design with two key innovations: (1) We achieve substantial speedups through arithmetic intensity-balanced algorithm design, with implementation optimizations for modern hardware. (2) We enable end-to-end training, reducing pretraining computation without sacrificing model performance. As shown in Figure 1, experiments show the model pretrained with NSA maintains or exceeds Full Attention models across general benchmarks, long-context tasks, and instruction-based reasoning. Meanwhile, NSA achieves substantial speedups over Full Attention on 64k-length sequences across decoding, forward propagation, and backward propagation, validating its efficiency throughout the model lifecycle.
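
As a rough illustration of the hierarchical strategy described above (not the paper's implementation or kernels), the sketch below mean-pools keys and values into blocks for a coarse branch and mixes branch outputs with learned per-query gates; the block size, pooling-as-compression, the placeholder second branch, and all names are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

def compress_blocks(x: torch.Tensor, block: int) -> torch.Tensor:
    # Coarse-grained compression stand-in: mean-pool every `block` tokens.
    # x: [batch, seq, dim] -> [batch, seq // block, dim]
    b, t, d = x.shape
    return x.view(b, t // block, block, d).mean(dim=2)

class GatedBranchMix(nn.Module):
    # Per-query sigmoid gates that weight each attention branch's output.
    def __init__(self, dim: int, n_branches: int):
        super().__init__()
        self.gate = nn.Linear(dim, n_branches, bias=False)

    def forward(self, q: torch.Tensor, branches: list) -> torch.Tensor:
        g = torch.sigmoid(self.gate(q))  # [batch, seq, n_branches]
        return sum(g[..., i:i + 1] * o for i, o in enumerate(branches))

# Toy usage: a coarse branch over compressed keys/values, mixed with a
# placeholder branch standing in for fine-grained selection / local attention.
b, t, d, block = 2, 128, 64, 16
q, k, v = (torch.randn(b, t, d) for _ in range(3))
coarse = F.scaled_dot_product_attention(q, compress_blocks(k, block), compress_blocks(v, block))
other = F.scaled_dot_product_attention(q, k, v)   # stand-in only; NSA would restrict this branch
out = GatedBranchMix(d, 2)(q, [coarse, other])
print(out.shape)  # torch.Size([2, 128, 64])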

Community

Paper submitter

(image attachment)

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

* LLM Pretraining with Continuous Concepts (2025): https://huggingface.co/papers/2502.08524
* Softplus Attention with Re-weighting Boosts Length Extrapolation in Large Language Models (2025): https://huggingface.co/papers/2501.13428
* Twilight: Adaptive Attention Sparsity with Hierarchical Top-$p$ Pruning (2025): https://huggingface.co/papers/2502.02770
* AttentionPredictor: Temporal Pattern Matters for Efficient LLM Inference (2025): https://huggingface.co/papers/2502.04077
* LeMo: Enabling LEss Token Involvement for MOre Context Fine-tuning (2025): https://huggingface.co/papers/2501.09767
* GQSA: Group Quantization and Sparsity for Accelerating Large Language Model Inference (2024): https://huggingface.co/papers/2412.17560
* Compression with Global Guidance: Towards Training-free High-Resolution MLLMs Acceleration (2025): https://huggingface.co/papers/2501.05179

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space: https://huggingface.co/spaces/librarian-bots/recommend_similar_papers

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend

Does anyone have an idea of how φ (3.3.1 Token Compression) should be implemented?

I made a simple downpooling approach, but I do not believe this will perform well in training:

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def Linear(a: int, b: int): return nn.Linear(a, b, bias=False)

class Phi(nn.Module):
    def __init__(self, dim: int, block_l: int):
        super().__init__()
        downpools = int(math.log2(block_l))
        assert 1 << downpools == block_l  # block_l must be a power of two
        self.down = nn.ModuleList([Linear(dim * 2, dim) for _ in range(downpools)])
        self.stop = Linear(dim, dim)
    def forward(self, x):
        # x: [... seqlen//stride_d block_l headdim ] -> [... seqlen//stride_d headdim ]
        # This is roughly "downproject 2->1 adjacent tokens + activation fn",
        # repeated log2(block_l) times, with an extra final nn.Linear.
        for l in self.down:
            x = x.unflatten(-2, (x.size(-2) // 2, 2)).flatten(-2)
            x = F.silu(l(x))
        return self.stop(x).squeeze(-2)  # drop the length-1 block dim so the output matches the shape comment
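
A quick shape check for the sketch above (batch size, dim, and block_l below are arbitrary values chosen for illustration): each block of block_l tokens should compress to a single vector.

import torch

phi = Phi(dim=64, block_l=32)
x = torch.randn(2, 4, 32, 64)  # [batch, seqlen//stride_d, block_l, headdim]
print(phi(x).shape)            # torch.Size([2, 4, 64]): one compressed token per block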

Curious to hear if anyone has thought about this.

Lucidrains has started an implementation here: https://github.com/lucidrains/native-sparse-attention-pytorch/blob/main/native_sparse_attention_pytorch/nsa.py

Yeah, his current WIP approach for cmp is a single conv1d mapping (l = d = compress_block_size) chunks to compressed values. It's certainly simple, but it does not match the paper's claimed approach of using an MLP. I eagerly await what he cooks up in the future.

Update: an open replication is now available here: https://github.com/fla-org/native-sparse-attention
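
Since the replies above contrast a single conv1d compressor with the paper's stated use of an MLP for φ, here is a minimal sketch of one possible reading: a per-block MLP over the concatenated block, with a learned intra-block position embedding added first. The class name, hidden width, and the additive position embedding are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class MLPPhi(nn.Module):
    # Hypothetical compressor: map each block of block_l tokens to one vector.
    def __init__(self, dim: int, block_l: int):
        super().__init__()
        # Learned intra-block position embedding (an assumption for this sketch).
        self.pos = nn.Parameter(torch.zeros(block_l, dim))
        # Two-layer MLP over the flattened block.
        self.mlp = nn.Sequential(
            nn.Linear(block_l * dim, 2 * dim, bias=False),
            nn.SiLU(),
            nn.Linear(2 * dim, dim, bias=False),
        )
    def forward(self, x):
        # x: [..., n_blocks, block_l, dim] -> [..., n_blocks, dim]
        x = x + self.pos                # broadcasts over leading dims
        return self.mlp(x.flatten(-2))  # concatenate the block, then project down to one vector

Unlike the stepwise downpooling above, this sees the whole block at once, which seems closer to a plain "MLP" reading of φ; whether it trains better is an open question.
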
Here is an article featuring this paper on Ajith's AI Pulse: https://ajithp.com/2025/02/21/natively-sparse-attention-nsa-the-future-of-efficient-long-context-modeling-in-large-language-models/?_thumbnail_id=3832

This comment has been hidden (marked as Off-Topic)

Holding paper

Would it be feasible to bring this attention mechanism to encoder-only models as well (say BERT?)


Models citing this paper 1

Datasets citing this paper 1

Spaces citing this paper 2

Collections including this paper 33