\n","updatedAt":"2023-11-14T15:30:56.902Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.7218372821807861},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[{"reaction":"❤️","users":["neuralink"],"count":1}],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2311.05908","authors":[{"_id":"65519cf643baee6b4d2fa8fd","user":{"_id":"633c8d1a475fefe61c597ff7","avatarUrl":"/avatars/32157bf89dbe9c7385b4816ea15ec240.svg","isPro":false,"fullname":"Dan Fu","user":"danfu09","type":"user"},"name":"Daniel Y. Fu","status":"admin_assigned","statusLastChangedAt":"2023-11-13T09:06:03.717Z","hidden":false},{"_id":"65519cf643baee6b4d2fa8fe","user":{"_id":"6525fd79e0094b7bf21c01e6","avatarUrl":"/avatars/04616a2a0363213ed4e577e92cf8f1b8.svg","isPro":false,"fullname":"Hermann Kumbong","user":"kumboh","type":"user"},"name":"Hermann Kumbong","status":"admin_assigned","statusLastChangedAt":"2023-11-13T09:04:31.542Z","hidden":false},{"_id":"65519cf643baee6b4d2fa8ff","name":"Eric Nguyen","hidden":false},{"_id":"65519cf643baee6b4d2fa900","name":"Christopher Ré","hidden":false}],"publishedAt":"2023-11-10T07:33:35.000Z","submittedOnDailyAt":"2023-11-13T01:20:14.779Z","title":"FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor\n Cores","submittedOnDailyBy":{"_id":"60f1abe7544c2adfd699860c","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg","isPro":false,"fullname":"AK","user":"akhaliq","type":"user"},"summary":"Convolution models with long filters have demonstrated state-of-the-art\nreasoning abilities in many long-sequence tasks but lag behind the most\noptimized Transformers in wall-clock time. A major bottleneck is the Fast\nFourier Transform (FFT)--which allows long convolutions to run in O(N logN)\ntime in sequence length N but has poor hardware utilization. In this paper,\nwe study how to optimize the FFT convolution. We find two key bottlenecks: the\nFFT does not effectively use specialized matrix multiply units, and it incurs\nexpensive I/O between layers of the memory hierarchy. In response, we propose\nFlashFFTConv. FlashFFTConv uses a matrix decomposition that computes the FFT\nusing matrix multiply units and enables kernel fusion for long sequences,\nreducing I/O. We also present two sparse convolution algorithms--1) partial\nconvolutions and 2) frequency-sparse convolutions--which can be implemented\nsimply by skipping blocks in the matrix decomposition, enabling further\nopportunities for memory and compute savings. FlashFFTConv speeds up exact FFT\nconvolutions by up to 7.93times over PyTorch and achieves up to 4.4times\nspeedup end-to-end. Given the same compute budget, FlashFFTConv allows\nHyena-GPT-s to achieve 2.3 points better perplexity on the PILE and\nM2-BERT-base to achieve 3.3 points higher GLUE score--matching models with\ntwice the parameter count. FlashFFTConv also achieves 96.1% accuracy on\nPath-512, a high-resolution vision task where no model had previously achieved\nbetter than 50%. 
Furthermore, partial convolutions enable longer-sequence\nmodels--yielding the first DNA model that can process the longest human genes\n(2.3M base pairs)--and frequency-sparse convolutions speed up pretrained models\nwhile maintaining or improving model quality.","upvotes":14,"discussionId":"65519cf643baee6b4d2fa915","ai_summary":"FlashFFTConv optimizes FFT convolutions by leveraging matrix multiply units and sparse techniques, improving speed and model performance across various tasks including long-sequence DNA processing and high-resolution vision.","ai_keywords":["FFT convolution","matrix decomposition","matrix multiply units","kernel fusion","partial convolutions","frequency-sparse convolutions","Hyena-GPT-s","M2-BERT-base","PILE","GLUE score","Path-512","DNA model"]},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"620783f24e28382272337ba4","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/620783f24e28382272337ba4/zkUveQPNiDfYjgGhuFErj.jpeg","isPro":false,"fullname":"GuoLiangTang","user":"Tommy930","type":"user"},{"_id":"64522233ea94bf023430dd95","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/CVDqDeJ_fLTULhCTTSogb.png","isPro":true,"fullname":"Chenhui Zhang","user":"danielz01","type":"user"},{"_id":"6311bca0ae8896941da24e66","avatarUrl":"/avatars/48de64894fc3c9397e26e4d6da3ff537.svg","isPro":false,"fullname":"Fynn Kröger","user":"fynnkroeger","type":"user"},{"_id":"62cd5917299c0c2e0e435847","avatarUrl":"/avatars/b956b4feab86f6866c43cc87a44e25fc.svg","isPro":false,"fullname":"Yang Yan","user":"kurileo","type":"user"},{"_id":"6305d67b5b87d4feaacbae78","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6305d67b5b87d4feaacbae78/_pCzWJYzng734f2W8AlLI.jpeg","isPro":false,"fullname":"Rasoul","user":"rasoul-nikbakht","type":"user"},{"_id":"6525fd79e0094b7bf21c01e6","avatarUrl":"/avatars/04616a2a0363213ed4e577e92cf8f1b8.svg","isPro":false,"fullname":"Hermann Kumbong","user":"kumboh","type":"user"},{"_id":"64ca7c04710645aa7bdbbfff","avatarUrl":"/avatars/c12f4cb6dc1ff0010edb3ef4cfcccd7c.svg","isPro":false,"fullname":"Lize Pirenne","user":"Inversta","type":"user"},{"_id":"63941cac1ef92a72582ff09f","avatarUrl":"/avatars/6d5250f50e82bbc8c3daede117b3d031.svg","isPro":false,"fullname":"Elias","user":"werelax","type":"user"},{"_id":"6538119803519fddb4a17e10","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6538119803519fddb4a17e10/ffJMkdx-rM7VvLTCM6ri_.jpeg","isPro":false,"fullname":"samusenps","user":"samusenps","type":"user"},{"_id":"650c8bfb3d3542884da1a845","avatarUrl":"/avatars/863a5deebf2ac6d4faedc4dd368e0561.svg","isPro":false,"fullname":"Adhurim ","user":"Limi07","type":"user"},{"_id":"62a4ac6fd83c3facafa50892","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/62a4ac6fd83c3facafa50892/qFpobw9B5XaLZvwn0XbmB.jpeg","isPro":false,"fullname":"Mohammed Brıman","user":"mohammedbriman","type":"user"},{"_id":"648eb1eb59c4e5c87dc116e0","avatarUrl":"/avatars/c636cea39c2c0937f01398c94ead5dad.svg","isPro":false,"fullname":"fdsqefsgergd","user":"T-representer","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0}">FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores
Abstract
FlashFFTConv optimizes FFT convolutions by leveraging matrix multiply units and sparse techniques, improving speed and model performance across various tasks including long-sequence DNA processing and high-resolution vision.
Convolution models with long filters have demonstrated state-of-the-art reasoning abilities in many long-sequence tasks but lag behind the most optimized Transformers in wall-clock time. A major bottleneck is the Fast Fourier Transform (FFT)--which allows long convolutions to run in O(N log N) time in sequence length N but has poor hardware utilization. In this paper, we study how to optimize the FFT convolution. We find two key bottlenecks: the FFT does not effectively use specialized matrix multiply units, and it incurs expensive I/O between layers of the memory hierarchy. In response, we propose FlashFFTConv. FlashFFTConv uses a matrix decomposition that computes the FFT using matrix multiply units and enables kernel fusion for long sequences, reducing I/O. We also present two sparse convolution algorithms--1) partial convolutions and 2) frequency-sparse convolutions--which can be implemented simply by skipping blocks in the matrix decomposition, enabling further opportunities for memory and compute savings. FlashFFTConv speeds up exact FFT convolutions by up to 7.93× over PyTorch and achieves up to 4.4× speedup end-to-end. Given the same compute budget, FlashFFTConv allows Hyena-GPT-s to achieve 2.3 points better perplexity on the PILE and M2-BERT-base to achieve 3.3 points higher GLUE score--matching models with twice the parameter count. FlashFFTConv also achieves 96.1% accuracy on Path-512, a high-resolution vision task where no model had previously achieved better than 50%. Furthermore, partial convolutions enable longer-sequence models--yielding the first DNA model that can process the longest human genes (2.3M base pairs)--and frequency-sparse convolutions speed up pretrained models while maintaining or improving model quality.
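The key observation behind the matrix decomposition is that a length-N DFT with N = N1 × N2 factors (Cooley-Tukey style) into size-N1 DFTs, an elementwise twiddle correction, and size-N2 DFTs, where the two DFT stages are dense matrix multiplies and can therefore run on tensor cores. The PyTorch sketch below is an illustrative reconstruction of that idea, not the paper's fused CUDA kernels: the function names are ours, everything runs in plain complex arithmetic at full precision, and FlashFFTConv additionally fuses these stages on-chip to cut I/O.

```python
import torch

def dft_matrix(n):
    """Dense DFT matrix F[j, k] = exp(-2*pi*i*j*k / n)."""
    j = torch.arange(n, dtype=torch.float64)
    return torch.exp(-2j * torch.pi * j[:, None] * j[None, :] / n)

def fft_as_matmul(x, n1, n2):
    """Cooley-Tukey four-step FFT of length n1*n2, expressed as two
    dense matrix multiplies plus an elementwise twiddle correction --
    the style of decomposition that maps the FFT onto matmul units."""
    x_mat = x.reshape(n1, n2).to(torch.complex128)  # input index n = i1 * n2 + i2
    a = dft_matrix(n1) @ x_mat                      # size-n1 DFTs down the columns
    k1 = torch.arange(n1, dtype=torch.float64)[:, None]
    i2 = torch.arange(n2, dtype=torch.float64)[None, :]
    a = a * torch.exp(-2j * torch.pi * k1 * i2 / (n1 * n2))  # twiddle factors
    c = a @ dft_matrix(n2)                          # size-n2 DFTs along the rows
    return c.T.reshape(-1)                          # output index k = k2 * n1 + k1

def fft_conv(u, k):
    """Baseline FFT convolution being accelerated: circular
    convolution of u with filter k in O(N log N) time."""
    n = u.shape[-1]
    return torch.fft.irfft(torch.fft.rfft(u) * torch.fft.rfft(k), n=n)

# Sanity check against PyTorch's FFT.
x = torch.randn(4096, dtype=torch.float64)
assert torch.allclose(fft_as_matmul(x, 64, 64), torch.fft.fft(x), atol=1e-8)
```

The sparse variants fall out of this structure: partial and frequency-sparse convolutions amount to skipping rows or blocks of the DFT-matrix multiplies above, so sparsity translates directly into fewer matmul tiles rather than irregular memory access.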
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Laughing Hyena Distillery: Extracting Compact Recurrences From Convolutions (2023)
- Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture (2023)
- Accelerating Machine Learning Primitives on Commodity Hardware (2023)
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity (2023)
- DeepSpeed Ulysses: System Optimizations for Enabling Training of Extreme Long Sequence Transformer Models (2023)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space
Models citing this paper 2
Datasets citing this paper 0
No dataset linking this paper