AudioSAE: Towards Understanding of Audio-Processing Models with Sparse AutoEncoders
\n","updatedAt":"2026-02-09T15:10:48.988Z","author":{"_id":"65243980050781c16f234f1f","avatarUrl":"/avatars/743a009681d5d554c27e04300db9f267.svg","fullname":"Avi","name":"avahal","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":3,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.6378664970397949},"editors":["avahal"],"editorAvatarUrls":["/avatars/743a009681d5d554c27e04300db9f267.svg"],"reactions":[{"reaction":"๐","users":["Kushnareva"],"count":1}],"isReport":false}},{"id":"698a8d1230e5f51cf4b3bd06","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false},"createdAt":"2026-02-10T01:42:42.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [UniAudio 2.0: A Unified Audio Language Model with Text-Aligned Factorized Audio Tokenization](https://huggingface.co/papers/2602.04683) (2026)\n* [QuarkAudio Technical Report](https://huggingface.co/papers/2512.20151) (2025)\n* [Representation-Regularized Convolutional Audio Transformer for Audio Understanding](https://huggingface.co/papers/2601.21612) (2026)\n* [Sparse CLIP: Co-Optimizing Interpretability and Performance in Contrastive Learning](https://huggingface.co/papers/2601.20075) (2026)\n* [SpidR: Learning Fast and Stable Linguistic Units for Spoken Language Models Without Supervision](https://huggingface.co/papers/2512.20308) (2025)\n* [MiMo-Audio: Audio Language Models are Few-Shot Learners](https://huggingface.co/papers/2512.23808) (2025)\n* [SACodec: Asymmetric Quantization with Semantic Anchoring for Low-Bitrate High-Fidelity Neural Speech Codecs](https://huggingface.co/papers/2512.20944) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`","html":"
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
\n
The following papers were recommended by the Semantic Scholar API
Please give a thumbs up to this comment if you found it helpful!
\n
If you want recommendations for any Paper on Hugging Face checkout this Space
\n
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: \n\n@librarian-bot\n\t recommend
\n","updatedAt":"2026-02-10T01:42:42.094Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.718089759349823},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2602.05027","authors":[{"_id":"69871a872d626112378ad69f","user":{"_id":"660fd34df03515e4ff3f2b64","avatarUrl":"/avatars/0c2a29b1081ece881234acdd8ef9371a.svg","isPro":false,"fullname":"Georgii Aparin","user":"Egorgij21","type":"user"},"name":"Georgii Aparin","status":"claimed_verified","statusLastChangedAt":"2026-02-09T08:34:22.635Z","hidden":false},{"_id":"69871a872d626112378ad6a0","name":"Tasnima Sadekova","hidden":false},{"_id":"69871a872d626112378ad6a1","name":"Alexey Rukhovich","hidden":false},{"_id":"69871a872d626112378ad6a2","name":"Assel Yermekova","hidden":false},{"_id":"69871a872d626112378ad6a3","name":"Laida Kushnareva","hidden":false},{"_id":"69871a872d626112378ad6a4","name":"Vadim Popov","hidden":false},{"_id":"69871a872d626112378ad6a5","name":"Kristian Kuznetsov","hidden":false},{"_id":"69871a872d626112378ad6a6","name":"Irina Piontkovskaya","hidden":false}],"publishedAt":"2026-02-04T20:29:16.000Z","submittedOnDailyAt":"2026-02-09T05:28:19.089Z","title":"AudioSAE: Towards Understanding of Audio-Processing Models with Sparse AutoEncoders","submittedOnDailyBy":{"_id":"636254dc2691058b19d9276a","avatarUrl":"/avatars/36eb0e27e0e321fb0ac513f0d4d67c95.svg","isPro":false,"fullname":"Kushnareva","user":"Kushnareva","type":"user"},"summary":"Sparse Autoencoders (SAEs) are powerful tools for interpreting neural representations, yet their use in audio remains underexplored. We train SAEs across all encoder layers of Whisper and HuBERT, provide an extensive evaluation of their stability, interpretability, and show their practical utility. Over 50% of the features remain consistent across random seeds, and reconstruction quality is preserved. SAE features capture general acoustic and semantic information as well as specific events, including environmental noises and paralinguistic sounds (e.g. laughter, whispering) and disentangle them effectively, requiring removal of only 19-27% of features to erase a concept. Feature steering reduces Whisper's false speech detections by 70% with negligible WER increase, demonstrating real-world applicability. Finally, we find SAE features correlated with human EEG activity during speech perception, indicating alignment with human neural processing. 
The code and checkpoints are available at https://github.com/audiosae/audiosae_demo.","upvotes":59,"discussionId":"69871a872d626112378ad6a7","githubRepo":"https://github.com/audiosae/audiosae_demo","githubRepoAddedBy":"user","ai_summary":"Sparse Autoencoders trained on Whisper and HuBERT models demonstrate stable feature extraction and effective disentanglement of acoustic and semantic information, showing practical applications in audio processing and correlation with human neural activity.","ai_keywords":["Sparse Autoencoders","encoder layers","Whisper","HuBERT","feature steering","false speech detections","WER","EEG activity","speech perception"],"githubStars":11,"organization":{"_id":"5f83c275f0801648bf88454a","name":"huawei-noah","fullname":"HUAWEI Noah's Ark Lab","avatar":"https://cdn-uploads.huggingface.co/production/uploads/1602470452594-5f83c19ff0801648bf884549.png"}},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"660fd34df03515e4ff3f2b64","avatarUrl":"/avatars/0c2a29b1081ece881234acdd8ef9371a.svg","isPro":false,"fullname":"Georgii Aparin","user":"Egorgij21","type":"user"},{"_id":"636254dc2691058b19d9276a","avatarUrl":"/avatars/36eb0e27e0e321fb0ac513f0d4d67c95.svg","isPro":false,"fullname":"Kushnareva","user":"Kushnareva","type":"user"},{"_id":"63177d85f957903db971a173","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1665094764329-63177d85f957903db971a173.png","isPro":false,"fullname":"Artem","user":"kabachuha","type":"user"},{"_id":"67d145625e5bc6c00f3556bd","avatarUrl":"/avatars/c7a27485d21f7ee155cfeaacd07691a4.svg","isPro":false,"fullname":"Aleksei","user":"LeonMeon","type":"user"},{"_id":"603f8056076aa73940921525","avatarUrl":"/avatars/6aeffe1021af17ced8480a4c718083f6.svg","isPro":false,"fullname":"Pavel Efimov","user":"pefimov","type":"user"},{"_id":"668e3e02d501232e63a75778","avatarUrl":"/avatars/fd8b93b61d3035520e4f2cf56709831b.svg","isPro":false,"fullname":"Tasnima","user":"str12","type":"user"},{"_id":"62b02707744b9a896b990cdf","avatarUrl":"/avatars/d0661fad9f321e633fb7ff91d304c2e1.svg","isPro":false,"fullname":"Vadim Popov","user":"ghlwk","type":"user"},{"_id":"679770b6cbb6655a3c93eb43","avatarUrl":"/avatars/5bbccf36af7d4dae2028079b95692f94.svg","isPro":false,"fullname":"Areg Barseghyan","user":"aregbars","type":"user"},{"_id":"67941a952cf025f69f79512c","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/dydmuNKpE3SwfrYIEDnLY.png","isPro":false,"fullname":"Irina","user":"irapiont","type":"user"},{"_id":"65b68acc7ccceb5ece8efdba","avatarUrl":"/avatars/c7e0e5f852b5e746ecb15f205e021e08.svg","isPro":false,"fullname":"Vladislav Pedashenko","user":"candelabrum","type":"user"},{"_id":"64959186e39e4409c5d3a9cd","avatarUrl":"/avatars/e13b08e1e75d88b2552ec39d6cf5bb32.svg","isPro":false,"fullname":"R","user":"tyuhgf","type":"user"},{"_id":"656f057ab467bcf6d3eac265","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/G-RiPwgG0mFvuKKiorPUh.jpeg","isPro":false,"fullname":"Mikhail Krevskiy","user":"ComradeKrevskiy","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":2,"organization":{"_id":"5f83c275f0801648bf88454a","name":"huawei-noah","fullname":"HUAWEI Noah's Ark Lab","avatar":"https://cdn-uploads.huggingface.co/production/uploads/1602470452594-5f83c19ff0801648bf884549.png"}}">
AI-generated summary

Sparse Autoencoders trained on Whisper and HuBERT models demonstrate stable feature extraction and effective disentanglement of acoustic and semantic information, showing practical applications in audio processing and correlation with human neural activity.
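For readers who want to see what "training SAEs on encoder layers" involves in practice, the sketch below shows one way to capture per-layer encoder activations from a Whisper checkpoint with a forward hook; such activations are the inputs an SAE would be trained on. The checkpoint name, layer index, and use of the Hugging Face transformers implementation are illustrative assumptions, not the paper's configuration; the authors' actual code is in the repository linked above.

```python
# Illustrative sketch (assumed checkpoint and layer index), not the authors' code:
# capture per-layer Whisper encoder activations with a forward hook.
import torch
from transformers import WhisperFeatureExtractor, WhisperModel

model_name = "openai/whisper-small"  # placeholder checkpoint
model = WhisperModel.from_pretrained(model_name).eval()
feature_extractor = WhisperFeatureExtractor.from_pretrained(model_name)

captured = {}

def save_activations(module, inputs, output):
    # Whisper encoder layers return a tuple; hidden states are the first element.
    captured["acts"] = output[0].detach()

layer_idx = 6  # arbitrary encoder layer chosen for illustration
handle = model.encoder.layers[layer_idx].register_forward_hook(save_activations)

# One second of silence stands in for a real 16 kHz mono waveform.
waveform = torch.zeros(16000).numpy()
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    model.encoder(inputs.input_features)
handle.remove()

acts = captured["acts"]  # shape: (batch, frames, d_model); candidate SAE training data
```

The same pattern applies to HuBERT by hooking its transformer encoder layers instead.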
Abstract

Sparse Autoencoders (SAEs) are powerful tools for interpreting neural representations, yet their use in audio remains underexplored. We train SAEs across all encoder layers of Whisper and HuBERT, provide an extensive evaluation of their stability and interpretability, and show their practical utility. Over 50% of the features remain consistent across random seeds, and reconstruction quality is preserved. SAE features capture general acoustic and semantic information as well as specific events, including environmental noises and paralinguistic sounds (e.g., laughter, whispering), and disentangle them effectively, requiring removal of only 19-27% of features to erase a concept. Feature steering reduces Whisper's false speech detections by 70% with a negligible WER increase, demonstrating real-world applicability. Finally, we find SAE features correlated with human EEG activity during speech perception, indicating alignment with human neural processing. The code and checkpoints are available at https://github.com/audiosae/audiosae_demo.
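To make the abstract's pipeline concrete, here is a minimal sketch of a TopK sparse autoencoder over such activations, together with the simplest form of feature intervention: zeroing selected features and decoding back into the host model's activation space. The TopK architecture, dictionary size, and all names here are assumptions for illustration only; the paper's actual SAE architecture, training setup, and steering procedure are described in the paper and repository.

```python
# Minimal, illustrative TopK sparse autoencoder over frozen encoder activations,
# plus a feature-ablation helper. Names and hyperparameters are placeholders.
import torch
import torch.nn as nn

class TopKSAE(nn.Module):
    def __init__(self, d_model: int, n_features: int, k: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, n_features)
        self.decoder = nn.Linear(n_features, d_model)
        self.k = k

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        # Dense pre-activations, then keep only the k largest per frame.
        pre = torch.relu(self.encoder(x))
        top = torch.topk(pre, self.k, dim=-1)
        return torch.zeros_like(pre).scatter_(-1, top.indices, top.values)

    def forward(self, x: torch.Tensor):
        codes = self.encode(x)
        return self.decoder(codes), codes

def reconstruction_loss(x: torch.Tensor, x_hat: torch.Tensor) -> torch.Tensor:
    # Sparsity is enforced structurally by TopK, so only reconstruction is penalized.
    return ((x - x_hat) ** 2).mean()

@torch.no_grad()
def ablate_features(sae: TopKSAE, acts: torch.Tensor, feature_ids: list) -> torch.Tensor:
    # Simplest form of "steering": zero the chosen SAE features and decode
    # back to the host model's activation space.
    codes = sae.encode(acts)
    codes[..., feature_ids] = 0.0
    return sae.decoder(codes)

# Usage on stand-in activations shaped like (batch, frames, d_model).
sae = TopKSAE(d_model=768, n_features=16384, k=32)
acts = torch.randn(2, 100, 768)
recon, codes = sae(acts)
loss = reconstruction_loss(acts, recon)
steered = ablate_features(sae, acts, feature_ids=[12, 345])
```

In the paper, interventions of this kind are what underlie the reported concept erasure (removing 19-27% of features) and the reduction of Whisper's false speech detections; the sketch above only illustrates the mechanics of encoding, decoding, and ablating features.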