Jumping Ahead: Improving Reconstruction Fidelity with JumpReLU Sparse Autoencoders
\n","updatedAt":"2024-07-22T02:14:33.039Z","author":{"_id":"60f1abe7544c2adfd699860c","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg","fullname":"AK","name":"akhaliq","type":"user","isPro":false,"isHf":true,"isHfAdmin":false,"isMod":false,"followerCount":9179,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.36053138971328735},"editors":["akhaliq"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg"],"reactions":[],"isReport":false}},{"id":"669f084419845ee054b17c7a","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false},"createdAt":"2024-07-23T01:32:52.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Scaling and evaluating sparse autoencoders](https://huggingface.co/papers/2406.04093) (2024)\n* [Interpreting Attention Layer Outputs with Sparse Autoencoders](https://huggingface.co/papers/2406.17759) (2024)\n* [Transcoders Find Interpretable LLM Feature Circuits](https://huggingface.co/papers/2406.11944) (2024)\n* [Sparse maximal update parameterization: A holistic approach to sparse training dynamics](https://huggingface.co/papers/2405.15743) (2024)\n* [The Missing Curve Detectors of InceptionV1: Applying Sparse Autoencoders to InceptionV1 Early Vision](https://huggingface.co/papers/2406.03662) (2024)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`","html":"
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:

* [Scaling and evaluating sparse autoencoders](https://huggingface.co/papers/2406.04093) (2024)
* [Interpreting Attention Layer Outputs with Sparse Autoencoders](https://huggingface.co/papers/2406.17759) (2024)
* [Transcoders Find Interpretable LLM Feature Circuits](https://huggingface.co/papers/2406.11944) (2024)
* [Sparse maximal update parameterization: A holistic approach to sparse training dynamics](https://huggingface.co/papers/2405.15743) (2024)
* [The Missing Curve Detectors of InceptionV1: Applying Sparse Autoencoders to InceptionV1 Early Vision](https://huggingface.co/papers/2406.03662) (2024)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out [this Space](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers).
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`
\n","updatedAt":"2024-07-23T01:32:52.478Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.689473569393158},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2407.14435","authors":[{"_id":"669dc082a8b62d0515902f8b","user":{"_id":"6477122c99a5ce743ccf2f55","avatarUrl":"/avatars/dee617374fc609e07eba3bcb2cd16810.svg","isPro":false,"fullname":"Senthooran Rajamanoharan","user":"srdm","type":"user"},"name":"Senthooran Rajamanoharan","status":"claimed_verified","statusLastChangedAt":"2024-07-22T10:05:01.064Z","hidden":false},{"_id":"669dc082a8b62d0515902f8c","user":{"_id":"632cb97554e2c512c8f7bb31","avatarUrl":"/avatars/634be7f8a59b6cf639e9e6c5e7ca1fbd.svg","isPro":false,"fullname":"TomL","user":"Aric","type":"user"},"name":"Tom Lieberum","status":"claimed_verified","statusLastChangedAt":"2024-07-22T11:36:28.359Z","hidden":false},{"_id":"669dc082a8b62d0515902f8d","name":"Nicolas Sonnerat","hidden":false},{"_id":"669dc082a8b62d0515902f8e","user":{"_id":"631ecc58daa9591e522e1494","avatarUrl":"/avatars/4f808fae966e808105e89712c97d90d2.svg","isPro":false,"fullname":"VConm","user":"ArthurConmy","type":"user"},"name":"Arthur Conmy","status":"admin_assigned","statusLastChangedAt":"2024-07-22T08:52:46.744Z","hidden":false},{"_id":"669dc082a8b62d0515902f8f","user":{"_id":"667f25c67a2adf8ac1c940b2","avatarUrl":"/avatars/ccf62e2e4913d9a8615613f7bb0bfbe2.svg","isPro":false,"fullname":"Vikrant Varma","user":"vikrantvarma","type":"user"},"name":"Vikrant Varma","status":"admin_assigned","statusLastChangedAt":"2024-07-22T08:52:40.931Z","hidden":false},{"_id":"669dc082a8b62d0515902f90","user":{"_id":"6647488b45ee22196e798ba6","avatarUrl":"/avatars/7c068c30dcd5f07e4d896fd1dce5be2c.svg","isPro":false,"fullname":"Janos Kramar","user":"jkramar","type":"user"},"name":"János Kramár","status":"admin_assigned","statusLastChangedAt":"2024-07-22T08:52:59.268Z","hidden":false},{"_id":"669dc082a8b62d0515902f91","user":{"_id":"66918e43a9cf6ccd89d69bf8","avatarUrl":"/avatars/557d5befd01e8a5bfd4b1b7afd285203.svg","isPro":false,"fullname":"Neel Nanda","user":"NeelNanda2","type":"user"},"name":"Neel Nanda","status":"extracted_pending","statusLastChangedAt":"2024-07-22T02:14:26.915Z","hidden":false}],"publishedAt":"2024-07-19T16:07:19.000Z","submittedOnDailyAt":"2024-07-22T00:44:33.033Z","title":"Jumping Ahead: Improving Reconstruction Fidelity with JumpReLU Sparse\n Autoencoders","submittedOnDailyBy":{"_id":"60f1abe7544c2adfd699860c","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg","isPro":false,"fullname":"AK","user":"akhaliq","type":"user"},"summary":"Sparse autoencoders (SAEs) are a promising unsupervised approach for\nidentifying causally relevant and interpretable linear features in a language\nmodel's (LM) activations. To be useful for downstream tasks, SAEs need to\ndecompose LM activations faithfully; yet to be interpretable the decomposition\nmust be sparse -- two objectives that are in tension. 
In this paper, we\nintroduce JumpReLU SAEs, which achieve state-of-the-art reconstruction fidelity\nat a given sparsity level on Gemma 2 9B activations, compared to other recent\nadvances such as Gated and TopK SAEs. We also show that this improvement does\nnot come at the cost of interpretability through manual and automated\ninterpretability studies. JumpReLU SAEs are a simple modification of vanilla\n(ReLU) SAEs -- where we replace the ReLU with a discontinuous JumpReLU\nactivation function -- and are similarly efficient to train and run. By\nutilising straight-through-estimators (STEs) in a principled manner, we show\nhow it is possible to train JumpReLU SAEs effectively despite the discontinuous\nJumpReLU function introduced in the SAE's forward pass. Similarly, we use STEs\nto directly train L0 to be sparse, instead of training on proxies such as L1,\navoiding problems like shrinkage.","upvotes":7,"discussionId":"669dc082a8b62d0515902fc2","ai_summary":"JumpReLU SAEs achieve high reconstruction fidelity and interpretability in language model activations by using a discontinuous activation function and straight-through-estimators.","ai_keywords":["Sparse autoencoders","JumpReLU","Gemma 2 9B","Gated SAEs","TopK SAEs","interpretability","straight-through-estimators","L0 regularization"]},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"635fd74e14657fb8cff2bc13","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/635fd74e14657fb8cff2bc13/lUlHB0z1CRPJpwwT3JcnO.jpeg","isPro":false,"fullname":"Chan Kim","user":"chanmuzi","type":"user"},{"_id":"5f0514845d08220171a0ad70","avatarUrl":"/avatars/c643af7c7f230ba29a38450af832a14c.svg","isPro":false,"fullname":"Sam Petulla","user":"petulla","type":"user"},{"_id":"655e490d3beaa281e50aff5f","avatarUrl":"/avatars/7e6d9a096a71d601f66279e317d11e1f.svg","isPro":false,"fullname":"Amil Dravid","user":"amildravid4292","type":"user"},{"_id":"653f1ef4aabbf15fc76a259c","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/653f1ef4aabbf15fc76a259c/1jJDeTOJaJIKQZ4g3i8V3.jpeg","isPro":false,"fullname":"LLLeo Li","user":"LLLeo612","type":"user"},{"_id":"60c8d264224e250fb0178f77","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/60c8d264224e250fb0178f77/i8fbkBVcoFeJRmkQ9kYAE.png","isPro":false,"fullname":"Adam Lee","user":"Abecid","type":"user"},{"_id":"648eb1eb59c4e5c87dc116e0","avatarUrl":"/avatars/c636cea39c2c0937f01398c94ead5dad.svg","isPro":false,"fullname":"fdsqefsgergd","user":"T-representer","type":"user"},{"_id":"663ccbff3a74a20189d4aa2e","avatarUrl":"/avatars/83a54455e0157480f65c498cd9057cf2.svg","isPro":false,"fullname":"Nguyen Van Thanh","user":"NguyenVanThanhHust","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0}">
AI-generated summary

JumpReLU SAEs achieve high reconstruction fidelity and interpretability in language model activations by using a discontinuous activation function and straight-through estimators.
Abstract

Sparse autoencoders (SAEs) are a promising unsupervised approach for identifying causally relevant and interpretable linear features in a language model's (LM) activations. To be useful for downstream tasks, SAEs need to decompose LM activations faithfully; yet to be interpretable, the decomposition must be sparse -- two objectives that are in tension. In this paper, we introduce JumpReLU SAEs, which achieve state-of-the-art reconstruction fidelity at a given sparsity level on Gemma 2 9B activations, compared to other recent advances such as Gated and TopK SAEs. We also show that this improvement does not come at the cost of interpretability, through manual and automated interpretability studies. JumpReLU SAEs are a simple modification of vanilla (ReLU) SAEs -- where we replace the ReLU with a discontinuous JumpReLU activation function -- and are similarly efficient to train and run. By utilising straight-through-estimators (STEs) in a principled manner, we show how it is possible to train JumpReLU SAEs effectively despite the discontinuous JumpReLU function introduced in the SAE's forward pass. Similarly, we use STEs to directly train L0 to be sparse, instead of training on proxies such as L1, avoiding problems like shrinkage.
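
To make the abstract's mechanism concrete: JumpReLU zeroes any pre-activation below a learned per-feature threshold θ and passes larger values through unchanged. Because the step function has zero derivative almost everywhere, gradients cannot reach θ directly, so training substitutes a kernel-based pseudo-derivative in the backward pass (the straight-through estimator). Below is a minimal PyTorch sketch of this idea, assuming a (batch, features) activation layout; the rectangle kernel and the `BANDWIDTH` value are illustrative assumptions, not the authors' released implementation.

```python
import torch

BANDWIDTH = 0.001  # epsilon: STE kernel width; an illustrative hyperparameter choice


def rectangle(u):
    # Rectangle kernel: 1 on (-1/2, 1/2), 0 elsewhere.
    return ((u > -0.5) & (u < 0.5)).to(u.dtype)


class JumpReLUFn(torch.autograd.Function):
    """JumpReLU_theta(z) = z * H(z - theta).

    The forward pass is exact; the backward pass uses a straight-through
    pseudo-derivative so gradients can train the threshold theta.
    """

    @staticmethod
    def forward(ctx, z, theta):
        ctx.save_for_backward(z, theta)
        return z * (z > theta).to(z.dtype)

    @staticmethod
    def backward(ctx, grad_out):
        z, theta = ctx.saved_tensors
        # d/dz is the step function H(z - theta) almost everywhere.
        grad_z = (z > theta).to(z.dtype) * grad_out
        # STE pseudo-derivative w.r.t. theta: -(theta / eps) * K((z - theta) / eps).
        grad_theta = -(theta / BANDWIDTH) * rectangle((z - theta) / BANDWIDTH) * grad_out
        return grad_z, grad_theta.sum(dim=0)  # sum over batch for per-feature theta


class StepFn(torch.autograd.Function):
    """H(z - theta), used so an L0 penalty can train theta directly."""

    @staticmethod
    def forward(ctx, z, theta):
        ctx.save_for_backward(z, theta)
        return (z > theta).to(z.dtype)

    @staticmethod
    def backward(ctx, grad_out):
        z, theta = ctx.saved_tensors
        # No gradient to z (the step is flat a.e.); STE gradient to theta only.
        grad_theta = -(1.0 / BANDWIDTH) * rectangle((z - theta) / BANDWIDTH) * grad_out
        return None, grad_theta.sum(dim=0)
```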
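
With these pieces, the training objective the abstract describes -- squared reconstruction error plus a λ-weighted L0 penalty, with no L1 proxy and hence no shrinkage -- can be sketched as below. The exp-parameterisation keeping the thresholds positive is an assumption of this sketch, and `jumprelu_sae_loss` is a hypothetical helper name.

```python
def jumprelu_sae_loss(x, W_enc, b_enc, W_dec, b_dec, log_theta, lam):
    theta = log_theta.exp()                    # keep thresholds positive (sketch choice)
    pre = x @ W_enc + b_enc                    # encoder pre-activations pi(x)
    f = JumpReLUFn.apply(pre, theta)           # sparse feature activations
    x_hat = f @ W_dec + b_dec                  # reconstruction of the LM activation
    recon = ((x - x_hat) ** 2).sum(dim=-1).mean()
    l0 = StepFn.apply(pre, theta).sum(dim=-1).mean()  # mean number of active features
    return recon + lam * l0
```

Because the sparsity term is a sum of step functions, its gradient reaches the thresholds only through the straight-through estimator; this is what lets L0 be optimised directly instead of through an L1 surrogate that shrinks feature activations.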