arxiv:2502.13063

Cramming 1568 Tokens into a Single Vector and Back Again: Exploring the Limits of Embedding Space Capacity

Published on Feb 18, 2025
· Submitted by Mikhail Burtsev on Feb 19, 2025
#2 Paper of the day
Authors: Yuri Kuratov, Mikhail Arkhipov, Aydar Bulatov, Mikhail Burtsev

Abstract

Using per-sample optimization, compression ratios of up to x1500 are achieved for sequence-to-vector compression in language models, highlighting a large gap between theoretical and practical limits.
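The gap between theoretical and practical limits can be checked with back-of-envelope arithmetic. The numbers below (hidden size 4096, vocabulary 32,000) are illustrative assumptions, not figures from the paper: a d-dimensional fp16 vector stores d × 16 raw bits, while a token from a vocabulary of size V carries at most log2(V) bits, so their ratio bounds how many tokens one input vector could encode in principle.

```python
import math

d = 4096           # assumed hidden size of a typical LLM (illustrative)
bits_per_dim = 16  # fp16 precision
vocab = 32_000     # assumed vocabulary size (illustrative)

vector_bits = d * bits_per_dim          # raw storage of one input vector
bits_per_token = math.log2(vocab)       # max information per token
max_tokens = vector_bits / bits_per_token

print(f"{vector_bits} bits / {bits_per_token:.1f} bits per token "
      f"≈ {max_tokens:.0f} tokens")
```

Even this crude bound allows thousands of tokens per vector, orders of magnitude above the ~x10 ratios that trained encoders reach.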

AI-generated summary

A range of recent works addresses the problem of compressing a sequence of tokens into a shorter sequence of real-valued vectors that can be used as inputs in place of token embeddings or a key-value cache. These approaches make it possible to reduce the amount of compute in existing language models. Despite relying on powerful models as encoders, the maximum attainable lossless compression ratio is typically no higher than x10. This is highly intriguing because, in theory, the maximum information capacity of large real-valued vectors is far beyond the reported rates, even for 16-bit precision and a modest vector size. In this work, we explore the limits of compression by replacing the encoder with a per-sample optimization procedure. We show that vectors with compression ratios of up to x1500 exist, which highlights a two-orders-of-magnitude gap between existing and practically attainable solutions. Furthermore, we empirically show that the compression limits are determined not by the length of the input but by the amount of uncertainty to be reduced, namely, the cross-entropy loss on the sequence without any conditioning. The obtained limits highlight the substantial gap between the theoretical capacity of input embeddings and their practical utilization, suggesting significant room for optimization in model design.
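The per-sample procedure the abstract describes (freeze the model, then optimize the input vector directly by gradient descent until the target sequence is reconstructed) can be illustrated at toy scale. The sketch below is not the authors' code: it swaps the frozen LLM for fixed random linear read-outs, one per position, as a hypothetical stand-in, and optimizes a single "memory" vector so that every target token is recovered.

```python
import numpy as np

rng = np.random.default_rng(0)

d, vocab, T = 32, 16, 8              # vector size, vocab size, sequence length
targets = rng.integers(0, vocab, T)  # token ids to "cram" into one vector

# Stand-in for a frozen decoder: one fixed random read-out matrix per position.
readouts = rng.normal(size=(T, vocab, d)) / np.sqrt(d)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

m = np.zeros(d)                      # the trainable input vector
for step in range(5000):             # plain gradient descent on cross-entropy
    grad = np.zeros(d)
    for t in range(T):
        p = softmax(readouts[t] @ m)
        p[targets[t]] -= 1.0         # d(loss)/d(logits) = softmax - one-hot
        grad += readouts[t].T @ p
    m -= 0.05 * grad

decoded = np.array([np.argmax(readouts[t] @ m) for t in range(T)])
print("targets:", targets, "decoded:", decoded)
```

Because the cross-entropy loss is convex in m here, gradient descent drives it toward zero and the decode recovers the full sequence; the paper applies the same idea with a frozen transformer in place of the random read-outs, where only the input vectors are trained per sample.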

Community

Paper author Paper submitter

[Three figures from the paper posted as images; no captions available.]

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

- Better Prompt Compression Without Multi-Layer Perceptrons (2025): https://huggingface.co/papers/2501.06730
- Vision-centric Token Compression in Large Language Model (2025): https://huggingface.co/papers/2502.00791
- LCIRC: A Recurrent Compression Approach for Efficient Long-form Context and Query Dependent Modeling in LLMs (2025): https://huggingface.co/papers/2502.06139
- A Silver Bullet or a Compromise for Full Attention? A Comprehensive Study of Gist Token-based Context Compression (2024): https://huggingface.co/papers/2412.17483
- Scaling Embedding Layers in Language Models (2025): https://huggingface.co/papers/2502.01637
- Following the Autoregressive Nature of LLM Embeddings via Compression and Alignment (2025): https://huggingface.co/papers/2502.11401
- ALGEN: Few-shot Inversion Attacks on Textual Embeddings using Alignment and Generation (2025): https://huggingface.co/papers/2502.11308

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space: https://huggingface.co/spaces/librarian-bots/recommend_similar_papers

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper 0

No model linking this paper


Datasets citing this paper 0

No dataset linking this paper


Spaces citing this paper 0

No Space linking this paper


Collections including this paper 11