Papers
arxiv:2402.13449

CAMELoT: Towards Large Language Models with Training-Free Consolidated Associative Memory

Published on Feb 21, 2024
Authors: Zexue He, Leonid Karlinsky, Donghyun Kim, Julian McAuley, Dmitry Krotov, Rogerio Feris

Abstract

AI-generated summary

CAMELoT, an associative memory module, enables pre-trained LLMs to handle long input sequences without re-training, reducing perplexity and improving in-context learning.

Large Language Models (LLMs) struggle to handle long input sequences due to high memory and runtime costs. Memory-augmented models have emerged as a promising solution to this problem, but current methods are hindered by limited memory capacity and require costly re-training to integrate with a new LLM. In this work, we introduce an associative memory module which can be coupled to any pre-trained (frozen) attention-based LLM without re-training, enabling it to handle arbitrarily long input sequences. Unlike previous methods, our associative memory module consolidates representations of individual tokens into a non-parametric distribution model, dynamically managed by properly balancing the novelty and recency of the incoming data. By retrieving information from this consolidated associative memory, the base LLM can achieve significant (up to 29.7% on arXiv) perplexity reduction in long-context modeling compared to other baselines evaluated on standard benchmarks. This architecture, which we call CAMELoT (Consolidated Associative Memory Enhanced Long Transformer), demonstrates superior performance even with a tiny context window of 128 tokens, and also enables improved in-context learning with a much larger set of demonstrations.
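No code is linked from this page; as a rough illustration of the general idea described in the abstract (and not CAMELoT's actual algorithm), the sketch below keeps a fixed number of memory slots, merges each incoming key/value pair into its most similar slot as a running average when the similarity clears a threshold, and otherwise overwrites the least recently used slot, which is one simple way to trade off novelty against recency. Retrieval returns a similarity-weighted mixture of the stored values. All class names, thresholds, and the consolidation rule are hypothetical choices for illustration only.

```python
import numpy as np

class ConsolidatedMemory:
    """Toy consolidated associative memory (illustrative, NOT the paper's method).

    Fixed-capacity key/value store: similar items are merged into a running-mean
    slot (novelty check); otherwise the least recently used slot is overwritten
    (recency check).
    """

    def __init__(self, num_slots: int, dim: int, novelty_threshold: float = 0.8):
        self.keys = np.zeros((num_slots, dim))    # consolidated key centroids
        self.values = np.zeros((num_slots, dim))  # consolidated value centroids
        self.counts = np.zeros(num_slots)         # tokens absorbed per slot
        self.last_used = np.zeros(num_slots)      # recency timestamps
        self.step = 0
        self.novelty_threshold = novelty_threshold

    def _similarity(self, key: np.ndarray) -> np.ndarray:
        # Cosine similarity between the incoming key and every slot.
        norms = np.linalg.norm(self.keys, axis=1) * np.linalg.norm(key) + 1e-8
        return (self.keys @ key) / norms

    def write(self, key: np.ndarray, value: np.ndarray) -> None:
        self.step += 1
        sims = self._similarity(key)
        best = int(np.argmax(sims))
        if self.counts[best] > 0 and sims[best] >= self.novelty_threshold:
            # Familiar token: consolidate into the existing slot (running mean).
            n = self.counts[best]
            self.keys[best] = (self.keys[best] * n + key) / (n + 1)
            self.values[best] = (self.values[best] * n + value) / (n + 1)
            self.counts[best] += 1
            self.last_used[best] = self.step
        else:
            # Novel token: claim the least recently used slot.
            slot = int(np.argmin(self.last_used))
            self.keys[slot], self.values[slot] = key, value
            self.counts[slot] = 1
            self.last_used[slot] = self.step

    def read(self, query: np.ndarray, top_k: int = 4) -> np.ndarray:
        # Retrieve a similarity-weighted mix of the top-k consolidated values.
        sims = self._similarity(query)
        idx = np.argsort(sims)[-top_k:]
        weights = np.exp(sims[idx] - sims[idx].max())
        weights /= weights.sum()
        self.last_used[idx] = self.step
        return weights @ self.values[idx]


# Toy usage: stream random "token" key/value pairs, then query.
mem = ConsolidatedMemory(num_slots=64, dim=32)
rng = np.random.default_rng(0)
for _ in range(1000):
    k = rng.normal(size=32)
    mem.write(k, k)                 # use the key itself as the value for simplicity
retrieved = mem.read(rng.normal(size=32))
print(retrieved.shape)              # (32,)
```

In the architecture described by the paper, the retrieved representation would be consumed by a frozen attention-based LLM; the toy read/write interface above only sketches the memory side under the stated assumptions.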

Community

@librarian-bot recommend


This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any Paper on Hugging Face, check out this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2402.13449 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2402.13449 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2402.13449 in a Space README.md to link it from this page.

Collections including this paper 0

No Collection including this paper

Add this paper to a collection to link it from this page.