\n","updatedAt":"2025-02-12T08:57:28.835Z","author":{"_id":"6403c20ddbfbea2a0540983b","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6403c20ddbfbea2a0540983b/AHxgTaS-FuFNnoGJBDgXI.jpeg","fullname":"Lael","name":"laelhalawani","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":2,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.8623876571655273},"editors":["laelhalawani"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/6403c20ddbfbea2a0540983b/AHxgTaS-FuFNnoGJBDgXI.jpeg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2405.04065","authors":[{"_id":"6641a07d1cd6897588516be4","name":"Runheng Liu","hidden":false},{"_id":"6641a07d1cd6897588516be5","name":"Xingchen Xiao","hidden":false},{"_id":"6641a07d1cd6897588516be6","name":"Heyan Huang","hidden":false},{"_id":"6641a07d1cd6897588516be7","user":{"_id":"60f6d61f89b21b8fd2d471c6","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/60f6d61f89b21b8fd2d471c6/RmLFf97vUoXMoCT3rWbhm.jpeg","isPro":false,"fullname":"Zewen Chi","user":"CZWin32768","type":"user"},"name":"Zewen Chi","status":"claimed_verified","statusLastChangedAt":"2025-10-31T14:34:21.129Z","hidden":false},{"_id":"6641a07d1cd6897588516be8","name":"Zhijing Wu","hidden":false}],"publishedAt":"2024-05-07T07:14:38.000Z","title":"FlashBack:Efficient Retrieval-Augmented Language Modeling for Long\n Context Inference","summary":"Retrieval-Augmented Language Modeling (RALM) by integrating large language\nmodels (LLM) with relevant documents from an external corpus is a proven method\nfor enabling the LLM to generate information beyond the scope of its\npre-training corpus. Previous work using utilizing retrieved content by simply\nprepending retrieved contents to the input poses a high runtime issue, which\ndegrades the inference efficiency of the LLMs because they fail to use the\nKey-Value (KV) cache efficiently. In this paper, we propose FlashBack,\na modular RALM designed to improve the inference efficiency of RALM with\nappending context pattern while maintaining decent performance after specific\nfine-tuning without heavily destruct the knowledge integrity of the LLM.\nFlashBack appends retrieved documents at the end of the context for\nefficiently utilizing the KV cache instead of prepending them. Our experiment\nshows that the inference speed of FlashBack is up to 4times faster\nthan the prepending method on a 7B LLM (Llama 2). Via bypassing unnecessary\nre-computation, it demonstrates an advancement by achieving significantly\nfaster inference speed, and this heightened efficiency will substantially\nreduce inferential cost. Our code will be publicly available.","upvotes":0,"discussionId":"6641a07e1cd6897588516c0a","ai_summary":"FlashBack, a modular Retrieval-Augmented Language Model, improves inference efficiency by appending context patterns instead of prepending, achieving up to 4x faster performance on a 7B LLM while maintaining knowledge integrity.","ai_keywords":["Retrieval-Augmented Language Modeling","large language models","Key-Value (KV) cache","FlashBack","efficient utilization","inference efficiency","bench-marking","specific fine-tuning","knowledge integrity"]},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[],"acceptLanguages":["*"]}">FlashBack:Efficient Retrieval-Augmented Language Modeling for Long Context Inference
Abstract
FlashBack, a modular Retrieval-Augmented Language Model, improves inference efficiency by appending retrieved context instead of prepending it, achieving up to 4x faster inference on a 7B LLM while maintaining knowledge integrity.
Retrieval-Augmented Language Modeling (RALM), which integrates large language models (LLMs) with relevant documents from an external corpus, is a proven method for enabling an LLM to generate information beyond the scope of its pre-training corpus. Previous work that utilizes retrieved content by simply prepending it to the input incurs a high runtime cost, degrading the inference efficiency of the LLM because the Key-Value (KV) cache cannot be used effectively. In this paper, we propose FlashBack, a modular RALM designed to improve inference efficiency through an appending-context pattern, while maintaining decent performance after specific fine-tuning and without heavily disrupting the knowledge integrity of the LLM. FlashBack appends retrieved documents at the end of the context, rather than prepending them, so that the KV cache can be used efficiently. Our experiments show that the inference speed of FlashBack is up to 4x faster than the prepending method on a 7B LLM (Llama 2). By bypassing unnecessary re-computation, FlashBack achieves significantly faster inference, and this efficiency substantially reduces inference cost. Our code will be publicly available.
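The following is a minimal sketch of the appending idea described in the abstract, not the authors' released implementation: a fixed prefix is encoded once and its KV cache is reused, and only the newly retrieved passage (appended after the cached prefix) requires a fresh forward pass. The checkpoint name, prompt strings, and document text are illustrative assumptions.

```python
# Sketch of KV-cache reuse with an appending-context pattern (assumed setup,
# not the paper's code). Requires the `transformers` library and a causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint; any causal LM works
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16
).eval()

prompt = "Answer the question based on the documents.\nQuestion: Who wrote Hamlet?\n"
retrieved = "Document: Hamlet is a tragedy written by William Shakespeare.\n"

with torch.no_grad():
    # 1) Encode the prompt once and keep its KV cache.
    prompt_ids = tok(prompt, return_tensors="pt").input_ids
    prompt_out = model(prompt_ids, use_cache=True)
    cache = prompt_out.past_key_values  # reusable across different retrievals

    # 2) Append the retrieved document *after* the cached prefix: only the new
    #    tokens are processed, instead of re-encoding prompt + document.
    doc_ids = tok(retrieved, return_tensors="pt", add_special_tokens=False).input_ids
    doc_out = model(doc_ids, past_key_values=cache, use_cache=True)
    cache = doc_out.past_key_values

    # 3) Continue decoding from the combined cache (e.g., feed "Answer:" and
    #    then sample tokens one at a time, passing the cache each step).
```

With the prepending pattern, a change in the retrieved documents shifts every subsequent token position, so the entire context would have to be re-encoded; with the appending pattern above, the cached prefix stays valid and only the appended passage is computed.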
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- LLoCO: Learning Long Contexts Offline (2024)
- Improving Retrieval Augmented Open-Domain Question-Answering with Vectorized Contexts (2024)
- Superposition Prompting: Improving and Accelerating Retrieval-Augmented Generation (2024)
- XC-Cache: Cross-Attending to Cached Context for Efficient LLM Inference (2024)
- Imagination Augmented Generation: Learning to Imagine Richer Context for Question Answering over Large Language Models (2024)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment:
@librarian-bot
recommend
where's the public code?