
arxiv:2406.07528

QuickLLaMA: Query-aware Inference Acceleration for Large Language Models

Published on Jun 11, 2024
Authors: Jingyao Li, Han Shi, Xin Jiang, Zhenguo Li, Hong Xu, Jiaya Jia

Abstract

AI-generated summary

Q-LLM enhances LLMs' ability to capture long-distance dependencies and answer questions accurately by focusing on relevant memory data within a fixed window size.

The capacity of Large Language Models (LLMs) to comprehend and reason over long contexts is pivotal for advancements in diverse fields. Yet, they still struggle with capturing long-distance dependencies within sequences to deeply understand semantics. To address this issue, we introduce Query-aware Inference for LLMs (Q-LLM), a system designed to process extensive sequences akin to human cognition. By focusing on memory data relevant to a given query, Q-LLM can accurately capture pertinent information within a fixed window size and provide precise answers to queries. It requires no extra training and can be seamlessly integrated with any LLM. Q-LLM using LLaMA3 (QuickLLaMA) can read Harry Potter within 30s and accurately answer questions about it. On ∞-bench, Q-LLM improves on the current state-of-the-art by 7.17% with LLaMA3 and by 3.26% with Mistral. On the widely recognized Needle-in-a-Haystack task, Q-LLM improves upon the current SOTA by 7.0% on Mistral and achieves 100% on LLaMA3. Our code can be found at https://github.com/dvlab-research/Q-LLM.
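To make the core idea in the abstract concrete, here is a minimal sketch of query-aware context selection: fixed-size chunks of a long document are ranked by relevance to the query, and only the top chunks that fit a fixed window budget are kept. This is a simplified stand-in, not the authors' implementation; the actual Q-LLM system scores cached memory blocks inside the model using attention-based relevance, while the bag-of-words `embed()` scorer, `chunk_size`, and `window_budget` below are hypothetical placeholders.

```python
# A minimal sketch of query-aware context selection, assuming a toy
# bag-of-words relevance scorer. Illustration of the general idea only:
# the real Q-LLM scores KV-cache memory blocks inside the model with
# attention-based relevance, not surface token overlap.

from collections import Counter
import math


def embed(text: str) -> Counter:
    """Hypothetical stand-in scorer: a bag-of-words term-count vector."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def select_memory(context: str, query: str,
                  chunk_size: int = 64, window_budget: int = 256) -> str:
    """Keep only the query-relevant chunks that fit a fixed window."""
    # 1. Split the long context into fixed-size chunks ("memory blocks").
    words = context.split()
    chunks = [" ".join(words[i:i + chunk_size])
              for i in range(0, len(words), chunk_size)]

    # 2. Rank every chunk by its relevance to the query.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(embed(c), q), reverse=True)

    # 3. Greedily keep the most relevant chunks within the window budget
    #    (measured in words here, tokens in a real system).
    kept, used = [], 0
    for chunk in ranked:
        n = len(chunk.split())
        if used + n > window_budget:
            break
        kept.append(chunk)
        used += n
    return "\n".join(kept)


if __name__ == "__main__":
    long_doc = "..."  # e.g. a book far larger than the model's window
    question = "Who gave Harry his first broomstick?"
    prompt = select_memory(long_doc, question) + "\n\nQuestion: " + question
    # `prompt` now fits a fixed window and can be passed to any LLM.
```

The property the sketch preserves is the one the abstract emphasizes: the window cost stays fixed regardless of input length, and query relevance, rather than recency, decides what occupies it.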

Community

@librarian-bot recommend


This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

* XL3M: A Training-free Framework for LLM Length Extension Based on Segment-wise Inference (2024) (https://huggingface.co/papers/2405.17755)
* Recurrent Context Compression: Efficiently Expanding the Context Window of LLM (2024) (https://huggingface.co/papers/2406.06110)
* SirLLM: Streaming Infinite Retentive LLM (2024) (https://huggingface.co/papers/2405.12528)
* Equipping Transformer with Random-Access Reading for Long-Context Understanding (2024) (https://huggingface.co/papers/2405.13216)
* Make Your LLM Fully Utilize the Context (2024) (https://huggingface.co/papers/2404.16811)

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space: https://huggingface.co/spaces/librarian-bots/recommend_similar_papers

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2406.07528 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2406.07528 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2406.07528 in a Space README.md to link it from this page.

Collections including this paper 0

No Collection including this paper

Add this paper to a collection to link it from this page.