Optimizing Speculative Decoding for Serving Large Language Models Using Goodput
Published on Jun 20, 2024 (arXiv: 2406.14066)
Authors: Xiaoxuan Liu, Cade Daniel, Lanxiang Hu, Woosuk Kwon, Zhuohan Li, Xiangxi Mo, Alvin Cheung, Zhijie Deng, Ion Stoica, Hao Zhang
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

* [Hardware-Aware Parallel Prompt Decoding for Memory-Efficient Acceleration of LLM Inference](https://huggingface.co/papers/2405.18628) (2024)
* [SpecExec: Massively Parallel Speculative Decoding for Interactive LLM Inference on Consumer Devices](https://huggingface.co/papers/2406.02532) (2024)
* [SEED: Accelerating Reasoning Tree Construction via Scheduled Speculative Decoding](https://huggingface.co/papers/2406.18200) (2024)
* [PipeInfer: Accelerating LLM Inference using Asynchronous Pipelined Speculation](https://huggingface.co/papers/2407.11798) (2024)
* [SpecDec++: Boosting Speculative Decoding via Adaptive Candidate Lengths](https://huggingface.co/papers/2405.19715) (2024)
\n","updatedAt":"2024-07-19T02:10:10.096Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":317,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.7210657000541687},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false,"parentCommentId":"6699caf6004e418e713ee85a"}}]}],"primaryEmailConfirmed":false,"paper":{"id":"2406.14066","authors":[{"_id":"6688e6a4c1c9eaffe4865246","name":"Xiaoxuan Liu","hidden":false},{"_id":"6688e6a4c1c9eaffe4865247","user":{"_id":"64b86754a771ebc065edb572","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/64b86754a771ebc065edb572/nYlmDtjNTBx8eZOuSvCP-.png","isPro":true,"fullname":"Cade Daniel","user":"cdnamz","type":"user"},"name":"Cade Daniel","status":"claimed_verified","statusLastChangedAt":"2024-10-04T07:32:31.154Z","hidden":false},{"_id":"6688e6a4c1c9eaffe4865248","user":{"_id":"6301d6455e305a35cb0846a7","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6301d6455e305a35cb0846a7/aT2AtzRMSY_T3y02MIUap.jpeg","isPro":true,"fullname":"Lanxiang Hu","user":"Snyhlxde","type":"user"},"name":"Langxiang Hu","status":"claimed_verified","statusLastChangedAt":"2024-07-08T08:50:25.928Z","hidden":false},{"_id":"6688e6a4c1c9eaffe4865249","name":"Woosuk Kwon","hidden":false},{"_id":"6688e6a4c1c9eaffe486524a","name":"Zhuohan Li","hidden":false},{"_id":"6688e6a4c1c9eaffe486524b","name":"Xiangxi Mo","hidden":false},{"_id":"6688e6a4c1c9eaffe486524c","name":"Alvin Cheung","hidden":false},{"_id":"6688e6a4c1c9eaffe486524d","user":{"_id":"64bba541da140e461924dfed","avatarUrl":"/avatars/367993765b0ca3734b2b100db33ed787.svg","isPro":true,"fullname":"zhijie deng","user":"zhijie3","type":"user"},"name":"Zhijie Deng","status":"claimed_verified","statusLastChangedAt":"2025-02-11T10:04:50.825Z","hidden":false},{"_id":"6688e6a4c1c9eaffe486524e","name":"Ion Stoica","hidden":false},{"_id":"6688e6a4c1c9eaffe486524f","user":{"_id":"62d363143eebd640a4fa41fa","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/62d363143eebd640a4fa41fa/pvPwXlJ5OOb-UIfmffv4E.jpeg","isPro":false,"fullname":"Hao Zhang","user":"zhisbug","type":"user"},"name":"Hao Zhang","status":"claimed_verified","statusLastChangedAt":"2025-01-01T20:14:35.755Z","hidden":false}],"publishedAt":"2024-06-20T07:43:33.000Z","title":"Optimizing Speculative Decoding for Serving Large Language Models Using\n Goodput","summary":"Reducing the inference latency of large language models (LLMs) is crucial,\nand speculative decoding (SD) stands out as one of the most effective\ntechniques. Rather than letting the LLM generate all tokens directly,\nspeculative decoding employs effective proxies to predict potential outputs,\nwhich are then verified by the LLM without compromising the generation quality.\nYet, deploying SD in real online LLM serving systems (with continuous batching)\ndoes not always yield improvement -- under higher request rates or low\nspeculation accuracy, it paradoxically increases latency. 
Furthermore, there is\nno best speculation length work for all workloads under different system loads.\nBased on the observations, we develop a dynamic framework SmartSpec. SmartSpec\ndynamically determines the best speculation length for each request (from 0,\ni.e., no speculation, to many tokens) -- hence the associated speculative\nexecution costs -- based on a new metric called goodput, which characterizes\nthe current observed load of the entire system and the speculation accuracy. We\nshow that SmartSpec consistently reduces average request latency by up to 3.2x\ncompared to non-speculative decoding baselines across different sizes of target\nmodels, draft models, request rates, and datasets. Moreover, SmartSpec can be\napplied to different styles of speculative decoding, including traditional,\nmodel-based approaches as well as model-free methods like prompt lookup and\ntree-style decoding.","upvotes":3,"discussionId":"6688e6a5c1c9eaffe48652b3","ai_summary":"A dynamic framework called SmartSpec optimizes speculative decoding in large language models to reduce average request latency by dynamically determining the best speculation length based on system load and speculation accuracy.","ai_keywords":["speculative decoding","large language models","inference latency","continuous batching","goodput","prompt lookup","tree-style decoding"]},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"6452883a1a0c17bb7d591234","avatarUrl":"/avatars/19551ee41239da8670360dfcf4de39a8.svg","isPro":false,"fullname":"Lily Liu","user":"eqhylxx","type":"user"},{"_id":"6301d6455e305a35cb0846a7","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6301d6455e305a35cb0846a7/aT2AtzRMSY_T3y02MIUap.jpeg","isPro":true,"fullname":"Lanxiang Hu","user":"Snyhlxde","type":"user"},{"_id":"63be1068b3b8c44f8ceb6598","avatarUrl":"/avatars/5fc626315f037d14c94e5df4144cc74a.svg","isPro":false,"fullname":"Simon Mo","user":"simon-mo","type":"user"}],"acceptLanguages":["*"]}">
AI-generated summary

A dynamic framework called SmartSpec optimizes speculative decoding in large language models to reduce average request latency by dynamically determining the best speculation length based on system load and speculation accuracy.
Abstract

Reducing the inference latency of large language models (LLMs) is crucial, and speculative decoding (SD) stands out as one of the most effective techniques. Rather than letting the LLM generate all tokens directly, speculative decoding employs effective proxies to predict potential outputs, which are then verified by the LLM without compromising generation quality. Yet, deploying SD in real online LLM serving systems (with continuous batching) does not always yield improvement: under higher request rates or low speculation accuracy, it paradoxically increases latency. Furthermore, no single speculation length works best for all workloads under different system loads.
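For intuition, the following is a minimal sketch of the propose-then-verify loop that speculative decoding performs. The `draft_next_token` and `target_next_token` functions are hypothetical stand-ins for a small draft model and the large target model (they are not from the paper), and the exact-match acceptance check is a simplification of the rejection-sampling verification used in practice.

```python
import random
from typing import List

VOCAB = list(range(100))  # toy vocabulary for the stand-in models

def draft_next_token(prefix: List[int]) -> int:
    """Hypothetical cheap draft model: guesses the next token."""
    return random.choice(VOCAB)

def target_next_token(prefix: List[int]) -> int:
    """Hypothetical target model: the token the large LLM would emit."""
    return (sum(prefix) * 31 + len(prefix)) % len(VOCAB)

def speculative_step(prefix: List[int], k: int) -> List[int]:
    """One propose-then-verify step with speculation length k.

    The draft proposes k tokens; the target keeps the longest matching
    prefix and then contributes one token of its own, so even a fully
    rejected proposal makes progress (k = 0 is ordinary decoding).
    """
    # Propose k tokens with the cheap draft model.
    proposed, ctx = [], list(prefix)
    for _ in range(k):
        token = draft_next_token(ctx)
        proposed.append(token)
        ctx.append(token)

    # Verify: accept draft tokens while they match what the target would emit.
    accepted, ctx = [], list(prefix)
    for token in proposed:
        if token != target_next_token(ctx):
            break
        accepted.append(token)
        ctx.append(token)

    # Bonus token produced by the target model itself.
    accepted.append(target_next_token(ctx))
    return accepted

if __name__ == "__main__":
    print(speculative_step(prefix=[1, 2, 3], k=4))
```

The speculation length `k` is the knob the abstract refers to: a larger `k` amortizes more target-model work per verification step when drafts are accurate, but wastes draft and verification compute when they are not.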
Based on these observations, we develop a dynamic framework, SmartSpec. SmartSpec dynamically determines the best speculation length for each request (from 0, i.e., no speculation, to many tokens), and hence the associated speculative execution costs, based on a new metric called goodput, which characterizes the current observed load of the entire system and the speculation accuracy. We show that SmartSpec consistently reduces average request latency by up to 3.2x compared to non-speculative decoding baselines across different sizes of target models, draft models, request rates, and datasets. Moreover, SmartSpec can be applied to different styles of speculative decoding, including traditional, model-based approaches as well as model-free methods such as prompt lookup and tree-style decoding.
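To make the goodput idea concrete, here is a hedged sketch of how a goodput-style metric could drive the choice of speculation length: it estimates accepted tokens per unit time from a per-token acceptance rate and an assumed cost model, then picks the length that maximizes the estimate. The cost parameters and the simple geometric acceptance model below are illustrative assumptions, not SmartSpec's actual estimator.

```python
from dataclasses import dataclass

@dataclass
class CostModel:
    """Hypothetical per-step cost parameters (illustrative, not measured)."""
    draft_token_ms: float = 0.5    # draft-model time to propose one token
    target_base_ms: float = 10.0   # fixed overhead of one target forward pass
    target_token_ms: float = 0.05  # marginal target time per token in the batch

def expected_accepted(k: int, acceptance_rate: float) -> float:
    """Expected tokens produced per request in one step with speculation length k.

    Simple geometric model: each proposed token is accepted independently with
    probability `acceptance_rate`, plus one bonus token from the target.
    With k = 0 this is exactly 1 (plain decoding).
    """
    p = acceptance_rate
    return sum(p ** i for i in range(k + 1))

def step_time_ms(k: int, batch_size: int, cost: CostModel) -> float:
    """Rough time for one speculative step over the whole batch."""
    draft = k * cost.draft_token_ms
    verify = cost.target_base_ms + batch_size * (k + 1) * cost.target_token_ms
    return draft + verify

def goodput(k: int, batch_size: int, acceptance_rate: float, cost: CostModel) -> float:
    """Accepted (useful) tokens per millisecond across the batch."""
    return batch_size * expected_accepted(k, acceptance_rate) / step_time_ms(k, batch_size, cost)

def choose_speculation_length(batch_size: int, acceptance_rate: float,
                              cost: CostModel = CostModel(), k_max: int = 8) -> int:
    """Pick the k in [0, k_max] with the highest estimated goodput."""
    return max(range(k_max + 1), key=lambda k: goodput(k, batch_size, acceptance_rate, cost))

if __name__ == "__main__":
    # Lightly loaded batch with accurate drafts: long speculation wins.
    print(choose_speculation_length(batch_size=4, acceptance_rate=0.8))
    # Heavily loaded batch with poor drafts: the best length collapses toward 0.
    print(choose_speculation_length(batch_size=256, acceptance_rate=0.3))
```

With these illustrative numbers, a lightly loaded batch with accurate drafts favors long speculation, while a heavily loaded batch with poor acceptance drives the chosen length toward zero, mirroring the trade-off the abstract describes.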