\n","updatedAt":"2025-12-12T01:35:24.635Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.6812998056411743},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2512.02892","authors":[{"_id":"693a4c4d74fced5bf9c32493","user":{"_id":"655efd24afee0e00788bb589","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/655efd24afee0e00788bb589/22guLxIWNybbJR3jI-c4w.jpeg","isPro":false,"fullname":"Amr Mohamed","user":"amr-mohamed","type":"user"},"name":"Amr Mohamed","status":"claimed_verified","statusLastChangedAt":"2025-12-11T10:12:35.890Z","hidden":false},{"_id":"693a4c4d74fced5bf9c32494","user":{"_id":"64f1f92eefacc7da583a9e22","avatarUrl":"/avatars/18abbf9e31ca916f8d9a4495639a1329.svg","isPro":false,"fullname":"Yang ZHANG","user":"yangzhang33","type":"user"},"name":"Yang Zhang","status":"claimed_verified","statusLastChangedAt":"2025-12-12T09:16:56.863Z","hidden":false},{"_id":"693a4c4d74fced5bf9c32495","user":{"_id":"6839c2d132331eaf76bea940","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/U2daTtJiD-9JYYnbr4Xbu.png","isPro":false,"fullname":"Michalis Vazirgiannis","user":"mvazirg","type":"user"},"name":"Michalis Vazirgiannis","status":"admin_assigned","statusLastChangedAt":"2025-12-11T10:48:25.471Z","hidden":false},{"_id":"693a4c4d74fced5bf9c32496","user":{"_id":"6087e598e2b7cc3a117b0dc5","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6087e598e2b7cc3a117b0dc5/Ctz_W-uo1gOQRBHXalD1P.png","isPro":false,"fullname":"Guokan Shang","user":"guokan-shang","type":"user"},"name":"Guokan Shang","status":"admin_assigned","statusLastChangedAt":"2025-12-11T10:48:31.176Z","hidden":false}],"publishedAt":"2025-12-02T16:01:08.000Z","submittedOnDailyAt":"2025-12-11T02:21:01.009Z","title":"Fast-Decoding Diffusion Language Models via Progress-Aware Confidence Schedules","submittedOnDailyBy":{"_id":"655efd24afee0e00788bb589","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/655efd24afee0e00788bb589/22guLxIWNybbJR3jI-c4w.jpeg","isPro":false,"fullname":"Amr Mohamed","user":"amr-mohamed","type":"user"},"summary":"Diffusion large language models (dLLMs) offer a promising alternative to autoregressive models, but their practical utility is severely hampered by slow, iterative sampling. We present SchED, a training-free, model-agnostic early-exit algorithm that aggregates full-span logit margins and halts decoding once a smooth, progress-dependent confidence threshold is met. We evaluated SchED on two dLLM families (Dream and LLaDA), in base and instruction-tuned variants across ten benchmarks spanning downstream tasks including multiple-choice question answering (MCQ), math, long-form QA/summarization, and translation. SchED delivers large, stable accelerations: on instruction-tuned models, it achieves 3.8-4.0times speedups while retaining 99.8-100% of the baseline score on average. 
On base models, SchED yields consistent speedup gains with 99.1-100% performance retention, with up to 2.34times under more aggressive settings. Using a conservative speed metric that heavily penalizes quality loss (QPS, γ{=}4), we show that SchED is robust and clearly outperforms prior confidence-based early-exit methods, which break down on long-form generation. An entropy analysis of the model's token predictions reveals that instruction tuning speeds up the decay of predictive entropy. By turning genuine confidence stabilization into computational savings, SchED makes dLLM decoding substantially more efficient.","upvotes":12,"discussionId":"693a4c4d74fced5bf9c32497","githubRepo":"https://github.com/amr-mohamedd/SchED","githubRepoAddedBy":"auto","ai_summary":"SchED, a training-free early-exit algorithm, accelerates diffusion large language model decoding with minimal performance loss across various tasks.","ai_keywords":["diffusion large language models","dLLMs","autoregressive models","SchED","early-exit algorithm","full-span logit margins","confidence threshold","multiple-choice question answering","math","long-form QA/summarization","translation","instruction-tuned models","base models","QPS","entropy analysis","predictive entropy"],"githubStars":5},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"655efd24afee0e00788bb589","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/655efd24afee0e00788bb589/22guLxIWNybbJR3jI-c4w.jpeg","isPro":false,"fullname":"Amr Mohamed","user":"amr-mohamed","type":"user"},{"_id":"620783f24e28382272337ba4","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/620783f24e28382272337ba4/zkUveQPNiDfYjgGhuFErj.jpeg","isPro":false,"fullname":"GuoLiangTang","user":"Tommy930","type":"user"},{"_id":"6087e598e2b7cc3a117b0dc5","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6087e598e2b7cc3a117b0dc5/Ctz_W-uo1gOQRBHXalD1P.png","isPro":false,"fullname":"Guokan Shang","user":"guokan-shang","type":"user"},{"_id":"66448b4fecac3bc79b26304f","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/66448b4fecac3bc79b26304f/aPH3UFbc20CL2Bz2yn7nH.jpeg","isPro":false,"fullname":"Hadi Abdine","user":"habdine","type":"user"},{"_id":"6751b0caecaa275e389dd5eb","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/RI2Y_nnjg5xJnv5sJVx_v.png","isPro":false,"fullname":"Ahmad Chamma","user":"AC-723","type":"user"},{"_id":"64fc409d304b8cb412d352eb","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/dwkbTo7xN99Yo4he3VT9S.png","isPro":false,"fullname":"Dani Bouch","user":"Rateddany","type":"user"},{"_id":"6828ed494cd344b234726328","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/LnUhHY60V3Ope9Eb3LVnj.png","isPro":false,"fullname":"Winky","user":"Winky24","type":"user"},{"_id":"64f1f92eefacc7da583a9e22","avatarUrl":"/avatars/18abbf9e31ca916f8d9a4495639a1329.svg","isPro":false,"fullname":"Yang ZHANG","user":"yangzhang33","type":"user"},{"_id":"6575171654d1749612e21eed","avatarUrl":"/avatars/c032c1b942b3cb9450a49db88fce5c70.svg","isPro":false,"fullname":"Yulai 
Zhao","user":"sarosavo","type":"user"},{"_id":"695d283aab76479ff144f50a","avatarUrl":"/avatars/654fb1602311599ac05c62edb9d7074f.svg","isPro":false,"fullname":"Hassan","user":"lha25","type":"user"},{"_id":"686db5d4af2b856fabbf13aa","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/6BjMv2LVNoqvbX8fQSTPI.png","isPro":false,"fullname":"V bbbb","user":"Bbbbbnnn","type":"user"},{"_id":"67a621a777d94969c979dade","avatarUrl":"/avatars/b693d6648b590fd52823fc297749149f.svg","isPro":false,"fullname":"Dipan Maity","user":"DipanM2","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0}">
Fast-Decoding Diffusion Language Models via Progress-Aware Confidence Schedules
Abstract
SchED, a training-free early-exit algorithm, accelerates diffusion large language model decoding with minimal performance loss across various tasks.
Diffusion large language models (dLLMs) offer a promising alternative to autoregressive models, but their practical utility is severely hampered by slow, iterative sampling. We present SchED, a training-free, model-agnostic early-exit algorithm that aggregates full-span logit margins and halts decoding once a smooth, progress-dependent confidence threshold is met. We evaluate SchED on two dLLM families (Dream and LLaDA), in base and instruction-tuned variants, across ten benchmarks spanning downstream tasks including multiple-choice question answering (MCQ), math, long-form QA/summarization, and translation. SchED delivers large, stable accelerations: on instruction-tuned models, it achieves 3.8-4.0× speedups while retaining 99.8-100% of the baseline score on average. On base models, SchED yields consistent speedup gains with 99.1-100% performance retention, reaching up to 2.34× under more aggressive settings. Using a conservative speed metric that heavily penalizes quality loss (QPS, γ = 4), we show that SchED is robust and clearly outperforms prior confidence-based early-exit methods, which break down on long-form generation. An entropy analysis of the model's token predictions reveals that instruction tuning speeds up the decay of predictive entropy. By turning genuine confidence stabilization into computational savings, SchED makes dLLM decoding substantially more efficient.
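To make the early-exit idea described in the abstract concrete, here is a minimal sketch of a progress-aware confidence check for an iterative dLLM sampler. It assumes the confidence aggregate is the mean top-1 vs. top-2 logit margin over the full span and that the threshold follows a simple linear schedule; these choices, along with all names and the placeholder sampler loop, are illustrative assumptions rather than the paper's exact formulation (see the linked repository for the actual implementation).

```python
import torch

def should_exit_early(logits: torch.Tensor, step: int, total_steps: int,
                      tau_start: float = 2.0, tau_end: float = 0.5) -> bool:
    """Illustrative early-exit test (not SchED's exact rule).

    Aggregates the top-1 vs. top-2 logit margin over the full generation span
    and compares it against a threshold that relaxes smoothly as decoding
    progresses. The linear schedule and default values are assumptions.
    """
    # logits: (seq_len, vocab_size) predictions over the whole span at this step
    top2 = logits.topk(2, dim=-1).values               # (seq_len, 2), sorted descending
    margins = top2[:, 0] - top2[:, 1]                  # per-position confidence margin
    confidence = margins.mean().item()                 # aggregate over the full span

    progress = step / total_steps                      # decoding progress in [0, 1]
    threshold = tau_start + (tau_end - tau_start) * progress  # smooth, progress-dependent
    return confidence >= threshold


# Hypothetical placement inside an iterative denoising loop
# (`denoise_step` and `model` are placeholders, not a real API):
# for step in range(total_steps):
#     logits, tokens = denoise_step(model, tokens)
#     if should_exit_early(logits, step, total_steps):
#         break   # stop refining once confidence has stabilized
```

Because the check is training-free and only reads the logits the sampler already produces, it can in principle be bolted onto any dLLM decoding loop without modifying the model.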