arxiv:2202.00512

Progressive Distillation for Fast Sampling of Diffusion Models

Published on Feb 1, 2022
Authors:
Tim Salimans, Jonathan Ho

AI-generated summary

New parameterizations and progressive distillation methods improve the efficiency of diffusion models without sacrificing perceptual quality.

Abstract

Diffusion models have recently shown great promise for generative modeling, outperforming GANs on perceptual quality and autoregressive models at density estimation. A remaining downside is their slow sampling time: generating high quality samples takes many hundreds or thousands of model evaluations. Here we make two contributions to help eliminate this downside: First, we present new parameterizations of diffusion models that provide increased stability when using few sampling steps. Second, we present a method to distill a trained deterministic diffusion sampler, using many steps, into a new diffusion model that takes half as many sampling steps. We then keep progressively applying this distillation procedure to our model, halving the number of required sampling steps each time. On standard image generation benchmarks like CIFAR-10, ImageNet, and LSUN, we start out with state-of-the-art samplers taking as many as 8192 steps, and are able to distill down to models taking as few as 4 steps without losing much perceptual quality; achieving, for example, a FID of 3.0 on CIFAR-10 in 4 steps. Finally, we show that the full progressive distillation procedure does not take more time than it takes to train the original model, thus representing an efficient solution for generative modeling using diffusion at both train and test time.
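The halving procedure described in the abstract maps to a short training loop. The sketch below is a minimal, self-contained PyTorch illustration, not the authors' implementation: the toy TinyDenoiser, the cosine schedule, the ddim_step helper, and the 2-D stand-in data are all assumptions made for brevity, and the paper's SNR-based loss weighting and v-parameterization are omitted in favor of plain x-prediction.

```python
# Minimal sketch of progressive distillation; all names and the toy
# setup are illustrative assumptions, not the authors' code.
import copy
import math
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Toy network predicting x0 from a noisy input z and timestep t."""
    def __init__(self, dim=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 64), nn.SiLU(),
                                 nn.Linear(64, dim))

    def forward(self, z, t):
        return self.net(torch.cat([z, t[:, None]], dim=-1))

def alpha_sigma(t):
    """Cosine schedule: alpha_t = cos(pi*t/2), sigma_t = sin(pi*t/2)."""
    return torch.cos(0.5 * math.pi * t), torch.sin(0.5 * math.pi * t)

def ddim_step(model, z, t, s):
    """One deterministic DDIM update from time t to an earlier time s."""
    a_t, s_t = alpha_sigma(t)
    a_s, s_s = alpha_sigma(s)
    x0 = model(z, t)
    eps = (z - a_t[:, None] * x0) / s_t[:, None]
    return a_s[:, None] * x0 + s_s[:, None] * eps

def distill_round(teacher, data, n_steps, iters=1000, lr=1e-3, batch=128):
    """Train a student whose single step matches two teacher DDIM steps."""
    student = copy.deepcopy(teacher)        # student starts from the teacher
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(iters):
        x0 = data[torch.randint(len(data), (batch,))]
        # Sample a timestep on the student's coarser grid of n_steps/2 steps.
        i = torch.randint(1, n_steps // 2 + 1, (batch,)).float()
        t = 2.0 * i / n_steps
        a_t, s_t = alpha_sigma(t)
        z = a_t[:, None] * x0 + s_t[:, None] * torch.randn_like(x0)
        with torch.no_grad():
            # Two teacher steps of size 1/n_steps give the one-step target.
            z_mid = ddim_step(teacher, z, t, t - 1.0 / n_steps)
            z_tgt = ddim_step(teacher, z_mid, t - 1.0 / n_steps,
                              t - 2.0 / n_steps)
            # Invert the single DDIM update z -> z_tgt to get the x0 the
            # student must predict to land on z_tgt in one step.
            a_s, s_s = alpha_sigma(t - 2.0 / n_steps)
            ratio = (s_s / s_t)[:, None]
            x0_tgt = (z_tgt - ratio * z) / (a_s[:, None] - ratio * a_t[:, None])
        loss = ((student(z, t) - x0_tgt) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return student

# Halve the sampler's step count each round, e.g. 8 -> 4 -> 2 -> 1.
# In practice the starting model would be a fully trained diffusion model.
data = torch.randn(4096, 2)   # stand-in "dataset" of 2-D points
model, n = TinyDenoiser(), 8
while n > 1:
    model = distill_round(model, data, n)
    n //= 2
```

After each round the student becomes the next teacher, so the required sampling steps halve repeatedly; in the paper this takes samplers from thousands of steps down to as few as 4 with little loss in perceptual quality.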

Community

Speed Up Diffusion Models with Progressive Distillation!

Links 🔗:

👉 Subscribe: https://www.youtube.com/@Arxflix
👉 Twitter: https://x.com/arxflix
👉 LMNT (Partner): https://lmnt.com/

By Arxflix


Models citing this paper 86

Browse 86 models citing this paper

Datasets citing this paper 0

No datasets link this paper.

Cite arxiv.org/abs/2202.00512 in a dataset README.md to link it from this page.

Spaces citing this paper 265

Collections including this paper 2