Paper page - Lotus: Diffusion-based Visual Foundation Model for High-quality Dense Prediction
https://lotus3d.github.io/
This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

* [Degradation-Guided One-Step Image Super-Resolution with Diffusion Priors](https://huggingface.co/papers/2409.17058) (2024)
* [Fine-Tuning Image-Conditional Diffusion Models is Easier than You Think](https://huggingface.co/papers/2409.11355) (2024)
* [SaRA: High-Efficient Diffusion Model Fine-tuning with Progressive Sparse Low-Rank Adaptation](https://huggingface.co/papers/2409.06633) (2024)
* [Zero-Shot Uncertainty Quantification using Diffusion Probabilistic Models](https://huggingface.co/papers/2408.04718) (2024)
* [EDADepth: Enhanced Data Augmentation for Monocular Depth Estimation](https://huggingface.co/papers/2409.06183) (2024)

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out [this Space](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers).

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`
\n","updatedAt":"2024-09-28T01:33:25.765Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.6929894685745239},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2409.18124","authors":[{"_id":"66f62b506b0e782fa32a02db","user":{"_id":"6548fae806396ffce13092f4","avatarUrl":"/avatars/aaee62e508ed96790cab1e3f0158fafd.svg","isPro":false,"fullname":"Jing He","user":"jingheya","type":"user"},"name":"Jing He","status":"claimed_verified","statusLastChangedAt":"2024-10-03T08:31:50.539Z","hidden":false},{"_id":"66f62b506b0e782fa32a02dc","user":{"_id":"641d211e353524fe41f16387","avatarUrl":"/avatars/c6e72c82c029b415a035beebee50b52c.svg","isPro":true,"fullname":"Haodong Li","user":"haodongli","type":"user"},"name":"Haodong Li","status":"claimed_verified","statusLastChangedAt":"2024-09-27T06:58:00.560Z","hidden":false},{"_id":"66f62b506b0e782fa32a02dd","user":{"_id":"654a2b1a83e7bfc4313a5cc7","avatarUrl":"/avatars/dc3dfc3fcd26bb7350a9db0d075c5ea0.svg","isPro":false,"fullname":"Wei Yin","user":"WonderingWorld","type":"user"},"name":"Wei Yin","status":"claimed_verified","statusLastChangedAt":"2025-03-17T08:46:11.070Z","hidden":false},{"_id":"66f62b506b0e782fa32a02de","name":"Yixun Liang","hidden":false},{"_id":"66f62b506b0e782fa32a02df","user":{"_id":"6343e37f73b4f9cedab1c846","avatarUrl":"/avatars/2638af4626e8a4e3a95f845b94ad94f6.svg","isPro":false,"fullname":"Leheng Li","user":"lilelife","type":"user"},"name":"Leheng Li","status":"claimed_verified","statusLastChangedAt":"2024-10-02T07:41:30.485Z","hidden":false},{"_id":"66f62b506b0e782fa32a02e0","name":"Kaiqiang Zhou","hidden":false},{"_id":"66f62b506b0e782fa32a02e1","name":"Hongbo Liu","hidden":false},{"_id":"66f62b506b0e782fa32a02e2","name":"Bingbing Liu","hidden":false},{"_id":"66f62b506b0e782fa32a02e3","name":"Ying-Cong Chen","hidden":false}],"publishedAt":"2024-09-26T17:58:55.000Z","submittedOnDailyAt":"2024-09-27T02:19:47.994Z","title":"Lotus: Diffusion-based Visual Foundation Model for High-quality Dense\n Prediction","submittedOnDailyBy":{"_id":"60f1abe7544c2adfd699860c","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg","isPro":false,"fullname":"AK","user":"akhaliq","type":"user"},"summary":"Leveraging the visual priors of pre-trained text-to-image diffusion models\noffers a promising solution to enhance zero-shot generalization in dense\nprediction tasks. However, existing methods often uncritically use the original\ndiffusion formulation, which may not be optimal due to the fundamental\ndifferences between dense prediction and image generation. In this paper, we\nprovide a systemic analysis of the diffusion formulation for the dense\nprediction, focusing on both quality and efficiency. And we find that the\noriginal parameterization type for image generation, which learns to predict\nnoise, is harmful for dense prediction; the multi-step noising/denoising\ndiffusion process is also unnecessary and challenging to optimize. 
Based on\nthese insights, we introduce Lotus, a diffusion-based visual foundation model\nwith a simple yet effective adaptation protocol for dense prediction.\nSpecifically, Lotus is trained to directly predict annotations instead of\nnoise, thereby avoiding harmful variance. We also reformulate the diffusion\nprocess into a single-step procedure, simplifying optimization and\nsignificantly boosting inference speed. Additionally, we introduce a novel\ntuning strategy called detail preserver, which achieves more accurate and\nfine-grained predictions. Without scaling up the training data or model\ncapacity, Lotus achieves SoTA performance in zero-shot depth and normal\nestimation across various datasets. It also significantly enhances efficiency,\nbeing hundreds of times faster than most existing diffusion-based methods.","upvotes":33,"discussionId":"66f62b586b0e782fa32a04b7","githubRepo":"https://github.com/envision-research/lotus","githubRepoAddedBy":"auto","ai_summary":"A diffusion-based visual foundation model, Lotus, is introduced for dense prediction tasks, achieving state-of-the-art performance in zero-shot depth and normal estimation while improving efficiency.","ai_keywords":["text-to-image diffusion models","zero-shot generalization","dense prediction","diffusion formulation","noise prediction","multi-step noising/denoising","Lotus","adaptation protocol","direct annotation prediction","single-step diffusion process","detail preserver","SoTA performance"],"githubStars":783},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"6548fae806396ffce13092f4","avatarUrl":"/avatars/aaee62e508ed96790cab1e3f0158fafd.svg","isPro":false,"fullname":"Jing He","user":"jingheya","type":"user"},{"_id":"648eb1eb59c4e5c87dc116e0","avatarUrl":"/avatars/c636cea39c2c0937f01398c94ead5dad.svg","isPro":false,"fullname":"fdsqefsgergd","user":"T-representer","type":"user"},{"_id":"65b3846a85b6c214481ac431","avatarUrl":"/avatars/74964bfe341b865400ca36a6fc8042a0.svg","isPro":false,"fullname":"ccllet","user":"ccllet","type":"user"},{"_id":"61b3576de49318df54457d8f","avatarUrl":"/avatars/5d425bea6ee6319d413e965b4499ec5c.svg","isPro":false,"fullname":"Biziel","user":"Grzegorz","type":"user"},{"_id":"664714a8973dacf80ed465d0","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/KCg3_LBJLQr7gdmKfIl9h.jpeg","isPro":false,"fullname":"Robert Rusiecki","user":"Lirbi","type":"user"},{"_id":"62ca018b9261697476c52551","avatarUrl":"/avatars/ee668e8b83e009db359aedea24c5a518.svg","isPro":false,"fullname":"Almukhtar ","user":"Malmuk1","type":"user"},{"_id":"6493306970d925ae80523a53","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/nRCxbVng_PPBqKd-Z3KVc.jpeg","isPro":false,"fullname":"Dmitry Ryumin","user":"DmitryRyumin","type":"user"},{"_id":"635964636a61954080850e1d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/635964636a61954080850e1d/0bfExuDTrHTtm8c-40cDM.png","isPro":false,"fullname":"William Lamkin","user":"phanes","type":"user"},{"_id":"6270324ebecab9e2dcf245de","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6270324ebecab9e2dcf245de/cMbtWSasyNlYc9hvsEEzt.jpeg","isPro":false,"fullname":"Kye Gomez","user":"kye","type":"user"},{"_id":"62716952bcef985363db8485","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/62716952bcef985363db8485/zJPPo5xlwZRJdEuwYsYKp.jpeg","isPro":true,"fullname":"JB 
D.","user":"IAMJB","type":"user"},{"_id":"646d0c1c534e52f8c30500a6","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/646d0c1c534e52f8c30500a6/75VH8ClbRaP75BU2ONfXE.png","isPro":true,"fullname":"Pavlo Molchanov","user":"pmolchanov","type":"user"},{"_id":"651c240a37fecec1fe96c60b","avatarUrl":"/avatars/5af52af97b7907e138efecac0f20799b.svg","isPro":false,"fullname":"S.F.","user":"search-facility","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0}">
AI-generated summary:

A diffusion-based visual foundation model, Lotus, is introduced for dense prediction tasks, achieving state-of-the-art performance in zero-shot depth and normal estimation while improving efficiency.

Abstract:
Leveraging the visual priors of pre-trained text-to-image diffusion models
offers a promising solution to enhance zero-shot generalization in dense
prediction tasks. However, existing methods often uncritically use the original
diffusion formulation, which may not be optimal due to the fundamental
differences between dense prediction and image generation. In this paper, we
provide a systematic analysis of the diffusion formulation for dense
prediction, focusing on both quality and efficiency. We find that the
original parameterization type for image generation, which learns to predict
noise, is harmful for dense prediction; the multi-step noising/denoising
diffusion process is also unnecessary and challenging to optimize. Based on
these insights, we introduce Lotus, a diffusion-based visual foundation model
with a simple yet effective adaptation protocol for dense prediction.
Specifically, Lotus is trained to directly predict annotations instead of
noise, thereby avoiding harmful variance. We also reformulate the diffusion
process into a single-step procedure, simplifying optimization and
significantly boosting inference speed. Additionally, we introduce a novel
tuning strategy called detail preserver, which achieves more accurate and
fine-grained predictions. Without scaling up the training data or model
capacity, Lotus achieves SoTA performance in zero-shot depth and normal
estimation across various datasets. It also significantly enhances efficiency,
being hundreds of times faster than most existing diffusion-based methods.
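
To make the two parameterizations in the abstract concrete, below is a minimal PyTorch sketch contrasting the standard noise (epsilon) objective used for image generation with the direct annotation (x0) objective that Lotus adopts. This is an illustration under assumed names, not the authors' code: the `model`, the toy noise schedule, and the tensor shapes are placeholders.

```python
import torch
import torch.nn.functional as F

def diffusion_loss(model, x0, parameterization="x0", T=1000):
    """Sketch of the two diffusion training objectives.

    x0: clean target, e.g. a depth or normal map encoded as a
        (B, C, H, W) latent. `model(x_t, t)` is any denoising network.
    """
    # DDPM forward process: x_t = sqrt(a_bar_t) * x0 + sqrt(1 - a_bar_t) * eps
    t = torch.randint(0, T, (x0.shape[0],), device=x0.device)
    alphas_bar = torch.linspace(0.9999, 1e-4, T, device=x0.device)  # toy schedule
    a = alphas_bar[t].view(-1, 1, 1, 1)
    eps = torch.randn_like(x0)
    x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * eps

    pred = model(x_t, t)
    if parameterization == "eps":
        # Generation-style objective: recover the injected noise.
        return F.mse_loss(pred, eps)
    # Lotus-style objective: predict the annotation itself, avoiding the
    # variance the paper attributes to noise prediction.
    return F.mse_loss(pred, x0)
```

Because Lotus also reformulates the multi-step process into a single step, inference under this sketch would be one forward pass of `model` rather than an iterative denoising loop, which is where the claimed speedup over multi-step diffusion methods comes from.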