\n","updatedAt":"2023-12-06T16:01:27.403Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.7245811820030212},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2312.02155","authors":[{"_id":"656e9b922e0a38afd19218ee","name":"Shunyuan Zheng","hidden":false},{"_id":"656e9b922e0a38afd19218ef","name":"Boyao Zhou","hidden":false},{"_id":"656e9b922e0a38afd19218f0","user":{"_id":"63f52856b51da4d61da7aa21","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/63f52856b51da4d61da7aa21/qSvSk1TnRp6DzdRzcoC8F.jpeg","isPro":false,"fullname":"Ruizhi Shao","user":"Saurus","type":"user"},"name":"Ruizhi Shao","status":"admin_assigned","statusLastChangedAt":"2023-12-06T10:30:11.913Z","hidden":false},{"_id":"656e9b922e0a38afd19218f1","name":"Boning Liu","hidden":false},{"_id":"656e9b922e0a38afd19218f2","name":"Shengping Zhang","hidden":false},{"_id":"656e9b922e0a38afd19218f3","name":"Liqiang Nie","hidden":false},{"_id":"656e9b922e0a38afd19218f4","user":{"_id":"62e14dbe4db2175cd2735a80","avatarUrl":"/avatars/e6385dcedcb97e1b36281b49210321aa.svg","isPro":false,"fullname":"Yebin Liu","user":"YebinLiu","type":"user"},"name":"Yebin Liu","status":"admin_assigned","statusLastChangedAt":"2023-12-05T12:20:19.679Z","hidden":false}],"publishedAt":"2023-12-04T18:59:55.000Z","submittedOnDailyAt":"2023-12-05T01:10:03.356Z","title":"GPS-Gaussian: Generalizable Pixel-wise 3D Gaussian Splatting for\n Real-time Human Novel View Synthesis","submittedOnDailyBy":{"_id":"60f1abe7544c2adfd699860c","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg","isPro":false,"fullname":"AK","user":"akhaliq","type":"user"},"summary":"We present a new approach, termed GPS-Gaussian, for synthesizing novel views\nof a character in a real-time manner. The proposed method enables 2K-resolution\nrendering under a sparse-view camera setting. Unlike the original Gaussian\nSplatting or neural implicit rendering methods that necessitate per-subject\noptimizations, we introduce Gaussian parameter maps defined on the source views\nand regress directly Gaussian Splatting properties for instant novel view\nsynthesis without any fine-tuning or optimization. 
To this end, we train our\nGaussian parameter regression module on a large amount of human scan data,\njointly with a depth estimation module to lift 2D parameter maps to 3D space.\nThe proposed framework is fully differentiable and experiments on several\ndatasets demonstrate that our method outperforms state-of-the-art methods while\nachieving an exceeding rendering speed.","upvotes":14,"discussionId":"656e9b932e0a38afd1921925","githubRepo":"https://github.com/aipixel/gps-gaussian","githubRepoAddedBy":"auto","ai_summary":"A GPS-Gaussian approach synthesizes 2K-resolution views in real-time using Gaussian parameter maps and depth estimation, outperforming existing methods.","ai_keywords":["Gaussian parameter maps","Gaussian Splatting","neural implicit rendering","depth estimation module","2D parameter maps","3D space","differentiable framework","novel view synthesis"],"githubStars":609},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"620783f24e28382272337ba4","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/620783f24e28382272337ba4/zkUveQPNiDfYjgGhuFErj.jpeg","isPro":false,"fullname":"GuoLiangTang","user":"Tommy930","type":"user"},{"_id":"63c241edc58fcfeac18f1253","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/63c241edc58fcfeac18f1253/U3x4II6TcSU_gdXFftLCf.jpeg","isPro":false,"fullname":"Mike Staub","user":"mikestaub","type":"user"},{"_id":"60c8d264224e250fb0178f77","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/60c8d264224e250fb0178f77/i8fbkBVcoFeJRmkQ9kYAE.png","isPro":false,"fullname":"Adam Lee","user":"Abecid","type":"user"},{"_id":"63ddc7b80f6d2d6c3efe3600","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/63ddc7b80f6d2d6c3efe3600/RX5q9T80Jl3tn6z03ls0l.jpeg","isPro":false,"fullname":"J","user":"dashfunnydashdash","type":"user"},{"_id":"648eb1eb59c4e5c87dc116e0","avatarUrl":"/avatars/c636cea39c2c0937f01398c94ead5dad.svg","isPro":false,"fullname":"fdsqefsgergd","user":"T-representer","type":"user"},{"_id":"6335349d495073b8870b0a34","avatarUrl":"/avatars/0e1fd4d8e3fee8c883b5f5a4d34ca46d.svg","isPro":false,"fullname":"calmaus","user":"calmaus","type":"user"},{"_id":"6410213f928400b416424f6e","avatarUrl":"/avatars/4ce6a2a33d73119dc840217d7d053343.svg","isPro":false,"fullname":"Xudong Xu","user":"Sheldoooon","type":"user"},{"_id":"6343f83791049e1bce85373e","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1665398834110-noauth.png","isPro":false,"fullname":"Zhang ning","user":"pe65374","type":"user"},{"_id":"6538119803519fddb4a17e10","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6538119803519fddb4a17e10/ffJMkdx-rM7VvLTCM6ri_.jpeg","isPro":false,"fullname":"samusenps","user":"samusenps","type":"user"},{"_id":"6549135c196ae037a74e10a3","avatarUrl":"/avatars/86194456844c7b2b5389de36cb258472.svg","isPro":false,"fullname":"Richrich","user":"RichardForests","type":"user"},{"_id":"643efec9e9d063936911026c","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/643efec9e9d063936911026c/25TPUXWzFyBtdr7iH-T25.jpeg","isPro":false,"fullname":"Promptmetheus","user":"azure-arc-0","type":"user"},{"_id":"6689857212de1f2acc920945","avatarUrl":"/avatars/d451966e6ad81a0cf9b838ae3d3aef33.svg","isPro":false,"fullname":"Chet Down","user":"ChetDown","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0}">GPS-Gaussian: Generalizable Pixel-wise 3D Gaussian Splatting for Real-time Human 
Novel View Synthesis
Abstract
GPS-Gaussian synthesizes 2K-resolution novel views in real time using pixel-wise Gaussian parameter maps and depth estimation, outperforming existing methods.
We present a new approach, termed GPS-Gaussian, for synthesizing novel views of a character in real time. The proposed method enables 2K-resolution rendering under a sparse-view camera setting. Unlike the original Gaussian Splatting or neural implicit rendering methods, which require per-subject optimization, we introduce Gaussian parameter maps defined on the source views and directly regress Gaussian Splatting properties for instant novel view synthesis without any fine-tuning or optimization. To this end, we train our Gaussian parameter regression module on a large amount of human scan data, jointly with a depth estimation module that lifts the 2D parameter maps to 3D space. The proposed framework is fully differentiable, and experiments on several datasets demonstrate that our method outperforms state-of-the-art methods while achieving a significantly higher rendering speed.
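To make the pipeline concrete, below is a minimal, hypothetical sketch (in PyTorch) of the core idea described in the abstract: for each source view, a network regresses a depth map together with per-pixel Gaussian properties, and unprojecting the depth lifts every pixel to a 3D Gaussian center. This is not the authors' implementation; the module names, toy encoder, and output parameterization are assumptions for illustration, and the differentiable Gaussian-splatting rasterizer that would render the novel view is omitted.

```python
# Hypothetical sketch of pixel-wise Gaussian parameter regression (not the paper's code).
import torch
import torch.nn as nn

class GaussianParamHead(nn.Module):
    """Toy stand-in for the Gaussian parameter regression module:
    maps an input view to per-pixel depth + Gaussian attributes."""
    def __init__(self, in_ch=3, feat_ch=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
        )
        # Per-pixel outputs: 1 depth, 1 opacity, 3 scale, 4 rotation (quaternion), 3 color
        self.head = nn.Conv2d(feat_ch, 1 + 1 + 3 + 4 + 3, 1)

    def forward(self, img):
        out = self.head(self.encoder(img))
        depth, opacity, scale, rot, color = torch.split(out, [1, 1, 3, 4, 3], dim=1)
        return (depth.exp(), opacity.sigmoid(), scale.exp(),
                nn.functional.normalize(rot, dim=1), color.sigmoid())

def unproject(depth, K, cam_to_world):
    """Lift a (B,1,H,W) depth map to world-space 3D points given intrinsics K."""
    B, _, H, W = depth.shape
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    uv1 = torch.stack([u, v, torch.ones_like(u)], 0).float()          # (3,H,W) pixel coords
    rays = torch.einsum("ij,jhw->ihw", K.inverse(), uv1)              # camera-space rays
    pts_cam = rays.unsqueeze(0) * depth                               # (B,3,H,W)
    pts_cam = torch.cat([pts_cam, torch.ones_like(depth)], dim=1)     # homogeneous coords
    pts_world = torch.einsum("ij,bjhw->bihw", cam_to_world, pts_cam)[:, :3]
    return pts_world.flatten(2).transpose(1, 2)                       # (B, H*W, 3)

# Usage: regress pixel-wise Gaussians from two sparse source views, then merge them
# into one set of 3D Gaussians that a differentiable splatting rasterizer would
# render from the target viewpoint (rasterization not shown).
if __name__ == "__main__":
    net = GaussianParamHead()
    K = torch.eye(3)                                                  # placeholder intrinsics
    cam_to_world = torch.eye(4)                                       # placeholder extrinsics
    views = [torch.rand(1, 3, 64, 64) for _ in range(2)]
    gaussians = []
    for img in views:
        depth, opacity, scale, rot, color = net(img)
        centers = unproject(depth, K, cam_to_world)                   # (1, H*W, 3)
        gaussians.append((centers, opacity, scale, rot, color))
    print("Gaussian centers per view:", gaussians[0][0].shape)
```

Because every stage (parameter regression, depth-based unprojection, and splatting) is differentiable, the whole pipeline can be trained end-to-end on human scan data, which is what allows novel views of unseen subjects to be rendered without per-subject optimization.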
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- FSGS: Real-Time Few-shot View Synthesis using Gaussian Splatting (2023)
- 4D Gaussian Splatting for Real-Time Dynamic Scene Rendering (2023)
- An Efficient 3D Gaussian Representation for Monocular/Multi-view Dynamic Scenes (2023)
- 4K4D: Real-Time 4D View Synthesis at 4K Resolution (2023)
- GS-SLAM: Dense Visual SLAM with 3D Gaussian Splatting (2023)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space.