GEN3C: 3D-Informed World-Consistent Video Generation with Precise Camera Control

Xuanchi Ren, Tianchang Shen, Jiahui Huang, Huan Ling, Yifan Lu, Merlin Nimier-David, Thomas Müller, Alexander Keller, Sanja Fidler, Jun Gao

Project page: https://research.nvidia.com/labs/toronto-ai/GEN3C/
Code: https://github.com/nv-tlabs/GEN3C
Paper: https://huggingface.co/papers/2503.03751

PeepDaSlan9: Thanks for sharing

librarian-bot: This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

* [CamCtrl3D: Single-Image Scene Exploration with Precise 3D Camera Control](https://huggingface.co/papers/2501.06006) (2025)
* [VidCRAFT3: Camera, Object, and Lighting Control for Image-to-Video Generation](https://huggingface.co/papers/2502.07531) (2025)
* [Difix3D+: Improving 3D Reconstructions with Single-Step Diffusion Models](https://huggingface.co/papers/2503.01774) (2025)
* [F3D-Gaus: Feed-forward 3D-aware Generation on ImageNet with Cycle-Consistent Gaussian Splatting](https://huggingface.co/papers/2501.06714) (2025)
* [Joint Learning of Depth and Appearance for Portrait Image Animation](https://huggingface.co/papers/2501.08649) (2025)
* [Towards Physical Understanding in Video Generation: A 3D Point Regularization Approach](https://huggingface.co/papers/2502.03639) (2025)
* [Matrix3D: Large Photogrammetry Model All-in-One](https://huggingface.co/papers/2502.07685) (2025)

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out [this Space](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers).

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`
\n","updatedAt":"2025-03-07T01:34:39.396Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.7077003121376038},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}},{"id":"67cb49e2101af374ebbeb09f","author":{"_id":"67818b1fa6b75c5dc3cf430c","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/67818b1fa6b75c5dc3cf430c/5aA0gP8ZvIkMndNA7CqqE.png","fullname":"Ribbit Ribbit","name":"ribbitribbit365","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":1,"isUserFollowing":false},"createdAt":"2025-03-07T19:32:50.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"We made a deep dive video for this paper: https://www.youtube.com/watch?v=Q80Mgm-0JCM. Happy learning together!\n\n","html":"
\n","updatedAt":"2025-03-07T19:32:50.556Z","author":{"_id":"67818b1fa6b75c5dc3cf430c","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/67818b1fa6b75c5dc3cf430c/5aA0gP8ZvIkMndNA7CqqE.png","fullname":"Ribbit Ribbit","name":"ribbitribbit365","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":1,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.5553673505783081},"editors":["ribbitribbit365"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/67818b1fa6b75c5dc3cf430c/5aA0gP8ZvIkMndNA7CqqE.png"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2503.03751","authors":[{"_id":"67c912b1b5903dd437cc2370","user":{"_id":"658529d61c461dfe88afe8e8","avatarUrl":"/avatars/a22c1b07d28c2662833c462c6537d835.svg","isPro":false,"fullname":"Xuanchi Ren","user":"xrenaa","type":"user"},"name":"Xuanchi Ren","status":"admin_assigned","statusLastChangedAt":"2025-03-06T09:55:04.321Z","hidden":false},{"_id":"67c912b1b5903dd437cc2371","name":"Tianchang Shen","hidden":false},{"_id":"67c912b1b5903dd437cc2372","name":"Jiahui Huang","hidden":false},{"_id":"67c912b1b5903dd437cc2373","name":"Huan Ling","hidden":false},{"_id":"67c912b1b5903dd437cc2374","name":"Yifan Lu","hidden":false},{"_id":"67c912b1b5903dd437cc2375","name":"Merlin Nimier-David","hidden":false},{"_id":"67c912b1b5903dd437cc2376","name":"Thomas Müller","hidden":false},{"_id":"67c912b1b5903dd437cc2377","name":"Alexander Keller","hidden":false},{"_id":"67c912b1b5903dd437cc2378","name":"Sanja Fidler","hidden":false},{"_id":"67c912b1b5903dd437cc2379","name":"Jun Gao","hidden":false}],"publishedAt":"2025-03-05T18:59:50.000Z","submittedOnDailyAt":"2025-03-06T00:43:22.552Z","title":"GEN3C: 3D-Informed World-Consistent Video Generation with Precise Camera\n Control","submittedOnDailyBy":{"_id":"60f1abe7544c2adfd699860c","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg","isPro":false,"fullname":"AK","user":"akhaliq","type":"user"},"summary":"We present GEN3C, a generative video model with precise Camera Control and\ntemporal 3D Consistency. Prior video models already generate realistic videos,\nbut they tend to leverage little 3D information, leading to inconsistencies,\nsuch as objects popping in and out of existence. Camera control, if implemented\nat all, is imprecise, because camera parameters are mere inputs to the neural\nnetwork which must then infer how the video depends on the camera. In contrast,\nGEN3C is guided by a 3D cache: point clouds obtained by predicting the\npixel-wise depth of seed images or previously generated frames. When generating\nthe next frames, GEN3C is conditioned on the 2D renderings of the 3D cache with\nthe new camera trajectory provided by the user. Crucially, this means that\nGEN3C neither has to remember what it previously generated nor does it have to\ninfer the image structure from the camera pose. The model, instead, can focus\nall its generative power on previously unobserved regions, as well as advancing\nthe scene state to the next frame. Our results demonstrate more precise camera\ncontrol than prior work, as well as state-of-the-art results in sparse-view\nnovel view synthesis, even in challenging settings such as driving scenes and\nmonocular dynamic video. Results are best viewed in videos. Check out our\nwebpage! 
https://research.nvidia.com/labs/toronto-ai/GEN3C/","upvotes":24,"discussionId":"67c912b9b5903dd437cc2505","githubRepo":"https://github.com/nv-tlabs/GEN3C","githubRepoAddedBy":"auto","ai_summary":"GEN3C, a generative video model, uses a 3D cache to achieve precise camera control and temporal consistency in video generation, outperforming previous methods in sparse-view novel view synthesis.","ai_keywords":["generative video model","camera control","temporal 3D consistency","point clouds","pixel-wise depth","3D cache","2D renderings","camera trajectory","sparse-view novel view synthesis","driving scenes","monocular dynamic video"],"githubStars":1264},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"63d4c8ce13ae45b780792f32","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/63d4c8ce13ae45b780792f32/QasegimoxBqfZwDzorukz.png","isPro":false,"fullname":"Ohenenoo","user":"PeepDaSlan9","type":"user"},{"_id":"637c7503fe115289cfecbe6b","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1676361945047-637c7503fe115289cfecbe6b.jpeg","isPro":false,"fullname":"Wenhao Chai","user":"wchai","type":"user"},{"_id":"66f612b934b8ac9ffa44f084","avatarUrl":"/avatars/6836c122e19c66c90f1673f28b30d7f0.svg","isPro":false,"fullname":"Tang","user":"tommysally","type":"user"},{"_id":"635022a614fb199c76581e3b","avatarUrl":"/avatars/a3ac1033cec679c66f706b0ae320ea0c.svg","isPro":true,"fullname":"Jonathan Clark","user":"JC-Hexa","type":"user"},{"_id":"6058a23b5ab91954363a6511","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6058a23b5ab91954363a6511/wam6ertF3GgNdsxLXuP38.png","isPro":false,"fullname":"Sukesh Perla","user":"hitchhiker3010","type":"user"},{"_id":"6683fc5344a65be1aab25dc0","avatarUrl":"/avatars/e13cde3f87b59e418838d702807df3b5.svg","isPro":false,"fullname":"hjkim","user":"hojie11","type":"user"},{"_id":"634dffc49b777beec3bc6448","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1670144568552-634dffc49b777beec3bc6448.jpeg","isPro":false,"fullname":"Zhipeng Yang","user":"svjack","type":"user"},{"_id":"648eb1eb59c4e5c87dc116e0","avatarUrl":"/avatars/c636cea39c2c0937f01398c94ead5dad.svg","isPro":false,"fullname":"fdsqefsgergd","user":"T-representer","type":"user"},{"_id":"66f6748404f2d5ae979663fe","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/tWN9twcAuVWKmTUsZVkV4.png","isPro":false,"fullname":"Alina Belko","user":"alinabelko","type":"user"},{"_id":"650c8bfb3d3542884da1a845","avatarUrl":"/avatars/863a5deebf2ac6d4faedc4dd368e0561.svg","isPro":false,"fullname":"Adhurim ","user":"Limi07","type":"user"},{"_id":"6350884a0f376d3c482bda54","avatarUrl":"/avatars/824c49aa2fb2e85801c001e2843c0576.svg","isPro":false,"fullname":"Wei Yu","user":"yuweiao","type":"user"},{"_id":"665b133508d536a8ac804f7d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/Uwi0OnANdTbRbHHQvGqvR.png","isPro":false,"fullname":"Paulson","user":"Pnaomi","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0}">

AI-generated summary
GEN3C, a generative video model, uses a 3D cache to achieve precise camera control and temporal consistency in video generation, outperforming previous methods in sparse-view novel view synthesis.

Abstract
We present GEN3C, a generative video model with precise Camera Control and
temporal 3D Consistency. Prior video models already generate realistic videos,
but they tend to leverage little 3D information, leading to inconsistencies,
such as objects popping in and out of existence. Camera control, if implemented
at all, is imprecise, because camera parameters are mere inputs to the neural
network which must then infer how the video depends on the camera. In contrast,
GEN3C is guided by a 3D cache: point clouds obtained by predicting the
pixel-wise depth of seed images or previously generated frames. When generating
the next frames, GEN3C is conditioned on the 2D renderings of the 3D cache with
the new camera trajectory provided by the user. Crucially, this means that
GEN3C neither has to remember what it previously generated nor does it have to
infer the image structure from the camera pose. The model, instead, can focus
all its generative power on previously unobserved regions, as well as advancing
the scene state to the next frame. Our results demonstrate more precise camera
control than prior work, as well as state-of-the-art results in sparse-view
novel view synthesis, even in challenging settings such as driving scenes and
monocular dynamic video. Results are best viewed in videos. Check out our
webpage! https://research.nvidia.com/labs/toronto-ai/GEN3C/