Paper page - SemanticGen: Video Generation in Semantic Space
Comments

jianhongbai (2025-12-24): https://jianhongbai.github.io/SemanticGen/
grantsing (2025-12-24): arXiv explained breakdown of this paper: https://arxivexplained.com/papers/semanticgen-video-generation-in-semantic-space
\n","updatedAt":"2025-12-24T21:02:03.295Z","author":{"_id":"65d9fc2a0e6ad24551d87a1e","avatarUrl":"/avatars/3aedb9522cc3cd08349d654f523fd792.svg","fullname":"Grant Singleton","name":"grantsing","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":4,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.6244381666183472},"editors":["grantsing"],"editorAvatarUrls":["/avatars/3aedb9522cc3cd08349d654f523fd792.svg"],"reactions":[],"isReport":false}},{"id":"694f0f39329f48253242bf83","author":{"_id":"65243980050781c16f234f1f","avatarUrl":"/avatars/743a009681d5d554c27e04300db9f267.svg","fullname":"Avi","name":"avahal","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":3,"isUserFollowing":false},"createdAt":"2025-12-26T22:42:01.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"arXiv lens breakdown of this paper ๐ https://arxivlens.com/PaperView/Details/semanticgen-video-generation-in-semantic-space-9853-92c3f4c3\n- Executive Summary\n- Detailed Breakdown\n- Practical Applications","html":"
\n","updatedAt":"2025-12-26T22:42:01.236Z","author":{"_id":"65243980050781c16f234f1f","avatarUrl":"/avatars/743a009681d5d554c27e04300db9f267.svg","fullname":"Avi","name":"avahal","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":3,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.6659800410270691},"editors":["avahal"],"editorAvatarUrls":["/avatars/743a009681d5d554c27e04300db9f267.svg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2512.20619","authors":[{"_id":"694b614d746a34b55dd53d1a","name":"Jianhong Bai","hidden":false},{"_id":"694b614d746a34b55dd53d1b","name":"Xiaoshi Wu","hidden":false},{"_id":"694b614d746a34b55dd53d1c","name":"Xintao Wang","hidden":false},{"_id":"694b614d746a34b55dd53d1d","name":"Fu Xiao","hidden":false},{"_id":"694b614d746a34b55dd53d1e","name":"Yuanxing Zhang","hidden":false},{"_id":"694b614d746a34b55dd53d1f","user":{"_id":"646f3418a6a58aa29505fd30","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/646f3418a6a58aa29505fd30/1z13rnpb6rsUgQsYumWPg.png","isPro":false,"fullname":"QINGHE WANG","user":"Qinghew","type":"user"},"name":"Qinghe Wang","status":"claimed_verified","statusLastChangedAt":"2025-12-25T20:45:45.478Z","hidden":false},{"_id":"694b614d746a34b55dd53d20","name":"Xiaoyu Shi","hidden":false},{"_id":"694b614d746a34b55dd53d21","name":"Menghan Xia","hidden":false},{"_id":"694b614d746a34b55dd53d22","name":"Zuozhu Liu","hidden":false},{"_id":"694b614d746a34b55dd53d23","name":"Haoji Hu","hidden":false},{"_id":"694b614d746a34b55dd53d24","name":"Pengfei Wan","hidden":false},{"_id":"694b614d746a34b55dd53d25","name":"Kun Gai","hidden":false}],"mediaUrls":["https://cdn-uploads.huggingface.co/production/uploads/6530bf50f145530101ec03a2/amGfUgsGwtKhlryqSDYnU.png"],"publishedAt":"2025-12-23T18:59:56.000Z","submittedOnDailyAt":"2025-12-24T01:20:51.117Z","title":"SemanticGen: Video Generation in Semantic Space","submittedOnDailyBy":{"_id":"6530bf50f145530101ec03a2","avatarUrl":"/avatars/c61c00c314cf202b64968e51e855694d.svg","isPro":false,"fullname":"Jianhong Bai","user":"jianhongbai","type":"user"},"summary":"State-of-the-art video generative models typically learn the distribution of video latents in the VAE space and map them to pixels using a VAE decoder. While this approach can generate high-quality videos, it suffers from slow convergence and is computationally expensive when generating long videos. In this paper, we introduce SemanticGen, a novel solution to address these limitations by generating videos in the semantic space. Our main insight is that, due to the inherent redundancy in videos, the generation process should begin in a compact, high-level semantic space for global planning, followed by the addition of high-frequency details, rather than directly modeling a vast set of low-level video tokens using bi-directional attention. SemanticGen adopts a two-stage generation process. In the first stage, a diffusion model generates compact semantic video features, which define the global layout of the video. In the second stage, another diffusion model generates VAE latents conditioned on these semantic features to produce the final output. We observe that generation in the semantic space leads to faster convergence compared to the VAE latent space. Our method is also effective and computationally efficient when extended to long video generation. 
Extensive experiments demonstrate that SemanticGen produces high-quality videos and outperforms state-of-the-art approaches and strong baselines.","upvotes":93,"discussionId":"694b614d746a34b55dd53d26","projectPage":"https://jianhongbai.github.io/SemanticGen/","ai_summary":"SemanticGen addresses slow convergence and computational costs in video generation by using a two-stage diffusion model approach that first generates semantic features and then VAE latents, leading to faster convergence and high-quality results.","ai_keywords":["VAE space","VAE decoder","semantic space","diffusion model","semantic video features","bi-directional attention"],"organization":{"_id":"662c559b322afcbae51b3c8b","name":"KlingTeam","fullname":"Kling Team","avatar":"https://cdn-uploads.huggingface.co/production/uploads/60e272ca6c78a8c122b12127/ZQV1aKLUDPf2rUcxxAqj6.jpeg"}},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"6530bf50f145530101ec03a2","avatarUrl":"/avatars/c61c00c314cf202b64968e51e855694d.svg","isPro":false,"fullname":"Jianhong Bai","user":"jianhongbai","type":"user"},{"_id":"646f3418a6a58aa29505fd30","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/646f3418a6a58aa29505fd30/1z13rnpb6rsUgQsYumWPg.png","isPro":false,"fullname":"QINGHE WANG","user":"Qinghew","type":"user"},{"_id":"620783f24e28382272337ba4","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/620783f24e28382272337ba4/zkUveQPNiDfYjgGhuFErj.jpeg","isPro":false,"fullname":"GuoLiangTang","user":"Tommy930","type":"user"},{"_id":"64c139d867eff857ea51caa8","avatarUrl":"/avatars/4b7b3f41c2e2cfa21dd43bbac6e081ae.svg","isPro":false,"fullname":"Shengqiong Wu","user":"ChocoWu","type":"user"},{"_id":"6687f9a71309e08b1f84bdc6","avatarUrl":"/avatars/f947ec9fe620ae4cffa83b371acdd571.svg","isPro":false,"fullname":"MeiYi","user":"natalie5","type":"user"},{"_id":"6672937ceac0fb1b9e516595","avatarUrl":"/avatars/5eea5657016572f60b0ecd0fa9a7dae4.svg","isPro":false,"fullname":"haoran he","user":"haoranhe","type":"user"},{"_id":"661a59ff8858a270e6ad4481","avatarUrl":"/avatars/40f1a62699795a83f2f521641effa8b1.svg","isPro":false,"fullname":"Zhenhao Yang","user":"Jeffrey-0711","type":"user"},{"_id":"641af5fcf902cc42730b47e2","avatarUrl":"/avatars/73ac99dec226f0e814a16d2f1dbfbce8.svg","isPro":false,"fullname":"Xiaoyu Shi","user":"btwbtm","type":"user"},{"_id":"645aff5121ab438e732c47c1","avatarUrl":"/avatars/23b2a853139b0f2ae1fa88e2bd4e0056.svg","isPro":false,"fullname":"Zhengyao Lv","user":"cszy98","type":"user"},{"_id":"66743477ab975c859114d410","avatarUrl":"/avatars/ac692cc336e383fb2cb53db6d1e3fe8c.svg","isPro":false,"fullname":"yawenluo","user":"yawenluo","type":"user"},{"_id":"6732119dc1f20c742bcf2e90","avatarUrl":"/avatars/62a7f0804b7918d5ef92d13ebd975aa0.svg","isPro":false,"fullname":"XuYulong","user":"UniDra","type":"user"},{"_id":"64241749a05235e2f8d34cb0","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/64241749a05235e2f8d34cb0/o6CY4xS22W8_DIqesFykM.jpeg","isPro":false,"fullname":"Yuanxing Zhang","user":"LongoXC","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":1,"organization":{"_id":"662c559b322afcbae51b3c8b","name":"KlingTeam","fullname":"Kling Team","avatar":"https://cdn-uploads.huggingface.co/production/uploads/60e272ca6c78a8c122b12127/ZQV1aKLUDPf2rUcxxAqj6.jpeg"}}">
AI-generated summary

SemanticGen addresses slow convergence and computational costs in video generation by using a two-stage diffusion model approach that first generates semantic features and then VAE latents, leading to faster convergence and high-quality results.
Abstract

State-of-the-art video generative models typically learn the distribution of video latents in the VAE space and map them to pixels using a VAE decoder. While this approach can generate high-quality videos, it suffers from slow convergence and is computationally expensive when generating long videos. In this paper, we introduce SemanticGen, a novel solution to address these limitations by generating videos in the semantic space. Our main insight is that, due to the inherent redundancy in videos, the generation process should begin in a compact, high-level semantic space for global planning, followed by the addition of high-frequency details, rather than directly modeling a vast set of low-level video tokens using bi-directional attention. SemanticGen adopts a two-stage generation process. In the first stage, a diffusion model generates compact semantic video features, which define the global layout of the video. In the second stage, another diffusion model generates VAE latents conditioned on these semantic features to produce the final output. We observe that generation in the semantic space leads to faster convergence compared to the VAE latent space. Our method is also effective and computationally efficient when extended to long video generation. Extensive experiments demonstrate that SemanticGen produces high-quality videos and outperforms state-of-the-art approaches and strong baselines.
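The two-stage pipeline in the abstract is easy to picture in code. Below is a minimal, hypothetical PyTorch sketch: a first diffusion model samples a small set of semantic tokens, and a second samples the much larger set of VAE latents conditioned on them. The `ToyDenoiser`, the Euler rectified-flow sampler, the token counts, and the mean-pooled conditioning are all illustrative assumptions, not the authors' implementation; the paper's actual models are large video diffusion models, so this sketch mirrors only the structure, not the method.

```python
# Hypothetical sketch of a two-stage "semantic-first" sampler, loosely
# following the SemanticGen abstract. All shapes, modules, and the
# sampler choice are illustrative assumptions.
import torch
import torch.nn as nn


class ToyDenoiser(nn.Module):
    """Stand-in velocity predictor; the paper uses large diffusion models."""

    def __init__(self, dim: int, cond_dim: int = 0):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + cond_dim + 1, 256), nn.SiLU(), nn.Linear(256, dim)
        )

    def forward(self, x, t, cond=None):
        # x: (B, N, dim) tokens; t: (B,) time in [0, 1]; cond: (B, M, cond_dim)
        t_emb = t[:, None, None].expand(-1, x.shape[1], 1)
        if cond is not None:
            # Crude conditioning: mean-pool the semantic tokens and broadcast.
            c = cond.mean(dim=1, keepdim=True).expand(-1, x.shape[1], -1)
            x_in = torch.cat([x, c, t_emb], dim=-1)
        else:
            x_in = torch.cat([x, t_emb], dim=-1)
        return self.net(x_in)


@torch.no_grad()
def euler_sample(model, shape, cond=None, steps=50):
    """Plain Euler integration of a rectified-flow ODE from noise to data."""
    x = torch.randn(shape)
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((shape[0],), i / steps)
        v = model(x, t, cond)  # predicted velocity toward the data manifold
        x = x + v * dt
    return x


# Stage 1: sample a *compact* semantic token sequence (global layout).
sem_tokens, sem_dim = 64, 32        # few tokens: semantic space is compact
stage1 = ToyDenoiser(sem_dim)
semantics = euler_sample(stage1, (1, sem_tokens, sem_dim))

# Stage 2: sample the much larger set of VAE latents, conditioned on semantics.
vae_tokens, vae_dim = 1024, 16      # many tokens: low-level detail lives here
stage2 = ToyDenoiser(vae_dim, cond_dim=sem_dim)
vae_latents = euler_sample(stage2, (1, vae_tokens, vae_dim), cond=semantics)
print(vae_latents.shape)            # a VAE decoder would map these to pixels
```

The asymmetry in token counts (64 semantic tokens versus 1024 VAE-latent tokens in this toy setup) is the point of the design: stage 1 does global planning over a short sequence, which is where the claimed faster convergence and long-video efficiency would come from.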