Paper page - SkeletonGaussian: Editable 4D Generation through Gaussian Skeletonization
arxiv:2602.04271

SkeletonGaussian: Editable 4D Generation through Gaussian Skeletonization

Published on Feb 4 · Submitted by Ruijie Zhu on Feb 5
Authors: Lifan Wu, Ruijie Zhu, Yubo Ai, Tianzhu Zhang
Abstract

4D generation has made remarkable progress in synthesizing dynamic 3D objects from input text, images, or videos. However, existing methods often represent motion as an implicit deformation field, which limits direct control and editability. To address this issue, we propose SkeletonGaussian, a novel framework for generating editable dynamic 3D Gaussians from monocular video input. Our approach introduces a hierarchical articulated representation that decomposes motion into sparse rigid motion explicitly driven by a skeleton and fine-grained non-rigid motion. Concretely, we extract a robust skeleton and drive rigid motion via linear blend skinning, followed by a hexplane-based refinement for non-rigid deformations, enhancing interpretability and editability. Experimental results demonstrate that SkeletonGaussian surpasses existing methods in generation quality while enabling intuitive motion editing, establishing a new paradigm for editable 4D generation. Project page: https://wusar.github.io/projects/skeletongaussian/

AI-generated summary

SkeletonGaussian enables editable 4D generation by decomposing motion into rigid skeleton-driven and non-rigid fine-grained components using hexplane-based refinement.
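
The two-stage motion decomposition in the abstract can be made concrete with a short sketch. Below is a minimal NumPy illustration of the rigid stage, linear blend skinning (LBS) over Gaussian centers: each Gaussian blends per-bone rigid transforms by its skinning weights. All names, shapes, and the toy example are illustrative assumptions, not the paper's actual code.

```python
# Minimal LBS sketch for the rigid stage described in the abstract.
# Names and shapes are illustrative assumptions, not the paper's API.
import numpy as np

def lbs_deform(centers, skin_weights, bone_transforms):
    """Deform N Gaussian centers with K skeleton bones.

    centers:         (N, 3) rest-pose Gaussian means
    skin_weights:    (N, K) per-Gaussian weights, each row sums to 1
    bone_transforms: (K, 4, 4) rigid transform per bone
    """
    n = centers.shape[0]
    homo = np.concatenate([centers, np.ones((n, 1))], axis=1)  # (N, 4)
    # Blend the K bone transforms per Gaussian: (N, 4, 4)
    blended = np.einsum("nk,kij->nij", skin_weights, bone_transforms)
    # Apply each blended transform to its own Gaussian center
    return np.einsum("nij,nj->ni", blended, homo)[:, :3]

# Toy usage: bone 0 is identity, bone 1 translates +0.5 along x.
centers = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
weights = np.array([[1.0, 0.0], [0.0, 1.0]])
bones = np.stack([np.eye(4), np.eye(4)])
bones[1, 0, 3] = 0.5
print(lbs_deform(centers, weights, bones))  # [[0. 0. 0.] [1.5 0. 0.]]
```

Because the bones are explicit, pose editing reduces to changing bone_transforms and re-running the blend, which is what makes the representation editable. The non-rigid stage then adds a small residual on top of the LBS output, looked up from a hexplane: six 2D feature planes (xy, xz, yz, xt, yt, zt) indexed by the 4D point (x, y, z, t). The sketch below uses nearest-neighbor sampling, element-wise product fusion, and a linear decoder purely for brevity; the paper's actual resolutions, interpolation, and decoder may differ.

```python
# Toy hexplane lookup for the non-rigid refinement stage.
# Nearest-neighbor sampling, product fusion, and a linear decoder are
# simplifications for illustration, not the paper's exact design.
import numpy as np

PLANE_AXES = [(0, 1), (0, 2), (1, 2), (0, 3), (1, 3), (2, 3)]  # xy..zt

class HexPlaneField:
    def __init__(self, res=32, feat_dim=8, seed=0):
        rng = np.random.default_rng(seed)
        # Six learned (res, res, feat_dim) feature planes.
        self.planes = [rng.normal(0.0, 0.01, (res, res, feat_dim))
                       for _ in PLANE_AXES]
        self.res = res
        # Tiny linear decoder: fused features -> xyz residual offset.
        self.decoder = rng.normal(0.0, 0.01, (feat_dim, 3))

    def query(self, pts4d):
        """pts4d: (N, 4) points (x, y, z, t) in [0, 1]. Returns (N, 3)."""
        idx = np.clip(np.round(pts4d * (self.res - 1)).astype(int),
                      0, self.res - 1)
        fused = 1.0
        for plane, (a, b) in zip(self.planes, PLANE_AXES):
            fused = fused * plane[idx[:, a], idx[:, b]]  # (N, feat_dim)
        return fused @ self.decoder  # residual added after the LBS stage
```

In this factorization the final position is the LBS output plus the hexplane residual, so coarse skeleton edits stay consistent with the learned fine deformation.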

Community

Paper author / Paper submitter • edited 12 days ago

🚀 Introducing SkeletonGaussian: Editable 4D Generation through Gaussian Skeletonization!
(Accepted by CVM 2026)

✨ Generate dynamic 3D Gaussians from text, images, or videos
🦴 Explicit skeleton-driven motion enables intuitive pose editing
🎯 Higher visual quality + better motion fidelity than prior 4D methods

A new step toward controllable, editable 4D generation.
Project page: https://wusar.github.io/projects/skeletongaussian/
Arxiv: https://arxiv.org/abs/2602.04271
Code: https://github.com/wusar/SkeletonGaussian

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

* [SV-GS: Sparse View 4D Reconstruction with Skeleton-Driven Gaussian Splatting](https://huggingface.co/papers/2601.00285) (2026)
* [AnimaMimic: Imitating 3D Animation from Video Priors](https://huggingface.co/papers/2512.14133) (2025)
* [Blur2Sharp: Human Novel Pose and View Synthesis with Generative Prior Refinement](https://huggingface.co/papers/2512.08215) (2025)
* [CAMO: Category-Agnostic 3D Motion Transfer from Monocular 2D Videos](https://huggingface.co/papers/2601.02716) (2026)
* [Motion 3-to-4: 3D Motion Reconstruction for 4D Synthesis](https://huggingface.co/papers/2601.14253) (2026)
* [3DProxyImg: Controllable 3D-Aware Animation Synthesis from Single Image via 2D-3D Aligned Proxy Embedding](https://huggingface.co/papers/2512.15126) (2025)
* [MoCapAnything: Unified 3D Motion Capture for Arbitrary Skeletons from Monocular Videos](https://huggingface.co/papers/2512.10881) (2025)

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any Paper on Hugging Face, check out [this Space](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers).

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`

arXivLens breakdown of this paper 👉 https://arxivlens.com/PaperView/Details/skeletongaussian-editable-4d-generation-through-gaussian-skeletonization-670-f121ccda

  • Executive Summary
  • Detailed Breakdown
  • Practical Applications


Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2602.04271 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2602.04271 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2602.04271 in a Space README.md to link it from this page.

Collections including this paper 1