SparseFlex: High-Resolution and Arbitrary-Topology 3D Shape Modeling

Xianglong He, Zi-Xin Zou, Chia-Hao Chen, Yuan-Chen Guo, Ding Liang, Chun Yuan, Wanli Ouyang, Yan-Pei Cao, Yangguang Li

Published: 2025-03-27 · arXiv: 2503.21732
Project page: https://xianglonghe.github.io/TripoSF/index.html
Code: https://github.com/VAST-AI-Research/TripoSF
AI-generated summary

SparseFlex enables high-resolution 3D mesh reconstruction and generation, achieving superior accuracy and efficiency using a sparse voxel structure and rendering losses.

Abstract
Creating high-fidelity 3D meshes with arbitrary topology, including open surfaces and complex interiors, remains a significant challenge. Existing implicit field methods often require costly and detail-degrading watertight conversion, while other approaches struggle with high resolutions. This paper introduces SparseFlex, a novel sparse-structured isosurface representation that enables differentiable mesh reconstruction at resolutions up to 1024^3 directly from rendering losses.

SparseFlex combines the accuracy of Flexicubes with a sparse voxel structure, focusing computation on surface-adjacent regions and efficiently handling open surfaces.
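To make the sparsity concrete, here is a minimal sketch (not the authors' implementation) of how surface-adjacent voxel activation can work: quantize points sampled on the surface into a 1024^3 grid and keep only the occupied cells, so storage scales with surface area rather than volume. All function and variable names below are illustrative assumptions.

```python
import numpy as np

def active_voxels(surface_points: np.ndarray, resolution: int = 1024) -> np.ndarray:
    """Return the unique integer coordinates of voxels touching the surface.

    surface_points: (N, 3) points in [-1, 1]^3 sampled on the shape's surface.
    """
    # Map [-1, 1] to [0, resolution) and clip to stay inside the grid.
    idx = ((surface_points + 1.0) * 0.5 * resolution).astype(np.int64)
    idx = np.clip(idx, 0, resolution - 1)
    # Deduplicate: memory now scales with surface area, not with resolution^3.
    return np.unique(idx, axis=0)

# Placeholder surface: points on a sphere of radius 0.8.
pts = np.random.randn(100_000, 3)
pts = 0.8 * pts / np.linalg.norm(pts, axis=1, keepdims=True)
print(active_voxels(pts).shape)  # (num_active_voxels, 3) -- far fewer than 1024^3 cells
```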
Crucially, we introduce a frustum-aware sectional voxel training strategy that activates only relevant voxels during rendering, dramatically reducing memory consumption and enabling high-resolution training. This also allows, for the first time, the reconstruction of mesh interiors using only rendering supervision.
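A hedged sketch of the frustum-aware idea, under standard pinhole-camera assumptions (this is not the paper's code): before rendering a view, keep only the voxels whose centers project into the image between the near and far planes, so memory and gradients are spent only where that view can supervise. Sweeping the near plane through the shape is one plausible way a "sectional" variant could expose interior geometry to rendering supervision.

```python
import numpy as np

def voxels_in_frustum(centers, K, world_to_cam, width, height, near=0.1, far=4.0):
    """centers: (N, 3) voxel centers in world space; K: (3, 3) pinhole intrinsics;
    world_to_cam: (4, 4) extrinsic matrix. Returns the subset of centers visible
    in this view's frustum section."""
    # Transform voxel centers into camera space.
    homo = np.concatenate([centers, np.ones((len(centers), 1))], axis=1)
    cam = (world_to_cam @ homo.T).T[:, :3]
    z = cam[:, 2]
    # Project with the pinhole model; guard against division by ~zero depth.
    uv = (K @ cam.T).T
    uv = uv[:, :2] / np.maximum(uv[:, 2:3], 1e-8)
    # Keep voxels inside the image bounds and the [near, far] depth section.
    inside = (
        (z > near) & (z < far)
        & (uv[:, 0] >= 0) & (uv[:, 0] < width)
        & (uv[:, 1] >= 0) & (uv[:, 1] < height)
    )
    return centers[inside]
```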
Building upon this, we demonstrate a complete shape modeling pipeline by training a variational autoencoder (VAE) and a rectified flow transformer for high-quality 3D shape generation.
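For the generation stage, here is a minimal sketch of rectified-flow sampling in the VAE's latent space; `velocity_model` stands in for the trained rectified flow transformer, and its call signature is an assumption rather than the paper's API. A learned velocity field is integrated from Gaussian noise at t = 0 to a shape latent at t = 1 with plain Euler steps, after which the VAE decoder would turn the latent into a SparseFlex field.

```python
import torch

@torch.no_grad()
def sample_latent(velocity_model, latent_shape, steps=50, device="cpu"):
    x = torch.randn(latent_shape, device=device)  # start from pure Gaussian noise
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((latent_shape[0],), i * dt, device=device)
        x = x + dt * velocity_model(x, t)          # Euler step along dx/dt = v(x, t)
    return x                                       # decode with the VAE afterwards
```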
Our experiments show state-of-the-art reconstruction accuracy, with a ~82% reduction in Chamfer Distance and a ~88% increase in F-score compared to previous methods, and demonstrate the generation of high-resolution, detailed 3D shapes with arbitrary topology.
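For reference, the (squared) Chamfer Distance behind the ~82% figure is the standard two-sided nearest-neighbor average; a brute-force version, fine for a few thousand sampled points, looks like this (the paper's exact evaluation protocol may differ):

```python
import numpy as np

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """a: (N, 3), b: (M, 3) points sampled from the two surfaces being compared."""
    # Pairwise squared distances, shape (N, M); O(N*M) memory, so keep N, M modest.
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    # Average nearest-neighbor distance in both directions.
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())
```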
By enabling high-resolution, differentiable mesh reconstruction and generation with rendering losses, SparseFlex significantly advances the state of the art in 3D shape representation and modeling.