
Papers
arxiv:2507.12465

PhysX: Physical-Grounded 3D Asset Generation

Published on Jul 16, 2025
· Submitted by Ziang Cao on Jul 17, 2025
#2 Paper of the day
Authors: Ziang Cao, Zhaoxi Chen, Liang Pan, Ziwei Liu

Abstract

PhysX addresses the gap in physical-grounded 3D asset generation by introducing PhysXNet, a physics-annotated dataset, and PhysXGen, a feed-forward framework that integrates physical knowledge into 3D generation.

AI-generated summary

3D modeling is moving from virtual to physical. Existing 3D generation primarily emphasizes geometries and textures while neglecting physical-grounded modeling. Consequently, despite the rapid development of 3D generative models, the synthesized 3D assets often overlook rich and important physical properties, hampering their real-world application in physical domains like simulation and embodied AI. As an initial attempt to address this challenge, we propose PhysX, an end-to-end paradigm for physical-grounded 3D asset generation. 1) To bridge the critical gap in physics-annotated 3D datasets, we present PhysXNet - the first physics-grounded 3D dataset systematically annotated across five foundational dimensions: absolute scale, material, affordance, kinematics, and function description. In particular, we devise a scalable human-in-the-loop annotation pipeline based on vision-language models, which enables efficient creation of physics-first assets from raw 3D assets. 2) Furthermore, we propose PhysXGen, a feed-forward framework for physics-grounded image-to-3D asset generation, injecting physical knowledge into the pre-trained 3D structural space. Specifically, PhysXGen employs a dual-branch architecture to explicitly model the latent correlations between 3D structures and physical properties, thereby producing 3D assets with plausible physical predictions while preserving the native geometry quality. Extensive experiments validate the superior performance and promising generalization capability of our framework. All the code, data, and models will be released to facilitate future research in generative physical AI.
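To make the five annotation dimensions concrete, here is a minimal sketch of what a single PhysXNet record might look like. The field names, types, and the `PartKinematics` helper are illustrative assumptions based on the abstract, not the dataset's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class PartKinematics:
    """Hypothetical per-part articulation annotation."""
    joint_type: str     # e.g. "revolute", "prismatic", "fixed"
    axis: tuple         # joint axis in the asset's local frame
    motion_range: tuple # (min, max) travel, in radians or meters

@dataclass
class PhysXNetRecord:
    """Illustrative record covering the five annotated dimensions."""
    asset_id: str
    absolute_scale_m: float                           # real-world size in meters
    material: str                                     # e.g. "wood", "steel", "plastic"
    affordance: list = field(default_factory=list)    # e.g. ["graspable", "openable"]
    kinematics: list = field(default_factory=list)    # list of PartKinematics
    function_description: str = ""                    # free-text functional description
```

Likewise, the dual-branch design in PhysXGen can be pictured as two coupled heads over a shared latent: one for the pre-trained 3D structural space and one for physical properties, with the physics head conditioned on structural features to capture their correlation. The module below is a hypothetical PyTorch sketch of that idea, not the authors' implementation; all dimensions are made up.

```python
import torch
import torch.nn as nn

class DualBranchHead(nn.Module):
    """Hypothetical sketch: decode geometry and physics from a shared latent,
    letting the physics branch see structural features."""
    def __init__(self, latent_dim=512, geo_dim=256, phys_dim=64):
        super().__init__()
        self.geo_branch = nn.Sequential(nn.Linear(latent_dim, geo_dim), nn.GELU())
        self.phys_branch = nn.Sequential(nn.Linear(latent_dim + geo_dim, phys_dim), nn.GELU())

    def forward(self, z):
        geo = self.geo_branch(z)                              # structural features (geometry path preserved)
        phys = self.phys_branch(torch.cat([z, geo], dim=-1))  # physics conditioned on structure
        return geo, phys
```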

Community

Paper author · Paper submitter

This dataset aims to bridge the critical gap in physics-annotated 3D datasets. It is the first physics-grounded 3D dataset systematically annotated across five foundational dimensions: absolute scale, material, affordance, kinematics, and function description.
Project page: https://physx-3d.github.io/
Code: https://github.com/ziangcao0312/PhysX
Demo video: https://www.youtube.com/watch?v=M5V_c0Duuy4&feature=youtu.be
Dataset: https://huggingface.co/datasets/Caoza/PhysX
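For readers who want to inspect the data, the Hub repo linked above can presumably be pulled with the standard `datasets` API. Whether `load_dataset` works directly, and what the splits and columns are named, depends on how the repo is laid out, so treat this as a best-guess sketch.

```python
from datasets import load_dataset

# Repo id taken from the dataset link above; split name is an assumption.
ds = load_dataset("Caoza/PhysX", split="train")
print(ds[0])  # inspect one physics-annotated record
```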


Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2507.12465 in a model README.md to link it from this page.

Datasets citing this paper 1

Caoza/PhysX

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2507.12465 in a Space README.md to link it from this page.

Collections including this paper 3