mmlab-ntu (MMLab@NTU)

MMLab@NTU was formed on 1 August 2018, with a research focus on computer vision and deep learning. Its sister lab is MMLab@CUHK. It is now a group with three faculty members and more than 40 members, including research fellows, research assistants, and PhD students.


Members of MMLab@NTU conduct research primarily in low-level vision, image and video understanding, creative content creation, and 3D scene understanding and reconstruction. Have a look at the overview of our research. All publications are listed here.


We are always looking for motivated PhD students, postdocs, and research assistants who share our research interests. Check out the careers page and follow us on Twitter.

\n","classNames":"hf-sanitized hf-sanitized-ziixdjMk1DFUDSFV71g0w"},"users":[{"_id":"62b5777f593a2c49da69dc02","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1658152070753-62b5777f593a2c49da69dc02.jpeg","isPro":false,"fullname":"Jingkang Yang","user":"Jingkang","type":"user"},{"_id":"62e57662ae9d3f10acbb1b1b","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/62e57662ae9d3f10acbb1b1b/lg58jdbNyv6LGH2LFnZDF.png","isPro":false,"fullname":"Shangchen Zhou","user":"sczhou","type":"user"},{"_id":"62b67da0f56de4396ca9e44b","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1658586059273-62b67da0f56de4396ca9e44b.jpeg","isPro":false,"fullname":"Liangyu Chen","user":"liangyuch","type":"user"},{"_id":"629c95b7a5d6f5fe10e6ed45","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/629c95b7a5d6f5fe10e6ed45/Sy0Ype5snsRookID-gsSm.jpeg","isPro":false,"fullname":"Yuming Jiang","user":"yumingj","type":"user"},{"_id":"62d3f7d84b0933c48f3cdd9c","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/62d3f7d84b0933c48f3cdd9c/Tab1vxtxLatWzXS8NVIyo.png","isPro":true,"fullname":"Bo Li","user":"luodian","type":"user"},{"_id":"62fc8cf7ee999004b5a8b982","avatarUrl":"/avatars/6c5dda9e58747054a989f077a078f3dc.svg","isPro":false,"fullname":"Zhaoxi Chen","user":"FrozenBurning","type":"user"},{"_id":"62df78222d89ce551ce0f71d","avatarUrl":"/avatars/89fba294cff2d2f941d121c1923e4c76.svg","isPro":false,"fullname":"Lingdong Kong","user":"ldkong","type":"user"},{"_id":"62ab1ac1d48b4d8b048a3473","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1656826685333-62ab1ac1d48b4d8b048a3473.png","isPro":false,"fullname":"Ziwei Liu","user":"liuziwei7","type":"user"},{"_id":"62fcf93cc1588e1d4c66453e","avatarUrl":"/avatars/5ebf58aca34e7eb7cb56d1188118e7ed.svg","isPro":false,"fullname":"Jiahao Xie","user":"Jiahao000","type":"user"},{"_id":"623c530013a63ea865f96c8e","avatarUrl":"/avatars/164455a1a94f92b71733fc778c21bd89.svg","isPro":false,"fullname":"Fangzhou Hong","user":"hongfz16","type":"user"},{"_id":"63023b6ab002e9a4a2152890","avatarUrl":"/avatars/cae8ba0a8d61fb4e576934431f43991b.svg","isPro":false,"fullname":"Haonan Qiu","user":"MoonQiu","type":"user"},{"_id":"63043db17373aacccd89f49d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/63043db17373aacccd89f49d/jzP_fPCFXeYJvAD8uA_N7.jpeg","isPro":false,"fullname":"JIANYI WANG","user":"Iceclear","type":"user"},{"_id":"630ad0dd2ff113e0fb31c6b0","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1671174653229-630ad0dd2ff113e0fb31c6b0.jpeg","isPro":false,"fullname":"Zongsheng Yue","user":"OAOA","type":"user"},{"_id":"631966505442cbea7df9784d","avatarUrl":"/avatars/b736f3b8d959c70928f4077e24a700bf.svg","isPro":false,"fullname":"Mingyuan Zhang","user":"mingyuan","type":"user"},{"_id":"632ea8a92a6ef6fb4acd6403","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1664002049861-noauth.png","isPro":false,"fullname":"Liming Jiang","user":"EndlessSora","type":"user"},{"_id":"62a54d0410334c1d024e2f59","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1664764278226-62a54d0410334c1d024e2f59.jpeg","isPro":false,"fullname":"Shuai Yang","user":"PKUWilliamYang","type":"user"},{"_id":"60efe7fa0d920bc7805cada5","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/60efe7fa0d920bc7805cada5/2LBrJBjSCOP5ilZIpWLHl.png","isPro":false,"fullname":"Ziqi 
Huang","user":"Ziqi","type":"user"},{"_id":"63185a8973e9e4aa51dbc0ce","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1676359176854-63185a8973e9e4aa51dbc0ce.jpeg","isPro":false,"fullname":"Tianxing Wu","user":"TianxingWu","type":"user"},{"_id":"643f6293ec817b7666868a9c","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/643f6293ec817b7666868a9c/yHRiAUEj8nVZBmjnV7bQU.jpeg","isPro":false,"fullname":"Wang Yuhan","user":"yuhan-wang","type":"user"},{"_id":"644493bb9c1bd83bd1a09860","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/VNb6fnFYraR5R4vSwMYZz.jpeg","isPro":false,"fullname":"Jun CEN","user":"jcenaa","type":"user"},{"_id":"623f533a28672458f749b8e9","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1651044799068-623f533a28672458f749b8e9.png","isPro":false,"fullname":"Jiawe Ren","user":"jiawei011","type":"user"},{"_id":"6513aae6330c55fdc5462ca8","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/EDhpiTqCBMNPmMGrOKcvY.jpeg","isPro":false,"fullname":"pq-yang","user":"PeiqingYang","type":"user"},{"_id":"63f47b5321eb234ab739e91a","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/63f47b5321eb234ab739e91a/vWfFNVtMkHl8gieha5PPd.jpeg","isPro":false,"fullname":"Haozhe Xie","user":"hzxie","type":"user"},{"_id":"650e37cc11f3210cf7910501","avatarUrl":"/avatars/dab5c9d647cfa97c59f5170216673a20.svg","isPro":false,"fullname":"zeqixiao","user":"zeqixiao","type":"user"}],"userCount":24,"collections":[],"datasets":[],"models":[{"author":"mmlab-ntu","authorData":{"_id":"62d55f243bf5e059f7ca25ba","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1658151991971-62b5777f593a2c49da69dc02.png","fullname":"MMLab@NTU","name":"mmlab-ntu","type":"org","isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":53,"isUserFollowing":false},"downloads":0,"gated":false,"id":"mmlab-ntu/vtoonify-encoder","availableInferenceProviders":[],"lastModified":"2022-09-26T03:28:25.000Z","likes":0,"private":false,"repoType":"model","isLikedByUser":false}],"paperPreviews":[{"_id":"2602.08439","title":"Demo-ICL: In-Context Learning for Procedural Video Knowledge Acquisition","id":"2602.08439","thumbnailUrl":"https://cdn-thumbnails.huggingface.co/social-thumbnails/papers/2602.08439.png"},{"_id":"2601.22153","title":"DynamicVLA: A Vision-Language-Action Model for Dynamic Object Manipulation","id":"2601.22153","thumbnailUrl":"https://cdn-thumbnails.huggingface.co/social-thumbnails/papers/2601.22153.png"}],"spaces":[{"author":"mmlab-ntu","authorData":{"_id":"62d55f243bf5e059f7ca25ba","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1658151991971-62b5777f593a2c49da69dc02.png","fullname":"MMLab@NTU","name":"mmlab-ntu","type":"org","isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":53,"isUserFollowing":false},"colorFrom":"yellow","colorTo":"red","createdAt":"2023-04-27T03:03:14.000Z","emoji":"📉","id":"mmlab-ntu/Segment-Any-RGBD","lastModified":"2023-04-27T04:26:18.000Z","likes":6,"pinned":false,"private":false,"sdk":"gradio","repoType":"space","runtime":{"stage":"RUNTIME_ERROR","hardware":{"current":null,"requested":"cpu-basic"},"storage":null,"gcTimeout":86400,"errorMessage":"f4506cfbbe553e23b895e27956588\n Preparing metadata (setup.py): started\n Preparing metadata (setup.py): finished with status 'done'\nBuilding wheels for collected packages: segment-anything\n Building wheel for segment-anything (setup.py): started\n Building wheel for segment-anything 
(setup.py): finished with status 'done'\n Created wheel for segment-anything: filename=segment_anything-1.0-py3-none-any.whl size=36611 sha256=dab44cd3b0607b0f52e5581aa96aed031bc6eba4313394e9c7c55ba793648340\n Stored in directory: /tmp/pip-ephem-wheel-cache-jz6ht20f/wheels/b0/7e/40/20f0b1e23280cc4a66dc8009c29f42cb4afc1b205bc5814786\nSuccessfully built segment-anything\nInstalling collected packages: segment-anything\nSuccessfully installed segment-anything-1.0\n\n[notice] A new release of pip available: 22.3.1 -> 23.3.1\n[notice] To update, run: python -m pip install --upgrade pip\n/home/user/.local/lib/python3.8/site-packages/clip/clip.py:25: UserWarning: PyTorch version 1.7.1 or higher is recommended\n warnings.warn(\"PyTorch version 1.7.1 or higher is recommended\")\nTraceback (most recent call last):\n File \"app.py\", line 33, in \n from open_vocab_seg import add_ovseg_config\n File \"/home/user/app/open_vocab_seg/__init__.py\", line 9, in \n from .ovseg_model import OVSeg, OVSegDEMO\n File \"/home/user/app/open_vocab_seg/ovseg_model.py\", line 26, in \n from .mask_former_model import MaskFormer\n File \"/home/user/app/open_vocab_seg/mask_former_model.py\", line 17, in \n from .modeling.criterion import SetCriterion\n File \"/home/user/app/open_vocab_seg/modeling/criterion.py\", line 14, in \n from ..utils.misc import is_dist_avail_and_initialized, nested_tensor_from_tensor_list\n File \"/home/user/app/open_vocab_seg/utils/__init__.py\", line 5, in \n from .predictor import VisualizationDemo, VisualizationDemoIndoor\n File \"/home/user/app/open_vocab_seg/utils/predictor.py\", line 12, in \n from pytorch3d.structures import Pointclouds\nModuleNotFoundError: No module named 'pytorch3d'\n","replicas":{"requested":1},"devMode":false,"domains":[{"domain":"mmlab-ntu-segment-any-rgbd.hf.space","stage":"READY"}]},"title":"Segment Any RGBD","isLikedByUser":false,"trendingScore":0,"tags":["gradio","region:us"],"featured":false},{"author":"mmlab-ntu","authorData":{"_id":"62d55f243bf5e059f7ca25ba","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1658151991971-62b5777f593a2c49da69dc02.png","fullname":"MMLab@NTU","name":"mmlab-ntu","type":"org","isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":53,"isUserFollowing":false},"colorFrom":"green","colorTo":"red","createdAt":"2023-04-23T15:07:55.000Z","emoji":"👁","id":"mmlab-ntu/relate-anything-model","lastModified":"2023-04-24T19:26:09.000Z","likes":29,"pinned":false,"private":false,"sdk":"gradio","repoType":"space","runtime":{"stage":"BUILD_ERROR","hardware":{"current":null,"requested":"cpu-basic"},"storage":null,"gcTimeout":86400,"errorMessage":"Build failed with exit code: 1","replicas":{"requested":1},"devMode":false,"domains":[{"domain":"mmlab-ntu-relate-anything-model.hf.space","stage":"READY"}]},"title":"Relate Anything","isLikedByUser":false,"trendingScore":0,"tags":["gradio","region:us"],"featured":false}],"buckets":[],"numBuckets":0,"numDatasets":0,"numModels":1,"numSpaces":3,"lastOrgActivities":[{"time":"2026-02-18T13:39:13.980Z","user":"Ziqi","userAvatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/60efe7fa0d920bc7805cada5/2LBrJBjSCOP5ilZIpWLHl.png","type":"paper","paper":{"id":"2602.12279","title":"UniT: Unified Multimodal Chain-of-Thought Test-time 
Scaling","publishedAt":"2026-02-12T18:59:49.000Z","upvotes":19,"isUpvotedByUser":true}},{"time":"2026-02-18T13:38:47.999Z","user":"liangyuch","userAvatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1658586059273-62b67da0f56de4396ca9e44b.jpeg","type":"paper","paper":{"id":"2602.12279","title":"UniT: Unified Multimodal Chain-of-Thought Test-time Scaling","publishedAt":"2026-02-12T18:59:49.000Z","upvotes":19,"isUpvotedByUser":true}},{"time":"2026-02-18T09:29:40.833Z","user":"liangyuch","userAvatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1658586059273-62b67da0f56de4396ca9e44b.jpeg","type":"paper-daily","paper":{"id":"2602.12279","title":"UniT: Unified Multimodal Chain-of-Thought Test-time Scaling","thumbnailUrl":"https://cdn-thumbnails.huggingface.co/social-thumbnails/papers/2602.12279.png","upvotes":19,"publishedAt":"2026-02-12T18:59:49.000Z","isUpvotedByUser":true}}],"acceptLanguages":["*"],"canReadRepos":false,"canReadSpaces":false,"blogPosts":[],"currentRepoPage":0,"filters":{},"paperView":false}">

AI & ML interests

Computer Vision and Deep Learning

Recent Activity

Ziqi upvoted the paper UniT: Unified Multimodal Chain-of-Thought Test-time Scaling (2026-02-18)

liangyuch upvoted the paper UniT: Unified Multimodal Chain-of-Thought Test-time Scaling (2026-02-18)

datasets 0

None public yet