\n","classNames":"hf-sanitized hf-sanitized-ziixdjMk1DFUDSFV71g0w"},"users":[{"_id":"62b5777f593a2c49da69dc02","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1658152070753-62b5777f593a2c49da69dc02.jpeg","isPro":false,"fullname":"Jingkang Yang","user":"Jingkang","type":"user"},{"_id":"62e57662ae9d3f10acbb1b1b","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/62e57662ae9d3f10acbb1b1b/lg58jdbNyv6LGH2LFnZDF.png","isPro":false,"fullname":"Shangchen Zhou","user":"sczhou","type":"user"},{"_id":"62b67da0f56de4396ca9e44b","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1658586059273-62b67da0f56de4396ca9e44b.jpeg","isPro":false,"fullname":"Liangyu Chen","user":"liangyuch","type":"user"},{"_id":"629c95b7a5d6f5fe10e6ed45","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/629c95b7a5d6f5fe10e6ed45/Sy0Ype5snsRookID-gsSm.jpeg","isPro":false,"fullname":"Yuming Jiang","user":"yumingj","type":"user"},{"_id":"62d3f7d84b0933c48f3cdd9c","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/62d3f7d84b0933c48f3cdd9c/Tab1vxtxLatWzXS8NVIyo.png","isPro":true,"fullname":"Bo Li","user":"luodian","type":"user"},{"_id":"62fc8cf7ee999004b5a8b982","avatarUrl":"/avatars/6c5dda9e58747054a989f077a078f3dc.svg","isPro":false,"fullname":"Zhaoxi Chen","user":"FrozenBurning","type":"user"},{"_id":"62df78222d89ce551ce0f71d","avatarUrl":"/avatars/89fba294cff2d2f941d121c1923e4c76.svg","isPro":false,"fullname":"Lingdong Kong","user":"ldkong","type":"user"},{"_id":"62ab1ac1d48b4d8b048a3473","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1656826685333-62ab1ac1d48b4d8b048a3473.png","isPro":false,"fullname":"Ziwei Liu","user":"liuziwei7","type":"user"},{"_id":"62fcf93cc1588e1d4c66453e","avatarUrl":"/avatars/5ebf58aca34e7eb7cb56d1188118e7ed.svg","isPro":false,"fullname":"Jiahao Xie","user":"Jiahao000","type":"user"},{"_id":"623c530013a63ea865f96c8e","avatarUrl":"/avatars/164455a1a94f92b71733fc778c21bd89.svg","isPro":false,"fullname":"Fangzhou Hong","user":"hongfz16","type":"user"},{"_id":"63023b6ab002e9a4a2152890","avatarUrl":"/avatars/cae8ba0a8d61fb4e576934431f43991b.svg","isPro":false,"fullname":"Haonan Qiu","user":"MoonQiu","type":"user"},{"_id":"63043db17373aacccd89f49d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/63043db17373aacccd89f49d/jzP_fPCFXeYJvAD8uA_N7.jpeg","isPro":false,"fullname":"JIANYI WANG","user":"Iceclear","type":"user"},{"_id":"630ad0dd2ff113e0fb31c6b0","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1671174653229-630ad0dd2ff113e0fb31c6b0.jpeg","isPro":false,"fullname":"Zongsheng Yue","user":"OAOA","type":"user"},{"_id":"631966505442cbea7df9784d","avatarUrl":"/avatars/b736f3b8d959c70928f4077e24a700bf.svg","isPro":false,"fullname":"Mingyuan Zhang","user":"mingyuan","type":"user"},{"_id":"632ea8a92a6ef6fb4acd6403","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1664002049861-noauth.png","isPro":false,"fullname":"Liming Jiang","user":"EndlessSora","type":"user"},{"_id":"62a54d0410334c1d024e2f59","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1664764278226-62a54d0410334c1d024e2f59.jpeg","isPro":false,"fullname":"Shuai Yang","user":"PKUWilliamYang","type":"user"},{"_id":"60efe7fa0d920bc7805cada5","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/60efe7fa0d920bc7805cada5/2LBrJBjSCOP5ilZIpWLHl.png","isPro":false,"fullname":"Ziqi 
Huang","user":"Ziqi","type":"user"},{"_id":"63185a8973e9e4aa51dbc0ce","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1676359176854-63185a8973e9e4aa51dbc0ce.jpeg","isPro":false,"fullname":"Tianxing Wu","user":"TianxingWu","type":"user"},{"_id":"643f6293ec817b7666868a9c","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/643f6293ec817b7666868a9c/yHRiAUEj8nVZBmjnV7bQU.jpeg","isPro":false,"fullname":"Wang Yuhan","user":"yuhan-wang","type":"user"},{"_id":"644493bb9c1bd83bd1a09860","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/VNb6fnFYraR5R4vSwMYZz.jpeg","isPro":false,"fullname":"Jun CEN","user":"jcenaa","type":"user"},{"_id":"623f533a28672458f749b8e9","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1651044799068-623f533a28672458f749b8e9.png","isPro":false,"fullname":"Jiawe Ren","user":"jiawei011","type":"user"},{"_id":"6513aae6330c55fdc5462ca8","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/EDhpiTqCBMNPmMGrOKcvY.jpeg","isPro":false,"fullname":"pq-yang","user":"PeiqingYang","type":"user"},{"_id":"63f47b5321eb234ab739e91a","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/63f47b5321eb234ab739e91a/vWfFNVtMkHl8gieha5PPd.jpeg","isPro":false,"fullname":"Haozhe Xie","user":"hzxie","type":"user"},{"_id":"650e37cc11f3210cf7910501","avatarUrl":"/avatars/dab5c9d647cfa97c59f5170216673a20.svg","isPro":false,"fullname":"zeqixiao","user":"zeqixiao","type":"user"}],"userCount":24,"collections":[],"datasets":[],"models":[{"author":"mmlab-ntu","authorData":{"_id":"62d55f243bf5e059f7ca25ba","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1658151991971-62b5777f593a2c49da69dc02.png","fullname":"MMLab@NTU","name":"mmlab-ntu","type":"org","isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":53,"isUserFollowing":false},"downloads":0,"gated":false,"id":"mmlab-ntu/vtoonify-encoder","availableInferenceProviders":[],"lastModified":"2022-09-26T03:28:25.000Z","likes":0,"private":false,"repoType":"model","isLikedByUser":false}],"paperPreviews":[{"_id":"2602.08439","title":"Demo-ICL: In-Context Learning for Procedural Video Knowledge Acquisition","id":"2602.08439","thumbnailUrl":"https://cdn-thumbnails.huggingface.co/social-thumbnails/papers/2602.08439.png"},{"_id":"2601.22153","title":"DynamicVLA: A Vision-Language-Action Model for Dynamic Object Manipulation","id":"2601.22153","thumbnailUrl":"https://cdn-thumbnails.huggingface.co/social-thumbnails/papers/2601.22153.png"}],"spaces":[{"author":"mmlab-ntu","authorData":{"_id":"62d55f243bf5e059f7ca25ba","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1658151991971-62b5777f593a2c49da69dc02.png","fullname":"MMLab@NTU","name":"mmlab-ntu","type":"org","isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":53,"isUserFollowing":false},"colorFrom":"yellow","colorTo":"red","createdAt":"2023-04-27T03:03:14.000Z","emoji":"📉","id":"mmlab-ntu/Segment-Any-RGBD","lastModified":"2023-04-27T04:26:18.000Z","likes":6,"pinned":false,"private":false,"sdk":"gradio","repoType":"space","runtime":{"stage":"RUNTIME_ERROR","hardware":{"current":null,"requested":"cpu-basic"},"storage":null,"gcTimeout":86400,"errorMessage":"f4506cfbbe553e23b895e27956588\n Preparing metadata (setup.py): started\n Preparing metadata (setup.py): finished with status 'done'\nBuilding wheels for collected packages: segment-anything\n Building wheel for segment-anything (setup.py): started\n Building wheel for segment-anything 
(setup.py): finished with status 'done'\n Created wheel for segment-anything: filename=segment_anything-1.0-py3-none-any.whl size=36611 sha256=dab44cd3b0607b0f52e5581aa96aed031bc6eba4313394e9c7c55ba793648340\n Stored in directory: /tmp/pip-ephem-wheel-cache-jz6ht20f/wheels/b0/7e/40/20f0b1e23280cc4a66dc8009c29f42cb4afc1b205bc5814786\nSuccessfully built segment-anything\nInstalling collected packages: segment-anything\nSuccessfully installed segment-anything-1.0\n\n[notice] A new release of pip available: 22.3.1 -> 23.3.1\n[notice] To update, run: python -m pip install --upgrade pip\n/home/user/.local/lib/python3.8/site-packages/clip/clip.py:25: UserWarning: PyTorch version 1.7.1 or higher is recommended\n warnings.warn(\"PyTorch version 1.7.1 or higher is recommended\")\nTraceback (most recent call last):\n File \"app.py\", line 33, inAI & ML interests
Computer Vision and Deep Learning
Recent Activity
Papers
Demo-ICL: In-Context Learning for Procedural Video Knowledge Acquisition
DynamicVLA: A Vision-Language-Action Model for Dynamic Object Manipulation
MMLab@NTU was founded on 1 August 2018, with a research focus on computer vision and deep learning. Its sister lab is MMLab@CUHK. The group now comprises three faculty members and more than 40 members, including research fellows, research assistants, and PhD students.
Members of MMLab@NTU conduct research primarily in low-level vision, image and video understanding, creative content creation, and 3D scene understanding and reconstruction. Have a look at the overview of our research. All publications are listed here.
We are always looking for motivated PhD students, postdocs, and research assistants who share our interests. Check out the careers page and follow us on Twitter.