
ShowUI-π: Flow-based Generative Models as GUI Dexterous Hands

Published on Dec 31, 2025 · Submitted by Qinghong (Kevin) Lin on Jan 14, 2026
Authors: Siyuan Hu, Kevin Qinghong Lin, Mike Zheng Shou

Abstract

Building intelligent agents capable of dexterous manipulation is essential for achieving human-like automation in both robotics and digital environments. However, existing GUI agents rely on discrete click predictions (x, y), which prohibits free-form, closed-loop trajectories (e.g. dragging a progress bar) that require continuous, on-the-fly perception and adjustment. In this work, we develop ShowUI-π, the first flow-based generative model to act as a GUI dexterous hand, featuring the following designs: (i) Unified Discrete-Continuous Actions, integrating discrete clicks and continuous drags within a shared model, enabling flexible adaptation across diverse interaction modes; (ii) Flow-based Action Generation for drag modeling, which predicts incremental cursor adjustments from continuous visual observations via a lightweight action expert, ensuring smooth and stable trajectories; (iii) Drag Training Data and Benchmark, where we manually collect and synthesize 20K drag trajectories across five domains (e.g. PowerPoint, Adobe Premiere Pro), and introduce ScreenDrag, a benchmark with comprehensive online and offline evaluation protocols for assessing GUI agents' drag capabilities. Our experiments show that proprietary GUI agents still struggle on ScreenDrag (e.g. Operator scores 13.27, and the best, Gemini-2.5-CUA, reaches 22.18). In contrast, ShowUI-π achieves 26.98 with only 450M parameters, underscoring both the difficulty of the task and the effectiveness of our approach. We hope this work advances GUI agents toward human-like dexterous control in the digital world. The code is available at https://github.com/showlab/showui-pi.
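To make design (ii) more concrete, the sketch below illustrates, under stated assumptions rather than the paper's actual architecture, how a flow-based action expert can turn a visual observation embedding into a chunk of incremental cursor adjustments: starting from Gaussian noise, a learned velocity field is integrated with a few Euler steps to yield (dx, dy) deltas. The module names, dimensions, and the tiny MLP expert are illustrative placeholders, not ShowUI-π's implementation.

```python
# Minimal sketch (not the authors' code) of flow-based action generation for drag
# control: a small "action expert" models a velocity field v_theta(a_t, t | obs),
# and at inference noise is integrated into a chunk of incremental cursor
# adjustments (dx, dy) with a few Euler steps.
import torch
import torch.nn as nn

CHUNK = 8      # number of future cursor deltas predicted per step (assumed)
OBS_DIM = 512  # dimensionality of the visual observation embedding (assumed)

class ActionExpert(nn.Module):
    """Tiny MLP standing in for the lightweight action expert."""
    def __init__(self, obs_dim=OBS_DIM, chunk=CHUNK):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + chunk * 2 + 1, 256),
            nn.GELU(),
            nn.Linear(256, chunk * 2),  # velocity over the (dx, dy) chunk
        )

    def forward(self, obs_emb, a_t, t):
        # obs_emb: (B, OBS_DIM), a_t: (B, CHUNK, 2), t: (B, 1)
        x = torch.cat([obs_emb, a_t.flatten(1), t], dim=-1)
        return self.net(x).view(-1, CHUNK, 2)

@torch.no_grad()
def sample_drag_chunk(expert, obs_emb, steps=10):
    """Integrate the learned velocity field from noise to an action chunk."""
    batch = obs_emb.shape[0]
    a = torch.randn(batch, CHUNK, 2)        # a_0 ~ N(0, I)
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((batch, 1), i * dt)
        a = a + dt * expert(obs_emb, a, t)  # Euler step along v_theta
    return a                                # (B, CHUNK, 2) cursor deltas

if __name__ == "__main__":
    expert = ActionExpert()
    obs = torch.randn(1, OBS_DIM)           # stand-in for a screen encoding
    print(sample_drag_chunk(expert, obs).shape)  # torch.Size([1, 8, 2])
```

In a closed-loop setting, such a chunk would be re-predicted from fresh screen observations as the drag proceeds, which is what allows on-the-fly adjustment rather than a single open-loop jump.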

Community

Paper author · Paper submitter

TL;DR: ShowUI-π is a 450M-parameter flow-based vision-language-action model that treats GUI actions as continuous trajectories, generating smooth clicks and drags directly from screen observations. It unifies discrete and continuous actions, enabling precise drawing, rotation, sorting, and CAPTCHA solving without tokenized coordinates.

arXiv: https://arxiv.org/abs/2512.24965
Website: https://showlab.github.io/showui-pi/
GitHub: https://github.com/showlab/showui-pi
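As a usage note on the "continuous trajectories" idea above: once a model emits a sequence of cursor deltas, replaying it as a real OS-level drag is straightforward. The sketch below uses pyautogui (a real automation library); the deltas list is a hypothetical stand-in for model output, not ShowUI-π's actual interface.

```python
# Minimal sketch: execute a predicted drag trajectory as incremental mouse moves.
import pyautogui

def execute_drag(start_xy, deltas, step_pause=0.01):
    """Press at start_xy, replay incremental (dx, dy) moves, then release."""
    x, y = start_xy
    pyautogui.moveTo(x, y)
    pyautogui.mouseDown()
    try:
        for dx, dy in deltas:
            x, y = x + dx, y + dy
            pyautogui.moveTo(x, y, duration=step_pause)
    finally:
        pyautogui.mouseUp()  # always release the button, even on error

# Example: drag a slider 80 px to the right in small closed-loop increments.
# execute_drag((640, 360), [(10, 0)] * 8)
```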

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

* ShowUI-Aloha: Human-Taught GUI Agent (2026): https://huggingface.co/papers/2601.07181
* Unified Embodied VLM Reasoning with Robotic Action via Autoregressive Discretized Pre-training (2025): https://huggingface.co/papers/2512.24125
* $\mathcal{E}_0$: Enhancing Generalization and Fine-Grained Control in VLA Models via Continuized Discrete Diffusion (2025): https://huggingface.co/papers/2511.21542
* PALM: Progress-Aware Policy Learning via Affordance Reasoning for Long-Horizon Robotic Manipulation (2026): https://huggingface.co/papers/2601.07060
* METIS: Multi-Source Egocentric Training for Integrated Dexterous Vision-Language-Action Model (2025): https://huggingface.co/papers/2511.17366
* SAGA: Open-World Mobile Manipulation via Structured Affordance Grounding (2025): https://huggingface.co/papers/2512.12842
* See Once, Then Act: Vision-Language-Action Model with Task Learning from One-Shot Video Demonstrations (2025): https://huggingface.co/papers/2512.07582

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2512.24965 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2512.24965 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2512.24965 in a Space README.md to link it from this page.

Collections including this paper 5