Paper page - SafeGround: Know When to Trust GUI Grounding Models via Uncertainty Calibration
https://github.com/Cece1031/SAFEGROUND

Comment (Librarian Bot, 2026-02-05):
This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

* [Double-Calibration: Towards Trustworthy LLMs via Calibrating Knowledge and Reasoning Confidence](https://huggingface.co/papers/2601.11956) (2026)
* [Calibrating LLM Judges: Linear Probes for Fast and Reliable Uncertainty Estimation](https://huggingface.co/papers/2512.22245) (2025)
* [NAACL: Noise-AwAre Verbal Confidence Calibration for LLMs in RAG Systems](https://huggingface.co/papers/2601.11004) (2026)
* [Fact-Checking with Large Language Models via Probabilistic Certainty and Consistency](https://huggingface.co/papers/2601.02574) (2026)
* [EpiCaR: Knowing What You Don't Know Matters for Better Reasoning in LLMs](https://huggingface.co/papers/2601.06786) (2026)
* [From Passive Metric to Active Signal: The Evolving Role of Uncertainty Quantification in Large Language Models](https://huggingface.co/papers/2601.15690) (2026)
* [Step-GUI Technical Report](https://huggingface.co/papers/2512.15431) (2025)

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`
\n","updatedAt":"2026-02-05T01:42:35.213Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.740085780620575},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}},{"id":"698785c8b87e7e373e1eaf19","author":{"_id":"65243980050781c16f234f1f","avatarUrl":"/avatars/743a009681d5d554c27e04300db9f267.svg","fullname":"Avi","name":"avahal","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":3,"isUserFollowing":false},"createdAt":"2026-02-07T18:34:48.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"arXivLens breakdown of this paper ๐ https://arxivlens.com/PaperView/Details/safeground-know-when-to-trust-gui-grounding-models-via-uncertainty-calibration-2939-1a857831\n- Executive Summary\n- Detailed Breakdown\n- Practical Applications","html":"
\n","updatedAt":"2026-02-07T18:34:48.806Z","author":{"_id":"65243980050781c16f234f1f","avatarUrl":"/avatars/743a009681d5d554c27e04300db9f267.svg","fullname":"Avi","name":"avahal","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":3,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.6809837222099304},"editors":["avahal"],"editorAvatarUrls":["/avatars/743a009681d5d554c27e04300db9f267.svg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2602.02419","authors":[{"_id":"6982d63f9084cb4f0ecb5808","name":"Qingni Wang","hidden":false},{"_id":"6982d63f9084cb4f0ecb5809","name":"Yue Fan","hidden":false},{"_id":"6982d63f9084cb4f0ecb580a","user":{"_id":"64679a226192d39142245e5e","avatarUrl":"/avatars/05abee0b6317f100923936ca2099e9eb.svg","isPro":false,"fullname":"Xin Eric Wang","user":"xw-eric","type":"user"},"name":"Xin Eric Wang","status":"claimed_verified","statusLastChangedAt":"2026-02-06T18:56:10.640Z","hidden":false}],"publishedAt":"2026-02-02T18:22:45.000Z","submittedOnDailyAt":"2026-02-04T02:48:29.522Z","title":"SafeGround: Know When to Trust GUI Grounding Models via Uncertainty Calibration","submittedOnDailyBy":{"_id":"64679a226192d39142245e5e","avatarUrl":"/avatars/05abee0b6317f100923936ca2099e9eb.svg","isPro":false,"fullname":"Xin Eric Wang","user":"xw-eric","type":"user"},"summary":"Graphical User Interface (GUI) grounding aims to translate natural language instructions into executable screen coordinates, enabling automated GUI interaction. Nevertheless, incorrect grounding can result in costly, hard-to-reverse actions (e.g., erroneous payment approvals), raising concerns about model reliability. In this paper, we introduce SafeGround, an uncertainty-aware framework for GUI grounding models that enables risk-aware predictions through calibrations before testing. 
SafeGround leverages a distribution-aware uncertainty quantification method to capture the spatial dispersion of stochastic samples from outputs of any given model. Then, through the calibration process, SafeGround derives a test-time decision threshold with statistically guaranteed false discovery rate (FDR) control. We apply SafeGround on multiple GUI grounding models for the challenging ScreenSpot-Pro benchmark. Experimental results show that our uncertainty measure consistently outperforms existing baselines in distinguishing correct from incorrect predictions, while the calibrated threshold reliably enables rigorous risk control and potentials of substantial system-level accuracy improvements. Across multiple GUI grounding models, SafeGround improves system-level accuracy by up to 5.38% percentage points over Gemini-only inference.","upvotes":4,"discussionId":"6982d6409084cb4f0ecb580b","githubRepo":"https://github.com/Cece1031/SAFEGROUND","githubRepoAddedBy":"user","ai_summary":"SafeGround is a uncertainty-aware framework for GUI grounding models that uses distribution-aware uncertainty quantification and calibration to enable risk-aware predictions with controlled false discovery rates.","ai_keywords":["GUI grounding","uncertainty quantification","calibration","false discovery rate","distribution-aware","stochastic samples","test-time decision threshold"],"githubStars":7},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"64679a226192d39142245e5e","avatarUrl":"/avatars/05abee0b6317f100923936ca2099e9eb.svg","isPro":false,"fullname":"Xin Eric Wang","user":"xw-eric","type":"user"},{"_id":"66875f6fff90daeff20da481","avatarUrl":"/avatars/070cba3e6d6153c0632e5ed1e660d070.svg","isPro":false,"fullname":"wqn","user":"cece1031","type":"user"},{"_id":"6747de57f8cab58c22ec94a2","avatarUrl":"/avatars/5bae0341862fac24564781c0fa32aac5.svg","isPro":false,"fullname":"Jinyang 
Wu","user":"Jinyang23","type":"user"},{"_id":"6524e8d3e6e5f6b1035006a4","avatarUrl":"/avatars/0c46dcebe4896d5d6d578a0c72ee6cff.svg","isPro":false,"fullname":"Zhiyuan Wang","user":"FoerKent","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0}">
AI-generated summary
SafeGround is an uncertainty-aware framework for GUI grounding models that uses distribution-aware uncertainty quantification and calibration to enable risk-aware predictions with controlled false discovery rates.
Graphical User Interface (GUI) grounding aims to translate natural language instructions into executable screen coordinates, enabling automated GUI interaction. However, incorrect grounding can result in costly, hard-to-reverse actions (e.g., erroneous payment approvals), raising concerns about model reliability. In this paper, we introduce SafeGround, an uncertainty-aware framework for GUI grounding models that enables risk-aware predictions through calibration before testing. SafeGround leverages a distribution-aware uncertainty quantification method to capture the spatial dispersion of stochastic samples drawn from the outputs of any given model. Through the calibration process, SafeGround then derives a test-time decision threshold with statistically guaranteed false discovery rate (FDR) control. We apply SafeGround to multiple GUI grounding models on the challenging ScreenSpot-Pro benchmark. Experimental results show that our uncertainty measure consistently outperforms existing baselines in distinguishing correct from incorrect predictions, while the calibrated threshold reliably enables rigorous risk control and the potential for substantial system-level accuracy improvements. Across multiple GUI grounding models, SafeGround improves system-level accuracy by up to 5.38 percentage points over Gemini-only inference.
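The two-stage pipeline the abstract describes (dispersion-based uncertainty over stochastic samples, then a calibrated accept/reject threshold) can be sketched as follows. This is an illustrative simplification, not the paper's actual method: the dispersion measure (mean distance of sampled clicks to their centroid), the function names, and the empirical-FDR threshold sweep over a labeled calibration set are all assumptions here, and this sweep omits the finite-sample statistical guarantee that SafeGround provides.

```python
import numpy as np

def dispersion_uncertainty(samples):
    """Spatial dispersion of K stochastic (x, y) click samples for one
    instruction: mean Euclidean distance to the sample centroid.
    (Illustrative stand-in for the paper's distribution-aware measure.)"""
    samples = np.asarray(samples, dtype=float)  # shape (K, 2)
    centroid = samples.mean(axis=0)
    return float(np.linalg.norm(samples - centroid, axis=1).mean())

def calibrate_threshold(cal_uncertainties, cal_correct, alpha=0.1):
    """Largest uncertainty threshold t such that, on the calibration set,
    the fraction of *accepted* predictions that are wrong (empirical FDR)
    stays <= alpha. Simplified sweep: the paper's procedure additionally
    gives a statistical FDR guarantee at test time."""
    order = np.argsort(cal_uncertainties)
    u = np.asarray(cal_uncertainties, dtype=float)[order]
    wrong = 1 - np.asarray(cal_correct, dtype=float)[order]
    # Empirical FDR among the i most-confident (lowest-uncertainty) points.
    fdr = np.cumsum(wrong) / np.arange(1, len(u) + 1)
    ok = np.where(fdr <= alpha)[0]
    return float(u[ok[-1]]) if len(ok) else -np.inf
```

At test time, a prediction whose sampled clicks cluster tightly (low dispersion) falls under the calibrated threshold and is accepted; widely scattered samples signal uncertainty and the action is deferred, e.g. to a stronger fallback model such as Gemini in the paper's system-level experiments.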