Paper page - Confidence Estimation for LLMs in Multi-turn Interactions
\n","updatedAt":"2026-01-07T01:36:54.442Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.7315990924835205},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}},{"id":"696be5ab776f8ff4db792f65","author":{"_id":"65243980050781c16f234f1f","avatarUrl":"/avatars/743a009681d5d554c27e04300db9f267.svg","fullname":"Avi","name":"avahal","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":3,"isUserFollowing":false},"createdAt":"2026-01-17T19:40:27.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"arXivlens breakdown of this paper 👉 https://arxivlens.com/PaperView/Details/confidence-estimation-for-llms-in-multi-turn-interactions-678-8dca17a6\n\n- Executive Summary\n- Detailed Breakdown\n- Practical Applications","html":"
\n","updatedAt":"2026-01-17T19:40:27.600Z","author":{"_id":"65243980050781c16f234f1f","avatarUrl":"/avatars/743a009681d5d554c27e04300db9f267.svg","fullname":"Avi","name":"avahal","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":3,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.7219069600105286},"editors":["avahal"],"editorAvatarUrls":["/avatars/743a009681d5d554c27e04300db9f267.svg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2601.02179","authors":[{"_id":"695d1b7ac03d6d81e4399d2b","name":"Caiqi Zhang","hidden":false},{"_id":"695d1b7ac03d6d81e4399d2c","name":"Ruihan Yang","hidden":false},{"_id":"695d1b7ac03d6d81e4399d2d","name":"Xiaochen Zhu","hidden":false},{"_id":"695d1b7ac03d6d81e4399d2e","name":"Chengzu Li","hidden":false},{"_id":"695d1b7ac03d6d81e4399d2f","name":"Tiancheng Hu","hidden":false},{"_id":"695d1b7ac03d6d81e4399d30","name":"Yijiang River Dong","hidden":false},{"_id":"695d1b7ac03d6d81e4399d31","name":"Deqing Yang","hidden":false},{"_id":"695d1b7ac03d6d81e4399d32","name":"Nigel Collier","hidden":false}],"publishedAt":"2026-01-05T14:58:04.000Z","submittedOnDailyAt":"2026-01-06T11:57:24.455Z","title":"Confidence Estimation for LLMs in Multi-turn Interactions","submittedOnDailyBy":{"_id":"63920dfac47e36ddeb8f1864","avatarUrl":"/avatars/c36cbf7b084d62368312e5c9292e4260.svg","isPro":false,"fullname":"Caiqi Zhang","user":"caiqizh","type":"user"},"summary":"While confidence estimation is a promising direction for mitigating hallucinations in Large Language Models (LLMs), current research dominantly focuses on single-turn settings. The dynamics of model confidence in multi-turn conversations, where context accumulates and ambiguity is progressively resolved, remain largely unexplored. Reliable confidence estimation in multi-turn settings is critical for many downstream applications, such as autonomous agents and human-in-the-loop systems. 
This work presents the first systematic study of confidence estimation in multi-turn interactions, establishing a formal evaluation framework grounded in two key desiderata: per-turn calibration and monotonicity of confidence as more information becomes available. To facilitate this, we introduce novel metrics, including a length-normalized Expected Calibration Error (InfoECE), and a new \"Hinter-Guesser\" paradigm for generating controlled evaluation datasets. Our experiments reveal that widely-used confidence techniques struggle with calibration and monotonicity in multi-turn dialogues. We propose P(Sufficient), a logit-based probe that achieves comparatively better performance, although the task remains far from solved. Our work provides a foundational methodology for developing more reliable and trustworthy conversational agents.","upvotes":17,"discussionId":"695d1b7ac03d6d81e4399d33","githubRepo":"https://github.com/caiqizh/multi-turn-conf","githubRepoAddedBy":"auto","ai_summary":"Multi-turn conversation confidence estimation lacks systematic evaluation frameworks, prompting the introduction of novel metrics and a \"Hinter-Guesser\" paradigm for controlled dataset generation to improve calibration and monotonicity.","ai_keywords":["confidence estimation","large language models","multi-turn conversations","calibration","monotonicity","Expected Calibration Error","logit-based probe"],"githubStars":4,"organization":{"_id":"679c9d2bb741486264125a9a","name":"uni-cambridge","fullname":"University of 
Cambridge","avatar":"https://cdn-uploads.huggingface.co/production/uploads/679c98caac2b47d6f4132f9b/5tGEF02r7vvH94c0pQ5_0.png"}},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"645b0c3ec35da9c7afd95421","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/645b0c3ec35da9c7afd95421/vYBrCDagHsXAo6J2p-uG0.jpeg","isPro":false,"fullname":"Yuling","user":"YerbaPage","type":"user"},{"_id":"685a558b50a4d7e3c996e2e2","avatarUrl":"/avatars/50694a6bdf83a3b06512b4adcbf3409a.svg","isPro":false,"fullname":"Li Yuexian","user":"SDFHSIDFdFEF","type":"user"},{"_id":"63920dfac47e36ddeb8f1864","avatarUrl":"/avatars/c36cbf7b084d62368312e5c9292e4260.svg","isPro":false,"fullname":"Caiqi Zhang","user":"caiqizh","type":"user"},{"_id":"645474d788c97c8796dcb3d8","avatarUrl":"/avatars/dfab5c337c2c3d9c44149738f2e09517.svg","isPro":false,"fullname":"Beiduo CHen","user":"McmanusChen","type":"user"},{"_id":"671a4abbef737c0abe21b3f8","avatarUrl":"/avatars/da826af5472a3b9f1969f0c766672731.svg","isPro":false,"fullname":"Ruihan Yang","user":"rhyang2021","type":"user"},{"_id":"695d76bd94d1963e5dbf6e5c","avatarUrl":"/avatars/81998c86275b3db7b31fb8b40d08f96d.svg","isPro":false,"fullname":"James","user":"Jameshua","type":"user"},{"_id":"61b927af85a85a69ab914260","avatarUrl":"/avatars/5a8d2867c064423644101dbc74d863a5.svg","isPro":false,"fullname":"Tiancheng Hu","user":"pitehu","type":"user"},{"_id":"631efbfcc6b20f03c8211fc9","avatarUrl":"/avatars/3abf7acb7258c0f6d1a6016b293128b5.svg","isPro":false,"fullname":"txq","user":"future7","type":"user"},{"_id":"678f99215c1e705963aa8e26","avatarUrl":"/avatars/8be6e11af6ca818d090c8f420ae9c59d.svg","isPro":false,"fullname":"Yedidia 
AGNIMO","user":"YedsonUQ","type":"user"},{"_id":"689c92de47073c20aaa5664d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/3XFKakgZwLPmyGEi5MrJh.png","isPro":false,"fullname":"shebly","user":"yangzhichao","type":"user"},{"_id":"695c3d0dd0638f21b7f43f4e","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/NwnttXUqQmK0AwFyRxPRa.png","isPro":false,"fullname":"Kieran Garvey","user":"Atenai","type":"user"},{"_id":"620783f24e28382272337ba4","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/620783f24e28382272337ba4/zkUveQPNiDfYjgGhuFErj.jpeg","isPro":false,"fullname":"GuoLiangTang","user":"Tommy930","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0,"organization":{"_id":"679c9d2bb741486264125a9a","name":"uni-cambridge","fullname":"University of Cambridge","avatar":"https://cdn-uploads.huggingface.co/production/uploads/679c98caac2b47d6f4132f9b/5tGEF02r7vvH94c0pQ5_0.png"}}">
AI-generated summary
Confidence estimation for multi-turn conversations lacks a systematic evaluation framework; this paper introduces novel metrics and a "Hinter-Guesser" paradigm for controlled dataset generation to assess calibration and monotonicity.
While confidence estimation is a promising direction for mitigating hallucinations in Large Language Models (LLMs), current research predominantly focuses on single-turn settings. The dynamics of model confidence in multi-turn conversations, where context accumulates and ambiguity is progressively resolved, remain largely unexplored. Reliable confidence estimation in multi-turn settings is critical for many downstream applications, such as autonomous agents and human-in-the-loop systems. This work presents the first systematic study of confidence estimation in multi-turn interactions, establishing a formal evaluation framework grounded in two key desiderata: per-turn calibration and monotonicity of confidence as more information becomes available. To facilitate this, the authors introduce novel metrics, including a length-normalized Expected Calibration Error (InfoECE), and a new "Hinter-Guesser" paradigm for generating controlled evaluation datasets. Their experiments reveal that widely used confidence techniques struggle with calibration and monotonicity in multi-turn dialogues. They propose P(Sufficient), a logit-based probe that achieves comparatively better performance, although the task remains far from solved. The work provides a foundational methodology for developing more reliable and trustworthy conversational agents.
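To make the two desiderata concrete, here is a minimal sketch of (a) standard Expected Calibration Error and (b) a simple monotonicity-violation rate over per-turn confidences. Both functions are illustrative assumptions, not the paper's implementation: InfoECE additionally applies a length/information normalization that is not reproduced here, and the paper's monotonicity metric may differ from this adjacent-pair check.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Standard ECE: bin predictions by confidence, then take the
    bin-weighted average gap between mean confidence and accuracy.
    (Illustrative only -- the paper's InfoECE adds a length
    normalization that is not shown here.)"""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        # include the left edge only for the first bin
        if i == 0:
            mask = (confidences >= lo) & (confidences <= hi)
        else:
            mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of samples in bin
    return ece

def monotonicity_violation_rate(turn_confidences):
    """Fraction of adjacent turn pairs where confidence drops,
    a simple proxy for the monotonicity desideratum: as ambiguity
    is resolved turn by turn, confidence should not decrease."""
    c = np.asarray(turn_confidences, dtype=float)
    if len(c) < 2:
        return 0.0
    return float(np.mean(np.diff(c) < 0.0))
```

For example, a model that reports 0.9 on answers it always gets right and 0.1 on answers it always gets wrong has an ECE of 0.1, and a per-turn confidence trace of [0.2, 0.5, 0.4, 0.9] violates monotonicity on one of its three transitions.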