B-score: Detecting Biases in Large Language Models Using Response History
Authors: An Vo, Mohammad Reza Taesiri, Daeyoung Kim, Anh (Totti) Nguyen
Published: 2025-05-24
Project page: https://b-score.github.io/ · Code: https://github.com/anvo25/b-score
AI-generated summary
LLMs can reduce biases in multi-turn conversations for certain types of questions, and a novel B-score metric improves the accuracy of verifying LLM answers.
Large language models (LLMs) often exhibit strong biases, e.g., against women or in favor of the number 7. We investigate whether LLMs can output less biased answers when allowed to observe their prior answers to the same question in a multi-turn conversation. To understand which types of questions invite more biased answers, we test LLMs on our proposed set of questions that span 9 topics and belong to three types: (1) Subjective; (2) Random; and (3) Objective. Interestingly, LLMs are able to "de-bias" themselves in a multi-turn conversation when responding to questions that seek a Random, unbiased answer. Furthermore, we propose B-score, a novel metric that is effective in detecting biases in answers to Subjective, Random, Easy, and Hard questions. On MMLU, HLE, and CSQA, leveraging B-score substantially improves the verification accuracy of LLM answers (i.e., accepting correct LLM answers and rejecting incorrect ones) compared to using verbalized confidence scores or the frequency of single-turn answers alone. Code and data are available at: https://b-score.github.io.
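The abstract contrasts an answer's frequency across independent single-turn queries with its frequency when the model can see its own response history. A minimal sketch of a B-score-style gap under that reading (the function name, inputs, and exact formula here are illustrative assumptions, not the paper's definition; see the project page for the actual metric):

```python
from collections import Counter

def b_score_sketch(single_turn_answers, multi_turn_answers, answer):
    """Hypothetical B-score-style bias gap: how much more often the
    model gives `answer` in independent single-turn calls than in a
    multi-turn conversation where it observes its prior answers.
    A large positive gap suggests a bias the model corrects once it
    sees its own response history."""
    p_single = Counter(single_turn_answers)[answer] / len(single_turn_answers)
    p_multi = Counter(multi_turn_answers)[answer] / len(multi_turn_answers)
    return p_single - p_multi

# Toy example: "pick a random digit 0-9". The model favors 7 in
# independent calls but spreads out once it sees its own history.
single = [7, 7, 7, 7, 7, 7, 7, 7, 3, 1]   # "7" in 8 of 10 samples
multi = [7, 3, 1, 9, 0, 7, 4, 2, 8, 5]    # "7" in 2 of 10 turns
print(round(b_score_sketch(single, multi, 7), 2))  # 0.6
```

Under this sketch, a score near zero for an Objective question would indicate the model answers consistently with or without its history, while a large gap on a Random question flags a default-choice bias.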