
arxiv:2501.09775

Multiple Choice Questions: Reasoning Makes Large Language Models (LLMs) More Self-Confident Even When They Are Wrong

Authors: Tairan Fu, Javier Conde, Gonzalo Martínez, María Grandury, Pedro Reviriego

Published on Jan 16, 2025 · Submitted by Gonzalo on Jan 20, 2025
#3 Paper of the day

Abstract

AI-generated summary: LLMs are more confident in their answers when reasoning is provided before the answer, regardless of correctness, due to modifications in probability estimation.
One of the most widely used methods to evaluate LLMs is Multiple Choice Question (MCQ) tests. MCQ benchmarks enable testing LLM knowledge on almost any topic at scale, since the results can be processed automatically. To help the LLM answer, a few examples, known as few-shot examples, can be included in the prompt. Moreover, the LLM can be asked either to answer the question directly with the selected option or to first provide its reasoning and then the selected answer, which is known as chain of thought. In addition to checking whether the selected answer is correct, the evaluation can look at the LLM-estimated probability of its response as an indication of the model's confidence in that response. In this paper, we study how the LLM's confidence in its answer depends on whether the model has been asked to answer directly or to provide the reasoning before answering. The evaluation of questions on a wide range of topics across seven different models shows that LLMs are more confident in their answers when they provide reasoning before the answer. This occurs regardless of whether the selected answer is correct. Our hypothesis is that this behavior arises because the reasoning modifies the probability of the selected answer: the LLM predicts the answer based on the input question and the reasoning that supports the selection made. Therefore, LLM-estimated probabilities appear to have intrinsic limitations that should be understood before using them in evaluation procedures. Interestingly, the same behavior has been observed in humans, for whom explaining an answer increases confidence in its correctness.
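The confidence measurement the abstract refers to can be illustrated with a short script. The sketch below is not the authors' code: it simply compares the next-token probability of the selected option letter under a direct-answer prompt and under a prompt where the reasoning precedes the answer, using the Hugging Face transformers library. The model name, prompt templates, and the fixed reasoning string are assumptions made for the example.

```python
# A minimal sketch (not the authors' code) of the kind of measurement the paper
# describes: treat the softmax probability of the chosen option letter as the
# LLM's confidence, and compare a direct-answer prompt with a prompt where the
# reasoning precedes the answer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "Qwen/Qwen2.5-0.5B-Instruct"  # assumption: any causal LM works here
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16)
model.eval()

def option_confidence(prompt: str, options=("A", "B", "C", "D")):
    """Return (selected option, probability), where the probability is the
    softmax mass of the selected letter renormalized over the option letters."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        next_token_logits = model(**inputs).logits[0, -1]  # logits for the next token
    option_ids = [tokenizer.encode(o, add_special_tokens=False)[0] for o in options]
    probs = torch.softmax(next_token_logits[option_ids], dim=-1)
    best = int(probs.argmax())
    return options[best], float(probs[best])

question = (
    "Question: What is the capital of France?\n"
    "A) Madrid  B) Paris  C) Rome  D) Berlin\n"
)

# Direct answering: the option letter is predicted right after the question.
direct_prompt = question + "Answer: "

# Reasoning first (chain of thought): the option letter is predicted conditioned
# on the question *and* the reasoning. Here the reasoning is a fixed illustrative
# string; in practice it would be generated by the model itself.
cot_prompt = question + "Reasoning: Paris is the capital of France.\nAnswer: "

for name, prompt in [("direct", direct_prompt), ("reasoning first", cot_prompt)]:
    option, confidence = option_confidence(prompt)
    print(f"{name}: selected {option} with confidence {confidence:.3f}")
```

Renormalizing over the option letters is just one convention for turning logits into a confidence score; the paper's exact scoring may differ. The key observation reported above is that this score tends to rise once the model's own reasoning is in the context, whether or not the selected answer is correct.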

Community


This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2501.09775 in a model README.md to link it from this page.
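For reference, the link is picked up once the arXiv URL appears in the model card text. A purely illustrative README.md sketch (the model name and metadata are made up):

```markdown
---
library_name: transformers   # illustrative metadata; only the arXiv link below matters for linking
---

# my-mcq-confidence-model (hypothetical)

Experiments based on the paper
[Multiple Choice Questions: Reasoning Makes LLMs More Self-Confident Even When They Are Wrong](https://arxiv.org/abs/2501.09775).
```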

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2501.09775 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2501.09775 in a Space README.md to link it from this page.

Collections including this paper 6