Paper page - Logical Reasoning in Large Language Models: A Survey
arxiv:2502.09100

Logical Reasoning in Large Language Models: A Survey

Published on Feb 13, 2025 · Submitted by AK on Feb 14, 2025
Authors: Hanmeng Liu, Zhizhang Fu, Mengru Ding, Ruoxi Ning, Chaoli Zhang, Xiaozhang Liu, Yue Zhang

Abstract

With the emergence of advanced reasoning models like OpenAI o3 and DeepSeek-R1, large language models (LLMs) have demonstrated remarkable reasoning capabilities. However, their ability to perform rigorous logical reasoning remains an open question. This survey synthesizes recent advancements in logical reasoning within LLMs, a critical area of AI research. It outlines the scope of logical reasoning in LLMs, its theoretical foundations, and the benchmarks used to evaluate reasoning proficiency. We analyze existing capabilities across different reasoning paradigms - deductive, inductive, abductive, and analogical - and assess strategies to enhance reasoning performance, including data-centric tuning, reinforcement learning, decoding strategies, and neuro-symbolic approaches. The review concludes with future directions, emphasizing the need for further exploration to strengthen logical reasoning in AI systems.

AI-generated summary

This survey examines advancements in logical reasoning within large language models, analyzing capabilities across various paradigms and strategies for improvement.
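As a toy illustration of the neuro-symbolic strategy the abstract mentions (this sketch is not from the paper; the function and encoding are hypothetical), the idea is to let an LLM translate natural-language premises into symbolic facts and rules, then hand the deduction itself to a small deterministic solver. A minimal forward-chaining engine over propositional Horn clauses:

```python
# Minimal forward chaining over Horn clauses: the kind of symbolic backend
# that neuro-symbolic pipelines pair with an LLM. The LLM would produce the
# facts and rules; the solver derives every entailed conclusion.

def forward_chain(facts, rules):
    """Return all facts derivable from `facts` using Horn `rules`.

    `rules` is a list of (premises, conclusion) pairs, where `premises`
    is a set of facts that must all hold for `conclusion` to be added.
    Iterates to a fixed point, so it terminates on finite rule sets.
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and premises <= derived:
                derived.add(conclusion)
                changed = True
    return derived

# Example: the Socrates syllogism, encoded propositionally.
facts = {"human(socrates)"}
rules = [({"human(socrates)"}, "mortal(socrates)")]
print("mortal(socrates)" in forward_chain(facts, rules))  # True
```

The point of the split is soundness: any fact the engine derives follows from the premises by construction, which is the kind of rigor the abstract notes is still an open question for purely neural reasoning.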

Community

Paper submitter

[Screenshot attachment, posted 2025-02-13]

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

- Advancing Reasoning in Large Language Models: Promising Methods and Approaches (https://huggingface.co/papers/2502.03671) (2025)
- Instantiation-based Formalization of Logical Reasoning Tasks using Language Models and Logical Solvers (https://huggingface.co/papers/2501.16961) (2025)
- SR-FoT: A Syllogistic-Reasoning Framework of Thought for Large Language Models Tackling Knowledge-based Reasoning Tasks (https://huggingface.co/papers/2501.11599) (2025)
- The Emergence of Strategic Reasoning of Large Language Models (https://huggingface.co/papers/2412.13013) (2024)
- ZebraLogic: On the Scaling Limits of LLMs for Logical Reasoning (https://huggingface.co/papers/2502.01100) (2025)
- JustLogic: A Comprehensive Benchmark for Evaluating Deductive Reasoning in Large Language Models (https://huggingface.co/papers/2501.14851) (2025)
- Towards Large Reasoning Models: A Survey of Reinforced Reasoning with Large Language Models (https://huggingface.co/papers/2501.09686) (2025)

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend

The AlphaGeometry model is not mentioned in the paper.

Paper author

Will add it in the next round!

Many of the techniques covered in the survey are implemented in optillm, our open-source optimizing inference proxy: https://github.com/codelion/optillm


Models citing this paper 0

No model linking this paper


Datasets citing this paper 0

No dataset linking this paper


Spaces citing this paper 0

No Space linking this paper


Collections including this paper 13