Logical Reasoning in Large Language Models: A Survey

Hanmeng Liu, Zhizhang Fu, Mengru Ding, Ruoxi Ning, Chaoli Zhang, Xiaozhang Liu, Yue Zhang

2025-02-14

Summary

This paper is a survey of how well large language models (LLMs) can do logical reasoning, which is like solving puzzles or working through complex problems step by step. It reviews the latest progress in this area and explains the different ways these AI models try to think logically.

What's the problem?

Even though LLMs are getting really good at understanding language and solving some tricky problems, we're not sure if they can do proper logical reasoning like humans. It's like wondering whether a super-smart calculator actually understands math or is just very good at following rules without truly 'getting it'.

What's the solution?

The researchers didn't solve the problem directly; instead, as a survey, they did a few important things. They gathered all the recent studies on how LLMs handle logical reasoning and organized this information. They explained the different types of reasoning these models can attempt: deductive (using general rules to figure out specific things), inductive (using specific examples to come up with general rules), abductive (finding the most likely explanation for an observation), and analogical (carrying what works in one situation over to a similar one). They also reviewed ways to make LLMs better at reasoning, such as training them on specialized data, rewarding good reasoning with reinforcement learning, changing how the model generates its answers (decoding strategies), and combining neural networks with symbolic logic tools. A toy sketch of the first two reasoning types follows below.
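
To make the first two paradigms concrete, here is a minimal, hypothetical Python sketch (not from the survey): the deduction toy applies general rules to a specific fact, while the induction toy guesses a general linear rule from two specific examples. All names and rules here are illustrative assumptions.

```python
# Toy illustration (not the paper's method) of deductive vs. inductive reasoning.

def deductive(rules: dict, fact: str) -> list:
    """Deduction: apply general rules to a specific fact.
    E.g. the rule 'human -> mortal' plus the fact 'socrates is human'
    lets us conclude 'socrates is mortal'."""
    conclusions = []
    while fact in rules:          # keep chaining rules while one applies
        fact = rules[fact]
        conclusions.append(fact)
    return conclusions

def inductive(examples: list):
    """Induction: guess a general rule from specific examples.
    Here we naively hypothesize a linear rule y = a*x + b from two points."""
    (x1, y1), (x2, y2) = examples[0], examples[1]
    a = (y2 - y1) / (x2 - x1)
    b = y1 - a * x1
    return lambda x: a * x + b

# Deduction: general rules -> specific conclusions.
print(deductive({"socrates": "human", "human": "mortal"}, "socrates"))
# ['human', 'mortal']

# Induction: specific examples -> a general (and possibly wrong) rule.
rule = inductive([(1, 3), (2, 5)])
print(rule(10))  # 21.0, assuming the linear hypothesis actually holds
```

Note that the induced rule is only a hypothesis: two points always fit a line, but a third example could contradict it, which is exactly why inductive conclusions are less certain than deductive ones.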

Why it matters?

This matters because as AI becomes more common in our lives, we need to know how 'smart' it really is. If we can make AI that truly understands logic, it could help solve complex problems in science, medicine, or technology. Understanding the limits of current AI also helps us know where humans are still needed and how to make AI safer and more reliable. This research gives scientists a roadmap for making AI that can think more like humans do when solving difficult problems.

Abstract

With the emergence of advanced reasoning models like OpenAI o3 and DeepSeek-R1, large language models (LLMs) have demonstrated remarkable reasoning capabilities. However, their ability to perform rigorous logical reasoning remains an open question. This survey synthesizes recent advancements in logical reasoning within LLMs, a critical area of AI research. It outlines the scope of logical reasoning in LLMs, its theoretical foundations, and the benchmarks used to evaluate reasoning proficiency. We analyze existing capabilities across different reasoning paradigms (deductive, inductive, abductive, and analogical) and assess strategies to enhance reasoning performance, including data-centric tuning, reinforcement learning, decoding strategies, and neuro-symbolic approaches. The review concludes with future directions, emphasizing the need for further exploration to strengthen logical reasoning in AI systems.
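
To illustrate the neuro-symbolic idea the abstract mentions, here is a hedged Python sketch, not the paper's method: a placeholder `call_llm` stands in for a neural model that drafts a formal statement, and a brute-force truth-table check plays the role of the symbolic verifier. Every name here is a hypothetical stand-in.

```python
# A hedged sketch of a neuro-symbolic pipeline: the neural side proposes
# a formula, the symbolic side verifies it. `call_llm` is a hypothetical
# placeholder, not a real API.
from itertools import product

def call_llm(question: str) -> str:
    """Placeholder for a language model that translates a question
    into a propositional formula over the variables a and b."""
    return "(a or b) and not (a and b)"  # e.g. 'exactly one of a, b'

def holds_for_all(formula: str) -> bool:
    """Symbolic side: brute-force truth-table check (a tautology test).
    eval() is safe enough for this toy, fixed-vocabulary formula."""
    return all(eval(formula, {}, {"a": a, "b": b})
               for a, b in product([True, False], repeat=2))

formula = call_llm("Is 'exactly one of a, b' always true?")
print(holds_for_all(formula))  # False: the drafted claim fails the check
```

The design point is the division of labor: the neural component proposes candidate logic, and the symbolic component either certifies or rejects it, catching errors that a free-form text answer would hide.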