Knowledge Augmented Complex Problem Solving with Large Language Models: A Survey
Da Zheng, Lun Du, Junwei Su, Yuchen Tian, Yuqi Zhu, Jintian Zhang, Lanning Wei, Ningyu Zhang, Huajun Chen
2025-05-08
Summary
This survey examines how large language models (LLMs) can solve complex problems by combining their computational power with human-like reasoning abilities.
What's the problem?
Even though LLMs are powerful, they still struggle with tasks that require many steps of reasoning, deep domain-specific knowledge, and verification that their answers are actually correct. These gaps make it hard to rely on LLMs for genuinely difficult or detailed problems.
What's the solution?
The researchers surveyed ways to improve LLMs by augmenting them with external knowledge and stronger reasoning methods. They examined how models can draw on outside information, reason step by step, and check their own answers in order to handle more complex problems.
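The retrieve-then-reason-then-verify pattern described above can be sketched in a few lines. This is a toy illustration, not the paper's method: all function names are hypothetical stand-ins, with keyword overlap standing in for real knowledge retrieval and a grounding check standing in for LLM self-verification.

```python
# Toy sketch of the retrieve -> reason -> verify loop (hypothetical names;
# a real system would call an LLM and a retrieval index here).

def retrieve(question, knowledge_base):
    """Return facts whose words overlap the question (stand-in for retrieval)."""
    words = set(question.lower().split())
    return [fact for fact in knowledge_base if words & set(fact.lower().split())]

def reason(question, facts):
    """Stand-in for step-by-step reasoning: pick an answer from retrieved facts."""
    steps = [f"Step {i + 1}: consider fact '{f}'" for i, f in enumerate(facts)]
    answer = facts[0] if facts else "unknown"
    return steps, answer

def verify(answer, facts):
    """Stand-in for self-verification: accept only answers grounded in the facts."""
    return answer in facts

def solve(question, knowledge_base):
    facts = retrieve(question, knowledge_base)
    _steps, answer = reason(question, facts)
    return answer if verify(answer, facts) else "unknown"

kb = ["water boils at 100 C", "iron melts at 1538 C"]
print(solve("At what temperature does water boil?", kb))  # water boils at 100 C
```

The point of the verification step is that an ungrounded answer is rejected rather than returned, which mirrors the survey's emphasis on result checking as a distinct stage of complex problem solving.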
Why it matters?
This matters because if LLMs get better at solving hard problems, they could assist people in fields like science, engineering, and education. Understanding where LLMs excel and where they still fall short helps researchers make them more useful and trustworthy.
Abstract
LLMs address complex problem-solving by integrating human-like reasoning and computational power, but face challenges in multi-step reasoning, domain knowledge, and result verification.