Do great minds think alike? Investigating Human-AI Complementarity in Question Answering with CAIMIRA
Maharshi Gor, Hal Daumé III, Tianyi Zhou, Jordan Boyd-Graber
2024-10-10

Summary
This paper examines how human and AI abilities in question answering differ and complement each other, using a framework called CAIMIRA to measure the problem-solving skills of both.
What's the problem?
There are claims that AI, particularly large language models (LLMs), can outperform humans at understanding and reasoning with language. However, it is unclear how well these claims hold up, especially on complex tasks like answering questions accurately and reliably.
What's the solution?
To investigate this, the authors introduce CAIMIRA, a framework grounded in item response theory (IRT) that enables a detailed comparison of how well humans and AI systems perform on question-answering tasks. They analyzed over 300,000 responses from roughly 70 AI systems and 155 human participants across thousands of quiz questions. The study found that while humans excel at knowledge-grounded and conceptual reasoning, advanced AI models like GPT-4 are better at retrieving specific information when the question clearly signals what is being asked.
Why it matters?
This research is important because it helps clarify the strengths and weaknesses of both humans and AI in problem-solving. By understanding these differences, future question-answering tasks can be designed to challenge both higher-level reasoning skills and the ability to interpret language in nuanced ways. This can lead to the development of AI systems that better complement human thinking, improving overall performance in real-world problem-solving scenarios.
Abstract
Recent advancements of large language models (LLMs) have led to claims of AI surpassing humans in natural language processing (NLP) tasks such as textual understanding and reasoning. This work investigates these assertions by introducing CAIMIRA, a novel framework rooted in item response theory (IRT) that enables quantitative assessment and comparison of problem-solving abilities of question-answering (QA) agents: humans and AI systems. Through analysis of over 300,000 responses from ~70 AI systems and 155 humans across thousands of quiz questions, CAIMIRA uncovers distinct proficiency patterns in knowledge domains and reasoning skills. Humans outperform AI systems in knowledge-grounded abductive and conceptual reasoning, while state-of-the-art LLMs like GPT-4 and LLaMA show superior performance on targeted information retrieval and fact-based reasoning, particularly when information gaps are well-defined and addressable through pattern matching or data retrieval. These findings highlight the need for future QA tasks to focus on questions that challenge not only higher-order reasoning and scientific thinking, but also demand nuanced linguistic interpretation and cross-contextual knowledge application, helping advance AI developments that better emulate or complement human cognitive abilities in real-world problem-solving.
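The paper's exact CAIMIRA formulation is not reproduced here, but the core IRT idea it builds on, modeling the probability that an agent answers an item correctly as a function of agent skill and question characteristics, can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration: the variable names, dimensions, and the specific relevance-weighted parameterization are assumptions for exposition, not the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical multidimensional IRT-style sketch (not CAIMIRA's exact model):
# each agent has a latent skill vector, each question has a difficulty vector
# and a relevance vector saying which latent skills the question exercises.
rng = np.random.default_rng(0)
n_agents, n_questions, n_skills = 5, 8, 3

agent_skill = rng.normal(size=(n_agents, n_skills))            # humans and AI systems
question_difficulty = rng.normal(size=(n_questions, n_skills))
question_relevance = rng.dirichlet(np.ones(n_skills), size=n_questions)

# P(agent i answers question j correctly) = sigmoid of the relevance-weighted
# skill-minus-difficulty gap, summed over latent skill dimensions.
logits = np.einsum(
    "qk,aqk->aq",
    question_relevance,
    agent_skill[:, None, :] - question_difficulty[None, :, :],
)
p_correct = sigmoid(logits)

print(p_correct.shape)  # (5, 8): one predicted success probability per agent-question pair
```

In the actual framework, such parameters would be fit from the observed response matrix (which agents answered which questions correctly); the sketch only shows how agent abilities and question characteristics can combine into a predicted probability of success, which is what lets humans and AI systems be compared on a common scale.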