Mind Your Step (by Step): Chain-of-Thought can Reduce Performance on Tasks where Thinking Makes Humans Worse

Ryan Liu, Jiayi Geng, Addison J. Wu, Ilia Sucholutsky, Tania Lombrozo, Thomas L. Griffiths

2024-10-30

Summary

This paper explores how Chain-of-Thought (CoT) prompting can sometimes hurt the performance of large language models (LLMs) on certain tasks, drawing a parallel to tasks where thinking aloud or deliberating makes humans perform worse.

What's the problem?

While CoT prompting has been shown to improve LLM performance on many tasks by encouraging step-by-step reasoning, it can lead to worse results in specific situations. The researchers want to understand when and why this happens, especially since verbal thinking and deliberation are known to hurt human performance on certain kinds of tasks.

What's the solution?

The authors investigate three types of tasks where CoT prompting might reduce performance: implicit statistical learning, visual recognition, and classifying with patterns that contain exceptions. They run experiments with a range of state-of-the-art models, comparing CoT prompting against direct zero-shot answers, and find that CoT can lower accuracy significantly in these cases (a sketch of this comparison appears below). They also identify tasks where CoT helps or does not hurt performance, suggesting that not all tasks benefit from the approach. By analyzing these patterns, they provide insight into how LLMs handle these tasks and how their reasoning compares to human deliberation.
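
To make the comparison concrete, here is a minimal sketch of a zero-shot vs. chain-of-thought evaluation of the kind described above. The query_model helper, the prompt wording, and the classification task are illustrative assumptions for this summary, not the paper's actual stimuli or code.

```python
def query_model(prompt: str) -> str:
    """Placeholder for a call to whichever LLM API is being evaluated."""
    raise NotImplementedError


def zero_shot_prompt(example: str, labels: list[str]) -> str:
    # Ask for the answer directly, with no intermediate reasoning.
    return (
        f"Classify the following item as one of {labels}.\n"
        f"Item: {example}\n"
        "Answer with the label only."
    )


def cot_prompt(example: str, labels: list[str]) -> str:
    # Same task, but explicitly requesting step-by-step reasoning first.
    return (
        f"Classify the following item as one of {labels}.\n"
        f"Item: {example}\n"
        "Think step by step, then give your final answer as 'Label: <label>'."
    )


def accuracy(dataset, prompt_fn, labels):
    """Fraction of items whose gold label appears in the model's final answer line."""
    correct = 0
    for example, gold in dataset:
        response = query_model(prompt_fn(example, labels))
        final_line = response.strip().splitlines()[-1] if response.strip() else ""
        correct += int(gold.lower() in final_line.lower())
    return correct / len(dataset)


# Running both conditions on the same items shows whether CoT helps or hurts:
# dataset = [("<item text>", "A"), ...]
# print(accuracy(dataset, zero_shot_prompt, ["A", "B"]))
# print(accuracy(dataset, cot_prompt, ["A", "B"]))
```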

Why it matters?

This research is important because it helps improve our understanding of how to effectively use CoT prompting with LLMs. By identifying when this method works and when it doesn't, developers can better design AI systems that are more effective at handling complex tasks. This knowledge can lead to more reliable AI applications in fields like education, customer service, and creative writing.

Abstract

Chain-of-thought (CoT) prompting has become a widely used strategy for working with large language and multimodal models. While CoT has been shown to improve performance across many tasks, determining the settings in which it is effective remains an ongoing effort. In particular, it is still an open question in what settings CoT systematically reduces model performance. In this paper, we seek to identify the characteristics of tasks where CoT reduces performance by drawing inspiration from cognitive psychology, looking at cases where (i) verbal thinking or deliberation hurts performance in humans, and (ii) the constraints governing human performance generalize to language models. Three such cases are implicit statistical learning, visual recognition, and classifying with patterns containing exceptions. In extensive experiments across all three settings, we find that a diverse collection of state-of-the-art models exhibit significant drop-offs in performance (e.g., up to 36.3% absolute accuracy for OpenAI o1-preview compared to GPT-4o) when using inference-time reasoning compared to zero-shot counterparts. We also identify three tasks that satisfy condition (i) but not (ii), and find that while verbal thinking reduces human performance in these tasks, CoT retains or increases model performance. Overall, our results show that while there is not an exact parallel between the cognitive processes of models and those of humans, considering cases where thinking has negative consequences for human performance can help us identify settings where it negatively impacts models. By connecting the literature on human deliberation with evaluations of CoT, we offer a new tool that can be used in understanding the impact of prompt choices and inference-time reasoning.