Can Large Reasoning Models do Analogical Reasoning under Perceptual Uncertainty?

Giacomo Camposampiero, Michael Hersche, Roger Wattenhofer, Abu Sebastian, Abbas Rahimi

2025-03-17
Summary

This paper investigates whether advanced AI models can reason like humans when faced with uncertain or confusing visual information, specifically in solving visual analogy problems.

What's the problem?

AI models, even powerful ones, often struggle when the visual information they receive is imperfect or contains distractions. It is unclear how well they can perform analogical reasoning, a type of problem-solving that involves identifying relationships between different elements, when the visual input is noisy or cluttered with irrelevant details.

What's the solution?

The researchers tested two advanced reasoning models, OpenAI's o3-mini and DeepSeek R1, on visual analogy problems similar to those found on human IQ tests. They modified the problems in two ways, adding confounding visual attributes that carry no useful signal and making attribute values less precise, to see how the models would perform under these simulated perceptual uncertainties.

Why it matters?

This work matters because it highlights the limitations of current AI models in dealing with real-world visual information, which is often noisy and uncertain. It also shows that a different approach, the neuro-symbolic model ARLC, is far more robust in handling these challenges.

Abstract

This work presents a first evaluation of two state-of-the-art Large Reasoning Models (LRMs), OpenAI's o3-mini and DeepSeek R1, on analogical reasoning, focusing on well-established nonverbal human IQ tests based on Raven's progressive matrices. We benchmark with the I-RAVEN dataset and its more difficult extension, I-RAVEN-X, which tests the ability to generalize to longer reasoning rules and ranges of the attribute values. To assess the influence of visual uncertainties on these nonverbal analogical reasoning tests, we extend the I-RAVEN-X dataset, which otherwise assumes an oracle perception. We adopt a two-fold strategy to simulate this imperfect visual perception: 1) we introduce confounding attributes which, being sampled at random, do not contribute to the prediction of the correct answer of the puzzles, and 2) we smooth the distributions of the input attributes' values. We observe a sharp decline in OpenAI's o3-mini task accuracy, dropping from 86.6% on the original I-RAVEN to just 17.0% -- approaching random chance -- on the more challenging I-RAVEN-X, which increases input length and range and emulates perceptual uncertainty. This drop occurred despite spending 3.4x more reasoning tokens. A similar trend is also observed for DeepSeek R1: from 80.6% to 23.2%. On the other hand, a neuro-symbolic probabilistic abductive model, ARLC, that achieves state-of-the-art performance on I-RAVEN, can robustly reason under all these out-of-distribution tests, maintaining strong accuracy with only a modest reduction from 98.6% to 88.0%. Our code is available at https://github.com/IBM/raven-large-language-models.
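The two-fold perturbation strategy described in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' actual implementation (their code is at the linked repository): the panel representation, attribute names, function name, and parameter values are all assumptions made for the example. It shows the two operations on a single puzzle panel: adding randomly sampled confounding attributes, and smoothing each attribute's value into a probability distribution over its range.

```python
import random


def perturb_panel(panel, n_confounders=2, smoothing=0.3, value_range=10, rng=None):
    """Illustrative sketch of the paper's two perturbations on one panel.

    `panel` maps attribute names to integer values, e.g. {"size": 3, "color": 7}.
    Returns a dict where each attribute value becomes a distribution over the
    value range, with extra confounding attributes appended.
    """
    rng = rng or random.Random()

    def smooth(value):
        # Spread probability mass `smoothing` uniformly over the wrong values,
        # keeping 1 - smoothing on the true value (simulated perceptual noise).
        dist = [smoothing / (value_range - 1)] * value_range
        dist[value] = 1.0 - smoothing
        return dist

    # 2) Smooth the distributions of the real attributes' values.
    perturbed = {name: smooth(value) for name, value in panel.items()}

    # 1) Add confounding attributes sampled uniformly at random; they carry
    # no information about the puzzle's correct answer.
    for i in range(n_confounders):
        perturbed[f"confounder_{i}"] = smooth(rng.randrange(value_range))
    return perturbed


panel = {"size": 3, "color": 7}
noisy = perturb_panel(panel, rng=random.Random(0))
# `noisy` now has 4 attributes; each value is a length-10 distribution
# whose mode is still the original value for the real attributes.
```

In the paper's setup, an LRM must then reason over these noisier, longer inputs, which is where the reported accuracy drops occur.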