RiddleBench: A New Generative Reasoning Benchmark for LLMs

Deepon Halder, Alan Saji, Thanmay Jayakumar, Ratish Puduppully, Anoop Kunchukuttan, Raj Dabre

2025-11-05

Summary

This paper introduces a new way to test how well large language models, like those powering chatbots, can actually *think* beyond just doing math or answering simple questions.

What's the problem?

Current tests for AI reasoning mostly focus on things like solving math problems or logic puzzles with clear rules. They don't really check if the AI can handle more complex situations that require combining different types of thinking – like understanding spatial relationships, figuring out what's possible given certain limits, and using common sense. Basically, existing tests aren't good at measuring the kind of flexible, real-world reasoning that humans do naturally.

What's the solution?

The researchers created a new benchmark called RiddleBench, which contains 1,737 challenging puzzles written in English. These puzzles are designed to specifically test an AI's ability to combine logical deduction, spatial reasoning, and constraint satisfaction. They then tested several of the most advanced AI models, including those from Google, OpenAI, and Anthropic, on these riddles.

Why does it matter?

The results showed that even the best AI models struggle with RiddleBench, with top systems scoring only slightly above 60% accuracy. Their reasoning also proved fragile: performance dropped when the puzzle's constraints were simply reordered or when irrelevant details were added. This highlights significant weaknesses in their reasoning abilities. The benchmark isn't just about showing what AI *can't* do; it's a tool to help developers understand *why* AI fails and build more reliable and intelligent systems that can handle complex, real-world problems.
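To make the evaluation concrete, benchmarks like this typically compare a model's free-form answer against a gold answer and report accuracy. A minimal sketch of such a scoring loop (the dataset fields and the `model_answer` callable here are hypothetical stand-ins, not the actual RiddleBench release):

```python
# Minimal sketch of a puzzle-benchmark accuracy harness.
# The puzzle dict format and model_answer() are assumptions for
# illustration, not the real RiddleBench data format or API.

def normalize(ans: str) -> str:
    """Compare answers case- and whitespace-insensitively."""
    return " ".join(ans.lower().split())

def evaluate(puzzles, model_answer):
    """Return the fraction of puzzles the model answers correctly."""
    correct = sum(
        normalize(model_answer(p["question"])) == normalize(p["answer"])
        for p in puzzles
    )
    return correct / len(puzzles)

# Toy example with a stub "model" that always answers "blue":
puzzles = [
    {"question": "Which box is leftmost?", "answer": "blue"},
    {"question": "Who sits beside Amy?", "answer": "Ben"},
]
accuracy = evaluate(puzzles, lambda q: "blue")
print(accuracy)  # 0.5
```

Real generative benchmarks often need more careful answer matching (for example, extracting the final answer from a chain-of-thought transcript), but the accuracy computation itself is this simple.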

Abstract

Large Language Models have demonstrated strong performance on many established reasoning benchmarks. However, these benchmarks primarily evaluate structured skills like quantitative problem-solving, leaving a gap in assessing flexible, multifaceted reasoning abilities that are central to human intelligence. These abilities require integrating logical deduction with spatial awareness and constraint satisfaction, which current evaluations do not measure well. To address this, we introduce RiddleBench, a benchmark of 1,737 challenging puzzles in English designed to probe these core reasoning capabilities. Evaluation of state-of-the-art models on RiddleBench shows fundamental weaknesses. Even top proprietary models like Gemini 2.5 Pro, o3, and Claude 4 Sonnet achieve accuracy just above 60% (60.30%, 63.37%, and 63.16%). Analysis further reveals deep failures, including hallucination cascades (accepting flawed reasoning from other models) and poor self-correction due to a strong self-confirmation bias. Their reasoning is also fragile, with performance degrading significantly when constraints are reordered or irrelevant information is introduced. RiddleBench functions as a diagnostic tool for these issues and as a resource for guiding the development of more robust and reliable language models.
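The robustness failures the abstract describes (degradation under constraint reordering and irrelevant information) can be probed with simple input perturbations that preserve the puzzle's answer. A hedged sketch, assuming a one-constraint-per-line puzzle format that is an illustration rather than the paper's actual data layout:

```python
import random

# Two answer-preserving perturbations of the kind the abstract describes.
# The one-constraint-per-line puzzle format is an assumption for
# illustration, not the paper's actual format.

def reorder_constraints(puzzle: str, seed: int = 0) -> str:
    """Shuffle the order of a puzzle's constraint lines.

    The set of constraints (and thus the answer) is unchanged, so a
    robust reasoner should score the same on the shuffled version.
    """
    lines = puzzle.strip().split("\n")
    random.Random(seed).shuffle(lines)
    return "\n".join(lines)

def add_distractor(puzzle: str, distractor: str) -> str:
    """Append an irrelevant statement that should not change the answer."""
    return puzzle.strip() + "\n" + distractor

puzzle = (
    "Amy sits left of Ben.\n"
    "Ben sits left of Cara.\n"
    "Who is in the middle?"
)
print(reorder_constraints(puzzle))
print(add_distractor(puzzle, "Cara owns a red bicycle."))
```

Comparing accuracy on the original and perturbed versions of each puzzle isolates fragility: any gap is attributable to surface form rather than problem difficulty.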