CrossWordBench: Evaluating the Reasoning Capabilities of LLMs and LVLMs with Controllable Puzzle Generation
Jixuan Leng, Chengsong Huang, Langlin Huang, Bill Yuchen Lin, William W. Cohen, Haohan Wang, Jiaxin Huang
2025-04-09
Summary
This paper introduces CrossWordBench, a benchmark that uses crossword puzzles to test how well AI models solve problems requiring both reading text clues and understanding visual grid layouts.
What's the problem?
Current benchmarks for AI models focus on either text-based reasoning or image understanding, but rarely both at once, so it is hard to tell whether models can handle tasks that, like many real-world problems, mix words and visuals.
What's the solution?
CrossWordBench generates customizable crossword puzzles in both text and image formats, testing AI models on tasks that demand combined language and visual skills and revealing which models can integrate these abilities effectively.
Why it matters?
This helps improve AI assistants for tasks like solving real puzzles, understanding complex diagrams, or following instructions that involve both text and images, making them smarter and more versatile.
Abstract
Existing reasoning evaluation frameworks for Large Language Models (LLMs) and Large Vision-Language Models (LVLMs) predominantly either assess text-based reasoning or vision-language understanding capabilities, with limited dynamic interplay between textual and visual constraints. To address this limitation, we introduce CrossWordBench, a benchmark designed to evaluate the reasoning capabilities of both LLMs and LVLMs through the medium of crossword puzzles, a task requiring multimodal adherence to semantic constraints from text-based clues and intersectional constraints from visual grid structures. CrossWordBench leverages a controllable puzzle generation framework that produces puzzles in multiple formats (text and image) and offers different evaluation strategies ranging from direct puzzle solving to interactive modes. Our extensive evaluation of over 20 models reveals that reasoning LLMs outperform non-reasoning models substantially by effectively leveraging crossing-letter constraints. We further demonstrate that LVLMs struggle with the task, showing a strong correlation between their puzzle-solving performance and grid-parsing accuracy. Our findings offer insights into the limitations of the reasoning capabilities of current LLMs and LVLMs, and provide an effective approach for creating multimodal constrained tasks for future evaluations.
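To make the abstract's notion of "crossing-letter constraints" concrete, here is a minimal sketch (not the paper's actual code; the grid representation and function name are assumptions for illustration) of how one could verify that across and down answers in a proposed crossword solution agree at every cell where they intersect:

```python
# Hypothetical illustration of crossing-letter constraint checking.
# across/down map a (row, col) starting cell to an answer string;
# across answers fill left-to-right, down answers fill top-to-bottom.
def check_crossings(across, down):
    """Return a list of cells where two answers place different letters."""
    grid = {}       # cell -> letter placed so far
    conflicts = []
    for (r, c), word in across.items():
        for i, ch in enumerate(word):
            cell = (r, c + i)
            if cell in grid and grid[cell] != ch:
                conflicts.append(cell)
            grid[cell] = ch
    for (r, c), word in down.items():
        for i, ch in enumerate(word):
            cell = (r + i, c)
            if cell in grid and grid[cell] != ch:
                conflicts.append(cell)
            grid[cell] = ch
    return conflicts

# "CAT" across and "CAR" down share a consistent 'C' at (0, 0):
print(check_crossings({(0, 0): "CAT"}, {(0, 0): "CAR"}))  # -> []
# "BAT" down starting at (0, 1) clashes with the 'A' of "CAT":
print(check_crossings({(0, 0): "CAT"}, {(0, 1): "BAT"}))  # -> [(0, 1)]
```

Constraints of this kind are what allow a strong reasoning model to propagate a confidently answered clue into partial evidence for the clues it crosses.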