Lost in Time: Clock and Calendar Understanding Challenges in Multimodal LLMs
Rohit Saxena, Aryo Pradipta Gema, Pasquale Minervini
2025-02-10
Summary
This paper examines how well AI models that can understand both text and images (called multimodal large language models, or MLLMs) can read clocks and calendars. The researchers created dedicated tests to see whether these AIs can extract time and date information from visual input.
What's the problem?
Even though reading clocks and calendars is easy for humans, it's surprisingly hard for AI. These models often struggle to accurately tell time from clock faces or figure out dates from calendar images, which limits how well they can understand and work with time-related information in real-world situations.
What's the solution?
The researchers built two new sets of test questions: ClockQA, covering different types of clocks, and CalendarQA, covering yearly calendars. They used these to test how well AI models could recognize visual details, do arithmetic with numbers, and reason about time. They checked whether the AIs could handle both simple questions (like which day of the week Christmas falls on) and trickier ones (like finding the 100th day of the year).
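The "trickier" CalendarQA questions reduce to simple date arithmetic that is easy to state in code. A minimal sketch in Python (the function name is my own, not from the paper):

```python
from datetime import date, timedelta

def nth_day_of_year(year: int, n: int) -> date:
    """Return the calendar date of the n-th day of the given year (1-based)."""
    return date(year, 1, 1) + timedelta(days=n - 1)

# The 100th day of 2024 (a leap year) is April 9
d = nth_day_of_year(2024, 100)
print(d, d.strftime("%A"))  # 2024-04-09 Tuesday
```

A model answering such a question from a calendar image cannot call a routine like this: it has to combine visual recognition of the grid with the equivalent counting, which is exactly the skill the benchmark probes.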
Why does it matter?
This matters because understanding time is crucial for AI to work well in everyday situations. If AI assistants can't read clocks or calendars correctly, they might give wrong information for scheduling, planning, or answering time-related questions. By showing where current AI models struggle with time, this research helps developers know what to improve to make AI more reliable and useful in real-world applications.
Abstract
Understanding time from visual representations is a fundamental cognitive skill, yet it remains a challenge for multimodal large language models (MLLMs). In this work, we investigate the capabilities of MLLMs in interpreting time and date through analogue clocks and yearly calendars. To facilitate this, we curated a structured dataset comprising two subsets: 1) ClockQA, which comprises various types of clock styles (standard, black-dial, no-second-hand, Roman numeral, and arrow-hand clocks) paired with time-related questions; and 2) CalendarQA, which consists of yearly calendar images with questions ranging from commonly known dates (e.g., Christmas, New Year's Day) to computationally derived ones (e.g., the 100th or 153rd day of the year). We aim to analyse how MLLMs perform visual recognition, numerical reasoning, and temporal inference when presented with time-related visual data. Our evaluations show that, despite recent advancements, reliably understanding time remains a significant challenge for MLLMs.