
What Questions Should Robots Be Able to Answer? A Dataset of User Questions for Explainable Robotics

Lennart Wachowiak, Andrew Coles, Gerard Canal, Oya Celiktutan

2025-10-23

Summary

This paper studies what questions people actually ask robots that help around the house, and compiles a large dataset of those questions so that robot designers can build better conversational robots.

What's the problem?

As robots become more common in homes, they need to be able to understand and answer questions from people. Most research on robot explanations focuses on *why* a robot did something, but people ask a much wider range of questions. There wasn't a good resource available that captured all the different types of questions people would actually ask a household robot in various situations.

What's the solution?

The researchers gathered questions from 100 people by showing them videos and text descriptions of robots doing everyday household tasks. Participants were asked what they would want to ask the robot in each scenario. This produced a dataset of 1,893 questions, organized into 12 categories and 70 subcategories based on what each question is about – for example, how the robot performs a task, what it's capable of, or how it would handle tricky situations. The researchers also found that people with less robotics experience tend to ask simpler, more factual questions.
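To make the dataset's structure concrete, here is a minimal sketch of how its labeled questions might be represented and how the per-category percentages reported in the paper could be computed. The field names and example entries are assumptions for illustration, not the paper's actual schema.

```python
from collections import Counter

# Hypothetical records illustrating one possible layout for the
# dataset's questions; "text" and "category" are assumed field
# names, not the paper's actual schema.
questions = [
    {"text": "Why did you pick up the cup first?",
     "category": "task execution details"},
    {"text": "Can you reach the top shelf?",
     "category": "capabilities"},
    {"text": "How long did the task take?",
     "category": "task execution details"},
    {"text": "What would you do if the door were locked?",
     "category": "hypothetical scenarios"},
]

def category_frequencies(records):
    """Tally how often each question category appears,
    as a fraction of all questions."""
    counts = Counter(r["category"] for r in records)
    total = len(records)
    return {cat: n / total for cat, n in counts.items()}

freqs = category_frequencies(questions)
print(freqs["task execution details"])  # 0.5
```

Applied to the full dataset, this kind of tally is what yields figures like the 22.5% share reported for task-execution questions.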

Why it matters?

This dataset is important because it gives robot developers a clear idea of the kinds of questions their robots need to be able to answer. It can be used to test how well a robot's question-answering system works, and to design explanations that are actually helpful and understandable to people, whether they're robotics experts or complete beginners. Ultimately, it helps build robots that can interact with us more naturally and effectively.
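One of the uses mentioned above is benchmarking a robot's question-answering system. A minimal sketch of what per-category scoring could look like is below; the data layout, the `qa_fn` stand-in, and the exact-match scoring are all assumptions for illustration, not the paper's evaluation protocol.

```python
from collections import defaultdict

def evaluate_by_category(qa_fn, labeled_questions):
    """Score a question-answering function per question category.

    labeled_questions: list of (question, category, reference_answer)
    tuples. qa_fn: callable mapping a question string to an answer
    string. Exact string match is used as a deliberately simple
    scoring rule for this sketch.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for question, category, reference in labeled_questions:
        total[category] += 1
        if qa_fn(question).strip().lower() == reference.strip().lower():
            correct[category] += 1
    # Accuracy per category, so weak spots (e.g. hypothetical
    # scenarios) show up separately from easy factual questions.
    return {cat: correct[cat] / total[cat] for cat in total}

# Toy stand-in for a robot's QA module.
canned = {"What did you do?": "I tidied the table."}
qa_fn = lambda q: canned.get(q, "I don't know.")

data = [
    ("What did you do?", "simple facts", "I tidied the table."),
    ("Can you lift 5 kg?", "capabilities", "Yes."),
]
print(evaluate_by_category(qa_fn, data))
# {'simple facts': 1.0, 'capabilities': 0.0}
```

Breaking accuracy out by category matters here because, as the paper notes, the questions users rate as most important (handling difficult scenarios) are not the most frequent ones, so an overall average would hide failures on exactly those questions.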

Abstract

With the growing use of large language models and conversational interfaces in human-robot interaction, robots' ability to answer user questions is more important than ever. We therefore introduce a dataset of 1,893 user questions for household robots, collected from 100 participants and organized into 12 categories and 70 subcategories. Most work in explainable robotics focuses on why-questions. In contrast, our dataset provides a wide variety of questions, from questions about simple execution details to questions about how the robot would act in hypothetical scenarios -- thus giving roboticists valuable insights into what questions their robot needs to be able to answer. To collect the dataset, we created 15 video stimuli and 7 text stimuli, depicting robots performing varied household tasks. We then asked participants on Prolific what questions they would want to ask the robot in each portrayed situation. In the final dataset, the most frequent categories are questions about task execution details (22.5%), the robot's capabilities (12.7%), and performance assessments (11.3%). Although questions about how robots would handle potentially difficult scenarios and ensure correct behavior are less frequent, users rank them as the most important for robots to be able to answer. Moreover, we find that users who identify as novices in robotics ask different questions than more experienced users. Novices are more likely to inquire about simple facts, such as what the robot did or the current state of the environment. As robots enter environments shared with humans and language becomes central to giving instructions and interaction, this dataset provides a valuable foundation for (i) identifying the information robots need to log and expose to conversational interfaces, (ii) benchmarking question-answering modules, and (iii) designing explanation strategies that align with user expectations.