"What Are You Doing?": Effects of Intermediate Feedback from Agentic LLM In-Car Assistants During Multi-Step Processing
Johannes Kirmayr, Raphael Wennmacher, Khanh Huynh, Lukas Stappen, Elisabeth André, Florian Alt
2026-02-20
Summary
This research explores how in-car AI assistants should communicate what they're doing while completing multi-step tasks, such as changing settings or planning a route, so that drivers feel informed and comfortable without being distracted.
What's the problem?
When AI assistants handle complex tasks, it's unclear how much information they should give the user during the process. Should they explain every step, just give the final result, or something in between? This is especially important in cars, where drivers need to stay focused on the road and can't afford to be distracted by an overly chatty or confusing AI.
What's the solution?
Researchers conducted an experiment with 45 people in a driving simulator, testing three feedback styles: the assistant either announced each step it was about to take, reported intermediate results as it went, or stayed completely silent until the task was finished. They measured how fast the task felt to complete, how much people trusted the AI, how demanding the task felt, and their overall experience.
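To make the three conditions concrete, here is a minimal, hypothetical sketch of where each one lets an agent loop speak. Everything below (the `FeedbackMode` names, `run_task`, and the stubbed tool calls) is illustrative, not the study's actual implementation:

```python
# Hypothetical sketch: the three feedback conditions as points where an
# agent loop chooses to speak. Names and structure are illustrative only.
from enum import Enum, auto
from typing import Callable

class FeedbackMode(Enum):
    PLANNED_STEPS = auto()         # announce each step before executing it
    INTERMEDIATE_RESULTS = auto()  # report each step's outcome as it lands
    FINAL_ONLY = auto()            # stay silent until the task is done

def run_task(steps: list[tuple[str, Callable[[], str]]],
             mode: FeedbackMode,
             speak: Callable[[str], None]) -> None:
    results = []
    for description, execute in steps:
        if mode is FeedbackMode.PLANNED_STEPS:
            speak(f"Next, I'll {description}.")
        result = execute()
        results.append(result)
        if mode is FeedbackMode.INTERMEDIATE_RESULTS:
            speak(f"Done: {result}")
    # every condition ends with a final response
    speak("All done. " + " ".join(results))

# Example usage with stubbed tool calls:
steps = [
    ("check chargers along the route", lambda: "Found 3 fast chargers."),
    ("pick the one with the shortest detour", lambda: "Chose the A9 station."),
]
run_task(steps, FeedbackMode.INTERMEDIATE_RESULTS, speak=print)
```

The contrast between conditions comes down to where `speak()` fires: before each step, after each step, or only once at the end.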
Why does it matter?
The study found that giving drivers updates during a task – but not *too* many – made them more confident in the AI, reduced their mental effort, and improved their overall experience. It also suggests an adaptive strategy: the assistant should start out very explicit about what it's doing to build trust, then become less talkative as it proves itself reliable, adjusting its verbosity to how important the task is and to the current driving situation.
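A policy like the one participants described could be sketched roughly as follows; the thresholds, inputs, and names (`Context`, `choose_verbosity`) are assumptions for illustration, not something the paper specifies:

```python
# Hypothetical adaptive-verbosity policy reflecting the interview findings:
# verbose at first, quieter as the assistant proves reliable, and dialed
# back up for high-stakes tasks or demanding driving situations.
from dataclasses import dataclass

@dataclass
class Context:
    successful_tasks: int   # how often the assistant has succeeded so far
    high_stakes: bool       # e.g. a navigation change vs. ambient lighting
    driving_demand: float   # 0.0 (parked) .. 1.0 (dense traffic)

def choose_verbosity(ctx: Context) -> str:
    """Return 'planned_steps', 'intermediate_results', or 'final_only'."""
    if ctx.high_stakes:
        return "planned_steps"          # keep the driver in the loop
    if ctx.driving_demand > 0.7:
        return "final_only"             # minimize distraction under load
    if ctx.successful_tasks < 5:
        return "planned_steps"          # build trust early with transparency
    if ctx.successful_tasks < 20:
        return "intermediate_results"
    return "final_only"                 # earned terseness once trusted

print(choose_verbosity(Context(successful_tasks=2,
                               high_stakes=False,
                               driving_demand=0.3)))
# -> 'planned_steps'
```

In practice the reliability signal would likely come from task success rates and user corrections rather than a simple counter, but the shape of the policy is the same: transparency first, terseness earned.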
Abstract
Agentic AI assistants that autonomously perform multi-step tasks raise open questions for user experience: how should such systems communicate progress and reasoning during extended operations, especially in attention-critical contexts such as driving? We investigate feedback timing and verbosity from agentic LLM-based in-car assistants through a controlled, mixed-methods study (N=45) comparing planned-steps and intermediate-results feedback against silent operation with a final-only response. Using a dual-task paradigm with an in-car voice assistant, we found that intermediate feedback significantly improved perceived speed, trust, and user experience while reducing task load – effects that held across varying task complexities and interaction contexts. Interviews further revealed user preferences for an adaptive approach: high initial transparency to establish trust, followed by progressively reducing verbosity as systems prove reliable, with adjustments based on task stakes and situational context. We translate our empirical findings into design implications for feedback timing and verbosity in agentic assistants, balancing transparency and efficiency.