CAR-bench: Evaluating the Consistency and Limit-Awareness of LLM Agents under Real-World Uncertainty
Johannes Kirmayr, Lukas Stappen, Elisabeth André
2026-02-06
Summary
This paper introduces a new way to test how well large language models (LLMs) can act as helpful assistants in situations where things aren't perfectly clear, like when a user gives a vague request. It focuses on how these models perform in a realistic setting, specifically an in-car voice assistant.
What's the problem?
Current tests for LLM agents are too simple: they assume the user is always clear and that everything works perfectly. In reality, users often give incomplete or confusing instructions, and sometimes the assistant lacks the tools it needs to complete a task. This creates uncertainty that the assistant must handle gracefully, but existing tests don't check for this ability: they don't evaluate whether the LLM knows its limits or can ask for clarification when needed.
What's the solution?
The researchers created a benchmark called CAR-bench, which simulates an in-car assistant environment. This environment includes a simulated user who can be unclear, rules the assistant must follow, and 58 different tools the assistant can use (like navigation, making calls, or controlling the car). CAR-bench includes special tasks designed to test how well the assistant handles situations where it doesn't have the right tools or information, and how well it can ask clarifying questions when a request is ambiguous.
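To make the setup concrete, here is a minimal, hypothetical sketch of how such an evaluation loop could be structured. The tool names, the agent and simulated-user interfaces, and the pass checks are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a CAR-bench-style episode (illustrative, not the paper's code).
# An agent converses with an LLM-simulated user and may call domain tools; the task
# passes only if the resulting interaction trace satisfies the task's check.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    fn: Callable[..., str]

# Illustrative subset of the 58 domain tools (navigation, calls, vehicle control, ...).
TOOLS = {
    "set_destination": Tool("set_destination", "Start navigation to an address",
                            lambda address: f"navigating to {address}"),
    "call_contact": Tool("call_contact", "Call a contact by name",
                         lambda name: f"calling {name}"),
    "set_cabin_temperature": Tool("set_cabin_temperature", "Set HVAC temperature (°C)",
                                  lambda degrees: f"temperature set to {degrees}"),
}

@dataclass
class Task:
    instruction: str               # what the simulated user wants
    task_type: str                 # "standard", "hallucination", or "disambiguation"
    check: Callable[[list], bool]  # pass/fail check over the interaction trace

def run_episode(agent, simulated_user, task: Task, max_turns: int = 10) -> bool:
    """Alternate agent turns and simulated-user turns, executing tool calls in between."""
    trace = []
    observation = simulated_user.first_message(task.instruction)
    for _ in range(max_turns):
        action = agent.step(observation, tools=TOOLS)  # a tool call or a user-facing reply
        trace.append(action)
        if action["type"] == "tool_call":
            tool = TOOLS.get(action["name"])
            # Hallucination tasks: the tool the user implicitly needs may not exist.
            observation = tool.fn(**action["args"]) if tool else "error: unknown tool"
        else:
            # A message to the user, e.g. a clarifying question or a stated limitation.
            observation = simulated_user.respond(action["content"])
            if simulated_user.is_done():
                break
    return task.check(trace)
```

In this framing, a Disambiguation task would only pass if the trace shows a clarifying question (or an internal lookup) before the decisive tool call, while a Hallucination task passes when the agent acknowledges the missing tool or information instead of inventing a result.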
Why it matters?
This work is important because it shows that even the most advanced LLMs struggle with real-world situations where things aren't straightforward. They often make things up, violate rules, or act too quickly without gathering enough information. This highlights the need to build LLM agents that are more reliable, self-aware, and better at handling uncertainty before they can be safely deployed in applications like in-car voice assistants and other personal assistants.
Abstract
Existing benchmarks for Large Language Model (LLM) agents focus on task completion under idealistic settings but overlook reliability in real-world, user-facing applications. In domains such as in-car voice assistants, users often issue incomplete or ambiguous requests, creating intrinsic uncertainty that agents must manage through dialogue, tool use, and policy adherence. We introduce CAR-bench, a benchmark for evaluating consistency, uncertainty handling, and capability awareness in multi-turn, tool-using LLM agents in an in-car assistant domain. The environment features an LLM-simulated user, domain policies, and 58 interconnected tools spanning navigation, productivity, charging, and vehicle control. Beyond standard task completion, CAR-bench introduces Hallucination tasks that test agents' limit-awareness under missing tools or information, and Disambiguation tasks that require resolving uncertainty through clarification or internal information gathering. Baseline results reveal large gaps between occasional and consistent success on all task types. Even frontier reasoning LLMs achieve less than a 50% consistent pass rate on Disambiguation tasks due to premature actions, and they frequently violate policies or fabricate information to satisfy user requests in Hallucination tasks, underscoring the need for more reliable and self-aware LLM agents in real-world settings.
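For context on the gap between occasional and consistent success, the sketch below assumes a pass^k-style measure over k repeated trials per task; the paper's exact metric may differ, so treat these definitions as illustrative.

```python
# Illustrative "occasional" vs. "consistent" pass rates over k repeated trials per task
# (an assumed pass^k-style definition, not necessarily the paper's exact metric):
# a task passes occasionally if at least one trial succeeds, consistently if all do.

def pass_rates(results: dict[str, list[bool]]) -> tuple[float, float]:
    """results maps task id -> outcomes of k independent trials."""
    occasional = sum(any(trials) for trials in results.values()) / len(results)
    consistent = sum(all(trials) for trials in results.values()) / len(results)
    return occasional, consistent

# Example: 3 hypothetical tasks, 4 trials each.
outcomes = {
    "disambiguation_01": [True, True, False, True],   # occasional, not consistent
    "hallucination_02":  [True, True, True, True],    # consistent
    "standard_03":       [False, False, False, False],
}
occ, con = pass_rates(outcomes)
print(f"occasional pass rate: {occ:.2f}, consistent pass rate: {con:.2f}")
# -> occasional pass rate: 0.67, consistent pass rate: 0.33
```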