CarePilot: A Multi-Agent Framework for Long-Horizon Computer Task Automation in Healthcare
Akash Ghosh, Tajamul Ashraf, Rishu Kumar Singh, Numan Saeed, Sriparna Saha, Xiuying Chen, Salman Khan
2026-03-26
Summary
This paper introduces a new way to automate complex, multi-step computer tasks in healthcare using artificial intelligence, focusing on workflows such as viewing medical images, navigating electronic health records, and managing lab results.
What's the problem?
Currently, AI systems that automate tasks handle simple, quick things well, like responding to commands on your phone. However, they struggle with longer, more complicated tasks that require remembering previous steps and understanding context, especially in specialized fields like healthcare, where workflows are highly detailed and require precise actions across multiple computer programs. Existing AI models simply aren't good at handling these long, multi-step workflows in a medical setting.
What's the solution?
The researchers created a new system called CarePilot. It works by using multiple 'agents' that learn through trial and error, similar to how humans learn. One agent, the 'Actor,' tries to predict the next best action to take based on what it sees on the screen and the current state of the system, using both short-term memory of recent actions and long-term memory of past experiences. Another agent, the 'Critic,' evaluates how good that action was and provides feedback to help the Actor improve. This process repeats, allowing the system to get better and better at completing complex medical tasks. They also created a new dataset, called CareFlow, to test and train these kinds of systems.
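The Actor-Critic loop described above can be illustrated with a minimal sketch. This is not the authors' implementation; every function and data structure here (`Memory`, `actor`, `critic`, `run_step`, the observation fields) is a hypothetical simplification showing how one agent proposes an action, the other evaluates it, and the shared dual memory is updated either way:

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    """Dual memory: a short window of recent steps plus a long-term store."""
    short_term: list = field(default_factory=list)
    long_term: list = field(default_factory=list)

    def record(self, step, window=5):
        self.short_term.append(step)
        self.short_term = self.short_term[-window:]  # keep only recent steps
        self.long_term.append(step)                  # keep full experience

def actor(observation, memory):
    """Hypothetical Actor: predicts the next semantic action from the
    visual interface state and both memories. Here, a trivial stub."""
    return {"action": "click", "target": observation["next_button"]}

def critic(action, observation):
    """Hypothetical Critic: evaluates the proposed action; returns
    approval plus corrective feedback when the action looks wrong."""
    ok = action["target"] in observation["clickable"]
    feedback = None if ok else "target not clickable; re-ground the action"
    return ok, feedback

def run_step(observation, memory):
    """One iteration: the Actor proposes, the Critic evaluates, and the
    action is either executed (and remembered) or sent back as feedback."""
    action = actor(observation, memory)
    approved, feedback = critic(action, observation)
    if approved:
        memory.record(action)                  # update memory with the effect
        return action
    memory.record({"feedback": feedback})      # Actor refines on the critique
    return None

mem = Memory()
obs = {"next_button": "Save Report", "clickable": ["Save Report", "Cancel"]}
result = run_step(obs, mem)
```

Repeating `run_step` over many simulated workflows is, in spirit, how the Actor accumulates the long-term experience it draws on at inference time.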
Why it matters?
This research is important because it paves the way for AI to take on more responsibility in healthcare, potentially reducing the workload on doctors and nurses and improving patient care. By creating a system that can reliably automate complex medical workflows, it could lead to faster diagnoses, more efficient treatment plans, and fewer errors. It’s a step towards making healthcare more accessible and efficient through the power of AI.
Abstract
Multimodal agentic pipelines are transforming human-computer interaction by enabling efficient and accessible automation of complex, real-world tasks. However, recent efforts have focused on short-horizon or general-purpose applications (e.g., mobile or desktop interfaces), leaving long-horizon automation for domain-specific systems, particularly in healthcare, largely unexplored. To address this, we introduce CareFlow, a high-quality human-annotated benchmark comprising complex, long-horizon software workflows across medical annotation tools, DICOM viewers, EHR systems, and laboratory information systems. On this benchmark, existing vision-language models (VLMs) perform poorly, struggling with long-horizon reasoning and multi-step interactions in medical contexts. To overcome this, we propose CarePilot, a multi-agent framework based on the actor-critic paradigm. The Actor integrates tool grounding with dual-memory mechanisms (long-term and short-term experience) to predict the next semantic action from the visual interface and system state. The Critic evaluates each action, updates memory based on observed effects, and either executes or provides corrective feedback to refine the workflow. Through iterative agentic simulation, the Actor learns to perform more robust and reasoning-aware predictions during inference. Our experiments show that CarePilot achieves state-of-the-art performance, outperforming strong closed-source and open-source multimodal baselines by approximately 15.26% and 3.38%, respectively, on our benchmark and out-of-distribution dataset.