Visual Backdoor Attacks on MLLM Embodied Decision Making via Contrastive Trigger Learning
Qiusi Zhan, Hyeonjeong Ha, Rui Yang, Sirui Xu, Hanyang Chen, Liang-Yan Gui, Yu-Xiong Wang, Huan Zhang, Heng Ji, Daniel Kang
2025-11-03
Summary
This paper demonstrates a security vulnerability in embodied agents controlled by multimodal large language models (MLLMs) in virtual environments. These agents 'see' the world and make decisions based on what they observe, and the research shows how an attacker could covertly control them through hidden visual cues.
What's the problem?
As these AI agents become more capable of understanding and interacting with the visual world, they become vulnerable to 'visual backdoor attacks'. Imagine an agent that normally behaves correctly, but when it sees a specific object – like a certain brand of coffee mug – it suddenly starts following a malicious set of instructions. The challenge is making these hidden triggers reliable; objects look different from various angles and in different lighting, making it hard to consistently activate the backdoor.
What's the solution?
The researchers developed a framework called BEAT to reliably inject these visual backdoors. They did this in two main steps. First, they constructed a diverse training dataset showing the trigger object across many scenes, tasks, and placements, so the agent is exposed to the trigger's natural variability. Second, they applied a two-stage training scheme: standard supervised fine-tuning followed by a technique called Contrastive Trigger Learning (CTL). CTL frames trigger recognition as preference learning between inputs *with* the trigger and inputs *without* it, teaching the model to strongly associate the trigger object with the malicious behavior so that backdoor activation is precise and consistent.
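The paper does not spell out the CTL objective here, but preference learning of this kind is often implemented as a margin between the model's log-probability of the attacker-specified action sequence under a trigger-present observation versus a matched trigger-free one. The sketch below is a hypothetical, minimal version of such a loss (the function name, `beta` temperature, and sigmoid form are illustrative assumptions, not BEAT's actual formulation):

```python
import math

def ctl_loss(logp_trigger: float, logp_clean: float, beta: float = 1.0) -> float:
    """Hypothetical sketch of a preference-style contrastive trigger loss.

    logp_trigger: log-prob of the malicious action sequence given the
        trigger-present observation.
    logp_clean:   log-prob of the same sequence given the matched
        trigger-free observation.

    The loss is small when the model prefers the malicious actions only
    when the trigger is visible, sharpening the activation boundary.
    """
    margin = beta * (logp_trigger - logp_clean)
    # -log sigmoid(margin): standard pairwise preference loss
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A model that activates the backdoor only under the trigger incurs
# low loss; one that activates it on clean inputs incurs high loss.
low = ctl_loss(logp_trigger=-1.0, logp_clean=-5.0)
high = ctl_loss(logp_trigger=-5.0, logp_clean=-1.0)
```

Pairing each trigger-present input with its trigger-free counterpart is what explicitly sharpens the decision boundary, rather than relying on the model to infer the distinction from supervised examples alone.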
Why does it matter?
This research is important because it reveals a significant security risk in these increasingly sophisticated AI systems. If these agents are deployed in the real world – for example, controlling robots or self-driving cars – a malicious actor could potentially take control by exploiting these visual backdoors. The findings highlight the urgent need to develop ways to protect these AI agents from such attacks before they become widespread.
Abstract
Multimodal large language models (MLLMs) have advanced embodied agents by enabling direct perception, reasoning, and planning task-oriented actions from visual inputs. However, such vision-driven embodied agents open a new attack surface: visual backdoor attacks, where the agent behaves normally until a visual trigger appears in the scene, then persistently executes an attacker-specified multi-step policy. We introduce BEAT, the first framework to inject such visual backdoors into MLLM-based embodied agents using objects in the environments as triggers. Unlike textual triggers, object triggers exhibit wide variation across viewpoints and lighting, making them difficult to implant reliably. BEAT addresses this challenge by (1) constructing a training set that spans diverse scenes, tasks, and trigger placements to expose agents to trigger variability, and (2) introducing a two-stage training scheme that first applies supervised fine-tuning (SFT) and then our novel Contrastive Trigger Learning (CTL). CTL formulates trigger discrimination as preference learning between trigger-present and trigger-free inputs, explicitly sharpening the decision boundaries to ensure precise backdoor activation. Across various embodied agent benchmarks and MLLMs, BEAT achieves attack success rates up to 80%, while maintaining strong benign task performance, and generalizes reliably to out-of-distribution trigger placements. Notably, compared to naive SFT, CTL boosts backdoor activation accuracy by up to 39% under limited backdoor data. These findings expose a critical yet unexplored security risk in MLLM-based embodied agents, underscoring the need for robust defenses before real-world deployment.