OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction
Huang Huang, Fangchen Liu, Letian Fu, Tingfan Wu, Mustafa Mukadam, Jitendra Malik, Ken Goldberg, Pieter Abbeel
2025-03-12
Summary
This paper introduces OTTER, a robot AI that uses camera images and language instructions to choose the right actions by focusing only on the parts of the scene that matter for the task.
What's the problem?
Existing robot AI systems fine-tune pre-trained vision-language models, which degrades the understanding those models learned and leaves robots struggling with new objects or environments.
What's the solution?
OTTER keeps the pre-trained vision-language model frozen and extracts only the visual details that match the task instruction, like focusing on a specific tool mentioned in the command.
Why does it matter?
This helps robots handle new tasks better without retraining, making them more adaptable in homes, factories, or hospitals where they encounter unfamiliar objects.
Abstract
Vision-Language-Action (VLA) models aim to predict robotic actions based on visual observations and language instructions. Existing approaches require fine-tuning pre-trained vision-language models (VLMs) as visual and language features are independently fed into downstream policies, degrading the pre-trained semantic alignments. We propose OTTER, a novel VLA architecture that leverages these existing alignments through explicit, text-aware visual feature extraction. Instead of processing all visual features, OTTER selectively extracts and passes only task-relevant visual features that are semantically aligned with the language instruction to the policy transformer. This allows OTTER to keep the pre-trained vision-language encoders frozen. Thereby, OTTER preserves and utilizes the rich semantic understanding learned from large-scale pre-training, enabling strong zero-shot generalization capabilities. In simulation and real-world experiments, OTTER significantly outperforms existing VLA models, demonstrating strong zero-shot generalization to novel objects and environments. Video, code, checkpoints, and dataset: https://ottervla.github.io/.