Steerable Visual Representations
Jona Ruthardt, Manu Gaur, Deva Ramanan, Makarand Tapaswi, Yuki M. Asano
2026-04-03
Summary
This paper introduces a new way to create visual representations of images whose focus can be controlled with text prompts, offering more flexibility than existing methods.
What's the problem?
Current image recognition systems, like those using Vision Transformers, are good at identifying the most obvious things in a picture but struggle to focus on specific, less noticeable details you might want them to find. On the other hand, systems that combine images and text (like those using large language models) tend to prioritize the text and lose some of their ability to generally understand what's in the image. Essentially, it's hard to get a system that's both good at understanding images and responsive to specific instructions.
What's the solution?
The researchers developed 'Steerable Visual Representations', which allow you to 'steer' the image understanding process using natural language. Instead of combining text and image information *after* the image has been fully processed (as most systems do), they inject text directly into the image encoder itself, early on. They do this using a technique called 'cross-attention', which is relatively lightweight, meaning it doesn't require much extra computing power. They also created benchmarks to measure how well representations can be steered, and showed that their system works well on tasks like finding unusual items and recognizing specific objects for different people.
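The summary doesn't spell out the exact layer design, but the core idea (image patch tokens acting as queries that attend to text prompt tokens via cross-attention, added residually inside the visual encoder) can be sketched in plain NumPy. All names, shapes, and the single-head formulation below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(img_tokens, txt_tokens, Wq, Wk, Wv):
    """Image patch tokens (queries) attend to text tokens (keys/values)."""
    Q = img_tokens @ Wq                      # (n_img, d)
    K = txt_tokens @ Wk                      # (n_txt, d)
    V = txt_tokens @ Wv                      # (n_txt, d)
    scores = Q @ K.T / np.sqrt(Q.shape[-1])  # (n_img, n_txt)
    return softmax(scores) @ V               # (n_img, d)

rng = np.random.default_rng(0)
d = 16
img = rng.normal(size=(4, d))   # 4 image patch tokens (illustrative)
txt = rng.normal(size=(3, d))   # 3 text prompt tokens (illustrative)
Wq, Wk, Wv = (0.1 * rng.normal(size=(d, d)) for _ in range(3))

# "Early fusion": the text signal is mixed in via a residual
# cross-attention update inside an encoder layer, so the backbone's
# visual features are nudged toward the prompt rather than replaced.
steered = img + cross_attention(img, txt, Wq, Wk, Wv)
print(steered.shape)  # (4, 16)
```

Because the update is residual and the cross-attention weights are small add-on modules, the original visual representation is largely preserved, which is consistent with the lightweight, quality-preserving fusion the paper describes.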
Why it matters?
This research is important because it bridges the gap between general image understanding and the ability to focus on specific details guided by text. This could be useful in many applications, like finding specific defects in manufacturing, helping doctors identify subtle signs of disease in medical images, or creating more personalized image search experiences. The fact that it works well even on tasks it wasn't trained for ('zero-shot generalization') is particularly promising.
Abstract
Pretrained Vision Transformers (ViTs) such as DINOv2 and MAE provide generic image features that can be applied to a variety of downstream tasks such as retrieval, classification, and segmentation. However, such representations tend to focus on the most salient visual cues in the image, with no way to direct them toward less prominent concepts of interest. In contrast, Multimodal LLMs can be guided with textual prompts, but the resulting representations tend to be language-centric and lose their effectiveness for generic visual tasks. To address this, we introduce Steerable Visual Representations, a new class of visual representations, whose global and local features can be steered with natural language. While most vision-language models (e.g., CLIP) fuse text with visual features after encoding (late fusion), we inject text directly into the layers of the visual encoder (early fusion) via lightweight cross-attention. We introduce benchmarks for measuring representational steerability, and demonstrate that our steerable visual features can focus on any desired objects in an image while preserving the underlying representation quality. Our method also matches or outperforms dedicated approaches on anomaly detection and personalized object discrimination, exhibiting zero-shot generalization to out-of-distribution tasks.