SAM 3: Segment Anything with Concepts

Nicolas Carion, Laura Gustafson, Yuan-Ting Hu, Shoubhik Debnath, Ronghang Hu, Didac Suris, Chaitanya Ryali, Kalyan Vasudev Alwala, Haitham Khedr, Andrew Huang, Jie Lei, Tengyu Ma, Baishan Guo, Arpit Kalla, Markus Marks, Joseph Greer, Meng Wang, Peize Sun, Roman Rädle, Triantafyllos Afouras, Effrosyni Mavroudi, Katherine Xu

2025-11-24

Summary

This paper introduces Segment Anything Model 3, or SAM 3, a new AI system that can identify, outline, and follow objects in both images and videos. What makes it special is that you can tell it *what* to look for using simple descriptions like 'yellow school bus', show it an example image, or even combine both!
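
To make the prompting idea concrete, here is a minimal sketch of what a concept-prompted call could look like. Everything here (the `SAM3` class, `segment_concepts`, `ExemplarBox`, `Instance`) is a hypothetical illustration of the idea described in the paper, not the released SAM 3 API.

```python
# A minimal sketch of concept-prompted segmentation. All names here
# (SAM3, ExemplarBox, segment_concepts, Instance) are hypothetical
# illustrations, NOT the released SAM 3 API.
from dataclasses import dataclass


@dataclass
class ExemplarBox:
    """An example region (x0, y0, x1, y1) marking one instance of the concept."""
    x0: float
    y0: float
    x1: float
    y1: float
    positive: bool = True  # a negative exemplar shows the model what to avoid


@dataclass
class Instance:
    """One matching object: a segmentation mask plus a stable identity."""
    id: int
    mask: object  # stand-in for an HxW boolean array


class SAM3:
    """Placeholder model exposing the three prompt styles described above."""

    def segment_concepts(self, image, phrase=None, exemplars=None):
        """Return an Instance for every object matching the concept prompt."""
        return []  # placeholder: a real model would run detection here


model = SAM3()
image = "frame_0001.jpg"  # stand-in for a loaded image

# 1) Text prompt: a short noun phrase.
buses = model.segment_concepts(image, phrase="yellow school bus")

# 2) Image exemplar: box one bus, and get masks + ids for all the others too.
buses = model.segment_concepts(image, exemplars=[ExemplarBox(40, 80, 210, 190)])

# 3) Combined prompt: phrase plus an exemplar to disambiguate.
buses = model.segment_concepts(
    image,
    phrase="yellow school bus",
    exemplars=[ExemplarBox(40, 80, 210, 190)],
)
```

The key contrast with earlier Segment Anything models is the prompt type: instead of clicking points or boxes on one specific object, a concept prompt asks for *every* instance of a category in the scene.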

What's the problem?

Existing computer vision systems struggle to accurately find and separate specific objects in images and videos, especially when those objects are described in everyday language. They often require a lot of specific training data for each object category they're asked to recognize, and they aren't very good at tracking objects as they move around in a video. Basically, it's hard to get a computer to 'understand' what you mean when you ask it to find something.

What's the solution?

The researchers created SAM 3, which uses a new approach called Promptable Concept Segmentation (PCS). They built a huge dataset of images and videos with over four million unique concept labels, including 'hard negatives', examples of things that look similar but *aren't* the concept you're looking for, to help the AI learn what to avoid. SAM 3 has two main parts: an image-level detector that finds objects in a single frame, and a memory-based tracker that follows those objects through a video, both built on a single shared backbone network. They also added a 'presence head' that separates the question of *whether* a concept appears at all from *where* each instance is, which makes detection more accurate.
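
To make the two-part design concrete, here is a heavily simplified, PyTorch-style sketch of the pieces just described: a shared backbone, a detector whose presence head scores *whether* the concept appears separately from *where* each instance sits, and a tracker that remembers past frames. Every module name, size, and wiring choice below is an invented illustration, not the actual SAM 3 implementation.

```python
# Schematic sketch of the design described above: one shared backbone,
# an image-level detector with a decoupled "presence head", and a
# memory-based video tracker. All modules here are simplified stand-ins.
import torch
import torch.nn as nn


class PresenceDetector(nn.Module):
    def __init__(self, dim: int = 256, num_queries: int = 100):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim))
        self.decoder = nn.TransformerDecoderLayer(dim, nhead=8, batch_first=True)
        # Decoupled heads: "is the concept present?" vs. "where is each instance?"
        self.presence_head = nn.Linear(dim, 1)  # one concept-level yes/no score
        self.box_head = nn.Linear(dim, 4)       # per-instance localization
        self.mask_head = nn.Linear(dim, dim)    # per-instance mask embedding

    def forward(self, features: torch.Tensor, prompt: torch.Tensor):
        # Condition object queries on the concept prompt, then read out
        # presence, boxes, and mask embeddings from the decoded queries.
        q = self.queries.unsqueeze(0) + prompt.unsqueeze(1)
        decoded = self.decoder(q, features)
        presence = self.presence_head(decoded.mean(dim=1))  # (batch, 1)
        return presence, self.box_head(decoded), self.mask_head(decoded)


class MemoryTracker(nn.Module):
    """Carries detections across frames by attending over a feature memory bank."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.memory: list[torch.Tensor] = []
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)

    def forward(self, frame_features: torch.Tensor) -> torch.Tensor:
        if self.memory:
            past = torch.cat(self.memory, dim=1)
            frame_features, _ = self.attn(frame_features, past, past)
        self.memory.append(frame_features.detach())
        return frame_features


class SAM3Sketch(nn.Module):
    def __init__(self, dim: int = 256):
        super().__init__()
        # A single backbone feeds both the detector and the tracker.
        self.backbone = nn.Conv2d(3, dim, kernel_size=16, stride=16)
        self.detector = PresenceDetector(dim)
        self.tracker = MemoryTracker(dim)

    def forward(self, frame: torch.Tensor, prompt: torch.Tensor):
        feats = self.backbone(frame).flatten(2).transpose(1, 2)  # (B, HW, dim)
        feats = self.tracker(feats)                              # temporal memory
        return self.detector(feats, prompt)


frames = torch.randn(2, 4, 3, 64, 64)  # (batch, time, channels, H, W)
prompt = torch.randn(2, 256)           # stand-in for an encoded noun phrase
net = SAM3Sketch()
for t in range(frames.shape[1]):
    presence, boxes, masks = net(frames[:, t], prompt)
print(presence.shape, boxes.shape)  # torch.Size([2, 1]) torch.Size([2, 100, 4])
```

The intuition behind decoupling, as the abstract puts it, is that a single image-level presence score can veto spurious per-instance boxes when the concept isn't there at all, rather than making every box prediction also carry the burden of recognition.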

Why it matters?

SAM 3 is a big step forward because it roughly doubles the accuracy of existing systems at finding and segmenting objects from simple prompts, and it also improves on the original SAM's visual segmentation abilities. By releasing SAM 3 along with a new benchmark, Segment Anything with Concepts (SA-Co), the researchers hope to encourage further development in this area and make it easier for anyone to build applications that can 'see' and understand the world around them.

Abstract

We present Segment Anything Model (SAM) 3, a unified model that detects, segments, and tracks objects in images and videos based on concept prompts, which we define as either short noun phrases (e.g., "yellow school bus"), image exemplars, or a combination of both. Promptable Concept Segmentation (PCS) takes such prompts and returns segmentation masks and unique identities for all matching object instances. To advance PCS, we build a scalable data engine that produces a high-quality dataset with 4M unique concept labels, including hard negatives, across images and videos. Our model consists of an image-level detector and a memory-based video tracker that share a single backbone. Recognition and localization are decoupled with a presence head, which boosts detection accuracy. SAM 3 doubles the accuracy of existing systems in both image and video PCS, and improves previous SAM capabilities on visual segmentation tasks. We open source SAM 3 along with our new Segment Anything with Concepts (SA-Co) benchmark for promptable concept segmentation.