Selective Contrastive Learning for Weakly Supervised Affordance Grounding
WonJun Moon, Hyun Seok Seong, Jae-Pil Heo
2025-08-25
Summary
This paper focuses on teaching computers to understand which parts of an object are used for specific actions, like knowing that a door handle is the part you grasp to open the door. It learns this from images of people demonstrating actions, without needing someone to label the exact part in every single image.
What's the problem?
Current methods struggle because they often focus on recognizing the *whole* object instead of the specific *part* that's important for an action. For example, a model might learn to recognize a 'chair' generally, but not specifically the 'seat' as the part you sit on. They get distracted by the overall appearance and miss the subtle cues about what makes an action possible.
What's the solution?
The researchers developed a technique that helps the model focus on the relevant parts. First, they use CLIP, a model that matches images with text, to identify the object involved in an action in both a first-person view (as if you were looking at the object yourself) and a third-person view (as if you were watching someone else use it). Then, by cross-referencing the two views, they pinpoint the specific parts that matter for the action. Finally, a contrastive objective encourages the model to separate these useful parts from the background, so its attention shifts to the right regions; a rough sketch of the idea follows.
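The snippet below is a minimal sketch of the first step only: using off-the-shelf CLIP (via huggingface transformers) to score candidate action-object phrases against an egocentric and an exocentric image and keep the object both views agree on. It is not the authors' implementation; the checkpoint name, prompt template, candidate object list, and the file paths `ego.jpg` / `exo.jpg` are placeholders.

```python
# Hypothetical sketch: CLIP-based discovery of the action-associated object
# in egocentric and exocentric views. Placeholder prompts and image paths.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

action = "hold"
objects = ["cup", "knife", "bottle"]  # illustrative candidate objects
prompts = [f"a photo of a {obj} to {action}" for obj in objects]
images = [Image.open(p).convert("RGB") for p in ["ego.jpg", "exo.jpg"]]  # placeholder paths

inputs = processor(text=prompts, images=images, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # shape: (2 views, 3 prompts)

probs = logits.softmax(dim=-1)
# Cross-reference the complementary views: average per-view scores and
# keep the object both perspectives support most strongly.
agreed = probs.mean(dim=0).argmax().item()
print("action-associated object:", objects[agreed])
```

In the paper, this object-level evidence is then refined into part-level affordance cues by comparing the two views, which the sketch does not attempt to reproduce.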
Why it matters?
This work is important because it brings computers closer to understanding the world like humans do. If a robot can accurately identify which parts of objects are used for what actions, it can interact with its environment more effectively and perform tasks more naturally. This is a step towards more helpful and intuitive robots.
Abstract
Facilitating an entity's interaction with objects requires accurately identifying parts that afford specific actions. Weakly supervised affordance grounding (WSAG) seeks to imitate human learning from third-person demonstrations, where humans intuitively grasp functional parts without needing pixel-level annotations. To achieve this, grounding is typically learned using a shared classifier across images from different perspectives, along with distillation strategies that incorporate a part discovery process. However, since affordance-relevant parts are not always easily distinguishable, models primarily rely on classification, often focusing on common class-specific patterns that are unrelated to affordance. To address this limitation, we move beyond isolated part-level learning by introducing selective prototypical and pixel contrastive objectives that adaptively learn affordance-relevant cues at both the part and object levels, depending on the granularity of the available information. Initially, we find the action-associated objects in both egocentric (object-focused) and exocentric (third-person example) images by leveraging CLIP. Then, by cross-referencing the discovered objects of complementary views, we excavate the precise part-level affordance clues in each perspective. By consistently learning to distinguish affordance-relevant regions from affordance-irrelevant background context, our approach effectively shifts activation from irrelevant areas toward meaningful affordance cues. Experimental results demonstrate the effectiveness of our method. Code is available at github.com/hynnsk/SelectiveCL.
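To make the "distinguish affordance-relevant regions from background" idea concrete, here is a generic InfoNCE-style pixel-versus-background loss. This is an illustrative sketch only, not the paper's selective prototypical and pixel objectives; the tensor shapes, the temperature value, and the use of a single foreground prototype are assumptions made for brevity.

```python
# Illustrative pixel-vs-background contrastive loss (InfoNCE style).
# A generic sketch of the idea described in the abstract, not the paper's
# exact objectives.
import torch
import torch.nn.functional as F

def pixel_contrastive_loss(feats: torch.Tensor, fg_mask: torch.Tensor,
                           temperature: float = 0.07) -> torch.Tensor:
    """feats: (N, D) pixel embeddings; fg_mask: (N,) bool marking
    affordance-relevant pixels. Background pixels act as negatives."""
    feats = F.normalize(feats, dim=1)
    fg, bg = feats[fg_mask], feats[~fg_mask]
    if fg.numel() == 0 or bg.numel() == 0:
        return feats.new_zeros(())
    # Mean of foreground pixels serves as the positive prototype.
    proto = F.normalize(fg.mean(dim=0, keepdim=True), dim=1)   # (1, D)
    pos = fg @ proto.t() / temperature                          # (Nf, 1)
    neg = fg @ bg.t() / temperature                             # (Nf, Nb)
    logits = torch.cat([pos, neg], dim=1)
    # The positive (prototype) sits at index 0 for every foreground pixel.
    targets = torch.zeros(fg.size(0), dtype=torch.long, device=feats.device)
    return F.cross_entropy(logits, targets)

# Toy usage with random features and a random foreground mask.
feats = torch.randn(64, 128)
fg_mask = torch.rand(64) > 0.7
print(pixel_contrastive_loss(feats, fg_mask))
```

Pulling foreground pixels toward a shared prototype while pushing them away from background embeddings is one simple way activation can be shifted from irrelevant context toward affordance cues, which is the effect the abstract describes.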