
Enhanced OoD Detection through Cross-Modal Alignment of Multi-Modal Representations

Jeonghyeon Kim, Sangheum Hwang

2025-04-03


Summary

This paper is about improving how AI models detect data that differs from what they were trained on (out-of-distribution data), especially in models that use both images and text.

What's the problem?

AI models that use both images and text often struggle to identify data that differs from what they were trained on, because they don't fully exploit the knowledge they gained during pretraining.

What's the solution?

The researchers found that fine-tuning these models while keeping image and text representations closely aligned helps them better identify unusual data. By pulling the two modalities' embeddings of the same concept closer together, the model makes better use of its existing pretrained knowledge.

Why it matters?

This work matters because it can make AI systems more reliable and safer by helping them recognize when they encounter inputs they don't understand.

Abstract

Prior research on out-of-distribution detection (OoDD) has primarily focused on single-modality models. Recently, with the advent of large-scale pretrained vision-language models such as CLIP, OoDD methods utilizing such multi-modal representations through zero-shot and prompt learning strategies have emerged. However, these methods typically involve either freezing the pretrained weights or only partially tuning them, which can be suboptimal for downstream datasets. In this paper, we highlight that multi-modal fine-tuning (MMFT) can achieve notable OoDD performance. Despite some recent works demonstrating the impact of fine-tuning methods for OoDD, there remains significant potential for performance improvement. We investigate the limitation of naïve fine-tuning methods, examining why they fail to fully leverage the pretrained knowledge. Our empirical analysis suggests that this issue could stem from the modality gap within in-distribution (ID) embeddings. To address this, we propose a training objective that enhances cross-modal alignment by regularizing the distances between image and text embeddings of ID data. This adjustment helps in better utilizing pretrained textual information by aligning similar semantics from different modalities (i.e., text and image) more closely in the hyperspherical representation space. We theoretically demonstrate that the proposed regularization corresponds to the maximum likelihood estimation of an energy-based model on a hypersphere. Utilizing ImageNet-1k OoD benchmark datasets, we show that our method, combined with post-hoc OoDD approaches leveraging pretrained knowledge (e.g., NegLabel), significantly outperforms existing methods, achieving state-of-the-art OoDD performance and leading ID accuracy.
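To make the idea of cross-modal alignment regularization concrete, here is a minimal NumPy sketch of how such an objective could look: a CLIP-style contrastive term plus a penalty on the distance between matched image and text embeddings after projection onto the unit hypersphere. This is a hedged illustration only, not the authors' exact objective; the function name, temperature value, and regularization weight are assumptions for the example.

```python
import numpy as np

def cross_modal_alignment_loss(image_emb, text_emb, reg_weight=0.1, temp=0.07):
    """Illustrative sketch (not the paper's exact objective).

    Combines a CLIP-style contrastive loss over matched image/text pairs
    with a regularizer that shrinks the distance between each matched
    pair on the unit hypersphere, reducing the modality gap.
    """
    # Project both modalities onto the unit hypersphere
    img = image_emb / np.linalg.norm(image_emb, axis=-1, keepdims=True)
    txt = text_emb / np.linalg.norm(text_emb, axis=-1, keepdims=True)

    # Contrastive (InfoNCE-style) term: the i-th image should match
    # the i-th text, symmetrized over both directions
    logits = img @ txt.T / temp

    def cross_entropy_diag(l):
        # Numerically stable log-softmax; targets lie on the diagonal
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    contrastive = (cross_entropy_diag(logits) + cross_entropy_diag(logits.T)) / 2

    # Alignment regularizer: 1 - cosine similarity of each matched pair,
    # pulling same-semantics embeddings from the two modalities together
    alignment = np.mean(1.0 - np.sum(img * txt, axis=-1))

    return contrastive + reg_weight * alignment
```

Because both terms are non-negative, the loss is minimized when matched image and text embeddings coincide on the hypersphere, which is the intuition behind closing the modality gap for in-distribution data.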