MatchAnything: Universal Cross-Modality Image Matching with Large-Scale Pre-Training
Xingyi He, Hao Yu, Sida Peng, Dongli Tan, Zehong Shen, Hujun Bao, Xiaowei Zhou
2025-01-15

Summary
This paper introduces a new AI system called MatchAnything that can match images from very different sources, like lining up an X-ray with a regular photo. It's designed to work across many different types of images, even kinds it has never seen during training.
What's the problem?
Current AI systems are really good at matching images that look alike, but they struggle when the images look very different, like comparing a night-vision photo to a regular daytime photo. The main reason is that there isn't enough annotated training data pairing these different kinds of images. This limits how useful these AI systems can be in fields that need to compare different types of images, such as medicine or satellite imaging.
What's the solution?
The researchers created a new way to train the AI, called MatchAnything. It is a large-scale pre-training framework that manufactures huge amounts of training data by generating synthetic pairs of images in different modalities while keeping the true pixel correspondences known. This teaches the model to recognize the underlying structures shared by very different-looking images instead of relying on surface appearance. The cool part is that once trained, MatchAnything can match types of images it has never seen before, which is much better than other AI systems. A rough sketch of the idea follows.
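To make the idea concrete, here is a minimal, hypothetical sketch of synthetic cross-modality pair generation (not the authors' actual pipeline, which uses real aligned multi-modal data and learned image generation): one copy of an image is warped by a known random homography so the ground-truth correspondences are exact, and the other copy is pushed through a stand-in modality transform (a simple edge map here). It assumes numpy and opencv-python are installed; `make_synthetic_pair` and `edge_map` are illustrative names, not functions from the paper.

```python
import numpy as np
import cv2


def make_synthetic_pair(image, pseudo_modality_fn):
    """Create a pixel-aligned cross-modality training pair from a single image.

    One copy is warped by a random homography (so ground-truth correspondences
    are known exactly), and the other is pushed through a pseudo-modality
    transform standing in for a different sensor.
    """
    h, w = image.shape[:2]
    # Random homography: the correspondence between the two views is known analytically.
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    jitter = corners + np.random.uniform(-0.1, 0.1, corners.shape) * [w, h]
    H = cv2.getPerspectiveTransform(corners, jitter.astype(np.float32))
    warped = cv2.warpPerspective(image, H, (w, h))
    # The "other modality" view: here a crude stand-in (edge map); the paper
    # instead uses aligned real data and image-generation models.
    pseudo = pseudo_modality_fn(warped)
    return image, pseudo, H  # the matcher is supervised using H as ground truth


def edge_map(img):
    """Hypothetical pseudo-modality transform: a Canny edge image."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    return cv2.Canny(gray, 100, 200)
```

Because the homography H is known, every pixel in the original image has an exact partner in the pseudo-modality view, which is precisely the kind of dense supervision a matching network needs, without any manual annotation.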
Why it matters?
This matters because it could make image-matching AI useful in many more areas. For example, it could help doctors align different types of medical scans, or help self-driving cars understand their surroundings better by combining data from different kinds of sensors. It opens up new possibilities for using AI in fields that work with many different types of images, potentially leading to new discoveries and applications in science and technology.
Abstract
Image matching, which aims to identify corresponding pixel locations between images, is crucial in a wide range of scientific disciplines, aiding in image registration, fusion, and analysis. In recent years, deep learning-based image matching algorithms have dramatically outperformed humans in rapidly and accurately finding large amounts of correspondences. However, when dealing with images captured under different imaging modalities that result in significant appearance changes, the performance of these algorithms often deteriorates due to the scarcity of annotated cross-modal training data. This limitation hinders applications in various fields that rely on multiple image modalities to obtain complementary information. To address this challenge, we propose a large-scale pre-training framework that utilizes synthetic cross-modal training signals, incorporating diverse data from various sources, to train models to recognize and match fundamental structures across images. This capability is transferable to real-world, unseen cross-modality image matching tasks. Our key finding is that the matching model trained with our framework achieves remarkable generalizability across more than eight unseen cross-modality registration tasks using the same network weights, substantially outperforming existing methods, whether designed for generalization or tailored for specific tasks. This advancement significantly enhances the applicability of image matching technologies across various scientific disciplines and paves the way for new applications in multi-modality human and artificial intelligence analysis and beyond.
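As a downstream illustration of the registration use case mentioned in the abstract (this is standard practice, not code from the paper), matched pixel locations are typically turned into an alignment by robustly fitting a transformation. A minimal OpenCV sketch, assuming a matcher such as MatchAnything has already produced arrays of corresponding points:

```python
import numpy as np
import cv2


def register_with_matches(src_pts, dst_pts, src_img, dst_shape):
    """Warp src_img onto the target frame given matched pixel locations.

    src_pts, dst_pts: (N, 2) arrays of corresponding pixel coordinates,
    e.g. produced by a cross-modality matcher. Returns the registered image,
    the estimated homography, and a boolean inlier mask.
    """
    H, mask = cv2.findHomography(
        src_pts.astype(np.float32),
        dst_pts.astype(np.float32),
        method=cv2.RANSAC,            # robust to outlier matches
        ransacReprojThreshold=3.0,
    )
    h, w = dst_shape[:2]
    registered = cv2.warpPerspective(src_img, H, (w, h))
    return registered, H, mask.ravel().astype(bool)
```

RANSAC is used here because even a strong matcher produces some outliers, and a handful of bad correspondences would otherwise corrupt the estimated transformation.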