GeoPixel: Pixel Grounding Large Multimodal Model in Remote Sensing
Akashah Shabbir, Mohammed Zumri, Mohammed Bennamoun, Fahad S. Khan, Salman Khan
2025-01-27

Summary
This paper introduces GeoPixel, a new AI model designed to understand and analyze high-resolution satellite and aerial images (remote sensing imagery) more accurately than existing models. It's like giving a computer the ability to look at a detailed map from space and understand what it's seeing, down to the tiniest details.
What's the problem?
Current AI models are great at understanding regular photos, but they struggle with satellite images. These images are tricky because they're taken from way up high, show huge areas, and contain lots of tiny details. It's like trying to identify a specific house in a photo of an entire city taken from an airplane. Also, there wasn't enough good data to teach AI how to understand these images properly.
What's the solution?
The researchers created GeoPixel, which is specially designed to handle satellite images. It can work with very high-resolution images (up to 4K, in any aspect ratio) and understand what's in them pixel by pixel. They also built a new dataset called GeoPixelD to train GeoPixel. This dataset is like a carefully labeled photo album of satellite images, helping the AI learn what different things look like from above. As a result, GeoPixel can identify multiple objects in an image and even hold a conversation about what it sees.
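The abstract mentions that the GeoPixelD labeling pipeline relies on "set-of-marks prompting": candidate regions in an image are tagged with numbered marks so that a language model's answers can refer to each region unambiguously. The toy sketch below illustrates that general idea only; it is not the paper's actual pipeline, and the function names, the boolean-grid masks, and the prompt wording are all illustrative assumptions.

```python
# Toy sketch of the set-of-marks idea (NOT GeoPixel's real pipeline):
# each region mask gets a numeric mark placed at its centroid, and the
# mark list is turned into a prompt so model answers can say "Mark 2"
# instead of describing a location in free text.

def mark_regions(masks):
    """Assign a numeric mark to each region mask (a 2D boolean grid)
    and return (mark_id, (centroid_row, centroid_col)) pairs."""
    marks = []
    for i, mask in enumerate(masks, start=1):
        ys = [r for r, row in enumerate(mask) for v in row if v]
        xs = [c for row in mask for c, v in enumerate(row) if v]
        centroid = (round(sum(ys) / len(ys), 1), round(sum(xs) / len(xs), 1))
        marks.append((i, centroid))
    return marks

def build_prompt(marks):
    """Render the marked regions as a text prompt for a language model."""
    lines = [f"Mark {i} at (row={cy}, col={cx})" for i, (cy, cx) in marks]
    return "Describe each marked object:\n" + "\n".join(lines)

# Two tiny 2x2 masks: one region in the top row, one in the bottom row.
marks = mark_regions([[[1, 1], [0, 0]], [[0, 0], [1, 1]]])
print(build_prompt(marks))
```

In a real pipeline the marks would be drawn onto the image itself and the masks would come from a segmentation model; the point here is just the bookkeeping that maps stable mark IDs to regions.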
Why does it matter?
This matters because it could revolutionize how we use satellite imagery. Imagine being able to ask a computer to find all the swimming pools in a city, or to track changes in forests over time, just by looking at satellite photos. This could be huge for urban planning, environmental monitoring, disaster response, and many other fields. It also makes it easier for people who aren't experts in reading satellite images to get useful information from them, opening the door to new discoveries and applications in areas like geography, ecology, and city management.
Abstract
Recent advances in large multimodal models (LMMs) have recognized fine-grained grounding as an imperative factor of visual understanding and dialogue. However, the benefits of such representation in LMMs are limited to the natural image domain, and these models perform poorly for remote sensing (RS). The distinct overhead viewpoint, scale variation, and presence of small objects in high-resolution RS imagery present a unique challenge in region-level comprehension. Moreover, the development of the grounding conversation capability of LMMs within RS is hindered by the lack of granular, RS domain-specific grounded data. Addressing these limitations, we propose GeoPixel, the first end-to-end high-resolution RS-LMM that supports pixel-level grounding. This capability allows fine-grained visual perception by generating interleaved masks in conversation. GeoPixel supports up to 4K HD resolution in any aspect ratio, ideal for high-precision RS image analysis. To support grounded conversation generation (GCG) in RS imagery, we curate a visually grounded dataset, GeoPixelD, through a semi-automated pipeline that utilizes set-of-marks prompting and spatial priors tailored for RS data to methodically control the data generation process. GeoPixel demonstrates superior performance in pixel-level comprehension, surpassing existing LMMs in both single-target and multi-target segmentation tasks. Our methodological ablation studies validate the effectiveness of each component in the overall architecture. Our code and data will be publicly released.