
Balanced Multi-Task Attention for Satellite Image Classification: A Systematic Approach to Achieving 97.23% Accuracy on EuroSAT Without Pre-Training

Aditya Vir

2025-10-21


Summary

This research builds a high-performing computer vision system, specifically a convolutional neural network, that automatically identifies land cover in satellite images – things like forests, farms, or buildings. Notably, the authors trained it entirely from scratch, without starting from a pre-trained model, and still achieved very high accuracy (97.23%).

What's the problem?

Identifying land use from satellite images is tricky because the images are complex and different types of land cover can look similar. Existing methods often rely on models pre-trained on huge datasets (like ImageNet), which might not be perfectly suited to satellite imagery. Models can also confuse certain land types with one another, and they can 'overfit' to the training data, performing well on images they have seen before but poorly on new ones.

What's the solution?

The researchers systematically improved their neural network design in stages. They started with a basic network, then added a component called CBAM to help it focus on important features. Finally, they created a new 'balanced multi-task attention' mechanism. It combines two ways of looking at the image: one that focuses on *where* things are (spatial information, via Coordinate Attention) and another that focuses on the *colors*, or spectral information (via Squeeze-Excitation blocks). A key part is a learnable 'fusion parameter' that automatically decides how much weight to give each type of information; during training, it converged to roughly equal weighting. They also used regularization techniques to prevent overfitting and class-balanced loss weighting so the network didn't favor some land types over others.
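The fusion idea can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' implementation: it assumes the learnable parameter is a scalar logit squashed through a sigmoid, and the feature maps and logit value here are toy inputs chosen so that alpha lands near the 0.57 the paper reports.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def balanced_fusion(spatial_feat, spectral_feat, fusion_logit):
    """Blend spatial and spectral feature maps with a single scalar alpha.

    In training, fusion_logit would be a learnable parameter; the sigmoid
    keeps the resulting alpha inside (0, 1).
    """
    alpha = sigmoid(fusion_logit)
    return alpha * spatial_feat + (1.0 - alpha) * spectral_feat

# Toy feature maps (channels x height x width); real ones come from the
# Coordinate Attention and Squeeze-Excitation branches.
spatial = np.ones((4, 8, 8))
spectral = np.zeros((4, 8, 8))

# A logit of about 0.28 corresponds to alpha of roughly 0.57
fused = balanced_fusion(spatial, spectral, 0.28)
```

Because alpha is learned jointly with the rest of the network, the model itself reveals how much each modality matters; here it settled near even weighting rather than favoring one branch.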

Why it matters?

This work is important because it shows you can get close to state-of-the-art results in satellite image classification (within 1.34% of a fine-tuned ResNet-50) by carefully designing a network specifically for the task, without relying on massive pre-trained models. This is valuable because getting and labeling satellite data can be expensive and time-consuming. The finding that spatial and spectral information are roughly equally important provides insight for future research in this area, and the publicly available code allows others to build upon the work.

Abstract

This work presents a systematic investigation of custom convolutional neural network architectures for satellite land use classification, achieving 97.23% test accuracy on the EuroSAT dataset without reliance on pre-trained models. Through three progressive architectural iterations (baseline: 94.30%, CBAM-enhanced: 95.98%, and balanced multi-task attention: 97.23%), we identify and address specific failure modes in satellite imagery classification. Our principal contribution is a novel balanced multi-task attention mechanism that combines Coordinate Attention for spatial feature extraction with Squeeze-Excitation blocks for spectral feature extraction, unified through a learnable fusion parameter. Experimental results demonstrate that this learnable parameter autonomously converges to alpha approximately 0.57, indicating near-equal importance of spatial and spectral modalities for satellite imagery. We employ progressive DropBlock regularization (5-20% by network depth) and class-balanced loss weighting to address overfitting and confusion pattern imbalance. The final 12-layer architecture achieves Cohen's Kappa of 0.9692 with all classes exceeding 94.46% accuracy, demonstrating confidence calibration with a 24.25% gap between correct and incorrect predictions. Our approach achieves performance within 1.34% of fine-tuned ResNet-50 (98.57%) while requiring no external data, validating the efficacy of systematic architectural design for domain-specific applications. Complete code, trained models, and evaluation scripts are publicly available.
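The abstract mentions class-balanced loss weighting without giving the exact formula. One common scheme is inverse-frequency weighting, sketched below with NumPy; the class counts are hypothetical stand-ins, not EuroSAT's actual distribution, and the paper may use a different weighting rule.

```python
import numpy as np

def inverse_frequency_weights(class_counts):
    """Weight each class inversely to its frequency.

    Scaled so that the expected weight over training samples is 1,
    which keeps the overall loss magnitude comparable to the
    unweighted case. One common balancing choice, not necessarily
    the paper's exact formula.
    """
    counts = np.asarray(class_counts, dtype=float)
    return counts.sum() / (len(counts) * counts)

# Hypothetical counts for a 10-class, mildly imbalanced dataset
counts = [3000, 3000, 3000, 2500, 2500, 2000, 2500, 3000, 2500, 3000]
w = inverse_frequency_weights(counts)
# the rarest class (2000 samples) gets the largest weight
```

These per-class weights would then multiply each sample's loss term (e.g. via the `weight` argument of a cross-entropy loss), so mistakes on under-represented land types cost more during training.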