
Evaluating Deep Learning Models for African Wildlife Image Classification: From DenseNet to Vision Transformers

Lukman Jibril Aliyu, Umar Sani Muhammad, Bilqisu Ismail, Nasiru Muhammad, Almustapha A Wakili, Seid Muhie Yimam, Shamsuddeen Hassan Muhammad, Mustapha Abdullahi

2025-07-30

Summary

This paper presents a study comparing different deep learning models to see which works best for recognizing African wildlife in images. It looks at both convolutional neural networks (CNNs) like DenseNet and newer transformer models like Vision Transformers (ViTs), comparing how accurate they are and how much computing power they require.

What's the problem?

The problem is that different AI models have different strengths and weaknesses. Some models use fewer computing resources but may be less accurate, while others are very accurate but need much more power and data. This makes it hard to pick the best model for wildlife image classification, especially in places with limited computing resources.
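To make that trade-off concrete, model size alone already separates the two families. The sketch below is an assumption-laden illustration, not from the paper: it uses PyTorch's torchvision, which ships both architectures, and instantiates them without downloading any pretrained weights just to count parameters (exact counts can vary slightly across library versions).

```python
from torchvision import models

# Instantiate both architectures without pretrained weights,
# purely to compare how many parameters each one carries.
for name, ctor in [("DenseNet-201", models.densenet201),
                   ("ViT-H/14", models.vit_h_14)]:
    model = ctor(weights=None)
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{name}: {n_params / 1e6:.0f}M parameters")
    # Roughly 20M for DenseNet-201 vs. roughly 630M for ViT-H/14,
    # which is why the resource question matters in practice.
```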

What's the solution?

The study tests several popular models on a large dataset of African wildlife images, measuring their accuracy, speed, and resource demands. It finds that DenseNet-201 works best among convolutional models because its densely connected design reuses features from earlier layers, while the Vision Transformer ViT-H/14 performs best among transformers thanks to its ability to capture context from the whole image. The research also discusses each model's trade-off between accuracy and resource needs.
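The paper's own training code isn't reproduced here, so the following is a minimal transfer-learning sketch under stated assumptions: PyTorch with torchvision pretrained weights, and a hypothetical four-class wildlife dataset. The class count and the specific weight choices are illustrative, not the authors' exact setup.

```python
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # hypothetical class count, e.g. buffalo/elephant/rhino/zebra

# DenseNet-201: every layer receives the feature maps of all earlier
# layers, the feature-reuse design the paper credits for its accuracy.
densenet = models.densenet201(weights=models.DenseNet201_Weights.IMAGENET1K_V1)
densenet.classifier = nn.Linear(densenet.classifier.in_features, NUM_CLASSES)

# ViT-H/14: self-attention over 14x14-pixel patches lets every patch
# attend to the whole image, the global context the paper highlights.
vit = models.vit_h_14(weights=models.ViT_H_14_Weights.IMAGENET1K_SWAG_E2E_V1)
vit.heads.head = nn.Linear(vit.heads.head.in_features, NUM_CLASSES)
```

After swapping in the new classification heads, either model can be fine-tuned on the wildlife images with a standard cross-entropy training loop; the evaluation then compares accuracy against the compute each one demands.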

Why it matters?

This matters because choosing the right AI model helps wildlife researchers and conservationists identify animals more accurately and efficiently, even if they don’t have access to super-powerful computers. It supports better monitoring and protection of African wildlife.

Abstract

A comparative study evaluates deep learning models for African wildlife image classification, highlighting trade-offs among accuracy, resource requirements, and deployability, with DenseNet-201 and ViT-H/14 performing best among convolutional networks and transformers, respectively.