Is Nano Banana Pro a Low-Level Vision All-Rounder? A Comprehensive Evaluation on 14 Tasks and 40 Datasets
Jialong Zuo, Haoyou Deng, Hanyu Zhou, Jiaxin Zhu, Yicheng Zhang, Yiwei Zhang, Yongxin Yan, Kaixing Huang, Weisen Chen, Yongtai Deng, Rui Jin, Nong Sang, Changxin Gao
2025-12-18
Summary
This paper investigates whether Nano Banana Pro, a commercial text-to-image generation model, can perform well on traditional low-level computer vision tasks, even without being specifically trained for them.
What's the problem?
Traditionally, computer vision tasks like image restoration or enhancement require specialized AI models trained for that specific purpose. While models like Nano Banana Pro are amazing at creating images from text, it's unclear if they can also handle these more basic vision problems as effectively as the models designed for them. The core question is: can Nano Banana Pro be a 'jack-of-all-trades' for low-level vision?
What's the solution?
Researchers tested Nano Banana Pro on 14 different low-level vision tasks spanning 40 different datasets. Importantly, they didn't *train* the AI for these tasks – they simply gave it text prompts describing what needed to be done. They then compared Nano Banana Pro's results against those of state-of-the-art specialist models, judging both how good the images *looked* (subjective visual quality) and how closely they matched the reference images under standard reference-based measurements.
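To see why "looking good" and "matching the reference" can diverge, consider PSNR, a typical reference-based metric (the summary does not name the exact metrics used, so PSNR here is an illustrative assumption). The sketch below shows that a tiny pixel-level perturbation keeps PSNR high, while content that is merely shifted by one pixel – which a human might judge visually equivalent – scores very poorly:

```python
import numpy as np

def psnr(reference, output, max_val=255.0):
    """Peak signal-to-noise ratio: a standard reference-based metric.
    Higher is better; it penalizes any pixel-level deviation."""
    diff = reference.astype(np.float64) - output.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

# A restoration that deviates by at most ±2 per pixel scores high...
close = np.clip(ref.astype(np.int16) + rng.integers(-2, 3, ref.shape),
                0, 255).astype(np.uint8)
print("near-identical image:", psnr(ref, close))

# ...but the same content shifted by one pixel scores far lower,
# even though its "look" is unchanged. Hallucinated high-frequency
# detail is penalized for the same reason: it is plausible, not exact.
shifted = np.roll(ref, 1, axis=1)
print("one-pixel shift:", psnr(ref, shifted))
```

This gap between pixel-exact fidelity and perceptual plausibility is exactly the dichotomy the paper reports for generative models.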
Why it matters?
The findings show Nano Banana Pro can create visually appealing results, often adding realistic details that specialist models miss. However, it doesn't perform as well when judged by traditional reference-based accuracy metrics, which reward pixel-exact agreement with the ground truth. This suggests that while these new AI models are good at creating plausible images, they struggle with the precise pixel-level accuracy some tasks demand. This research helps us understand the strengths and weaknesses of these new AI models and where further improvements are needed to make them truly versatile.
Abstract
The rapid evolution of text-to-image generation models has revolutionized visual content creation. While commercial products like Nano Banana Pro have garnered significant attention, their potential as generalist solvers for traditional low-level vision challenges remains largely underexplored. In this study, we investigate the critical question: Is Nano Banana Pro a Low-Level Vision All-Rounder? We conducted a comprehensive zero-shot evaluation across 14 distinct low-level tasks spanning 40 diverse datasets. By utilizing simple textual prompts without fine-tuning, we benchmarked Nano Banana Pro against state-of-the-art specialist models. Our extensive analysis reveals a distinct performance dichotomy: while Nano Banana Pro demonstrates superior subjective visual quality, often hallucinating plausible high-frequency details that surpass specialist models, it lags behind in traditional reference-based quantitative metrics. We attribute this discrepancy to the inherent stochasticity of generative models, which struggle to maintain the strict pixel-level consistency required by conventional metrics. This report identifies Nano Banana Pro as a capable zero-shot contender for low-level vision tasks, while highlighting that achieving the high fidelity of domain specialists remains a significant hurdle.