
Thinking with Map: Reinforced Parallel Map-Augmented Agent for Geolocalization

Yuxiang Ji, Yong Wang, Ziyu Ma, Yiming Hu, Hailang Huang, Xuecai Hu, Guanhua Chen, Liaoni Wu, Xiangxiang Chu

2026-01-12


Summary

This paper focuses on teaching computers to figure out *where* a picture was taken on Earth, a task called image geolocalization.

What's the problem?

Current programs that tackle this problem are good at using general knowledge and reasoning, but they miss something humans do naturally: look at a map! They don't effectively weave map information into their process of figuring out a location.

What's the solution?

The researchers built a system where the computer acts like an 'agent' that can explore a map. It first learns how best to use the map through reinforcement learning, which is like training it with rewards and penalties. Then, when it's time to actually find a location, it explores many different possible paths on the map in parallel and combines them into a more informed final guess. They also created MAPBench, a new, large benchmark of real-world images for training and testing these kinds of systems.
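The "explore many paths in parallel, then pick a final guess" idea can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `run_agent_episode` is a hypothetical stand-in for one agent-in-the-map rollout, and picking the geometric medoid of the candidate guesses is an assumed aggregation rule (the paper's actual rule may differ):

```python
import math
import random

def run_agent_episode(seed: int) -> tuple[float, float]:
    # Hypothetical stand-in for one agent-in-the-map rollout:
    # the agent explores the map and returns a (lat, lon) guess.
    rng = random.Random(seed)
    return 48.85 + rng.gauss(0, 0.05), 2.35 + rng.gauss(0, 0.05)

def haversine_km(a, b):
    # Great-circle distance between two (lat, lon) points, in km.
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def parallel_tts(n_paths: int = 8) -> tuple[float, float]:
    # Sample several candidate paths, then return the guess that is
    # closest to all the others (the geometric medoid).
    guesses = [run_agent_episode(seed) for seed in range(n_paths)]
    return min(guesses, key=lambda g: sum(haversine_km(g, o) for o in guesses))

print(parallel_tts())
```

The medoid step is what makes parallel exploration pay off: a single rollout can wander to the wrong continent, but an outlier guess is far from the rest of the candidates and so never gets selected.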

Why does it matter?

This work is important because it significantly improves the accuracy of image geolocalization: on the Acc@500m metric, the new method raises accuracy from 8.0% to 22.1% compared to Gemini-3-Pro with Google Search/Map grounding. Better geolocalization could be useful for organizing photos, verifying information, or even helping robots navigate.

Abstract

The image geolocalization task aims to predict the location where an image was taken anywhere on Earth using visual clues. Existing large vision-language model (LVLM) approaches leverage world knowledge, chain-of-thought reasoning, and agentic capabilities, but overlook a common strategy used by humans -- using maps. In this work, we first equip the model with the Thinking with Map ability and formulate it as an agent-in-the-map loop. We develop a two-stage optimization scheme for it: agentic reinforcement learning (RL) followed by parallel test-time scaling (TTS). The RL strengthens the agentic capability of the model to improve sampling efficiency, and the parallel TTS enables the model to explore multiple candidate paths before making the final prediction, which is crucial for geolocalization. To evaluate our method on up-to-date and in-the-wild images, we further present MAPBench, a comprehensive geolocalization training and evaluation benchmark composed entirely of real-world images. Experimental results show that our method outperforms existing open- and closed-source models on most metrics, specifically improving Acc@500m from 8.0% to 22.1% compared to Gemini-3-Pro with Google Search/Map grounded mode.
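The Acc@500m metric reported in the abstract is the fraction of test images whose predicted coordinates land within 500 m of the ground truth. A minimal sketch of how such a metric is computed; using the haversine great-circle distance is a standumption here is that the benchmark measures distance this way, which is standard but not stated in this summary:

```python
import math

def haversine_m(a, b):
    # Great-circle distance between two (lat, lon) pairs, in meters.
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6_371_000 * math.asin(math.sqrt(h))

def acc_at_500m(preds, truths):
    # Fraction of predictions within 500 m of their ground truth.
    hits = sum(haversine_m(p, t) <= 500 for p, t in zip(preds, truths))
    return hits / len(preds)

# Toy data: the first guess is ~180 m off, the second is ~4 km off.
preds = [(48.8584, 2.2945), (40.7128, -74.0060)]
truths = [(48.8600, 2.2950), (40.7500, -74.0000)]
print(acc_at_500m(preds, truths))  # -> 0.5
```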