GUI-AIMA: Aligning Intrinsic Multimodal Attention with a Context Anchor for GUI Grounding

Shijie Zhou, Viet Dac Lai, Hao Tan, Jihyung Kil, Wanrong Zhu, Changyou Chen, Ruiyi Zhang

2025-11-04

Summary

This paper introduces GUI-AIMA, a method for teaching AI models to understand and interact with graphical user interfaces (GUIs) like those on your phone or computer screen. It focuses on translating natural-language instructions into specific on-screen actions, like clicking a button or selecting text.

What's the problem?

Currently, getting AI models to accurately click on things within a GUI is difficult. Existing methods try to directly predict the exact coordinates for a click, but this is hard because screens are visually dense and generating precise coordinates is computationally expensive. It's like trying to tell someone exactly where to tap on a screen just by describing the position; it's much easier if you can first narrow down the area they should focus on.

What's the solution?

GUI-AIMA takes a different approach. Instead of directly predicting coordinates, it first identifies the screen regions relevant to the user's instruction, then figures out where to click *within* those regions. It does this by leveraging the existing abilities of powerful AI models, called Multimodal Large Language Models (MLLMs), and fine-tuning them with a relatively small amount of training data. The key is aligning the model's attention, that is, how it focuses on different parts of the screen, with the patches that matter for the given instruction. Because it never has to generate exact coordinates as text, the approach is more efficient.
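The idea above, picking a screen patch from the model's attention rather than generating coordinates, can be sketched in a few lines. This is an illustrative toy (the function names, grid setup, and the simple mean-over-tokens aggregation are our assumptions, not the paper's actual implementation):

```python
import numpy as np

def ground_by_attention(attn, grid_w):
    """attn: (num_instruction_tokens, num_patches) query-to-visual attention.
    Coordinate-free grounding: pick the patch the model attends to most,
    instead of generating x/y coordinates as text."""
    patch_scores = attn.mean(axis=0)       # average over instruction tokens
    best = int(patch_scores.argmax())      # index of the winning patch
    return divmod(best, grid_w)            # flat index -> (row, col) grid cell

def zoom_in(image, cell, grid_w, grid_h):
    """Plug-and-play zoom-in stage: crop a neighborhood around the chosen
    patch so a second grounding pass can refine the click location."""
    h, w = image.shape[:2]
    row, col = cell
    ph, pw = h // grid_h, w // grid_w
    y0, x0 = max(0, (row - 1) * ph), max(0, (col - 1) * pw)
    return image[y0:y0 + 3 * ph, x0:x0 + 3 * pw]

# Toy example: 4x4 patch grid, 3 instruction tokens, target at patch 9.
rng = np.random.default_rng(0)
attn = rng.random((3, 16))
attn[:, 9] += 5.0                          # pretend the model locks onto patch 9
print(ground_by_attention(attn, grid_w=4)) # -> (2, 1)
```

In a real MLLM, `attn` would come from the model's own attention maps between instruction tokens and image patches; the point of the sketch is that once those maps are aligned with the target region, grounding reduces to an argmax plus an optional crop-and-repeat.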

Why it matters?

This research is important because it makes GUI automation more practical and efficient. By requiring less training data and computing power, it opens the door to creating more helpful and responsive computer assistants that can understand and execute our commands on screen. This could lead to better accessibility tools, more automated tasks, and generally easier interactions with technology.

Abstract

Graphical user interface (GUI) grounding is a key function of computer-use agents, which maps natural-language instructions to actionable screen regions. Existing approaches based on Multimodal Large Language Models (MLLMs) typically formulate it as a text-based coordinate generation task, yet directly generating precise coordinates from visual inputs remains challenging and computationally intensive. An intuitive way to implement GUI grounding is to first select visual patches relevant to the instructions and then determine the precise click location within those patches. Based on the observation that general MLLMs have some native grounding capability nested within their attention, we propose GUI-AIMA, an attention-based and coordinate-free supervised fine-tuning framework for efficient GUI grounding. GUI-AIMA aligns the intrinsic multimodal attention of MLLMs with patch-wise grounding signals. These signals are calculated adaptively for diverse user instructions by multi-head aggregation on simplified query-visual attention matrices. Moreover, its coordinate-free design can easily integrate a plug-and-play zoom-in stage. GUI-AIMA-3B was trained with only 85k screenshots, demonstrating exceptional data efficiency and verifying that light training can trigger the native grounding capability of MLLMs. It achieves state-of-the-art performance among 3B models, attaining an average accuracy of 58.6% on ScreenSpot-Pro and 62.2% on OSWorld-G. Project page: https://github.com/sjz5202/GUI-AIMA
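The abstract's "multi-head aggregation on simplified query-visual attention matrices" can be illustrated with a minimal sketch. The paper does not specify the exact weighting here; the entropy-based scheme below (sharper, more peaked heads count more) is our own assumption, used only to show what combining per-head patch distributions into one patch-wise grounding signal might look like:

```python
import numpy as np

def aggregate_heads(head_attn):
    """head_attn: (num_heads, num_patches) attention over patches per head.
    Returns a single patch-wise grounding signal (a distribution over patches).
    Heads are weighted by sharpness: low-entropy heads contribute more.
    (Illustrative assumption, not the paper's exact aggregation rule.)"""
    eps = 1e-9
    p = head_attn / (head_attn.sum(axis=1, keepdims=True) + eps)  # per-head dist
    entropy = -(p * np.log(p + eps)).sum(axis=1)                  # per-head entropy
    w = np.exp(-entropy)                                          # sharper -> heavier
    w /= w.sum()
    return (w[:, None] * p).sum(axis=0)                           # weighted mixture

# Toy example: head 0 is uniform (uninformative), head 1 is peaked at patch 3.
head_attn = np.ones((2, 8))
head_attn[1] = 0.01
head_attn[1, 3] = 10.0
signal = aggregate_heads(head_attn)
print(signal.argmax())  # -> 3 (the peaked head dominates)
```

Because the weights depend on each head's attention pattern for the current instruction, the aggregation adapts per query, which matches the abstract's claim that the grounding signals are "calculated adaptively for diverse user instructions."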