
How Do Decoder-Only LLMs Perceive Users? Rethinking Attention Masking for User Representation Learning

Jiahao Yuan, Yike Xu, Jinyong Wen, Baokun Wang, Yang Chen, Xiaotong Lin, Wuliang Huang, Ziyi Gao, Xing Fu, Yu Cheng, Weiqiang Wang

2026-02-12

Summary

This research examines how the way large language models (LLMs) 'pay attention' to user behavior data affects how well they can understand and represent individual users. Specifically, it compares different attention-masking strategies used when training these models.

What's the problem?

LLMs are becoming popular for understanding users based on their actions, like purchases or clicks. However, these models are usually trained to predict the *next* thing a user will do, so they only look at past behavior (this is called causal attention). A richer picture emerges when the model can consider *all* of a user's behavior at once (bidirectional attention), but switching abruptly from past-only to full access can destabilize training. The core issue is how to let the model use information from the future without disrupting the learning process.
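
To make the two regimes concrete, here is a minimal sketch in PyTorch (not the paper's code) of a causal mask, which hides future positions, versus a bidirectional mask, which exposes the whole sequence. Function names and shapes are illustrative assumptions.

```python
import torch

def causal_mask(seq_len: int) -> torch.Tensor:
    """Lower-triangular mask: position i may attend only to positions <= i."""
    return torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))

def bidirectional_mask(seq_len: int) -> torch.Tensor:
    """Full mask: every position attends to the entire behavior sequence."""
    return torch.ones(seq_len, seq_len, dtype=torch.bool)

# In attention, blocked entries are typically filled with -inf before softmax:
# scores.masked_fill_(~mask, float('-inf'))
```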

What's the solution?

The researchers developed a new technique called Gradient-Guided Soft Masking (GGSM), which gradually lets the model 'see' future behavior data during training. It first warms up the model using information from the gradients (which show how the model is learning), then follows a schedule that slowly opens up access to future data. They tested this on a huge dataset of real-world user behavior from Alipay, a popular payment platform.
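
The paper's exact GGSM formulation isn't spelled out in this summary, but the scheduling half of the idea can be sketched as a soft additive attention bias on strictly-future positions that decays linearly from fully blocked to fully open over training. Everything below, including the `blocked_bias` constant and the plain linear ramp standing in for the gradient-guided warmup, is an assumption.

```python
import torch

def soft_future_bias(seq_len: int, step: int, total_steps: int,
                     blocked_bias: float = -1e4) -> torch.Tensor:
    """Additive attention bias that linearly opens future positions.

    At step 0 the bias on strictly-future entries equals blocked_bias
    (effectively causal); by total_steps it reaches 0 (fully bidirectional).
    """
    progress = min(step / max(total_steps, 1), 1.0)
    future = torch.triu(torch.ones(seq_len, seq_len), diagonal=1)  # 1s above diagonal
    return future * blocked_bias * (1.0 - progress)

# Usage inside attention (q, k of shape [..., seq_len, d]):
# scores = q @ k.transpose(-2, -1) / d**0.5 + soft_future_bias(seq_len, step, total_steps)
```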

Why it matters?

This work is important because it shows that *how* you train an LLM to understand users is just as important as the model itself. By carefully controlling how the model accesses information about user behavior, they were able to create much more accurate and stable user representations. This has practical implications for things like predicting what users will buy, recommending products, and understanding how users respond to marketing efforts.

Abstract

Decoder-only large language models are increasingly used as behavioral encoders for user representation learning, yet the impact of attention masking on the quality of user embeddings remains underexplored. In this work, we conduct a systematic study of causal, hybrid, and bidirectional attention masks within a unified contrastive learning framework trained on large-scale real-world Alipay data that integrates long-horizon heterogeneous user behaviors. To improve training dynamics when transitioning from causal to bidirectional attention, we propose Gradient-Guided Soft Masking, a gradient-based pre-warmup applied before a linear scheduler that gradually opens future attention during optimization. Evaluated on 9 industrial user cognition benchmarks covering prediction, preference, and marketing sensitivity tasks, our approach consistently yields more stable training and higher-quality bidirectional representations compared with causal, hybrid, and scheduler-only baselines, while remaining compatible with decoder pretraining. Overall, our findings highlight the importance of masking design and training transition in adapting decoder-only LLMs for effective user representation learning. Our code is available at https://github.com/JhCircle/Deepfind-GGSM.
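
For readers unfamiliar with the contrastive setup mentioned in the abstract, a generic in-batch InfoNCE loss over user embeddings looks like the sketch below. This is a standard formulation with illustrative names, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor: torch.Tensor, positive: torch.Tensor,
             temperature: float = 0.07) -> torch.Tensor:
    """In-batch InfoNCE over (batch, dim) user embeddings.

    anchor and positive hold two views of the same users (e.g., two encodings
    of a behavior sequence); other users in the batch act as negatives.
    """
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    logits = a @ p.t() / temperature                    # pairwise similarities
    targets = torch.arange(a.size(0), device=a.device)  # matching index = positive
    return F.cross_entropy(logits, targets)
```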