
LLMs + Persona-Plug = Personalized LLMs

Jiongnan Liu, Yutao Zhu, Shuting Wang, Xiaochi Wei, Erxue Min, Yu Lu, Shuaiqiang Wang, Dawei Yin, Zhicheng Dou

2024-09-19

Summary

This paper introduces Persona-Plug (PPlug), an approach that personalizes large language models (LLMs) to individual user preferences without the expense of fine-tuning a separate model for each user.

What's the problem?

Personalization matters because different users may want different responses even when they make similar requests. Traditional methods require creating a unique version of the model for each user, which is too costly to deploy at scale. Retrieval-based alternatives that pull a few past user interactions into the prompt as demonstrations can break the continuity of the user's history and miss their overall style and patterns, leading to weaker personalization.

What's the solution?

The researchers developed PPlug, which builds a user-specific embedding from all of a user's historical interactions. A lightweight plug-in "user embedder" module encodes the user's past behaviors into a single representation, and attaching this embedding to the task input lets the LLM produce responses aligned with the user's habits and preferences without changing any of the model's own parameters. Extensive experiments on the LaMP (language model personalization) benchmark showed that PPlug significantly outperforms existing personalization methods.
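To make the mechanism concrete, here is a minimal PyTorch sketch of a plug-in user embedder that fuses encoded history items into one vector and prepends it to the task input's token embeddings. The module structure, the dimensions, and the attention-style pooling are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class PersonaPlugEmbedder(nn.Module):
    """Hypothetical plug-in module: encodes one user's history into a single embedding.

    Assumptions (not taken from the paper): each history item is already
    encoded into a fixed-size vector, and items are fused by learned attention.
    """

    def __init__(self, hist_dim: int, llm_dim: int):
        super().__init__()
        self.attn_score = nn.Linear(hist_dim, 1)     # scores each history item
        self.project = nn.Linear(hist_dim, llm_dim)  # maps into the LLM's embedding space

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (num_items, hist_dim), encoded past behaviors of one user
        weights = torch.softmax(self.attn_score(history), dim=0)  # (num_items, 1)
        user_vec = (weights * history).sum(dim=0)                 # (hist_dim,)
        return self.project(user_vec)                             # (llm_dim,)


def personalize_input(llm_embed: nn.Embedding,
                      embedder: PersonaPlugEmbedder,
                      input_ids: torch.Tensor,
                      history: torch.Tensor) -> torch.Tensor:
    """Prepend the user embedding to the token embeddings of the task input.

    The frozen LLM consumes the returned embeddings directly; only the small
    embedder module is trained, so the LLM's parameters never change.
    """
    token_embeds = llm_embed(input_ids)          # (seq_len, llm_dim)
    user_embed = embedder(history).unsqueeze(0)  # (1, llm_dim)
    return torch.cat([user_embed, token_embeds], dim=0)
```

Because only the lightweight embedder is trained while the LLM stays frozen, a single model can serve many users, with each user contributing just one extra embedding at inference time.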

Why it matters?

This research is significant because it provides a more efficient way to personalize AI responses, making them more relevant and engaging for users. By allowing a single LLM to serve multiple users with distinct preferences, PPlug can improve user experience in applications like virtual assistants, customer service, and educational tools.

Abstract

Personalization plays a critical role in numerous language tasks and applications, since users with the same requirements may prefer diverse outputs based on their individual interests. This has led to the development of various personalized approaches aimed at adapting large language models (LLMs) to generate customized outputs aligned with user preferences. Some of them involve fine-tuning a unique personalized LLM for each user, which is too expensive for widespread application. Alternative approaches introduce personalization information in a plug-and-play manner by retrieving the user's relevant historical texts as demonstrations. However, this retrieval-based strategy may break the continuity of the user history and fail to capture the user's overall styles and patterns, hence leading to sub-optimal performance. To address these challenges, we propose a novel personalized LLM model, PPlug. It constructs a user-specific embedding for each individual by modeling all her historical contexts through a lightweight plug-in user embedder module. By attaching this embedding to the task input, LLMs can better understand and capture user habits and preferences, thereby producing more personalized outputs without tuning their own parameters. Extensive experiments on various tasks in the language model personalization (LaMP) benchmark demonstrate that the proposed model significantly outperforms existing personalized LLM approaches.
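To illustrate the plug-and-play property the abstract describes, the following hypothetical usage of the sketch above runs the same frozen components for two users; only each user's history changes what gets prepended to the shared task input. All dimensions and data are made up for demonstration.

```python
# Hypothetical usage: one frozen LLM serves many users.
llm_embed = nn.Embedding(num_embeddings=32_000, embedding_dim=768)  # stand-in for the LLM's token embeddings
embedder = PersonaPlugEmbedder(hist_dim=384, llm_dim=768)

input_ids = torch.randint(0, 32_000, (12,))  # the same task input for both users
for user_history in (torch.randn(5, 384), torch.randn(40, 384)):
    inputs_embeds = personalize_input(llm_embed, embedder, input_ids, user_history)
    # Feed `inputs_embeds` to the frozen LLM; outputs differ only via the user embedding.
    print(inputs_embeds.shape)  # torch.Size([13, 768]) in both cases
```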