Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering
Yu Zhao, Alessio Devoto, Giwon Hong, Xiaotang Du, Aryo Pradipta Gema, Hongru Wang, Kam-Fai Wong, Pasquale Minervini
2024-10-25

Summary
This paper introduces SpARE, a training-free method that steers how large language models (LLMs) select knowledge when the facts stored in their parameters conflict with the information provided in the context.
What's the problem?
Large language models store a great deal of factual knowledge in their parameters, but this parametric knowledge can conflict with the information supplied in the prompt. When that happens, a model may fall back on outdated or incorrect facts, which is a problem for users who rely on it for accurate answers. The challenge is to manage these context-memory conflicts effectively without retraining the entire model.
What's the solution?
The authors propose SpARE, which stands for Sparse Auto-Encoder-based Representation Engineering. The method builds on the observation that LLMs internally register signals of knowledge conflict in their mid-layer activations. SpARE uses pre-trained sparse auto-encoders (SAEs) to identify the functional features that control knowledge selection, then edits the model's internal activations at inference time, steering it to rely on either the contextual or the parametric knowledge source; a sketch of this editing step follows below. In open-domain question answering, SpARE resolves knowledge conflicts more accurately than existing representation engineering methods (+10%) and contrastive decoding methods (+15%).
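
To make the editing step concrete, here is a minimal sketch of SAE-based activation steering. It assumes a pre-trained SAE with a ReLU encoder and a linear decoder; the feature indices and the steering strength `alpha` are hypothetical placeholders for illustration, not values from the paper.

```python
import torch


class SparseAutoEncoder(torch.nn.Module):
    """A pre-trained SAE that maps hidden states to sparse features and back."""

    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.encoder = torch.nn.Linear(d_model, d_features)
        self.decoder = torch.nn.Linear(d_features, d_model)

    def encode(self, h: torch.Tensor) -> torch.Tensor:
        # ReLU keeps feature activations non-negative and sparse.
        return torch.relu(self.encoder(h))

    def decode(self, f: torch.Tensor) -> torch.Tensor:
        return self.decoder(f)


def steer_activation(
    h: torch.Tensor,          # hidden state at a mid-layer, shape (d_model,)
    sae: SparseAutoEncoder,
    feature_ids: list[int],   # hypothetical indices of "use the context" features
    alpha: float = 4.0,       # hypothetical steering strength
) -> torch.Tensor:
    """Amplify the selected SAE features and write the change back into h."""
    f = sae.encode(h)
    f_edited = f.clone()
    f_edited[feature_ids] *= alpha
    # Add back only the difference the edited features introduce, leaving
    # the rest of the hidden state untouched.
    return h + sae.decode(f_edited) - sae.decode(f)
```

In practice, the edited hidden state would replace the original one at the chosen mid-layer during generation, for example via a forward hook on the corresponding transformer block.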
Why it matters?
This research is important because it enhances how language models manage and utilize their knowledge, leading to more accurate and reliable outputs. By addressing knowledge conflicts, SpARE can help improve the effectiveness of AI systems in various applications, such as customer support, education, and content creation.
Abstract
Large language models (LLMs) can store a significant amount of factual knowledge in their parameters. However, their parametric knowledge may conflict with the information provided in the context -- this phenomenon, known as context-memory knowledge conflicts, can lead to undesirable model behaviour, such as reliance on outdated or incorrect information. Analysing the internal activations of LLMs, we find that they can internally register the signals of knowledge conflict at mid-layers. Such signals allow us to detect whether a knowledge conflict occurs and use inference-time intervention strategies to resolve it. In this work, we propose SpARE, a training-free representation engineering method that uses pre-trained sparse auto-encoders (SAEs) to control the knowledge selection behaviour of LLMs. SpARE identifies the functional features that control the knowledge selection behaviours and applies them to edit the internal activations of LLMs at inference time. Our experimental results show that SpARE can effectively control the usage of either knowledge source to resolve knowledge conflict in open-domain question-answering tasks, surpassing existing representation engineering methods (+10%) as well as contrastive decoding methods (+15%).
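
The abstract's claim that conflict signals are registered at mid-layers suggests a simple diagnostic: fit a probe on cached mid-layer activations. The sketch below illustrates that idea rather than the paper's exact procedure; the probe architecture, training setup, and decision threshold are all assumptions.

```python
import torch


def train_conflict_probe(
    activations: torch.Tensor,  # (n_examples, d_model) cached mid-layer states
    labels: torch.Tensor,       # (n_examples,) floats, 1.0 = conflict, 0.0 = none
    epochs: int = 200,
    lr: float = 1e-2,
) -> torch.nn.Linear:
    """Fit a linear probe that predicts whether a knowledge conflict occurred."""
    probe = torch.nn.Linear(activations.shape[1], 1)
    optimiser = torch.optim.Adam(probe.parameters(), lr=lr)
    loss_fn = torch.nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        optimiser.zero_grad()
        loss = loss_fn(probe(activations).squeeze(-1), labels)
        loss.backward()
        optimiser.step()
    return probe


def is_conflict(probe: torch.nn.Linear, h: torch.Tensor) -> bool:
    """Flag a new example as conflicting when the probe's score exceeds 0.5."""
    return torch.sigmoid(probe(h)).item() > 0.5
```

A probe of this kind could gate the steering step, so that activations are only edited on inputs where a context-memory conflict is actually detected.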