Model Surgery: Modulating LLM's Behavior Via Simple Parameter Editing

Huanqian Wang, Yang Yue, Rui Lu, Jingxin Shi, Andrew Zhao, Shenzhi Wang, Shiji Song, Gao Huang

2024-07-15

Summary

This paper introduces Model Surgery, a method for adjusting the behavior of large language models (LLMs) by directly editing a small set of parameters instead of retraining the entire model.

What's the problem?

Traditional methods for making LLMs safer and more effective, such as Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF), require updating billions of parameters through gradient descent, which is computationally expensive. They can also cause the model to drift from its pretrained state and lose some of its original capabilities, and users must repeatedly adapt to the resulting shifts in the model's behavior.

What's the solution?

Model Surgery offers a solution by letting developers edit only the small number of parameters that directly influence a specific behavior, such as toxicity or susceptibility to jailbreak attempts. The researchers train a 'behavior probe', a linear classifier on the model's hidden states, use it to identify which parameters to change, and then shift those parameters toward the probe's direction. With this method, they reduced harmful outputs by up to 90% in some cases while keeping the model's overall abilities intact.
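
To make the probe idea concrete, here is a minimal sketch in PyTorch of what a behavior probe could look like: a linear classifier fit on hidden states labeled with the unwanted behavior. The function name, training loop, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumption: PyTorch; names and hyperparameters are illustrative,
# not the authors' code). A "behavior probe" is a linear classifier fit on the
# LLM's hidden states to separate examples that exhibit the unwanted behavior.
import torch

def train_behavior_probe(hidden_states: torch.Tensor, labels: torch.Tensor,
                         epochs: int = 200, lr: float = 1e-2) -> torch.Tensor:
    """hidden_states: (N, d) hidden states collected from the LLM.
    labels: (N,) binary behavior labels, e.g. 1 = toxic, 0 = non-toxic."""
    d = hidden_states.shape[1]
    probe = torch.zeros(d, requires_grad=True)   # probe direction w
    bias = torch.zeros(1, requires_grad=True)
    opt = torch.optim.Adam([probe, bias], lr=lr)
    loss_fn = torch.nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        logits = hidden_states @ probe + bias    # linear classification in hidden space
        loss = loss_fn(logits, labels.float())
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (probe / probe.norm()).detach()       # unit direction in hidden-state space
```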

Why it matters?

This research is important because it provides a more efficient way to update LLMs without the heavy costs associated with retraining. By focusing on targeted parameter adjustments, developers can improve model safety and user experience more effectively, making AI systems more reliable and easier for users to interact with.

Abstract

Large Language Models (LLMs) have demonstrated great potential as generalist assistants, showcasing powerful task understanding and problem-solving capabilities. To deploy LLMs as AI assistants, it is crucial that these models exhibit desirable behavioral traits, such as non-toxicity and resilience against jailbreak attempts. Current methods for detoxification or preventing jailbreaking usually involve Supervised Fine-Tuning (SFT) or Reinforcement Learning from Human Feedback (RLHF), which requires finetuning billions of parameters through gradient descent with substantial computation cost. Furthermore, models modified through SFT and RLHF may deviate from the pretrained models, potentially leading to a degradation in foundational LLM capabilities. In this paper, we observe that surprisingly, directly editing a small subset of parameters can effectively modulate specific behaviors of LLMs, such as detoxification and resistance to jailbreaking. Specifically, for a behavior that we aim to avoid, we employ a linear classifier, which we term the behavior probe, to classify binary behavior labels within the hidden state space of the LLM. Using this probe, we introduce an algorithm to identify a critical subset of LLM parameters that significantly influence this targeted behavior. Then we directly edit these selected parameters by shifting them towards the behavior probe. Such a direct parameter editing method necessitates only inference-level computational resources. Experiments demonstrate that in the representative detoxification task, our approach achieves reductions of up to 90.0% in toxicity on the RealToxicityPrompts dataset and 49.2% on ToxiGen, while maintaining the LLM's general capabilities in areas such as common sense, question answering, and mathematics. Our code is available at https://github.com/lucywang720/model-surgery.
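
As a rough illustration of the editing step described in the abstract, the sketch below scores the rows of a weight matrix by their alignment with the probe and shifts the most-aligned rows along the probe direction. The selection rule, the number of edited rows (top_k), and the step size (alpha) are hypothetical choices for illustration, not values taken from the paper.

```python
# Minimal sketch of the editing step (assumptions: the weight matrix's rows write
# into hidden-state space; top_k and alpha are illustrative, not the paper's values).
import torch

@torch.no_grad()
def edit_parameters(weight: torch.Tensor, probe: torch.Tensor,
                    top_k: int = 100, alpha: float = 0.1) -> torch.Tensor:
    """weight: (out_features, hidden_dim) parameter matrix of the LLM.
    probe: (hidden_dim,) unit vector from train_behavior_probe above."""
    scores = weight @ probe                 # alignment of each row with the probe
    idx = scores.abs().topk(top_k).indices  # rows most tied to the target behavior
    weight[idx] += alpha * probe            # shift only those rows along the probe
    return weight
```

Which weight matrices to edit is determined by the paper's selection algorithm; the key point is that only a small, targeted subset of parameters changes and no gradient-based training is needed, which is why the edit requires only inference-level compute.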