Activation-Informed Merging of Large Language Models
Amin Heyrani Nobari, Kaveh Alimohammadi, Ali ArjomandBigdeli, Akash Srivastava, Faez Ahmed, Navid Azizan
2025-02-06
Summary
This paper introduces Activation-Informed Merging (AIM), a new method for combining large language models (LLMs) that uses information about how the models process data internally. By focusing on activation patterns, AIM improves the performance and robustness of merged models.
What's the problem?
Traditional methods for merging LLMs focus only on model weights and ignore important internal information, such as activation patterns. This can produce merged models that lose some of their original strengths or perform poorly on certain tasks.
What's the solution?
The researchers developed AIM, which uses activation-space information to guide the merging process. By analyzing how different parts of the base model respond to a small, task-agnostic calibration set, AIM identifies the most important weights of the base model and preserves them while integrating knowledge from fine-tuned models. This mitigates the performance drops that merging can otherwise cause, and it works as a complement to any existing merging method.
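The idea can be sketched in a few lines: estimate per-weight importance from activation magnitudes on a calibration batch, then pull the merged weights back toward the base model in proportion to that importance. This is a minimal toy illustration, not the paper's implementation; the matrices, the importance heuristic, and the blending coefficient `omega` are all assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for one weight matrix of the base model and of a model
# already merged by some existing method (e.g., task arithmetic).
# Shapes and values are illustrative, not from the paper.
w_base = rng.normal(size=(4, 8))
w_merged = w_base + rng.normal(scale=0.1, size=(4, 8))

# A small, task-agnostic calibration batch fed to the base model.
calib_x = rng.normal(size=(16, 8))

# Activation-based importance (hypothetical heuristic): weights attached
# to input features with large average activation magnitude are treated
# as more critical to preserve.
act_magnitude = np.abs(calib_x).mean(axis=0)      # shape (8,)
importance = act_magnitude / act_magnitude.max()  # normalized to [0, 1]

# Blend: the more important a weight's input feature, the more the final
# weight is pulled back toward the base model; omega controls strength.
omega = 0.5
alpha = omega * importance                        # per-column pull in [0, omega]
w_final = alpha * w_base + (1.0 - alpha) * w_merged
```

With `omega = 0` this reduces to the plain merged model, and larger `omega` preserves more of the base model wherever activations indicate a weight matters.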
Why it matters?
This research is important because it offers a smarter way to combine AI models without needing to retrain them from scratch. AIM makes it possible to create more powerful and efficient models that perform well across multiple tasks, saving time and computational resources while improving reliability.
Abstract
Model merging, a method that combines the parameters and embeddings of multiple fine-tuned large language models (LLMs), offers a promising approach to enhance model performance across various tasks while maintaining computational efficiency. This paper introduces Activation-Informed Merging (AIM), a technique that integrates the information from the activation space of LLMs into the merging process to improve performance and robustness. AIM is designed as a flexible, complementary solution that is applicable to any existing merging method. It aims to preserve critical weights from the base model, drawing on principles from continual learning (CL) and model compression. Utilizing a task-agnostic calibration set, AIM selectively prioritizes essential weights during merging. We empirically demonstrate that AIM significantly enhances the performance of merged models across multiple benchmarks. Our findings suggest that considering the activation-space information can provide substantial advancements in the model merging strategies for LLMs with up to 40% increase in benchmark performance.