
Unlocking Continual Learning Abilities in Language Models

Wenyu Du, Shuang Cheng, Tongxu Luo, Zihan Qiu, Zeyu Huang, Ka Chun Cheung, Reynold Cheng, Jie Fu

2024-06-26


Summary

This paper presents MIGU, a new method that helps language models (LMs) learn new tasks continually without forgetting previously learned ones, and it does so without needing old task data or task labels.

What's the problem?

Language models often face a problem called catastrophic forgetting, where they lose the ability to perform tasks they previously learned when they try to learn new ones. Current methods usually require access to old data or specific task information, which can be hard or expensive to obtain. This makes it difficult for LMs to effectively learn over time without losing their previous knowledge.

What's the solution?

The authors introduce MIGU (MagnItude-based Gradient Updating for continual learning), a rehearsal-free and task-label-free method that requires neither old data nor task labels. Instead of updating all parameters, MIGU only updates the parameters of the model's linear layers whose outputs have large (L1-normalized) magnitudes. The authors observed that different tasks produce different magnitude distributions in these outputs, so constraining gradient updates to the large-magnitude parts helps the model keep what it already knows while learning something new (a sketch of the idea follows below). Their experiments show that MIGU works across different LM architectures, including T5, RoBERTa, and Llama2, and improves average accuracy by 15.2% over conventional parameter-efficient finetuning baselines on a 15-task continual learning benchmark.
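To make the mechanism concrete, here is a minimal, illustrative PyTorch sketch of magnitude-based gradient masking. It is not the authors' implementation (see the linked repository for that); the class name `MagnitudeMaskedLinear`, the `keep_ratio` parameter, and the top-k thresholding are assumptions chosen to show the general idea: record how large each linear layer's output features are, then only let the corresponding weight rows receive gradient updates.

```python
# Hypothetical sketch of magnitude-based gradient masking (not the paper's code).
# Idea: track the L1-normalized magnitude of each linear layer's output features,
# then zero the gradients of weight rows whose output magnitude is small, so only
# "large-magnitude" parameters get updated for the current task.

import torch
import torch.nn as nn


class MagnitudeMaskedLinear(nn.Linear):
    """nn.Linear that tracks per-output-feature magnitudes of its activations."""

    def __init__(self, in_features, out_features, keep_ratio=0.5, bias=True):
        super().__init__(in_features, out_features, bias=bias)
        self.keep_ratio = keep_ratio  # fraction of output features allowed to update
        self.register_buffer("feat_mag", torch.zeros(out_features))

    def forward(self, x):
        out = super().forward(x)
        # Mean absolute activation per output feature, L1-normalized across features.
        mag = out.detach().abs().mean(dim=tuple(range(out.dim() - 1)))
        self.feat_mag = mag / (mag.sum() + 1e-12)
        return out

    def mask_gradients(self):
        """Zero gradients of weight rows (and biases) with small output magnitude."""
        if self.weight.grad is None:
            return
        k = max(1, int(self.keep_ratio * self.out_features))
        keep = torch.topk(self.feat_mag, k).indices
        mask = torch.zeros(self.out_features, device=self.weight.device)
        mask[keep] = 1.0
        self.weight.grad *= mask.unsqueeze(1)  # one mask value per output row
        if self.bias is not None and self.bias.grad is not None:
            self.bias.grad *= mask


# Usage: after loss.backward(), mask gradients before the optimizer step.
layer = MagnitudeMaskedLinear(16, 8, keep_ratio=0.5)
x = torch.randn(4, 16)
loss = layer(x).pow(2).mean()
loss.backward()
layer.mask_gradients()
torch.optim.SGD(layer.parameters(), lr=0.1).step()
```

In this sketch the masking is applied per linear layer after each backward pass; details such as which layers are masked, how the threshold is chosen, and how the method combines with parameter-efficient finetuning follow the paper and its released code rather than this example.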

Why it matters?

This research is important because it offers a solution to a major challenge in AI: how to enable models to learn continuously without forgetting past knowledge. By making it easier for LMs to adapt and improve over time, MIGU could enhance applications like chatbots, virtual assistants, and other AI systems that need to evolve and learn from new information while maintaining their previous skills.

Abstract

Language models (LMs) exhibit impressive performance and generalization capabilities. However, LMs struggle with the persistent challenge of catastrophic forgetting, which undermines their long-term sustainability in continual learning (CL). Existing approaches usually address the issue by incorporating old task data or task-wise inductive bias into LMs. However, old data and accurate task information are often unavailable or costly to collect, hindering the applicability of current CL approaches to LMs. To address this limitation, we introduce MIGU (MagnItude-based Gradient Updating for continual learning), a rehearsal-free and task-label-free method that only updates the model parameters with large magnitudes of output in LMs' linear layers. MIGU is based on our observation that the L1-normalized magnitude distribution of the output in LMs' linear layers is different when the LMs deal with different task data. By imposing this simple constraint on the gradient update process, we can leverage the inherent behaviors of LMs, thereby unlocking their innate CL abilities. Our experiments demonstrate that MIGU is universally applicable to all three LM architectures (T5, RoBERTa, and Llama2), delivering state-of-the-art or on-par performance across continual finetuning and continual pre-training settings on four CL benchmarks. For example, MIGU brings a 15.2% average accuracy improvement over conventional parameter-efficient finetuning baselines in a 15-task CL benchmark. MIGU can also seamlessly integrate with all three existing CL types to further enhance performance. Code is available at https://github.com/wenyudu/MIGU.