
Can Models Learn Skill Composition from Examples?

Haoyu Zhao, Simran Kaur, Dingli Yu, Anirudh Goyal, Sanjeev Arora

2024-10-01


Summary

This paper explores whether smaller language models can learn to combine different language skills from training examples, focusing on their ability to produce new compositions of skills in combinations they never saw during training.

What's the problem?

As language models get more advanced, it's important for them to be able to use what they've learned in new ways, especially in situations they haven't encountered before. However, smaller models often struggle with this ability, which is known as compositional generalization. This limits their effectiveness in real-world applications where flexibility and creativity are needed.

What's the solution?

The researchers conducted experiments using a setup based on the SKILL-MIX evaluation: GPT-4 generated short texts, each exhibiting a random subset of k language skills, and smaller models (7B and 13B parameters) were fine-tuned on these texts. They found that even when the smaller models were trained only on combinations of two or three skills, they performed well on tasks requiring four or five skills, showing that they could generalize beyond their training. The models also improved at composing skills that were held out entirely during fine-tuning.
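
To make the data-generation step concrete, here is a minimal Python sketch of the idea. The skill names, prompt wording, and topic are hypothetical stand-ins, not the paper's actual lists: a strong model (GPT-4 in the paper) is prompted to write a short paragraph exhibiting a random k-tuple of skills, and the resulting texts become fine-tuning data for the smaller models.

```python
import random

# Hypothetical skill pool; the paper draws on rhetorical, literary,
# reasoning, theory-of-mind, and common-sense skill categories.
SKILLS = [
    "metaphor", "red herring", "modus ponens",
    "false belief reasoning", "spatial common sense",
]

def sample_skill_tuple(k: int) -> list[str]:
    """Pick a random k-subset of skills, as in the SKILL-MIX setup."""
    return random.sample(SKILLS, k)

def build_generation_prompt(skills: list[str], topic: str) -> str:
    """Ask a strong model to write a short paragraph that
    demonstrates every skill in the tuple at once."""
    skill_list = ", ".join(skills)
    return (
        f"Write a short paragraph about {topic} that naturally "
        f"illustrates all of the following language skills: {skill_list}. "
        f"Each skill must be clearly identifiable in the text."
    )

# Training data uses small tuples (k = 2 or 3); evaluation then probes
# larger, never-seen compositions (k = 4 or 5).
for k in (2, 3):
    print(build_generation_prompt(sample_skill_tuple(k), "gardening"))
```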

Why it matters?

This study is significant because it demonstrates that smaller language models can be trained to be more versatile and capable of handling complex tasks. By improving their ability to combine skills, these models can become more useful in various applications, such as writing assistance, tutoring, and creative content generation.

Abstract

As large language models (LLMs) become increasingly advanced, their ability to exhibit compositional generalization -- the capacity to combine learned skills in novel ways not encountered during training -- has garnered significant attention. This type of generalization, particularly in scenarios beyond training data, is also of great interest in the study of AI safety and alignment. A recent study introduced the SKILL-MIX evaluation, where models are tasked with composing a short paragraph demonstrating the use of a specified k-tuple of language skills. While small models struggled with composing even with k=3, larger models like GPT-4 performed reasonably well with k=5 and 6. In this paper, we employ a setup akin to SKILL-MIX to evaluate the capacity of smaller models to learn compositional generalization from examples. Utilizing a diverse set of language skills -- including rhetorical, literary, reasoning, theory of mind, and common sense -- GPT-4 was used to generate text samples that exhibit random subsets of k skills. Subsequent fine-tuning of 7B and 13B parameter models on these combined skill texts, for increasing values of k, revealed the following findings: (1) Training on combinations of k=2 and 3 skills results in noticeable improvements in the ability to compose texts with k=4 and 5 skills, despite models never having seen such examples during training. (2) When skill categories are split into training and held-out groups, models significantly improve at composing texts with held-out skills during testing despite having only seen training skills during fine-tuning, illustrating the efficacy of the training approach even with previously unseen skills. This study also suggests that incorporating skill-rich (potentially synthetic) text into training can substantially enhance the compositional capabilities of models.
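
As a hedged illustration of finding (2) in the abstract, the sketch below shows how a train/held-out skill split might be constructed. The skill names and split sizes are hypothetical; the point is only that fine-tuning tuples are drawn exclusively from one group, while evaluation tuples are drawn from skills the model never saw during fine-tuning.

```python
import random

# Hypothetical skill names standing in for the paper's skill categories.
ALL_SKILLS = [
    "metaphor", "irony", "syllogism", "false belief reasoning",
    "hyperbole", "analogy", "counterfactual", "spatial common sense",
]

random.seed(0)
shuffled = random.sample(ALL_SKILLS, len(ALL_SKILLS))
train_skills, held_out_skills = shuffled[:5], shuffled[5:]

def tuples_for_finetuning(k: int, n: int) -> list[tuple[str, ...]]:
    """k-tuples drawn only from training skills (seen in fine-tuning)."""
    return [tuple(random.sample(train_skills, k)) for _ in range(n)]

def tuples_for_evaluation(k: int, n: int) -> list[tuple[str, ...]]:
    """k-tuples drawn only from held-out skills (never seen in fine-tuning)."""
    return [tuple(random.sample(held_out_skills, k)) for _ in range(n)]

print("fine-tune on:", tuples_for_finetuning(2, 3))
print("evaluate on:", tuples_for_evaluation(2, 3))
```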