Everything Everywhere All at Once: LLMs can In-Context Learn Multiple Tasks in Superposition
Zheyang Xiong, Ziyang Cai, John Cooper, Albert Ge, Vasilis Papageorgiou, Zack Sifakis, Angeliki Giannou, Ziqian Lin, Liu Yang, Saurabh Agarwal, Grigorios G Chrysos, Samet Oymak, Kangwook Lee, Dimitris Papailiopoulos
2024-10-13

Summary
This paper explores how large language models (LLMs) can learn and perform multiple in-context tasks simultaneously within a single inference call, a phenomenon the authors call 'task superposition.'
What's the problem?
While LLMs are known for their ability to learn tasks from context and perform a wide range of them, it was commonly assumed that they execute only one such task at a time within a single inference call. This assumption would limit their efficiency and effectiveness when a single request involves several distinct tasks.
What's the solution?
The authors conducted experiments demonstrating that LLMs can handle several computationally distinct in-context tasks simultaneously, without retraining for each specific task. They provide evidence of this capability across different LLM families and scales, and show that it emerges even in models trained to in-context learn only one task at a time. They also examine how LLMs internally represent and compose these tasks, finding that larger models can solve more tasks in parallel and better calibrate their output distributions.
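As a rough illustration of what such a mixed-task prompt looks like, the sketch below interleaves in-context examples from two toy tasks in one prompt and inspects the probability the model assigns to each task's answer for the next token. This is a minimal sketch, not the authors' code: the model name ("gpt2"), the two toy tasks, and the candidate answers are illustrative assumptions.

```python
# Minimal sketch (assumptions noted above): probe whether a single mixed-task
# prompt places probability mass on answers from *both* tasks.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any causal LM can be probed the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# In-context examples drawn from two distinct toy tasks, interleaved in one prompt.
prompt = (
    "France -> Paris\n"   # task A: country -> capital
    "apple -> APPLE\n"    # task B: word -> uppercase
    "Japan -> Tokyo\n"
    "river -> RIVER\n"
    "Italy ->"            # query: task A's answer is "Rome", task B's is "ITALY"
)

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # next-token logits after the query
probs = torch.softmax(logits, dim=-1)

# Compare the mass placed on the first token of each task's answer.
for answer in [" Rome", " ITALY"]:
    first_token = tokenizer(answer, add_special_tokens=False)["input_ids"][0]
    print(f"P(first token of {answer!r}) = {probs[first_token].item():.4f}")
```

In practice one would average such measurements over many queries and prompt orderings; the single-query probe above is only meant to show the shape of the experiment.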
Why it matters?
This research is important because it expands our understanding of how LLMs work, suggesting they have greater capabilities than previously thought. By recognizing that LLMs can perform tasks in superposition, we can improve their design and application in real-world scenarios, such as in AI assistants or automated systems that need to process complex information efficiently.
Abstract
Large Language Models (LLMs) have demonstrated remarkable in-context learning (ICL) capabilities. In this study, we explore a surprising phenomenon related to ICL: LLMs can perform multiple, computationally distinct ICL tasks simultaneously, during a single inference call, a capability we term "task superposition". We provide empirical evidence of this phenomenon across various LLM families and scales and show that this phenomenon emerges even if we train the model to in-context learn one task at a time. We offer theoretical explanations that this capability is well within the expressive power of transformers. We also explore how LLMs internally compose task vectors during superposition. Furthermore, we show that larger models can solve more ICL tasks in parallel, and better calibrate their output distribution. Our findings offer insights into the latent capabilities of LLMs, further substantiate the perspective of "LLMs as superposition of simulators", and raise questions about the mechanisms enabling simultaneous task execution.
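One informal way to read the "superposition of simulators" perspective, and an assumption of this sketch rather than the paper's exact formalism, is that the output distribution on a mixed-task prompt behaves approximately like a weighted mixture of single-task output distributions, with weights that better-calibrated (larger) models align with the in-context proportion of each task. The numbers below are made-up placeholders purely to show how such mixture weights could be fitted.

```python
# Toy sketch (informal reading, made-up numbers): fit mixture weights over
# single-task output distributions to a measured mixed-task distribution.
import numpy as np

# Hypothetical single-task distributions over three candidate answers
# ("Rome", "ITALY", other), measured with prompts containing only one task.
p_task_a = np.array([0.90, 0.02, 0.08])  # capital-city task
p_task_b = np.array([0.03, 0.88, 0.09])  # uppercase task

# Hypothetical distribution measured on a prompt mixing the two tasks 3:1.
p_mixed = np.array([0.68, 0.24, 0.08])

# Least-squares fit of nonnegative mixture weights that sum to one.
A = np.stack([p_task_a, p_task_b], axis=1)
alpha, *_ = np.linalg.lstsq(A, p_mixed, rcond=None)
alpha = np.clip(alpha, 0, None)
alpha /= alpha.sum()

print("fitted mixture weights:", alpha)  # compare with the in-context ratio 0.75 / 0.25
```

Comparing the fitted weights against the actual ratio of in-context examples (here 3:1) is one simple way to quantify the calibration claim for models of different sizes.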