Nested Learning: The Illusion of Deep Learning Architectures
Ali Behrouz, Meisam Razaviyayn, Peilin Zhong, Vahab Mirrokni
2026-01-05
Summary
This paper introduces a new way of thinking about how machines learn, called Nested Learning. It argues that current AI systems, especially large language models, perform well but still struggle with truly continual learning, self-improvement, and solving complex problems effectively.
What's the problem?
Existing machine learning models, while powerful, have trouble with continual learning – meaning they often forget old information when learning new things. They also don't naturally get better at learning *how* to learn, and struggle to adapt to new situations without extensive retraining. Essentially, they lack a system for consistently building upon past knowledge and improving their own learning process.
What's the solution?
The researchers propose Nested Learning, which views learning as a series of interconnected optimization problems happening at different levels. They show that common optimization techniques like Adam are actually a form of memory, and then build upon this idea to create more advanced optimizers with deeper memory. They also developed a model that can modify its own learning algorithm and a new type of memory system. Combining these, they created a continual learning module called Hope, which shows promise in various tasks like language modeling and adapting to new information with limited examples.
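The claim that optimizers like momentum are "a form of memory" can be made concrete with a small sketch. The idea, as the paper frames it, is that a momentum accumulator is an associative memory that compresses the stream of gradients by itself taking gradient-descent steps. The specific squared-error objective and learning rate below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def momentum_as_memory_step(m, g, lr=0.1):
    """One gradient-descent step on the illustrative memory objective
    L(m) = 0.5 * ||m - g||^2, whose gradient w.r.t. m is (m - g).

    The update  m <- m - lr * (m - g) = (1 - lr) * m + lr * g
    is an exponential-moving-average (momentum-style) accumulator:
    the memory m compresses the incoming gradient stream g.
    """
    return m - lr * (m - g)

# Toy gradient stream: with a constant gradient, the memory
# converges geometrically toward the gradient it is compressing.
m = np.zeros(3)
g = np.array([1.0, 0.0, -1.0])
for _ in range(50):
    m = momentum_as_memory_step(m, g)
```

After 50 steps the residual shrinks by a factor of 0.9 per step, so `m` is within about 0.005 of `g`; swapping in deeper memories or richer learning rules for this inner objective is, loosely, what the paper means by more expressive optimizers.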
Why it matters?
This work is important because it offers a new framework for designing AI systems that can learn more like humans – continuously, adaptively, and efficiently. If successful, Nested Learning could lead to AI that doesn't just perform tasks, but actually gets better at learning over time, retaining knowledge, and tackling increasingly complex challenges without constant human intervention.
Abstract
Despite recent progress, particularly in developing Language Models, there are fundamental challenges and unanswered questions about how such models can continually learn/memorize, self-improve, and find effective solutions. In this paper, we present a new learning paradigm, called Nested Learning (NL), that coherently represents a machine learning model as a set of nested, multi-level, and/or parallel optimization problems, each with its own context flow. Through the lenses of NL, existing deep learning methods learn from data by compressing their own context flow, and in-context learning naturally emerges in large models. NL suggests a philosophy for designing more expressive learning algorithms with more levels, resulting in higher-order in-context learning and potentially unlocking effective continual learning capabilities. We advocate for NL by presenting three core contributions: (1) Expressive Optimizers: We show that known gradient-based optimizers, such as Adam, SGD with Momentum, etc., are in fact associative memory modules that aim to compress the gradients' information (by gradient descent). Building on this insight, we present other more expressive optimizers with deep memory and/or more powerful learning rules; (2) Self-Modifying Learning Module: Taking advantage of NL's insights on learning algorithms, we present a sequence model that learns how to modify itself by learning its own update algorithm; and (3) Continuum Memory System: We present a new formulation of memory systems that generalizes the traditional viewpoint of long/short-term memory. Combining our self-modifying sequence model with the continuum memory system, we present a continual learning module, called Hope, showing promising results on language modeling, knowledge incorporation, few-shot generalization, continual learning, and long-context reasoning tasks.