Multi-agent cooperation through in-context co-player inference
Marissa A. Weis, Maciej Wołczyk, Rajai Nasser, Rif A. Saurous, Blaise Agüera y Arcas, João Sacramento, Alexander Meulemans
2026-02-19
Summary
This research explores how to get AI agents that learn through trial and error (reinforcement learning) to cooperate with one another, even when each agent is designed to act in its own self-interest.
What's the problem?
Getting these agents to work together is hard because each one is trying to maximize its own reward. Previous attempts to encourage cooperation either required researchers to hardcode assumptions about *how* the other agents learn, which isn't realistic or scalable, or split the agents into 'fast learners' and 'slow observers', an artificial setup. Essentially, it's difficult to build cooperative AI without making a lot of assumptions about how the other AI will behave.
What's the solution?
The researchers used a type of AI called a 'sequence model,' which is good at learning patterns from data. They trained these agents to play against a wide variety of opponents, which forced the agents to adapt to each opponent's behavior *during* a game, figuring out the best response based on what they observed. This in-game adaptation made the agents vulnerable to exploitation, but it also created pressure for them to shape how their opponents learn, and that mutual pressure ultimately led to cooperative strategies.
Why it matters?
This work shows a promising way to achieve cooperation in AI without needing to pre-program assumptions about how other agents learn. By simply letting agents learn from diverse opponents and adapt in real-time, cooperation can emerge naturally. This is a big step towards building more scalable and realistic cooperative AI systems.
Abstract
Achieving cooperation among self-interested agents remains a fundamental challenge in multi-agent reinforcement learning. Recent work showed that mutual cooperation can be induced between "learning-aware" agents that account for and shape the learning dynamics of their co-players. However, existing approaches typically rely on hardcoded, often inconsistent, assumptions about co-player learning rules or enforce a strict separation between "naive learners" updating on fast timescales and "meta-learners" observing these updates. Here, we demonstrate that the in-context learning capabilities of sequence models allow for co-player learning awareness without requiring hardcoded assumptions or explicit timescale separation. We show that training sequence model agents against a diverse distribution of co-players naturally induces in-context best-response strategies, effectively functioning as learning algorithms on the fast intra-episode timescale. We find that the cooperative mechanism identified in prior work, where vulnerability to extortion drives mutual shaping, emerges naturally in this setting: in-context adaptation renders agents vulnerable to extortion, and the resulting mutual pressure to shape the opponent's in-context learning dynamics resolves into the learning of cooperative behavior. Our results suggest that standard decentralized reinforcement learning on sequence models combined with co-player diversity provides a scalable path to learning cooperative behaviors.
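To make the training setup described in the abstract concrete, here is a minimal toy sketch (not the authors' code): decentralized REINFORCE on a history-conditioned policy, trained against a diverse population of fixed co-player strategies in the iterated prisoner's dilemma. The sequence model is stood in for by a simple logistic policy over the recent joint-action history, and all names, payoffs, and hyperparameters below are illustrative assumptions rather than details from the paper.

```python
# Toy sketch: decentralized policy-gradient training of a history-conditioned
# agent against a diverse population of fixed co-players in the iterated
# prisoner's dilemma. Everything here is illustrative, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)

# Iterated prisoner's dilemma payoffs for (my action, opponent action),
# with action 0 = cooperate, 1 = defect.
PAYOFF = {(0, 0): 3.0, (0, 1): 0.0, (1, 0): 5.0, (1, 1): 1.0}

HISTORY_LEN = 4                   # how many past rounds the agent conditions on
ROUNDS = 20                       # rounds per episode
FEAT_DIM = 2 * HISTORY_LEN + 1    # (my, opp) actions per remembered round + bias


def features(history):
    """Encode the last HISTORY_LEN joint actions as a flat feature vector."""
    x = np.zeros(FEAT_DIM)
    x[-1] = 1.0  # bias term
    for i, (a_self, a_opp) in enumerate(history[-HISTORY_LEN:]):
        x[2 * i] = a_self
        x[2 * i + 1] = a_opp
    return x


def act(theta, history):
    """Sample an action (0 = cooperate, 1 = defect) from a logistic policy."""
    p_defect = 1.0 / (1.0 + np.exp(-features(history) @ theta))
    return int(rng.random() < p_defect), p_defect


# A diverse co-player population: fixed strategies mapping history -> action.
def always_cooperate(history):
    return 0

def always_defect(history):
    return 1

def tit_for_tat(history):
    # Copy the co-player's last move (history is given from this player's view).
    return history[-1][1] if history else 0

CO_PLAYERS = [always_cooperate, always_defect, tit_for_tat]

theta = np.zeros(FEAT_DIM)
LR = 0.05

for episode in range(2000):
    opponent = CO_PLAYERS[rng.integers(len(CO_PLAYERS))]
    history, grads, rewards = [], [], []
    for _ in range(ROUNDS):
        a_self, p_defect = act(theta, history)
        # The opponent sees the history from its own perspective (roles swapped).
        a_opp = opponent([(o, s) for (s, o) in history])
        rewards.append(PAYOFF[(a_self, a_opp)])
        # REINFORCE: gradient of log pi(a_self | history) w.r.t. theta.
        grads.append((a_self - p_defect) * features(history))
        history.append((a_self, a_opp))
    # Undiscounted return-to-go from each round, with a mean baseline.
    returns = np.cumsum(rewards[::-1])[::-1]
    baseline = returns.mean()
    for g, G in zip(grads, returns):
        theta += LR * g * (G - baseline)

print("learned policy weights:", np.round(theta, 2))
```

This sketch only illustrates the "co-player diversity plus intra-episode conditioning" ingredient; the paper's agents are full sequence models, and their in-context adaptation, vulnerability to extortion, and mutual shaping dynamics are not captured by this toy.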