Small-Gain Nash: Certified Contraction to Nash Equilibria in Differentiable Games

Vedansh Sharma

2025-12-09

Summary

This paper tackles a fundamental problem in getting computer programs that use gradient-based learning to play games effectively. It introduces a new way to guarantee that these programs will actually converge to an equilibrium strategy, even in complex games where traditional methods fail.

What's the problem?

When teaching a computer to play a game with a learning algorithm, a key requirement is that the learning process reliably approaches an equilibrium. Older proofs of this relied on a mathematical property called 'monotonicity', which often fails in realistic games, especially those where players' actions strongly influence each other. As a result, we couldn't confidently say the learning would succeed in many scenarios.

What's the solution?

The researchers developed a new condition called 'Small-Gain Nash' (SGN) that does not require the usual monotonicity. Instead, it changes the way distances are measured within the game, constructing a custom 'weighted geometry'. In this new geometry, the learning process provably contracts toward the equilibrium, even though it looks non-monotone from the standard Euclidean viewpoint. The analysis also yields explicit step sizes for the learning algorithm that guarantee convergence, identifying a whole range of safe step sizes rather than requiring vanishingly small ones.
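To make the re-weighting idea concrete, here is a minimal numerical sketch. The matrices below are illustrative, not taken from the paper: a two-player quadratic game whose pseudo-gradient Jacobian has an indefinite symmetric part (so Euclidean monotonicity fails), yet becomes strongly monotone once each player's block is weighted differently.

```python
import numpy as np

# Pseudo-gradient Jacobian of an illustrative 2-player quadratic game,
# F(x) = A x. The strong cross-player coupling (the off-diagonal 4.0)
# makes the Euclidean symmetric part indefinite.
A = np.array([[1.0, 4.0],
              [0.0, 1.0]])

def min_sym_eig(M):
    """Smallest eigenvalue of the symmetric part of M."""
    return np.linalg.eigvalsh((M + M.T) / 2).min()

# Euclidean monotonicity fails: the symmetric part has a negative eigenvalue.
print(min_sym_eig(A))  # negative -> not monotone in the Euclidean metric

# Re-weight the per-player blocks: measure player 2's block 5x more heavily.
W = np.diag([1.0, 5.0])

# In the W-weighted metric, monotonicity of F(x) = A x amounts to
# positive definiteness of the symmetric part of W @ A.
print(min_sym_eig(W @ A))  # positive -> strongly monotone in the new metric
```

The same pseudo-gradient is certified or not depending purely on the chosen metric, which is the lever SGN optimizes over.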

Why it matters?

This work is important because it expands the types of games we can reliably teach computers to play. By overcoming the limitations of older methods, SGN opens the door to developing AI for more complex and interconnected scenarios. It provides a practical way to *certify* that a learning algorithm will converge, giving us confidence in the AI's performance and allowing for more robust and predictable results.

Abstract

Classical convergence guarantees for gradient-based learning in games require the pseudo-gradient to be (strongly) monotone in Euclidean geometry as shown by Rosen (1965), a condition that often fails even in simple games with strong cross-player couplings. We introduce Small-Gain Nash (SGN), a block small-gain condition in a custom block-weighted geometry. SGN converts local curvature and cross-player Lipschitz coupling bounds into a tractable certificate of contraction. It constructs a weighted block metric in which the pseudo-gradient becomes strongly monotone on any region where these bounds hold, even when it is non-monotone in the Euclidean sense. The continuous flow is exponentially contracting in this designed geometry, and projected Euler and RK4 discretizations converge under explicit step-size bounds derived from the SGN margin and a local Lipschitz constant. Our analysis reveals a certified "timescale band", a non-asymptotic, metric-based certificate that plays a TTUR-like role: rather than forcing asymptotic timescale separation via vanishing, unequal step sizes, SGN identifies a finite band of relative metric weights for which a single-step-size dynamics is provably contractive. We validate the framework on quadratic games where Euclidean monotonicity analysis fails to predict convergence, but SGN successfully certifies it, and extend the construction to mirror/Fisher geometries for entropy-regularized policy gradient in Markov games. The result is an offline certification pipeline that estimates curvature, coupling, and Lipschitz parameters on compact regions, optimizes block weights to enlarge the SGN margin, and returns a structural, computable convergence certificate consisting of a metric, contraction rate, and safe step-sizes for non-monotone games.
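As a hedged illustration of the pipeline's final output (a simplified reading, not the paper's implementation): once a strong-monotonicity margin mu and a Lipschitz constant L have been measured in the designed metric, standard contraction arguments for the Euler scheme x ← x − η F(x) give a safe step-size band of roughly 0 < η < 2·mu/L². The sketch below reuses an illustrative quadratic game and block weights, computes mu and L in the weighted metric, and runs an (unprojected, for simplicity) Euler iteration with a step size from the certified band.

```python
import numpy as np

# Illustrative quadratic game F(x) = A x, with block weights W under
# which the pseudo-gradient is strongly monotone in the W-metric.
A = np.array([[1.0, 4.0],
              [0.0, 1.0]])
W = np.diag([1.0, 5.0])
W_half_inv = np.diag(1.0 / np.sqrt(np.diag(W)))

# Monotonicity margin mu: smallest eigenvalue of sym(W A), measured
# relative to the W inner product.
mu = np.linalg.eigvalsh(
    W_half_inv @ ((W @ A + A.T @ W) / 2) @ W_half_inv
).min()

# Lipschitz constant L of F in the W-metric.
L = np.sqrt(np.linalg.eigvalsh(W_half_inv @ (A.T @ W @ A) @ W_half_inv).max())

# Explicit safe step-size band for Euler; pick a step from its interior.
eta_max = 2 * mu / L**2
eta = eta_max / 2

x = np.array([3.0, -2.0])
for _ in range(20000):
    x = x - eta * (A @ x)  # Euler step on the pseudo-gradient flow

print(mu, L, eta_max)      # positive margin -> nonempty step-size band
print(np.linalg.norm(x))   # iterates contract toward the equilibrium x* = 0
```

The paper's certificate is richer than this sketch (it covers projected Euler and RK4 on compact regions and optimizes the block weights to enlarge the margin), but the output has the same shape: a metric, a contraction rate, and an explicit band of safe step sizes.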