
Evolution and The Knightian Blindspot of Machine Learning

Joel Lehman, Elliot Meyerson, Tarek El-Gaaly, Kenneth O. Stanley, Tarin Ziyaee

2025-01-24


Summary

This paper describes a big gap in machine learning (ML) called the 'Knightian blindspot': current AI systems struggle to handle completely unexpected situations in the real world, unlike living things, which have evolved to be far more adaptable.

What's the problem?

Machine learning, especially a type called reinforcement learning (RL), is great at solving the specific problems it's trained on, but it's not good at dealing with totally new, unpredictable situations. For example, a self-driving car trained only in the US might completely fail in the UK, where people drive on the other side of the road. That's because ML systems aren't designed to handle what's called 'Knightian uncertainty': situations so new and different that you can't even quantify the risks involved.
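The driving example can be made concrete with a toy sketch (our illustration, not code from the paper; the environments and rewards are invented). A policy that is optimal under the training environment's assumptions fails as soon as a qualitative feature of the world changes, because that feature was never part of the problem the policy was optimized for:

```python
def reward(action, side_of_road):
    """+1 for driving on the correct side of the road, -1 otherwise."""
    return 1 if action == side_of_road else -1

# "Training": pick the action that maximizes reward in the US environment,
# where traffic keeps to the right.
actions = ["left", "right"]
policy = max(actions, key=lambda a: reward(a, side_of_road="right"))

# Evaluation under an unmodeled shift: UK roads, where traffic keeps left.
train_return = reward(policy, side_of_road="right")
test_return = reward(policy, side_of_road="left")

print(policy, train_return, test_return)
```

The optimizer cannot be blamed: within its formalized problem it found the best answer. The failure comes from the formalism itself, which treats the training environment as the whole world.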

What's the solution?

The paper doesn't offer a complete solution, but it suggests we should learn from how biological evolution works. Living things that have evolved over time are really good at adapting to new situations, even without having a specific plan or mathematical formula for doing so. The researchers think we should study how evolution creates this adaptability and try to build similar features into our AI systems. They propose looking into fields like artificial life and open-ended evolution to find new ways to make AI more flexible and robust.
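To illustrate the kind of adaptability the authors point to, here is a minimal sketch of variation plus selection (our own toy example, not an algorithm from the paper): a population tracks a target, and when the target changes to something never seen before, the same blind mutate-and-select loop adapts to it with no gradient and no model of the change.

```python
import random

random.seed(0)

def evolve(target, population, generations=200, pop_size=20):
    """Adapt a population of scalar 'genomes' toward a target by
    random mutation and survival of the closest candidates."""
    for _ in range(generations):
        # Variation: each parent produces two noisy offspring.
        children = [p + random.gauss(0, 0.1) for p in population for _ in range(2)]
        # Selection: keep the candidates closest to the (possibly new) target.
        population = sorted(children, key=lambda x: abs(x - target))[:pop_size]
    return population

population = [0.0] * 20
population = evolve(target=1.0, population=population)   # original environment
population = evolve(target=-3.0, population=population)  # abrupt, unmodeled change
best = min(population, key=lambda x: abs(x - (-3.0)))
print(best)
```

Nothing in the loop encodes the new target or even the fact that a change occurred; robustness to the shift falls out of continually regenerating and filtering variation, which is the flavor of mechanism the paper suggests studying.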

Why it matters?

This matters because as we rely more on AI in the real world, we need systems that can handle unexpected situations safely and effectively. If we can solve this problem, it could lead to AI that's much more versatile and trustworthy in complex, changing environments. This could be crucial for applications like self-driving cars, robots working in disaster areas, or AI assistants that can truly adapt to any situation. It's about making AI not just smart in specific ways, but genuinely intelligent and adaptable like living things are.

Abstract

This paper claims that machine learning (ML) largely overlooks an important facet of general intelligence: robustness to a qualitatively unknown future in an open world. Such robustness relates to Knightian uncertainty (KU) in economics, i.e. uncertainty that cannot be quantified, which is excluded from consideration in ML's key formalisms. This paper aims to identify this blind spot, argue its importance, and catalyze research into addressing it, which we believe is necessary to create truly robust open-world AI. To help illuminate the blind spot, we contrast one area of ML, reinforcement learning (RL), with the process of biological evolution. Despite staggering ongoing progress, RL still struggles in open-world situations, often failing under unforeseen situations. For example, the idea of zero-shot transferring a self-driving car policy trained only in the US to the UK currently seems exceedingly ambitious. In dramatic contrast, biological evolution routinely produces agents that thrive within an open world, sometimes even to situations that are remarkably out-of-distribution (e.g. invasive species; or humans, who do undertake such zero-shot international driving). Interestingly, evolution achieves such robustness without explicit theory, formalisms, or mathematical gradients. We explore the assumptions underlying RL's typical formalisms, showing how they limit RL's engagement with the unknown unknowns characteristic of an ever-changing complex world. Further, we identify mechanisms through which evolutionary processes foster robustness to novel and unpredictable challenges, and discuss potential pathways to algorithmically embody them. The conclusion is that the intriguing remaining fragility of ML may result from blind spots in its formalisms, and that significant gains may result from direct confrontation with the challenge of KU.