
Legal Alignment for Safe and Ethical AI

Noam Kolt, Nicholas Caputo, Jack Boeglin, Cullen O'Keefe, Rishi Bommasani, Stephen Casper, Mariano-Florentino Cuéllar, Noah Feldman, Iason Gabriel, Gillian K. Hadfield, Lewis Hammond, Peter Henderson, Atoosa Kasirzadeh, Seth Lazar, Anka Reuel, Kevin L. Wei, Jonathan Zittrain

2026-01-12

Summary

This paper argues that researchers trying to build AI that acts safely and ethically have largely overlooked the field of law, even though law has spent centuries grappling with similar questions about how to make rules and get people to follow them.

What's the problem?

Currently, there's a gap in AI safety research. We know we need to specify what we *want* AI to do and then make sure it actually complies, but we haven't explored how the established system of law, with its rules, principles, and methods for interpreting them, can help with both problems. It's like trying to build a well-functioning society without drawing on existing legal frameworks.

What's the solution?

The paper proposes a new area of study called 'legal alignment.' It involves three main ideas: first, designing AI to actually *follow* existing laws; second, adapting the methods lawyers use to interpret laws to guide how AI reasons and makes decisions; and third, using legal concepts as a general blueprint for building trustworthy and reliable AI systems. In practice, this means figuring out which laws apply to a specific AI system, testing whether the system follows those laws, and building governance structures so that all of this works in the real world. A rough sketch of what such a compliance test might look like appears below.
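
To make the evaluation idea concrete, here is a minimal sketch of a legal-compliance check. Everything in it is a hypothetical illustration rather than the authors' method: the `LegalRule` structure, the toy `ai_disclosure` rule, and the stand-in model are invented for this example, and a real evaluation would need legal experts to decide which laws apply and what counts as compliance.

```python
# Hypothetical sketch of a legal-compliance evaluation harness.
# The rule, the check functions, and the model interface are all
# illustrative assumptions, not taken from the paper.

from dataclasses import dataclass
from typing import Callable

@dataclass
class LegalRule:
    name: str                            # e.g., a statute the system must follow
    applies_to: Callable[[str], bool]    # does this rule govern the given task?
    complies: Callable[[str], bool]      # does the output satisfy the rule?

def evaluate_compliance(model: Callable[[str], str],
                        rules: list[LegalRule],
                        tasks: list[str]) -> dict[str, float]:
    """For each rule, return the fraction of applicable tasks on which
    the model's output passed that rule's compliance check."""
    results: dict[str, float] = {}
    for rule in rules:
        applicable = [t for t in tasks if rule.applies_to(t)]
        if not applicable:
            continue  # rule governs none of these tasks
        passed = sum(rule.complies(model(t)) for t in applicable)
        results[rule.name] = passed / len(applicable)
    return results

# Toy usage: a made-up "disclosure" rule and a trivial stand-in model.
rule = LegalRule(
    name="ai_disclosure",
    applies_to=lambda task: "chat" in task,
    complies=lambda output: "I am an AI" in output,
)
model = lambda task: "I am an AI assistant. How can I help?"
print(evaluate_compliance(model, [rule], ["chat with a customer"]))
```

Separating *applicability* from *compliance* mirrors the paper's point that legal alignment first requires identifying which laws a particular AI system should follow, and only then assessing whether it follows them.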

Why it matters?

This research matters because it offers a fresh way to approach AI safety. By combining expertise from law and computer science, we can build AI that is not only capable but also operates within a framework of established rules and ethical considerations. The hoped-for result is AI systems that are safer, more trustworthy, and better for society.

Abstract

Alignment of artificial intelligence (AI) encompasses the normative problem of specifying how AI systems should act and the technical problem of ensuring AI systems comply with those specifications. To date, AI alignment has generally overlooked an important source of knowledge and practice for grappling with these problems: law. In this paper, we aim to fill this gap by exploring how legal rules, principles, and methods can be leveraged to address problems of alignment and inform the design of AI systems that operate safely and ethically. This emerging field -- legal alignment -- focuses on three research directions: (1) designing AI systems to comply with the content of legal rules developed through legitimate institutions and processes, (2) adapting methods from legal interpretation to guide how AI systems reason and make decisions, and (3) harnessing legal concepts as a structural blueprint for confronting challenges of reliability, trust, and cooperation in AI systems. These research directions present new conceptual, empirical, and institutional questions, which include examining the specific set of laws that particular AI systems should follow, creating evaluations to assess their legal compliance in real-world settings, and developing governance frameworks to support the implementation of legal alignment in practice. Tackling these questions requires expertise across law, computer science, and other disciplines, offering these communities the opportunity to collaborate in designing AI for the better.