Beyond Release: Access Considerations for Generative AI Systems

Irene Solaiman, Rishi Bommasani, Dan Hendrycks, Ariel Herbert-Voss, Yacine Jernite, Aviya Skowron, Andrew Trask

2025-02-25

Summary

This paper argues that we should think beyond just releasing generative AI systems and instead focus on making them accessible in practical, safe, and equitable ways for users and organizations.

What's the problem?

When generative AI systems are released, the discussion often centers on whether they are open or closed to the public. However, this framing doesn't address the real challenges of making these systems usable, such as ensuring people have the resources, technical tools, and societal support needed to use them effectively. Without clear access considerations, risks like misuse or unequal availability can go unaddressed.

What's the solution?

The researchers created a framework to evaluate access to AI systems along three main axes: resourcing (like computing power), technical usability (how easy a system is to use), and utility (how helpful it is for specific tasks). They applied this framework to four high-performance language models, two open-weight and two closed-weight, showing that access depends more on these variables than on whether a system is open or closed. They also explored how scaling up access affects the ability to manage and intervene on risks.

Why it matters?

This matters because simply making AI systems available isn't enough to ensure they are used safely and effectively. By focusing on access considerations, this research helps guide better decisions about releasing AI systems in ways that balance their potential risks and benefits, which can lead to more equitable and responsible use of generative AI in society.

Abstract

Generative AI release decisions determine whether system components are made available, but release does not address many other elements that change how users and stakeholders are able to engage with a system. Beyond release, access to system components informs potential risks and benefits. Access refers to practical needs, infrastructurally, technically, and societally, in order to use available components in some way. We deconstruct access along three axes: resourcing, technical usability, and utility. Within each category, a set of variables per system component clarify tradeoffs. For example, resourcing requires access to computing infrastructure to serve model weights. We also compare the accessibility of four high performance language models, two open-weight and two closed-weight, showing similar considerations for all based instead on access variables. Access variables set the foundation for being able to scale or increase access to users; we examine the scale of access and how scale affects ability to manage and intervene on risks. This framework better encompasses the landscape and risk-benefit tradeoffs of system releases to inform system release decisions, research, and policy.