Building Trust: Foundations of Security, Safety and Transparency in AI
Huzaifa Sidhpurwala, Garth Mollett, Emily Fox, Mark Bestavros, Huamin Chen
2024-11-20

Summary
This paper discusses the importance of security, safety, and transparency in the development and use of artificial intelligence (AI) models, especially as they become more common in society.
What's the problem?
As AI models are deployed more widely, concerns about their risks and vulnerabilities are growing. Issues such as tracking how models are developed, verifying that they are safe to use, and establishing who owns and maintains them are not well addressed. These gaps can lead to problems such as data breaches, unremediated vulnerabilities, or misuse of AI technology.
What's the solution?
The authors propose comprehensive strategies to improve the security and safety of AI systems. They suggest standardized processes for how AI models are created, tracked, and monitored throughout their lifecycle, including better provenance for the data used in training, clearer ownership of models, and methods to ensure that AI outputs are reliable and safe for users. One concrete form such lifecycle tracking could take is sketched below.
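As a minimal illustration, and not a scheme the paper itself specifies, the following Python sketch models a machine-readable provenance record that could ship alongside a model release. The ModelProvenance and DatasetRef schemas, their field names, and the verify_weights helper are all hypothetical.

```python
# Illustrative sketch only: a hypothetical machine-readable provenance
# record for a model release. The schema and field names are assumptions,
# not an API defined by the paper.
import hashlib
import json
from dataclasses import dataclass, field, asdict


@dataclass
class DatasetRef:
    name: str          # human-readable dataset name
    uri: str           # where the dataset snapshot lives
    sha256: str        # digest of the exact snapshot used in training


@dataclass
class ModelProvenance:
    model_name: str
    version: str
    owner: str                       # accountable party for remediation
    license: str
    training_data: list[DatasetRef] = field(default_factory=list)
    weights_sha256: str = ""         # digest of the released weights

    def to_json(self) -> str:
        """Serialize the record so it can ship alongside the weights."""
        return json.dumps(asdict(self), indent=2)


def verify_weights(path: str, record: ModelProvenance) -> bool:
    """Check that downloaded weights match the published digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == record.weights_sha256
```

An end-user could then run verify_weights on the downloaded weights before loading the model, making silent tampering or replacement detectable, and the same record identifies an accountable owner when a vulnerability needs remediation.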
Why does it matter?
This research is crucial because it lays the groundwork for making AI technology safer and more trustworthy. By addressing these foundational issues, the paper aims to help developers create AI systems that are not only effective but also secure and transparent, ultimately benefiting society as a whole.
Abstract
This paper explores the rapidly evolving ecosystem of publicly available AI models and their potential implications for the security and safety landscape. As AI models become increasingly prevalent, understanding their potential risks and vulnerabilities is crucial. We review the current state of AI security and safety, highlighting challenges such as tracking issues, remediation, and the apparent absence of AI model lifecycle and ownership processes. We propose comprehensive strategies to enhance security and safety for both model developers and end-users. This paper aims to provide some of the foundational pieces for more standardized security, safety, and transparency in the development and operation of AI models and the larger open ecosystems and communities forming around them.