Data and AI governance: Promoting equity, ethics, and fairness in large language models

Alok Abhishek, Lisa Erickson, Tushar Bandopadhyay

2025-08-07

Summary

This paper discusses data and AI governance: creating rules and systems to make sure large language models are fair, ethical, and free of harmful bias. It describes different ways to find and reduce bias in these AI models and how organizations can manage AI responsibly.

What's the problem?

The problem is that large language models can unintentionally learn and spread harmful biases found in the data they were trained on. Without proper checks, these biases can affect decisions and reinforce unfair treatment of certain groups in society.

What's the solution?

The solution is to develop governance frameworks that assess, measure, and reduce bias in AI models. This includes setting standards for ethical AI use, monitoring AI behavior over time, and using specialized tools and evaluation datasets to detect unfairness and verify that AI systems follow ethical principles.

Why it matters?

This matters because AI is being used in many important areas like healthcare, hiring, and law, so making sure AI is fair and ethical helps protect people’s rights and promotes equality. Good AI governance builds trust and makes AI technology safer and more beneficial for everyone.

Abstract

Approaches to govern, assess, and quantify bias in machine learning models, particularly large language models, are discussed, emphasizing data and AI governance frameworks for ethical deployment.