MOSAIC: Modeling Social AI for Content Dissemination and Regulation in Multi-Agent Simulations

Genglin Liu, Salman Rahman, Elisa Kreiss, Marzyeh Ghassemi, Saadia Gabriel

2025-04-11

Summary

This paper introduces MOSAIC, a new open-source simulation tool that creates virtual social networks using AI agents that act like real users. These agents can like, share, comment on, or flag posts, and the system is used to study how information, including misinformation, spreads online and how people react to it.

What's the problem?

The main problem is that it's hard to understand and test how content, especially false or misleading information, spreads on social media and how different moderation strategies affect what people see and do. Real-world experiments are risky and complicated, so researchers need safe and realistic ways to study these dynamics.

What's the solution?

To solve this, the authors built MOSAIC, which uses AI-powered agents with detailed, realistic personalities to simulate a social network similar to platforms like Twitter or Facebook. The system tracks how these agents interact with content and each other, allowing researchers to test different ways of moderating posts and see how these strategies impact both the spread of misinformation and overall user engagement. The simulation also lets them analyze whether the reasons agents give for their actions actually match their behavior in the network.
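To make the setup concrete, here is a minimal sketch of one simulation step in this style: persona-conditioned agents on a directed follow graph each react to a new post. This is an illustrative toy, not the authors' actual code; all class and function names are assumptions, and the `decide` function stands in for what would really be an LLM call prompted with the agent's persona and the post text.

```python
import random
from dataclasses import dataclass, field

# Possible agent reactions to a post, mirroring the actions described above.
ACTIONS = ["like", "share", "comment", "flag", "ignore"]

@dataclass
class Agent:
    name: str
    persona: str  # fine-grained persona description
    follows: list = field(default_factory=list)  # outgoing directed edges

def decide(agent, post, rng):
    """Stand-in for an LLM call: in MOSAIC this choice would come from a
    language model prompted with the agent's persona and the post."""
    if "unverified" in post and "skeptic" in agent.persona:
        return "flag"  # a skeptical persona flags dubious content
    return rng.choice(["like", "share", "ignore"])

def step(agents, author, post, rng):
    """Deliver a post to the author's followers and collect their reactions."""
    followers = [a for a in agents if author in a.follows]
    return {a.name: decide(a, post, rng) for a in followers}

rng = random.Random(0)
alice = Agent("alice", "tech journalist, misinformation skeptic", follows=["carol"])
bob = Agent("bob", "casual sports fan", follows=["carol"])
carol = Agent("carol", "news poster")
reactions = step([alice, bob, carol], "carol", "unverified claim: ...", rng)
print(reactions)  # alice flags the post; bob reacts at random
```

A moderation strategy would then be a rule applied on top of these reactions, for example hiding posts once enough agents flag them, which is the kind of intervention the framework lets researchers compare.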

Why it matters?

This work matters because it gives researchers and policymakers a powerful tool to safely experiment with and understand the effects of different content moderation strategies before trying them in the real world. By using MOSAIC, they can find better ways to stop the spread of misinformation and make social networks healthier and more trustworthy for everyone.

Abstract

We present a novel, open-source social network simulation framework, MOSAIC, where generative language agents predict user behaviors such as liking, sharing, and flagging content. This simulation combines LLM agents with a directed social graph to analyze emergent deception behaviors and gain a better understanding of how users determine the veracity of online social content. By constructing user representations from diverse fine-grained personas, our system enables multi-agent simulations that model content dissemination and engagement dynamics at scale. Within this framework, we evaluate three different content moderation strategies with simulated misinformation dissemination, and we find that they not only mitigate the spread of non-factual content but also increase user engagement. In addition, we analyze the trajectories of popular content in our simulations, and explore whether simulation agents' articulated reasoning for their social interactions truly aligns with their collective engagement patterns. We open-source our simulation software to encourage further research within AI and social sciences.