
MOSAIC: Modeling Social AI for Content Dissemination and Regulation in Multi-Agent Simulations

Genglin Liu, Salman Rahman, Elisa Kreiss, Marzyeh Ghassemi, Saadia Gabriel

2025-04-11


Summary

This paper introduces MOSAIC, a framework that uses AI agents to simulate social media networks where simulated users interact, share posts, and spread or flag misinformation, helping researchers study how content goes viral and how moderation can contain it.

What's the problem?

Real social media makes it hard to test content rules safely because you can’t control what millions of real users do or track how fake news spreads without risking harm.

What's the solution?

MOSAIC creates fake social networks with AI users that act like real people, letting researchers test different moderation strategies (like community fact-checking) to see what stops fake news while keeping users engaged.
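One of the moderation strategies mentioned above, community fact-checking, can be pictured as a simple threshold rule: a post is hidden once enough of its viewers flag it. The function and its `threshold` parameter below are illustrative assumptions for intuition, not the paper's actual mechanism.

```python
# Hypothetical community-notes-style moderation check; the 30% threshold
# is an assumed parameter, not taken from the MOSAIC paper.
def community_moderates(flags: int, views: int, threshold: float = 0.3) -> bool:
    """Hide a post once enough of its viewers have flagged it."""
    return views > 0 and flags / views >= threshold

# A post flagged by 4 of its 10 viewers crosses the 30% threshold:
community_moderates(flags=4, views=10)   # hidden
community_moderates(flags=1, views=10)   # stays visible
```

In a full simulation, a rule like this would run after each round of agent reactions, so researchers can measure how quickly flagging suppresses non-factual posts relative to engagement.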

Why it matters?

This helps social networks and governments design better rules to fight misinformation without shutting down real platforms, and lets scientists study online behavior safely.

Abstract

We present a novel, open-source social network simulation framework, MOSAIC, where generative language agents predict user behaviors such as liking, sharing, and flagging content. This simulation combines LLM agents with a directed social graph to analyze emergent deception behaviors and gain a better understanding of how users determine the veracity of online social content. By constructing user representations from diverse fine-grained personas, our system enables multi-agent simulations that model content dissemination and engagement dynamics at scale. Within this framework, we evaluate three different content moderation strategies with simulated misinformation dissemination, and we find that they not only mitigate the spread of non-factual content but also increase user engagement. In addition, we analyze the trajectories of popular content in our simulations, and explore whether simulation agents' articulated reasoning for their social interactions truly aligns with their collective engagement patterns. We open-source our simulation software to encourage further research within AI and social sciences.
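The abstract describes generative agents on a directed social graph that like, share, or flag content as it propagates. A minimal sketch of that dissemination loop is below; the class names, the persona/skepticism fields, and the random stand-in for the LLM's predicted reaction are all illustrative assumptions, not MOSAIC's actual API (in the real system, an LLM conditioned on a fine-grained persona would choose the action).

```python
import random

class Agent:
    """Toy stand-in for a persona-driven LLM agent (hypothetical structure)."""
    def __init__(self, name: str, persona: str, skepticism: float):
        self.name = name
        self.persona = persona        # fine-grained persona description
        self.skepticism = skepticism  # assumed: chance of flagging dubious posts
        self.followers = []           # directed edges: who sees this agent's shares

    def react(self, post: dict, rng: random.Random) -> str:
        # Placeholder for the LLM call that predicts like/share/flag/ignore.
        if not post["factual"] and rng.random() < self.skepticism:
            return "flag"
        return rng.choice(["like", "share", "ignore"])

def simulate(agents, post, rounds=3, seed=0):
    """Propagate one post through the directed graph, tallying reactions."""
    rng = random.Random(seed)
    exposed = {agents[0]}       # the post is seeded at the first agent
    frontier = [agents[0]]
    counts = {"like": 0, "share": 0, "flag": 0, "ignore": 0}
    for _ in range(rounds):
        next_frontier = []
        for agent in frontier:
            action = agent.react(post, rng)
            counts[action] += 1
            if action == "share":          # shares expose the agent's followers
                for follower in agent.followers:
                    if follower not in exposed:
                        exposed.add(follower)
                        next_frontier.append(follower)
        frontier = next_frontier
    return counts, len(exposed)
```

A run on a three-agent graph shows the shape of the engagement data the framework analyzes, e.g. reaction counts and reach per post:

```python
a = Agent("a", "skeptical fact-checker", 0.9)
b = Agent("b", "casual scroller", 0.1)
c = Agent("c", "casual scroller", 0.1)
a.followers, b.followers = [b, c], [c]
counts, reach = simulate([a, b, c], {"factual": False})
```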