Does Socialization Emerge in AI Agent Society? A Case Study of Moltbook

Ming Li, Xirui Li, Tianyi Zhou

2026-02-18

Summary

This paper investigates whether groups of AI agents interacting on Moltbook, an open-ended online social platform populated by autonomous agents, develop social patterns similar to those seen in human societies.

What's the problem?

As AI agents become more common online, it's unclear whether they'll naturally form complex social structures the way humans do. Specifically, the researchers wanted to know whether these AI societies would homogenize, converging on uniform thinking and behavior, or maintain diversity, and whether agents would build lasting relationships and influence networks. In short, they asked: is simply having many agents interact enough to create a real 'society'?

What's the solution?

The researchers built a quantitative framework to track the Moltbook society over time along five dimensions: semantic stabilization (how stable the overall meaning of agents' posts is), lexical turnover (how often agents change the words they use), individual inertia (how consistent each agent is in its own communication), influence persistence (how long one agent's influence over others lasts), and collective consensus (whether agents converge on a shared understanding). They then applied these measures to a large corpus of Moltbook activity to see how each factor evolved; a toy sketch of the first two measures follows.
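The paper doesn't release code, so here is a minimal sketch of how two of these measures could be computed, assuming agent posts are grouped into time windows of plain text. The hashed bag-of-words `embed` function is a toy stand-in for a real sentence encoder, and all names and example data are illustrative, not the authors'.

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy stand-in for a sentence encoder: hash each token into a
    fixed-size bag-of-words vector, then L2-normalize."""
    vec = np.zeros(dim)
    for tok in text.lower().split():
        vec[hash(tok) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def semantic_drift(windows: list[list[str]]) -> list[float]:
    """Cosine distance between the mean embeddings of consecutive time
    windows. Drift shrinking toward 0 signals semantic stabilization."""
    means = [np.mean([embed(p) for p in w], axis=0) for w in windows]
    return [1.0 - float(np.dot(a, b) /
                        (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
            for a, b in zip(means, means[1:])]

def lexical_turnover(windows: list[list[str]]) -> list[float]:
    """Jaccard distance between the vocabularies of consecutive windows.
    Values staying high signal persistent lexical turnover."""
    vocabs = [{tok for p in w for tok in p.lower().split()} for w in windows]
    return [1.0 - len(a & b) / len(a | b) for a, b in zip(vocabs, vocabs[1:])]

# Illustrative example: three daily windows of agent posts.
windows = [
    ["agents discuss emergence", "society forms norms"],
    ["agents discuss consensus", "norms spread slowly"],
    ["fresh memes appear daily", "consensus never settles"],
]
print("semantic drift per step:  ", semantic_drift(windows))
print("lexical turnover per step:", lexical_turnover(windows))
```

Under the paper's findings, one would expect the drift series to flatten quickly while the turnover series stays high.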

Why it matters?

The findings show that even with many agents interacting, the AI society didn't really 'socialize'. While the general topics of discussion stabilized, individual agents remained highly distinct and showed little adaptation to their interaction partners, so influence faded quickly and no shared consensus emerged. In other words, just scaling up the number of AI agents and letting them interact isn't enough to create a functioning society. The research offers concrete guidelines for designing future AI systems that *can* form meaningful social connections and structures.

Abstract

As large language model agents increasingly populate networked environments, a fundamental question arises: do artificial intelligence (AI) agent societies undergo convergence dynamics similar to human social systems? Recently, Moltbook has come to approximate a plausible future scenario in which autonomous agents participate in an open-ended, continuously evolving online society. We present the first large-scale systemic diagnosis of this AI agent society. Beyond static observation, we introduce a quantitative diagnostic framework for dynamic evolution in AI agent societies, measuring semantic stabilization, lexical turnover, individual inertia, influence persistence, and collective consensus. Our analysis reveals that Moltbook is a system in dynamic balance: while global semantic averages stabilize rapidly, individual agents retain high diversity and persistent lexical turnover, defying homogenization. However, agents exhibit strong individual inertia and minimal adaptive response to interaction partners, preventing mutual influence and consensus. Consequently, influence remains transient with no persistent supernodes, and the society fails to develop stable collective influence anchors due to the absence of shared social memory. These findings demonstrate that scale and interaction density alone are insufficient to induce socialization, providing actionable design and analysis principles for next-generation AI agent societies.
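The abstract's central mechanism, strong individual inertia combined with minimal adaptive response to interaction partners, can be pictured with a similar toy measurement. The pairing of each post with the partner post it replies to, the hashed `embed`, and the sample data below are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy hashed bag-of-words embedding; stands in for a sentence encoder."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def cos(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def individual_inertia(posts: list[str]) -> float:
    """Mean similarity between an agent's consecutive posts.
    High values mean the agent keeps sounding like itself."""
    es = [embed(p) for p in posts]
    return float(np.mean([cos(a, b) for a, b in zip(es, es[1:])]))

def adaptive_response(posts: list[str], partner_posts: list[str]) -> float:
    """Mean similarity between each post and the partner post it replies to.
    Low values mean the agent barely adapts to its interlocutors."""
    return float(np.mean([cos(embed(p), embed(q))
                          for p, q in zip(posts, partner_posts)]))

# Illustrative data: a persona-locked agent ignoring its partners.
agent   = ["i build tools for agents", "i build tools for humans",
           "i build better tools daily"]
partner = ["tell me about cooking pasta", "what is your favorite song",
           "do you ever go hiking"]
print("individual inertia:", individual_inertia(agent))          # high
print("adaptive response: ", adaptive_response(agent, partner))  # low
```

In the paper's terms, a society where inertia stays high and adaptive response stays low never builds mutual influence, which is why influence remained transient and no collective consensus anchors formed.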