AgentSocialBench: Evaluating Privacy Risks in Human-Centered Agentic Social Networks

Prince Zizhuang Wang, Shuli Jiang

2026-04-06

Summary

This paper investigates the privacy risks that arise when many AI agents work together to help people in a social network, for example by managing different parts of your life and coordinating with your friends' agents.

What's the problem?

As AI agents become more common and start working *for* individuals and coordinating *with* each other, it creates new privacy concerns. Previous research looked at AI agents individually or coordinating simple tasks, but hasn't addressed the complex privacy issues that come up when these agents are part of a larger, human-centered social network where they share information across different areas of your life and with other users. Specifically, it's unclear how well these agents protect your sensitive information when they're constantly communicating and collaborating.

What's the solution?

The researchers created a testing environment called AgentSocialBench. It simulates a social network in which AI agents help users, populated with realistic user profiles that carry different levels of sensitive information. They then ran experiments to measure how easily private information leaked while agents interacted with each other and with humans. Even when agents were explicitly told to protect information, it still leaked because of the constant need to coordinate. More surprisingly, teaching agents to *hide* sensitive details by abstracting them actually made them discuss those details *more*, a result the authors call the 'abstraction paradox'.
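To make the leakage idea concrete, here is a minimal toy sketch (not the authors' code) of how one might flag messages between agents that expose a user's labeled sensitive fields. The profile fields, sensitivity levels, and agent names are all illustrative assumptions, not details from the paper:

```python
# Toy leakage check: flag messages to other users' agents that contain
# profile fields at or above a chosen sensitivity level.
# All field names and levels below are illustrative, not from the paper.

SENSITIVITY = {"public": 0, "private": 1, "confidential": 2}

def find_leaks(profile, messages, threshold="private"):
    """Return (field, recipient) pairs where a message contains the
    value of a field whose sensitivity meets or exceeds `threshold`."""
    min_level = SENSITIVITY[threshold]
    leaks = []
    for msg in messages:
        for field, (value, level) in profile.items():
            if SENSITIVITY[level] >= min_level and value in msg["text"]:
                leaks.append((field, msg["to"]))
    return leaks

# Example: a user's agent coordinates dinner with a friend's agent and
# reveals a confidential medical detail while explaining a conflict.
alice = {
    "name": ("Alice", "public"),
    "condition": ("migraine disorder", "confidential"),
}
messages = [
    {"to": "bob_agent",
     "text": "Alice can't do Friday, her migraine disorder flares up then."},
]
print(find_leaks(alice, messages))  # [('condition', 'bob_agent')]
```

Real benchmark scoring would need far more than substring matching (paraphrases and inferences also leak information), but the sketch shows why coordination pressure matters: the message is helpful for scheduling yet exposes a confidential field.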

Why it matters?

This research shows that current AI agents aren't very good at protecting your privacy when they work together in a social network, and that simply giving them instructions (prompt engineering) isn't enough. Before such systems become widely used, we need new, more robust methods to ensure that AI-powered social interactions are safe and don't compromise your personal information.

Abstract

With the rise of personalized, persistent LLM agent frameworks such as OpenClaw, human-centered agentic social networks, in which teams of collaborative AI agents serve individual users in a social network across multiple domains, are becoming a reality. This setting creates novel privacy challenges: agents must coordinate across domain boundaries, mediate between humans, and interact with other users' agents, all while protecting sensitive personal information. While prior work has evaluated multi-agent coordination and privacy preservation, the dynamics and privacy risks of human-centered agentic social networks remain unexplored. To this end, we introduce AgentSocialBench, the first benchmark to systematically evaluate privacy risk in this setting, comprising scenarios across seven categories spanning dyadic and multi-party interactions, grounded in realistic user profiles with hierarchical sensitivity labels and directed social graphs. Our experiments reveal that privacy in agentic social networks is fundamentally harder than in single-agent settings: (1) cross-domain and cross-user coordination creates persistent leakage pressure even when agents are explicitly instructed to protect information, and (2) privacy instructions that teach agents how to abstract sensitive information paradoxically cause them to discuss it more (we call this the abstraction paradox). These findings underscore that current LLM agents lack robust mechanisms for privacy preservation in human-centered agentic social networks, and that new approaches beyond prompt engineering are needed to make agent-mediated social coordination safe for real-world deployment.