
Fed-SE: Federated Self-Evolution for Privacy-Constrained Multi-Environment LLM Agents

Xiang Chen, Yuling Shi, Qizhen Lan, Yuchao Qiu, Xiaodong Gu

2025-12-12

Summary

This paper introduces Fed-SE, a new method that lets LLM agents keep improving themselves across many different environments while keeping each environment's data private.

What's the problem?

Imagine you want a bunch of AI agents to get better at their tasks, but they're all working in different environments and you can't pool their data due to privacy concerns. Traditional methods for letting models learn from each other, like Federated Learning, work well on static datasets but struggle here: the tasks differ from environment to environment, and rewards only arrive for whole trajectories, so useful feedback is sparse. The result is conflicting updates that make it hard for the agents to improve consistently across all environments.

What's the solution?

The researchers developed Fed-SE, which lets each agent learn locally from only the best parts of its own experience – its 'high-return trajectories' – using parameter-efficient fine-tuning. Then, instead of directly averaging everything the agents learned, Fed-SE combines their updates inside a 'low-rank subspace' that keeps the knowledge the environments share while filtering out each environment's quirks. This reduces negative transfer, meaning one agent's learning doesn't mess up another agent's progress.
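To make the two-step idea concrete, here is a toy sketch of the "local evolution, global aggregation" loop. Everything here is invented for illustration (the function names, the top-fraction filtering rule, the random stand-in gradients, and the SVD-based subspace projection are assumptions); the paper's actual filtering criterion and subspace construction are not specified in this summary.

```python
import numpy as np

rng = np.random.default_rng(0)

def filter_high_return(trajectories, top_frac=0.5):
    """Keep only the highest-return trajectories for local fine-tuning."""
    ranked = sorted(trajectories, key=lambda t: t["return"], reverse=True)
    k = max(1, int(len(ranked) * top_frac))
    return ranked[:k]

def local_update(base, trajs, rank=2, lr=0.1):
    """Toy stand-in for parameter-efficient fine-tuning: each client
    produces a low-rank update Delta = A @ B for its copy of the weights."""
    d = base.shape[0]
    # Pretend each high-return trajectory contributes a gradient signal.
    grad = sum(t["return"] * rng.standard_normal((d, d)) for t in trajs)
    U, S, Vt = np.linalg.svd(lr * grad)
    A = U[:, :rank] * S[:rank]   # d x r factor
    B = Vt[:rank, :]             # r x d factor
    return A @ B

def aggregate_low_rank(deltas, rank=2):
    """Server step: average the clients' updates, then project the average
    back to rank r via SVD. The discarded low-energy directions play the
    role of environment-specific quirks that would cause negative transfer."""
    avg = sum(deltas) / len(deltas)
    U, S, Vt = np.linalg.svd(avg)
    return (U[:, :rank] * S[:rank]) @ Vt[:rank, :]

# One federated round with three clients sharing 4x4 "weights".
base = np.zeros((4, 4))
trajs = [{"return": r} for r in [1.0, 0.2, 0.8]]
kept = filter_high_return(trajs)
deltas = [local_update(base, kept) for _ in range(3)]
global_delta = aggregate_low_rank(deltas, rank=2)
```

Note that only the low-rank update matrices cross the network; raw trajectories stay on each client, which is the privacy-preserving part of the federated setup.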

Why it matters?

This research is important because it allows AI agents to collaborate and learn from each other even when data privacy is a concern. By improving how these agents learn across different environments, it makes them more robust and reliable, potentially leading to better performance in real-world applications like robotics, personalized assistants, and more. The roughly 18% improvement in average task success rates over federated baselines shows this method is a significant step forward.

Abstract

LLM agents are widely deployed in complex interactive tasks, yet privacy constraints often preclude centralized optimization and co-evolution across dynamic environments. While Federated Learning (FL) has proven effective on static datasets, its extension to the open-ended self-evolution of agents remains underexplored. Directly applying standard FL is challenging: heterogeneous tasks and sparse, trajectory-level rewards introduce severe gradient conflicts, destabilizing the global optimization process. To bridge this gap, we propose Fed-SE, a Federated Self-Evolution framework for LLM agents. Fed-SE establishes a local evolution-global aggregation paradigm. Locally, agents employ parameter-efficient fine-tuning on filtered, high-return trajectories to achieve stable gradient updates. Globally, Fed-SE aggregates updates within a low-rank subspace that disentangles environment-specific dynamics, effectively reducing negative transfer across clients. Experiments across five heterogeneous environments demonstrate that Fed-SE improves average task success rates by approximately 18% over federated baselines, validating its effectiveness in robust cross-environment knowledge transfer in privacy-constrained deployments.