FedRE: A Representation Entanglement Framework for Model-Heterogeneous Federated Learning
Yuan Yao, Lixu Wang, Jiaqi Wu, Jin Song, Simin Chen, Zehua Wang, Zijian Tian, Wei Chen, Huixia Li, Xiaoxiao Li
2025-12-01
Summary
This paper introduces a new method called Federated Representation Entanglement, or FedRE, for improving privacy and efficiency in federated learning, which is a way to train AI models using data from many different sources without directly sharing the data itself.
What's the problem?
Traditional federated learning assumes every device trains the same AI model architecture, but in practice devices differ in both capabilities and data. This mismatch, called heterogeneity, makes standard methods less effective. Privacy is also a concern: even shared model updates can leak information about the original data, and exchanging large updates between devices and the central server is slow and costly.
What's the solution?
FedRE tackles these issues by having each device combine all of its local feature representations into a single 'entangled' representation using normalized random weights, and apply the same weights to mix the corresponding one-hot labels into an entangled-label encoding. This pair is sent to the central server, which uses it to train a global classifier. The random weights are resampled every training round, which adds diversity, keeps the classifier from becoming overconfident, and smooths its decision boundaries. Because each device uploads only one combined representation rather than many individual ones, the approach also reduces the risk of privacy breaches and lowers communication costs.
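The client-side step can be sketched in a few lines of NumPy. This is a minimal illustration of the idea, not the paper's implementation: all shapes, variable names, and the random feature extractor stand-in are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a client holds n local representations of dimension d
# (extracted by its own, possibly unique, feature extractor) and one-hot
# labels over c classes. Shapes are illustrative only.
n, d, c = 8, 16, 4
reps = rng.normal(size=(n, d))                   # local representations
labels = np.eye(c)[rng.integers(0, c, size=n)]   # one-hot label encodings

# Sample random weights and normalize them to sum to 1
w = rng.random(n)
w /= w.sum()

# Entangled representation: one weighted combination of all local reps
entangled_rep = w @ reps      # shape (d,)
# The same weights mix the one-hot labels into an entangled-label encoding,
# which is a valid probability vector because the weights sum to 1
entangled_label = w @ labels  # shape (c,)

# Only (entangled_rep, entangled_label) is uploaded; the weights are
# resampled each round, so no individual representation ever leaves the device.
```

Because the weights are fresh each round, the same local data yields a different uploaded pair every time, which is what introduces the diversity described above.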
Why does it matter?
This research is important because it makes federated learning more practical for real-world scenarios where devices are diverse and privacy is a major concern. By improving both performance and privacy while reducing communication overhead, FedRE opens the door to more widespread use of this powerful technique for collaborative AI development.
Abstract
Federated learning (FL) enables collaborative training across clients without compromising privacy. While most existing FL methods assume homogeneous model architectures, client heterogeneity in data and resources renders this assumption impractical, motivating model-heterogeneous FL. To address this problem, we propose Federated Representation Entanglement (FedRE), a framework built upon a novel form of client knowledge termed entangled representation. In FedRE, each client aggregates its local representations into a single entangled representation using normalized random weights and applies the same weights to integrate the corresponding one-hot label encodings into the entangled-label encoding. Both are then uploaded to the server to train a global classifier. During training, each entangled representation is supervised across categories via its entangled-label encoding, while random weights are resampled each round to introduce diversity, mitigating the global classifier's overconfidence and promoting smoother decision boundaries. Furthermore, each client uploads a single cross-category entangled representation along with its entangled-label encoding, mitigating the risk of representation inversion attacks and reducing communication overhead. Extensive experiments demonstrate that FedRE achieves an effective trade-off among model performance, privacy protection, and communication overhead. The code is available at https://github.com/AIResearch-Group/FedRE.
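The server-side training described in the abstract, supervising each entangled representation across categories via its entangled-label encoding, amounts to optimizing a soft-label cross-entropy. The NumPy sketch below shows this for a plain linear classifier; the classifier form, dimensions, learning rate, and step count are all assumptions for illustration, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical server-side data: one uploaded pair per client.
d, c, m = 16, 4, 5           # feature dim, classes, number of clients
entangled_reps = rng.normal(size=(m, d))
entangled_labels = rng.random((m, c))
entangled_labels /= entangled_labels.sum(axis=1, keepdims=True)  # soft targets

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerically stable softmax
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Train a linear classifier by gradient descent on soft-label cross-entropy:
# each entangled representation is supervised across all categories at once.
W = np.zeros((d, c))
for _ in range(300):
    probs = softmax(entangled_reps @ W)
    grad = entangled_reps.T @ (probs - entangled_labels) / m
    W -= 0.1 * grad

loss = -np.mean(np.sum(entangled_labels
                       * np.log(softmax(entangled_reps @ W)), axis=1))
```

Because the targets are mixtures rather than one-hot vectors, the loss cannot be driven to zero; it bottoms out at the entropy of the entangled labels, which is one way to see why this supervision discourages overconfident predictions.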