FedRand: Enhancing Privacy in Federated Learning with Randomized LoRA Subparameter Updates
Sangwoo Park, Seanie Lee, Byungjoo Kim, Sung Ju Hwang
2025-03-11
Summary
This paper introduces FedRand, a framework that helps keep your data private when training AI models across multiple devices by sharing only part of each model update and keeping the rest hidden on the device.
What's the problem?
Even with federated learning (where data stays on your device), sending full model updates to a central server can still leak private details, especially for vision-language models, which can memorize training examples like your photos or messages.
What's the solution?
FedRand has each device randomly choose which parts of the model update it will share; the remaining parts are trained locally but kept private, so the server only ever sees partial updates that can't easily reveal your data.
Why does it matter?
This protects your personal info from being exposed through AI training, making things like photo apps or chat assistants safer to use without risking privacy leaks.
Abstract
Federated Learning (FL) is a widely used framework for training models in a decentralized manner, ensuring that the central server does not have direct access to data from local clients. However, this approach may still fail to fully preserve data privacy, as models from local clients are exposed to the central server during the aggregation process. This issue becomes even more critical when training vision-language models (VLMs) with FL, as VLMs can easily memorize training data instances, making them vulnerable to membership inference attacks (MIAs). To address this challenge, we propose the FedRand framework, which avoids disclosing the full set of client parameters. In this framework, each client randomly selects subparameters of Low-Rank Adaptation (LoRA) from the server and keeps the remaining counterparts of the LoRA weights as private parameters. After training both parameters on the client's private dataset, only the non-private client parameters are sent back to the server for aggregation. This approach mitigates the risk of exposing client-side VLM parameters, thereby enhancing data privacy. We empirically validate that FedRand improves robustness against MIAs compared to relevant baselines while achieving accuracy comparable to methods that communicate full LoRA parameters across several benchmark datasets.
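The client-side procedure described in the abstract can be illustrated with a minimal sketch. This is not the authors' code: the layer dictionaries, the per-layer coin flip, and the `local_train` callback are all illustrative assumptions. The idea shown is that for each LoRA layer the client randomly designates either the A or the B matrix as "non-private" (initialized from the server), keeps its counterpart private (carried over from the client's own previous state), trains both on local data, and returns only the non-private half for aggregation.

```python
import numpy as np

rng = np.random.default_rng(0)

def fedrand_round(server_A, server_B, local_A, local_B, local_train):
    """One illustrative FedRand-style client round (a sketch, not the paper's exact algorithm).

    server_A / server_B: dicts of aggregated LoRA matrices, keyed by layer name.
    local_A / local_B:   the client's privately retained counterparts.
    local_train:         any callable that fine-tunes (A, B) on the client's data.
    """
    shared_updates = {}
    for layer in server_A:
        # Coin flip: is A or B the non-private (server-initialized) half for this layer?
        share_A = rng.random() < 0.5
        A = server_A[layer].copy() if share_A else local_A[layer]
        B = local_B[layer] if share_A else server_B[layer].copy()
        # Both halves are trained on the client's private dataset.
        A, B = local_train(layer, A, B)
        # Only the non-private half is sent back for aggregation;
        # the private counterpart stays on the device (bookkeeping omitted).
        shared_updates[layer] = ("A", A) if share_A else ("B", B)
    return shared_updates
```

Because the server never sees both halves of any layer's LoRA pair from a given client, reconstructing the full client-side weight update, and hence mounting a membership inference attack against it, becomes harder than when full LoRA parameters are communicated.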