FedSVD: Adaptive Orthogonalization for Private Federated Learning with LoRA
Seanie Lee, Sangwoo Park, Dong Bok Lee, Dominik Wagner, Haebin Seong, Tobias Bocklet, Juho Lee, Sung Ju Hwang
2025-05-20
Summary
This paper introduces FedSVD, a technique for fine-tuning language models across many devices with federated learning and LoRA while keeping each user's data private.
What's the problem?
When language models are fine-tuned across many users' devices with federated learning, differential privacy protects users by clipping each update and adding noise. That noise can destabilize training, and with low-rank adapters (LoRA) the problem is worse: the update is a product of two small matrices, so the injected noise gets amplified when the factors are multiplied together.
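A minimal sketch of the clip-and-add-noise step described above, in the style of the Gaussian mechanism used by DP-SGD. The function name and the specific constants are illustrative, not from the paper:

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_std=0.5, rng=None):
    """Clip an update's L2 norm to clip_norm, then add Gaussian noise.

    This is the standard DP-style treatment of a client update; the
    added noise is what can disrupt learning, especially for LoRA.
    """
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / norm)  # scale down if too large
    return clipped + rng.normal(0.0, noise_std, size=update.shape)

raw = np.ones(4)                     # raw client update, L2 norm = 2.0
noisy = privatize_update(raw)        # what the server actually receives
```

With `noise_std=0.0` the function reduces to pure norm clipping, which makes the scaling behavior easy to check.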
What's the solution?
To solve this, the researchers use singular value decomposition (SVD) to re-factorize the aggregated LoRA update so that one factor stays orthonormal. With an orthonormal basis, the privacy noise is no longer amplified when the two factors are multiplied, so the models can learn well without giving up privacy.
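The re-factorization step can be sketched as follows: take the product of the two LoRA factors, run an SVD, and rebuild the factors so that one of them has orthonormal rows. This is a simplified illustration of the idea, not the paper's exact algorithm; the matrix sizes and variable names are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical aggregated LoRA factors: the weight update is delta = B @ A.
d, k, r = 16, 12, 4
B = rng.normal(size=(d, r))
A = rng.normal(size=(r, k))
delta = B @ A                              # rank-r update

# Re-factorize with SVD so the new A has orthonormal rows. An orthonormal
# basis keeps the product from amplifying injected privacy noise.
U, S, Vt = np.linalg.svd(delta, full_matrices=False)
B_new = U[:, :r] * S[:r]                   # absorb singular values into B
A_new = Vt[:r, :]                          # orthonormal rows

# The re-factored pair represents exactly the same update.
assert np.allclose(B_new @ A_new, delta)
```

The key property is that `A_new @ A_new.T` is the identity, so multiplying by `A_new` preserves norms rather than stretching them.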
Why does it matter?
This matters because it enables safer and more effective AI training on personal devices, like phones or laptops, so people can benefit from smarter models without their private information being exposed.
Abstract
FedSVD is a method for stable and effective fine-tuning of pre-trained language models in federated learning with differential privacy. It uses singular value decomposition to re-orthogonalize LoRA updates, limiting noise amplification and maintaining performance.