
Federated Sketching LoRA: On-Device Collaborative Fine-Tuning of Large Language Models

Wenzhi Fang, Dong-Jun Han, Liangqi Yuan, Seyyedali Hosseinalipour, Christopher G. Brinton

2025-02-05

Summary

This paper talks about a new method called Federated Sketching LoRA (FSLoRA) that lets many different devices, like phones or computers, work together to fine-tune large language models, even when those devices have very different capabilities.

What's the problem?

When many devices fine-tune a large language model together, it's hard to balance performance with each device's limited processing power: larger (higher-rank) LoRA add-ons usually work better, but weaker devices can't handle them. Current methods either can't explain why they work or require too much extra computing, which isn't practical for many devices.

What's the solution?

The researchers created FSLoRA, which uses a technique called 'sketching' to let each device train only a slice of the shared LoRA modules (the small add-on matrices) that its hardware can handle. This way, devices with less power can still contribute to improving the model without getting overwhelmed. By adjusting each device's 'sketching ratio', FSLoRA controls how big that slice is, making the whole process more flexible and efficient.
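
To make the idea concrete, here is a minimal, hypothetical sketch (not the authors' code) of what one device's step might look like. It assumes a PyTorch-style LoRA factorization where the global module is B·A with rank r, A has shape (r, k), B has shape (d, r), and the device trains only a randomly sampled subset of those r rank dimensions; the function and variable names are illustrative only.

```python
import torch

def sketched_lora_step(A_global, B_global, sketch_ratio, lr, loss_fn):
    """Hypothetical one-device step: train only a sampled slice of the
    global LoRA module B @ A, sized by this device's sketch_ratio."""
    r = A_global.shape[0]                     # global LoRA rank
    r_local = max(1, int(sketch_ratio * r))   # sub-rank this device can afford
    idx = torch.randperm(r)[:r_local]         # sampled rank dimensions (the "sketch")

    # Pull out the submatrices this device will actually train.
    A_sub = A_global[idx, :].clone().requires_grad_(True)   # (r_local, k)
    B_sub = B_global[:, idx].clone().requires_grad_(True)   # (d, r_local)

    # One local gradient step; loss_fn scores the low-rank update B_sub @ A_sub.
    loss = loss_fn(B_sub @ A_sub)
    loss.backward()
    with torch.no_grad():
        A_sub -= lr * A_sub.grad
        B_sub -= lr * B_sub.grad

    # Send back only the trained rows/columns plus their indices.
    return idx, A_sub.detach(), B_sub.detach()
```

In this picture, a phone with a small sketching ratio would train only a few rank dimensions, while a laptop could train most or all of them.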

Why it matters?

This matters because it allows more devices to help improve AI language models, even if they're not very powerful. This could lead to better AI that learns from a wider range of real-world data, potentially making AI systems more useful and accessible to more people.

Abstract

Fine-tuning large language models (LLMs) on devices is attracting increasing interest. Recent works have fused low-rank adaptation (LoRA) techniques with federated fine-tuning to mitigate challenges associated with device model sizes and data scarcity. Still, the heterogeneity of computational resources remains a critical bottleneck: while higher-rank modules generally enhance performance, varying device capabilities constrain LoRA's feasible rank range. Existing approaches attempting to resolve this issue either lack analytical justification or impose additional computational overhead, leaving a wide gap for an efficient and theoretically-grounded solution. To address these challenges, we propose federated sketching LoRA (FSLoRA), which leverages a sketching mechanism to enable devices to selectively update submatrices of global LoRA modules maintained by the server. By adjusting the sketching ratios, which determine the ranks of the submatrices on the devices, FSLoRA flexibly adapts to device-specific communication and computational constraints. We provide a rigorous convergence analysis of FSLoRA that characterizes how the sketching ratios affect the convergence rate. Through comprehensive experiments on multiple datasets and LLM models, we demonstrate FSLoRA's superior performance compared to various baselines.
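
As a rough illustration of the other half of the loop, the following hypothetical snippet (again, not the authors' implementation) shows how a server could merge sketched updates coming from devices with different sketching ratios back into the shared global LoRA module, using the same shapes and return values assumed in the device-side sketch above.

```python
import torch

def aggregate_sketched_updates(A_global, B_global, device_updates):
    """Hypothetical server-side merge: average each rank dimension over the
    devices that trained it, so devices with different sketching ratios can
    all contribute to the same global LoRA module."""
    r = A_global.shape[0]
    A_accum = torch.zeros_like(A_global)
    B_accum = torch.zeros_like(B_global)
    counts = torch.zeros(r)

    # device_updates: list of (idx, A_sub, B_sub) tuples returned by devices.
    for idx, A_sub, B_sub in device_updates:
        A_accum[idx, :] += A_sub
        B_accum[:, idx] += B_sub
        counts[idx] += 1

    touched = counts > 0   # rank dimensions trained by at least one device
    A_global[touched, :] = A_accum[touched, :] / counts[touched].unsqueeze(1)
    B_global[:, touched] = B_accum[:, touched] / counts[touched].unsqueeze(0)
    return A_global, B_global
```

Because each rank dimension is averaged only over the devices that actually trained it, weak devices with small sketching ratios still contribute to the shared module without holding back the stronger ones.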