
Efficient Training with Denoised Neural Weights

Yifan Gong, Zheng Zhan, Yanyu Li, Yerlan Idelbayev, Andrey Zharkov, Kfir Aberman, Sergey Tulyakov, Yanzhi Wang, Jian Ren

2024-07-17

Summary

This paper introduces a method for generating initial neural network weights, called Efficient Training with Denoised Neural Weights, which substantially speeds up the training of deep learning models.

What's the problem?

Training deep neural network models requires careful initialization of their weights (the parameters that the model learns). If these weights are not set well from the start, training takes longer and final performance suffers. Manually tuning the initialization is time-consuming and prone to error.

What's the solution?

To solve this problem, the authors build a weight generator, using image-to-image translation with generative adversarial networks (GANs) as the test case. They collect a dataset of image editing concepts paired with the trained weights of models for each concept. Because a full model has far too many weights to predict at once, the weights are divided into equal-sized blocks, each assigned an index, and a diffusion model is trained on this data, conditioned on both the concept's text description and the block index. Initializing an image translation model with the denoised weights predicted by this diffusion model cuts training for a new concept to about 43.3 seconds, roughly a 15x speedup over training from scratch with Pix2pix, while also achieving better image generation quality.
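
To make the block-and-index idea concrete, here is a minimal PyTorch sketch (not the authors' code) of how trained weights might be flattened and split into equal-sized, indexed blocks; the block size and zero padding are illustrative assumptions.

```python
import torch

def weights_to_blocks(model: torch.nn.Module, block_size: int = 4096):
    """Flatten all parameters and split them into (index, block) pairs."""
    flat = torch.cat([p.detach().flatten() for p in model.parameters()])
    pad = (-flat.numel()) % block_size  # zero-pad to a multiple of block_size
    flat = torch.cat([flat, flat.new_zeros(pad)])
    blocks = flat.view(-1, block_size)
    # Each (concept text, block index) pair would then condition the
    # diffusion model that learns to denoise the matching weight block.
    return [(i, blocks[i]) for i in range(blocks.shape[0])]
```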

Why it matters?

This research matters because it makes training deep learning models more efficient. By providing a better way to initialize weights, it helps researchers and developers build high-performing models more quickly, which is especially valuable in fast-moving fields like computer vision.

Abstract

Good weight initialization serves as an effective measure to reduce the training cost of a deep neural network (DNN) model. The choice of how to initialize parameters is challenging and may require manual tuning, which can be time-consuming and prone to human error. To overcome such limitations, this work takes a novel step towards building a weight generator to synthesize the neural weights for initialization. We use the image-to-image translation task with generative adversarial networks (GANs) as an example due to the ease of collecting model weights spanning a wide range. Specifically, we first collect a dataset with various image editing concepts and their corresponding trained weights, which are later used for the training of the weight generator. To address the different characteristics among layers and the substantial number of weights to be predicted, we divide the weights into equal-sized blocks and assign each block an index. Subsequently, a diffusion model is trained with such a dataset using both text conditions of the concept and the block indexes. By initializing the image translation model with the denoised weights predicted by our diffusion model, the training requires only 43.3 seconds. Compared to training from scratch (i.e., Pix2pix), we achieve a 15x training time acceleration for a new concept while obtaining even better image generation quality.
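
As an illustration of the initialization step, the sketch below assumes a hypothetical weight generator exposing a `sample(concept, block_index)` method; it stitches the denoised blocks back into a flat vector and copies them into a fresh model before the short fine-tuning run. None of these names come from the paper.

```python
import torch

def init_from_denoised_blocks(model: torch.nn.Module, generator,
                              concept: str, block_size: int = 4096):
    """Initialize `model` from blocks sampled by a (hypothetical) weight generator."""
    total = sum(p.numel() for p in model.parameters())
    n_blocks = -(-total // block_size)  # ceiling division
    blocks = [generator.sample(concept, i) for i in range(n_blocks)]
    flat = torch.cat(blocks)[:total]    # drop the padding tail
    offset = 0
    with torch.no_grad():
        for p in model.parameters():
            n = p.numel()
            p.copy_(flat[offset:offset + n].view_as(p))
            offset += n
    return model
```

The brief fine-tuning the paper reports (about 43.3 seconds) would then start from this initialization rather than from random weights.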