Data Advisor: Dynamic Data Curation for Safety Alignment of Large Language Models
Fei Wang, Ninareh Mehrabi, Palash Goyal, Rahul Gupta, Kai-Wei Chang, Aram Galstyan
2024-10-13

Summary
This paper introduces Data Advisor, a new method for improving the safety and quality of data used to train large language models (LLMs) by dynamically curating the data as it is generated, targeting weaknesses identified in the current dataset.
What's the problem?
Data quality is crucial for aligning LLMs with human values and ensuring they produce safe and appropriate responses. However, data generated by LLMs often has problems, such as missing important information or containing low-quality examples. This can lead to models that do not perform well or behave in undesirable ways.
What's the solution?
To address these issues, the authors propose Data Advisor, which continuously monitors the generated data and identifies its weaknesses. Guided by a set of predefined principles, Data Advisor then advises the next round of data generation: for example, it can suggest adding examples for an underrepresented safety aspect or removing poor-quality ones, helping to create a more balanced and effective dataset. The method can be easily integrated into existing data generation pipelines to enhance overall data quality, as sketched below.
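To make the iterative loop concrete, here is a minimal Python sketch of the monitor → identify weakness → advise → generate cycle described above. All names (`data_advisor_loop`, the `llm` callable, the prompt wording, the batch size) are illustrative assumptions for this summary, not the authors' actual implementation or prompts.

```python
def data_advisor_loop(llm, principles, seed_data, n_iterations=10):
    """Hypothetical sketch of a Data-Advisor-style generation loop.

    llm: a callable that takes a prompt string and returns generated text.
    principles: predefined principles describing the desired dataset.
    seed_data: initial datapoints to start from.
    """
    dataset = list(seed_data)
    for _ in range(n_iterations):
        # 1. Monitor: summarize the current dataset against the principles.
        summary = llm(
            "Summarize the coverage of this safety dataset with respect to "
            f"these principles:\n{principles}\n\nData so far:\n{dataset}"
        )

        # 2. Identify a weakness: an underrepresented or missing aspect.
        weakness = llm(
            "Given this summary, name one safety aspect that is "
            f"underrepresented or missing:\n{summary}"
        )

        # 3. Advise: turn the weakness into guidance for the next round.
        advice = llm(
            "Write instructions for generating new datapoints that address "
            f"this weakness:\n{weakness}"
        )

        # 4. Generate: produce the next batch following the advice.
        new_points = llm(
            "Generate 5 (prompt, safe response) pairs following these "
            f"instructions:\n{advice}"
        )
        # Append the raw output; a real pipeline would parse and filter it.
        dataset.append(new_points)

    return dataset
```

The key design point this sketch tries to capture is that the advisor's feedback is computed from the dataset built so far, so each generation round is steered toward whatever the previous rounds left underrepresented, rather than sampling independently every time.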
Why it matters?
This research is important because it addresses the critical challenge of ensuring that LLMs are safe and aligned with human values. By improving the quality of training data through dynamic curation, Data Advisor helps create more reliable AI systems that can better understand and respond to human needs, ultimately leading to safer and more ethical AI applications.
Abstract
Data is a crucial element in large language model (LLM) alignment. Recent studies have explored using LLMs for efficient data collection. However, LLM-generated data often suffers from quality issues, with underrepresented or absent aspects and low-quality datapoints. To address these problems, we propose Data Advisor, an enhanced LLM-based method for generating data that takes into account the characteristics of the desired dataset. With a set of pre-defined principles in hand, Data Advisor monitors the status of the generated data, identifies weaknesses in the current dataset, and advises the next iteration of data generation accordingly. Data Advisor can be easily integrated into existing data generation methods to enhance data quality and coverage. Experiments on safety alignment of three representative LLMs (i.e., Mistral, Llama2, and Falcon) demonstrate the effectiveness of Data Advisor in enhancing model safety against various fine-grained safety issues without sacrificing model utility.