
Scrub It Out! Erasing Sensitive Memorization in Code Language Models via Machine Unlearning

Zhaoyang Chu, Yao Wan, Zhikun Zhang, Di Wang, Zhou Yang, Hongyu Zhang, Pan Zhou, Xuanhua Shi, Hai Jin, David Lo

2025-09-18

Summary

This paper investigates a way to remove sensitive information that code-generating AI models accidentally memorize from the data they were trained on, without having to completely retrain the models.

What's the problem?

AI models that write code, called Code Language Models, are really good at their job, but they sometimes unintentionally memorize and can reproduce private or confidential code snippets they saw during training. Existing solutions to this problem require completely retraining the AI, which is expensive and time-consuming. This means if a model leaks sensitive data, fixing it is a huge undertaking.

What's the solution?

The researchers explored a technique called 'machine unlearning,' which is like editing a model *after* it's already been trained. They developed a new method called CodeEraser that specifically targets and removes the memorized sensitive code segments while trying to keep the rest of the model working correctly. They tested this on several different code-generating AI models and showed it effectively removes the sensitive information without significantly harming the model's overall performance.
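
To make the core idea concrete, here is a minimal sketch of the simplest variant described in the paper, gradient-ascent unlearning: instead of minimizing the language-modeling loss on memorized sensitive samples, the update maximizes it so the model is pushed away from reproducing them. The model name, forget-set contents, and hyperparameters below are illustrative assumptions, not the paper's actual setup or code.

```python
# Minimal sketch of gradient-ascent unlearning (illustrative, not the paper's code).
# Assumes a causal code LM from Hugging Face and a small "forget set" of
# memorized sensitive snippets; model choice and learning rate are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "codeparrot/codeparrot-small"  # illustrative choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Hypothetical sensitive memorized samples targeted for unlearning.
forget_samples = ["API_KEY = 'sk-example-not-real'"]

model.train()
for text in forget_samples:
    batch = tokenizer(text, return_tensors="pt")
    outputs = model(**batch, labels=batch["input_ids"])
    # Gradient ascent: negate the language-modeling loss so the update
    # pushes the model *away* from reproducing the memorized text.
    loss = -outputs.loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

In practice this vanilla form can damage the model's general coding ability, which is why the paper also studies constraint-based unlearning and introduces CodeEraser's more selective approach.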

Why it matters?

This research is important because it offers a practical way to address privacy concerns with code-generating AI. Instead of costly full retraining, we might be able to 'erase' specific sensitive data, making these powerful tools safer to use with real-world codebases that contain confidential information. This could allow for wider adoption of these AI tools without the constant fear of data leaks.

Abstract

While Code Language Models (CLMs) have demonstrated superior performance in software engineering tasks such as code generation and summarization, recent empirical studies reveal a critical privacy vulnerability: these models exhibit unintended memorization of sensitive training data, enabling verbatim reproduction of confidential information when specifically prompted. To address this issue, several approaches, including training data de-duplication and differential privacy augmentation, have been proposed. However, these methods require full-model retraining for deployed CLMs, which incurs substantial computational costs. In this paper, we aim to answer the following research question: Can sensitive information memorized by CLMs be erased effectively and efficiently? We conduct a pioneering investigation into erasing sensitive memorization in CLMs through machine unlearning - a post-hoc modification method that removes specific information from trained models without requiring full retraining. Specifically, we first quantify the memorization risks of sensitive data within CLM training datasets and curate a high-risk dataset of 50,000 sensitive memorized samples as unlearning targets. We study two widely used gradient ascent-based unlearning approaches: the vanilla and constraint-based methods, and introduce CodeEraser, an advanced variant that selectively unlearns sensitive memorized segments in code while preserving the structural integrity and functional correctness of the surrounding code. Extensive experiments on three families of CLMs, i.e., CodeParrot, CodeGen-Mono, and Qwen2.5-Coder, validate the effectiveness and efficiency of CodeEraser in erasing targeted sensitive memorization while maintaining model utility.
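The abstract describes CodeEraser as selectively unlearning the sensitive memorized segments while preserving the surrounding code. Below is a hedged sketch of what such a segment-selective objective could look like: gradient ascent on the tokens of the sensitive span and a standard language-modeling loss on the rest. The function name, masking scheme, and equal weighting of the two terms are assumptions for illustration, not the paper's exact formulation.

```python
# Sketch of a segment-selective unlearning loss (assumed form, not CodeEraser's code):
# ascend on sensitive-span tokens, descend on the surrounding code so its
# structure and functionality are preserved.
import torch
import torch.nn.functional as F

def selective_unlearning_loss(logits, input_ids, sensitive_mask):
    """logits: (1, T, V); input_ids: (1, T); sensitive_mask: (1, T) bool,
    True where a token belongs to the sensitive segment."""
    # Shift for next-token prediction, as in standard causal LM training.
    shift_logits = logits[:, :-1, :]
    shift_labels = input_ids[:, 1:]
    shift_mask = sensitive_mask[:, 1:]

    # Per-token cross-entropy, kept unreduced so we can split it by mask.
    token_loss = F.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        shift_labels.reshape(-1),
        reduction="none",
    ).view_as(shift_labels).float()

    forget_loss = token_loss[shift_mask].mean()    # sensitive segment
    retain_loss = token_loss[~shift_mask].mean()   # surrounding code

    # Negative sign on the forget term = gradient ascent on the secret;
    # positive retain term keeps the rest of the code well modeled.
    return -forget_loss + retain_loss
```

The key design choice this illustrates is that forgetting is applied at the granularity of the sensitive segment rather than the whole sample, which is how the paper reports erasing targeted memorization while maintaining model utility.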