
EtCon: Edit-then-Consolidate for Reliable Knowledge Editing

Ruilin Li, Yibin Wang, Wenhong Zhu, Chenglin Li, Jinghao Zhang, Chenliang Li, Junchi Yan, Jiaqi Wang

2025-12-11


Summary

This paper presents a way to update specific facts inside large language models, such as ChatGPT, without retraining them from scratch. This task is known as 'knowledge editing': changing particular pieces of knowledge the model holds while leaving everything else intact.

What's the problem?

Current methods for knowledge editing work well in controlled tests, but they often fail in real-world, lifelong-learning settings where the model receives a continuous stream of edits. Two main issues cause this gap: first, the model tends to overfit to the new fact, degrading capabilities it already had. Second, the new information isn't properly integrated into how the model actually generates text, so it doesn't consistently use the updated knowledge when responding to questions.

What's the solution?

The researchers propose a new approach called 'Edit-then-Consolidate'. It works in two steps. First, they use 'Targeted Proximal Supervised Fine-Tuning' (TPSFT) to carefully write in the new fact, with a trust-region objective that limits how far the updated model can drift from the original, minimizing the risk of forgetting previous knowledge. Then, they use 'Group Relative Policy Optimization' (GRPO) to consolidate the edit, rewarding generations whose reasoning actually relies on the new fact so that the model's responses stay consistent with the updated knowledge. This consolidation step is key to making the edits stick.
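To make the first step concrete, here is a toy numeric sketch of what a trust-region (proximal) supervised objective can look like: a cross-entropy term that fits the new fact, plus a KL penalty that keeps the edited model's next-token distribution close to the pre-edit reference. The function name `tpsft_loss`, the `beta` weight, and the exact form are illustrative assumptions, not the paper's actual implementation.

```python
import math

def kl_divergence(p, q):
    """KL(p || q) for discrete distributions given as probability lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def tpsft_loss(edited_probs, reference_probs, target_idx, beta=0.1):
    """Hypothetical trust-region SFT objective: fit the new fact's target
    token, while a KL penalty limits drift from the pre-edit policy."""
    ce = -math.log(edited_probs[target_idx])           # learn the new fact
    kl = kl_divergence(edited_probs, reference_probs)  # limit policy drift
    return ce + beta * kl

# The edited model now puts most mass on the corrected token (index 2);
# the KL term discourages it from wandering far from the reference.
reference = [0.70, 0.20, 0.10]
edited    = [0.10, 0.10, 0.80]
loss = tpsft_loss(edited, reference, target_idx=2, beta=0.1)
```

With `beta = 0` this reduces to plain fine-tuning on the new fact; raising `beta` trades edit strength for preservation of prior behavior, which is the knob that mitigates overfitting.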

Why it matters?

This research is important because it makes knowledge editing much more practical. If we can reliably update the information in large language models without full retraining, it will be easier to keep them accurate and up-to-date, which is crucial for their usefulness in real-world applications like answering questions, providing information, and assisting with tasks.

Abstract

Knowledge editing aims to update specific facts in large language models (LLMs) without full retraining. Prior efforts sought to tune the knowledge layers of LLMs, proving effective for making selective edits. However, a significant gap exists between their performance in controlled, teacher-forcing evaluations and their real-world effectiveness in lifelong learning scenarios, which greatly limits their practical applicability. This work's empirical analysis reveals two recurring issues associated with this gap: (1) Most traditional methods lead the edited model to overfit to the new fact, thereby degrading pre-trained capabilities; (2) There is a critical absence of a knowledge consolidation stage, leaving new facts insufficiently integrated into LLMs' inference-time behavior under autoregressive generation, thereby leading to a mismatch between parametric knowledge and actual generation behavior. To this end, we propose Edit-then-Consolidate, a novel knowledge editing paradigm that aims to bridge the gap between theoretical knowledge editing methods and their real-world applicability. Specifically, (1) our framework mitigates overfitting via Targeted Proximal Supervised Fine-Tuning (TPSFT) that localizes the edit via a trust-region objective to limit policy drift; (2) Then, a consolidation stage using Group Relative Policy Optimization (GRPO) aligns the edited knowledge with CoT-based inference policy by optimizing trajectory-level behavior under comprehensive reward signals. Extensive experiments demonstrate our framework consistently improves editing reliability and generalization under real-world evaluations, while better preserving locality and pre-trained capabilities.
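The consolidation stage above uses GRPO, whose core mechanic is standardizing each sampled trajectory's reward against its own group, so no learned value function is needed. The sketch below shows only that advantage computation; the reward values are illustrative, and the paper's actual reward signals and trajectory handling are more elaborate.

```python
import statistics

def group_relative_advantages(rewards):
    """GRPO-style advantages: standardize each rollout's reward against
    the mean and population std of its own sampled group."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against zero variance
    return [(r - mean) / std for r in rewards]

# Four CoT rollouts for one edited fact; reward 1.0 means the answer is
# consistent with the new knowledge, 0.0 means it reverted to the old fact.
rewards = [1.0, 0.0, 1.0, 0.0]
advantages = group_relative_advantages(rewards)  # -> [1.0, -1.0, 1.0, -1.0]
```

Rollouts that use the edited fact get positive advantage and are reinforced; rollouts that fall back on stale knowledge get negative advantage, which is how the edit gets aligned with the model's autoregressive generation behavior.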