See and Fix the Flaws: Enabling VLMs and Diffusion Models to Comprehend Visual Artifacts via Agentic Data Synthesis
Jaehyun Park, Minyoung Ahn, Minkyu Kim, Jonghyun Lee, Jae-Gil Lee, Dongmin Park
2026-02-25
Summary
This paper introduces a new system called ArtiAgent that automatically creates images with realistic flaws, or 'artifacts', that often appear in AI-generated pictures.
What's the problem?
AI image generators are getting better, but their pictures still often look fake because of noticeable imperfections. Fixing these imperfections is hard because it usually requires lots of examples of images *with* these flaws, and getting people to manually label those flaws is expensive and time-consuming. We need a way to automatically create these flawed images for training and testing.
What's the solution?
ArtiAgent works by using three 'agents'. First, a perception agent identifies and locates objects (and their sub-parts) within a real image. Then, a synthesis agent adds realistic flaws to those objects by subtly manipulating patch-level embeddings inside a diffusion transformer. Finally, a curation agent checks the quality of each added flaw and explains *why* it looks realistic, both locally (around the flaw) and globally (across the whole image). This process yields a dataset of 100,000 images with detailed annotations of their flaws.
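The three-step flow above can be sketched in code. This is a minimal, hypothetical illustration in NumPy, not the paper's implementation: the function `inject_artifact_patchwise`, the patch indices, and the embedding dimensions are all invented for demonstration, and Gaussian noise stands in for the paper's artifact-injection tools.

```python
import numpy as np

def inject_artifact_patchwise(patch_embeddings, target_patches, noise_scale=0.5, seed=0):
    """Perturb only the embeddings of selected patches (hypothetical sketch).

    patch_embeddings: (num_patches, dim) array of patch tokens, standing in
    for the token sequence inside a diffusion transformer.
    target_patches: indices of the patches covering the grounded entity.
    """
    rng = np.random.default_rng(seed)
    out = patch_embeddings.copy()
    # Add a localized perturbation to the target patches only.
    out[target_patches] += rng.normal(0.0, noise_scale,
                                      size=(len(target_patches), out.shape[1]))
    return out

# Toy pipeline mirroring the three agents:
# 1) perception: pick the patch indices for an entity (hard-coded here)
# 2) synthesis: perturb those patch embeddings
# 3) curation: accept only if the change is confined to the target patches
embeddings = np.zeros((16, 8))       # 16 patches, 8-dim tokens (illustrative)
entity_patches = [5, 6, 9, 10]       # patches covering the entity
edited = inject_artifact_patchwise(embeddings, entity_patches)

changed = np.where(np.abs(edited - embeddings).sum(axis=1) > 0)[0]
assert set(changed) == set(entity_patches)  # the artifact is localized
```

The curation check at the end captures the key property the paper aims for: the injected flaw should stay confined to the grounded entity while the rest of the image remains untouched.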
Why it matters?
This research is important because it provides a way to automatically improve AI image generators. By training them to recognize and fix these common flaws, we can create AI-generated images that are much more realistic and useful for a variety of applications.
Abstract
Despite recent advances in diffusion models, AI-generated images still often contain visual artifacts that compromise realism. Although more thorough pre-training and bigger models might reduce artifacts, there is no assurance that they can be completely eliminated, which makes artifact mitigation a crucial area of study. Previous artifact-aware methodologies depend on human-labeled artifact datasets, which are costly and difficult to scale, underscoring the need for an automated approach to reliably acquire artifact-annotated datasets. In this paper, we propose ArtiAgent, which efficiently creates pairs of real and artifact-injected images. It comprises three agents: a perception agent that recognizes and grounds entities and subentities from real images, a synthesis agent that introduces artifacts via artifact injection tools through novel patch-wise embedding manipulation within a diffusion transformer, and a curation agent that filters the synthesized artifacts and generates both local and global explanations for each instance. Using ArtiAgent, we synthesize 100K images with rich artifact annotations and demonstrate both efficacy and versatility across diverse applications. Code is available at link.