WeEdit: A Dataset, Benchmark and Glyph-Guided Framework for Text-centric Image Editing

Hui Zhang, Juntao Liu, Zongkai Liu, Liqiang Niu, Fandong Meng, Zuxuan Wu, Yu-Gang Jiang

2026-03-13

Summary

This paper focuses on improving how well computers can edit text *within* images, based on what a user asks them to do. It's about making changes to the words you see in a picture, like changing what they say or moving them around.

What's the problem?

Current image editing programs are pretty good at changing objects or the overall style of a picture, but they often mess up when you ask them to edit the text inside an image. The text often comes out blurry, or the computer makes up characters that weren't there before. This happens because these programs haven't been specifically trained to handle text editing, and there aren't enough good examples and ways to test how well they're doing.

What's the solution?

The researchers created a system called WeEdit to fix this. They built an automatic pipeline that describes images as web pages (HTML), so a text edit becomes a simple change to the page before it is rendered; this lets them generate 330,000 training pairs covering many kinds of edits across 15 languages. Then they trained a model in two stages: first, glyph-guided fine-tuning showed it exactly where the text sits and what the characters should look like; second, a reinforcement learning stage rewarded it for following the instruction, keeping the text sharp, and leaving the rest of the image untouched. Together, these steps teach the model to edit text accurately.
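The paper does not publish its reward code, but the multi-objective idea behind the second training stage can be sketched as a weighted blend of three scores. Everything below (function name, weights, how each score would be measured) is illustrative, not taken from the paper:

```python
def combined_reward(instruction_score: float,
                    text_clarity_score: float,
                    background_score: float,
                    weights=(0.4, 0.3, 0.3)) -> float:
    """Blend three editing objectives into one scalar reward.

    Each score is assumed to be normalized to [0, 1]:
      - instruction_score: how well the edit follows the user's request
        (e.g. judged by a vision-language model)
      - text_clarity_score: how readable the edited text is
        (e.g. OCR accuracy on the edited region)
      - background_score: how unchanged the non-text regions are
        (e.g. pixel similarity outside the edit mask)
    """
    w_instr, w_clarity, w_bg = weights
    return (w_instr * instruction_score
            + w_clarity * text_clarity_score
            + w_bg * background_score)

# A perfect edit on all three axes earns a reward close to 1.0;
# a sharp edit that wrecks the background is penalized.
print(combined_reward(1.0, 1.0, 1.0))
print(combined_reward(1.0, 1.0, 0.2))
```

Because the weights sum to 1 and each score lies in [0, 1], the reward stays in [0, 1], which keeps the reinforcement learning signal easy to balance across the three objectives.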

Why it matters?

This work is important because it makes image editing more useful and precise. Being able to easily edit text in images has lots of applications, like creating memes, updating screenshots, or fixing errors in graphics. By providing a better system and a standardized way to measure progress, this research helps move the field of image editing forward and makes it easier for anyone to edit the text inside images.

Abstract

Instruction-based image editing aims to modify specific content within existing images according to user-provided instructions while preserving non-target regions. Beyond traditional object- and style-centric manipulation, text-centric image editing focuses on modifying, translating, or rearranging textual elements embedded within images. However, existing leading models often struggle to execute complex text editing precisely, frequently producing blurry or hallucinated characters. We attribute these failures primarily to the lack of specialized training paradigms tailored for text-centric editing, as well as the absence of large-scale datasets and standardized benchmarks necessary for a closed-loop training and evaluation system. To address these limitations, we present WeEdit, a systematic solution encompassing a scalable data construction pipeline, two benchmarks, and a tailored two-stage training strategy. Specifically, we propose a novel HTML-based automatic editing pipeline, which generates 330K training pairs covering diverse editing operations and 15 languages, accompanied by standardized bilingual and multilingual benchmarks for comprehensive evaluation. On the algorithmic side, we employ glyph-guided supervised fine-tuning to inject explicit spatial and content priors, followed by a multi-objective reinforcement learning stage to align generation with instruction adherence, text clarity, and background preservation. Extensive experiments demonstrate that WeEdit outperforms previous open-source models by a clear margin across diverse editing operations.
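The abstract mentions an HTML-based automatic editing pipeline for building training pairs. One plausible minimal sketch of the idea: express the scene as HTML, apply the text edit in markup, and render both versions so only the target text differs. The helper name, template, and instruction wording below are illustrative, and the actual rendering step (which would need a headless browser or similar tool) is omitted:

```python
import html

def make_edit_pair(template: str, old_text: str, new_text: str):
    """Produce a (source HTML, edited HTML, instruction) triple.

    Because the page is described in HTML, the edit is a plain string
    substitution in markup; rendering both versions would then yield a
    pixel-aligned before/after image pair where only the text changed.
    """
    src_html = template.format(text=html.escape(old_text))
    dst_html = template.format(text=html.escape(new_text))
    instruction = f'Replace the text "{old_text}" with "{new_text}"'
    return src_html, dst_html, instruction

# Illustrative template: a styled banner whose wording we want to edit.
TEMPLATE = '<div style="font-size:32px;color:#222">{text}</div>'
src, dst, instr = make_edit_pair(TEMPLATE, "Grand Opening", "Now Hiring")
print(instr)
```

Scaling this pattern over many templates, fonts, languages, and edit types (replace, translate, rearrange) would yield the kind of large, diverse corpus the paper describes, since every pair comes with its ground-truth instruction for free.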