AI Navigate

WeEdit: A Dataset, Benchmark and Glyph-Guided Framework for Text-centric Image Editing

arXiv cs.CV / 3/13/2026

📰 News · Models & Research

Key Points

  • WeEdit presents a scalable data construction pipeline, two benchmarks, and a tailored two-stage training strategy for text-centric image editing.
  • It introduces an HTML-based automatic editing pipeline that generates about 330K training pairs across 15 languages, enabling multilingual text editing in images.
  • The framework uses glyph-guided supervised fine-tuning to inject explicit spatial and content priors, followed by multi-objective reinforcement learning to improve instruction adherence, text clarity, and background preservation.
  • The approach provides standardized bilingual and multilingual benchmarks for comprehensive evaluation of text-centric image editing models.
  • Experiments show WeEdit outperforming previous open-source models across diverse editing operations.
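The HTML-based data pipeline mentioned above is not detailed in this summary, but the core idea, programmatically editing a text node in an HTML snippet so that rendering the original and the edited markup yields a (source image, target image, instruction) training pair, can be sketched as follows. The function name, record fields, and instruction template are illustrative assumptions, not the paper's actual API; rendering the HTML to images is left to an off-the-shelf renderer and not shown.

```python
# Hypothetical sketch of HTML-based editing-pair construction.
# Field names and the instruction template are assumptions for illustration;
# rendering each HTML string to an image (the actual training input) is omitted.

def make_edit_pair(html: str, old_text: str, new_text: str) -> dict:
    """Replace one text occurrence in the HTML and record the edit triple."""
    if old_text not in html:
        raise ValueError("source text not found in HTML")
    edited = html.replace(old_text, new_text, 1)
    return {
        "source_html": html,    # rendered -> source image
        "target_html": edited,  # rendered -> ground-truth edited image
        "instruction": f'Change the text "{old_text}" to "{new_text}".',
    }

pair = make_edit_pair(
    '<div style="font: 24px sans-serif">GRAND OPENING</div>',
    "GRAND OPENING",
    "OUVERTURE",
)
```

Because both images come from the same markup modulo one text edit, the background is pixel-identical by construction, which is what makes this kind of pipeline scale to hundreds of thousands of pairs across many languages and scripts.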

Abstract

Instruction-based image editing aims to modify specific content within existing images according to user-provided instructions while preserving non-target regions. Beyond traditional object- and style-centric manipulation, text-centric image editing focuses on modifying, translating, or rearranging textual elements embedded within images. However, existing leading models often struggle to execute complex text editing precisely, frequently producing blurry or hallucinated characters. We attribute these failures primarily to the lack of specialized training paradigms tailored for text-centric editing, as well as the absence of large-scale datasets and standardized benchmarks necessary for a closed-loop training and evaluation system. To address these limitations, we present WeEdit, a systematic solution encompassing a scalable data construction pipeline, two benchmarks, and a tailored two-stage training strategy. Specifically, we propose a novel HTML-based automatic editing pipeline, which generates 330K training pairs covering diverse editing operations and 15 languages, accompanied by standardized bilingual and multilingual benchmarks for comprehensive evaluation. On the algorithmic side, we employ glyph-guided supervised fine-tuning to inject explicit spatial and content priors, followed by a multi-objective reinforcement learning stage to align generation with instruction adherence, text clarity, and background preservation. Extensive experiments demonstrate that WeEdit outperforms previous open-source models by a clear margin across diverse editing operations.
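The abstract names three objectives for the reinforcement learning stage: instruction adherence, text clarity, and background preservation. A minimal sketch of how such signals might be scored and combined into a single scalar reward is shown below; the specific metrics (character accuracy for clarity, mean pixel difference outside the edit mask for preservation) and the weights are assumptions for illustration, not the paper's actual reward design.

```python
# Hypothetical multi-objective reward sketch. All metrics and weights are
# illustrative assumptions; the paper's actual reward models are not specified here.

def text_clarity(ocr_text: str, target_text: str) -> float:
    """Character-level accuracy of OCR'd rendered text vs. the requested text."""
    if not target_text and not ocr_text:
        return 1.0
    matches = sum(a == b for a, b in zip(ocr_text, target_text))
    return matches / max(len(ocr_text), len(target_text))

def background_preservation(src, out, mask) -> float:
    """1 - mean absolute pixel difference outside the edit mask (pixels in [0, 1])."""
    diffs = [abs(s - o) for s, o, m in zip(src, out, mask) if not m]
    return 1.0 - (sum(diffs) / len(diffs) if diffs else 0.0)

def combined_reward(adherence: float, clarity: float, preservation: float,
                    w=(0.4, 0.4, 0.2)) -> float:
    """Weighted sum of the three objectives (weights are arbitrary here)."""
    return w[0] * adherence + w[1] * clarity + w[2] * preservation

r = combined_reward(
    adherence=1.0,
    clarity=text_clarity("OPEN", "OPEN"),
    preservation=background_preservation([0.5, 0.2, 0.9], [0.5, 0.2, 0.1], [0, 0, 1]),
)
```

Weighting the objectives jointly, rather than optimizing any one alone, is what keeps the model from, say, maximizing text legibility at the cost of repainting the background.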