EruDiff: Refactoring Knowledge in Diffusion Models for Advanced Text-to-Image Synthesis

arXiv cs.CV / 3/24/2026


Key Points

  • The paper argues that text-to-image diffusion models struggle with implicit prompts because the underlying knowledge structures are dislocated: implicit prompts are organized chaotically relative to their explicit counterparts, which produces counter-factual outputs.
  • It introduces EruDiff, which refactors model knowledge by matching the distribution of difficult implicit prompts to that of explicit “anchor” prompts using Diffusion Knowledge Distribution Matching (DK-DM).
  • To mitigate biases introduced by explicit prompt rendering, the method uses Negative-Only Reinforcement Learning (NO-RL) for fine-grained correction during fine-tuning.
  • Experiments show significant performance gains over leading models (including FLUX and Qwen-Image) on benchmarks targeting scientific and broad world knowledge (Science-T2I and WISE), with claimed generalizability.
  • The authors provide an open-source code repository for implementation and replication: https://github.com/xiefan-guo/erudiff.
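The core of DK-DM, as summarized above, is matching the knowledge distribution of implicit prompts to that of explicit anchor prompts. The paper's exact objective is not given in this digest; a standard way to align two feature distributions, shown purely as an illustrative sketch, is a squared maximum mean discrepancy (MMD) with an RBF kernel. The function names, kernel choice, and use of MMD here are assumptions, not the authors' method.

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    # Pairwise RBF kernel between rows of x (n, d) and y (m, d).
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(implicit_feats, anchor_feats, sigma=1.0):
    """Squared MMD between features of implicit prompts and explicit
    anchors; driving this toward zero pulls the implicit-prompt
    distribution onto the anchor distribution (illustrative only)."""
    kxx = rbf_kernel(implicit_feats, implicit_feats, sigma)
    kyy = rbf_kernel(anchor_feats, anchor_feats, sigma)
    kxy = rbf_kernel(implicit_feats, anchor_feats, sigma)
    return kxx.mean() + kyy.mean() - 2.0 * kxy.mean()
```

In a training loop, such a term would be added to the usual diffusion loss so that gradients pull the hypothetical implicit-prompt features toward the anchor features; identical distributions yield an MMD of zero.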

Abstract

Text-to-image diffusion models have achieved remarkable fidelity in synthesizing images from explicit text prompts, yet they exhibit a critical deficiency in processing implicit prompts that require deep world knowledge, ranging from the natural sciences to cultural commonsense, resulting in counter-factual synthesis. This paper traces the root of this limitation to a fundamental dislocation of the underlying knowledge structures, manifesting as a chaotic organization of implicit prompts compared to their explicit counterparts. We propose EruDiff, which aims to refactor the knowledge within diffusion models. Specifically, we develop Diffusion Knowledge Distribution Matching (DK-DM) to register the knowledge distribution of intractable implicit prompts with that of well-defined explicit anchors. Furthermore, to rectify the inherent biases in explicit prompt rendering, we employ a Negative-Only Reinforcement Learning (NO-RL) strategy for fine-grained correction. Rigorous empirical evaluations demonstrate that our method significantly enhances the performance of leading diffusion models, including FLUX and Qwen-Image, on both a scientific knowledge benchmark (Science-T2I) and a world knowledge benchmark (WISE), underscoring its effectiveness and generalizability. Our code is available at https://github.com/xiefan-guo/erudiff.
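The abstract describes NO-RL as applying corrective signal only from faulty samples, leaving correct ones untouched. The digest gives no formula, but a minimal sketch of the negative-only idea is a fine-tuning loss that penalizes the log-probability of samples a reward model flags as counter-factual and assigns zero weight to everything else. The function name, inputs, and penalty scaling below are hypothetical.

```python
import numpy as np

def negative_only_loss(logprobs, is_faulty, penalty=1.0):
    """Negative-only credit assignment (illustrative sketch).

    logprobs:  model log-probability of each sampled output
    is_faulty: boolean mask from some judge flagging counter-factual samples

    Minimizing this loss lowers the probability of faulty samples only;
    correct samples contribute no gradient, which limits the bias that
    rewarding explicit-prompt renderings could otherwise introduce."""
    logprobs = np.asarray(logprobs, dtype=float)
    mask = np.asarray(is_faulty, dtype=bool)
    return penalty * logprobs[mask].sum()
```

For example, with log-probabilities `[-1.0, -2.0, -0.5]` and only the first and third samples flagged, the loss is `-1.5`, and its gradient pushes those two log-probabilities further down while ignoring the unflagged sample.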