LACON: Training Text-to-Image Model from Uncurated Data

arXiv cs.CV / 3/31/2026


Key Points

  • The paper argues that current text-to-image training often relies on a filter-first approach that discards low-quality raw data, potentially wasting useful information.
  • It introduces LACON (Labeling-and-Conditioning), which reframes quality signals from uncurated data—like aesthetic scores and watermark probabilities—into explicit conditioning labels rather than dropping samples.
  • The training objective teaches the model to represent the full quality spectrum, learning boundaries between higher- and lower-quality content.
  • Experiments reportedly show improved generation quality over baseline approaches that train only on filtered data while using the same compute budget, suggesting uncurated data has value when used correctly.

Abstract

The success of modern text-to-image generation is largely attributed to massive, high-quality datasets. Currently, these datasets are curated through a filter-first paradigm that aggressively discards low-quality raw data based on the assumption that it is detrimental to model performance. Is the discarded bad data truly useless, or does it hold untapped potential? In this work, we critically re-examine this question. We propose LACON (Labeling-and-Conditioning), a novel training framework that exploits the underlying uncurated data distribution. Instead of filtering, LACON re-purposes quality signals, such as aesthetic scores and watermark probabilities, as explicit, quantitative condition labels. The generative model is then trained to learn the full spectrum of data quality, from bad to good. By learning the explicit boundary between high- and low-quality content, LACON achieves superior generation quality compared to baselines trained only on filtered data under the same compute budget, demonstrating the significant value of uncurated data.
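The core idea, quantizing quality signals into condition labels rather than using them as filtering thresholds, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the bin edges, label names, and the choice to prepend labels to the caption are all assumptions made here for clarity.

```python
def quality_labels(aesthetic_score, watermark_prob,
                   aesthetic_bins=(4.0, 5.5, 6.5),
                   watermark_threshold=0.5):
    """Quantize raw quality signals into discrete condition labels.

    Bin edges and label names are illustrative, not taken from the paper.
    """
    bucket = sum(aesthetic_score >= edge for edge in aesthetic_bins)
    aesthetic_label = f"aesthetic_{bucket}"  # 0 = worst .. 3 = best
    watermark_label = ("watermark" if watermark_prob >= watermark_threshold
                       else "no_watermark")
    return [aesthetic_label, watermark_label]


def condition_caption(caption, metadata):
    """Keep the sample and prepend its quality labels, instead of dropping it."""
    labels = quality_labels(metadata["aesthetic_score"],
                            metadata["watermark_prob"])
    return " ".join(f"<{label}>" for label in labels) + " " + caption


# Training time: every sample survives, tagged with its measured quality.
low = condition_caption("a photo of a dog",
                        {"aesthetic_score": 3.2, "watermark_prob": 0.9})
# -> "<aesthetic_0> <watermark> a photo of a dog"

# Inference time: request the high-quality, watermark-free region.
high = condition_caption("a photo of a dog",
                         {"aesthetic_score": 7.0, "watermark_prob": 0.0})
# -> "<aesthetic_3> <no_watermark> a photo of a dog"
```

Because the model sees both ends of each label axis during training, it can learn where the quality boundary lies, which is exactly the signal a filter-first pipeline throws away.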