
Naïve PAINE: Lightweight Text-to-Image Generation Improvement with Prompt Evaluation

arXiv cs.AI / 3/16/2026

💬 Opinion · Tools & Practical Usage · Models & Research

Key Points

  • Naïve PAINE predicts the numerical quality of an image directly from the initial noise and the given prompt to guide diffusion-based text-to-image generation.
  • It selects a subset of noise seeds with the highest predicted quality and uses them for generation, reducing the need for multiple trial runs.
  • The approach provides feedback on how well the diffusion model’s outputs align with the prompt, and is designed to be lightweight enough to integrate into existing DM pipelines.
  • Experimental results show Naïve PAINE outperforming existing approaches on several prompt corpus benchmarks.
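The seed-selection step described in the Key Points can be sketched as follows. This is a minimal illustration, not the paper's implementation: `predict_quality` is a hypothetical stand-in for Naïve PAINE's learned predictor, which in the actual method scores (initial noise, prompt) pairs using T2I preference data; here a deterministic pseudo-random score takes its place so the selection logic is runnable.

```python
import random

def predict_quality(seed: int, prompt: str) -> float:
    """Hypothetical stand-in for the learned quality predictor.

    Naive PAINE predicts a numerical image quality from the initial
    noise and the prompt; here we substitute a deterministic
    pseudo-random score keyed on (seed, prompt).
    """
    return random.Random(f"{seed}:{prompt}").random()

def select_seeds(prompt: str, num_candidates: int = 64, top_k: int = 4) -> list[int]:
    """Score candidate noise seeds and keep the top_k highest-scoring ones.

    Only these seeds would then be forwarded to the diffusion model,
    avoiding repeated trial-and-error generation cycles.
    """
    scored = [(predict_quality(s, prompt), s) for s in range(num_candidates)]
    scored.sort(reverse=True)  # highest predicted quality first
    return [seed for _, seed in scored[:top_k]]

best = select_seeds("a red fox in the snow")
print(best)  # the 4 candidate seeds with the highest predicted quality
```

In the real pipeline the selected seeds would initialize the diffusion model's Gaussian noise, so the expensive denoising runs are spent only on inputs the predictor expects to yield satisfactory images.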

Abstract

Text-to-Image (T2I) generation is primarily driven by Diffusion Models (DMs), which rely on random Gaussian noise. Thus, like playing the slots at a casino, a DM will produce different results given the same user-defined inputs. This imposes a gambler's burden: performing multiple generation cycles to obtain a satisfactory result. However, even though DMs use stochastic sampling to seed generation, the distribution of generated content quality depends heavily on the prompt and on the DM's generative ability with respect to it. To account for this, we propose Naïve PAINE, which improves the generative quality of Diffusion Models by leveraging T2I preference benchmarks. We directly predict the numerical quality of an image from the initial noise and the given prompt. Naïve PAINE then selects a handful of high-quality noise samples and forwards them to the DM for generation. Further, Naïve PAINE provides feedback on the DM's generative quality for a given prompt and is lightweight enough to fit seamlessly into existing DM pipelines. Experimental results demonstrate that Naïve PAINE outperforms existing approaches on several prompt corpus benchmarks.