When Understanding Becomes a Risk: Authenticity and Safety Risks in the Emerging Image Generation Paradigm

arXiv cs.CV / 3/26/2026

Key Points

  • The paper argues that multimodal LLMs (MLLMs) introduce a new class of authenticity and safety risks precisely because their semantic understanding is stronger than that of diffusion models.
  • Experiments across unsafe-content benchmarks find that MLLMs generate more unsafe images than diffusion models, partly because diffusion models often fail to interpret abstract prompts and instead produce corrupted, less usable outputs (see the evaluation sketch after this list).
  • The study finds that existing fake-image detectors struggle more with MLLM-generated images, and even MLLM-specific retraining does not fully prevent bypass when users supply longer, more descriptive inputs.
  • Overall, the authors conclude that MLLM-driven safety risks are under-recognized and create new challenges for real-world safety systems focused on image authenticity.
  • The work reframes safety evaluation for image generation by comparing MLLMs against diffusion models across both unsafe generation and fake synthesis/attribution dimensions.
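
The key points describe a comparative protocol that is straightforward to picture in code. Below is a minimal sketch, assuming hypothetical helpers `generate_image`, `is_corrupted`, and `is_unsafe` (none of these come from the paper): it runs one generator over a benchmark of risky prompts and tallies unsafe versus corrupted outputs, which is what makes the MLLM-versus-diffusion gap measurable.

```python
# A minimal sketch of the unsafe-generation comparison, assuming hypothetical
# helpers: `generate_image` (prompt -> image), `is_corrupted`, and `is_unsafe`
# stand in for whichever generators and safety classifier a study would use.
from dataclasses import dataclass
from typing import Any, Callable, Sequence


@dataclass
class UnsafeRate:
    model_name: str
    unsafe: int     # outputs flagged unsafe by the safety classifier
    corrupted: int  # outputs too degraded to score (common for diffusion on abstract prompts)
    total: int

    @property
    def unsafe_fraction(self) -> float:
        return self.unsafe / self.total if self.total else 0.0


def measure_unsafe_rate(
    model_name: str,
    generate_image: Callable[[str], Any],
    is_corrupted: Callable[[Any], bool],
    is_unsafe: Callable[[Any], bool],
    prompts: Sequence[str],
) -> UnsafeRate:
    """Run one generator over a benchmark of risky prompts and tally outcomes."""
    unsafe = corrupted = 0
    for prompt in prompts:
        image = generate_image(prompt)
        if is_corrupted(image):
            # Diffusion models often misread abstract prompts and emit
            # unusable images; these count as neither safe nor unsafe.
            corrupted += 1
        elif is_unsafe(image):
            unsafe += 1
    return UnsafeRate(model_name, unsafe, corrupted, len(prompts))
```

Running this once per model family yields directly comparable unsafe fractions, while the corrupted count captures the failure mode the paper attributes to diffusion models on abstract prompts.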

Abstract

Recently, multimodal large language models (MLLMs) have emerged as a unified paradigm for language and image generation. Compared with diffusion models, MLLMs possess a much stronger capability for semantic understanding, enabling them to process more complex textual inputs and comprehend richer contextual meanings. However, this enhanced semantic ability may also introduce new and potentially greater safety risks. Taking diffusion models as a reference point, we systematically analyze and compare the safety risks of emerging MLLMs along two dimensions: unsafe content generation and fake image synthesis. Across multiple unsafe generation benchmark datasets, we observe that MLLMs tend to generate more unsafe images than diffusion models. This difference partly arises because diffusion models often fail to interpret abstract prompts, producing corrupted outputs, whereas MLLMs can comprehend these prompts and generate unsafe content. For current advanced fake image detectors, MLLM-generated images are also notably harder to identify. Even when detectors are retrained with MLLM-specific data, they can still be bypassed by simply providing MLLMs with longer and more descriptive inputs. Our measurements indicate that the emerging safety risks of MLLMs, the cutting-edge generative paradigm, have not been sufficiently recognized, posing new challenges to real-world safety.
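
The bypass finding in the abstract suggests a simple measurement loop. The sketch below is one plausible shape for it, assuming hypothetical helpers `expand_prompt`, `mllm_generate`, and `detector_predicts_fake` rather than the paper's actual code: if the miss rate rises with prompt length, longer and more descriptive inputs are defeating the retrained detector, as the abstract reports.

```python
# A minimal sketch of the detector-bypass measurement, under assumed
# interfaces: `expand_prompt`, `mllm_generate`, and `detector_predicts_fake`
# are hypothetical stand-ins, not the paper's implementation.
from typing import Any, Callable, Dict, Sequence


def bypass_rate_by_prompt_length(
    base_prompts: Sequence[str],
    expand_prompt: Callable[[str, int], str],       # rewrite a prompt to ~n words
    mllm_generate: Callable[[str], Any],            # prompt -> generated image
    detector_predicts_fake: Callable[[Any], bool],  # retrained fake-image detector
    word_counts: Sequence[int] = (20, 50, 100, 200),
) -> Dict[int, float]:
    """For each prompt length, the fraction of generated images the detector misses."""
    rates: Dict[int, float] = {}
    for n_words in word_counts:
        misses = 0
        for prompt in base_prompts:
            image = mllm_generate(expand_prompt(prompt, n_words))
            if not detector_predicts_fake(image):
                misses += 1  # the detector was bypassed
        rates[n_words] = misses / len(base_prompts)
    return rates
```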