Image Generators are Generalist Vision Learners

arXiv cs.CV / 23 Apr 2026


Key Points

  • The paper argues that image and video generators develop zero-shot visual understanding abilities similar to emergent reasoning in LLMs trained via generative pretraining.
  • It presents Vision Banana, a generalist vision model created by instruction-tuning Nano Banana Pro with a mix of original training data and a small amount of vision-task data.
  • By parameterizing vision task outputs as RGB images, the authors reframe perception tasks as an image-generation problem, enabling a unified interface across tasks.
  • Vision Banana achieves state-of-the-art or competitive performance on multiple 2D and 3D understanding tasks, outperforming or rivaling task specialists such as SAM 3 for segmentation and Depth Anything for depth estimation.
  • The results suggest that lightweight instruction-tuning can preserve strong image-generation capability while producing broadly useful visual representations, implying a potential paradigm shift toward foundational vision models built from generative pretraining.
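The core idea of parameterizing vision task outputs as RGB images can be illustrated with a small sketch. The paper does not specify its exact encoding, so the scheme below is an assumption for illustration only: a metric depth map is normalized, quantized to 24 bits, and split across the R, G, and B channels, so a depth-estimation target becomes an ordinary image a generator can produce.

```python
import numpy as np

def depth_to_rgb(depth, d_min=0.0, d_max=80.0):
    """Encode a metric depth map as a 24-bit RGB image.

    Illustrative encoding only (an assumption, not the paper's method):
    depth is normalized to [0, 1], scaled to 24 bits, and split across
    the R, G, B channels (R = most significant byte).
    """
    norm = np.clip((depth - d_min) / (d_max - d_min), 0.0, 1.0)
    code = np.round(norm * (2**24 - 1)).astype(np.uint32)
    r = (code >> 16) & 0xFF
    g = (code >> 8) & 0xFF
    b = code & 0xFF
    return np.stack([r, g, b], axis=-1).astype(np.uint8)

def rgb_to_depth(rgb, d_min=0.0, d_max=80.0):
    """Invert depth_to_rgb: recover metric depth from the RGB code."""
    rgb = rgb.astype(np.uint32)
    code = (rgb[..., 0] << 16) | (rgb[..., 1] << 8) | rgb[..., 2]
    return d_min + (code / (2**24 - 1)) * (d_max - d_min)

# Round-trip check: encoding loses only sub-millimeter quantization error.
depth = np.array([[0.5, 10.0], [40.0, 79.9]])
rgb = depth_to_rgb(depth)
recovered = rgb_to_depth(rgb)
print(np.abs(recovered - depth).max())
```

With such an encoding, depth estimation (and, with analogous color codings, segmentation or other dense-prediction tasks) shares a single image-generation output interface, which is what lets one instruction-tuned generator serve many perception tasks.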

Abstract

Recent works show that image and video generators exhibit zero-shot visual understanding behaviors, in a way reminiscent of how LLMs develop emergent capabilities of language understanding and reasoning from generative pretraining. While it has long been conjectured that the ability to create visual content implies an ability to understand it, there has been limited evidence that generative vision models have developed strong understanding capabilities. In this work, we demonstrate that image generation training serves a role similar to LLM pretraining, letting models learn powerful and general visual representations that enable SOTA performance on various vision tasks. We introduce Vision Banana, a generalist model built by instruction-tuning Nano Banana Pro (NBP) on a mixture of its original training data alongside a small amount of vision task data. By parameterizing the output space of vision tasks as RGB images, we seamlessly reframe perception as image generation. Our generalist model, Vision Banana, achieves SOTA results on a variety of vision tasks involving both 2D and 3D understanding, beating or rivaling zero-shot domain specialists, including Segment Anything Model 3 on segmentation tasks and the Depth Anything series on metric depth estimation. We show that these results can be achieved with lightweight instruction-tuning without sacrificing the base model's image generation capabilities. These results suggest that image generation pretraining produces generalist vision learners. They also show that image generation serves as a unified and universal interface for vision tasks, similar to text generation's role in language understanding and reasoning. We could be witnessing a major paradigm shift for computer vision, where generative vision pretraining takes a central role in building Foundational Vision Models for both generation and understanding.