Omni-I2C: A Holistic Benchmark for High-Fidelity Image-to-Code Generation

arXiv cs.CV / 3/19/2026

Key Points

  • Omni-I2C is a new, comprehensive benchmark designed to evaluate Large Multimodal Models' ability to convert complex, structured digital graphics into executable code, requiring deep perceptual understanding and precise code generation.
  • It comprises 1,080 curated samples spanning diverse subjects, image modalities, and programming languages, each paired with executable reference code drawn from authentic user cases.
  • The evaluation framework decouples perceptual fidelity from symbolic precision, revealing granular structural failures and reasoning bottlenecks in current models.
  • Findings show a substantial performance gap among leading LMMs, underscoring that multimodal code generation remains a formidable challenge; data and code are available at the provided GitHub link.

Abstract

We present Omni-I2C, a comprehensive benchmark designed to evaluate the capability of Large Multimodal Models (LMMs) in converting complex, structured digital graphics into executable code. We argue that this task represents a non-trivial challenge for the current generation of LMMs: it demands an unprecedented synergy between high-fidelity visual perception -- to parse intricate spatial hierarchies and symbolic details -- and precise generative expression -- to synthesize syntactically sound and logically consistent code. Unlike traditional descriptive tasks, Omni-I2C requires a holistic understanding where any minor perceptual hallucination or coding error leads to a complete failure in visual reconstruction. Omni-I2C features 1080 meticulously curated samples, defined by its breadth across subjects, image modalities, and programming languages. By incorporating authentic user-sourced cases, the benchmark spans a vast spectrum of digital content -- from scientific visualizations to complex symbolic notations -- each paired with executable reference code. To complement this diversity, our evaluation framework provides necessary depth; by decoupling performance into perceptual fidelity and symbolic precision, it transcends surface-level accuracy to expose the granular structural failures and reasoning bottlenecks of current LMMs. Our evaluation reveals a substantial performance gap among leading LMMs; even state-of-the-art models struggle to preserve structural integrity in complex scenarios, underscoring that multimodal code generation remains a formidable challenge. Data and code are available at https://github.com/MiliLab/Omni-I2C.
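The abstract's central idea, decoupling symbolic precision (does the generated code even run?) from perceptual fidelity (does its output look like the target image?), can be illustrated with a toy evaluation loop. Everything below is an illustrative sketch, not the benchmark's actual implementation: the `render()` convention, the pixel-grid "raster," and the exact-match fidelity score are all assumptions standing in for real rendering and perceptual metrics.

```python
# Hypothetical sketch of a decoupled image-to-code evaluation, in the
# spirit of Omni-I2C's two-axis scoring. A candidate program is expected
# (by assumption) to define render() returning a 2D list of pixel values.

def perceptual_fidelity(ref: list, hyp: list) -> float:
    """Fraction of matching pixels between two equally sized rasters."""
    flat_ref = [p for row in ref for p in row]
    flat_hyp = [p for row in hyp for p in row]
    matches = sum(r == h for r, h in zip(flat_ref, flat_hyp))
    return matches / len(flat_ref)

def evaluate(candidate_code: str, reference_raster: list) -> tuple:
    """Return (perceptual_fidelity, symbolic_precision).

    Symbolic precision is 1.0 only if the code executes and yields a
    raster; any syntax or runtime error collapses both scores to 0.0,
    mirroring the paper's point that a single coding error causes a
    complete failure of visual reconstruction.
    """
    namespace: dict = {}
    try:
        exec(candidate_code, namespace)          # run the generated program
        raster = namespace["render"]()           # assumed entry point
        return perceptual_fidelity(reference_raster, raster), 1.0
    except Exception:
        return 0.0, 0.0

# Usage: a candidate that runs cleanly but misdraws one of four pixels
# scores full symbolic precision yet only partial perceptual fidelity.
reference = [[0, 1], [1, 0]]
good_code = "def render():\n    return [[0, 1], [1, 1]]"
broken_code = "def render(:\n    return"
print(evaluate(good_code, reference))    # → (0.75, 1.0)
print(evaluate(broken_code, reference))  # → (0.0, 0.0)
```

Separating the two scores is what lets an evaluator distinguish a model that perceives the image correctly but writes buggy code from one that writes flawless code for the wrong image.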