EXAONE 4.5 Technical Report

arXiv cs.CL / April 13, 2026


Key Points

  • LG AI Research released the EXAONE 4.5 technical report, presenting the model as the first open-weight vision-language model in the EXAONE line.
  • The model is built by adding a dedicated visual encoder to EXAONE 4.0, enabling multimodal pretraining across visual and text data.
  • Training emphasizes curated, document-centric corpora aligned with LG’s application focus, yielding large gains in document understanding and improved general language performance.
  • EXAONE 4.5 extends the context length to 256K tokens, targeting long-context reasoning and enterprise-scale deployment scenarios.
  • Benchmark comparisons show competitive general performance while outperforming similar-scale state-of-the-art models in document understanding and Korean contextual reasoning.

Abstract

This technical report introduces EXAONE 4.5, the first open-weight vision-language model released by LG AI Research. EXAONE 4.5 is architected by integrating a dedicated visual encoder into the existing EXAONE 4.0 framework, enabling native multimodal pretraining over both visual and textual modalities. The model is trained on large-scale, carefully curated data, with particular emphasis on document-centric corpora that align with LG's strategic application domains. This targeted data design yields substantial performance gains in document understanding and related tasks, while also delivering broad improvements across general language capabilities. EXAONE 4.5 extends the context length to 256K tokens, facilitating long-context reasoning and enterprise-scale use cases. Comparative evaluations demonstrate that EXAONE 4.5 achieves competitive performance on general benchmarks while outperforming state-of-the-art models of similar scale in document understanding and Korean contextual reasoning. As part of LG's ongoing effort toward practical industrial deployment, EXAONE 4.5 is designed to be continuously extended with additional domains and application scenarios to advance AI for a better life.
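The abstract's description of the architecture, a dedicated visual encoder attached to an existing language model so both modalities can be pretrained jointly, follows the common vision-language pattern of encoding image patches, projecting them into the language model's embedding space, and feeding the resulting visual tokens alongside text tokens. The sketch below illustrates that wiring in miniature; all function names and dimensions are illustrative assumptions, not details from the EXAONE 4.5 report.

```python
# Toy sketch of the encoder -> projector -> LLM-input wiring commonly used
# in vision-language models. Dimensions and functions are hypothetical and
# only illustrate the data flow, not the actual EXAONE 4.5 implementation.

def encode_image(patches, vision_dim=4):
    # Stand-in visual encoder: one embedding vector per image patch.
    return [[float(p)] * vision_dim for p in patches]

def project(visual_embeds, llm_dim=6):
    # Stand-in projector: maps vision_dim -> llm_dim so visual tokens
    # live in the same space as text embeddings (here, by zero-padding).
    return [(v + [0.0] * llm_dim)[:llm_dim] for v in visual_embeds]

def embed_text(token_ids, llm_dim=6):
    # Stand-in text embedding lookup.
    return [[float(t)] * llm_dim for t in token_ids]

def build_multimodal_sequence(patches, token_ids, llm_dim=6):
    # Projected visual tokens are concatenated with text tokens into one
    # sequence the decoder attends over -- the "native multimodal" input.
    visual = project(encode_image(patches), llm_dim)
    text = embed_text(token_ids, llm_dim)
    return visual + text

seq = build_multimodal_sequence(patches=[1, 2, 3], token_ids=[10, 11])
print(len(seq))     # 5: three visual tokens followed by two text tokens
print(len(seq[0]))  # 6: every token lives in the LLM embedding space
```

In real systems the projector is a learned linear layer or small MLP, and the concatenated sequence is what the decoder pretrains over; this sketch only fixes the shapes and ordering.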