Enhancing Multimodal Large Language Models for Ancient Chinese Character Evolution Analysis via Glyph-Driven Fine-Tuning

arXiv cs.CL / 4/14/2026


Key Points

  • The paper introduces a new multimodal LLM benchmark for ancient Chinese character evolution analysis, covering 11 tasks and 130,000+ instances to systematically evaluate model capabilities.
  • Evaluations across several mainstream MLLMs find that current systems show only limited glyph-level comparison ability and remain substantially constrained on core tasks such as character recognition and evolutionary reasoning.
  • To address these gaps, the authors propose a glyph-driven fine-tuning framework (GEVO) that steers models to learn consistent glyph transformations relevant to textual evolution.
  • Results indicate that GEVO yields performance gains across all benchmark tasks, including for relatively small ~2B-parameter models.
  • The authors publicly release the benchmark and trained models to enable follow-on research and replication (GitHub repository provided).

Abstract

In recent years, rapid advances in Multimodal Large Language Models (MLLMs) have increasingly stimulated research on ancient Chinese scripts. As the evolution of written characters constitutes a fundamental pathway for understanding cultural transformation and historical continuity, how MLLMs can be systematically leveraged to support and advance text evolution analysis remains an open and largely underexplored problem. To bridge this gap, we construct a comprehensive benchmark comprising 11 tasks and over 130,000 instances, specifically designed to evaluate the capability of MLLMs in analyzing the evolution of ancient Chinese scripts. We conduct extensive evaluations across multiple widely used MLLMs and observe that, while existing models demonstrate a limited ability in glyph-level comparison, their performance on core tasks, such as character recognition and evolutionary reasoning, remains substantially constrained. Motivated by these findings, we propose a glyph-driven fine-tuning framework (GEVO) that explicitly encourages models to capture evolutionary consistency in glyph transformations and enhances their understanding of text evolution. Experimental results show that even models at the 2B scale achieve consistent and comprehensive performance improvements across all evaluated tasks. To facilitate future research, we publicly release both the benchmark and the trained models (https://github.com/songruiecho/GEVO).