MNAFT: modality neuron-aware fine-tuning of multimodal large language models for image translation

arXiv cs.CL / 4/21/2026


Key Points

  • Multimodal large language models often miss fine-grained text details in images, creating a modality gap that hurts image translation quality.
  • The proposed method, MNAFT (modality neuron-aware fine-tuning), uses instruction-driven activation analysis to identify which neurons are language-agnostic vs language-specific across both vision and language modules.
  • MNAFT performs selective fine-tuning by updating only the identified neuron parameters in task-relevant layers, aiming to preserve existing pretrained knowledge and avoid redundant parameter updates.
  • Experiments across multiple benchmarks show MNAFT significantly improves image translation over prior approaches, including cascaded systems, full fine-tuning, and parameter-efficient tuning.
  • The paper includes interpretability-focused analysis (e.g., activation visualizations and clustering) to explain how different neuron groups support cross-modal understanding and language-specific translation.
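The instruction-driven neuron identification described above can be sketched in a minimal, hedged form. Assuming per-neuron activations have been recorded for instruction sets in each language (the function name `classify_neurons`, the threshold, and the toy data are illustrative, not from the paper), a neuron can be labeled language-agnostic if it is active for every language and language-specific if it is active for exactly one:

```python
import numpy as np

# Illustrative sketch, NOT the paper's implementation: classify neurons
# by comparing their mean activation across instruction sets written in
# different languages. Threshold and shapes are assumptions.

def classify_neurons(activations, threshold=0.5):
    """activations: dict mapping language -> (num_prompts, num_neurons) array."""
    # A neuron counts as "active" for a language if its mean activation
    # over that language's instructions exceeds the threshold.
    active = {lang: acts.mean(axis=0) > threshold
              for lang, acts in activations.items()}
    stacked = np.stack(list(active.values()))        # (num_langs, num_neurons)
    n_active_langs = stacked.sum(axis=0)             # per-neuron language count
    agnostic = np.where(n_active_langs == len(active))[0]  # active in all langs
    specific = np.where(n_active_langs == 1)[0]            # active in exactly one
    return agnostic, specific

# Toy activations for two languages, four neurons, two prompts each.
acts = {
    "en": np.array([[0.9, 0.9, 0.1, 0.1],
                    [0.8, 0.7, 0.2, 0.0]]),
    "zh": np.array([[0.9, 0.1, 0.8, 0.1],
                    [0.7, 0.2, 0.9, 0.2]]),
}
agnostic, specific = classify_neurons(acts)
# Neuron 0 fires for both languages; neurons 1 and 2 fire for one each.
```

In this toy setup, neuron 0 comes out language-agnostic while neurons 1 and 2 are language-specific; the paper's actual criterion may weight importance per task rather than using a fixed activation threshold.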

Abstract

Multimodal large language models (MLLMs) have shown impressive capabilities, yet they often struggle to capture the fine-grained textual information within images that is crucial for accurate image translation, leaving a modality gap between visual text inputs and textual inputs/outputs. Existing methods, which rely primarily on instruction fine-tuning, risk redundant parameter updates that overwrite pre-trained knowledge and hinder generalization. To address this, we introduce modality neuron-aware fine-tuning (MNAFT), a novel approach that exploits the specialized roles of individual neurons within MLLMs to enhance image translation. MNAFT identifies language-agnostic and language-specific neurons in both the vision and language modules through an instruction-driven activation analysis that evaluates their importance across translation tasks. It then performs selective fine-tuning, updating only the parameters of language-specific and language-agnostic neurons within the layers relevant to the target task, while preserving the knowledge encoded in other neurons and layers. Extensive experiments on multiple benchmarks demonstrate that MNAFT significantly outperforms state-of-the-art image translation methods, including cascaded models, standard full fine-tuning, and parameter-efficient tuning techniques. We further provide comprehensive analysis, including visualizations of neuron activations and clustering patterns, offering insight into how different neuron groups mediate cross-modal understanding and support accurate language-specific translation.
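The selective fine-tuning step, updating only identified neuron parameters while freezing the rest, can be illustrated with a small sketch. This is an assumption-laden toy (the function `selective_update`, the SGD rule, and the row-per-neuron layout are not from the paper); it shows the core mechanism of masking the gradient so that only selected neurons' weights change:

```python
import numpy as np

# Hedged sketch, not the paper's code: selective fine-tuning by masking
# the gradient update so only the rows (one row per output neuron) that
# were identified as task-relevant are modified; all other rows keep
# their pretrained values.

def selective_update(W, grad, selected_neurons, lr=0.1):
    """Apply an SGD step only to the rows listed in `selected_neurons`."""
    mask = np.zeros((W.shape[0], 1))
    mask[list(selected_neurons)] = 1.0   # 1.0 marks trainable neurons
    return W - lr * mask * grad          # masked rows receive zero update

W = np.ones((4, 3))                      # toy "pretrained" weight matrix
grad = np.full((4, 3), 0.5)             # toy gradient from a translation loss
W_new = selective_update(W, grad, selected_neurons={1, 3})
# Rows 0 and 2 are frozen; rows 1 and 3 move to 1 - 0.1 * 0.5 = 0.95.
```

In a real MLLM this masking would typically be realized per layer, e.g. by zeroing gradients on hooks or by passing only the selected parameters to the optimizer, so the frozen neurons genuinely preserve pre-trained knowledge rather than being regularized toward it.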
