Visual Instruction-Finetuned Language Model for Versatile Brain MR Image Tasks

arXiv cs.CV / 4/6/2026


Key Points

  • The paper introduces LLaBIT, a visual instruction-finetuned language model designed for multiple clinically relevant brain MRI tasks, rather than being limited to text-to-image generation.
  • It addresses spatial information loss from image tokenization by reusing feature maps from the image encoder to preserve clinically important spatial detail.
  • To overcome scarce brain MRI image-text paired data, the authors generate additional text data using LLMs under strict predefined instructions for consistent augmentation.
  • LLaBIT is evaluated on five brain MRI datasets across four tasks—report generation, visual question answering, image segmentation, and image translation—with results showing superior performance versus both generalists and specialized task-specific models.
  • The work suggests that a single versatile multimodal language model can unify diverse MRI workflows, potentially reducing the need for separate models per task.
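The feature-map-reuse point above can be illustrated with a toy sketch. The paper does not publish code here, so everything below (shapes, pooling-based tokenization, the 0.5 fusion weight) is a hypothetical illustration of the general skip-connection idea, not the authors' implementation:

```python
import numpy as np

def encode(image, patch=4):
    """Average-pool patches into tokens (a deliberately lossy tokenization)."""
    h, w = image.shape
    tokens = image.reshape(h // patch, patch, w // patch, patch).mean(axis=(1, 3))
    return tokens, image  # also keep the full-resolution feature map for reuse

def decode(tokens, skip=None, patch=4):
    """Upsample tokens back to image resolution, optionally fusing the reused map."""
    up = np.kron(tokens, np.ones((patch, patch)))  # nearest-neighbour upsample
    if skip is not None:
        up = 0.5 * (up + skip)  # fuse the encoder feature map (illustrative weight)
    return up

rng = np.random.default_rng(0)
img = rng.normal(size=(16, 16))
tok, feat = encode(img)
err_plain = np.abs(decode(tok) - img).mean()        # tokens alone lose detail
err_skip = np.abs(decode(tok, skip=feat) - img).mean()
assert err_skip < err_plain  # reusing encoder features recovers spatial detail
```

The sketch only demonstrates why a decoder given the encoder's feature map can reconstruct spatial detail that the token sequence alone has discarded; LLaBIT's actual fusion mechanism will differ.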

Abstract

LLMs have demonstrated remarkable capabilities in linguistic reasoning and are increasingly adept at vision-language tasks. The integration of image tokens into transformers has enabled direct visual input and output, advancing research from image-to-text descriptions to text-to-image generation. However, simple text-to-image generation holds limited clinical utility. In medical imaging, tasks such as image segmentation for localizing pathologies or image translation for reconstructing missing sequences have much greater clinical importance. Despite this, integrating these diverse, clinically relevant tasks within a single, versatile language model remains unexplored. Our method, LLaBIT (Large Language Model for Brain Image Translation), extends the visual reasoning of LLMs to these clinically meaningful tasks in the brain MRI domain. To mitigate the spatial information loss inherent in image tokenization, we incorporate a mechanism to reuse feature maps from the image encoder, minimizing data degradation. We also generate text data using LLMs with strict predefined instructions to augment limited image-text paired data in brain MRI. We comprehensively evaluated our method on five brain MRI datasets across four distinct tasks: report generation, visual question answering, image segmentation, and image translation. Our model not only demonstrated superior performance across all tasks but also outperformed specialized, task-specific models in direct comparisons, highlighting its efficacy and versatility.
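The text-augmentation step described in the abstract can be sketched as a fixed instruction block wrapped around each source finding. The rules and field names below are illustrative assumptions, not the authors' actual prompts:

```python
# Hypothetical sketch of LLM-based text augmentation under strict predefined
# instructions. The instruction wording and example finding are invented for
# illustration; the paper does not disclose its exact prompts.

AUGMENT_INSTRUCTIONS = """\
You are rephrasing a brain MRI finding for data augmentation.
Rules:
1. Preserve every clinical fact (laterality, size, sequence names).
2. Do not add findings that are absent from the source text.
3. Output exactly one sentence, with no preamble.
"""

def build_augmentation_prompt(finding: str) -> str:
    """Wrap a source finding in the fixed instruction block sent to the LLM."""
    return f"{AUGMENT_INSTRUCTIONS}\nSource finding: {finding}\nRephrased:"

prompt = build_augmentation_prompt(
    "T2 hyperintense lesion in the left frontal lobe, 12 mm."
)
```

Constraining the generator this way is what makes the augmented text consistent with the original image-text pairs: paraphrases vary in wording while the clinical content stays fixed.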