Photon: Speedup Volume Understanding with Efficient Multimodal Large Language Models

arXiv cs.CV / March 27, 2026


Key Points

  • Photon is presented as a framework that lets multimodal large language models handle 3D medical volumes in clinical visual question answering without falling back on 2D slices or fixed-length token compression.
  • It represents 3D volumes as variable-length token sequences and uses instruction-conditioned token scheduling plus surrogate gradient propagation to adaptively reduce tokens during both training and inference.
  • Photon includes a custom backpropagation rule with gradient restoration to support differentiable optimization even when discrete token dropping is used.
  • To improve reliability of visual evidence, it adds regularization objectives intended to reduce language-only bias and mitigate attention dilution from redundant tokens.
  • Experiments across multiple medical VQA tasks reportedly achieve state-of-the-art accuracy while lowering compute and speeding up training and inference.
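The instruction-conditioned token scheduling described above can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, the cosine-similarity scoring, the threshold, and the `keep_ratio_floor` budget are all illustrative assumptions. The point it demonstrates is that the number of retained volume tokens depends on the instruction, yielding a variable-length sequence rather than a fixed compression budget:

```python
import numpy as np

def schedule_tokens(volume_tokens, instruction_emb,
                    keep_ratio_floor=0.25, threshold=0.0):
    """Hypothetical sketch: keep only volume tokens whose relevance to
    the instruction exceeds a threshold, producing a variable-length
    token sequence (not Photon's actual scheduler).

    volume_tokens:   (N, d) array of 3D-volume patch tokens
    instruction_emb: (d,) pooled embedding of the text instruction
    """
    # Cosine relevance between each volume token and the instruction
    tok = volume_tokens / (np.linalg.norm(volume_tokens, axis=1,
                                          keepdims=True) + 1e-8)
    ins = instruction_emb / (np.linalg.norm(instruction_emb) + 1e-8)
    scores = tok @ ins                      # (N,) relevance scores

    keep = scores > threshold               # data-dependent boolean mask
    # Guarantee a minimum token budget so the model always
    # receives some visual evidence
    min_keep = max(1, int(keep_ratio_floor * len(scores)))
    if keep.sum() < min_keep:
        keep = np.zeros_like(keep)
        keep[np.argsort(scores)[-min_keep:]] = True
    return volume_tokens[keep], keep
```

Because the mask is data-dependent, different instructions over the same volume retain different token counts, which is what lowers compute on easy queries while preserving detail for hard ones.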

Abstract

Multimodal large language models are promising for clinical visual question answering tasks, but scaling to 3D imaging is hindered by high computational costs. Prior methods often rely on 2D slices or fixed-length token compression, disrupting volumetric continuity and obscuring subtle findings. We present Photon, a framework that represents 3D medical volumes with token sequences of variable length. Photon introduces instruction-conditioned token scheduling and surrogate gradient propagation to adaptively reduce tokens during both training and inference, which lowers computational cost while mitigating the attention dilution caused by redundant tokens. It incorporates a custom backpropagation rule with gradient restoration to enable differentiable optimization despite discrete token dropping. To stabilize token compression and ensure reliable use of visual evidence, Photon further applies regularization objectives that mitigate language-only bias and improve reliability. Experiments on diverse medical visual question answering tasks show that Photon achieves state-of-the-art accuracy while reducing resource usage and accelerating both training and inference.
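The "surrogate gradient propagation" and "gradient restoration" idea, where a discrete keep/drop decision still receives a learning signal, is in the spirit of a straight-through-style estimator. The sketch below is an assumption about how such a rule could look, written as explicit forward/backward functions rather than the paper's actual backpropagation rule; the sigmoid relaxation and `temperature` parameter are illustrative choices:

```python
import numpy as np

def hard_keep_forward(scores):
    """Discrete token-drop decision: 1.0 = keep, 0.0 = drop.
    The step function has zero gradient almost everywhere."""
    return (scores > 0.0).astype(scores.dtype)

def hard_keep_backward(grad_out, scores, temperature=1.0):
    """Surrogate backward pass (illustrative "gradient restoration"):
    the non-differentiable step is replaced by the derivative of a
    sigmoid relaxation, so even dropped tokens receive a gradient
    and the scheduler remains trainable end to end."""
    sig = 1.0 / (1.0 + np.exp(-scores / temperature))
    return grad_out * sig * (1.0 - sig) / temperature
```

A dropped token (negative score) gets a zero in the forward pass but a strictly positive gradient in the backward pass, so optimization can later revise the drop decision, which is the property a purely discrete rule would lack.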