FixationFormer: Direct Utilization of Expert Gaze Trajectories for Chest X-Ray Classification

arXiv cs.CV / 3/25/2026


Key Points

  • FixationFormer represents radiologists' eye-gaze trajectories as temporal sequences of tokens, proposing a framework that integrates them directly into medical image classification in a form better suited to Transformers than to CNNs.
  • Gaze data are temporally dense yet spatially sparse, noisy, and variable across experts; the method addresses these challenges by jointly learning image features and gaze token sequences, fused via cross-attention.
  • Evaluated on three publicly available benchmark chest X-ray datasets, the method is reported to achieve state-of-the-art (SOTA) chest X-ray classification performance.
  • The summary emphasizes that preserving gaze as a sequence, rather than collapsing it into a reduced representation such as a heatmap, enables a more direct and fine-grained incorporation of diagnostic cues.

Abstract

Expert eye movements provide a rich, passive source of domain knowledge in radiology, offering a powerful cue for integrating diagnostic reasoning into computer-aided analysis. However, direct integration into CNN-based systems, which historically have dominated the medical image analysis domain, is challenging: gaze recordings are sequential, temporally dense yet spatially sparse, noisy, and variable across experts. As a consequence, most existing image-based models utilize reduced representations such as heatmaps. In contrast, gaze naturally aligns with transformer architectures, as both are sequential in nature and rely on attention to highlight relevant input regions. In this work, we introduce FixationFormer, a transformer-based architecture that represents expert gaze trajectories as sequences of tokens, thereby preserving their temporal and spatial structure. By modeling gaze sequences jointly with image features, our approach addresses sparsity and variability in gaze data while enabling a more direct and fine-grained integration of expert diagnostic cues through explicit cross-attention between the image and gaze token sequences. We evaluate our method on three publicly available benchmark chest X-ray datasets and demonstrate that it achieves state-of-the-art classification performance, highlighting the value of representing gaze as a sequence in transformer-based medical image analysis.
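The fusion mechanism the abstract describes can be sketched in PyTorch: image patch tokens act as queries that cross-attend to a sequence of gaze-fixation tokens. This is a minimal illustration under assumed dimensions and token layouts; the class name, the (x, y, duration) fixation encoding, and all hyperparameters are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn

class FixationCrossAttention(nn.Module):
    """Hypothetical sketch: fuse image patch tokens with a sequence of
    gaze-fixation tokens via cross-attention. Names and dimensions are
    illustrative assumptions, not the paper's implementation."""
    def __init__(self, dim=64, n_heads=4, n_classes=5):
        super().__init__()
        # Embed each fixation (x, y, duration) into the token dimension.
        self.gaze_embed = nn.Linear(3, dim)
        # 16x16 patches of a single-channel X-ray -> patch tokens.
        self.patchify = nn.Conv2d(1, dim, kernel_size=16, stride=16)
        # Image tokens (queries) attend to the gaze token sequence.
        self.cross_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, image, fixations):
        # image: (B, 1, H, W); fixations: (B, T, 3) as (x, y, duration)
        patches = self.patchify(image).flatten(2).transpose(1, 2)  # (B, N, dim)
        gaze = self.gaze_embed(fixations)                          # (B, T, dim)
        fused, _ = self.cross_attn(query=patches, key=gaze, value=gaze)
        fused = self.norm(patches + fused)        # residual connection + norm
        return self.head(fused.mean(dim=1))       # pooled logits: (B, n_classes)

model = FixationCrossAttention()
logits = model(torch.randn(2, 1, 224, 224), torch.randn(2, 30, 3))
```

Because the gaze trajectory stays a token sequence rather than a heatmap, the attention weights give a per-patch view of which fixations informed each image region, which is the fine-grained integration the abstract argues for.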