Beyond Independent Frames: Latent Attention Masked Autoencoders for Multi-View Echocardiography

arXiv cs.CV · April 17, 2026


Key Points

  • The paper proposes LAMAE (Latent Attention Masked Autoencoder), a foundation-model architecture designed to handle echocardiography’s sparse, heterogeneous, multi-view spatiotemporal structure.
  • Unlike prior MAE approaches that treat frames or short clips independently, LAMAE introduces a latent attention module that exchanges information in latent space both across frames in time and across different views.
  • LAMAE is pretrained on MIMIC-IV-ECHO, leveraging real-world clinical variability via a large, uncurated dataset, and is evaluated for downstream tasks.
  • The authors report what they describe as the first results for predicting ICD-10 codes directly from echocardiography videos, and show that representations learned from adult data transfer effectively to pediatric cohorts.
  • Overall, the work argues that adding structural priors—specifically multi-view attention—improves robustness and transferability of learned medical-imaging representations.

Abstract

Echocardiography is a widely used modality for cardiac assessment due to its non-invasive and cost-effective nature, but the sparse and heterogeneous spatiotemporal views of the heart pose distinct challenges. Existing masked autoencoder (MAE) approaches typically process images or short clips independently, failing to capture the inherent multi-view structure required for coherent cardiac representation. We introduce Latent Attention Masked Autoencoder (LAMAE), a foundation model architecture tailored to the multi-view nature of medical imaging. LAMAE augments the standard MAE with a latent attention module that enables information exchange across frames and views directly in latent space. This allows the model to aggregate variable-length sequences and distinct views, reconstructing a holistic representation of cardiac function from partial observations. We pretrain LAMAE on MIMIC-IV-ECHO, a large-scale, uncurated dataset reflecting real-world clinical variability. To the best of our knowledge, we present the first results for predicting ICD-10 codes from MIMIC-IV-ECHO videos. Furthermore, we empirically demonstrate that representations learned from adult data transfer effectively to pediatric cohorts despite substantial anatomical differences. These results provide evidence that incorporating structural priors, such as multi-view attention, yields significantly more robust and transferable representations.
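The latent attention module described above can be pictured as a cross-attention step in which a small, fixed set of learned latent vectors queries the pooled encoder tokens from all frames and views, producing a fixed-size cardiac summary from variable-length, multi-view input. The sketch below is a minimal illustrative assumption of that mechanism (single head, numpy, no masking or training loop); all names, shapes, and the Perceiver-style query/key/value layout are ours, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def latent_cross_attention(latents, tokens, Wq, Wk, Wv):
    """One cross-attention step: learned latents (queries) attend over the
    pooled per-frame/per-view tokens (keys/values), so information is
    exchanged across frames and views directly in latent space."""
    Q = latents @ Wq                               # (L, d) latent queries
    K = tokens @ Wk                                # (N, d) token keys
    V = tokens @ Wv                                # (N, d) token values
    d = Q.shape[-1]
    attn = softmax(Q @ K.T / np.sqrt(d), axis=-1)  # (L, N) attention weights
    return latents + attn @ V                      # residual latent update

rng = np.random.default_rng(0)
d = 16
# Tokens from three hypothetical views with different frame counts
# (variable-length sequences, as in real echo studies).
view_tokens = [rng.normal(size=(n, d)) for n in (5, 8, 3)]
tokens = np.concatenate(view_tokens, axis=0)       # pooled multi-view sequence
latents = rng.normal(size=(4, d))                  # 4 learned latent slots
Wq, Wk, Wv = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
out = latent_cross_attention(latents, tokens, Wq, Wk, Wv)
print(out.shape)  # (4, 16)
```

Because the latent array has a fixed size regardless of how many frames or views are fed in, a decoder reading from it can reconstruct masked patches with context from every available view, which is the structural prior the paper argues for.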