Hierarchical Mesh Transformers with Topology-Guided Pretraining for Morphometric Analysis of Brain Structures
arXiv cs.CV / 4/8/2026
Key Points
- The paper proposes a Hierarchical Mesh Transformer that can learn from heterogeneous brain meshes (volumetric and surface) using topology-guided hierarchical partitions built from arbitrary-order simplicial complexes.
- It introduces a feature projection module to integrate variable-length, clinically relevant morphometric descriptors (e.g., cortical thickness, curvature, sulcal depth, myelin content) while decoupling geometric structure from feature dimensionality.
- The method uses self-supervised pretraining via masked reconstruction of both mesh coordinates and morphometric channels on large unlabeled neuroimaging cohorts to produce a transferable encoder for multiple downstream tasks.
- Experiments on ADNI (Alzheimer’s disease classification and amyloid burden prediction) and MELD (focal cortical dysplasia detection) report state-of-the-art performance on both benchmarks.
- Overall, the framework aims to enable more generalizable representation learning across different imaging pipelines without requiring topology-specific architectural changes.
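Two of the ingredients above can be made concrete with a minimal numpy sketch: projecting a variable-length set of morphometric descriptors into a fixed encoder width, and selecting masked-vertex reconstruction targets for self-supervised pretraining. All function names, shapes, and the random linear projection are illustrative assumptions, not the paper's actual architecture or API.

```python
import numpy as np

rng = np.random.default_rng(0)

def project_features(feats, d_model, rng):
    """Map a per-vertex feature matrix (V, k) to (V, d_model).

    Each cohort may supply a different number k of morphometric channels
    (thickness, curvature, sulcal depth, myelin, ...); a per-descriptor-set
    linear map decouples feature dimensionality from the shared encoder
    width d_model. (A random matrix stands in for a learned projection.)
    """
    V, k = feats.shape
    W = rng.normal(scale=k ** -0.5, size=(k, d_model))
    return feats @ W

def masked_reconstruction_targets(coords, feats, mask_ratio, rng):
    """Mask a random subset of vertices; a pretraining loss would ask the
    encoder to reconstruct both their 3-D coordinates and their
    morphometric channels from the visible vertices."""
    V = coords.shape[0]
    n_mask = int(mask_ratio * V)
    masked = rng.choice(V, size=n_mask, replace=False)
    visible = np.setdiff1d(np.arange(V), masked)
    return visible, masked, coords[masked], feats[masked]

# Toy mesh: 200 vertices, 3-D coordinates, 4 morphometric channels.
coords = rng.normal(size=(200, 3))
feats = rng.normal(size=(200, 4))

tokens = project_features(feats, d_model=32, rng=rng)  # (200, 32) encoder input
vis, msk, tgt_xyz, tgt_feat = masked_reconstruction_targets(coords, feats, 0.6, rng)
```

In this sketch the encoder itself is elided; the point is only that the token width (32) is fixed regardless of how many morphometric channels a given pipeline produces, and that both geometry (`tgt_xyz`) and features (`tgt_feat`) serve as reconstruction targets.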

