Multimodal Deep Learning for Dynamic and Static Neuroimaging: Integrating MRI and fMRI for Alzheimer Disease Analysis
arXiv cs.CV · March 17, 2026
Key Points
- The paper presents a multimodal deep learning framework that integrates MRI and fMRI for classifying Alzheimer's disease, mild cognitive impairment, and normal cognition.
- It uses 3D CNNs to extract structural features from MRI and recurrent architectures to learn temporal features from fMRI sequences, fused for joint spatial-temporal learning.
- Experiments on a small paired MRI-fMRI dataset (29 subjects) show that data augmentation substantially improves the multimodal model's classification stability and generalization, whereas augmentation yields little benefit on a large-scale single-modality MRI dataset.
- The results highlight the importance of dataset size and modality selection when designing augmentation strategies for neuroimaging-based AD classification.
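The fusion described above (a 3D CNN branch for structural MRI, a recurrent branch for fMRI time series, concatenated for joint classification) can be sketched in PyTorch. This is a minimal illustrative architecture, not the authors' implementation: the layer sizes, `fmri_feat_dim`, and the late-fusion-by-concatenation design are assumptions for the sketch.

```python
import torch
import torch.nn as nn

class MultimodalADClassifier(nn.Module):
    """Hypothetical sketch: 3D CNN (MRI) + LSTM (fMRI) with late fusion."""

    def __init__(self, num_classes=3, fmri_feat_dim=64, hidden_dim=64):
        super().__init__()
        # 3D CNN branch: extracts a structural embedding from an MRI volume.
        self.mri_cnn = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # -> (B, 16, 1, 1, 1)
        )
        # Recurrent branch: learns temporal dynamics over fMRI
        # feature vectors (one vector per timepoint).
        self.fmri_rnn = nn.LSTM(input_size=fmri_feat_dim,
                                hidden_size=hidden_dim,
                                batch_first=True)
        # Fusion head: classify AD / MCI / normal cognition from
        # the concatenated structural + temporal embeddings.
        self.classifier = nn.Linear(16 + hidden_dim, num_classes)

    def forward(self, mri, fmri):
        # mri:  (B, 1, D, H, W) structural volume
        # fmri: (B, T, fmri_feat_dim) time series of features
        structural = self.mri_cnn(mri).flatten(1)      # (B, 16)
        _, (h_n, _) = self.fmri_rnn(fmri)              # h_n: (1, B, hidden)
        temporal = h_n[-1]                             # (B, hidden)
        fused = torch.cat([structural, temporal], dim=1)
        return self.classifier(fused)                  # (B, num_classes)

# Usage with dummy tensors (shapes chosen for illustration only):
model = MultimodalADClassifier()
mri = torch.randn(2, 1, 32, 32, 32)   # batch of 2 MRI volumes
fmri = torch.randn(2, 20, 64)         # 20 fMRI timepoints each
logits = model(mri, fmri)
print(logits.shape)  # torch.Size([2, 3])
```

With only 29 paired subjects, a model like this would overfit quickly, which is consistent with the paper's finding that augmentation matters most in the small multimodal setting.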