Improving clinical interpretability of linear neuroimaging models through feature whitening
arXiv cs.LG / 4/23/2026
Key Points
- Linear neuroimaging models are useful for biomarker discovery, but their weights are often not clinically interpretable: correlations among brain regions cause shared, rather than region-specific, signal to be mixed into the learned weights.
- The paper proposes a whitening method that uses prior neuroanatomical knowledge to decorrelate groups of brain regions with known shared variance, aiming to disentangle overlapping information across correlated measures.
- It also introduces a regularized whitening variant that enables controlled tuning of how strongly the features are decorrelated.
- Experiments on region-of-interest (ROI) features for two psychiatric classification tasks (bipolar disorder vs. controls, schizophrenia vs. controls) show improved interpretability of linear model weights while maintaining predictive performance.
- Unlike PCA/ICA whitening used for dimensionality reduction, the method preserves the full input signal and is designed specifically for feature interpretation rather than feature selection.
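The idea of group-wise, tunable decorrelation can be sketched as follows. This is a minimal illustration, not the paper's exact method: it applies ZCA-style whitening within predefined groups of correlated features, with a hypothetical `alpha` parameter (assumed here) that interpolates between no decorrelation (`alpha=0`) and full whitening (`alpha=1`), analogous to the regularized variant described above.

```python
import numpy as np

def regularized_group_whitening(X, groups, alpha=1.0):
    """ZCA-style whitening applied within predefined feature groups.

    X      : (n_samples, n_features) feature matrix (e.g. ROI measures).
    groups : list of index arrays, each a set of correlated features
             (e.g. regions with known shared anatomical variance).
    alpha  : in [0, 1]; 1.0 = full whitening, 0.0 = centering only.
    """
    Xw = X.astype(float).copy()
    for idx in groups:
        # Center the group's features.
        Xg = X[:, idx] - X[:, idx].mean(axis=0)
        cov = np.cov(Xg, rowvar=False)
        # Eigendecomposition of the within-group covariance.
        vals, vecs = np.linalg.eigh(cov)
        vals = np.clip(vals, 1e-12, None)
        # Shrink eigenvalues toward 1 via the exponent -alpha/2:
        # alpha=1 gives the ZCA transform, alpha=0 the identity.
        W = vecs @ np.diag(vals ** (-alpha / 2.0)) @ vecs.T
        Xw[:, idx] = Xg @ W
    return Xw
```

Because each group is transformed in place, the output keeps the full feature dimensionality and a one-to-one mapping to the original ROIs, unlike PCA/ICA whitening used for dimensionality reduction.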