FoMo X: Modular Explainability Signals for Outlier Detection Foundation Models
arXiv cs.LG · 19 March 2026
Key Points
- FoMo-X adds modular diagnostic heads to PFN-based outlier detection models to provide intrinsic, lightweight explainability without expensive post-hoc methods.
- The approach leverages frozen PFN backbone embeddings and trains auxiliary heads offline using the same generative simulator prior, enabling one-pass deterministic inference that retains uncertainty signals.
- It introduces a Severity Head that discretizes deviations into interpretable risk tiers, and an Uncertainty Head that produces calibrated confidence estimates.
- Evaluations on synthetic data and real-world benchmarks (ADBench) show high fidelity to ground-truth diagnostic signals with negligible inference overhead, supporting trustworthy zero-shot outlier detection.
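The two-head design described above can be illustrated with a minimal sketch. This is not the authors' implementation: the embeddings, weights, tier thresholds, and entropy-based confidence proxy below are all hypothetical placeholders, standing in for heads that FoMo-X would train offline against the simulator prior while the PFN backbone stays frozen.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen-backbone embeddings for a batch of test points
# (the backbone itself is never updated; only the heads are trained).
embeddings = rng.normal(size=(8, 16))  # 8 points, 16-dim embeddings

# --- Severity Head (illustrative): a linear projection whose scalar
# output is discretized into risk tiers. Real weights would be learned
# offline from the generative simulator prior; these are random stand-ins.
w_sev = rng.normal(size=16)
raw_deviation = embeddings @ w_sev                    # one scalar per point
tiers = np.digitize(raw_deviation, bins=[-1.0, 0.0, 1.0])  # tiers 0..3

# --- Uncertainty Head (illustrative): a linear layer plus softmax, with
# predictive entropy used here as a simple stand-in for a calibrated
# confidence measure.
w_unc = rng.normal(size=(16, 2))
logits = embeddings @ w_unc
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)
entropy = -(probs * np.log(probs)).sum(axis=1)        # high = uncertain

# Both heads read the same embeddings, so diagnostics come out in the
# same single forward pass as the detection score.
print(tiers, entropy.round(3))
```

Because both heads consume the already-computed backbone embeddings, the extra cost at inference is two small matrix products, consistent with the "negligible inference overhead" claim.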
