AI Navigate

FoMo-X: Modular Explainability Signals for Outlier Detection Foundation Models

arXiv cs.LG / 3/19/2026


Key Points

  • FoMo-X adds modular diagnostic heads to PFN-based outlier detection models to provide intrinsic, lightweight explainability without expensive post-hoc methods.
  • The approach leverages frozen PFN backbone embeddings and trains auxiliary heads offline using the same generative simulator prior, enabling one-pass deterministic inference that retains uncertainty signals.
  • It introduces a Severity Head for discretizing deviations into interpretable risk tiers and an Uncertainty Head for calibrated confidence measures.
  • Evaluations on synthetic data and real-world benchmarks (ADBench) show high fidelity to ground-truth diagnostic signals with negligible inference overhead, supporting trustworthy zero-shot outlier detection.
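The core architectural idea in the points above, lightweight diagnostic heads reading off a frozen backbone's embeddings in a single forward pass, can be sketched in a few lines. The paper does not publish code, so everything below (the random-projection "backbone", the head names, the four risk tiers) is an illustrative stand-in, not the FoMo-X implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen backbone: maps each data row to a context-conditioned
# embedding. In FoMo-X this is the pretrained PFN; a fixed random projection
# with a tanh nonlinearity stands in here.
W_backbone = rng.standard_normal((8, 16))

def embed(X):
    """Frozen embeddings: no gradients flow here at diagnosis time."""
    return np.tanh(X @ W_backbone)

# The diagnostic heads are deliberately lightweight: small linear maps on
# top of the shared embeddings, trained offline (weights random here).
W_severity = rng.standard_normal((16, 4))   # 4 illustrative risk tiers
W_uncert = rng.standard_normal((16, 1))

def severity_head(Z):
    """Discretize each row's deviation into an interpretable risk tier."""
    return (Z @ W_severity).argmax(axis=1)

def uncertainty_head(Z):
    """Map embeddings to a confidence score in (0, 1) via a sigmoid."""
    return 1.0 / (1.0 + np.exp(-(Z @ W_uncert)))

X = rng.standard_normal((5, 8))   # toy "dataset" of 5 rows
Z = embed(X)                      # ONE backbone pass, shared by all heads
tiers, conf = severity_head(Z), uncertainty_head(Z)
print(tiers.shape, conf.shape)    # (5,) (5, 1)
```

The design point this illustrates: because every head consumes the same cached embeddings, adding diagnostics costs only a matrix multiply per head, which is why the paper can claim negligible inference overhead.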

Abstract

Tabular foundation models, specifically Prior-Data Fitted Networks (PFNs), have revolutionized outlier detection (OD) by enabling unsupervised zero-shot adaptation to new datasets without training. However, despite their predictive power, these models typically function as opaque black boxes, outputting scalar outlier scores that lack the operational context required for safety-critical decision-making. Existing post-hoc explanation methods are often computationally prohibitive for real-time deployment or fail to capture the epistemic uncertainty inherent in zero-shot inference. In this work, we introduce FoMo-X, a modular framework that equips OD foundation models with intrinsic, lightweight diagnostic capabilities. We leverage the insight that the frozen embeddings of a pretrained PFN backbone already encode rich, context-conditioned relational information. FoMo-X attaches auxiliary diagnostic heads to these embeddings, trained offline using the same generative simulator prior as the backbone. This allows us to distill computationally expensive properties, such as Monte Carlo dropout-based epistemic uncertainty, into a deterministic, single-pass inference. We instantiate FoMo-X with two novel heads: a Severity Head that discretizes deviations into interpretable risk tiers, and an Uncertainty Head that provides calibrated confidence measures. Extensive evaluation on synthetic and real-world benchmarks (ADBench) demonstrates that FoMo-X recovers ground-truth diagnostic signals with high fidelity and negligible inference overhead. By bridging the gap between foundation model performance and operational explainability, FoMo-X offers a scalable path toward trustworthy, zero-shot outlier detection.
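The distillation step the abstract describes, turning many stochastic Monte Carlo dropout passes into a deterministic one-pass estimate, can be made concrete with a toy sketch. The scorer below is an illustrative stand-in (not the PFN), and the least-squares fit replaces the paper's simulator-prior head training; for this particular toy, dropout variance happens to be linear in the squared embedding activations, so a linear head on those features suffices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a PFN-style scorer: embeddings + a linear scoring layer.
W1 = rng.standard_normal((8, 32))
w2 = rng.standard_normal(32)

def mc_dropout_uncertainty(X, passes=200, p_drop=0.5):
    """Expensive teacher: the variance of the outlier score across random
    dropout masks approximates epistemic uncertainty (many forward passes)."""
    scores = []
    for _ in range(passes):
        mask = (rng.random(32) > p_drop) / (1.0 - p_drop)  # inverted dropout
        scores.append((np.tanh(X @ W1) * mask) @ w2)
    return np.var(np.stack(scores), axis=0)

# Offline distillation: regress the teacher's MC estimate onto features of
# the frozen embeddings. FoMo-X would instead train a small head on data
# drawn from the generative simulator prior.
X = rng.standard_normal((512, 8))    # rows sampled from a toy "prior"
Z = np.tanh(X @ W1)                  # frozen, deterministic embeddings
target = mc_dropout_uncertainty(X)   # 200 passes, paid once, offline
w_head, *_ = np.linalg.lstsq(Z**2, target, rcond=None)

# Deployment: a single deterministic pass recovers the uncertainty signal.
one_pass = (Z**2) @ w_head
corr = np.corrcoef(one_pass, target)[0, 1]
```

At deployment time the 200-pass loop disappears entirely: the distilled head reproduces the teacher's uncertainty ranking (high `corr` on this toy) at the cost of one extra matrix multiply, which is the "retains uncertainty signals without the Monte Carlo cost" trade the abstract refers to.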