I Know What I Don't Know: Latent Posterior Factor Models for Multi-Evidence Probabilistic Reasoning
arXiv cs.AI / 3/18/2026
Key Points
- The paper introduces Latent Posterior Factors (LPF), a framework that converts Variational Autoencoder latent posteriors into soft likelihood factors for tractable probabilistic reasoning over unstructured evidence with calibrated uncertainty estimates.
- It presents two architectures, LPF-SPN and LPF-Learned, to enable a principled comparison between explicit probabilistic reasoning and learned aggregation under a shared uncertainty representation.
- Across eight domains (seven synthetic plus the FEVER benchmark), LPF-SPN achieves up to 97.8% accuracy and low calibration error (ECE 1.4%), substantially outperforming evidential deep learning, LLM, and graph-based baselines across 15 random seeds.
- The work offers a reproducible training methodology, cross-domain validation, and formal guarantees discussed in a companion paper.
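To make the core idea concrete, here is a minimal sketch of one plausible reading of the LPF construction, under the assumption of diagonal Gaussian posteriors: a VAE posterior q(z|x) is divided by the prior p(z) to obtain a soft likelihood factor, and multiple evidence factors are fused by summing natural (precision) parameters. The helper names `posterior_to_factor` and `combine_evidence` are hypothetical and are not taken from the paper.

```python
import numpy as np

def posterior_to_factor(mu_q, var_q, mu_p=0.0, var_p=1.0):
    """Turn a Gaussian VAE posterior q(z|x) into a soft likelihood factor
    l(z) proportional to q(z|x) / p(z), expressed in natural parameters.
    Hypothetical helper: the paper's exact construction may differ."""
    prec = 1.0 / var_q - 1.0 / var_p          # factor precision
    prec_mean = mu_q / var_q - mu_p / var_p   # precision-weighted mean
    return prec, prec_mean

def combine_evidence(factors, mu_p=0.0, var_p=1.0):
    """Fuse independent evidence factors with the prior by summing
    natural parameters (product of Gaussians), returning mean/variance."""
    prec = 1.0 / var_p + sum(p for p, _ in factors)
    prec_mean = mu_p / var_p + sum(pm for _, pm in factors)
    var = 1.0 / prec
    return prec_mean * var, var

# Two pieces of evidence about the same latent variable z
f1 = posterior_to_factor(mu_q=1.0, var_q=0.5)
f2 = posterior_to_factor(mu_q=1.4, var_q=0.25)
mu, var = combine_evidence([f1, f2])
# The fused variance is smaller than either individual posterior's,
# reflecting the calibrated "know what I don't know" behavior.
```

Under this reading, the remaining variance after fusion is the model's explicit uncertainty estimate, which a downstream reasoner (e.g., a sum-product network in LPF-SPN) can propagate rather than discard.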
