Calibrating Scientific Foundation Models with Inference-Time Stochastic Attention
arXiv cs.LG · April 22, 2026
Key Points
- The paper introduces “Stochastic Attention,” an inference-time modification for transformer-based scientific foundation models that randomizes attention to better support calibrated predictive uncertainty.
- Instead of deterministic softmax attention weights, it draws normalized multinomial samples governed by a single concentration parameter, enabling predictive ensembles without retraining (see the sampling sketch after this list).
- The authors propose a calibration objective for setting the concentration parameter via an efficient univariate post-hoc tuning process that aligns the model's stochastic outputs with observed targets (a tuning sketch follows the sampling example below).
- Experiments on weather and time-series forecasting foundation models, as well as an additional regression task, show improved calibration and sharper prediction intervals compared with uncertainty-aware baselines.
- The approach is computationally efficient, needing only minutes of post-hoc tuning to reach competitive performance, versus days of retraining for comparable baseline methods.
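To make the mechanism concrete, here is a minimal sketch of what "normalized multinomial sampling governed by a single concentration parameter" could look like for a single query's attention weights. The function and parameter names (`stochastic_attention`, `concentration`) are illustrative assumptions, not the paper's API; the idea is that the concentration parameter sets the number of multinomial draws, so larger values recover deterministic softmax attention.

```python
import numpy as np

def softmax(scores):
    """Numerically stable softmax over the last axis."""
    z = scores - np.max(scores, axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def stochastic_attention(scores, concentration, rng):
    """Stochastic replacement for one query's attention weights.

    `concentration` (hypothetical name) is the single scalar that
    governs the randomness: we draw `concentration` key indices from
    the softmax distribution and renormalize the counts. Larger values
    concentrate mass on the softmax weights, so deterministic attention
    is recovered as concentration -> infinity.
    """
    p = softmax(scores)                       # deterministic weights
    counts = rng.multinomial(concentration, p)
    return counts / concentration             # normalized sample

# A predictive ensemble: repeat the stochastic forward pass K times.
rng = np.random.default_rng(0)
scores = np.array([2.0, 0.5, -1.0, 0.0])      # toy attention logits
ensemble = [stochastic_attention(scores, concentration=50, rng=rng)
            for _ in range(8)]
```

Because the randomness is injected only at inference, each ensemble member is just another forward pass of the frozen model, which is what lets the method produce predictive distributions without retraining.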
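Since tuning is over a single scalar, the post-hoc calibration step can be as simple as a one-dimensional search. The sketch below assumes a hypothetical `predict_ensemble` callable and uses mean absolute coverage error of central prediction intervals as a stand-in calibration objective; the paper's actual objective may differ.

```python
import numpy as np

def tune_concentration(predict_ensemble, X_val, y_val,
                       grid=(10, 30, 100, 300, 1000), n_samples=32):
    """Univariate post-hoc search over the concentration parameter.

    `predict_ensemble(X, c, n_samples)` is an assumed callable that
    runs the frozen model with stochastic attention and returns an
    array of shape (n_samples, len(y_val)). The objective below is a
    stand-in calibration score, not necessarily the paper's.
    """
    def coverage_error(samples, y):
        err = 0.0
        levels = (0.5, 0.8, 0.9)
        for q in levels:
            lo = np.quantile(samples, (1 - q) / 2, axis=0)
            hi = np.quantile(samples, (1 + q) / 2, axis=0)
            empirical = np.mean((y >= lo) & (y <= hi))
            err += abs(empirical - q)
        return err / len(levels)

    scores = {c: coverage_error(predict_ensemble(X_val, c, n_samples),
                                y_val)
              for c in grid}
    return min(scores, key=scores.get)        # best-calibrated value
```

A coarse grid like this already explains the reported cost profile: each candidate requires only repeated forward passes on a validation set, so the whole search runs in minutes rather than the days a retraining-based baseline would need.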