Evidence-based Distributional Alignment for Large Language Models
arXiv cs.LG / 3/17/2026
Key Points
- Evi-DA is an evidence-based alignment method for LLMs that predicts how a target population would distribute responses across multiple-choice options instead of collapsing disagreement into a single consensus.
- It addresses instability under domain and cultural shift by retrieving World Values Survey items, predicting a Welzel value signature for each option, and inferring country-conditioned distributions in a structured format.
- The approach uses a two-stage reinforcement learning training pipeline that optimizes survey-derived rewards to improve intermediate value predictions, faithful final distributions, well-formed outputs, and reduced cultural bias.
- Empirical results show Jensen-Shannon divergence reductions relative to strong baselines, with average relative improvements of up to 44% across in-domain and out-of-domain benchmarks on multiple open-source backbones.
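The headline metric above, Jensen-Shannon divergence between a model's predicted answer distribution and the survey-observed one, is straightforward to compute. The sketch below is illustrative only: the distributions and the `relative_improvement` helper are hypothetical stand-ins, not the paper's actual data or code.

```python
import math

def kl_divergence(p, q):
    """KL(p || q) in bits; terms with p_i = 0 contribute nothing."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jensen_shannon_divergence(p, q):
    """Symmetric JSD in [0, 1] (log base 2) between two
    distributions over the same multiple-choice options."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

def relative_improvement(jsd_baseline, jsd_method):
    """Relative JSD reduction of a method over a baseline,
    as reported in the abstract (e.g. 0.44 for a 44% gain)."""
    return (jsd_baseline - jsd_method) / jsd_baseline

# Hypothetical example: survey responses vs. two model predictions
# over a four-option item.
survey = [0.50, 0.25, 0.15, 0.10]
baseline_pred = [0.80, 0.10, 0.05, 0.05]   # over-confident consensus
method_pred = [0.45, 0.30, 0.15, 0.10]     # closer to the population split

jsd_base = jensen_shannon_divergence(survey, baseline_pred)
jsd_evi = jensen_shannon_divergence(survey, method_pred)
print(relative_improvement(jsd_base, jsd_evi))
```

A lower JSD means the predicted distribution tracks the population's actual split of opinion rather than a single collapsed consensus answer.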