IDEA: An Interpretable and Editable Decision-Making Framework for LLMs via Verbal-to-Numeric Calibration
arXiv cs.AI / 4/15/2026
Key Points
- The paper introduces IDEA, an interpretable and editable decision-making framework for LLMs that addresses miscalibrated probabilities and unfaithful explanations in high-stakes use cases.
- IDEA distills the LLM's decision knowledge into a parametric model, learning a verbal-to-numeric calibration jointly with the decision parameters via expectation-maximization (EM) while preserving dependencies between human-meaningful factors.
- The method supports direct parameter editing with mathematical guarantees, enabling quantitative human–AI collaboration beyond what prompting alone can achieve.
- Across five datasets, IDEA built on Qwen-3-32B attains 78.6% average performance, achieves perfect factor exclusion and exact calibration, and outperforms DeepSeek R1 and GPT-5.2.
- An open-source implementation is provided via a public GitHub repository to facilitate adoption and further evaluation.
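To make the pipeline in the bullets concrete, here is a minimal illustrative sketch, not the paper's actual algorithm: the toy factors, the data, and the moment-matching calibration step (used here as a simple stand-in for the paper's jointly learned EM calibration) are all invented for illustration. It maps verbal confidence words to numeric probabilities, fits a small logistic decision model over the calibrated factor scores, and then edits a factor weight directly.

```python
import math

# Toy data (hypothetical format): each example maps factors to the LLM's
# verbal confidence word, paired with an observed binary outcome.
DATA = [
    ({"evidence": "likely",   "history": "unlikely"}, 1),
    ({"evidence": "likely",   "history": "likely"},   1),
    ({"evidence": "unlikely", "history": "likely"},   0),
    ({"evidence": "unlikely", "history": "unlikely"}, 0),
]

def calibrate(data):
    """Map each verbal word to the empirical outcome rate where it appears
    (a moment-matching stand-in for the paper's learned calibration)."""
    hits, counts = {}, {}
    for labels, y in data:
        for word in labels.values():
            hits[word] = hits.get(word, 0) + y
            counts[word] = counts.get(word, 0) + 1
    return {w: hits[w] / counts[w] for w in counts}

def fit_weights(data, cal, iters=200, lr=0.5):
    """Fit an interpretable logistic decision model over calibrated scores
    by plain gradient descent; each weight is one editable parameter."""
    w = {f: 0.0 for f in data[0][0]}
    b = 0.0
    for _ in range(iters):
        gw, gb = {f: 0.0 for f in w}, 0.0
        for labels, y in data:
            z = b + sum(w[f] * cal[labels[f]] for f in labels)
            p = 1 / (1 + math.exp(-z))
            for f in labels:
                gw[f] += (p - y) * cal[labels[f]]
            gb += p - y
        for f in w:
            w[f] -= lr * gw[f]
        b -= lr * gb
    return w, b

def predict(labels, cal, w, b):
    z = b + sum(w[f] * cal[labels[f]] for f in labels)
    return 1 / (1 + math.exp(-z))

cal = calibrate(DATA)          # e.g. "likely" -> 0.75, "unlikely" -> 0.25
w, b = fit_weights(DATA, cal)
w["history"] = 0.0             # direct edit: exclude the "history" factor
```

Because the decision model is parametric, excluding a factor is a one-line edit that provably removes its influence on every prediction, a toy analogue of the editing-with-guarantees property the paper claims; prompting alone offers no such guarantee.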