PassiveQA: A Three-Action Framework for Epistemically Calibrated Question Answering via Supervised Finetuning
arXiv cs.CL / 4/7/2026
Key Points
- The paper argues that LLM-based QA systems often assume queries are fully specified, causing overconfident or hallucinated answers when information is incomplete or ambiguous.
- It studies a decision-aware setup in which the model must choose among three actions (Answer, Ask for clarification, or Abstain) based on whether its current information state is epistemically sufficient; see the decision sketch after this list.
- The authors find that standard and enhanced retrieval-augmented generation (RAG) approaches do not reliably provide this “epistemic awareness” and tend to generate answers even when required variables are missing.
- They introduce PassiveQA, which uses supervised fine-tuning with structured information-state representations, knowledge-graph-grounded context, and a fine-tuned planner that reasons about missing variables (sketched below).
- Experiments on multiple QA datasets show improved macro F1 and abstention recall with lower hallucination rates under compute-constrained training, suggesting that epistemic decision-making should be learned during training (a sketch of the reported metrics also follows the list).
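
To make the three-action setup concrete, here is a minimal sketch of an information-state representation and decision rule, assuming each query comes with a set of required variables. All names here (`InformationState`, `decide_action`, `user_can_clarify`) are illustrative assumptions, not the paper's API; in PassiveQA the decision is learned via supervised fine-tuning rather than hand-coded.

```python
# Hypothetical sketch of the Answer / Ask / Abstain decision described above.
# The rule-based logic only illustrates the target behavior that the paper's
# fine-tuned planner is trained to produce.
from dataclasses import dataclass, field
from enum import Enum


class Action(Enum):
    ANSWER = "answer"            # information state is sufficient
    ASK = "ask_clarification"    # a missing variable is recoverable from the user
    ABSTAIN = "abstain"          # the gap cannot be closed by asking


@dataclass
class InformationState:
    """Structured record of what the query requires vs. what is known."""
    required_vars: set[str]                                    # variables the answer depends on
    known_vars: dict[str, str] = field(default_factory=dict)   # grounded variable values

    def missing(self) -> set[str]:
        """Required variables with no grounded value yet."""
        return self.required_vars - self.known_vars.keys()


def decide_action(state: InformationState, user_can_clarify: bool = True) -> Action:
    """Choose among the three actions based on epistemic sufficiency."""
    if not state.missing():
        return Action.ANSWER
    if user_can_clarify:
        return Action.ASK
    return Action.ABSTAIN


if __name__ == "__main__":
    state = InformationState(
        required_vars={"city", "date"},
        known_vars={"city": "Paris"},   # "date" is missing, so ask for it
    )
    print(decide_action(state))  # Action.ASK
```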
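For the reported metrics, the following is a hedged sketch of macro F1 over the three actions and abstention recall, assuming gold and predicted actions are available as string labels; the paper's exact evaluation protocol may differ.

```python
# Illustrative computation of macro F1 and abstention recall for the
# three-action setting. The label encoding and example data are assumptions.
ACTIONS = ["answer", "ask", "abstain"]


def macro_f1(gold: list[str], pred: list[str]) -> float:
    """Unweighted mean of per-action F1 scores."""
    f1s = []
    for a in ACTIONS:
        tp = sum(g == a and p == a for g, p in zip(gold, pred))
        fp = sum(g != a and p == a for g, p in zip(gold, pred))
        fn = sum(g == a and p != a for g, p in zip(gold, pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)


def abstention_recall(gold: list[str], pred: list[str]) -> float:
    """Fraction of cases that truly warrant abstention where the model abstains."""
    preds_on_abstain = [p for g, p in zip(gold, pred) if g == "abstain"]
    if not preds_on_abstain:
        return 0.0
    return sum(p == "abstain" for p in preds_on_abstain) / len(preds_on_abstain)


if __name__ == "__main__":
    gold = ["answer", "abstain", "ask", "abstain", "answer"]
    pred = ["answer", "abstain", "ask", "answer", "answer"]
    print(f"macro F1: {macro_f1(gold, pred):.3f}")
    print(f"abstention recall: {abstention_recall(gold, pred):.3f}")
```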