What Models Know, How Well They Know It: Knowledge-Weighted Fine-Tuning for Learning When to Say "I Don't Know"

arXiv cs.CL / 4/8/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper proposes knowledge-weighted fine-tuning to reduce hallucinations by correcting the knowledge misalignment between pre-training and fine-tuning, aligning the learning signal with the model's instance-level knowledge.
  • It estimates fine-grained knowledge scores using multi-sampled inference, then scales the training signal based on how well the model already knows each example.
  • The method explicitly trains the model to respond with “I don’t know” for out-of-scope or unknown queries, improving uncertainty calibration without sacrificing accuracy on answerable questions.
  • The authors introduce evaluation metrics for uncertainty and show that accurately discriminating known from unknown instances consistently improves performance.
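The pipeline in the key points above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the `sample_answer` stub, the agreement-based score, and the 0.25 refusal threshold are all assumptions standing in for the paper's actual multi-sampled inference and weighting scheme.

```python
from typing import Callable, Tuple

def knowledge_score(sample_answer: Callable[[str], str],
                    question: str, reference: str, k: int = 8) -> float:
    """Estimate an instance-level knowledge score via multi-sampled inference.

    Samples k answers from the model and returns the fraction that match
    the reference -- a proxy for how well the model already knows this
    example. (Exact-match agreement is a simplification here.)
    """
    hits = sum(sample_answer(question) == reference for _ in range(k))
    return hits / k

def build_target_and_weight(reference: str, score: float,
                            threshold: float = 0.25) -> Tuple[str, float]:
    """Map a knowledge score to a training target and loss weight.

    Low-score (effectively unknown) instances are retargeted to an
    explicit refusal; known instances keep the reference answer, with
    the training signal scaled by the knowledge score.
    """
    if score < threshold:  # hypothetical cutoff, not from the paper
        return "I don't know", 1.0
    return reference, score
```

In a real fine-tuning loop, the returned weight would multiply each example's token-level cross-entropy loss, so well-known examples contribute strongly while barely-known ones contribute little or are converted into refusal training data.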

Abstract

While large language models (LLMs) demonstrate strong capabilities across diverse user queries, they still suffer from hallucinations, often arising from knowledge misalignment between pre-training and fine-tuning. To address this misalignment, we reliably estimate a fine-grained, instance-level knowledge score via multi-sampled inference. Using the knowledge score, we scale the learning signal according to the model's existing knowledge, while encouraging explicit "I don't know" responses for out-of-scope queries. Experimental results show that this approach allows the model to explicitly express uncertainty when it lacks knowledge, while maintaining accuracy on questions it can answer. Furthermore, we propose evaluation metrics for uncertainty, showing that accurate discrimination between known and unknown instances consistently improves performance.