Context-Fidelity Boosting: Enhancing Faithful Generation through Watermark-Inspired Decoding
arXiv cs.CL / 4/27/2026
Key Points
- The paper introduces Context-Fidelity Boosting (CFB), a decoding-time framework aimed at reducing “faithfulness hallucinations” where LLM outputs contradict or ignore the input context.
- CFB uses watermark-inspired logit shaping by adding token-level logit adjustments proportional to how well each candidate token is supported by the input context.
- It proposes three variants—static boosting, context-aware boosting, and token-aware boosting—ranging from fixed biases to adaptive, relevance-informed adjustments.
- CFB is lightweight, requiring no retraining or changes to the model architecture; experiments on summarization and QA show consistent faithfulness improvements with minimal generation overhead.
- An open-source implementation is provided, suggesting the method can be readily adopted across many existing open-source LLMs.
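To make the mechanism concrete, here is a minimal sketch of the static-boosting variant: a fixed bias is added to the logits of candidate tokens that are supported by the input context, in the spirit of green-list watermarking. The function name, the set-membership notion of "supported by the context," and the `delta` value are illustrative assumptions, not the paper's actual scoring; the context-aware and token-aware variants would replace the fixed `delta` with an adaptive, relevance-informed value.

```python
import math

def static_context_boost(logits, context_token_ids, delta=2.0):
    """Static CFB-style boosting (illustrative sketch, not the paper's code):
    add a fixed bias `delta` to every candidate token present in the input
    context, then softmax-normalize to get the next-token distribution."""
    boosted = [
        logit + (delta if tok_id in context_token_ids else 0.0)
        for tok_id, logit in enumerate(logits)
    ]
    # Numerically stable softmax over the boosted logits.
    m = max(boosted)
    exps = [math.exp(x - m) for x in boosted]
    z = sum(exps)
    return [e / z for e in exps]

# Toy 5-token vocabulary with uniform logits; tokens 1 and 3 appear in the
# (hypothetical) input context, so their probability mass is boosted.
logits = [1.0, 1.0, 1.0, 1.0, 1.0]
probs = static_context_boost(logits, context_token_ids={1, 3}, delta=2.0)
```

Because the adjustment is applied purely at decoding time, this style of intervention needs no retraining and can wrap any model that exposes its output logits.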


