MoRFI: Monotonic Relationship Feature Identification for Sparse Autoencoder Latents

arXiv cs.CL / 4/30/2026


Key Points

  • The paper investigates why adding new factual knowledge during post-training can increase hallucinations in LLMs, focusing on a controlled setup for closed-book QA.
  • It fine-tunes multiple open models (Llama 3.1 8B, Gemma 2 9B, and Mistral 7B v0.3) on seven single-QA datasets while varying the proportion of new knowledge and the number of training epochs, then confirms that introducing more new knowledge (especially with longer training) leads to higher hallucination rates.
  • Using pre-trained sparse autoencoders (SAEs), the authors analyze residual stream activations across checkpoints to find latent directions causally linked to hallucinations.
  • They propose Monotonic Relationship Feature Identification (MoRFI), which extracts SAE features that change monotonically with controlled fine-tuning mixtures (see the sketch after this list), enabling the discovery of single-latent interventions that can recover stored knowledge.
  • The results indicate that exposure to unknown facts can disrupt the model’s ability to retrieve previously stored knowledge along specific residual-stream directions, and the approach generalizes across different model families.

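The summary gives no pseudocode, but the core filtering idea in MoRFI can be illustrated as follows: keep only those SAE latents whose activation varies monotonically with the fraction of new knowledge in the controlled fine-tuning mixtures. The sketch below uses a Spearman rank-correlation test as one plausible monotonicity criterion; the function name, array shapes, and threshold are illustrative assumptions, not the authors' exact procedure.

```python
# Minimal sketch of a MoRFI-style monotonicity filter, assuming mean SAE
# feature activations have already been computed per fine-tuning mixture.
# Names (monotonic_features, acts_per_mixture, min_abs_rho) are illustrative.
import numpy as np
from scipy.stats import spearmanr

def monotonic_features(acts_per_mixture: np.ndarray,
                       new_knowledge_fracs: np.ndarray,
                       min_abs_rho: float = 0.95) -> np.ndarray:
    """Return indices of SAE latents whose mean activation varies
    monotonically with the fraction of new knowledge in the mixture.

    acts_per_mixture: (num_mixtures, num_sae_features) mean activations,
        one row per controlled fine-tuning mixture / checkpoint.
    new_knowledge_fracs: (num_mixtures,) fraction of unknown facts per mix.
    """
    keep = []
    for j in range(acts_per_mixture.shape[1]):
        rho, _ = spearmanr(new_knowledge_fracs, acts_per_mixture[:, j])
        # Spearman rho near +/-1 indicates a (near-)monotonic relationship;
        # a high threshold keeps only latents that track the mixture ratio.
        if np.isfinite(rho) and abs(rho) >= min_abs_rho:
            keep.append(j)
    return np.array(keep, dtype=int)

# Example: 5 mixtures (0%..100% new knowledge), 16k SAE latents.
acts = np.random.rand(5, 16384)
fracs = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
candidate_latents = monotonic_features(acts, fracs)
```

Latents that pass this filter would then be candidates for causal validation, for example via the single-latent intervention sketched after the abstract.
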
Abstract

Large language models (LLMs) acquire most of their factual knowledge during the pre-training stage, through next-token prediction. Subsequent post-training stages often introduce new facts beyond the model's parametric knowledge, giving rise to hallucinations. While it has been demonstrated that supervised fine-tuning (SFT) on new knowledge may exacerbate the problem, the underlying mechanisms are still poorly understood. We conduct a controlled fine-tuning experiment, focusing on closed-book QA, and find latent directions that causally contribute to hallucinations. Specifically, we fine-tune Llama 3.1 8B, Gemma 2 9B, and Mistral 7B v0.3 on seven distinct single-QA datasets, controlling for the percentage of new knowledge and the number of training epochs. By measuring performance on the test set, we validate that incrementally introducing new knowledge increases hallucinations, with the effect being more pronounced with prolonged training. We leverage pre-trained sparse autoencoders (SAEs) to analyze residual stream activations across various checkpoints for each model and propose Monotonic Relationship Feature Identification (MoRFI) for capturing causally relevant latents. MoRFI filters SAE features that respond monotonically to controlled fine-tuning data mixtures of a target property. Our findings show that exposure to unknown facts disrupts the model's ability to retrieve stored knowledge along a set of directions in the residual stream. Our pipeline reliably discovers these directions across distinct models, recovering knowledge through single-latent interventions.
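
To make "single-latent interventions" concrete, here is a hedged sketch of steering the residual stream along one SAE decoder direction at a fixed layer during generation. The layer index, scaling coefficient, and hook mechanics are assumptions made for illustration; the paper's exact intervention procedure may differ.

```python
# Sketch of steering the residual stream along a single SAE latent direction
# using a PyTorch forward hook. The scale and hook point are assumptions.
import torch

def make_steering_hook(decoder_direction: torch.Tensor, scale: float):
    """decoder_direction: (d_model,) SAE decoder column for the chosen latent."""
    direction = decoder_direction / decoder_direction.norm()

    def hook(module, inputs, output):
        # Decoder blocks in Hugging Face-style models often return a tuple;
        # steer only the hidden states (the first element).
        hidden = output[0] if isinstance(output, tuple) else output
        steered = hidden + scale * direction.to(hidden.device, hidden.dtype)
        if isinstance(output, tuple):
            return (steered,) + output[1:]
        return steered

    return hook

# Hypothetical usage with a Llama-style model (layer index and scale assumed):
# layer = model.model.layers[20]
# handle = layer.register_forward_hook(make_steering_hook(decoder_dir, scale=8.0))
# ... run closed-book QA prompts and measure whether answers are recovered ...
# handle.remove()
```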