Domain-Filtered Knowledge Graphs from Sparse Autoencoder Features

arXiv cs.AI / April 28, 2026

Key Points

  • The paper proposes a method to turn large sparse autoencoder (SAE) feature inventories into domain-specific, structured knowledge by filtering out weakly grounded and generic features.
  • It builds a strict concept universe for a target domain using contrastive activations followed by a multi-stage filtering pipeline to reduce concept mixing (a code sketch of this step follows the list).
  • From the filtered features, it creates two aligned graph views: a corpus-level co-occurrence graph at multiple granularities and a transcoder-based mechanism graph connecting source- and target-layer features via sparse latent pathways (both views are sketched after the abstract).
  • Automated edge labeling converts these graph structures into readable knowledge graphs, demonstrated with a biology textbook case study that recovers chapter/subchapter organization and reveals bridging concepts.
  • The approach reframes SAE interpretability from isolated feature lists into a global internal map of model knowledge that can support audits of reasoning faithfulness.
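
The contrastive-activation filtering step referenced above can be made concrete. Below is a minimal sketch, assuming feature activations can be pooled per text; the `sae_activations` placeholder, the ratio threshold, and the firing-rate floor are hypothetical stand-ins, not the paper's actual pipeline.

```python
import numpy as np

def sae_activations(texts: list[str]) -> np.ndarray:
    """Placeholder for a real model + SAE forward pass: returns a
    (num_texts, num_features) matrix of pooled feature activations."""
    rng = np.random.default_rng(0)
    return rng.random((len(texts), 4096))

def domain_filter(domain_texts, generic_texts,
                  ratio_thresh=3.0, fire_rate_thresh=0.01, eps=1e-6):
    dom = sae_activations(domain_texts)    # (n_dom, num_features)
    gen = sae_activations(generic_texts)   # (n_gen, num_features)
    # Contrastive score: how much more each feature fires on domain
    # text than on a generic reference corpus.
    contrast = dom.mean(axis=0) / (gen.mean(axis=0) + eps)
    # Grounding filter: drop features that rarely fire on domain text
    # at all, since they cannot be well-grounded domain concepts.
    fire_rate = (dom > 0.5).mean(axis=0)
    keep = (contrast > ratio_thresh) & (fire_rate > fire_rate_thresh)
    return np.flatnonzero(keep)            # indices of retained features
```

In practice, the multi-stage pipeline would chain several such filters rather than apply a single thresholding pass, with each stage removing a different class of generic or weakly grounded features.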

Abstract

Sparse autoencoders (SAEs) extract millions of interpretable features from a language model, but flat feature inventories are of limited use on their own. Domain concepts get mixed with generic and weakly grounded features, related ideas are scattered across many units, and the relationships between features remain implicit. We address this by first constructing a strict domain-specific concept universe from a large SAE inventory using contrastive activations and a multi-stage filtering process. Next, we build two aligned graph views on the filtered set: a co-occurrence graph for corpus-level conceptual structure, organized at multiple levels of granularity, and a transcoder-based mechanism graph that links source-layer and target-layer features through sparse latent pathways. Automated edge labeling then turns these graph views into readable knowledge graphs rather than unlabeled layouts. In a case study on a biology textbook, these graphs recover coherent chapter- and subchapter-level structure, reveal concepts that bridge neighboring topics, and distill noisy sentence-level activity involving thousands of features into compact, readable views of the model's local behavior. Taken together, these components reframe a flat SAE inventory as an internal knowledge graph, converting feature-level interpretability into a global map of model knowledge and enabling audits of reasoning faithfulness.
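
To make the two graph views concrete, here is a minimal sketch assuming (a) binarized sentence-level activations of the filtered features for the co-occurrence view, and (b) a linear transcoder weight matrix whose entries can be read as direct source-to-target feature connections. PMI edge weights, Louvain communities, and top-k weight pruning are plausible stand-ins; the abstract does not specify the actual weighting, clustering, or pathway-extraction methods.

```python
import itertools
import math
import networkx as nx
import numpy as np

def cooccurrence_graph(acts: np.ndarray, min_count: int = 5) -> nx.Graph:
    """Corpus-level view. acts: (num_sentences, num_kept_features)
    binary matrix of sentence-level feature activations."""
    n_sent = acts.shape[0]
    counts = acts.T @ acts                 # pairwise co-occurrence counts
    marginals = acts.sum(axis=0)           # per-feature occurrence counts
    g = nx.Graph()
    for i, j in itertools.combinations(range(acts.shape[1]), 2):
        c = counts[i, j]
        if c < min_count:
            continue
        # Pointwise mutual information as the edge weight.
        pmi = math.log((c * n_sent) / (marginals[i] * marginals[j]))
        if pmi > 0:
            g.add_edge(i, j, weight=pmi)
    return g

def multilevel_communities(g: nx.Graph) -> dict:
    """Increasing resolution yields finer partitions, roughly mirroring
    chapter-level versus subchapter-level conceptual structure."""
    return {res: nx.community.louvain_communities(
                g, weight="weight", resolution=res, seed=0)
            for res in (0.5, 1.0, 2.0)}

def mechanism_graph(w: np.ndarray, kept_src: np.ndarray,
                    kept_tgt: np.ndarray, top_k: int = 5) -> nx.DiGraph:
    """Mechanism view. w: (num_target_features, num_source_features)
    transcoder weight matrix; keeps the top_k strongest source
    connections for each retained target feature."""
    g = nx.DiGraph()
    for t in kept_tgt:
        row = np.abs(w[t, kept_src])
        for s_idx in np.argsort(row)[-top_k:]:
            g.add_edge(("src", int(kept_src[s_idx])), ("tgt", int(t)),
                       weight=float(row[s_idx]))
    return g
```

Under this reading, features whose edges span two co-occurrence communities would surface as the bridging concepts the case study highlights, and the automated labeling stage would attach short natural-language descriptions to the edges of both graphs to produce the readable knowledge graphs described above.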