Towards Intrinsic Interpretability of Large Language Models: A Survey of Design Principles and Architectures

arXiv cs.CL / April 20, 2026


Key Points

  • The survey argues that large language models’ “opaque” internal workings limit trustworthiness and safe deployment, motivating methods that improve interpretability beyond post-hoc explanations.
  • It focuses on intrinsic interpretability, aiming to embed transparency directly into model architectures and computations rather than approximating explanations after training.
  • The paper organizes recent intrinsic interpretability efforts into five design paradigms: functional transparency, concept alignment, representational decomposability, explicit modularization, and latent sparsity induction.
  • It highlights remaining open challenges and proposes directions for future research in this rapidly developing area.
  • A curated list of related work is provided via an accompanying GitHub repository for further exploration.

Abstract

While Large Language Models (LLMs) have achieved strong performance across many NLP tasks, their opaque internal mechanisms hinder trustworthiness and safe deployment. Existing surveys in explainable AI largely focus on post-hoc explanation methods that interpret trained models through external approximations. In contrast, intrinsic interpretability, which builds transparency directly into model architectures and computations, has recently emerged as a promising alternative. This paper presents a systematic review of recent advances in intrinsic interpretability for LLMs, categorizing existing approaches into five design paradigms: functional transparency, concept alignment, representational decomposability, explicit modularization, and latent sparsity induction. We further discuss open challenges and outline future research directions in this emerging field. A curated paper list is available at: https://github.com/PKU-PILLAR-Group/Survey-Intrinsic-Interpretability-of-LLMs.