
From Documents to Spans: Code-Centric Learning for LLM-based ICD Coding

arXiv cs.CL / 3/17/2026


Key Points

  • The paper proposes Code-Centric Learning for LLM-based ICD coding, shifting supervision from full clinical documents to short, scalable evidence spans to improve generalization to unseen ICD codes.
  • It introduces a mixed training strategy and code-centric data expansion that reduces training cost while enhancing accuracy and interpretability.
  • Span-level learning enables LLMs to perform document-level ICD coding efficiently, addressing the challenge of long clinical documents.
  • The method outperforms strong baselines under the same LLM backbone and allows small-scale LLMs to match the performance of larger proprietary models.
  • The approach preserves interpretability by attaching explicit evidence for assigned codes.
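The key points above describe supervision built from short evidence spans mixed with full-document examples. The sketch below illustrates one plausible way such a mixed training set could be assembled; the field names, prompt format, and 80/20 mixing ratio are assumptions for illustration, not the authors' actual setup.

```python
import random

# Hypothetical illustration of a mixed span-level / document-level
# training strategy for LLM-based ICD coding. Record schemas and the
# span_ratio are assumptions, not the paper's actual implementation.

def make_example(text, codes, evidence=None):
    """Build one instruction-tuning pair: clinical text -> ICD codes,
    optionally attaching the supporting evidence span as a rationale."""
    target = ", ".join(codes)
    if evidence:
        target += f" | evidence: {evidence}"
    return {"input": f"Assign ICD codes:\n{text}", "target": target}

def mixed_training_set(span_records, doc_records, span_ratio=0.8, n=1000, seed=0):
    """Sample a mixed set: mostly cheap, short span-level examples,
    plus some full-document examples so document-level coding stays
    in-distribution at inference time."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        if rng.random() < span_ratio and span_records:
            r = rng.choice(span_records)
            # Span-level: one short evidence span mapped to one code,
            # with the span itself reused as explicit evidence.
            out.append(make_example(r["span"], [r["code"]], evidence=r["span"]))
        else:
            r = rng.choice(doc_records)
            # Document-level: the full note mapped to all its codes.
            out.append(make_example(r["document"], r["codes"]))
    return out

# Toy records (synthetic, for illustration only).
spans = [{"span": "ST-elevation in leads V1-V4", "code": "I21.02"}]
docs = [{"document": "Discharge summary: acute anterior MI, stented.", "codes": ["I21.02"]}]
batch = mixed_training_set(spans, docs, n=10)
```

Because span-level examples are short, many more of them fit in a fixed token budget than full notes, which is the cost argument the key points make.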

Abstract

ICD coding is a critical yet challenging task in healthcare. Recently, LLM-based methods have demonstrated stronger generalization than discriminative methods in ICD coding. However, fine-tuning LLMs for ICD coding faces three major challenges. First, existing public ICD coding datasets provide limited coverage of the ICD code space, restricting a model's ability to generalize to unseen codes. Second, naive fine-tuning diminishes the interpretability of LLMs, as few public datasets contain explicit supporting evidence for assigned codes. Third, ICD coding typically involves long clinical documents, making fine-tuning LLMs computationally expensive. To address these issues, we propose Code-Centric Learning, a training framework that shifts supervision from full clinical documents to scalable, short evidence spans. The key idea of this framework is that span-level learning improves LLMs' ability to perform document-level ICD coding. Our proposed framework consists of a mixed training strategy and code-centric data expansion, which substantially reduces training cost, improves accuracy on unseen ICD codes, and preserves interpretability. Under the same LLM backbone, our method substantially outperforms strong baselines. Notably, our method enables small-scale LLMs to achieve performance comparable to much larger proprietary models, demonstrating its effectiveness and potential for fully automated ICD coding.