Concept Training for Human-Aligned Language Models

arXiv cs.CL / 4/1/2026


Key Points

  • The paper proposes replacing next-token prediction targets with a concept-based objective that predicts sets of semantically related tokens for a given prefix.
  • It argues this better matches how natural language continuations can be valid in multiple surface forms while preserving meaning.
  • Experiments show concept-supervised models improve alignment with human semantic similarity judgments across several lexical benchmarks.
  • The approach also reports lower perplexity on semantically meaningful words, alongside a modest increase in global token-level perplexity, indicating a tradeoff versus standard NTP.
  • Overall, the results suggest concept-level training can enhance semantic alignment without severely hurting language modeling performance.

Abstract

The next-token prediction (NTP) objective trains language models to predict a single continuation token at each step. In natural language, however, a prefix can be continued in many valid ways, and even similar meanings may differ in surface form. For example, the sentence "this website is safe to *browse*" could plausibly continue with words such as *browse*, *search*, *visit*, *surf*, or *navigate*. While standard NTP training treats these alternatives as mutually exclusive targets, we explore a framework that instead predicts concepts, approximated as sets of semantically related tokens. We show that models trained with concept supervision exhibit stronger alignment with human semantic similarity judgments on multiple lexical benchmarks. These gains are accompanied by lower perplexity on semantically meaningful words (definition in Section 3.1), and a modest increase in global token-level perplexity, reflecting a tradeoff between standard NTP optimization and concept-level supervision. Our results suggest that concept-level objectives can improve semantic alignment while maintaining competitive language modeling performance.
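To make the contrast concrete, here is a minimal sketch of how a concept-level target could differ from standard NTP. The paper's exact loss is not specified in this summary; the uniform weighting over the concept set, the toy vocabulary, and the function names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def ntp_loss(logits, target_id):
    """Standard NTP: cross-entropy against a one-hot target.

    All probability mass not on the single gold token is penalized,
    even if it falls on near-synonyms."""
    p = softmax(logits)
    return -np.log(p[target_id])

def concept_loss(logits, concept_ids, weights=None):
    """Concept supervision (sketch): cross-entropy against a soft target
    that spreads mass over a set of semantically related token ids.

    Uniform weights are an assumption; a real implementation might weight
    concept members by similarity or corpus frequency."""
    p = softmax(logits)
    if weights is None:
        weights = np.full(len(concept_ids), 1.0 / len(concept_ids))
    return -np.sum(weights * np.log(p[list(concept_ids)]))

# Toy 5-word vocabulary: ["browse", "visit", "surf", "cat", "run"]
logits = np.array([2.0, 1.5, 1.2, -1.0, 0.1])

# NTP accepts only "browse"; the concept objective also credits
# probability placed on "visit" and "surf".
print(ntp_loss(logits, target_id=0))
print(concept_loss(logits, concept_ids=[0, 1, 2]))
```

Note that when the concept set collapses to a single token, the concept loss reduces exactly to the NTP loss, which matches the abstract's framing of concepts as a generalization of single-token targets.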