To See the Unseen: On the Generalization Ability of Transformers in Symbolic Reasoning

arXiv cs.AI / 4/25/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper studies how decoder-only transformer models generalize on abstract symbolic reasoning tasks, especially propositional logic problems presented in context.
  • It explains prior failures with unseen variable names by showing a “representational collapse,” where the unembedding vectors for unseen tokens converge to nearly the same representation during training.
  • The collapse makes it hard for models to distinguish between different unseen variables, offering a mechanistic rationale for why heuristic methods like “active forgetting” can help by periodically resetting token (un)embeddings.
  • The authors propose a combined approach—small architectural changes to improve copying, more diverse training data, and strategies such as freezing or resetting (un)embeddings—that improves generalization to unseen tokens, supported by extensive controlled experiments.
  • They also find evidence of similar (un)embedding collapse in open-weight Gemma 3 family models, noting that correlated embeddings among reserved unused tokens can be a weak starting point for fine-tuning.
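
The collapse the paper describes is straightforward to test for: if the unembedding rows of unseen tokens converge toward a shared direction, their pairwise cosine similarities approach 1, while rows for well-separated tokens stay near-orthogonal in high dimension. A minimal NumPy sketch of that diagnostic (illustrative synthetic vectors, not the paper's code or actual model weights):

```python
import numpy as np

def pairwise_cosine(rows: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between row vectors."""
    normed = rows / np.linalg.norm(rows, axis=1, keepdims=True)
    return normed @ normed.T

def mean_offdiag(sim: np.ndarray) -> float:
    """Average similarity, excluding each row's similarity to itself."""
    n = sim.shape[0]
    return (sim.sum() - np.trace(sim)) / (n * (n - 1))

rng = np.random.default_rng(0)
d = 64

# Stand-ins for unembedding rows: "seen" tokens are independent random
# directions; "unseen" tokens are small perturbations of one shared
# vector, mimicking representational collapse.
seen = rng.normal(size=(5, d))
shared = rng.normal(size=d)
unseen = shared + 0.01 * rng.normal(size=(5, d))

# Collapsed rows score near 1; independent rows score near 0.
print(f"seen tokens:   {mean_offdiag(pairwise_cosine(seen)):.3f}")
print(f"unseen tokens: {mean_offdiag(pairwise_cosine(unseen)):.3f}")
```

The same statistic could in principle be computed over the reserved unused-token rows of an open-weight checkpoint, which is roughly the kind of evidence the paper reports for Gemma 3.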

Abstract

We investigate the ability of decoder-only transformer models to perform abstract symbolic reasoning; specifically solving propositional logic reasoning problems given in-context. Previous work demonstrated that models fail to generalize to problems involving variable names that were not observed during training, and it was shown that one reason behind this is the difficulty of copying (or generating) unseen tokens. We show both theoretically and empirically that a particular representational collapse also has a crucial role: the unembeddings (last-layer weights) of unseen tokens collapse to nearly the same vector during training. The collapse makes distinguishing multiple unseen variables difficult for the model (especially when the embedding and unembedding parameters are shared), and provides a mechanistic explanation for the effectiveness of existing heuristic interventions like "active forgetting", which periodically reset the token (un)embeddings. Based on these observations, we devise a combination of techniques, involving a small architecture change facilitating copying, data diversity, and freezing or resetting (un)embeddings, that achieves generalization to unseen tokens. We support our claims with extensive controlled experiments on propositional logic reasoning problems. Beyond synthetic experiments, we also observe evidence of (un)embedding collapse in the open-weight models in the Gemma 3 family, which includes 99 unused tokens reserved for downstream use. Empirically we find that the correlated embeddings of these tokens are a poor initialization for finetuning applications.
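
The "active forgetting" heuristic mentioned above amounts to periodically re-initializing the token (un)embedding matrix during training while the rest of the model continues to learn. The sketch below is a toy illustration of that schedule, not the paper's implementation: the reset interval, the initializer scale, and the placeholder update step are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB, DIM = 100, 32
RESET_EVERY = 1000  # reset interval: a hyperparameter, not from the paper

def init_embeddings() -> np.ndarray:
    """Fresh (un)embedding matrix; scale 0.02 is an assumed initializer."""
    return rng.normal(scale=0.02, size=(VOCAB, DIM))

def training_step(emb: np.ndarray) -> np.ndarray:
    """Placeholder for one optimizer step on the full model; here we
    just nudge the embeddings to simulate gradient updates."""
    return emb + 0.001 * rng.normal(size=emb.shape)

embedding = init_embeddings()
for step in range(1, 3001):
    embedding = training_step(embedding)
    if step % RESET_EVERY == 0:
        # Active forgetting: discard the learned (un)embeddings and
        # re-initialize them, pushing the upper layers to rely on
        # circuitry that does not depend on specific token identities.
        embedding = init_embeddings()
```

The freezing variant the abstract also mentions would instead keep the initial embedding matrix fixed and exclude it from optimizer updates entirely.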