Impact of Positional Encoding: Clean and Adversarial Rademacher Complexity for Transformers under In-Context Regression

arXiv stat.ML / 3/25/2026


Key Points

  • The paper analyzes how positional encoding in a single-layer Transformer affects generalization in in-context regression, explicitly treating positional encoding as a fully trainable module.
  • The authors show that positional encoding systematically increases the generalization gap between training and test performance.
  • In the adversarial setting, they derive adversarial Rademacher complexity bounds and find that adversarial attacks magnify the performance gap between models with and without positional encoding (the generic shape of such bounds is sketched after this list).
  • The study includes empirical simulations that validate the theoretical clean and adversarial generalization bounds.
  • Overall, the work proposes a framework for understanding both robustness and generalization behavior of in-context learning with positional encodings.
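
For orientation, here is the textbook form of a Rademacher-complexity generalization bound for a loss class bounded in [0, 1]. This is only the generic shape: the paper's actual function class, constants, and Transformer-specific quantities are not reproduced here.

```latex
% Generic empirical Rademacher complexity of a loss class \ell \circ \mathcal{F}
% over n training prompts z_1, \dots, z_n (textbook form, not the paper's
% exact statement):
\[
  \widehat{\mathfrak{R}}_n(\ell \circ \mathcal{F})
  = \mathbb{E}_{\sigma}\!\left[ \sup_{f \in \mathcal{F}}
      \frac{1}{n} \sum_{i=1}^{n} \sigma_i \, \ell_f(z_i) \right],
  \qquad \sigma_i \overset{\text{i.i.d.}}{\sim} \mathrm{Unif}\{-1, +1\}.
\]
% Standard consequence: with probability at least 1 - \delta, uniformly over
% f \in \mathcal{F},
\[
  \mathbb{E}\!\left[\ell_f\right]
  - \frac{1}{n} \sum_{i=1}^{n} \ell_f(z_i)
  \;\le\; 2\,\widehat{\mathfrak{R}}_n(\ell \circ \mathcal{F})
  + 3\sqrt{\frac{\log(2/\delta)}{2n}}.
\]
```

The adversarial analogue replaces each \(\ell_f(z_i)\) with \(\sup_{\|\delta\| \le \epsilon} \ell_f(z_i + \delta)\), giving the adversarial Rademacher complexity that the paper bounds for Transformers with and without PE.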

Abstract

Positional encoding (PE) is a core architectural component of Transformers, yet its impact on a Transformer's generalization and robustness remains unclear. In this work, we provide the first generalization analysis for a single-layer Transformer under in-context regression that explicitly accounts for a fully trainable PE module. Our analysis shows that PE systematically enlarges the generalization gap. Extending to the adversarial setting, we derive an adversarial Rademacher complexity generalization bound and find that the gap between models with and without PE is magnified under attack, demonstrating that PE amplifies model vulnerability. Our bounds are empirically validated by a simulation study. Together, these results establish a new framework for understanding clean and adversarial generalization in in-context learning (ICL) with PE.
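
To make the experimental setup concrete, below is a minimal, self-contained sketch of the kind of simulation the paper describes: a single-layer softmax-attention model trained on a fixed set of in-context linear-regression prompts, with and without a fully trainable positional encoding, comparing train/test gaps. All names, dimensions, and hyperparameters here are illustrative assumptions, not the authors' code, and a run is not guaranteed to reproduce the paper's exact findings.

```python
# Illustrative sketch (assumed setup, not the authors' code): in-context
# linear regression with a single-layer softmax-attention model, plus an
# optional fully trainable positional encoding (PE) added to the prompt.
import torch
import torch.nn as nn


def make_tasks(n_tasks, n_ctx, dim):
    """Each task draws a fresh weight vector w; labels are y = <w, x>."""
    w = torch.randn(n_tasks, dim, 1)
    xs = torch.randn(n_tasks, n_ctx, dim)               # context inputs
    ys = (xs @ w).squeeze(-1)                           # context labels
    xq = torch.randn(n_tasks, dim)                      # query input
    yq = (xq.unsqueeze(1) @ w).squeeze(-1).squeeze(-1)  # query label
    return xs, ys, xq, yq


class OneLayerICL(nn.Module):
    def __init__(self, dim, n_ctx, use_pe):
        super().__init__()
        d = dim + 1                                     # token = (x, y) stacked
        self.WQ = nn.Linear(d, d, bias=False)
        self.WK = nn.Linear(d, d, bias=False)
        self.WV = nn.Linear(d, d, bias=False)
        # Fully trainable PE: one free vector per prompt position.
        self.pe = nn.Parameter(torch.zeros(n_ctx + 1, d)) if use_pe else None

    def forward(self, xs, ys, xq):
        ctx = torch.cat([xs, ys.unsqueeze(-1)], dim=-1)            # (B, N, d)
        qry = torch.cat([xq, torch.zeros(xq.size(0), 1)], dim=-1)  # y slot = 0
        tok = torch.cat([ctx, qry.unsqueeze(1)], dim=1)            # (B, N+1, d)
        if self.pe is not None:
            tok = tok + self.pe
        q, k, v = self.WQ(tok), self.WK(tok), self.WV(tok)
        attn = torch.softmax(q @ k.transpose(-2, -1) / q.size(-1) ** 0.5, -1)
        out = attn @ v
        return out[:, -1, -1]           # prediction read off the query's y slot


def run(use_pe, n_train=256, n_ctx=16, dim=8, steps=2000):
    torch.manual_seed(0)
    train = make_tasks(n_train, n_ctx, dim)  # fixed finite training set
    model = OneLayerICL(dim, n_ctx, use_pe)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(steps):
        loss = ((model(*train[:3]) - train[3]) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        tr = ((model(*train[:3]) - train[3]) ** 2).mean().item()
        test = make_tasks(4096, n_ctx, dim)  # fresh tasks ~ population risk
        te = ((model(*test[:3]) - test[3]) ** 2).mean().item()
    print(f"PE={use_pe}: train MSE={tr:.4f}  test MSE={te:.4f}  gap={te - tr:.4f}")


run(use_pe=False)
run(use_pe=True)
```

The adversarial version of this evaluation would additionally perturb the inputs with a small norm-bounded attack (e.g., a gradient-sign step) before measuring test loss; the difference in train/test gaps between the two runs is the quantity the paper's clean and adversarial bounds control.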