Stochasticity in Tokenisation Improves Robustness

arXiv cs.CL / 4/20/2026


Key Points

  • The paper argues that deterministic canonical tokenisation makes LLMs brittle under perturbations and adversarial tokenisation attacks, while stochastic tokenisation can improve internal stability.
  • It systematically evaluates stochastic tokenisation across multiple learning regimes (pre-training, supervised fine-tuning, and in-context learning), datasets, and model architectures, focusing on robustness to both adversarial and random perturbations.
  • Training with uniformly sampled stochastic tokenisations during pre-training and fine-tuning improves robustness against random and adversarial perturbations.
  • When evaluating a canonically trained Llama-1b model on uniformly sampled non-canonical tokenisations, its accuracy drops by 29.8%, highlighting the sensitivity to tokenisation choices.
  • The authors report that using stochastic tokenisation during training preserves accuracy without increasing inference cost, suggesting a practical robustness gain.
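The "uniformly sampled stochastic tokenisations" in the points above can be made concrete with a toy sketch. The snippet below is illustrative only, not the paper's implementation: it counts all valid segmentations of a string under a small made-up vocabulary via dynamic programming, then draws one segmentation uniformly at random (a real setup would sample over a subword vocabulary such as BPE's).

```python
import random

def count_segmentations(text, vocab):
    """ways[i] = number of valid tokenisations of text[i:] under vocab."""
    n = len(text)
    ways = [0] * (n + 1)
    ways[n] = 1  # empty suffix has exactly one (empty) tokenisation
    for i in range(n - 1, -1, -1):
        for tok in vocab:
            if text.startswith(tok, i):
                ways[i] += ways[i + len(tok)]
    return ways

def sample_tokenisation(text, vocab, rng=random):
    """Draw one tokenisation uniformly at random over all valid segmentations.

    Weighting each candidate token by the number of completions of the
    remaining suffix makes the overall draw uniform, which a naive
    uniform-per-step choice would not be.
    """
    ways = count_segmentations(text, vocab)
    if ways[0] == 0:
        raise ValueError("text cannot be tokenised with this vocabulary")
    i, tokens = 0, []
    while i < len(text):
        candidates = [t for t in vocab if text.startswith(t, i)]
        weights = [ways[i + len(t)] for t in candidates]
        tok = rng.choices(candidates, weights=weights, k=1)[0]
        tokens.append(tok)
        i += len(tok)
    return tokens

# Hypothetical vocabulary for illustration.
vocab = {"u", "n", "d", "o", "un", "do", "undo"}
print(count_segmentations("undo", vocab)[0])  # → 5 valid tokenisations
print(sample_tokenisation("undo", vocab))     # e.g. ['un', 'do']
```

During stochastic-tokenisation training, each occurrence of a string can be fed to the model as a different sampled segmentation, so the model never overfits to the single canonical one.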

Abstract

The widespread adoption of large language models (LLMs) has increased concerns about their robustness. Vulnerability to perturbations of the input tokenisation indicates that models trained with a deterministic canonical tokenisation can be brittle to adversarial attacks. Recent studies suggest that stochastic tokenisation can deliver internal representations that are less sensitive to perturbations. In this paper, we analyse how stochastic tokenisations affect robustness to adversarial attacks and random perturbations. We systematically study this over a range of learning regimes (pre-training, supervised fine-tuning, and in-context learning), datasets, and model architectures. We show that pre-training and fine-tuning with uniformly sampled stochastic tokenisations improve robustness to random and adversarial perturbations. Evaluating on uniformly sampled non-canonical tokenisations reduces the accuracy of a canonically trained Llama-1b model by 29.8%. We find that training with stochastic tokenisation preserves accuracy without increasing inference cost.