The Compression Gap: Why Discrete Tokenization Limits Vision-Language-Action Model Scaling

arXiv cs.RO / 4/6/2026


Key Points

  • The paper argues that scaling Vision-Language-Action (VLA) models by improving the vision encoder works for vision-language tasks but can fail for visuomotor action pipelines when actions are represented as discrete tokens.
  • It introduces an information-theoretic “Compression Gap” principle: performance scaling is limited by the tightest information bottleneck in the visuomotor pipeline, not by uniformly increasing capacity.
  • When actions are continuous (e.g., Diffusion Policy), the vision encoder acts as the binding constraint, so encoder upgrades yield strong gains in manipulation performance.
  • When actions are discretized via a fixed-capacity codebook (e.g., OAT), the codebook becomes the binding constraint, so encoder improvements do not meaningfully propagate past that bottleneck.
  • Experiments on the LIBERO benchmark provide evidence via (1) an encoder-upgrade factorial study, (2) encoder quality gradients across four encoders, and (3) a codebook-size experiment showing that increasing codebook capacity partially restores sensitivity to encoder improvements.

Abstract

Scaling Vision-Language-Action (VLA) models by upgrading the vision encoder is expected to improve downstream manipulation performance--as it does in vision-language modeling. We show that this expectation fails when actions are represented as discrete tokens, and explain why through an information-theoretic principle we call the Compression Gap: in any visuomotor pipeline, scaling behavior is governed by the location of the tightest information bottleneck. When actions are continuous (e.g., Diffusion Policy), the vision encoder is the binding constraint, and upgrading it directly improves performance. When actions are discretized through a fixed-capacity codebook (e.g., OAT), the codebook becomes the binding constraint, and encoder improvements cannot propagate past it--regardless of how rich the upstream representation is. We validate this principle on the LIBERO benchmark with three lines of evidence: a factorial experiment showing that encoder upgrades improve Diffusion Policy by over 21 percentage points while OAT gains are substantially attenuated across model scales; an encoder quality gradient across four encoders confirming that Diffusion Policy tracks encoder quality monotonically while OAT remains flat; and a codebook size experiment demonstrating that relaxing codebook capacity partially recovers encoder sensitivity, providing causal evidence for the bottleneck hypothesis. Our findings reveal that scaling in Physical AI requires identifying where information bottlenecks lie in the pipeline, rather than uniformly increasing model or data size.
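The bottleneck principle can be illustrated with a toy scalar-quantization sketch (our own illustration, not the paper's code; the function names, codebook sizes, and uniform-codebook setup are all assumptions). An "encoder" of a given resolution feeds a discrete "action codebook"; reconstruction error is governed by whichever stage is coarser, so upgrading the encoder only helps when the codebook is not the binding constraint:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.uniform(-1.0, 1.0, 100_000)  # ground-truth continuous action

def quantize(x, levels):
    """Nearest-neighbor quantization onto a uniform codebook in [-1, 1]."""
    codebook = np.linspace(-1.0, 1.0, levels)
    idx = np.rint((x + 1.0) / 2.0 * (levels - 1)).astype(int)
    return codebook[np.clip(idx, 0, levels - 1)]

def pipeline_mse(encoder_levels, codebook_levels):
    """Encoder bottleneck -> action-token bottleneck -> reconstruction MSE."""
    features = quantize(signal, encoder_levels)    # "vision encoder"
    actions = quantize(features, codebook_levels)  # "discrete action tokens"
    return np.mean((actions - signal) ** 2)

def encoder_gain(codebook_levels):
    """Relative MSE improvement from upgrading the encoder (16 -> 64 levels)."""
    weak = pipeline_mse(16, codebook_levels)
    strong = pipeline_mse(64, codebook_levels)
    return (weak - strong) / weak

gain_rich_codebook = encoder_gain(4096)  # near-continuous action head
gain_tiny_codebook = encoder_gain(4)     # fixed small codebook
print(f"rich codebook: {gain_rich_codebook:.2f}, "
      f"tiny codebook: {gain_tiny_codebook:.2f}")
```

With the rich codebook, the encoder upgrade cuts error by a large fraction; with the 4-entry codebook, the same upgrade barely moves the error, because the codebook floor dominates. This mirrors the paper's codebook-size experiment, where enlarging the codebook partially restores sensitivity to encoder quality.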
