Understanding Behavior Cloning with Action Quantization

arXiv cs.LG / 3/24/2026


Key Points

  • The paper studies behavior cloning for continuous control when actions must be discretized via action quantization, a common but theoretically under-examined technique used with autoregressive models such as Transformers and vision-language-action models (VLAs).
  • It analyzes how quantization error compounds over time (along the prediction horizon) and how this interacts with statistical sample complexity in training from expert demonstrations.
  • The authors show that using behavior cloning with quantized actions and log-loss can achieve optimal sample complexity, matching known lower bounds, with only polynomial dependence on quantization error under stability and probabilistic smoothness assumptions.
  • The paper compares quantization schemes by characterizing which ones satisfy or violate the required conditions, and introduces a model-based augmentation that provably reduces error without relying on policy smoothness.
  • It also derives fundamental limits that jointly quantify the trade-offs between quantization error and statistical complexity.
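To make the core operation concrete, here is a minimal sketch (not from the paper) of uniform action quantization: a continuous action in a bounded range is mapped to one of `n_bins` discrete indices, and reconstructed at the bin center. All names and parameter choices here are illustrative assumptions.

```python
import numpy as np

def quantize_action(a, low, high, n_bins):
    """Map a continuous action in [low, high] to a discrete bin index."""
    a = np.clip(np.asarray(a, dtype=float), low, high)
    step = (high - low) / n_bins          # width of each uniform bin
    # floor-divide into bins; clamp the top edge into the last bin
    return np.minimum((a - low) // step, n_bins - 1).astype(int)

def dequantize_action(idx, low, high, n_bins):
    """Reconstruct the continuous action at the center of the given bin."""
    step = (high - low) / n_bins
    return low + (idx + 0.5) * step

# Round-trip example: reconstruction error is at most half a bin width,
# i.e. (high - low) / (2 * n_bins).
a = 0.37
idx = quantize_action(a, -1.0, 1.0, 256)
a_hat = dequantize_action(idx, -1.0, 1.0, 256)
```

An autoregressive policy then predicts the discrete index with a softmax over bins (trained with log-loss), and the reconstructed bin center is what gets executed on the system.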

Abstract

Behavior cloning is a fundamental paradigm in machine learning, enabling policy learning from expert demonstrations across robotics, autonomous driving, and generative models. Autoregressive models such as Transformers have proven remarkably effective, from large language models (LLMs) to vision-language-action systems (VLAs). However, applying autoregressive models to continuous control requires discretizing actions through quantization, a practice widely adopted yet poorly understood theoretically. This paper provides theoretical foundations for this practice. We analyze how quantization error propagates along the horizon and interacts with statistical sample complexity. We show that behavior cloning with quantized actions and log-loss achieves optimal sample complexity, matching existing lower bounds, and incurs only polynomial horizon dependence on quantization error, provided the dynamics are stable and the policy satisfies a probabilistic smoothness condition. We further characterize when different quantization schemes satisfy or violate these requirements, and propose a model-based augmentation that provably improves the error bound without requiring policy smoothness. Finally, we establish fundamental limits that jointly capture the effects of quantization error and statistical complexity.
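The abstract's central claim, that under stable dynamics quantization error propagates only polynomially along the horizon rather than compounding exponentially, can be illustrated with a toy experiment (my own construction, not the paper's model): a stable scalar linear system rolled out with an exact policy versus the same policy whose actions pass through a uniform quantizer. The per-step perturbation is at most half a bin width, and stability keeps the accumulated trajectory deviation bounded by a constant factor of that.

```python
import numpy as np

def rollout(a, k, x0, horizon, n_bins=None, low=-2.0, high=2.0):
    """Roll out x_{t+1} = a*x_t + u_t with policy u = -k*x.

    If n_bins is set, the action is passed through a uniform quantizer
    and reconstructed at the bin center before being applied.
    """
    x, traj = x0, []
    step = (high - low) / n_bins if n_bins else None
    for _ in range(horizon):
        u = -k * x
        if n_bins:
            idx = min(int((np.clip(u, low, high) - low) // step), n_bins - 1)
            u = low + (idx + 0.5) * step   # bin-center reconstruction
        x = a * x + u
        traj.append(x)
    return np.array(traj)

# Closed loop x_{t+1} = (a - k) x_t is stable: |a - k| = 0.4 < 1.
expert = rollout(0.9, 0.5, 1.0, 50)
quantized = rollout(0.9, 0.5, 1.0, 50, n_bins=64)
max_dev = np.max(np.abs(expert - quantized))
```

With bin width 4/64 the per-step action error is at most 0.03125, and the deviation recursion e_{t+1} = 0.4 e_t + q_t keeps max_dev below 0.03125 / (1 - 0.4) ≈ 0.052, uniformly in the horizon. This mirrors, in miniature, why the paper's stability assumption yields only polynomial (here, constant) horizon dependence.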