Dataset Distillation Efficiently Encodes Low-Dimensional Representations from Gradient-Based Learning of Non-Linear Tasks

arXiv stat.ML / 3/31/2026


Key Points

  • The paper provides a theoretical analysis of dataset distillation, a training-aware data compression technique whose progress has so far been largely empirical.
  • It studies gradient-based dataset distillation for two-layer neural networks and connects the mechanism to a non-linear “multi-index model” task structure.
  • The authors prove that the low-dimensional/intrinsic structure of the task is efficiently captured in the distilled synthetic data points.
  • They show the resulting distilled dataset can reproduce models with strong generalization using memory complexity $\tilde{\Theta}(r^2 d + L)$, linking the compression rate to the input dimension d and the intrinsic dimension r.
  • The work claims to be among the first theoretical studies to use a specific task structure and its intrinsic dimensionality while analyzing dataset distillation implemented purely via gradient-based procedures.
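To make the "multi-index model" task structure in the key points concrete, here is a minimal sketch (not from the paper; all names and dimensions are illustrative assumptions): the label depends on a d-dimensional input only through r ≪ d linear projections, which is the low-dimensional structure the distilled data is shown to capture.

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 100, 3                    # input dimension d, intrinsic dimension r << d
W = rng.standard_normal((r, d))  # the r relevant directions (unknown to the learner)

def g(z):
    # an arbitrary non-linear link function acting on the r projections
    return np.tanh(z[0]) + z[1] * z[2]

def multi_index_target(x):
    # y = g(Wx): the label depends on x only through the r-dimensional projection Wx
    return g(W @ x)

x = rng.standard_normal(d)
y = multi_index_target(x)
```

A defining property of such targets is invariance to perturbations orthogonal to the rows of W, which is why only an r-dimensional subspace of the task needs to be encoded.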

Abstract

Dataset distillation, a training-aware data compression technique, has recently attracted increasing attention as an effective tool for mitigating the costs of optimization and data storage. However, progress remains largely empirical. The mechanisms underlying the extraction of task-relevant information from the training process and the efficient encoding of such information into synthetic data points remain elusive. In this paper, we theoretically analyze practical algorithms of dataset distillation applied to the gradient-based training of two-layer neural networks with width $L$. By focusing on a non-linear task structure called the multi-index model, we prove that the low-dimensional structure of the problem is efficiently encoded into the resulting distilled data. This dataset reproduces a model with high generalization ability for a required memory complexity of $\tilde{\Theta}(r^2 d + L)$, where $d$ and $r$ are the input and intrinsic dimensions of the task. To the best of our knowledge, this is one of the first theoretical works that include a specific task structure, leverage its intrinsic dimensionality to quantify the compression rate, and study dataset distillation implemented solely via gradient-based algorithms.
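The $\tilde{\Theta}(r^2 d + L)$ bound can be put in perspective with a back-of-envelope comparison against storing a raw training set of n examples in R^d. The numbers below are illustrative assumptions, not values from the paper, and log factors are ignored.

```python
# Hypothetical sizes: input dim d, intrinsic dim r, network width L,
# and a raw training set of n examples in R^d.
d, r, L = 1000, 5, 200
n = 100_000

distilled = r**2 * d + L   # memory for the distilled data, up to log factors
full = n * d               # memory for storing the raw dataset

print(f"distilled ≈ {distilled:,}, full = {full:,}, ratio ≈ {full / distilled:.0f}x")
```

Because the bound scales with the intrinsic dimension r rather than the sample count n, the compression rate improves as the task's low-dimensional structure becomes more pronounced relative to the ambient dimension.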