MoireMix: A Formula-Based Data Augmentation for Improving Image Classification Robustness

arXiv cs.CV / March 27, 2026


Key Points

  • MoireMix is a formula-based, procedural data augmentation method that generates structured Moiré interference patterns on the fly to improve image classification robustness.
  • The approach uses a closed-form mathematical formulation to synthesize Moiré textures in memory with very low overhead (about 0.0026 seconds per image) and requires no external datasets or generative diffusion models.
  • During training, the generated patterns are mixed with input images and then discarded immediately, enabling a storage-free augmentation pipeline.
  • Experiments using Vision Transformers show consistent robustness gains across benchmarks such as ImageNet-C, ImageNet-R, and adversarial tests, outperforming standard baselines and other external-data-free augmentation methods.
  • The authors conclude that analytic interference patterns can serve as an efficient alternative to data-driven generative augmentation techniques.
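
The pipeline described above can be sketched in a few lines: two sinusoidal gratings with slightly different frequencies and orientations are superposed to produce a Moiré interference pattern, which is blended into the training image and then discarded. This is a minimal illustration of the general idea only; the function names, parameter values, and the specific grating formula are assumptions for this sketch, not the paper's actual formulation.

```python
import numpy as np

def moire_pattern(h, w, f1=0.08, theta1=0.0, rng=None):
    """Synthesize a Moiré-style pattern as the superposition of two
    sinusoidal gratings with slightly different frequency and angle.
    All parameter ranges here are illustrative, not from the paper."""
    rng = rng or np.random.default_rng()
    f2 = f1 * (1.0 + rng.uniform(0.01, 0.1))      # small frequency offset
    theta2 = theta1 + rng.uniform(0.02, 0.3)      # small angular offset
    y, x = np.mgrid[0:h, 0:w].astype(np.float64)
    g1 = np.cos(2 * np.pi * f1 * (x * np.cos(theta1) + y * np.sin(theta1)))
    g2 = np.cos(2 * np.pi * f2 * (x * np.cos(theta2) + y * np.sin(theta2)))
    pattern = (g1 + g2) / 2.0                     # interference of the gratings
    return (pattern + 1.0) / 2.0                  # rescale to [0, 1]

def moire_mix(image, lam=0.2, rng=None):
    """Blend a freshly generated pattern into a normalized HxWxC image.
    The pattern lives only in memory and is discarded after mixing."""
    h, w = image.shape[:2]
    pattern = moire_pattern(h, w, rng=rng)[..., None]  # broadcast over channels
    return (1.0 - lam) * image + lam * pattern
```

In a training loop this would be applied per batch; because the pattern is a closed-form function of pixel coordinates, nothing is ever read from or written to disk, which matches the storage-free property the authors emphasize.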

Abstract

Data augmentation is a key technique for improving the robustness of image classification models. However, many recent approaches rely on diffusion-based synthesis or complex feature mixing strategies, which introduce substantial computational overhead or require external datasets. In this work, we explore a different direction: procedural augmentation based on analytic interference patterns. Unlike conventional augmentation methods that rely on stochastic noise, feature mixing, or generative models, our approach exploits Moiré interference to generate structured perturbations spanning a wide range of spatial frequencies. We propose a lightweight augmentation method that procedurally generates Moiré textures on the fly using a closed-form mathematical formulation. The patterns are synthesized directly in memory with negligible computational cost (0.0026 seconds per image), mixed with training images during training, and immediately discarded, enabling a storage-free augmentation pipeline without external data. Extensive experiments with Vision Transformers demonstrate that the proposed method consistently improves robustness across multiple benchmarks, including ImageNet-C, ImageNet-R, and adversarial benchmarks, outperforming standard augmentation baselines and existing external-data-free augmentation approaches. These results suggest that analytic interference patterns provide a practical and efficient alternative to data-driven generative augmentation methods.