AI Navigate

FLUX: Data Worth Training On

arXiv cs.CL / 3/17/2026


Key Points

  • FLUX is a preprocessing pipeline designed to break the traditional trade-off between data quality and scale by maximizing token retention with strict quality controls for modern LLM training.
  • In experiments, a 3B-parameter model trained on 60B FLUX-curated tokens achieves 32.14% MMLU accuracy, surpassing both DCLM (31.98%) and FineWeb (29.88%).
  • Trained on only 39B tokens, a FLUX model matches the aggregate score of a DCLM-trained model, reducing training compute by 34.4%.
  • At the data level, FLUX extracts 50B usable tokens from CC-MAIN-2025-51, compared to 40B from DCLM (+25% retention); FLUX-Base yields 192B tokens, exceeding FineWeb's 170B while maintaining superior quality.
  • Overall, FLUX establishes a new state of the art in web-scale data preprocessing, showing that high retention, strong quality control, and computational efficiency can be achieved simultaneously, redefining scalable dataset construction for modern language models.

Abstract

Modern large language model training is no longer limited by data availability, but by the inability of existing preprocessing pipelines to simultaneously achieve massive scale and high data quality. Current approaches are forced to sacrifice one for the other: either aggressively filtering to improve quality at the cost of severe token loss, or retaining large volumes of data while introducing substantial noise. In this work, we introduce FLUX, a preprocessing pipeline specifically designed to break this long-standing trade-off by maximizing token retention while enforcing rigorous quality control. Models trained on FLUX-curated data consistently outperform prior methods. A 3B-parameter model trained on 60B tokens with FLUX achieves 32.14% MMLU accuracy, surpassing the previous state-of-the-art pipeline DCLM (31.98%) and significantly outperforming FineWeb (29.88%). FLUX achieves the same aggregate score as a model trained on DCLM data using only 39B tokens, resulting in a 34.4% reduction in training compute. At the data level, FLUX extracts 50B usable tokens from a single dump (CC-MAIN-2025-51), compared to 40B from DCLM (+25% retention). FLUX-Base yields 192B tokens, exceeding FineWeb's 170B while still maintaining superior quality. Overall, FLUX establishes a new state of the art in web-scale data preprocessing by demonstrating that high retention, strong quality control, and computational efficiency can be achieved simultaneously, redefining the limits of scalable dataset construction for modern language models.
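The abstract frames the core tension as a trade-off: raising a quality threshold discards more tokens, while lowering it admits more noise. As a loose illustration of that trade-off (not FLUX's actual method — the paper's filters are not described here, and the documents, quality scores, and function names below are invented for the sketch):

```python
# Toy illustration of the retention/quality trade-off in web-data filtering.
# Each document carries a hypothetical quality score in [0, 1] and a token count.
documents = [
    ("clean encyclopedia paragraph", 0.92, 180),
    ("boilerplate navigation links", 0.15, 12),
    ("decent forum answer with some noise", 0.60, 90),
    ("scraped listing page fragments", 0.35, 40),
]

def filter_corpus(docs, threshold):
    """Keep documents at or above `threshold`; report token retention
    and token-weighted mean quality of the retained set."""
    kept = [(text, q, n) for text, q, n in docs if q >= threshold]
    total_tokens = sum(n for _, _, n in docs)
    retained_tokens = sum(n for _, _, n in kept)
    mean_quality = (
        sum(q * n for _, q, n in kept) / retained_tokens if retained_tokens else 0.0
    )
    return retained_tokens / total_tokens, mean_quality

# Sweeping the threshold makes the trade-off visible: stricter filtering
# raises mean quality but shrinks the usable token budget.
for t in (0.2, 0.5, 0.8):
    retention, quality = filter_corpus(documents, t)
    print(f"threshold={t}: retention={retention:.0%}, mean quality={quality:.2f}")
```

In these terms, the paper's claim is that FLUX shifts the curve itself — higher retention at a given quality level — rather than just picking a different point on it.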