IndustryCode: A Benchmark for Industry Code Generation

arXiv cs.CL / 4/6/2026


Key Points

  • IndustryCode is introduced as a new benchmark to evaluate LLM code generation and comprehension across multiple industrial domains and programming languages, addressing the limitations of existing single-domain benchmarks.
  • The benchmark includes 579 sub-problems drawn from 125 primary industrial challenges, with detailed problem statements and test cases spanning finance, automation, aerospace, and remote sensing.
  • It supports diverse languages including MATLAB, Python, C++, and Stata to better reflect real-world coding requirements in complex industrial scenarios.
  • In reported evaluations, Claude 4.5 Opus achieves 68.1% accuracy on sub-problems and 42.5% on main problems, indicating current headroom and measurable performance across the suite.
  • The authors plan to release the IndustryCode dataset and automated evaluation code publicly upon acceptance.

Abstract

Code generation and comprehension by Large Language Models (LLMs) have emerged as core drivers of industrial intelligence and decision optimization, finding widespread application in fields such as finance, automation, and aerospace. Although recent advancements have demonstrated the remarkable potential of LLMs in general code generation, existing benchmarks are mainly confined to single domains and languages. Consequently, they fail to effectively evaluate the generalization capabilities required for real-world industrial applications or to reflect the coding proficiency demanded by complex industrial scenarios. To bridge this gap, we introduce IndustryCode, the first comprehensive benchmark designed to span multiple industrial domains and programming languages. IndustryCode comprises 579 sub-problems derived from 125 primary industrial challenges, accompanied by rigorous problem descriptions and test cases. It covers a wide range of fields, including finance, automation, aerospace, and remote sensing, and incorporates diverse programming languages such as MATLAB, Python, C++, and Stata. In our evaluation, the top-performing model, Claude 4.5 Opus, achieved an overall accuracy of 68.1% on sub-problems and 42.5% on main problems. The benchmark dataset and automated evaluation code will be made publicly available upon acceptance.
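The abstract reports two accuracy figures: one over the 579 sub-problems and one over the 125 main problems. The sketch below illustrates one plausible way such scores could be aggregated; the `score` function and the rule that a main problem counts as solved only when all of its sub-problems pass are assumptions for illustration, not details taken from the paper.

```python
from collections import defaultdict

def score(results):
    """Compute sub-problem and main-problem accuracy.

    `results` maps (main_id, sub_id) -> bool, where the bool records
    whether that sub-problem passed its test cases.

    Assumption (not stated in the abstract): a main problem counts as
    solved only when every one of its sub-problems passes.
    """
    sub_acc = sum(results.values()) / len(results)

    # Group sub-problem outcomes under their parent main problem.
    by_main = defaultdict(list)
    for (main_id, _sub_id), passed in results.items():
        by_main[main_id].append(passed)

    main_acc = sum(all(v) for v in by_main.values()) / len(by_main)
    return sub_acc, main_acc

# Toy example: 2 main problems, 4 sub-problems.
results = {
    ("P1", "a"): True,  ("P1", "b"): True,   # P1 fully solved
    ("P2", "a"): True,  ("P2", "b"): False,  # P2 partially solved
}
sub_acc, main_acc = score(results)
# sub_acc = 0.75 (3 of 4 sub-problems), main_acc = 0.5 (1 of 2 main problems)
```

Under this all-or-nothing aggregation, main-problem accuracy is necessarily no higher than sub-problem accuracy, which is consistent with the reported gap (68.1% vs. 42.5%).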