VoxelCodeBench: Benchmarking 3D World Modeling Through Code Generation

arXiv cs.LG / 4/6/2026

📰 News · Developer Stack & Infrastructure · Tools & Practical Usage · Models & Research

Key Points

  • The paper introduces VoxelCode, a platform that evaluates code generation models for 3D spatial reasoning by executing generated code in Unreal Engine via an API-driven pipeline.
  • It presents VoxelCodeBench, a benchmark covering voxel manipulation tasks across symbolic interpretation, geometric construction, and artistic composition to test different reasoning capabilities.
  • The evaluation of leading code generation models finds that generating executable code is substantially easier than generating spatially correct outputs, with geometric construction and multi-object composition being especially difficult.
  • The platform combines automated metrics with human assessment and supports unified evaluation, aiming to better reflect real-world correctness beyond superficial text-match measures.
  • The authors open-source both the platform and benchmark to enable the research community to extend infrastructure for future 3D code generation benchmarks and spatial reasoning studies.

Abstract

Evaluating code generation models for 3D spatial reasoning requires executing generated code in realistic environments and assessing outputs beyond surface-level correctness. We introduce VoxelCode, a platform for analyzing code generation capabilities for 3D understanding and environment creation. Our platform integrates natural language task specification, API-driven code execution in Unreal Engine, and a unified evaluation pipeline supporting both automated metrics and human assessment. To demonstrate its utility, we construct VoxelCodeBench, a benchmark of voxel manipulation tasks spanning three reasoning dimensions: symbolic interpretation, geometric construction, and artistic composition. Evaluating leading code generation models, we find that producing executable code is far easier than producing spatially correct outputs, with geometric construction and multi-object composition proving particularly challenging. By open-sourcing our platform and benchmark, we provide the community with extensible infrastructure for developing new 3D code generation benchmarks and probing spatial reasoning in future models.
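The core distinction the benchmark draws, between code that merely *executes* and code that produces the *spatially correct* voxel output, can be sketched in a few lines. The snippet below is an illustrative stand-in, not the paper's actual pipeline: the real platform executes code inside Unreal Engine via an API, whereas here a hypothetical `evaluate` helper runs a generated program against an in-memory occupancy grid and scores it with voxel intersection-over-union (one plausible automated metric; the paper's exact metrics are not specified in this summary).

```python
import numpy as np

def voxel_iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection-over-union of two boolean voxel occupancy grids."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both grids empty: treat as a perfect match
    return float(np.logical_and(pred, target).sum() / union)

def evaluate(program: str, target: np.ndarray) -> dict:
    """Run a generated program that is expected to fill a boolean
    `grid` of the same shape as `target` (exec() is a stand-in for
    engine-side execution in the real platform)."""
    env = {"np": np, "grid": np.zeros(target.shape, dtype=bool)}
    try:
        exec(program, env)
    except Exception:
        # Code that fails to run scores zero on spatial correctness.
        return {"executable": False, "iou": 0.0}
    return {"executable": True, "iou": voxel_iou(env["grid"], target)}

# Target: a 2x2x2 cube in the corner of a 4x4x4 grid.
target = np.zeros((4, 4, 4), dtype=bool)
target[:2, :2, :2] = True

# Executable and spatially correct.
print(evaluate("grid[:2, :2, :2] = True", target))
# Executable but spatially wrong: the failure mode the paper highlights.
print(evaluate("grid[2:, 2:, 2:] = True", target))
```

The second call is the interesting case for this benchmark: the program runs without error yet scores an IoU of zero, which is exactly why execution success alone is a weak proxy for 3D spatial reasoning.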