The Path Not Taken: Duality in Reasoning about Program Execution

arXiv cs.LG / 4/24/2026


Key Points

  • The paper argues that adopting LLMs for coding requires models to causally understand actual program execution, rather than merely matching surface patterns or input-to-output correlations.
  • It critiques existing benchmarks because they often measure properties tied to specific inputs (like code coverage or outputs), giving a limited and potentially contaminated view of dynamic reasoning.
  • The authors propose a “duality” framework for execution understanding using two complementary tasks: predicting a program’s observed behavior on an input and inferring how the input should be mutated to reach a target behavioral objective.
  • They implement the idea in DexBench, a new benchmark with 445 paired instances, and test 13 LLMs, finding that dual-path reasoning is a robust and discriminative proxy for dynamic code understanding.
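The duality can be made concrete with a toy example. The sketch below is hypothetical (not drawn from DexBench): `classify` stands in for the program under test, the first assertion illustrates task (i), forward prediction of observed behavior, and `mutate_to_target` illustrates task (ii), mutating an input until execution reaches a target behavior. Here a brute-force search stands in for the model's causal reasoning about execution flow.

```python
# Hypothetical illustration of the two dual tasks (not from DexBench).

def classify(n: int) -> str:
    """Program under test: its behavior depends on the execution path taken."""
    if n % 2 == 0:
        return "even"
    if n > 100:
        return "large-odd"
    return "small-odd"

# Task (i): forward reasoning -- predict the observed behavior for a given input.
assert classify(42) == "even"

# Task (ii): inverse reasoning -- mutate the input so that execution exhibits
# a target behavior (here: reaching the "large-odd" branch).
def mutate_to_target(n: int, target: str) -> int:
    # Brute-force search over nearby inputs; a model solving the dual task
    # must instead reason causally about which mutation flips the branch.
    for delta in range(1000):
        for candidate in (n + delta, n - delta):
            if classify(candidate) == target:
                return candidate
    raise ValueError("no mutation found within search radius")

assert classify(mutate_to_target(42, "large-odd")) == "large-odd"
```

Pairing the two directions is the point: a model can often pattern-match the forward task alone, but inverting execution to hit a behavioral objective requires tracking how inputs flow through control decisions.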

Abstract

Large language models (LLMs) have shown remarkable capabilities across diverse coding tasks. However, their adoption requires a true understanding of program execution rather than relying on surface-level patterns. Existing benchmarks primarily focus on predicting program properties tied to specific inputs (e.g., code coverage, program outputs). As a result, they provide a narrow view of dynamic code reasoning and are prone to data contamination. We argue that understanding program execution requires evaluating its inherent duality through two complementary reasoning tasks: (i) predicting a program's observed behavior for a given input, and (ii) inferring how the input must be mutated toward a specific behavioral objective. Both tasks jointly probe a model's causal understanding of execution flow. We instantiate this duality in DexBench, a benchmark comprising 445 paired instances, and evaluate 13 LLMs. Our results demonstrate that dual-path reasoning provides a robust and discriminative proxy for dynamic code understanding.