Comparing Developer and LLM Biases in Code Evaluation

arXiv cs.CL / March 26, 2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper argues that LLMs used as code-evaluation judges must be tested in realistic interactive settings that include partial context and ambiguous user intent.
  • It introduces TRACE, a framework that extracts rubric items and evaluates how well LLM judges predict human preferences across chat-based coding, IDE autocompletion, and instructed code editing.
  • Across 13 models, the best LLM judges still underperform human annotators by 12–23% in aligning with developer preferences.
  • TRACE identifies 35 significant sources of misalignment, with most tied to established software engineering code-quality criteria, revealing systematic bias patterns.
  • One example finding: in chat-based coding, LLM judges favor longer code explanations while human developers prefer shorter ones; significant misalignment appears across most established code-quality dimensions.

Abstract

As LLMs are increasingly used as judges in code applications, they should be evaluated in realistic interactive settings that capture partial context and ambiguous intent. We present TRACE (Tool for Rubric Analysis in Code Evaluation), a framework that evaluates LLM judges' ability to predict human preferences and automatically extracts rubric items to reveal systematic biases in how humans and models weigh each item. Across three modalities -- chat-based programming, IDE autocompletion, and instructed code editing -- we use TRACE to measure how well LLM judges align with developer preferences. Among 13 different models, the best judges underperform human annotators by 12-23%. TRACE identifies 35 significant sources of misalignment between humans and judges across interaction modalities, the majority of which correspond to existing software engineering code quality criteria. For example, in chat-based coding, judges are biased towards longer code explanations while humans prefer shorter ones. We find significant misalignment on the majority of existing code quality dimensions, showing alignment gaps between LLM judges and human preference in realistic coding applications.
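The evaluation the abstract describes reduces to two measurable quantities: how often an LLM judge's pairwise preference matches the human's, and how differently judge and human weigh a given rubric item. The sketch below illustrates both with toy data; it is not the paper's TRACE implementation, and the `Comparison` structure and the explanation-length rubric item are illustrative assumptions based on the example finding above.

```python
# Hypothetical sketch of judge-human alignment measurement, NOT the
# actual TRACE code. Each Comparison records which of two candidate
# responses ("A" or "B") the human and the LLM judge preferred, plus
# one rubric attribute: which response had the longer explanation.

from dataclasses import dataclass

@dataclass
class Comparison:
    human_pref: str          # "A" or "B": the developer's choice
    judge_pref: str          # "A" or "B": the LLM judge's choice
    longer_explanation: str  # "A" or "B": response with the longer explanation

def agreement_rate(comps: list[Comparison]) -> float:
    """Fraction of comparisons where the judge matches the human."""
    return sum(c.judge_pref == c.human_pref for c in comps) / len(comps)

def length_bias(comps: list[Comparison]) -> float:
    """How much more often the judge picks the longer explanation than
    the human does; positive values indicate a judge bias toward length."""
    judge = sum(c.judge_pref == c.longer_explanation for c in comps) / len(comps)
    human = sum(c.human_pref == c.longer_explanation for c in comps) / len(comps)
    return judge - human

# Toy data: the judge tends to pick the longer explanation (B),
# while the human mostly prefers the shorter one (A).
comps = [
    Comparison("A", "B", "B"),
    Comparison("B", "B", "B"),
    Comparison("A", "A", "B"),
    Comparison("A", "B", "B"),
]

print(agreement_rate(comps))  # 0.5 (judge matches human on 2 of 4 pairs)
print(length_bias(comps))     # 0.5 (judge picks the longer one 0.75 vs 0.25)
```

In this framing, a "significant source of misalignment" corresponds to a rubric item whose bias statistic differs reliably from zero across many comparisons; the paper reports 35 such items.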