Stanford: Self-Improving Meta-Harness

Reddit r/LocalLLaMA / 4/11/2026


Key Points

  • Stanford researchers propose Meta-Harness, an outer-loop system that automatically searches for and improves “harness” code used to decide what context to store, retrieve, and present to an LLM.
  • The system uses an agentic proposer with access to source code and filesystem-based scoring/execution traces of prior candidates, effectively enabling automated harness engineering rather than hand-designed prompts/context managers.
  • Reported results show Meta-Harness improves online text classification by 7.7 points while using 4× fewer context tokens versus a state-of-the-art context management approach.
  • For retrieval-augmented math reasoning, a single discovered harness boosts accuracy by 4.7 points on average on 200 IMO-level problems across five held-out models.
  • For agentic coding, discovered harnesses outperform the best hand-engineered baselines on TerminalBench-2, suggesting practical performance gains for local and deployed LLM systems.

We had prompt engineering, then context engineering, then agents and harnesses. Now we have Meta-Harness: an outer loop that auto-corrects a harness's agentic mistakes, improving performance while using less context:
https://arxiv.org/abs/2603.28052

"The performance of large language model (LLM) systems depends not only on model weights, but also on their harness: the code that determines what information to store, retrieve, and present to the model. Yet harnesses are still designed largely by hand, and existing text optimizers are poorly matched to this setting because they compress feedback too aggressively. We introduce Meta-Harness, an outer-loop system that searches over harness code for LLM applications. It uses an agentic proposer that accesses the source code, scores, and execution traces of all prior candidates through a filesystem. On online text classification, Meta-Harness improves over a state-of-the-art context management system by 7.7 points while using 4x fewer context tokens. On retrieval-augmented math reasoning, a single discovered harness improves accuracy on 200 IMO-level problems by 4.7 points on average across five held-out models. On agentic coding, discovered harnesses surpass the best hand-engineered baselines on TerminalBench-2. Together, these results show that richer access to prior experience can enable automated harness engineering."
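For intuition, the outer loop the abstract describes can be sketched roughly as follows. This is not the paper's code: the function names, the toy "harness" (a JSON blob with a single retrieval knob), and the directory layout are all illustrative assumptions; in the real system the proposer is an LLM agent that browses the candidate filesystem, and evaluation runs the harness on a real benchmark.

```python
# Hypothetical sketch of a Meta-Harness-style outer loop, based only on the
# abstract: propose harness code, evaluate it, and persist the source, score,
# and execution trace of every candidate to a filesystem that future
# proposals can inspect. All names here are assumptions for illustration.
import json
import pathlib
import random

WORKDIR = pathlib.Path("harness_search")

def propose_harness(history: list[dict]) -> str:
    """Stand-in for the agentic proposer. The real proposer is an LLM agent
    reading prior candidates' files; here we just hill-climb a numeric knob
    embedded in the toy harness source."""
    best = max(history, key=lambda c: c["score"], default=None)
    k = (best["params"]["top_k"] if best else 4) + random.choice([-1, 1])
    return json.dumps({"top_k": max(1, k)})  # toy "harness source code"

def evaluate_harness(source: str) -> tuple[float, str]:
    """Stand-in for running the harness on a benchmark: returns a scalar
    score plus an execution trace. The toy objective prefers top_k == 7."""
    params = json.loads(source)
    score = 1.0 - abs(params["top_k"] - 7) / 10
    trace = f"retrieved top_k={params['top_k']} chunks; score={score:.2f}"
    return score, trace

def search(n_rounds: int = 20) -> dict:
    WORKDIR.mkdir(exist_ok=True)
    history: list[dict] = []
    for i in range(n_rounds):
        source = propose_harness(history)
        score, trace = evaluate_harness(source)
        # Persist source, score, and trace so later proposals can inspect
        # the full record of prior candidates (the "filesystem memory").
        d = WORKDIR / f"candidate_{i:03d}"
        d.mkdir(exist_ok=True)
        (d / "harness.json").write_text(source)
        (d / "score.txt").write_text(str(score))
        (d / "trace.log").write_text(trace)
        history.append({"id": i, "params": json.loads(source), "score": score})
    return max(history, key=lambda c: c["score"])

if __name__ == "__main__":
    print("best candidate:", search())
```

The point of the sketch is the feedback channel: instead of compressing results into a short text summary for the next prompt, every candidate's full source, score, and trace stays on disk, which is the "richer access to prior experience" the abstract credits for the gains.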

Looks like an easy performance gain for local LLMs, since you can run it after your main tasks are done to improve on mistakes, e.g. via opencode or the paper's artifact here: https://github.com/stanford-iris-lab/meta-harness-tbench2-artifact

submitted by /u/GodComplecs