HARBOR: Automated Harness Optimization
arXiv cs.LG / 4/24/2026
Key Points
- The paper argues that long-horizon language-model agents’ performance and complexity are driven more by the “harness” (wrappers like context compaction, tool caching, semantic memory, and execution sandbox glue) than by the underlying model itself.
- It presents automated harness optimization as a constrained, noisy Bayesian optimization problem over a mixed, heterogeneous configuration space, using cold-start-corrected rewards and a posterior chance-constrained safety check (a minimal sketch of this formulation follows this list).
- The authors introduce HARBOR, a reference solver that combines a block-additive SAAS surrogate, multi-fidelity cost-aware acquisition, and TuRBO-style trust regions (see the second sketch below).
- Experiments run HARBOR end-to-end on a flag-gated harness for a production coding agent and compare it against a controlled, multi-round manual-tuning study on a fixed task suite.
- The method is designed to be task-class agnostic, applying to other agent harnesses as long as the flag space is bounded and a reproducible task suite is available.
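To make the formulation concrete, here is a minimal sketch of the ingredients the second key point names: a mixed, heterogeneous flag space, a cold-start-corrected reward measured over a task suite, and a posterior chance-constrained safety check. Everything here (the flag names, the warm-up count, the Gaussian posterior over the safety metric) is an illustrative assumption, not the paper's actual API or harness.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical mixed flag space: booleans, categoricals, bounded numeric knobs.
# The summary does not list the real harness flags; these are placeholders.
FLAG_SPACE = {
    "context_compaction": [False, True],               # boolean
    "tool_cache_policy":  ["off", "lru", "semantic"],  # categorical
    "memory_top_k":       (1, 32),                     # integer range
    "compaction_ratio":   (0.1, 0.9),                  # continuous range
}

N_WARMUP = 2  # episodes discarded to correct for cold-start bias (assumed value)

def cold_start_corrected_reward(episode_rewards: np.ndarray) -> float:
    """Average reward after dropping warm-up episodes, during which caches
    and semantic memory are still empty and scores are systematically low."""
    return float(np.mean(episode_rewards[N_WARMUP:]))

def passes_chance_constraint(post_mean: float, post_std: float,
                             threshold: float, delta: float = 0.05) -> bool:
    """Posterior chance constraint: accept a config only if
    P(safety_metric >= threshold | data) >= 1 - delta,
    here under an assumed Gaussian posterior over the safety metric."""
    prob_safe = norm.sf(threshold, loc=post_mean, scale=post_std)
    return prob_safe >= 1.0 - delta

if __name__ == "__main__":
    rewards = np.array([0.2, 0.4, 0.81, 0.79, 0.84])   # per-episode rewards
    print(cold_start_corrected_reward(rewards))          # warm-up dropped: ~0.81
    print(passes_chance_constraint(0.92, 0.03, threshold=0.85))  # True
```

The point of the chance constraint is that a config is kept only when the posterior probability of meeting the safety threshold is at least 1 − δ, so point estimates that merely look safe are rejected whenever their posterior uncertainty is wide.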
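The solver in the third key point combines three pieces. The sketch below shows two of them, cost-aware acquisition and TuRBO-style trust regions, in a self-contained loop; the block-additive SAAS surrogate is swapped for a plain scikit-learn Matérn GP, since a fully Bayesian sparsity prior does not fit in a short example. `evaluate_config`, the cost model, and the [0, 1]^4 flag encoding are all placeholders, not the paper's components.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
DIM = 4  # encoded flag vector in [0, 1]^DIM (placeholder encoding)

def evaluate_config(x: np.ndarray) -> tuple[float, float]:
    """Placeholder for a noisy harness evaluation: returns
    (cold-start-corrected reward, measured evaluation cost)."""
    reward = -np.sum((x - 0.6) ** 2) + 0.05 * rng.standard_normal()
    cost = 1.0 + x[0]                 # pretend flag 0 makes episodes pricier
    return reward, cost

# TuRBO-style trust-region state.
length, L_MIN, L_MAX = 0.4, 0.05, 1.0
successes = failures = 0

X = rng.uniform(size=(5, DIM))        # initial design
y, c = map(np.array, zip(*(evaluate_config(x) for x in X)))

for _ in range(30):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)

    center = X[int(np.argmax(y))]     # incumbent configuration

    # Candidates restricted to the current trust region around the incumbent.
    lo = np.clip(center - length / 2, 0, 1)
    hi = np.clip(center + length / 2, 0, 1)
    cand = rng.uniform(lo, hi, size=(256, DIM))

    # Cost-aware acquisition: expected improvement per unit predicted cost.
    mu, sd = gp.predict(cand, return_std=True)
    imp = mu - y.max()
    z = imp / np.maximum(sd, 1e-9)
    ei = imp * norm.cdf(z) + sd * norm.pdf(z)
    cost_hat = 1.0 + cand[:, 0]       # same placeholder cost model as above
    x_next = cand[int(np.argmax(ei / cost_hat))]

    r, cost = evaluate_config(x_next)
    X, y, c = np.vstack([X, x_next]), np.append(y, r), np.append(c, cost)

    # Expand on consecutive successes, shrink on consecutive failures.
    if r > y[:-1].max():
        successes, failures = successes + 1, 0
    else:
        successes, failures = 0, failures + 1
    if successes >= 3:
        length, successes = min(2 * length, L_MAX), 0
    if failures >= 3:
        length, failures = max(length / 2, L_MIN), 0

print("best reward:", y.max(), "at", X[np.argmax(y)])
print("total evaluation cost:", c.sum())
```

The trust region is the part that matters at this scale: candidates are drawn only near the incumbent configuration, and the region grows after consecutive improvements and shrinks after consecutive failures, which keeps a noisy, expensive objective from wasting evaluations on far corners of the flag space.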
Related Articles
- The 67th Attempt: When Your "Knowledge Management" System Becomes a Self-Fulfilling Prophecy of Excellence (Dev.to)
- Context Engineering for Developers: A Practical Guide (2026) (Dev.to)
- GPT-5.5 is here. So is DeepSeek V4. And honestly, I am tired of version numbers. (Dev.to)
- I Built an AI Image Workflow with GPT Image 2.0 (+ Fixing Its Biggest Flaw) (Dev.to)
- Max-and-Omnis/Nemotron-3-Super-64B-A12B-Math-REAP-GGUF (Reddit r/LocalLLaMA)