WAter: A Workload-Adaptive Knob Tuning System based on Workload Compression
arXiv cs.AI / 4/1/2026
Key Points
- The paper addresses the high cost of ML-based DBMS parameter tuning, emphasizing that runtime efficiency (reducing evaluation time per configuration) has been less explored than sample efficiency.
- It introduces WAter, a workload-adaptive tuning system that splits tuning into time slices and evaluates only small subsets of queries rather than running full workloads for every candidate configuration.
- WAter uses runtime profiling to select more representative query subsets in later slices, aiming to better approximate full-workload performance while lowering evaluation cost.
- At each time slice, WAter measures the most promising configurations on the original workload, ensuring final choices reflect true performance.
- Experiments report up to 73.5% less tuning time to find the best-performing configurations and up to 16.2% higher performance compared with strong baseline tuners.
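The mechanism in the points above can be sketched as a toy simulation. Everything here is an assumption for illustration: the paper tunes real DBMS knobs against measured query runtimes, whereas this sketch models a workload as fixed per-query costs, configurations as uniform runtime multipliers, and "runtime profiling" as picking evenly spaced queries across the observed cost spectrum. Only the control flow — cheap subset evaluation per time slice, full-workload validation of the most promising candidate — mirrors the described design.

```python
# Hypothetical toy model: per-query base costs and per-config multipliers.
WORKLOAD = {"q0": 1.0, "q1": 2.0, "q2": 4.0, "q3": 8.0, "q4": 16.0, "q5": 32.0}
CONFIGS = {"default": 1.0, "tuned_a": 0.7, "tuned_b": 0.5}

def run(config, queries):
    """Simulated evaluation: total runtime of the given queries under a config."""
    return sum(WORKLOAD[q] * CONFIGS[config] for q in queries)

def representative_subset(profiles, k):
    """Stand-in for runtime profiling: rank queries by observed runtime and
    take evenly spaced representatives across the cost spectrum."""
    ranked = sorted(profiles, key=profiles.get)
    step = max(1, len(ranked) // k)
    return ranked[::step][:k]

def tune(slices=3, subset_size=2):
    """Each time slice evaluates all candidates on a small query subset,
    then validates only the most promising one on the full workload."""
    profiles = {q: run("default", [q]) for q in WORKLOAD}  # initial profiling pass
    best = None
    for _ in range(slices):
        subset = representative_subset(profiles, subset_size)
        scores = {c: run(c, subset) for c in CONFIGS}       # cheap subset evaluation
        candidate = min(scores, key=scores.get)
        full_cost = run(candidate, list(WORKLOAD))          # full-workload validation
        if best is None or full_cost < best[1]:
            best = (candidate, full_cost)
        # Refresh profiles under the current best candidate for the next slice.
        profiles = {q: run(candidate, [q]) for q in WORKLOAD}
    return best
```

Under these toy costs `tune()` settles on `tuned_b`, since each slice's subset evaluation already ranks configurations correctly; the full-workload runs serve only to confirm the choice, which is the cost-saving the paper attributes to evaluating small subsets per candidate.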
Related Articles

Knowledge Governance For The Agentic Economy
Dev.to

AI server farms heat up the neighborhood for miles around, paper finds
The Register

Does the Claude “leak” actually change anything in practice?
Reddit r/LocalLLaMA

87.4% of My Agent's Decisions Run on a 0.8B Model
Dev.to

"Paperclip": A Free Tool That Turns AI Agents into a Software Team
Dev.to