HowToBench: Holistic Evaluation for LLM's Capability in Human-level Writing using Tree of Writing

arXiv cs.CL / 4/22/2026


Key Points

  • The paper introduces Tree-of-Writing (ToW), a holistic evaluation approach that addresses inconsistencies in LLM-as-a-judge methods by explicitly modeling how sub-feature scores are aggregated into an overall writing-quality judgment (a minimal sketch follows this list).
  • It also releases HowToBench, a large-scale Chinese writing benchmark with 12 genres and 1,302 instructions spanning contextual completion, outline-guided writing, and open-ended generation.
  • Experimental results show ToW substantially reduces bias and achieves strong alignment with human judgments, with a 0.93 Pearson correlation.
  • The authors find that common overlap-based metrics and typical LLM-as-a-judge practices are sensitive to textual perturbations, whereas ToW is more robust.
  • They further report a negative correlation between input length and content-related scores in the outline-guided (Guide) task, suggesting that piling more information into the input does not automatically yield better-scored writing.
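
The distinguishing idea is that the aggregation of sub-feature scores is modeled explicitly in a tree rather than left implicit in a judge prompt. The paper's actual tree structure, criteria, and weights are not reproduced in this summary, so the following Python sketch is purely illustrative: the node names, weights, and leaf scores are hypothetical stand-ins for per-feature judge outputs.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A node in a ToW-style rubric tree: either a leaf sub-feature scored
    directly (e.g. by an LLM judge) or an internal criterion that aggregates
    its children with explicit weights."""
    name: str
    weight: float = 1.0          # weight relative to siblings
    score: float | None = None   # set on leaf sub-features only
    children: list["Node"] = field(default_factory=list)

def aggregate(node: Node) -> float:
    """Bottom-up weighted aggregation: each internal node's score is the
    weighted mean of its children, making the aggregation policy explicit."""
    if not node.children:
        assert node.score is not None, f"leaf {node.name!r} has no score"
        return node.score
    total_weight = sum(c.weight for c in node.children)
    return sum(c.weight * aggregate(c) for c in node.children) / total_weight

# Hypothetical rubric; leaf scores stand in for per-feature judge outputs.
rubric = Node("overall", children=[
    Node("content", weight=0.5, children=[
        Node("relevance", weight=0.6, score=4.0),
        Node("coverage",  weight=0.4, score=3.0),
    ]),
    Node("style", weight=0.3, children=[
        Node("fluency",   weight=0.5, score=5.0),
        Node("coherence", weight=0.5, score=4.0),
    ]),
    Node("format", weight=0.2, score=4.5),
])

print(f"overall score: {aggregate(rubric):.2f}")  # -> 4.05
```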

Abstract

Evaluating the writing capabilities of large language models (LLMs) remains a significant challenge due to the multidimensional nature of writing skill and the limitations of existing metrics. LLMs' performance on thousand-word-scale and open-ended writing is inadequately assessed by traditional reference-based metrics and modern LLM-as-a-judge methods alike. We propose Tree-of-Writing (ToW) to resolve the implicit inconsistency that often arises when an LLM-as-a-judge aggregates all sub-features in text evaluation. ToW uses a tree-structured workflow that explicitly models the aggregation weights of sub-features. We also present HowToBench, a large-scale Chinese writing benchmark encompassing 12 genres and 1,302 instructions across three task categories: contextual completion, outline-guided writing, and open-ended generation. ToW substantially mitigates judge bias, achieving a 0.93 Pearson correlation with human judgments. Furthermore, we find that both overlap-based text generation metrics and popular LLM-as-a-judge practices are vulnerable to textual perturbations, while ToW is robust to them. We also uncover a negative correlation between input length and content-related scores in the Guide task, indicating that content quality cannot be improved simply by piling information into the input.
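
For context on the 0.93 figure: alignment with human judgments is conventionally measured as the Pearson correlation between a metric's scores and human ratings over the same items. A minimal sketch with made-up numbers (the paper's actual evaluation data is not reproduced here):

```python
from scipy.stats import pearsonr

# Hypothetical paired scores for a handful of essays: an automatic metric's
# outputs vs. human ratings on the same items. The paper reports r = 0.93
# for ToW; these values are invented purely for illustration.
metric_scores = [3.6, 4.1, 2.8, 4.7, 3.2, 4.4]
human_scores  = [3.5, 4.0, 3.0, 4.8, 3.1, 4.2]

r, p_value = pearsonr(metric_scores, human_scores)
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")
```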