From Coarse to Fine: Benchmarking and Reward Modeling for Writing-Centric Generation Tasks

arXiv cs.CL / 5/1/2026


Key Points

  • The paper argues that current benchmarks and reward models for writing-centric generation are too coarse, failing to reflect performance against specific writing requirements.
  • It introduces WEval, a fine-grained evaluation pipeline that assesses writing reward models by correlating their rankings with gold-standard rankings across multiple task categories and requirement types (see the sketch after this list).
  • It also proposes WRL, a fine-grained reinforcement learning training framework that creates positive and negative samples by selectively dropping instruction requirements to improve requirement-adherence reward modeling.
  • Experiments indicate substantial gains on multiple writing benchmarks and strong generalization, and the authors release the code and data publicly.
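
WEval's core measurement, agreement between a reward model's ranking and the gold ranking, reduces to a rank-correlation computation. Below is a minimal sketch in Python; the variable names and the use of Spearman correlation are illustrative assumptions, not the paper's actual interface or metric choice.

```python
from scipy.stats import spearmanr

# Minimal sketch of WEval-style ranking agreement, assuming each candidate
# response to a writing prompt has a scalar reward-model score and a
# gold-standard rank. Names here are illustrative, not the paper's API.
reward_scores = [0.91, 0.40, 0.75, 0.10]  # reward model's scores for 4 responses
gold_ranks    = [4, 2, 3, 1]              # gold ranking (higher = better)

rho, _ = spearmanr(reward_scores, gold_ranks)
print(f"Spearman correlation: {rho:.2f}")  # 1.00 -> perfect rank agreement
```

Averaging such correlations per task category and per requirement type is what makes the evaluation fine-grained rather than a single aggregate score.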

Abstract

Large language models have achieved remarkable progress in text generation but still struggle with generative writing tasks. In terms of evaluation, existing benchmarks evaluate writing reward models coarsely and fail to measure performance from the perspective of specific requirements. In terms of training, existing training methods either use LLM-as-a-judge approaches or train coarse-grained reward models, lacking fine-grained requirement-adherence reward modeling. To address these issues, we propose a fine-grained evaluation pipeline WEval for writing reward models and a fine-grained reinforcement learning training framework WRL. The evaluation data of WEval covers multiple task categories and requirement types, enabling systematic evaluation of writing reward models by measuring the correlation between the rankings of the reward model and gold rankings. WRL constructs positive and negative samples by selectively dropping instruction requirements, allowing for more precise reward model training. Experiments show that our models achieve substantial improvements across various writing benchmarks and exhibit strong generalization. The code and data are publicly available at https://github.com/Rainier-rq1/From_Coarse_to_Fine.
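
To make the requirement-dropping idea concrete, here is a minimal sketch, assuming an instruction decomposed into a core task plus a list of explicit requirements and a generic `generate` text-generation callable; both the decomposition and `generate` are assumptions for illustration, not the released code's interface.

```python
import random

def build_preference_pair(core_task, requirements, generate, drop_p=0.5):
    """Hypothetical sketch of WRL-style sample construction.

    The positive response is generated from the full instruction; the
    negative response from an instruction whose requirements are partially
    dropped, so the pair differs precisely in requirement adherence.
    """
    kept = [r for r in requirements if random.random() >= drop_p]
    if len(kept) == len(requirements) and requirements:
        kept = kept[:-1]  # guarantee at least one requirement is dropped

    full_prompt = core_task + " Requirements: " + "; ".join(requirements)
    degraded_prompt = core_task + " Requirements: " + "; ".join(kept)

    positive = generate(full_prompt)      # should satisfy every requirement
    negative = generate(degraded_prompt)  # misses the dropped requirements
    dropped = [r for r in requirements if r not in kept]
    return positive, negative, dropped
```

A reward model trained to score the positive above the negative in each pair receives a signal tied to the specific dropped requirements, which is what gives the reward its per-requirement resolution.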