Teaching LLMs Human-Like Editing of Inappropriate Argumentation via Reinforcement Learning

arXiv cs.CL · April 15, 2026


Key Points

  • The paper finds that LLMs and humans differ in editing behavior: LLMs often make multiple scattered, meaning-altering edits, while humans encapsulate dependent changes in self-contained, meaning-preserving edits.
  • It proposes a reinforcement learning method to train LLMs to produce human-like edits that improve argument appropriateness.
  • The approach generates independent, sentence-level edit suggestions that can be accepted or rejected separately, aiming to keep edits controlled and context-consistent.
  • Training uses group relative policy optimization with a multi-component reward that balances semantic similarity, fluency, pattern conformity, and overall argument-level appropriateness.
  • Experiments with automatic and human evaluation show the approach outperforms competitive baselines, with multi-round editing reaching appropriateness close to full rewriting while preserving human-like editing characteristics.
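The combination of group relative policy optimization (GRPO) with a multi-component reward can be sketched as follows. This is an illustrative toy, not the paper's implementation: the component scoring functions, their weights, and the group-relative advantage normalization shown here are assumptions for demonstration.

```python
from statistics import mean, pstdev

def combined_reward(components, weights):
    """Weighted sum of edit-level and argument-level reward components.

    The paper jointly optimizes semantic similarity, fluency, pattern
    conformity, and appropriateness; the weighting scheme here is assumed.
    """
    assert set(components) == set(weights)
    return sum(weights[k] * components[k] for k in components)

def group_relative_advantages(rewards):
    """GRPO-style advantages: normalize each reward against its sampled group."""
    mu, sigma = mean(rewards), pstdev(rewards)
    return [(r - mu) / (sigma or 1.0) for r in rewards]

# Toy component scores for one candidate edit (all in [0, 1]).
components = {
    "semantic_similarity": 0.9,   # edit preserves the original meaning
    "fluency": 0.8,               # edited sentence reads naturally
    "pattern_conformity": 0.7,    # edit matches human editing patterns
    "appropriateness": 0.6,       # argument-level appropriateness gain
}
weights = {k: 0.25 for k in components}  # equal weighting is an assumption

reward = combined_reward(components, weights)           # 0.75
advantages = group_relative_advantages([0.75, 0.5, 0.25])
```

In GRPO, several edit candidates are sampled per input and each candidate's reward is compared to the group mean rather than to a learned value function, which is what makes the multi-component scalar reward sufficient for training.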

Abstract

Editing human-written text has become a standard use case of large language models (LLMs), for example, to make one's arguments more appropriate for a discussion. Comparing human to LLM-generated edits, however, we observe a mismatch in editing strategies: While LLMs often perform multiple scattered edits and tend to change meaning notably, humans rather encapsulate dependent changes in self-contained, meaning-preserving edits. In this paper, we present a reinforcement learning approach that teaches LLMs human-like editing to improve the appropriateness of arguments. Our approach produces self-contained sentence-level edit suggestions that can be accepted or rejected independently. We train the approach using group relative policy optimization with a multi-component reward function that jointly optimizes edit-level semantic similarity, fluency, and pattern conformity as well as argument-level appropriateness. In automatic and human evaluation, it outperforms competitive baselines and the state of the art in human-like editing, with multi-round editing achieving appropriateness close to full rewriting.
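The self-contained, independently acceptable edits described above can be modeled as sentence-level replacement suggestions, each carrying its own accept/reject decision. A minimal sketch, assuming a simple index-to-replacement mapping (the data structures and function names are hypothetical, not from the paper):

```python
def apply_edits(sentences, suggestions, accepted):
    """Apply only the accepted sentence-level edit suggestions.

    sentences:   list of original argument sentences
    suggestions: {sentence_index: replacement_sentence} edit proposals
    accepted:    {sentence_index: bool} per-edit accept/reject decisions
    """
    out = list(sentences)
    for idx, replacement in suggestions.items():
        if accepted.get(idx, False):
            out[idx] = replacement
    return out

original = ["Your claim is complete nonsense.", "The data say otherwise."]
suggestions = {0: "I respectfully disagree with your claim."}

# Accepting the edit replaces only the first sentence; rejecting it
# leaves the argument unchanged.
edited = apply_edits(original, suggestions, accepted={0: True})
```

Because each suggestion rewrites exactly one sentence and depends on no other suggestion, any subset of edits yields a coherent argument, which is the property the approach optimizes for via the pattern-conformity reward.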