LLM Output Detectability and Task Performance Can Be Jointly Optimized

arXiv cs.CL / 5/5/2026


Key Points

  • The paper argues that LLM output detectability (e.g., for transparency and accountability) can be jointly improved alongside downstream task performance rather than optimized in isolation.
  • It introduces PUPPET, a reinforcement-learning fine-tuning framework that combines two reward signals: the machine-class likelihood from a machine-text detector and a task-specific quality score from an evaluator.
  • Experiments on long-form QA, summarization, and essay writing show that PUPPET-trained models reach detectability levels competitive with traditional watermarking while achieving better downstream task results.
  • The method is reported to be efficient, requiring only a few thousand samples and about 1–2 GPU hours, and the benefits generalize across out-of-domain tasks, LLM families, and model sizes.
  • The approach is also claimed to be robust against paraphrasing attacks, suggesting improved practicality for real-world deployment.

Abstract

Detecting machine-generated text is essential for transparency and accountability when deploying large language models (LLMs). Among detection approaches, watermarking is a statistically reliable method by design -- it embeds detectable signals into LLM outputs by biasing their token distributions. However, it has been reported that watermarked LLMs often perform worse on downstream tasks. We propose PUPPET, a framework that fine-tunes an LLM via reinforcement learning to generate text that is both more detectable and better performing on downstream tasks. We use two reward functions: a detector that outputs a machine-class likelihood and an evaluator that measures a task-specific metric. Experiments on long-form QA, summarization, and essay writing show that LLMs trained with PUPPET achieve high detectability competitive with watermarking methods while outperforming them on downstream tasks. The analysis shows that this optimization can be performed efficiently with only a few thousand samples in 1--2 GPU hours. Moreover, these gains are consistent across out-of-domain tasks, different LLM families, and model sizes, and are even robust to paraphrasing attacks.
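The abstract describes two reward functions, a detector that outputs a machine-class likelihood and an evaluator that measures a task-specific metric. A minimal sketch of how such signals might be combined into a single scalar reward is shown below; the function names, the toy scoring heuristics, and the linear weighting `lam` are all illustrative assumptions, not the paper's actual formulation.

```python
# Hedged sketch of a PUPPET-style combined reward (assumptions, not the
# paper's exact method): blend a detector's machine-class likelihood with
# a task-specific quality score into one scalar for RL fine-tuning.

def detector_score(text: str) -> float:
    """Stand-in for a machine-text detector returning P(machine | text)."""
    # Hypothetical heuristic purely for illustration.
    return min(1.0, len(text) / 100.0)

def task_metric(text: str, reference: str) -> float:
    """Stand-in for a task evaluator (e.g., a summarization quality score)."""
    # Toy token-overlap metric, purely illustrative.
    cand, ref = set(text.split()), set(reference.split())
    return len(cand & ref) / max(1, len(ref))

def combined_reward(text: str, reference: str, lam: float = 0.5) -> float:
    """Weighted sum of the detectability and task-performance rewards."""
    return lam * detector_score(text) + (1.0 - lam) * task_metric(text, reference)
```

In an RL loop, this scalar would score each sampled generation, so that gradient updates push the policy toward outputs that are simultaneously more detectable and better on the downstream task; how the paper actually balances or schedules the two signals is not specified in this summary.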