When Personalization Tricks Detectors: The Feature-Inversion Trap in Machine-Generated Text Detection

arXiv cs.CL / 5/1/2026


Key Points

  • The paper argues that detecting machine-generated text becomes harder when LLMs imitate a specific individual’s style, creating new risks of identity impersonation.
  • It introduces a new benchmark dataset ("\dataset") to evaluate how robust current detectors are under personalized settings using pairs of original texts and LLM-generated imitations.
  • Experiments reveal large performance gaps across existing detectors in personalized scenarios, with some state-of-the-art methods experiencing substantial drops in accuracy.
  • The authors attribute the degradation to a "feature-inversion trap," where features effective in general domains become reversed and misleading for personalized text.
  • They propose "\method," which constructs probe datasets targeting latent inverted feature directions to predict how a detector's performance will change, achieving 85% correlation with the observed performance gaps.

Abstract

Large language models (LLMs) have grown more powerful in language generation, producing fluent text and even imitating personal style. Yet, this ability also heightens the risk of identity impersonation. To the best of our knowledge, no prior work has examined personalized machine-generated text (MGT) detection. In this paper, we introduce \dataset, the first benchmark for evaluating detector robustness in personalized settings, built from literary and blog texts paired with their LLM-generated imitations. Our experimental results demonstrate large performance gaps across detectors in personalized settings: some state-of-the-art models suffer significant drops. We attribute this limitation to the *feature-inversion trap*, where features that are discriminative in general domains become inverted and misleading when applied to personalized text. Based on this finding, we propose \method, a simple and reliable way to predict detector performance changes in personalized settings. \method identifies latent directions corresponding to inverted features and constructs probe datasets that differ primarily along these features to evaluate detector dependence. Our experiments show that \method can accurately predict both the direction and the magnitude of post-transfer changes, showing 85% correlation with the actual performance gaps. We hope that this work will encourage further research on personalized text detection.
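To make the feature-inversion trap concrete, here is a minimal toy sketch (not the paper's code; the feature, distributions, and probe construction are all hypothetical). A threshold detector relies on a single feature that separates human from machine text in the general domain; when personalized imitation flips which class scores higher on that feature, the same detector falls below chance. A probe set perturbed along the suspected feature direction then reveals how strongly the detector depends on it, which is the spirit of the proposed prediction method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D stylistic feature (e.g. some "burstiness"-like score).
# General domain: machine text scores HIGHER on the feature.
gen_human = rng.normal(0.0, 1.0, 1000)
gen_machine = rng.normal(2.0, 1.0, 1000)

# Personalized domain: imitating a distinctive author INVERTS the feature,
# so the human originals now score higher than the LLM imitations.
per_human = rng.normal(2.0, 1.0, 1000)
per_machine = rng.normal(0.0, 1.0, 1000)

def detector(x, threshold=1.0):
    """Threshold detector fit on the general domain:
    predict 'machine' when the feature exceeds the threshold."""
    return x > threshold

def accuracy(human, machine):
    # Balanced accuracy: humans should be below, machines above.
    return 0.5 * ((~detector(human)).mean() + detector(machine).mean())

acc_general = accuracy(gen_human, gen_machine)
acc_personal = accuracy(per_human, per_machine)
print(f"general: {acc_general:.2f}, personalized: {acc_personal:.2f}")

# Probe idea: perturb inputs along the suspected inverted feature
# direction and measure how much the detector's decisions shift.
probe = rng.normal(1.0, 0.1, 200)
dependence = detector(probe + 1.0).mean() - detector(probe - 1.0).mean()
print(f"detector dependence on the feature: {dependence:.2f}")
```

A detector that shows strong dependence on a feature whose direction flips in the personalized domain is predicted to suffer a large drop, matching the paper's claim that probe-measured dependence tracks both the direction and the magnitude of the performance change.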