Alignment Imprint: Zero-Shot AI-Generated Text Detection via Provable Preference Discrepancy

arXiv cs.AI / 4/21/2026


Key Points

  • The paper argues that modern LLM alignment (fine-tuning and preference tuning) leaves a measurable “Alignment Imprint” that can be used to detect AI-generated text.
  • It provides a theoretical derivation showing the log-likelihood ratio decomposes into implicit instructional biases and preference rewards, motivating the imprint concept.
  • To address instability in high-entropy regions, the authors introduce Log-likelihood Alignment Preference Discrepancy (LAPD), an information-weighted statistic grounded in the alignment imprint.
  • The work claims statistical and theoretical advantages over Fast-DetectGPT: alignment-based statistics are proven to dominate it in detection performance, and LAPD strictly improves on unweighted alignment scores when the aligned and base models are close in distribution.
  • Experiments report a 45.82% relative improvement over the strongest existing baselines, with consistent gains across settings.

Abstract

Detecting AI-generated text is an important but challenging problem. Existing likelihood-based detection methods are often sensitive to content complexity and may exhibit unstable performance. In this paper, our key insight is that modern Large Language Models (LLMs) undergo alignment (including fine-tuning and preference tuning), leaving a measurable distributional imprint. We theoretically derive this imprint by abstracting the alignment process as a sequence of constrained optimization steps, showing that the log-likelihood ratio naturally decomposes into implicit instructional biases and preference rewards. We refer to this quantity as the Alignment Imprint. Furthermore, to mitigate instability in high-entropy regions, we introduce Log-likelihood Alignment Preference Discrepancy (LAPD), a standardized, information-weighted statistic based on the alignment imprint. We provide a statistical guarantee that alignment-based statistics dominate Fast-DetectGPT in performance. We also theoretically show that LAPD strictly improves on unweighted alignment scores when the aligned and base models are close in distribution. Extensive experiments show that LAPD achieves a 45.82% improvement relative to the strongest existing baselines, yielding large and consistent gains across all settings.
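The abstract describes LAPD as a standardized, information-weighted version of the per-token alignment imprint, i.e., the log-likelihood ratio between an aligned model and its base model. The paper's exact weighting and standardization schemes are not given in this summary, so the sketch below is one plausible instantiation under stated assumptions: the function names, the inverse-entropy weights (one reading of "information-weighted" that down-weights high-entropy tokens), and the toy inputs are all illustrative, not the authors' implementation.

```python
import math

def alignment_imprint_scores(logp_aligned, logp_base):
    """Per-token alignment imprint: log-likelihood ratio log p_aligned - log p_base.

    Inputs are per-token log-probabilities of the same text under the
    aligned model and the base model.
    """
    return [a - b for a, b in zip(logp_aligned, logp_base)]

def lapd(logp_aligned, logp_base, entropies, eps=1e-8):
    """Hypothetical LAPD sketch (assumed form, not the paper's formula).

    Weights each token's imprint by inverse predictive entropy, so that
    unstable high-entropy positions contribute less, then standardizes
    the weighted mean by the weighted standard deviation.
    """
    scores = alignment_imprint_scores(logp_aligned, logp_base)
    weights = [1.0 / (h + eps) for h in entropies]  # assumed information weights
    wsum = sum(weights)
    mean = sum(w * s for w, s in zip(weights, scores)) / wsum
    var = sum(w * (s - mean) ** 2 for w, s in zip(weights, scores)) / wsum
    # Standardized statistic: larger values suggest the aligned model
    # prefers the text more strongly, hinting it is AI-generated.
    return mean / math.sqrt(var + eps)

# Toy usage: the aligned model assigns uniformly higher log-probabilities
# than the base model, so the statistic comes out positive.
score = lapd([-1.0, -1.5, -1.2], [-2.0, -2.5, -2.2], [0.5, 1.0, 0.8])
```

In a real detector, the log-probabilities and entropies would come from scoring the candidate text with an aligned/base model pair (e.g., a chat model and its pretrained checkpoint) and thresholding the resulting statistic.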