DSIPA: Detecting LLM-Generated Texts via Sentiment-Invariant Patterns Divergence Analysis
arXiv cs.AI / 4/30/2026
Key Points
- The paper introduces DSIPA, a training-free, zero-shot framework to detect LLM-generated text by analyzing how sentiment patterns change under controlled stylistic variations.
- DSIPA is designed to be robust to adversarial perturbations, paraphrasing attacks, and domain shifts, avoiding common requirements such as access to model parameters or large labeled datasets.
- It operates in a black-box setting using two unsupervised metrics, sentiment distribution consistency and sentiment distribution preservation, which capture the gap between the typically emotion-stable outputs of LLMs and the more affectively diverse writing of humans.
- Experiments across multiple state-of-the-art proprietary and open-source models (e.g., GPT-5.2, Gemini-1.5-pro, Claude-3, LLaMa-3.3) and five domains show F1-score improvements of up to 49.89% over baseline detection methods.
- The authors report strong cross-domain generalization and resilience to adversarial conditions, presenting an interpretable behavioral signal for secure content identification as LLM capabilities evolve.
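The summary does not spell out how the two metrics are computed, but the underlying idea (compare sentiment distributions before and after a controlled stylistic rewrite, and expect LLM text to preserve them more tightly) can be sketched as follows. This is a minimal illustration, not the paper's method: the lexicon-based sentiment scorer, the sentence-level granularity, and the `preservation_score` definition are all placeholder assumptions standing in for whatever classifier and formulas DSIPA actually uses.

```python
import math
from collections import Counter

# Toy word lists standing in for a real sentiment classifier
# (assumption: the actual model used by DSIPA is not given here).
POS = {"good", "great", "calm", "happy", "excellent"}
NEG = {"bad", "sad", "angry", "terrible", "poor"}

def sentiment_distribution(text):
    """Return a (pos, neg, neutral) probability distribution over sentences."""
    counts = Counter()
    sentences = [s for s in text.split(".") if s.strip()]
    for s in sentences:
        words = set(s.lower().split())
        if words & POS:
            counts["pos"] += 1
        elif words & NEG:
            counts["neg"] += 1
        else:
            counts["neu"] += 1
    total = sum(counts.values()) or 1
    return [counts[k] / total for k in ("pos", "neg", "neu")]

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2) between two distributions."""
    def kl(a, b):
        return sum(x * math.log2(x / y) for x, y in zip(a, b) if x > 0 and y > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def preservation_score(original, restyled):
    """Hypothetical 'sentiment distribution preservation' score: 1 means the
    sentiment profile is unchanged by the stylistic rewrite (the emotion-stable
    pattern the paper attributes to LLM text); lower values mean it shifted."""
    return 1.0 - js_divergence(sentiment_distribution(original),
                               sentiment_distribution(restyled))
```

Under this sketch, a text whose restyled version carries the same sentence-level sentiment profile scores 1.0, while a rewrite that flips the affective content scores lower; a detector would threshold such scores, with consistently high preservation taken as evidence of machine generation.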