Human vs. Machine Deception: Distinguishing AI-Generated and Human-Written Fake News Using Ensemble Learning
arXiv cs.CL · April 14, 2026
Key Points
- The paper studies how AI-generated fake news differs from human-written misinformation, focusing on linguistic, structural, and emotional signals.
- It builds document-level features using sentence structure, lexical diversity, punctuation patterns, readability metrics, and emotion-related measures (e.g., fear, anger, trust, anticipation).
- Multiple classifiers (logistic regression, random forest, SVM, XGBoost, and a neural network) are compared, and evaluation uses accuracy and ROC-AUC.
- Results indicate that readability-based features are the most informative predictors, and AI-generated text tends to show more uniform stylistic patterns.
- An ensemble approach that aggregates model predictions yields modest but consistent gains over any individual model, suggesting that AI-generated and human-written fake news remain reliably distinguishable from surface-level text properties.
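The feature families and the prediction-aggregation step described above can be sketched as follows. This is a minimal, stdlib-only illustration, not the paper's implementation: the syllable counter is a crude vowel-group heuristic, the Flesch Reading Ease formula stands in for the readability metrics, and `soft_vote` assumes the ensemble averages per-model probabilities (the paper does not specify its aggregation rule).

```python
import re
from statistics import mean

def count_syllables(word: str) -> int:
    # Crude proxy: count contiguous vowel groups (assumption, not the paper's method).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def document_features(text: str) -> dict:
    """Document-level stylometric features: sentence structure,
    lexical diversity, punctuation rate, and readability."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    n_sents = max(1, len(sentences))
    syllables = sum(count_syllables(w) for w in words)
    # Flesch Reading Ease: 206.835 - 1.015*(words/sentence) - 84.6*(syllables/word)
    flesch = 206.835 - 1.015 * (n_words / n_sents) - 84.6 * (syllables / n_words)
    return {
        "avg_sentence_len": n_words / n_sents,                      # sentence structure
        "type_token_ratio": len({w.lower() for w in words}) / n_words,  # lexical diversity
        "punct_rate": sum(text.count(c) for c in ",;:!?") / n_words,    # punctuation patterns
        "flesch_reading_ease": flesch,                              # readability
    }

def soft_vote(probas: list[float]) -> tuple[str, float]:
    """Average each model's P(AI-generated); threshold at 0.5."""
    avg = mean(probas)
    return ("ai-generated" if avg > 0.5 else "human-written"), avg
```

In practice each classifier (logistic regression, random forest, SVM, XGBoost, neural network) would be trained on vectors like the one `document_features` returns, and `soft_vote` would combine their calibrated probabilities at inference time.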