Synthetic Trust Attacks: Modeling How Generative AI Manipulates Human Decisions in Social Engineering Fraud
arXiv cs.AI / 4/8/2026
Key Points
- The paper argues that the central threat from generative AI-driven scams is not a failure of synthetic-media detection per se, but the manipulation of the victim’s decision-making through “synthetic trust.”
- It introduces Synthetic Trust Attacks (STAs) as a formal threat category and proposes STAM, an eight-stage operational model covering the attacker’s full chain from reconnaissance to post-compliance leverage.
- Citing reported performance gaps (e.g., human deepfake-detection accuracy of roughly 55.5% and elevated compliance rates with LLM-driven scam agents), the authors contend that the perception layer is already failing in many real-world scenarios.
- The research provides a Trust-Cue Taxonomy, a reproducible incident coding schema, and four falsifiable hypotheses connecting attack structure to compliance outcomes (a sketch of such a coding record follows this list).
- As a decision-layer defense, it operationalizes the Calm, Check, Confirm protocol, reframing mitigation around improving human and organizational decision processes rather than only detecting fakes (see the protocol sketch below).
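
To make the coding-schema idea concrete, here is a minimal sketch in Python of what a reproducible incident record might look like. The field names (`channel`, `trust_cues`, `stam_stage_reached`, `complied`) and the example values are illustrative assumptions, not the paper's actual schema; only the Trust-Cue Taxonomy and the eight STAM stages come from the source.

```python
# Hypothetical sketch of an incident coding record. Field names and the
# example values are assumptions; the paper's actual schema is not shown here.
from dataclasses import dataclass, field

@dataclass
class IncidentRecord:
    incident_id: str                    # unique identifier for the reported incident
    channel: str                        # e.g. "voice", "video", "email", "chat"
    trust_cues: list[str] = field(default_factory=list)  # cues drawn from the Trust-Cue Taxonomy
    stam_stage_reached: int = 1         # furthest STAM stage reached (1 to 8)
    complied: bool = False              # did the target comply with the attacker's request?

# Coding a hypothetical incident: a voice-cloning scam that reached a late STAM stage.
record = IncidentRecord(
    incident_id="2026-0142",
    channel="voice",
    trust_cues=["authority", "urgency", "familiar-voice"],
    stam_stage_reached=7,
    complied=True,
)
```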
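The paper names the three steps of its decision-layer protocol (Calm, Check, Confirm), but the sketch below is only one plausible way to operationalize them as a gate on high-stakes requests; the cooldown delay, the pressure-cue list, and the out-of-band verification hook are all assumptions, not the paper's implementation.

```python
# A minimal sketch of the Calm, Check, Confirm idea as a decision-layer gate.
# Step names come from the paper; everything else here is illustrative.
import time

def calm_check_confirm(request: str, verify_out_of_band, cooldown_seconds: int = 60) -> bool:
    """Gate a high-stakes request behind the three decision-layer steps."""
    # Calm: impose a deliberate delay so manufactured urgency loses its force.
    time.sleep(cooldown_seconds)

    # Check: inspect the request itself for pressure tactics (hypothetical cue list).
    pressure_cues = {"urgent", "immediately", "do not tell anyone"}
    if any(cue in request.lower() for cue in pressure_cues):
        return False  # treat pressure language as a reason to stop

    # Confirm: verify the requester through an independent channel
    # (e.g. calling back on a known number), never the channel that
    # delivered the request.
    return verify_out_of_band(request)
```

The key design point is that confirmation happens on an independent channel, so a compromised or synthetic channel cannot vouch for itself.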