Large Language Models Outperform Humans in Fraud Detection and Resistance to Motivated Investor Pressure
arXiv cs.AI / 4/23/2026
Key Points
- The study investigates whether LLMs trained with human feedback would suppress fraud warnings when investors arrive already convinced of a fraudulent opportunity.
- In a preregistered experiment using seven leading LLMs across 12 investment scenarios, motivated investor framing did not reduce AI fraud warnings and may have slightly increased them.
- Endorsement reversals (switching away from fraud-related conclusions) were rare, occurring in fewer than 3 in 1,000 observations.
- Human advisors endorsed fraudulent investments at much higher baseline rates (13–14%) than the LLMs (0%), and under pressure they suppressed warnings at roughly 2–4× the rate of the AI models.
- Overall, the results suggest AI advisory systems currently deliver more consistent fraud warnings than lay human advisors in the same role.