Supercharged scams
MIT Technology Review / 4/22/2026
Key Points
- The public release of ChatGPT in late 2022 highlighted how generative AI can rapidly produce convincing human-like text from simple prompts.
- Criminals quickly adopted large language models to scale up malicious email campaigns, including both broad spam and more targeted, sophisticated attacks.
- The article argues that generative AI is lowering the cost and effort of running scam operations, accelerating both the volume and the quality of fraud attempts.
- It underscores the need for improved defenses and awareness as AI-generated content becomes increasingly prevalent in cybercrime.
- Overall, the piece frames “supercharged scams” as an emerging threat driven by accessible generative AI capabilities.
When ChatGPT was released to the public in late 2022, it opened people’s eyes to how easily generative AI could churn out vast amounts of human-seeming text from simple prompts. This quickly caught the attention of criminals, who soon began using large language models to produce malicious emails—both the untargeted spam kind and more sophisticated,…