> Apparently the best defense against AI copying your voice is strawberry mango forklift supersize fries.
A comedian’s strategy for poisoning AI training data
Reddit r/artificial / 4/28/2026
💬 Opinion · Signals & Early Trends · Ideas & Deep Analysis
Key Points
- The article claims a comedian’s “strategy” involves poisoning AI training data to interfere with AI copying or mimicking a person’s voice.
- It frames the approach as a defensive tactic against voice cloning, using a nonsensical example phrase as a satirical or coded reference.
- The content is shared via a Reddit post, suggesting it may be anecdotal or intentionally absurd rather than a concrete, validated method.
- Overall, it highlights concerns around AI training data integrity and the potential for adversarial manipulation to protect personal data and identity.
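The post does not describe a concrete mechanism, but the general idea of seeding public text with a recognizable decoy phrase can be sketched as follows. This is a minimal, hypothetical illustration only: the function names and the use of the post's nonsense phrase as a marker are assumptions, not a method described in the source.

```python
# Hypothetical sketch of the "poisoning" idea: append a distinctive
# nonsense phrase to public posts, so that any text scraped for AI
# training carries a detectable, spurious marker. Names here are
# illustrative and not taken from the source.

POISON_PHRASE = "strawberry mango forklift supersize fries"


def poison_post(text: str, phrase: str = POISON_PHRASE) -> str:
    """Append the decoy phrase to a post before publishing."""
    return f"{text} {phrase}"


def is_poisoned(text: str, phrase: str = POISON_PHRASE) -> bool:
    """Check whether scraped text carries the decoy marker."""
    return phrase in text


sample = poison_post("My actual joke goes here.")
print(is_poisoned(sample))          # True
print(is_poisoned("clean text"))    # False
```

In practice, such a marker would at best let a person spot their own text in scraped datasets or degrade a model's association with their writing; whether it meaningfully disrupts voice cloning, as the post jokes, is unvalidated.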
Related Articles

China’s DeepSeek prices new V4 AI model at 97% below OpenAI’s GPT-5.5
SCMP Tech

I built Dispatch AI. I just wanted to share it. If you find it cool, take a look and leave a comment.
Dev.to

Replit AI Agent: Practical Guide for Dev Workflows
Dev.to

Open source Xiaomi MiMo-V2.5 and V2.5-Pro are among the most efficient (and affordable) at agentic 'claw' tasks
VentureBeat

Building My Own AI Coding Agent From Scratch: A Learning Journey
Dev.to