Dharma, Data and Deception: An LLM-Powered Rhetorical Analysis of Cow-Urine Health Claims on YouTube
arXiv cs.CL / 4/27/2026
Key Points
- The study analyzes 100 YouTube transcripts that either promote or debunk cow-urine (gomutra) health claims, showing how cultural traditions can intersect with science-sounding misinformation.
- Researchers use multiple LLMs (including GPT-4 variants, Gemini 2.5 Pro, and Mistral Medium 3) to annotate persuasive rhetoric with a 14-category taxonomy focused on tactics like authority appeals, efficacy claims, and conspiracy framing.
- The results indicate that claim promoters mainly lean on efficacy appeals and social proof, whereas debunkers more often stress authority and direct rebuttals.
- Human evaluation on a subset of the LLM annotations reached 90.1% inter-annotator agreement, supporting both the coherence of the taxonomy and the reliability of the validation approach.
- The work contributes computational methods for studying misinformation at scale and demonstrates LLMs as tools for mapping cultural discourse dynamics online.
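The 90.1% figure above can be read as simple percent agreement between annotators over the taxonomy labels. A minimal sketch, assuming percent agreement is the metric (the paper may use a chance-corrected statistic instead) and using hypothetical labels drawn from the taxonomy's category names:

```python
# Sketch of percent inter-annotator agreement over rhetorical-category labels.
# Category names and label data below are illustrative, not from the paper.

def percent_agreement(labels_a, labels_b):
    """Fraction of items on which two annotators assign the same category."""
    assert len(labels_a) == len(labels_b), "annotators must label the same items"
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)

# Hypothetical LLM vs. human labels on five transcript segments.
ann_llm   = ["efficacy", "authority", "social_proof", "conspiracy", "efficacy"]
ann_human = ["efficacy", "authority", "social_proof", "efficacy",   "efficacy"]

print(f"{percent_agreement(ann_llm, ann_human):.1%}")  # → 80.0%
```

On the paper's full evaluation subset, the same computation would yield the reported 90.1% if raw agreement is indeed the statistic used.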