Dharma, Data and Deception: An LLM-Powered Rhetorical Analysis of Cow-Urine Health Claims on YouTube

arXiv cs.CL, April 27, 2026


Key Points

  • The study analyzes 100 YouTube transcripts that either promote or debunk cow-urine (gomutra) health claims, showing how cultural traditions can intersect with science-sounding misinformation.
  • Researchers use multiple LLMs (including GPT-4 variants, Gemini 2.5 Pro, and Mistral Medium 3) to annotate persuasive rhetoric with a 14-category taxonomy focused on tactics like authority appeals, efficacy claims, and conspiracy framing.
  • The results indicate that claim promoters mainly lean on efficacy appeals and social proof, whereas debunkers more often stress authority and direct rebuttals.
  • Human evaluation on a subset of annotations achieved 90.1% inter-annotator agreement, supporting the reliability of the taxonomy and the validation approach.
  • The work contributes computational methods for studying misinformation at scale and demonstrates LLMs as tools for mapping cultural discourse dynamics online.
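The stance-level contrast in the findings (promoters favoring efficacy appeals and social proof, debunkers favoring authority and rebuttal) can be illustrated with a minimal sketch. The records and category names below are hypothetical stand-ins for the paper's 14-category taxonomy, not its actual data:

```python
from collections import Counter

# Hypothetical LLM annotations: each transcript has a stance plus the
# rhetorical tactics tagged in it. Category names are illustrative only.
annotations = [
    {"stance": "promote", "tactics": ["efficacy_appeal", "social_proof"]},
    {"stance": "promote", "tactics": ["efficacy_appeal"]},
    {"stance": "debunk",  "tactics": ["authority_appeal", "rebuttal"]},
    {"stance": "debunk",  "tactics": ["rebuttal"]},
]

def tactic_counts_by_stance(records):
    """Count how often each tactic appears within each stance group."""
    counts = {}
    for rec in records:
        counts.setdefault(rec["stance"], Counter()).update(rec["tactics"])
    return counts

counts = tactic_counts_by_stance(annotations)
print(counts["promote"].most_common(1))  # [('efficacy_appeal', 2)]
print(counts["debunk"].most_common(1))   # [('rebuttal', 2)]
```

Aggregating per stance like this is the simplest way to surface the asymmetry the study reports; the real analysis would run it over all 100 annotated transcripts.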

Abstract

Health misinformation remains one of the most pressing challenges on social media, particularly when cultural traditions intersect with scientific-sounding claims. These dynamics are not only global but also deeply local, manifesting in culturally specific controversies that require careful analysis. Motivated by this, we examine 100 YouTube transcripts that promote or debunk cow urine (gomutra) as a health remedy, focusing on rhetorical strategies such as appeals to authority, efficacy appeals, and conspiracy framing. We employ large language models (LLMs) including GPT-4, GPT-4o, GPT-4.1, GPT-5, Gemini 2.5 Pro, and Mistral Medium 3 to annotate transcripts using a 14-category taxonomy of persuasive tactics. Our analysis reveals that promoters predominantly rely on efficacy appeals and social proof, while debunkers emphasize authority and rebuttal. Human evaluation of a subset of annotations yielded 90.1% inter-annotator agreement, confirming the reliability of our taxonomy and validation process. This work advances computational methods for misinformation analysis and demonstrates how LLMs can support large-scale studies of cultural discourse online.
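The 90.1% figure is an inter-annotator agreement score. One common way to compute such a score for multi-label annotation is simple percent agreement over every (item, category) yes/no decision; the sketch below assumes that formulation, with hypothetical labels and taxonomy names (the paper does not specify its exact calculation):

```python
# Minimal sketch of percent agreement between two annotators who each assign
# a subset of a fixed taxonomy to every transcript. Taxonomy and labels here
# are hypothetical placeholders, not the paper's actual categories or data.

TAXONOMY = ["authority_appeal", "efficacy_appeal",
            "conspiracy_framing", "social_proof"]

def percent_agreement(ann_a, ann_b, taxonomy):
    """Fraction of per-item, per-category yes/no decisions shared by both."""
    decisions = agreements = 0
    for labels_a, labels_b in zip(ann_a, ann_b):
        for cat in taxonomy:
            decisions += 1
            if (cat in labels_a) == (cat in labels_b):
                agreements += 1
    return agreements / decisions

ann_a = [{"efficacy_appeal"}, {"authority_appeal", "social_proof"}]
ann_b = [{"efficacy_appeal"}, {"authority_appeal"}]
print(round(percent_agreement(ann_a, ann_b, TAXONOMY), 3))  # 0.875
```

Percent agreement is easy to interpret but does not correct for chance; chance-corrected statistics such as Cohen's kappa are a common complement when categories are imbalanced.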