GPT-4 hallucinated 'Amritavati' as a real drug yesterday. Confident. Wrong. Dangerous.
For a health platform, this isn't a funny bug. It's why building health AI for India can't be a translation job.
Day 27 of building GoDavaii. We're deep in our AI-verified Desi Ilaaj feature - bringing home remedies into the AI age, cross-verified for safety. That means grappling with frontier model limitations head-on.
The Language Barrier: Beyond Translation
Supporting 22+ Indian languages isn't just UI work. It shapes how health concepts are expressed, how symptoms are described, how remedies are understood. A user typing 'tabiyat theek nahi' - colloquial Hindi for 'not feeling well' - needs an AI that reads the underlying symptoms without losing context. Our AI Health Chat has to parse these nuances.
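A minimal sketch of what that colloquial parsing step could look like. The phrase list, symptom tags, and function name are all illustrative assumptions, not GoDavaii's actual data or pipeline:

```python
# Hypothetical first-pass normalizer: map known colloquial health phrases
# to symptom tags before anything reaches an LLM. A real system would use
# transliteration handling and a much larger, expert-curated phrase bank.
COLLOQUIAL_SYMPTOMS = {
    "tabiyat theek nahi": ["general_malaise"],
    "pet kharab": ["stomach_upset"],
    "sar bhari": ["head_heaviness"],
}

def extract_symptoms(user_text: str) -> list[str]:
    """Return symptom tags for any known colloquial phrases in the text."""
    text = user_text.lower().strip()
    tags: list[str] = []
    for phrase, symptoms in COLLOQUIAL_SYMPTOMS.items():
        if phrase in text:
            tags.extend(symptoms)
    return tags
```

Matching known phrases first keeps the common cases deterministic; only unmatched input needs a model call.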
The real challenge is the cultural knowledge layer. Desi Ilaaj isn't uniformly documented. An LLM trained on English internet data fills gaps with plausible-sounding fiction when faced with region-specific traditional queries. 'Amritavati' was exactly that - a confident hallucination.
Our approach: a dedicated knowledge graph plus a verification layer beyond generic LLM inference.
Cross-Verification: Allopathy Meets Ayurveda
GoDavaii's core moat is cross-verifying allopathic and Ayurvedic remedies - not just within their own systems, but against each other. A common fever remedy in one tradition might interact with a prescription from the other. No global competitor does this at scale in local languages.
This is an architectural problem. We use fine-tuned models (Gemini 2.5 Flash for summarization and language tasks) plus a custom knowledge graph built by medical professionals and Ayurvedic experts. When a user asks about Desi Ilaaj, we check our verified database first. Only then do we query general models, with strict guardrails to flag hallucinations or low-confidence outputs. 'Amritavati' was caught by this process.
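The KB-first flow with a confidence guardrail can be summarized in a few lines. Everything here - the threshold, the sample entry, the `llm` callable returning a (text, confidence) pair - is an assumed sketch, not GoDavaii's implementation:

```python
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    source: str        # "verified_kb", "llm", or "guardrail"
    confidence: float

# Illustrative placeholder entry; the real system uses an expert-built knowledge graph.
VERIFIED_KB = {
    "tulsi kadha": "Verified entry with expert-reviewed guidance.",
}

CONFIDENCE_THRESHOLD = 0.8  # assumed guardrail cutoff

def answer_remedy_query(query: str, llm) -> Answer:
    key = query.lower().strip()
    if key in VERIFIED_KB:                  # 1. verified database first
        return Answer(VERIFIED_KB[key], "verified_kb", 1.0)
    text, confidence = llm(query)           # 2. only then query a general model
    if confidence < CONFIDENCE_THRESHOLD:   # 3. refuse low-confidence output
        return Answer("I can't verify this remedy. Please consult a professional.",
                      "guardrail", confidence)
    return Answer(text, "llm", confidence)
```

Under this flow, a confident-sounding but unverified hallucination like 'Amritavati' never reaches the user: it misses the verified database and trips the guardrail instead of being passed through.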
Building Safety-First
Our Top 14 Global Finalist spot at Startup Flight Vietnam gave us exposure, but the core feedback always circles back to our unique Indian problem set. The people coming online won't be English-first. They'll have health questions specific to their diet, environment, traditional practices, language. This isn't just accessibility - it's safety. Incorrect health information in a trusted AI interface is dangerous.
GoDavaii is a question-builder for families. An extra check before your next appointment. A way to surface sharper questions for doctors. A catch for what a busy clinic visit might miss - especially the interplay of modern medicine and traditional practices.
What health query do you think an English-only AI would struggle with most in India? Let me know in the comments.
Try GoDavaii at godavaii.com



