Pramana: Fine-Tuning Large Language Models for Epistemic Reasoning through Navya-Nyaya
arXiv cs.AI / 4/8/2026
Key Points
- The paper introduces Pramana, a fine-tuning approach aimed at reducing LLM “epistemic gaps” by teaching models explicit, evidence-grounded reasoning rather than fluent but unfounded claims.
- Pramana leverages Navya-Nyaya logic, enforcing a structured six-phase methodology (SAMSHAYA, PRAMANA, PANCHA AVAYAVA, TARKA, HETVABHASA, NIRNAYA) to differentiate knowledge from hypotheses and detect fallacies.
- Experiments fine-tune Llama 3.2-3B and DeepSeek-R1-Distill-Llama-8B on 55 Navya-Nyaya-formatted problems, achieving 100% semantic correctness on a held-out evaluation set despite imperfect adherence to the strict reasoning format.
- Ablation results indicate that format prompting and generation temperature substantially influence performance, with different optimal configurations across training stages.
- The authors release models, datasets, and training infrastructure on Hugging Face to support further research into epistemic reasoning frameworks for AI.
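The gap the paper reports between semantic correctness and format adherence suggests measuring how faithfully an output follows the six-phase structure. A minimal sketch of such a format-adherence check is below; the phase names come from the summary, but the idea that phases appear as plain-text section markers (and the scoring function itself) are illustrative assumptions, not the paper's evaluation protocol.

```python
# The six Navya-Nyaya reasoning phases, in the order the methodology
# prescribes. Phase names are from the paper's summary; treating them
# as literal section markers in the output is an assumption.
PHASES = ["SAMSHAYA", "PRAMANA", "PANCHA AVAYAVA",
          "TARKA", "HETVABHASA", "NIRNAYA"]

def phase_adherence(output: str) -> float:
    """Return the fraction of the six phases that appear in order.

    A greedy in-order scan: each phase is searched for only after the
    position where the previous matched phase ended, so out-of-order
    phases are not counted.
    """
    pos, found = 0, 0
    for phase in PHASES:
        idx = output.find(phase, pos)
        if idx == -1:
            continue  # phase missing (or out of order): skip it
        found += 1
        pos = idx + len(phase)
    return found / len(PHASES)
```

A fully structured response scores 1.0, while a fluent answer that skips the doubt/evidence phases scores lower, which is one simple way to quantify the "imperfect adherence" the summary mentions.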