Introduction: AI is convenient, but slip-ups happen easily
Generative AI (such as ChatGPT) is versatile, handling everything from writing and research to summarization and code assistance. At the same time, hallucinations (plausible-sounding but incorrect information), information leakage, and bias can cause real-world trouble depending on how you use it.
This article breaks down these tricky topics as clearly as possible and presents practical countermeasures you can start implementing today.
Risk 1: Hallucinations (AI's Plausible-Sounding Misinformation)
What problems can occur?
A hallucination is when the AI states incorrect information with full confidence. It is especially likely to occur in the following contexts:
- Recent information (outside the model's training data or recent news)
- Specialized domains (law, medicine, finance, security)
- Source required (papers, statistics, regulations, policies)
- Proper nouns (names of people, companies, and products; clause numbers; model numbers)
Common "accidents"
- Presenting nonexistent papers or URLs as references
- Presenting outdated laws or regulations as if they were current
- Inventing plausible-looking internal rules, misleading readers who take them at face value
Mitigation: Treat AI as a "drafting assistant" rather than an "answerer"
In short, preventing hallucinations means never treating the AI's answer as the final deliverable. The following measures are effective:
1) Requesting evidence (sources) together with the answer
Include instructions such as "Always provide evidence" and "If something is uncertain, mark it as uncertain" in the prompt.
Example prompt: "Answer in the order: Conclusion → Evidence → Verification method. Always include a source URL or primary source (official document name). If something is unknown, do not guess; write 'Unknown'."
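The example prompt above can be packaged as a small reusable helper, so the same rules are prepended to every question. This is a minimal sketch; the constant and function names are illustrative assumptions, not any vendor's API.

```python
# A reusable system-prompt prefix that enforces the
# "Conclusion -> Evidence -> Verification method" structure.
# Names here are illustrative, not a specific product API.

ANTI_HALLUCINATION_RULES = (
    "Answer in the order: Conclusion -> Evidence -> Verification method.\n"
    "Always include a source URL or primary source (official document name).\n"
    "If something is unknown, do not guess; write 'Unknown'."
)

def build_prompt(question: str) -> str:
    """Prepend the evidence-requiring rules to any user question."""
    return f"{ANTI_HALLUCINATION_RULES}\n\nQuestion: {question}"

print(build_prompt("What retention period does our privacy policy specify?"))
```

The resulting string can then be sent to whichever model API you use; the point is that the anti-hallucination rules are applied consistently rather than retyped ad hoc.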
2) Transform into a form that's easy to verify
Rather than having the AI make declarative statements, it is safer to prompt it to produce a checklist of items to verify:
- Break down claims into bullet points
- Tag each claim with a "to be verified" flag
- Propose where to verify (official sites, terms, primary sources)
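The three bullets above can be sketched as a small data structure: each claim carries a "to be verified" flag and a list of places to check. The class and field names are hypothetical, chosen only to illustrate the shape.

```python
# Represent an AI answer as verifiable claims instead of one
# declarative statement. Field names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str                        # one claim, as a bullet point
    needs_verification: bool = True  # "to be verified" flag
    verify_at: list[str] = field(default_factory=list)  # official sites, terms, primary sources

def to_checklist(claims: list[Claim]) -> str:
    """Render claims as a checkbox list a human reviewer can work through."""
    lines = []
    for c in claims:
        box = "[ ]" if c.needs_verification else "[x]"
        where = f" (check: {', '.join(c.verify_at)})" if c.verify_at else ""
        lines.append(f"{box} {c.text}{where}")
    return "\n".join(lines)

claims = [
    Claim("The free tier allows 60 requests/minute.", verify_at=["official pricing page"]),
    Claim("The API was released in 2023.", needs_verification=False),
]
print(to_checklist(claims))
```

Rendering the answer this way turns verification into a concrete review task instead of an open-ended "is this true?" question.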
3) Align with Retrieval-Augmented Generation (RAG) systems and citation-based workflows
For business use, it's important not to rely on the model's internal knowledge alone; you need a mechanism that fetches reliable sources. Retrieval-Augmented Generation (RAG), for example, searches internal documents and knowledge bases and grounds the answer in the results.
Concrete options include Azure AI Search, Amazon Kendra, Elasticsearch, OpenSearch, or search infrastructures integrated with Notion/Confluence/Google Drive, etc.
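To make the RAG flow concrete, here is a toy sketch: retrieve candidate documents first, then instruct the model to answer only from them. The keyword-overlap scoring is a deliberate stand-in for a real search backend such as the ones named above; the function names and sample knowledge base are assumptions for illustration.

```python
# Toy RAG flow: retrieve relevant documents, then build a prompt that
# restricts the model to those sources. Keyword overlap stands in for
# a real search service (Azure AI Search, OpenSearch, etc.).

def retrieve(query: str, docs: dict[str, str], top_k: int = 2) -> list[str]:
    """Rank documents by how many query words they contain."""
    words = set(query.lower().split())
    scored = sorted(
        docs.items(),
        key=lambda kv: len(words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [title for title, _ in scored[:top_k]]

def build_grounded_prompt(query: str, docs: dict[str, str]) -> str:
    """Assemble a prompt whose answer must cite the retrieved sources."""
    hits = retrieve(query, docs)
    context = "\n".join(f"[{t}] {docs[t]}" for t in hits)
    return (
        "Answer using ONLY the sources below and cite the [title] you used.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

kb = {
    "expense-policy": "expenses above 50000 yen require manager approval",
    "vpn-guide": "connect to the corporate vpn before accessing internal systems",
}
print(build_grounded_prompt("Do expenses need manager approval?", kb))
```

In production the `retrieve` step would call your search infrastructure and return ranked passages, but the overall shape (search first, then constrain the answer to the search results) is the same.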