Built a RAG-powered AI assistant for Australian workplace compliance use cases. Deployed it across construction sites, aged care facilities, and mining operations. Here's what I learned the hard way:
- Query expansion matters more than chunk size
Everyone obsesses over chunk size (400 words? 512 tokens?). The real win was generating 4 alternative phrasings of each query via Haiku, running all 4 against ChromaDB, then merging and deduplicating results. Retrieval quality jumped noticeably — especially for domain-specific jargon where users phrase things differently than document authors.
- Source boost for named documents
If a user's query contains words that match an indexed document title, force-include chunks from that doc regardless of semantic similarity. "What does our FIFO policy say about R&R flights?" should always pull from the FIFO policy — not just semantically similar chunks that happen to mention flights.
- Layer your prompts — don't let clients break Layer 1
Three-layer system: core security/safety rules (immutable), vertical personality (swappable per industry), client custom instructions (additive only). Clients cannot override Layer 1 via their custom instructions. Saved me from "ignore previous instructions" attacks and clients accidentally jailbreaking their own bots.
- Local embeddings are good enough
sentence-transformers all-MiniLM-L6-v2 running locally on ChromaDB. No external embedding API. For document Q&A in a specific domain, it performs close enough to ada-002 that the cost and latency savings are worth it. The LLM quality (Claude Haiku) is doing more work than the embeddings anyway.
- One droplet per client
Tried shared infrastructure first. The operational overhead of keeping ChromaDB collections isolated, managing API keys, and preventing cross-contamination was worse than just spinning a $6/mo VM per client. Each client owns their vector store. Their documents never touch shared infrastructure.
Happy to share code — RAG engine is on GitHub if anyone wants to pick it apart.
[link] [comments]




