Hallucination

AI-generated information that sounds plausible but is factually incorrect.

Why it matters

An agent that hallucinates and then acts on the false information, for example by sending the wrong email or making an incorrect record update, is a direct business risk.

In practice

We mitigate hallucinations with three layers: RAG (grounding responses in real data), FAQ matching (serving pre-verified answers instead of generated ones), and confidence thresholds (declining to act when the model is unsure), as sketched below.
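
A minimal sketch of how the three layers can compose, assuming a hypothetical `generate` callable backed by RAG retrieval and a hypothetical `score_confidence` estimator; the FAQ entries and both thresholds are illustrative, not production values:

```python
from difflib import SequenceMatcher

# Hypothetical pre-verified FAQ pairs; a real system would load these
# from a reviewed knowledge base.
FAQ = {
    "what are your opening hours": "We are open 9:00-17:00, Monday to Friday.",
    "how do i reset my password": "Use the 'Forgot password' link on the login page.",
}

FAQ_MATCH_THRESHOLD = 0.85   # assumed cutoff for "same question"
CONFIDENCE_THRESHOLD = 0.7   # assumed cutoff below which we decline to act


def answer(question: str, generate, score_confidence) -> str:
    """Route a question through FAQ matching, then a grounded model call.

    `generate` and `score_confidence` are stand-ins for a RAG-backed
    model call and a confidence estimator.
    """
    # 1. FAQ matching: return a pre-verified answer when the question is
    #    close enough to a known one, so nothing is generated at all.
    normalized = question.strip().lower().rstrip("?")
    for known, verified_answer in FAQ.items():
        if SequenceMatcher(None, normalized, known).ratio() >= FAQ_MATCH_THRESHOLD:
            return verified_answer

    # 2. RAG: generate an answer grounded in retrieved documents.
    draft = generate(question)

    # 3. Confidence threshold: act on the draft only if the estimator
    #    trusts it; otherwise hand off rather than risk a hallucination.
    if score_confidence(question, draft) >= CONFIDENCE_THRESHOLD:
        return draft
    return "I'm not certain about this one; escalating to a human agent."
```

Checking the FAQ first means the most common questions never touch the generative path at all, which removes hallucination risk for those answers entirely.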
