What Is an AI Hallucination?

An AI hallucination occurs when a large language model generates information that sounds plausible but is factually incorrect, fabricated, or nonsensical.

Why Do AI Models Hallucinate?

LLMs are trained to predict the most likely next token — not to verify facts. This means they can:

  • Invent citations that don’t exist
  • Fabricate statistics with false precision
  • Confuse entities (mixing up people, dates, or events)
  • Generate plausible-sounding code that doesn’t compile

Hallucinations are more likely when the model is asked about niche topics or about events after its training cutoff, or when the sampling temperature is set high.
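
The temperature effect can be illustrated with a small softmax sketch. The logit values below are made up for illustration; real models score tens of thousands of tokens, but the scaling works the same way:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores (logits) into next-token probabilities.

    Lower temperature sharpens the distribution toward the top token;
    higher temperature flattens it, giving unlikely tokens more mass.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three candidate next tokens.
logits = [4.0, 2.0, 1.0]

low = softmax_with_temperature(logits, 0.5)   # near-deterministic
high = softmax_with_temperature(logits, 2.0)  # much flatter
```

At temperature 0.5 nearly all the probability concentrates on the top token; at 2.0 the tail tokens get a real chance of being sampled, which is exactly where fabricated details can slip in.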

How to Reduce Hallucinations

  1. Use RAG: Ground the model’s answers in real documents
  2. Lower temperature: Reduce randomness for factual tasks
  3. Ask for sources: Prompt the model to cite the sources behind its claims
  4. Use web search: Let the model verify facts against live data
  5. Cross-check with multiple models: Run the same query through different LLMs
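
The RAG step above can be sketched in a few lines. This is a minimal illustration, not a production pipeline: word overlap stands in for real embedding-based retrieval, and the documents are made up:

```python
def retrieve(query, documents, k=1):
    """Rank documents by word overlap with the query (a stand-in for
    vector similarity search) and return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query, documents):
    """Build a prompt that instructs the model to answer only from the
    retrieved context -- the core idea behind RAG."""
    context = "\n".join(retrieve(query, documents))
    return (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

docs = [
    "Photosynthesis converts light energy into chemical energy.",
    "Temperature controls randomness in token sampling.",
]
prompt = build_grounded_prompt("What does temperature control?", docs)
```

Grounding the answer in retrieved text gives the model something to quote instead of something to invent, and the explicit "say you don't know" instruction gives it a sanctioned exit when the context is silent.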

Hallucination in Elvean

Elvean helps reduce hallucinations by connecting models to web search and external data sources via MCP servers — giving them access to real, up-to-date information instead of relying solely on training data.

Elvean brings all these concepts together in one native Mac app — local models, cloud APIs, agentic tools, and more.

Learn more about Elvean