Hallucination (AI)

AI hallucination refers to the phenomenon in which AI models, such as the large language models (LLMs) behind generative chatbots or computer vision systems, produce outputs that are illogical, unfaithful to the source content, or factually inaccurate. Such outputs are not grounded in the training data, arise from decoding errors in the model, or follow no discernible pattern.


Areas of application

  • Natural Language Processing (NLP)
  • Computer Vision
  • Robotics
  • Artificial Intelligence (AI) Research
  • Machine Learning

Example

For instance, a language model may generate a sentence that is grammatically correct but nonsensical, such as ‘The cat purred orange fluffy cheese.’
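One simple way to catch outputs like this is a grounding check: compare the content words of a generated sentence against a trusted source text and flag sentences with too little overlap. The sketch below is a minimal, illustrative heuristic, not a production hallucination detector; the function name, stopword list, and threshold are all assumptions chosen for this example.

```python
# Toy faithfulness check: flag generated sentences whose content words
# mostly do not appear in the source context. This is a naive heuristic
# for illustration only; real systems use entailment models or retrieval.

def is_unsupported(generated: str, source: str, threshold: float = 0.5) -> bool:
    """Return True if too few content words of `generated` occur in `source`."""
    stopwords = {"the", "a", "an", "is", "are", "was", "of", "and", "or", "to", "on"}
    gen_words = [
        w.strip(".,!?") for w in generated.lower().split()
        if w.strip(".,!?") not in stopwords
    ]
    if not gen_words:
        return False  # nothing to check
    src_words = set(source.lower().split())
    overlap = sum(1 for w in gen_words if w in src_words)
    return overlap / len(gen_words) < threshold

source = "the cat sat quietly on the warm windowsill"
print(is_unsupported("The cat sat on the windowsill.", source))        # → False (grounded)
print(is_unsupported("The cat purred orange fluffy cheese.", source))  # → True (flagged)
```

Word-overlap checks like this are cheap but crude: they miss paraphrases and cannot judge reasoning, which is why practical hallucination detection typically layers retrieval or entailment models on top.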