AI models have a tendency to “hallucinate”: they can produce output that sounds plausible but is false or fabricated. The term gained currency in 2018, when Google AI researchers found that neural machine translation systems are susceptible to producing highly pathological translations that are completely untethered from the source material. More generally, hallucinations are unexpected, incorrect responses from an AI program, and their causes are often not fully understood. For example, a model asked how to plant fruit trees might respond with a recipe for fruit instead.
