Google’s AI chatbot, Bard, has introduced a feature that lets users double-check the accuracy of its responses. Parts of an answer that Bard is highly confident in are highlighted in green; parts where Bard has found sources that could refute the statement, or where no relevant sources could be found, are highlighted in orange. Google is also working on a large language model (LLM) that estimates how a sentence should continue, which should help reduce “hallucinations” – errors in which the AI produces wholly wrong or fabricated information.
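The color-coding described above can be pictured as a simple claim-verification routine. The sketch below is purely illustrative – the function names and the naive word-overlap heuristic are assumptions, not Bard’s actual method, which would rely on search retrieval and LLM-based checking: each sentence is labeled green when a source corroborates it, and orange when a source contradicts it or no relevant source is found.

```python
from typing import List


def _overlaps(a: str, b: str, threshold: float = 0.5) -> bool:
    """Crude relevance test: fraction of the sentence's words shared with a source."""
    words_a = set(a.lower().split())
    words_b = set(b.lower().split())
    return len(words_a & words_b) / max(len(words_a), 1) >= threshold


def classify_sentence(sentence: str, corroborating: List[str],
                      refuting: List[str]) -> str:
    """Toy stand-in for the double-check feature: returns the highlight color.

    'green'  -> a retrieved source supports the sentence
    'orange' -> a source could refute it, or no relevant source was found
    """
    if any(_overlaps(sentence, s) for s in refuting):
        return "orange"   # a source could refute the statement
    if any(_overlaps(sentence, s) for s in corroborating):
        return "green"    # sources agree with the statement
    return "orange"       # no relevant sources found
```

A production system would replace the word-overlap check with an entailment or fact-checking model; the branching logic, however, mirrors the green/orange behavior the article describes.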
