Amazon’s Q, an AI chatbot for workers in its cloud division, has been found to be divulging confidential information, providing inaccurate legal advice, and returning harmful or inappropriate responses that could put customer accounts at risk. Although Q was designed to be a more reliable alternative to consumer-focused AI chatbots, it has been experiencing severe hallucinations, which an AWS manager described as “broad and egregious.” Amazon, for its part, says it has not identified any security issues related to Q and will continue to tune the chatbot as it moves toward general availability.
