Researchers have found that OpenAI’s chatbot ChatGPT, running the GPT-3.5 model, provided inappropriate recommendations for cancer treatment, highlighting the need for awareness of the technology’s limitations. The researchers prompted the chatbot to provide treatment advice aligned with guidelines established by the National Comprehensive Cancer Network (NCCN). The study found that 98% of responses included at least one treatment approach that agreed with NCCN guidelines, but 34% of those responses also included one or more non-concordant recommendations. In 12.5% of cases, ChatGPT produced “hallucinations” — treatment recommendations entirely absent from NCCN guidelines. Misinformation of this kind can incorrectly set patients’ expectations about treatment and pose a risk to patient safety.
