This study compared the responses of three AI chatbots (GPT-3.5, GPT-4, and Claude AI) with those of six verified oncologists to 200 patient questions about cancer posted on an online forum. The chatbots consistently received higher ratings for empathy, response quality, and readability, although their responses were written at a higher reading grade level than the physicians' responses. This suggests that chatbots could serve as a useful tool for physicians by drafting empathetic response templates, potentially improving access to care and reducing physician burnout.
