In June 2022, Blake Lemoine, a software engineer at Google, claimed that the company’s artificial intelligence chatbot generator, LaMDA, had become sentient. Lemoine had been tasked with investigating whether LaMDA contained any harmful biases and spent considerable time interviewing the different personas this language model can create. His claim raises interesting questions about the ethical implications of machine consciousness, such as whether a machine can have experiences with a phenomenal quality, including experiences of pleasure or pain.
