In this article, Christine Nellemann discusses how artificial intelligence can perpetuate gender and racial stereotypes because of biases in the datasets used to train AI models. She stresses the responsibility of AI researchers and developers to train their models on representative, up-to-date data so that harmful stereotypes are not reinforced. Sara Sterlie, a student at DTU, has investigated whether ChatGPT, a widely used AI model, reproduces gender stereotypes in its responses.