This article discusses the potential security implications of large language models, such as ChatGPT, being used by terrorists and violent extremists. The authors explore how these models could be exploited to spread extremist content and offer recommendations for policymakers to mitigate these risks.