This article discusses the security implications of generative artificial intelligence (AI), focusing on a new study by researchers at IBM, Taiwan’s National Tsing Hua University, and The Chinese University of Hong Kong that shows malicious actors can implant backdoors in diffusion models with minimal resources. Diffusion models are deep neural networks trained to denoise data; they power DALL-E 2 and open-source text-to-image models such as Stable Diffusion. The study highlights the broader security implications of generative AI, which is gradually finding its way into all kinds of applications.
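To make "trained to denoise data" concrete, here is a minimal sketch of the training objective diffusion models share: corrupt an input with noise, then train the network to predict that noise. The `ToyDenoiser` model, the simplified linear noise schedule, and the toy dimensions are illustrative assumptions, not the architecture or setup from the study.

```python
import torch
import torch.nn as nn

class ToyDenoiser(nn.Module):
    """Hypothetical stand-in for the large U-Nets used in real diffusion models."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, 128), nn.ReLU(), nn.Linear(128, dim)
        )

    def forward(self, noisy_x, t):
        # Condition on the diffusion timestep alongside the noisy input.
        t_feat = t.float().unsqueeze(-1) / 1000.0
        return self.net(torch.cat([noisy_x, t_feat], dim=-1))

def denoising_loss(model, x, num_steps=1000):
    """One training step of the denoising objective: predict the added noise."""
    t = torch.randint(0, num_steps, (x.shape[0],))
    noise = torch.randn_like(x)
    # Simplified noise schedule; real models use a carefully tuned beta schedule.
    alpha = 1.0 - t.float().unsqueeze(-1) / num_steps
    noisy_x = alpha.sqrt() * x + (1.0 - alpha).sqrt() * noise
    return nn.functional.mse_loss(model(noisy_x, t), noise)

model = ToyDenoiser()
loss = denoising_loss(model, torch.randn(8, 64))  # batch of 8 toy "images"
loss.backward()
```

Roughly speaking, a backdoor of the kind the study describes tampers with this training loop so that the model denoises normally on clean inputs but produces an attacker-chosen output whenever a trigger pattern is present.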