This article examines the security implications of generative artificial intelligence (AI) in light of a new study by researchers at IBM, Taiwan’s National Tsing Hua University, and The Chinese University of Hong Kong, which shows that malicious actors can implant backdoors in diffusion models with minimal resources. Diffusion models are deep neural networks trained to denoise data; they power DALL-E 2 and open-source text-to-image models such as Stable Diffusion. The study underscores the broader security implications of generative AI, which is gradually finding its way into all kinds of applications.
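To make the denoising objective concrete, below is a minimal sketch of a DDPM-style training step in Python, where a model learns to predict the Gaussian noise added to a clean image. The comment also notes where a poisoning attack could intervene. The `model(x_t, t)` interface and the poisoning remark are illustrative assumptions, not the exact procedure from the cited study.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def diffusion_training_step(model: nn.Module, x0: torch.Tensor,
                            alphas_cumprod: torch.Tensor) -> torch.Tensor:
    """One denoising-objective step: the model predicts the injected noise.

    A backdoor attacker could poison a fraction of these samples, e.g. by
    stamping a trigger pattern onto the noised input and pairing it with an
    attacker-chosen target (hypothetical illustration of the attack surface).
    """
    b = x0.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (b,))        # random timestep per sample
    a_bar = alphas_cumprod[t].view(b, 1, 1, 1)             # cumulative noise schedule
    noise = torch.randn_like(x0)                           # Gaussian noise to inject
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise   # forward-noised sample
    pred = model(x_t, t)                                   # model predicts the noise
    return F.mse_loss(pred, noise)                         # standard denoising loss
```

Because training reduces to this simple regression on noised samples, anyone who can inject data into the training set can try to steer what the model reconstructs, which is the attack surface the study explores.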