AI systems such as ChatGPT, self-driving vehicles, and autonomous drones are gaining popularity, but better approaches are needed for testing their security and safety. Davi Ottenheimer, vice president of trust and digital ethics at Inrupt, will present on the topic at the RSA Conference in San Francisco next week. He argues that security researchers and technologists have already found ways to circumvent the protections placed on AI systems, and that society needs broader discussions about how to test and improve their safety. ChatGPT has already been applied to tasks such as triaging security incidents, and Microsoft's Security Copilot is built on a more advanced language model.