MIT researchers have developed a technique called Probably Approximately Correct (PAC) Privacy, which lets users add the smallest possible amount of noise to machine-learning models while still protecting sensitive data. The new privacy metric and framework automatically determines the minimal noise needed, and in several cases the amount of noise required to shield sensitive data from adversaries is far less with PAC Privacy than with other approaches.
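The core idea can be illustrated with a toy sketch. This is not the authors' actual algorithm, only a hypothetical illustration of the general approach of calibrating noise empirically: re-run a mechanism on random subsamples of the data, measure how unstable its output is, and scale Gaussian noise to that measured instability rather than to a worst-case bound. All function names (`mean_mechanism`, `calibrate_noise`, `private_release`) and parameters are invented for this example.

```python
import numpy as np

def mean_mechanism(data):
    """Toy mechanism whose output we want to privatize: the sample mean."""
    return float(np.mean(data))

def calibrate_noise(data, mechanism, n_trials=200, subsample_frac=0.5, seed=0):
    """Estimate output instability by re-running the mechanism on random
    subsamples; the empirical standard deviation guides the noise scale."""
    rng = np.random.default_rng(seed)
    n = len(data)
    outputs = []
    for _ in range(n_trials):
        idx = rng.choice(n, size=int(n * subsample_frac), replace=False)
        outputs.append(mechanism(data[idx]))
    return float(np.std(outputs))

def private_release(data, mechanism, noise_multiplier=1.0, seed=1):
    """Release the mechanism's output plus Gaussian noise scaled to its
    empirically measured instability. A stable mechanism gets little noise."""
    rng = np.random.default_rng(seed)
    sigma = calibrate_noise(data, mechanism)
    return mechanism(data) + rng.normal(0.0, noise_multiplier * sigma)

data = np.random.default_rng(42).normal(10.0, 2.0, size=1000)
print(private_release(data, mean_mechanism))
```

Because the sample mean of 1,000 points is very stable across subsamples, the calibrated noise here is small, which mirrors the article's claim that noise is tailored to the data rather than fixed at a pessimistic worst case.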