This article discusses the use of differential privacy in image recognition and why existing methods for differentially private deep learning need improvement. To address this problem, a research team from Shanghai University in China proposes simulated annealing-based differentially private stochastic gradient descent (SA-DPSGD), an approach that accepts a candidate update with a probability that depends on the quality of the update and on the number of iterations.
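To make the general idea concrete, the sketch below combines a DP-SGD style step (per-example gradient clipping plus Gaussian noise) with a simulated-annealing acceptance rule whose temperature decays over iterations, so that worse candidate updates are accepted less and less often as training proceeds. The model (logistic regression), the "energy" (plain training loss), the cooling schedule, and all hyperparameters are assumptions chosen for illustration; this is not the paper's actual algorithm, and it does not reproduce how SA-DPSGD evaluates update quality within the privacy budget.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, X, y):
    """Logistic loss, used here as the annealing 'energy' (an assumption)."""
    p = sigmoid(X @ w)
    eps = 1e-12
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def noisy_clipped_gradient(w, X, y, clip_norm=1.0, noise_multiplier=1.0):
    """DP-SGD style gradient: clip per-example gradients, sum, add Gaussian noise."""
    p = sigmoid(X @ w)
    per_example = (p - y)[:, None] * X
    norms = np.linalg.norm(per_example, axis=1, keepdims=True)
    per_example *= np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
    return (per_example.sum(axis=0) + noise) / len(y)

# Toy data: 256 examples, 10 features, label tied to the first feature.
X = rng.normal(size=(256, 10))
y = (X[:, 0] + 0.1 * rng.normal(size=256) > 0).astype(float)

w = np.zeros(10)
lr = 0.5
T0, cooling = 1.0, 0.98          # hypothetical exponential cooling schedule

energy = loss(w, X, y)
for t in range(300):
    T = T0 * cooling ** t                               # temperature falls with iterations
    candidate = w - lr * noisy_clipped_gradient(w, X, y)
    cand_energy = loss(candidate, X, y)
    delta = cand_energy - energy
    # Metropolis rule: always accept improvements; accept worse updates with a
    # probability that shrinks as the loss gap grows and as the temperature cools.
    if delta <= 0 or rng.random() < np.exp(-delta / max(T, 1e-12)):
        w, energy = candidate, cand_energy

print(f"final training loss: {energy:.4f}")
```

The design point the acceptance rule illustrates is that early in training (high temperature) the optimizer tolerates noisy, occasionally harmful updates, while late in training (low temperature) it keeps only updates that actually reduce the loss, which is how the annealing schedule counteracts the accumulated effect of the injected noise.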