Florian Tramer, Gautam Kamath, and Nicholas Carlini won an award for their paper on differentially private learning with large-scale public pretraining. In this interview, Gautam discusses the challenge of training high-utility models while preserving privacy and argues for moving beyond the strict dichotomy between public and private data.
