This article discusses the potential of quantum computing to provide strong resilience against adversarial attacks on machine learning models. It explains how data-manipulation attacks can be launched in several ways, such as poisoning a training dataset with corrupted samples or injecting manipulated inputs during the testing phase. It also highlights physical attacks, such as placing a sticker on a stop sign to fool a self-driving car’s AI.
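To make the test-phase attack concrete, below is a minimal sketch of an evasion-style attack: a fast-gradient-sign (FGSM) perturbation against a toy logistic classifier. The model, weights, inputs, and epsilon are all illustrative assumptions, not taken from the article; real attacks target far larger models, but the mechanism is the same.

```python
import math
import random

# Toy logistic classifier (illustrative, not the article's model).
random.seed(0)
w = [random.gauss(0, 1) for _ in range(8)]   # model weights
b = 0.1
x = [random.gauss(0, 1) for _ in range(8)]   # a "clean" input

def predict(x):
    """Probability that x belongs to class 1 under the logistic model."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

y = 1.0                      # assume the true label of x is class 1
p = predict(x)

# For logistic loss, the gradient w.r.t. the input is (p - y) * w_i.
grad = [(p - y) * wi for wi in w]

# FGSM step: nudge every feature by eps in the direction that raises
# the loss, leaving the input only slightly changed.
eps = 0.5
sign = lambda g: (g > 0) - (g < 0)
x_adv = [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

print(f"clean score: {predict(x):.3f}  adversarial score: {predict(x_adv):.3f}")
```

The perturbed input `x_adv` scores strictly lower on the true class than the clean input, even though each feature moved by at most `eps`; the stop-sign sticker mentioned above plays the same role in the physical world.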