A new paper by a Los Alamos team has established a theoretical framework for predicting the implications of overparametrization in quantum machine learning models. Overparametrization, a well-known concept in classical machine learning, refers to training a model with more parameters than strictly necessary; the extra parameters can keep the training process from stalling out in poor solutions. The results of the paper could be useful in applying machine learning to learn the properties of quantum data, for example to classify different phases of matter in quantum materials research.
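The classical intuition behind overparametrization can be illustrated with a toy example (this is a generic classical-ML sketch, not the paper's quantum setting): a model with fewer parameters than training points generally cannot fit the data exactly, while an overparametrized model with more parameters than points can drive the training error essentially to zero. The polynomial degrees and data below are illustrative choices, not taken from the paper.

```python
import numpy as np

# Toy dataset: 8 noisy samples of a nonlinear function.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 8)
y = np.sin(3 * x) + 0.1 * rng.normal(size=8)

def fit_poly_mse(degree):
    """Least-squares fit of a degree-`degree` polynomial; returns training MSE."""
    A = np.vander(x, degree + 1)          # degree + 1 parameters
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    residual = A @ coef - y
    return float(np.mean(residual ** 2))

# 3 parameters for 8 points: the model cannot interpolate the data.
under_mse = fit_poly_mse(2)

# 10 parameters for 8 points: the overparametrized model can fit exactly
# (up to numerical precision), so training does not get stuck short of zero loss.
over_mse = fit_poly_mse(9)

print(f"underparametrized training MSE: {under_mse:.3e}")
print(f"overparametrized  training MSE: {over_mse:.3e}")
```

The overparametrized fit reaches a training error near machine precision, while the underparametrized fit plateaus at a finite error, which is the kind of training stall the extra parameters help avoid.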