Recent research has explored how the generalization error of deep neural networks can improve as model capacity increases, contrary to the classical bias-variance tradeoff. It has been suggested that regularization is the source of these gains; however, Zhang et al. show that this explanation is unlikely. Belkin et al. and Nakkiran et al. provide further insight into the gains achieved by overparameterized models and the role of regularization.
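
To make the phenomenon concrete, the following is a minimal illustrative sketch, not taken from any of the cited papers, in the spirit of Belkin et al.'s random-feature experiments: minimum-norm least-squares regression on random ReLU features of a synthetic one-dimensional task. The task, feature counts, and random seed are assumptions chosen for illustration; with such a setup the test error typically peaks near the interpolation threshold (number of features roughly equal to the number of training points) and then decreases again as capacity grows.

```python
# Minimal sketch (assumed setup, not from the cited papers) of the
# "double descent" behavior: minimum-norm least-squares regression on
# random ReLU features, where test error often peaks near the
# interpolation threshold (n_features ~ n_train) and then improves
# again as model capacity keeps growing.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D regression task (illustrative assumption).
def target(x):
    return np.sin(2 * np.pi * x)

n_train, n_test, noise = 40, 500, 0.1
x_train = rng.uniform(-1, 1, n_train)
y_train = target(x_train) + noise * rng.standard_normal(n_train)
x_test = rng.uniform(-1, 1, n_test)
y_test = target(x_test)

def relu_features(x, w, b):
    # Random ReLU feature map: phi(x) = max(0, x * w + b)
    return np.maximum(0.0, np.outer(x, w) + b)

for n_feat in [5, 10, 20, 40, 80, 160, 640, 2560]:
    w = rng.standard_normal(n_feat)
    b = rng.standard_normal(n_feat)
    phi_train = relu_features(x_train, w, b)
    phi_test = relu_features(x_test, w, b)
    # Pseudoinverse fit: ordinary least squares below the interpolation
    # threshold, the minimum-norm interpolating solution above it.
    coef = np.linalg.pinv(phi_train) @ y_train
    test_mse = np.mean((phi_test @ coef - y_test) ** 2)
    print(f"features={n_feat:5d}  test MSE={test_mse:.4f}")
```

The exact curve depends on the seed, noise level, and task, but the qualitative pattern, an error spike near the interpolation threshold followed by improvement with further overparameterization, is the behavior the cited works analyze.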
