The continual backpropagation algorithm extends standard backpropagation by injecting a continuing source of variability into the network's weights, which maintains plasticity. It combines ongoing gradient descent with a form of variation and selection in the space of neuron-like units: a contribution-utility measure scores each hidden unit, and low-utility units are selectively reinitialized while the rest of the network continues to learn.
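The sketch below illustrates how such selective reinitialization might sit alongside ordinary gradient descent for a single hidden layer. It is a minimal illustration, not the reference implementation: it assumes a contribution utility of the form of a running average of |activation| × Σ|outgoing weight|, and the hyperparameter names and values (`decay_rate`, `replacement_rate`, `maturity_threshold`) are chosen purely for illustration.

```python
# Minimal sketch of selective reinitialization for one hidden layer (PyTorch).
# Assumed utility update: u_i <- decay * u_i + (1 - decay) * |h_i| * sum_j |w_out[j, i]|.
# Hyperparameters are illustrative, not taken from the source text.
import torch
import torch.nn as nn


class ContinualBackpropLayer(nn.Module):
    def __init__(self, n_in, n_hidden, n_out,
                 decay_rate=0.99, replacement_rate=1e-4, maturity_threshold=100):
        super().__init__()
        self.fc_in = nn.Linear(n_in, n_hidden)
        self.fc_out = nn.Linear(n_hidden, n_out)
        self.decay_rate = decay_rate
        self.replacement_rate = replacement_rate
        self.maturity_threshold = maturity_threshold
        # Per-unit running utility and age, plus an accumulator for fractional replacements.
        self.register_buffer("utility", torch.zeros(n_hidden))
        self.register_buffer("age", torch.zeros(n_hidden))
        self.register_buffer("units_to_replace", torch.zeros(()))

    def forward(self, x):
        h = torch.relu(self.fc_in(x))
        y = self.fc_out(h)
        # Update contribution utility: mean |activation| times summed |outgoing weight|.
        with torch.no_grad():
            contrib = h.abs().mean(dim=0) * self.fc_out.weight.abs().sum(dim=0)
            self.utility.mul_(self.decay_rate).add_((1 - self.decay_rate) * contrib)
            self.age += 1
        return y

    @torch.no_grad()
    def reinitialize_low_utility_units(self):
        # Only units older than the maturity threshold are eligible for replacement.
        eligible = self.age > self.maturity_threshold
        self.units_to_replace += self.replacement_rate * eligible.sum()
        n_replace = int(self.units_to_replace.item())
        if n_replace == 0:
            return
        self.units_to_replace -= n_replace
        # Pick the eligible units with the lowest utility.
        utility = self.utility.clone()
        utility[~eligible] = float("inf")
        idx = torch.topk(utility, n_replace, largest=False).indices
        # Reinitialize incoming weights from the initial distribution and zero
        # outgoing weights so the reset units do not disturb the current output.
        new_in = torch.empty_like(self.fc_in.weight)
        nn.init.kaiming_uniform_(new_in, a=5 ** 0.5)
        self.fc_in.weight[idx] = new_in[idx]
        self.fc_in.bias[idx] = 0.0
        self.fc_out.weight[:, idx] = 0.0
        self.utility[idx] = 0.0
        self.age[idx] = 0.0
```

In a training loop, `reinitialize_low_utility_units()` would be called after each optimizer step, so that variation and selection over units runs continually alongside standard gradient descent rather than as a separate phase.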