AI is increasingly used to make decisions that affect human lives, yet AI systems are often black boxes, unable to explain those decisions. This article argues that unless regulators require AI to be explainable and interpretable, we are about to enter an era of the absurd.
