AI is increasingly used to make decisions that affect human lives, yet AI systems are often black boxes, unable to offer explanations for those decisions. Unless regulators insist that AI be explainable and interpretable, we are about to enter an era of the absurd. AI designers understand at an abstract level what their products do, but they cannot explain why a given model produces the results it does.
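To make that gap concrete, here is a minimal sketch contrasting an interpretable model with a black-box one. It uses scikit-learn on synthetic data; the "loan approval" framing and the feature names are invented for illustration, not taken from any real system.

```python
# A minimal sketch of the interpretability gap, using scikit-learn.
# The dataset is synthetic and the feature names are hypothetical.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Synthetic "loan approval" data: 4 features, binary outcome.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["income", "debt", "age", "tenure"]  # invented labels

# An interpretable model: each coefficient states how a feature pushes
# the decision, so a specific outcome can be explained in plain terms.
interpretable = LogisticRegression().fit(X, y)
for name, coef in zip(feature_names, interpretable.coef_[0]):
    print(f"{name}: weight {coef:+.2f}")

# A black-box model: the learned parameters are thousands of numbers
# spread across layers, with no direct mapping to any one decision.
black_box = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                          random_state=0).fit(X, y)
n_params = sum(w.size for w in black_box.coefs_) \
         + sum(b.size for b in black_box.intercepts_)
print(f"MLP parameters: {n_params}")  # no human-readable explanation here
```

The logistic regression answers "why was this applicant denied?" directly from its weights; the neural network, despite potentially higher accuracy, offers only an opaque mass of parameters, which is exactly the situation the designers above find themselves in.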