AI systems are becoming increasingly prevalent in our lives, yet they remain largely unexplainable and unpredictable because much of their inner workings are impenetrable, which makes them difficult to trust. These systems are built on deep learning neural networks: layers of interconnected neurons, where each connection carries a variable, or parameter, that sets the connection's strength. As an untrained, or naïve, network is presented with training data, it “learns” to classify that data by repeatedly adjusting these parameters. Because the resulting behavior is encoded in the values of many such parameters rather than in explicit, human-readable rules, it is difficult to understand why an AI system makes the decisions it does, creating the AI explainability problem.
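To make the idea of “learning by adjusting parameters” concrete, here is a minimal sketch, not taken from any particular system, of a tiny neural network trained with gradient descent on the toy XOR task. The network size, learning rate, and task are illustrative assumptions; the point is that after training, the knowledge lives entirely in opaque numeric weights.

```python
# Minimal illustrative sketch: a 2-4-1 neural network learns XOR by
# adjusting its parameters. All names and values here are assumptions
# chosen for illustration, not a reference implementation.
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: the XOR function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Parameters: weights and biases, initialized randomly (the "naïve" network).
W1 = rng.normal(size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: how much each parameter contributed to the error
    # (gradient of cross-entropy loss with a sigmoid output).
    dp = p - y
    dW2 = h.T @ dp
    db2 = dp.sum(axis=0, keepdims=True)
    dh = (dp @ W2.T) * h * (1 - h)
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0, keepdims=True)

    # "Learning" is nothing more than nudging every parameter downhill.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("Predictions:", p.ravel().round(2))   # approximately [0, 1, 1, 0]
print("Learned W1:\n", W1.round(2))          # just numbers, no human-readable rule
```

Even in this tiny example, the trained weights carry no explanation a person can read off; scaled up to networks with millions or billions of parameters, this is the heart of the explainability problem described above.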