The article discusses the importance of explainable AI in understanding how deep neural networks make decisions. It argues that engineers need to spend more time opening up these “black boxes” to identify biases and weaknesses, and to build trust between AI products and consumers. It also surveys explanation methods for understanding how computer vision models arrive at a decision, such as heat maps and gradient-based attribution methods.
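The article does not reproduce any implementation, but a gradient-based heat map of the kind it mentions can be sketched in a few lines. The following is a minimal illustration (not the article's method), assuming PyTorch, a pretrained ResNet-18 as the vision model, and a random tensor standing in for a preprocessed image: it computes vanilla-gradient saliency, i.e. the gradient of the top class score with respect to the input pixels.

```python
import torch
import torchvision.models as models

# Placeholder vision model: a pretrained ResNet-18 (an assumption,
# not a model from the article).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Stand-in for a preprocessed image batch (1 x 3 x 224 x 224).
image = torch.randn(1, 3, 224, 224, requires_grad=True)

# Forward pass: score of the model's top predicted class.
logits = model(image)
top_class = logits.argmax(dim=1).item()
score = logits[0, top_class]

# Backward pass: gradient of that score w.r.t. the input pixels.
score.backward()

# Saliency map: gradient magnitude, maxed over colour channels,
# giving one importance value per pixel (a simple "heat map").
saliency = image.grad.abs().max(dim=1).values.squeeze(0)
print(saliency.shape)  # torch.Size([224, 224])
```

Pixels with large gradient magnitude are those whose small changes most affect the class score, which is why such maps are often overlaid on the input image as a heat map.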