Explainable AI (XAI) is a collection of methods and techniques that provide visibility into how AI models work, helping us identify potential biases and ensure these systems are used ethically and responsibly. XAI aims to make AI more transparent, so that we can understand not only what a model decided but why. Visualization techniques and decision trees are two XAI methods that can illustrate how a system arrived at a particular decision.
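To make the decision-tree idea concrete, here is a minimal sketch in plain Python. The scenario, feature names, and thresholds (a hypothetical loan-approval rule) are illustrative assumptions, not part of any real system; the point is that a tree's decision path doubles as a human-readable explanation of why a particular outcome was reached.

```python
# A tiny hand-built decision tree whose decision path serves as the explanation.
# The features (income, debt_ratio) and thresholds are hypothetical examples.

def explain_loan_decision(income, debt_ratio):
    """Return (decision, path), where `path` lists the rules that fired."""
    path = []
    if income >= 50_000:
        path.append("income >= 50000")
        if debt_ratio < 0.4:
            path.append("debt_ratio < 0.4")
            return "approve", path
        path.append("debt_ratio >= 0.4")
        return "deny", path
    path.append("income < 50000")
    return "deny", path

decision, path = explain_loan_decision(income=60_000, debt_ratio=0.3)
print(decision)               # -> approve
print(" AND ".join(path))     # -> income >= 50000 AND debt_ratio < 0.4
```

Because every prediction comes with the exact sequence of rules that produced it, a reviewer can audit individual decisions for errors or bias, which is precisely the kind of transparency XAI aims for.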