This article explores why explainability matters in machine learning models and what it implies for the responsible adoption of AI in the real world. It examines the challenges that make models difficult to explain, surveys the techniques that have been developed to improve interpretability, and walks through an example of using explainability to gain insight into the decision-making process of a linear regression model.
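As a preview of the kind of insight the linear regression example aims for, the sketch below shows one common way such a model is interpreted: reading its fitted coefficients as feature effects. This is an illustrative sketch only; the synthetic data, feature names, and use of scikit-learn are assumptions, not the article's actual example.

```python
# Minimal sketch (assumed setup, not the article's example): a linear
# regression is "explainable" because each coefficient states how much the
# prediction changes per unit increase in that feature, other features fixed.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical synthetic data: price driven mostly by size, slightly by age.
size = rng.uniform(50, 200, 500)    # square meters
age = rng.uniform(0, 40, 500)       # years
price = 3.0 * size - 1.5 * age + rng.normal(0, 10, 500)

X = np.column_stack([size, age])
model = LinearRegression().fit(X, price)

# The fitted coefficients are directly interpretable as per-unit effects.
for name, coef in zip(["size", "age"], model.coef_):
    print(f"{name}: {coef:+.2f} per unit")
print(f"intercept: {model.intercept_:+.2f}")
```

Because the coefficients recovered here track the data-generating weights, the model's predictions can be traced back to individual features, which is the property the article refers to as explainability.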