Explainable AI (XAI) is a growing field that aims to make the decision-making of AI models more transparent. It responds to concerns about user trust, legal and ethical accountability, and the opacity of modern AI systems, whose internal reasoning is often too complex to inspect directly. Feature attribution techniques, such as SHAP, are one prominent approach: they assign each input feature a share of responsibility for an individual prediction.
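To make the attribution idea concrete, here is a minimal sketch of the Shapley-value computation that underlies SHAP, done by brute-force enumeration over feature subsets. This is an illustration of the concept, not the `shap` library's API; the toy model, input, and baseline below are assumptions chosen for clarity, and exact enumeration is only feasible for a handful of features.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at input x.

    Each feature's attribution is its weighted average marginal
    contribution over all subsets of the other features; features
    outside a subset are set to their baseline value.
    """
    n = len(x)
    phi = [0.0] * n

    def value(subset):
        # Features in `subset` keep their actual value; the rest
        # are replaced by the baseline ("absent" features).
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return f(z)

    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            # Standard Shapley weight: |S|! (n - |S| - 1)! / n!
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            for S in combinations(others, k):
                S = set(S)
                phi[i] += weight * (value(S | {i}) - value(S))
    return phi

# Hypothetical toy model: a linear function of two features.
model = lambda z: 3 * z[0] + 2 * z[1]
attributions = shapley_values(model, x=[1, 1], baseline=[0, 0])
print(attributions)  # → [3.0, 2.0]
```

For a linear model the Shapley value of each feature reduces to its coefficient times its deviation from the baseline, which is why the attributions above recover the weights exactly; SHAP generalizes this averaging to nonlinear models with efficient approximations.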