Explainable Artificial Intelligence (XAI) is an active subfield of machine learning (ML) that aims to increase the transparency of ML models. As increasingly capable models are deployed, XAI has become correspondingly important, since it allows practitioners to understand, and therefore trust, the decisions those models make. Deep Neural Networks (DNNs) in particular have achieved major advances, yet their internal decision-making processes are often opaque. Two possible remedies are symbolic methods and the redesign of DNNs along inherently interpretable lines. Natural language techniques, such as Natural Language Generation (NLG) and Natural Language Processing (NLP), can further help by delivering explanations of automated decisions in a form comprehensible to human users.
