This thesis explores the field of Explainable AI, which aims to make the predictions of AI systems understandable through explanations. It proposes to move beyond dense feature attributions by adopting structured internal representations as a more interpretable explanation domain. The works included in this thesis address questions such as how to obtain structured representations, how to use them for downstream tasks, and how to evaluate the resulting explanations.
