XAI has several limitations, some of which relate to its implementation: the complexity of AI systems makes a holistic understanding of the development process, and of the values embedded within those systems, harder to attain. There is also a tension between the prescriptive nature of algorithms and code on the one hand and the flexibility of open-ended terminology on the other. Moreover, when an AI system’s interpretability is tested by examining the most critical parameters and factors shaping a decision, questions arise as to what counts as “transparent” or “interpretable” AI. To illustrate this, consider published research on “generative agents,” in which large language models were combined with computational, interactive agents that autonomously spread invitations to a party.
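As a minimal sketch of what “looking at the most critical parameters and factors shaping a decision” can mean in practice, the following example (not drawn from the research discussed above) uses permutation feature importance on a standard scikit-learn model and dataset; the dataset, model choice, and reporting format are illustrative assumptions, and other attribution methods could equally be used.

```python
# Illustrative sketch: surfacing the features a model's decisions depend on most,
# via permutation importance. Dataset and model are assumptions for demonstration.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much test accuracy drops;
# larger drops indicate factors the decision relies on more heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five most influential features.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Even in this simple setting, the output is a ranked list of influential features rather than an explanation in any richer sense, which is precisely why such tests leave open what should count as “transparent” or “interpretable” AI.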
