A new method has been proposed to enhance explainability in deep neural networks built on selective state-space layers, with applications spanning NLP and computer vision. The method reformulates selective state-space layers as a form of self-attention, which makes it possible to extract implicit attention matrices from these models and to develop class-agnostic and class-specific tools for explainable AI, analogous to those available for attention-based models.
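To make the reformulation concrete, the sketch below shows one common way a selective state-space recurrence can be unrolled into an attention-like matrix. It assumes a single channel with scalar state, the recurrence h_t = A_t h_{t-1} + B_t x_t with output y_t = C_t h_t, and defines alpha[t, s] = C_t (prod of A_k for k from s+1 to t) B_s so that y = alpha @ x. The function name `implicit_attention` and the scalar-state simplification are illustrative assumptions, not the exact formulation of the proposed method.

```python
import numpy as np

def implicit_attention(A, B, C):
    """Unroll a scalar selective state-space recurrence into a
    lower-triangular attention-like matrix alpha, where
    alpha[t, s] = C[t] * (A[s+1] * ... * A[t]) * B[s].

    A, B, C: 1-D arrays of per-step parameters, each of length T.
    Returns an (T, T) matrix with y = alpha @ x reproducing the
    recurrence h_t = A_t * h_{t-1} + B_t * x_t, y_t = C_t * h_t.
    """
    T = len(A)
    alpha = np.zeros((T, T))
    for t in range(T):
        prod = 1.0  # running product A[s+1] * ... * A[t]
        for s in range(t, -1, -1):
            alpha[t, s] = C[t] * prod * B[s]
            prod *= A[s]  # extend the product for the next (smaller) s
    return alpha

# Sanity check: the unrolled matrix matches the step-by-step recurrence.
rng = np.random.default_rng(0)
T = 5
A, B, C, x = (rng.normal(size=T) for _ in range(4))
alpha = implicit_attention(A, B, C)

h, ys = 0.0, []
for t in range(T):
    h = A[t] * h + B[t] * x[t]
    ys.append(C[t] * h)

assert np.allclose(alpha @ x, ys)
```

Viewed this way, each row of `alpha` plays the role of an attention distribution over past inputs, which is what allows attention-style explainability tools to be applied to state-space models.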
