The article discusses the importance of trust and explainability in AI tools. It highlights the need for a specific understanding of how different AI technologies…
Explainability is a fundamental principle of AI ethics, but its application is not self-evident and depends on the context. The concept has become important…
This Special Issue focuses on recent theoretical advances and practical applications in deep learning, with a particular emphasis on neural network model architectures, explainability…
Cloud computing and Artificial Intelligence (AI) are converging to create a remarkable era of progress in 2024. Ethical considerations are paramount in this era,…
This study, conducted by researchers from TU Darmstadt, the University of Cambridge, Merck, and TU Munich’s Klinikum rechts der Isar, explored the potential of…
This Special Issue aims to collect state-of-the-art research findings on the latest developments, current issues, and challenges in the field of knowledge representation formalisms…
This article discusses the opportunities and challenges of artificial intelligence (AI) and the development of related governance frameworks. It highlights the rapid progress of…
This Special Issue focuses on the most recent advances in the models, algorithms, theories, and applications of Graph Machine Learning (GML), both in academic…
This research project focuses on addressing and managing Non-Functional Requirements (NFRs) for Machine Learning (ML) systems. Through interviews, a survey, and part of a systematic…
This paper examines the challenges associated with AI-driven diagnostic tools for Chest X-rays (CXRs) transmitted through Smart Phones. It reveals that existing Deep Learning…