University of Waterloo researchers have developed a new explainable artificial intelligence (XAI) model designed to reduce bias and improve the trustworthiness and accuracy of machine learning-driven decision-making and knowledge organization. The model aims to remove barriers to adoption by untangling complex patterns in data and relating them to specific underlying causes, while remaining unaffected by anomalies and mislabeled instances, thereby strengthening trust and reliability in XAI.
