This article examines the accuracy-explainability tradeoff: the idea that more accurate algorithms are necessarily less explainable. The authors tested a variety of AI models on nearly 100 datasets and found that 70% of the time, a more explainable model could be used without sacrificing accuracy. They argue that organizations should think carefully before integrating unexplainable, “black box” AI tools into their operations, and should first determine whether those models are really worth the risk.