This article discusses the differences between Interpretable Artificial Intelligence (IAI) and Explainable Artificial Intelligence (XAI) models. IAI models can be understood by humans directly from their model summaries and parameters, without the aid of any additional tools or techniques. XAI models, by contrast, are too complex for humans to understand on their own; supplementary methods can give a clear idea of why a model made a particular decision, but not of how it arrived at that decision. As machine learning (ML) models gain popularity in a number of crucial industries, it is important to understand the differences between IAI and XAI models in order to select the best strategy for a given use case.
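To make the contrast concrete, here is a minimal sketch, assuming scikit-learn is available; the synthetic data and models are illustrative, not from the article. A linear regression is interpretable on its face (IAI), while a random forest is opaque and needs a post-hoc tool, here permutation importance, to explain its behavior (XAI):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LinearRegression

# Illustrative synthetic data (not from the article).
X, y = make_regression(n_samples=200, n_features=4, random_state=0)

# IAI: the linear model is readable from its parameters alone --
# each coefficient states how much the prediction moves per unit
# change in the corresponding feature.
linear = LinearRegression().fit(X, y)
print("coefficients:", linear.coef_)

# XAI: the forest's many trees cannot be read directly, so a
# post-hoc method (permutation importance) estimates each feature's
# contribution by measuring how much shuffling it degrades the score.
forest = RandomForestRegressor(random_state=0).fit(X, y)
result = permutation_importance(forest, X, y, n_repeats=10, random_state=0)
print("importances:", result.importances_mean)
```

Note what the post-hoc output does and does not provide: the importance scores suggest why the forest leans on certain features, but they reveal nothing about how the ensemble actually computes a prediction, which is exactly the gap described above.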