Machine learning anti-patterns are common mistakes in the development or application of ML models that lead to poor performance, bias, overfitting, or other problems.

The “Phantom Menace” refers to differences between training and test data that are not immediately apparent during development and evaluation but become a problem once the model is deployed in the real world. Training/serving skew occurs when the statistical properties of the training data differ from those of the data the model is exposed to during inference. To mitigate it, ensure the training data is representative of the data the model will encounter at inference time, and monitor the model’s performance in production; a sketch of a simple per-feature skew check follows below.

The “Sentinel” is a deployment technique: models or data are validated in an online environment before being promoted to production, as illustrated in the second sketch below.
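To make the monitoring step concrete, here is a minimal sketch of a per-feature skew check. It assumes tabular features held in pandas DataFrames and uses a two-sample Kolmogorov–Smirnov test from SciPy; the `detect_skew` helper, the column names, and the 0.05 significance threshold are illustrative choices, not a prescribed method.

```python
import numpy as np
import pandas as pd
from scipy.stats import ks_2samp

def detect_skew(train_df: pd.DataFrame, serving_df: pd.DataFrame,
                alpha: float = 0.05) -> dict:
    """Flag features whose serving distribution differs from training.

    Runs a two-sample Kolmogorov-Smirnov test per numeric column;
    a p-value below alpha marks the feature as potentially skewed.
    """
    skewed = {}
    for col in train_df.columns:
        stat, p_value = ks_2samp(train_df[col], serving_df[col])
        if p_value < alpha:
            skewed[col] = {"ks_stat": round(stat, 4), "p_value": p_value}
    return skewed

# Toy example: serving traffic has drifted on one feature.
rng = np.random.default_rng(0)
train = pd.DataFrame({"age": rng.normal(35, 8, 5000),
                      "income": rng.lognormal(10, 0.5, 5000)})
serving = pd.DataFrame({"age": rng.normal(42, 8, 5000),   # shifted mean
                        "income": rng.lognormal(10, 0.5, 5000)})
print(detect_skew(train, serving))  # expect only "age" to be flagged
```

In practice such a check would run on a schedule against logged serving requests, with flagged features triggering an alert or a retraining job rather than a print statement.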
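One way to read the sentinel idea is as a shadow deployment: the candidate model scores live traffic alongside the production model, but only the production model’s answers are served, and promotion happens only if the candidate clears agreed thresholds. The sketch below assumes simple callable classifiers and labeled requests; `run_sentinel` and the 0.95 agreement and 0.90 accuracy gates are hypothetical values for illustration, not from the source.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class ShadowReport:
    agreement: float           # fraction of predictions matching production
    candidate_accuracy: float  # candidate accuracy on labeled requests

def run_sentinel(production_model: Callable, candidate_model: Callable,
                 requests: Sequence, labels: Sequence,
                 min_agreement: float = 0.95,      # illustrative gate
                 min_accuracy: float = 0.90) -> bool:
    """Serve live traffic with the production model while the candidate
    predicts in shadow mode; recommend promotion only if the candidate
    clears both thresholds."""
    prod_preds = [production_model(x) for x in requests]  # served to users
    cand_preds = [candidate_model(x) for x in requests]   # logged only
    agreement = sum(p == c for p, c in zip(prod_preds, cand_preds)) / len(requests)
    accuracy = sum(c == y for c, y in zip(cand_preds, labels)) / len(requests)
    report = ShadowReport(agreement, accuracy)
    print(report)
    return (report.agreement >= min_agreement
            and report.candidate_accuracy >= min_accuracy)

# Illustrative usage with toy threshold classifiers.
prod = lambda x: int(x > 0.50)
cand = lambda x: int(x > 0.45)
xs = [0.1, 0.48, 0.6, 0.9, 0.3]
ys = [0, 0, 1, 1, 0]
print("promote candidate:", run_sentinel(prod, cand, xs, ys))
```

Because the candidate’s outputs are never shown to users, a bad model caught at this stage costs only compute, which is what makes online validation safer than promoting straight to production.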