Generative AI has been around for more than a decade, propelled by the development of open-source software libraries, innovations in neural network architectures and training methods, and improvements in hardware. This article explains what generative models are, how they evolved to where they are today, and how they should be used; it also explores their limitations.

Generative models learn the distribution of the training data so that they can sample from it, producing synthetic data that is statistically similar to the original data. This is a two-step process: the model is first trained on a large, static data set, and then sampled to obtain new data points.
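To make the train-then-sample process concrete, here is a minimal sketch in Python using only NumPy. It stands in for the learned distribution with a single Gaussian whose parameters are estimated from a training set; real generative models learn far richer distributions with neural networks. All names here (`training_data`, `mu_hat`, `sigma_hat`) are illustrative and not taken from any particular library.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Step 1: "train" -- estimate the parameters of a simple model
# (a single Gaussian) from a static training set. Here the
# training data itself is synthetic for demonstration purposes.
training_data = rng.normal(loc=5.0, scale=2.0, size=10_000)
mu_hat = training_data.mean()
sigma_hat = training_data.std()

# Step 2: "sample" -- draw new synthetic points from the learned
# distribution. They are statistically similar to the training
# data but are not copies of any training example.
synthetic = rng.normal(loc=mu_hat, scale=sigma_hat, size=5)
print(f"learned mean={mu_hat:.3f}, std={sigma_hat:.3f}")
print("synthetic samples:", synthetic)
```

The same two phases carry over to neural generative models: training estimates the parameters of a far more expressive distribution, and sampling draws new points from it.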