In this podcast, Curtis Anderson of Panasas talks about how to size storage for artificial intelligence (AI) and machine learning (ML) workloads, which can range from batches of large images to huge numbers of tiny files. Anderson discusses how different AI/ML frameworks pull data into compute and what that implies for how data is stored, as well as whether to go all-flash on-premise, when to use the cloud, and the benchmarking organisations that can help organisations size their infrastructure for AI/ML.