Panmnesia, a Korean startup, has developed a way to run recommendation models 5x faster by feeding data from external memory pools to GPUs via CXL caching rather than through host CPU-mediated memory transfers. The technology, called TrainingCXL, was developed by computing researchers at the Korea Advanced Institute of Science & Technology (KAIST). Panmnesia has also built DirectCXL, a combined hardware and software implementation of CXL memory pooling and switching aimed specifically at recommendation engines. Both technologies are described in two IEEE papers and could benefit other large-scale machine learning applications as well.
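The core idea — letting the GPU read training data directly from a CXL-attached memory pool, with hardware caching absorbing repeated accesses, instead of staging every batch through host DRAM — can be illustrated with a toy sketch. This is a simplified model of the data path only; none of the names below are Panmnesia's actual API, and real CXL.cache behavior happens in hardware, not in software like this:

```python
# Toy model of two data paths for GPU training reads.
# Host-staged path: pool -> host DRAM -> GPU (two copies per batch).
# CXL-cached path:  pool -> GPU directly (one transfer, cached on reuse).
# All names are illustrative assumptions, not real CXL or GPU APIs.

def host_staged_reads(batches, pool):
    """Every batch is copied twice: pool -> host, then host -> GPU."""
    transfers = 0
    for b in batches:
        _ = pool[b]        # pool -> host DRAM
        transfers += 1
        _ = pool[b]        # host DRAM -> GPU (modeled as a second copy)
        transfers += 1
    return transfers

def cxl_cached_reads(batches, pool, cache):
    """The GPU reads the pool directly; repeated batches hit the cache."""
    transfers = 0
    for b in batches:
        if b not in cache:
            cache[b] = pool[b]   # single direct transfer, then cached
            transfers += 1
    return transfers

pool = {i: f"embedding-shard-{i}" for i in range(8)}
batches = [0, 1, 0, 2, 1, 0]   # embedding reuse is common in rec models

staged = host_staged_reads(batches, pool)       # 12 transfers
direct = cxl_cached_reads(batches, pool, {})    # 3 transfers (unique items)
print(staged, direct)
```

Recommendation models spend much of their time fetching embedding table entries, and the same entries recur across batches, which is why a cached direct path to pooled memory pays off so heavily in this workload.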