MLCommons recently published results of its MLPerf Inference v3.1 performance benchmark, which covers GPT-J, a 6-billion-parameter large language model, as well as computer vision and natural language processing models. Intel submitted results for Habana® Gaudi®2 accelerators, 4th Gen Intel® Xeon® Scalable processors, and Intel® Xeon® CPU Max Series, demonstrating competitive AI inference performance. Intel’s AI products give customers flexibility and choice when selecting an optimal AI solution based on their own performance, efficiency, and cost targets. The Gaudi2 inference results for GPT-J in particular provide strong validation of the accelerator’s competitiveness.