The latest MLPerf Inference benchmark (v4.0) results have been released, showing how systems from a wide range of organizations perform on AI inference workloads. Two new workloads were added to the benchmark suite, and Nvidia, Qualcomm, and Intel/Habana all posted gains. Juniper Networks, Red Hat-Supermicro, and Wiwynn submitted for the first time; Juniper's participation in particular underscores the growing importance of the network in AI. The number of submitters has held steady in recent years, with 23 organizations participating in this round. Notably, inference now accounts for 40% of Nvidia's datacenter revenue.