MLCommons publishes MLPerf AI inference benchmark results every six months, and NVIDIA and its partners have consistently topped the field. This round, MLPerf Inference v3.0, includes submissions from SiMa.ai for edge image classification and from Neuchips for data center recommendation. The community is also working on a new benchmark that will measure the inference and training performance, as well as the power consumption, of 100B-parameter-class models. The NVIDIA H100 features a Transformer Engine, which delivered the leading BERT results in the MLPerf 3.0 benchmarks.
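To give a sense of what the Transformer Engine does in practice, the sketch below wraps a single Transformer layer in the library's FP8 autocast context so its matrix math runs on H100 FP8 tensor cores. This is only an illustration, not NVIDIA's MLPerf submission code; the BERT-Large-like layer dimensions and tensor shapes are assumptions chosen for the example.

```python
# Minimal sketch: FP8 inference for one Transformer layer via NVIDIA
# Transformer Engine. Dimensions are illustrative, not from any submission.
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# Hypothetical BERT-Large-style sizes (assumption for illustration).
hidden_size, ffn_size, num_heads = 1024, 4096, 16

# Fused attention + MLP block provided by Transformer Engine.
layer = te.TransformerLayer(
    hidden_size=hidden_size,
    ffn_hidden_size=ffn_size,
    num_attention_heads=num_heads,
).cuda().eval()

# Delayed-scaling FP8 recipe using the E4M3 format.
fp8_recipe = recipe.DelayedScaling(fp8_format=recipe.Format.E4M3)

# Input layout is (sequence, batch, hidden); values are random placeholders.
tokens = torch.randn(128, 8, hidden_size, device="cuda")

# Inside fp8_autocast, the layer's GEMMs execute in FP8 on H100-class GPUs.
with torch.no_grad(), te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    out = layer(tokens)

print(out.shape)  # torch.Size([128, 8, 1024])
```

The key design point is that FP8 is applied per layer through the autocast context rather than by converting the whole model, which is why the engine can be dropped into existing Transformer inference stacks such as the BERT workload benchmarked in MLPerf.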