MLPerf

Latest revision as of 11:43, 22 May 2023

MLPerf is a consortium of key contributors from the AI/ML (Artificial Intelligence and Machine Learning) community. Its 50+ founding members and affiliates, including startups, leading companies, academics, and non-profits from around the globe, provide unbiased AI/ML performance evaluations of hardware, software, and services.[1]


The latest [https://www.hpcwire.com/2023/04/05/mlperf-inference-3-0-highlights-nvidia-intel-qualcomm-andchatgpt/ MLPerf Inference 3.0 trends] article shows the MLPerf results and trends at the time this document was created.

== MLPerf Categories<ref>https://www.nvidia.com/en-us/data-center/resources/mlperf-benchmarks/</ref> ==

{| class="wikitable"
|+
!Categories
!Description
!Official Result
|-
|MLPerf Training v2.1
|The seventh instantiation for training; consists of eight different workloads covering a broad diversity of use cases, including vision, language, recommenders, and reinforcement learning.
|https://mlcommons.org/en/training-normal-21/
|-
|MLPerf Inference v3.0
|The seventh instantiation for inference; tested seven different use cases across seven different kinds of neural networks: three for computer vision, one for recommender systems, two for language processing, and one for medical imaging.
|https://mlcommons.org/en/inference-edge-30/
|-
|MLPerf HPC v2.0
|The third iteration for HPC; tested three different scientific computing use cases: climate atmospheric river identification, cosmology parameter prediction, and quantum molecular modeling.
|https://mlcommons.org/en/training-hpc-20/
|}

== Benchmark Script ==

#!/bin/bash
# MIG slice benchmark: launch one copy of the benchmark per MIG instance.
# $MLPERF_BENCHMARK is expected to hold the benchmark command to run.
trap "date; echo 'failed :('; exit 1" ERR  # catch execution failures

# Collect the UUIDs of all MIG instances reported by nvidia-smi -L.
# Note: newer drivers report MIG UUIDs as "MIG-<uuid>" rather than "MIG-GPU-...".
ALL_GPUS=$(nvidia-smi -L | grep "UUID: MIG-GPU" | cut -d" " -f8 | cut -d')' -f1)

for gpu in $ALL_GPUS; do
    export CUDA_VISIBLE_DEVICES=$gpu  # pin this copy to a single MIG slice
    $MLPERF_BENCHMARK &               # launch workload in background
done

wait  # wait for the completion of all the background processes
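To make the UUID-extraction pipeline concrete, here is a small self-contained sketch that runs the same grep/cut chain over captured `nvidia-smi -L` text. The sample lines are illustrative (made-up UUIDs in the older `MIG-GPU-...` format the script expects), not output from real hardware.

```shell
#!/bin/bash
# Illustrative text in the style of `nvidia-smi -L` on a MIG-enabled driver
# (hypothetical UUIDs, not captured from a real system).
SAMPLE='GPU 0: A100-SXM4-40GB (UUID: GPU-5c89852c-d268-c432-a776-6d5e8d345a0f)
  MIG 1g.5gb Device 0: (UUID: MIG-GPU-5c89852c-d268-c432-a776-6d5e8d345a0f/7/0)
  MIG 1g.5gb Device 1: (UUID: MIG-GPU-5c89852c-d268-c432-a776-6d5e8d345a0f/8/0)'

# Same pipeline as the benchmark script: keep only MIG lines, take the 8th
# space-separated field (the UUID plus a trailing ')'), then drop the ')'.
echo "$SAMPLE" | grep "UUID: MIG-GPU" | cut -d" " -f8 | cut -d')' -f1
# → MIG-GPU-5c89852c-d268-c432-a776-6d5e8d345a0f/7/0
# → MIG-GPU-5c89852c-d268-c432-a776-6d5e8d345a0f/8/0
```

The 8th field works out because `cut` counts the two leading spaces on each MIG line as empty fields; each extracted UUID can then be assigned to `CUDA_VISIBLE_DEVICES` to pin a benchmark copy to that slice.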

== References ==
<references />