Library


Among the many numerical libraries available for performance optimization, the following are core libraries for running HPC system benchmarks.

Type: BLAS (Basic Linear Algebra Subprograms)
oneAPI Math Kernel Library (oneMKL): Formerly the Intel Math Kernel Library (Intel MKL), a library of optimized math routines for science, engineering, and financial applications. Core math functions include BLAS, LAPACK, ScaLAPACK, sparse solvers, fast Fourier transforms, and vector math, tuned especially for Intel processor architectures (see the CBLAS sketch after this list).
BLIS: Like Intel, AMD provides optimized numerical compute libraries for its Zen architecture. The core BLAS library is BLIS, which provides optimized matrix-vector and matrix-matrix operations on all Zen-core processors, i.e. Ryzen desktop processors and EPYC server processors (the CBLAS sketch after this list applies to BLIS as well as oneMKL).
cuBLAS: NVIDIA's BLAS implementation, cuBLAS, for use with CUDA on its GPUs. It is highly optimized and a significant factor in the exceptional compute performance achievable on NVIDIA GPUs; many of the Top500 supercomputers get the bulk of their performance from (large numbers of) NVIDIA GPUs (see the cuBLAS sketch after this list).
MAGMA[1]: MAGMA is a collection of next-generation, GPU-accelerated linear algebra (LA) libraries designed and implemented by the team that developed LAPACK and ScaLAPACK. Its main benefit is that it enables applications to fully exploit the power of current heterogeneous systems of multi-/many-core CPUs and multiple GPUs. As of March 2023, the latest releases are MAGMA 2.7.1 for CUDA and HIP, MAGMA MIC 1.4.0 for Intel Xeon Phi, and clMAGMA 1.3 for OpenCL. MAGMA is also used by PyTorch (https://rgb.sh/blog/magma). A MAGMA call sketch follows this list.
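
Both oneMKL and BLIS expose the standard CBLAS interface, so the same call works against either library; only the header and the link line differ. Below is a minimal sketch of a double-precision matrix multiply (cblas_dgemm); the header name and the link flags (e.g. -lmkl_rt for oneMKL or -lblis for BLIS) vary by installation and are assumptions here.

 #include <stdio.h>
 #include <cblas.h>   /* oneMKL installations typically use <mkl.h> instead */
 
 int main(void) {
     /* C = 1.0 * A * B + 0.0 * C, with 2x2 row-major matrices */
     const int n = 2;
     double A[] = { 1.0, 2.0,
                    3.0, 4.0 };
     double B[] = { 5.0, 6.0,
                    7.0, 8.0 };
     double C[4] = { 0.0 };
 
     cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                 n, n, n, 1.0, A, n, B, n, 0.0, C, n);
 
     printf("C = [ %g %g ; %g %g ]\n", C[0], C[1], C[2], C[3]);
     return 0;
 }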
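
cuBLAS is driven from host code through a handle-based API, and the matrices must be resident in GPU memory before the call. The following is a minimal sketch of a cublasDgemm call (cuBLAS expects column-major storage); error checking is omitted, and the build line (e.g. nvcc with -lcublas) is an assumption about a typical CUDA installation.

 #include <stdio.h>
 #include <cuda_runtime.h>
 #include <cublas_v2.h>
 
 int main(void) {
     const int n = 2;
     /* column-major 2x2 host matrices */
     double hA[] = { 1.0, 3.0, 2.0, 4.0 };
     double hB[] = { 5.0, 7.0, 6.0, 8.0 };
     double hC[4] = { 0.0 };
     double *dA, *dB, *dC;
 
     cudaMalloc((void **)&dA, sizeof(hA));
     cudaMalloc((void **)&dB, sizeof(hB));
     cudaMalloc((void **)&dC, sizeof(hC));
 
     cublasHandle_t handle;
     cublasCreate(&handle);
 
     /* copy host matrices to the device */
     cublasSetMatrix(n, n, sizeof(double), hA, n, dA, n);
     cublasSetMatrix(n, n, sizeof(double), hB, n, dB, n);
 
     const double alpha = 1.0, beta = 0.0;
     cublasDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                 n, n, n, &alpha, dA, n, dB, n, &beta, dC, n);
 
     /* copy the result back to the host */
     cublasGetMatrix(n, n, sizeof(double), dC, n, hC, n);
     printf("C(0,0) = %g\n", hC[0]);
 
     cublasDestroy(handle);
     cudaFree(dA); cudaFree(dB); cudaFree(dC);
     return 0;
 }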
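
MAGMA's 2.x interface mirrors standard BLAS naming but adds an explicit queue argument and expects device-resident, column-major data. The sketch below is a rough outline of a magma_dgemm call assuming the CUDA build of MAGMA 2.x on device 0; the helper names and build flags should be verified against the installed magma_v2.h and pkg-config files.

 #include <stdio.h>
 #include "magma_v2.h"
 
 int main(void) {
     magma_init();
 
     const magma_int_t n = 2;
     /* column-major 2x2 host matrices */
     double hA[] = { 1.0, 3.0, 2.0, 4.0 };
     double hB[] = { 5.0, 7.0, 6.0, 8.0 };
     double hC[4] = { 0.0 };
 
     magmaDouble_ptr dA, dB, dC;
     magma_dmalloc(&dA, n * n);
     magma_dmalloc(&dB, n * n);
     magma_dmalloc(&dC, n * n);
 
     magma_queue_t queue;
     magma_queue_create(0, &queue);   /* queue on GPU device 0 (assumed) */
 
     /* copy host matrices to the device */
     magma_dsetmatrix(n, n, hA, n, dA, n, queue);
     magma_dsetmatrix(n, n, hB, n, dB, n, queue);
 
     magma_dgemm(MagmaNoTrans, MagmaNoTrans, n, n, n,
                 1.0, dA, n, dB, n, 0.0, dC, n, queue);
 
     /* copy the result back to the host */
     magma_dgetmatrix(n, n, dC, n, hC, n, queue);
     printf("C(0,0) = %g\n", hC[0]);
 
     magma_queue_destroy(queue);
     magma_free(dA); magma_free(dB); magma_free(dC);
     magma_finalize();
     return 0;
 }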

Reference