Library

Here are some core libraries for running HPC system benchmarks, selected from the many numerical libraries available for performance optimization. Minimal call sketches for these libraries follow the table below.

{| class="wikitable"
!Type
!Name
!Description
|-
| rowspan="4" |BLAS
('''Basic Linear Algebra Subprograms''')
|oneAPI '''Math Kernel Library'''
|Formerly the Intel Math Kernel Library (Intel MKL); a library of optimized math routines for science, engineering, and financial applications. Core math functions include BLAS, LAPACK, ScaLAPACK, sparse solvers, fast Fourier transforms, and vector math, tuned especially for Intel processor architectures
|-
|BLIS
|Like Intel, AMD provides optimized numerical compute libraries for the Zen architecture. The core BLAS library is called BLIS. It provides optimized matrix-vector and matrix-matrix operations on all of the Zen-core processors, i.e. Ryzen desktop processors and EPYC server processors
|-
|cuBLAS
|NVIDIA's BLAS, called cuBLAS, for use with [[CUDA]] on NVIDIA GPUs. It is highly optimized and a significant factor in the outstanding compute performance achievable on these GPUs. Many of the Top500 supercomputers get the bulk of their performance from (large numbers of) NVIDIA GPUs
|-
|MAGMA
|MAGMA is a collection of next-generation linear algebra (LA) GPU-accelerated libraries designed and implemented by the team that developed LAPACK and ScaLAPACK. The main benefit of MAGMA is that it enables applications to fully exploit the power of current heterogeneous systems of multi/many-core CPUs and multiple GPUs<ref>https://developer.nvidia.com/magma</ref>
|}
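Both oneAPI MKL and BLIS can be called through the standard CBLAS interface, so the same source code runs against either library and only the link line changes. Below is a minimal sketch of a double-precision matrix multiply with <code>cblas_dgemm</code>; the matrix values and the link flags mentioned in the comment are illustrative and depend on your installation.

<syntaxhighlight lang="c">
/* A minimal sketch of calling the CBLAS interface exposed by both
 * oneAPI MKL and BLIS. Typical link lines: -lmkl_rt for MKL, -lblis
 * for BLIS (exact flags depend on the installation). */
#include <stdio.h>
#include <cblas.h>   /* MKL users may include <mkl.h> instead */

int main(void) {
    /* C = alpha * A * B + beta * C, with 2x2 row-major matrices */
    double A[] = {1.0, 2.0,
                  3.0, 4.0};
    double B[] = {5.0, 6.0,
                  7.0, 8.0};
    double C[] = {0.0, 0.0,
                  0.0, 0.0};

    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                2, 2, 2,      /* M, N, K */
                1.0,          /* alpha */
                A, 2,         /* A and its leading dimension */
                B, 2,
                0.0,          /* beta */
                C, 2);

    /* Expected result: C = [19 22; 43 50] */
    printf("C = [%g %g; %g %g]\n", C[0], C[1], C[2], C[3]);
    return 0;
}
</syntaxhighlight>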
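cuBLAS uses its own handle-based C API and operates on matrices in GPU memory, stored column-major like classic Fortran BLAS. The sketch below performs the same 2x2 GEMM on the GPU; compile with <code>nvcc</code> and link <code>-lcublas</code>. Error checking is omitted for brevity.

<syntaxhighlight lang="c">
/* A minimal sketch of a GEMM through cuBLAS. Note the column-major
 * storage and the explicit host-to-device copies. */
#include <stdio.h>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main(void) {
    const int n = 2;
    /* Column-major 2x2 matrices */
    double hA[] = {1.0, 3.0, 2.0, 4.0};   /* A = [1 2; 3 4] */
    double hB[] = {5.0, 7.0, 6.0, 8.0};   /* B = [5 6; 7 8] */
    double hC[4] = {0.0};

    double *dA, *dB, *dC;
    cudaMalloc((void **)&dA, sizeof(hA));
    cudaMalloc((void **)&dB, sizeof(hB));
    cudaMalloc((void **)&dC, sizeof(hC));
    cudaMemcpy(dA, hA, sizeof(hA), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, sizeof(hB), cudaMemcpyHostToDevice);
    cudaMemcpy(dC, hC, sizeof(hC), cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);

    const double alpha = 1.0, beta = 0.0;
    /* C = alpha * A * B + beta * C, computed on the GPU */
    cublasDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                n, n, n, &alpha, dA, n, dB, n, &beta, dC, n);

    cudaMemcpy(hC, dC, sizeof(hC), cudaMemcpyDeviceToHost);
    /* Expected result: C = [19 22; 43 50] */
    printf("C = [%g %g; %g %g]\n", hC[0], hC[2], hC[1], hC[3]);

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}
</syntaxhighlight>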
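MAGMA exposes LAPACK-style routines whose signatures mirror their LAPACK counterparts. As a hedged sketch (assuming a standard MAGMA 2.x installation with the <code>magma_v2.h</code> header; the matrix values are illustrative), the example below LU-factorizes a small matrix with <code>magma_dgetrf</code>, which follows LAPACK's <code>dgetrf</code> but can offload heavy work to the GPU.

<syntaxhighlight lang="c">
/* A minimal, hedged sketch of MAGMA's LAPACK-style CPU interface.
 * Link against -lmagma; exact build flags depend on the installation. */
#include <stdio.h>
#include <magma_v2.h>

int main(void) {
    magma_init();

    magma_int_t n = 2, info = 0;
    magma_int_t ipiv[2];
    /* Column-major A = [4 3; 6 3] */
    double A[] = {4.0, 6.0, 3.0, 3.0};

    /* In-place LU factorization with partial pivoting: A = P*L*U.
     * info == 0 indicates success, as with LAPACK's dgetrf. */
    magma_dgetrf(n, n, A, n, ipiv, &info);
    printf("magma_dgetrf info = %lld\n", (long long) info);

    magma_finalize();
    return 0;
}
</syntaxhighlight>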

== Reference ==
<references />