Library
{| class="wikitable"
|+
!Type
!Name
!Description
!Notes
!Others
|-
| rowspan="3" |BLAS
('''Basic Linear Algebra Subprograms''')
|oneAPI '''Math Kernel Library'''
|Formerly the Intel Math Kernel Library (Intel MKL), this is a library of optimized math routines for science, engineering, and financial applications. Core math functions include BLAS, LAPACK, ScaLAPACK, sparse solvers, fast Fourier transforms, and vector math, tuned especially for Intel processor architectures.
|'''Intel MKL is free to use for any commercial or academic purpose.''' Although it is free of charge, you have to register (at no cost) to download the MKL package.
|
|-
|BLIS
|Like Intel, AMD provides optimized numerical compute libraries for the Zen architecture. The core BLAS library is called BLIS. It is the library for optimized matrix-vector and matrix-matrix operations on all "Zen-core" processors, i.e. Ryzen desktop processors and EPYC server processors (a CBLAS usage sketch follows the table).
|
|
|-
|cuBLAS
|NVIDIA's BLAS implementation, called cuBLAS, is used with [[CUDA]] on NVIDIA GPUs. It is highly optimized and a significant factor in the outstanding compute performance possible on these GPUs. Many of the Top500 supercomputers get the bulk of their performance from large numbers of NVIDIA GPUs (a cuBLAS usage sketch also follows the table).
|
|
|}
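
Both oneAPI MKL and BLIS can be called through the standard CBLAS interface, so CPU-side code looks the same regardless of which backend is linked in. Below is a minimal sketch of a double-precision matrix-matrix multiply (DGEMM); the build flags are only examples and depend on how the library was installed (e.g. <code>-lmkl_rt</code> for MKL, or <code>-lblis</code> for a BLIS build configured with its BLAS/CBLAS compatibility layer).

<syntaxhighlight lang="c">
/* Minimal CBLAS sketch: C = A * B with DGEMM (row-major, 2x3 * 3x2 = 2x2).
   The same source works with MKL or BLIS; only the link line changes.
   With MKL, include <mkl.h> (or <mkl_cblas.h>) instead of <cblas.h>. */
#include <stdio.h>
#include <cblas.h>

int main(void)
{
    double A[6] = {1, 2, 3,
                   4, 5, 6};          /* 2x3 */
    double B[6] = { 7,  8,
                    9, 10,
                   11, 12};           /* 3x2 */
    double C[4] = {0};                /* 2x2 result */

    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                2, 2, 3,              /* M, N, K        */
                1.0, A, 3,            /* alpha, A, lda  */
                B, 2,                 /* B, ldb         */
                0.0, C, 2);           /* beta, C, ldc   */

    printf("%g %g\n%g %g\n", C[0], C[1], C[2], C[3]);   /* 58 64 / 139 154 */
    return 0;
}
</syntaxhighlight>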
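
The same multiply on an NVIDIA GPU goes through cuBLAS. The sketch below (error checking omitted for brevity) assumes the CUDA toolkit is installed and the file is built with <code>nvcc</code> and linked against <code>-lcublas</code>; note that cuBLAS expects column-major storage, like the classic Fortran BLAS.

<syntaxhighlight lang="c">
/* Minimal cuBLAS sketch: the same 2x3 * 3x2 DGEMM, executed on the GPU.
   cuBLAS uses column-major storage; error checks are omitted for brevity. */
#include <stdio.h>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main(void)
{
    const int M = 2, N = 2, K = 3;
    double A[6] = {1, 4,  2, 5,  3, 6};     /* 2x3, column-major */
    double B[6] = {7, 9, 11,  8, 10, 12};   /* 3x2, column-major */
    double C[4] = {0};                      /* 2x2 result        */
    const double alpha = 1.0, beta = 0.0;

    /* Allocate device buffers and copy the inputs over. */
    double *dA, *dB, *dC;
    cudaMalloc((void **)&dA, sizeof(A));
    cudaMalloc((void **)&dB, sizeof(B));
    cudaMalloc((void **)&dC, sizeof(C));
    cudaMemcpy(dA, A, sizeof(A), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, B, sizeof(B), cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);
    cublasDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                M, N, K, &alpha, dA, M, dB, K, &beta, dC, M);

    /* Copy the result back and print it row by row. */
    cudaMemcpy(C, dC, sizeof(C), cudaMemcpyDeviceToHost);
    printf("%g %g\n%g %g\n", C[0], C[2], C[1], C[3]);   /* 58 64 / 139 154 */

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}
</syntaxhighlight>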


== Reference ==
