XLA
[[File:Openxla.png|thumb|openxla]]
XLA<ref>https://github.com/openxla/xla</ref> (Accelerated Linear Algebra) is an open-source machine learning (ML) compiler for GPUs, CPUs, and ML accelerators.
The XLA compiler takes models from popular [[Deep Learning Frameworks|ML frameworks]] such as PyTorch, TensorFlow, and JAX, and optimizes them for high-performance execution across different hardware platforms, including GPUs, CPUs, and ML accelerators.
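For example, JAX compiles through XLA: wrapping a function in <code>jax.jit</code> traces it and hands the resulting computation to the XLA compiler for the available backend. The following is a minimal sketch; the function and array shapes are illustrative, not part of the XLA project itself.

<syntaxhighlight lang="python">
import jax
import jax.numpy as jnp

def predict(w, x):
    # A simple dense layer followed by a nonlinearity; XLA can fuse these ops.
    return jnp.tanh(x @ w)

# jax.jit traces the function and compiles it with XLA for the
# available backend (CPU, GPU, or TPU).
compiled_predict = jax.jit(predict)

w = jnp.ones((4, 2))
x = jnp.ones((3, 4))
print(compiled_predict(w, x))  # runs the XLA-compiled computation
</syntaxhighlight>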
===Reference===
<references/>