Accelerating Small Matrix Multiplication in Compute-Intense Applications

This webinar is a comprehensive look at the challenges of small matrix multiplication, the role matrix-matrix multiply optimizations play in overcoming them, and the ways new features and enhancements in the Intel® Math Kernel Library (Intel® MKL) can help.

Think more cores, more threads, wider vectors, and automatic parallelism for multi-core and many-core processors.

Matrix-matrix multiply is part of the Intel MKL Basic Linear Algebra Subprograms (BLAS) component, which is at the core of many scientific, engineering, financial, and machine learning applications. A major focus of Intel MKL, one of five free Intel® Performance Libraries, is to dramatically improve small-matrix multiplication run-time performance in compute-intense applications. Also covered:

  • What a Compact API can do for a small matrix
  • How a Batch API executes independent GEMM operations simultaneously with one function call and takes advantage of all cores, even for small to medium matrix sizes
  • How enabling the Packed APIs impacts the SGEMM and DGEMM BLAS functions
Murat Guney, Intel MKL specialist

Murat E. Guney received his B.S./M.S. degrees from Middle East Technical University, and M.S./Ph.D. degrees from Georgia Institute of Technology. His main interests are high-performance/parallel computing, performance optimizations, sparse solvers, and numerical methods.

For more complete information about compiler optimizations, see our Optimization Notice.