Get Deep Learning Framework Performance on Intel® Architecture

To build successful AI applications, developers must use highly optimized deep learning (DL) models—models that are developed and trained using DL frameworks such as TensorFlow* and MXNet*.

But until recently there was a challenge: most of these frameworks were optimized by default only for GPUs, making CPUs a less attractive option for AI training.

To remedy that, Intel has developed several optimized DL computational functions (also known as primitives) and integrated them into many popular frameworks, enabling high-performance AI training on Intel-based devices. (The basic building blocks of the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) were at the heart of these optimizations.)
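As a minimal sketch of how you might check whether these optimizations are active, the snippet below probes a TensorFlow installation for MKL-DNN support. Note the assumptions: the helper `IsMklEnabled` lives in a private TensorFlow module (`tensorflow.python.util._pywrap_util_port`) whose location has moved between releases, so the probe degrades gracefully rather than failing.

```python
def mkl_status():
    """Return a short description of MKL-DNN support in the local TensorFlow.

    Assumption: `_pywrap_util_port.IsMklEnabled` is the internal helper used
    by recent TensorFlow releases; older releases expose it elsewhere, in
    which case this probe simply reports that the helper was not found.
    """
    try:
        # Private module path -- an assumption that varies across TF versions.
        from tensorflow.python.util import _pywrap_util_port
        if _pywrap_util_port.IsMklEnabled():
            return "MKL-DNN optimizations enabled"
        return "stock (non-MKL) TensorFlow build"
    except ImportError:
        return "TensorFlow not installed, or helper not present in this release"


print(mkl_status())
```

Because the import is wrapped in `try`/`except`, the script runs (and reports a sensible message) even on a machine without TensorFlow installed.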

Join Louie Tsai, Intel Senior Software Engineer and embedded software specialist, to learn how these Intel-optimized frameworks can accelerate your AI applications on Intel® architecture.

Topics covered include:

  • Introduction to Intel-optimized versions of popular frameworks like TensorFlow and MXNet
  • A brief overview of the types of accelerations implemented on these frameworks
  • How to acquire and use these framework packages with Intel’s accelerations

Get the software
For source code access and installation details visit:
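As one hedged example, the Intel-optimized builds have typically been distributed through the standard Python package channels. The package names below (`intel-tensorflow`, `mxnet-mkl`) are the names in use at the time of writing and may differ for other releases; check the framework's own installation page for the current ones.

```shell
# Install the Intel-optimized TensorFlow build from PyPI
# ("intel-tensorflow" is the historical package name; may vary by release)
pip install intel-tensorflow

# Install the MKL-DNN-accelerated MXNet build
pip install mxnet-mkl
```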

Nathan Greeneltch, PhD, Software Engineer, Intel Corporation

Nathan is a Data Scientist and Technical Consulting Engineer in Intel’s Technical Computing, Analyzers and Runtimes group. He is responsible for driving customer engagement with and adoption of Intel AI products, Intel® Distribution of Python*, and Intel® Performance Libraries, with focus on leveraging the synergies between Intel® Distribution for Python and the Intel® Math Kernel Library (Intel® MKL and Intel® MKL-DNN).

Before joining the TCAR team, Nathan spent 3 years on the hardware side of Intel as a Machine Learning expert responsible for predicting and identifying potential vulnerabilities in future Intel® processor generations.

Nathan has a PhD in physical chemistry from Northwestern University, where he worked on nanoscale lithography of metal waveguides for amplification of laser-initiated vibrational signals in small molecules.

For more complete information about compiler optimizations, see our Optimization Notice.