Write Once and Deploy Inference across the Latest Intel® Architectures

The power of AI continues to shift from potential to reality, driving a sea change in nearly every major industry.

If that weren’t enough, compute architectures also continue to shift, moving from yesterday’s CPU- and GPU-only platforms to today’s heterogeneous setups.

But you knew that already.

What you may not know is that the Intel® Distribution of OpenVINO™ toolkit was designed specifically to help developers deploy AI-powered solutions across that heterogeneous landscape—combinations of CPUs, GPUs, VPUs, and FPGAs—with write-once-deploy-anywhere flexibility.

In this webinar, technical consulting engineer Munara Tolubaeva will showcase the OpenVINO toolkit and its core role in AI application and solution development. Topics covered:

  • How it can be used to develop and deploy AI deep learning applications across Intel® architecture—CPUs, CPUs with Intel® Processor Graphics, Intel® Movidius™ VPUs, and FPGAs
  • Cross-architecture deployment of your apps and solutions with little to no code re-writing
  • Recent innovations across both the hardware and software stacks
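The "little to no code re-writing" point comes down to parameterizing inference by a device name. A minimal sketch of the pattern (assumes the `openvino` Python package and an IR model file `model.xml` produced by the Model Optimizer; the call names follow a recent OpenVINO Python API and may differ in older releases):

```python
def compile_for(device: str, model_path: str = "model.xml"):
    """Compile the same IR model for any Intel target: "CPU", "GPU",
    "MYRIAD" (VPU), and so on. Only the device string changes."""
    # Imported here so the sketch reads top-to-bottom; requires the
    # `openvino` package to actually run.
    from openvino.runtime import Core

    core = Core()                          # discovers available devices
    model = core.read_model(model_path)    # load the IR (.xml + .bin)
    return core.compile_model(model, device_name=device)

# Switching hardware is a one-word change, not a rewrite:
# cpu_net = compile_for("CPU")
# vpu_net = compile_for("MYRIAD")
```

The same application code then feeds inputs to whichever compiled model it received, which is what makes cross-architecture deployment a configuration choice rather than a porting effort.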

Get the software

Munara Tolubaeva, Software Technical Consulting Engineer, Intel Corporation

Munara Tolubaeva is a Senior Technical Consulting Engineer at Intel responsible for enabling customers to succeed on Intel platforms through the use of Intel software. She specializes in high-performance computing, AI and deep learning, performance analysis and optimization, compilers, and heterogeneous computing. She holds a PhD in Computer Science from the University of Houston.

For more complete information about compiler optimizations, see our Optimization Notice.