Optimize Deep Learning Inference Applications using OpenVINO™ Toolkit

Can you use a trained model without deploying the entire framework? Or use only a small part of the framework for inference? These are two common challenges software developers and data scientists face when deploying models. Solving these challenges is what this webinar is about, using the new OpenVINO™ toolkit.

Short for Open Visual Inference & Neural Network Optimization, the OpenVINO™ toolkit (formerly Intel® CV SDK) contains optimized OpenCV and OpenVX libraries, deep learning code samples, and pretrained models to enhance computer vision development. It’s validated on 100+ open source and custom models, and is available absolutely free. In this short webinar you’ll learn about:

  • Using the toolkit to deploy a neural network and optimize models
  • The Intel® Deep Learning Deployment Toolkit—part of OpenVINO—including its Model Optimizer (helps quantize pretrained models) and its Inference Engine (runs seamlessly across CPU, GPU, FPGA, and VPU without requiring the entire framework to be loaded)
  • How the Inference Engine lets you add support for new layers, written in C/C++ for CPU and in OpenCL™ for GPU

OpenVINO is a trademark of Intel Corporation or its subsidiaries in the U.S. and/or other countries.

Ran Cohen, Intel® Deep Learning Deployment Toolkit Architect, Intel Corporation

Ran joined Intel in 2016 and is the chief architect and product owner for the Intel® Deep Learning Deployment Toolkit. Previously, he worked with startups in the telecom domain and with Philips Healthcare as a Pre-Development Unit Manager. He holds five patents, covering matching of modified visual and audio media, content distribution and tracking, virtual call centers, traffic control in cellular networks, and Internet voice transmission. Ran earned a BSc in Computer Engineering (Summa Cum Laude) and a BSc in Physics (Summa Cum Laude) from Technion – Israel Institute of Technology. He has over 30 years of software programming experience and 20 years' experience managing software teams.

For more complete information about compiler optimizations, see our Optimization Notice.