Can you use a trained model without deploying the entire framework? Can you use just a small part of the framework for inference? These are two common challenges software developers and data scientists face when deploying models. Solving them is what this webinar is about, using the new Intel® Distribution of OpenVINO™ toolkit.
Short for Open Visual Inference & Neural Network Optimization, the Intel® Distribution of OpenVINO™ toolkit (formerly Intel® CV SDK) contains optimized OpenCV and OpenVX libraries, deep learning code samples, and pretrained models to enhance computer vision development. It’s validated on 100+ open source and custom models, and is available absolutely free. In this short webinar you’ll learn about:
- Using the toolkit to deploy a neural network and optimize models
- The Intel® Deep Learning Deployment Toolkit—part of OpenVINO—including its Model Optimizer (converts and optimizes pretrained models, including quantization) and its Inference Engine (runs seamlessly across CPU, GPU, FPGA, and VPU without requiring the entire framework to be loaded)
- How the Inference Engine lets you implement custom layers in C/C++ for CPU and OpenCL™ for GPU
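To give a flavor of what "quantize pretrained models" means in the second bullet, here is a miniature, hand-rolled sketch of symmetric linear quantization—mapping float32 weights to 8-bit integers with a scale factor. The function names are hypothetical illustrations, not the Model Optimizer's actual API.

```python
def quantize(weights, num_bits=8):
    """Symmetric linear quantization: floats -> signed ints plus a scale.

    Illustrative only; the real Model Optimizer operates on whole
    network graphs, not bare weight lists.
    """
    qmax = 2 ** (num_bits - 1) - 1          # 127 for int8
    scale = max(abs(w) for w in weights) / qmax or 1.0
    q = [max(-qmax, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the quantized values."""
    return [v * scale for v in q]

weights = [0.12, -0.5, 0.33, 0.9, -0.07]
q, scale = quantize(weights)
restored = dequantize(q, scale)
```

Each restored weight differs from the original by at most half a quantization step, which is why int8 inference can closely match float32 accuracy while shrinking the model roughly 4x.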
OpenVINO is a trademark of Intel Corporation or its subsidiaries in the U.S. and/or other countries.