If you use the Intel® Distribution of OpenVINO™ toolkit (even if you don’t … yet), the latest release introduces a new profiler tool that makes it easier to run and optimize deep learning models.
Called Deep Learning Workbench, this production-ready tool lets developers visualize key performance metrics such as latency, throughput, and per-layer performance counters for neural network topologies. It also streamlines the configuration of inference experiments, including int8 calibration, accuracy checking, and automatic detection of optimal performance settings.
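To give a feel for the two headline metrics, here is a minimal sketch of how latency and throughput relate when timing repeated inference requests. The `infer` callable and `measure` helper are hypothetical stand-ins for illustration, not part of the OpenVINO API; DL Workbench collects these numbers for you.

```python
import time

def measure(infer, num_requests, batch_size):
    """Time repeated inference calls and report average latency and throughput.

    `infer` is a hypothetical stand-in for a model's forward pass.
    """
    latencies = []
    for _ in range(num_requests):
        start = time.perf_counter()
        infer()
        latencies.append(time.perf_counter() - start)
    avg_latency_ms = 1000 * sum(latencies) / len(latencies)
    # Throughput counts every image in the batch, not just requests.
    throughput_fps = num_requests * batch_size / sum(latencies)
    return avg_latency_ms, throughput_fps

# Dummy 1 ms workload standing in for a real model:
lat_ms, fps = measure(lambda: time.sleep(0.001), num_requests=10, batch_size=4)
```

Note the trade-off the sketch exposes: larger batches raise throughput but also raise per-request latency, which is why profiling both matters.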
Join senior software engineer Shubha Ramani for an overview and how-to demos of DL Workbench, where she’ll cover:
- How to download, install, and get started with the tool
- Its new features, including model analysis, int8 and Winograd optimizations, accuracy, and benchmark data
- How to run experiments with key parameters, such as batch size and number of parallel streams, to determine the optimal configuration for your application
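Conceptually, finding the best configuration is a sweep over candidate settings, keeping whichever yields the highest throughput. The sketch below illustrates that idea with a hypothetical `run_benchmark` callable (and a toy stand-in for a device); it is not the tool's actual API — DL Workbench performs a comparable search through its UI.

```python
import itertools

def sweep(run_benchmark, batch_sizes, stream_counts):
    """Try each (batch, streams) pair and keep the highest-throughput one.

    `run_benchmark` is a hypothetical callable returning throughput in FPS
    for a given configuration.
    """
    best = None
    for batch, streams in itertools.product(batch_sizes, stream_counts):
        fps = run_benchmark(batch=batch, streams=streams)
        if best is None or fps > best[0]:
            best = (fps, batch, streams)
    return best  # (fps, batch, streams)

# Toy device model that peaks at batch 4 with 2 streams:
fake = lambda batch, streams: batch * streams if batch <= 4 and streams <= 2 else 1
best = sweep(fake, [1, 2, 4, 8], [1, 2, 4])
# best -> (8, 4, 2)
```

The exhaustive sweep is practical here because the parameter grid is small; the point of the tool is that you don't have to script this by hand.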
Get the software
Be sure to download the latest version of Intel® Distribution of OpenVINO™ toolkit so you can follow along during the webinar.