Continuing the momentum from the August 5 session, this webinar (Part 2 of a 3-part series) examines the Intel® oneAPI AI Analytics Toolkit from the perspective of deep learning (DL) workloads.
That is, the performance benefits and features that can enhance DL training, inference, and end-to-end workflows.
Join software engineer Louis Tsai for this session, which delivers insights into the latest optimizations in Intel® Optimization for TensorFlow* and PyTorch*. These optimizations leverage new acceleration instructions, including Intel® DL Boost with BF16 support, on 3rd Gen Intel® Xeon® Scalable processors. Topics include:
- How to quantize a model from fp32/bf16 to int8 and analyze in depth the performance speedup across the different data types (fp32, bf16, and int8)
- Model Zoo for Intel® Architecture and low-precision tools included in the AI Kit
- Efficiencies when building ML pipelines
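To give a feel for the quantization topic above: post-training int8 quantization maps fp32 values onto 8-bit integers with a scale and zero point. The sketch below illustrates that mapping in plain NumPy. It is illustrative only, not the AI Kit's actual API; real low-precision tools calibrate the scale and zero point over a representative dataset rather than a single tensor.

```python
import numpy as np

def quantize_int8(x):
    """Affine quantization of an fp32 array to int8.

    Illustrative only: production tools calibrate scale/zero-point
    over a calibration dataset, not one tensor in isolation.
    """
    qmin, qmax = -128, 127
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int(round(qmin - x.min() / scale))
    # Round to the nearest integer grid point and clamp to the int8 range.
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map int8 values back to approximate fp32."""
    return scale * (q.astype(np.float32) - zero_point)

# Example: quantize, dequantize, and measure the rounding error.
x = np.linspace(-1.0, 1.0, 256, dtype=np.float32)
q, s, zp = quantize_int8(x)
x_hat = dequantize(q, s, zp)
max_err = np.abs(x - x_hat).max()  # bounded by roughly scale/2 plus clamping
```

The accuracy cost is this bounded rounding error; the speedup comes from the hardware side, where int8 multiply-accumulates (via Intel DL Boost) take fewer cycles than their fp32 equivalents, which is the trade-off the session analyzes across fp32, bf16, and int8.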
More resources:

- Get the Jupyter notebooks from the first demo. These notebooks help users analyze the performance benefit of using Intel® Optimization for TensorFlow* with the oneDNN library.
- Read the latest Intel AI Analytics blogs on Medium.
- Develop in the Cloud—Sign up for an Intel® DevCloud account, a free development sandbox with access to the latest Intel® hardware and oneAPI software.
- Subscribe to the podcast. Code Together is an interview series that explores the challenges at the forefront of cross-architecture development. Each biweekly episode features industry VIPs who are blazing new trails through today's data-centric world. Listen and subscribe today.