Microsoft Azure and ONNX Runtime for Intel® Distribution of OpenVINO™ toolkit

Unlock insights and deploy AI models with a streamlined training-to-inference workflow using the Intel® Distribution of OpenVINO™ toolkit, Microsoft Azure, and the Open Neural Network Exchange (ONNX) Runtime.

Tune in to hear Intel product experts Savitha Gandikota and Arindam Paul, along with Microsoft principal program manager Manash Goswami, discuss how to train on Microsoft Azure, streamline with ONNX Runtime, and run inference with the Intel Distribution of OpenVINO toolkit to accelerate time to production. With ready-to-use apps available on the Microsoft Azure Marketplace, you can take advantage of a streamlined train-to-deployment pipeline.
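As a concrete illustration of the inference stage of that pipeline, the sketch below runs an ONNX model through ONNX Runtime's OpenVINO Execution Provider in Python. It is not code from the webinar; it assumes the onnxruntime-openvino package is installed, and model.onnx stands in for a hypothetical model exported from an Azure training run.

    import numpy as np
    import onnxruntime as ort

    # Ask ONNX Runtime to use the OpenVINO Execution Provider, falling back
    # to the default CPU provider if OpenVINO is unavailable in this build.
    session = ort.InferenceSession(
        "model.onnx",  # hypothetical model exported after training on Azure
        providers=["OpenVINOExecutionProvider", "CPUExecutionProvider"],
    )

    # Feed a dummy input shaped like a typical 224x224 image batch.
    input_name = session.get_inputs()[0].name
    batch = np.random.rand(1, 3, 224, 224).astype(np.float32)

    outputs = session.run(None, {input_name: batch})
    print("output shape:", outputs[0].shape)

Because the execution provider is chosen at session creation, the same script runs unchanged on a machine without OpenVINO, which is what makes a single train-to-deploy pipeline practical across targets.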

In this webinar, you will:

  • Get an overview of how to accelerate train-to-deploy workflows
  • See relevant demonstrations
  • Learn how to use these apps

Savitha Gandikota, Edge-to-Cloud Solutions Product Manager, Intel Corporation

Savitha is a Technical Business Leader at Intel, driving edge AI. She brings a unique blend of expertise in hardware and software architectures and technologies, drawn from her experience in the server, networking, and embedded industries. Her passion for building products from the ground up keeps her busy driving the core capabilities needed for the edge computing revolution. Disruption due to AI is here, and building scalable edge-to-cloud solutions is the key to success.

Arindam Paul, Product Manager, Intel Corporation

Arindam is a veteran of the technology industry, having led teams at EMC, Cisco, Akamai, and Brocade to market-leading innovations. Insanity workouts keep him hungry, and technology innovations keep him foolish.

Manash Goswami, Principal Program Manager, Microsoft Corporation

Manash Goswami is a Principal Program Manager on the AI Frameworks team at Microsoft. In this role, he defines the strategy for integrating ONNX Runtime with hardware platforms to enable ML model execution, and for bringing ONNX Runtime inference solutions to mobile and IoT platforms.
