Software AI Acceleration

Intel @ Software for AI Optimization Summit 2021

Intel participated in the Software for AI Optimization virtual summit on June 8-9, 2021 to present our latest software optimization tools and show how you can seamlessly integrate and deploy them into your AI/ML workflows. We demonstrated how software AI acceleration and related optimizations can improve the efficiency of AI hardware (CPUs, GPUs, FPGAs, and AI accelerators) by reducing training time, inference latency, energy consumption, memory usage, and cost while maintaining high levels of performance and accuracy. The summit featured Wei Li, VP & GM, Machine Learning Performance, Intel, as the invited opening keynote speaker, along with additional Intel speaker sessions, a virtual booth, and access to on-demand content.


Software AI Accelerators: The Next Frontier

Driven by the exponential growth of data, AI demands that computer systems deliver significantly higher performance to meet ever-expanding computing requirements. In this talk, we show how “Software AI Accelerators” deliver orders-of-magnitude performance gains for AI across deep learning, classical machine learning, and graph analytics. This software acceleration is key to enabling AI Everywhere, with applications across sports, telecommunications, drug discovery, and more.
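As one illustration of what a software AI accelerator can look like in practice, the sketch below uses Intel Extension for Scikit-learn (the scikit-learn-intelex package) to route stock scikit-learn calls to oneDAL-optimized kernels. The dataset and model choices are placeholders, and actual speedups depend on hardware and workload.

```python
# Minimal sketch of a "software AI accelerator": Intel Extension for
# Scikit-learn swaps stock scikit-learn estimators for oneDAL-optimized
# implementations without changing the user-facing API.
from sklearnex import patch_sklearn
patch_sklearn()  # must run before importing scikit-learn estimators

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Placeholder synthetic data; any scikit-learn workload would do.
X, _ = make_blobs(n_samples=100_000, n_features=20, centers=8, random_state=0)

# Same scikit-learn code as before patching; the heavy numeric kernels
# now dispatch to optimized implementations on supported CPUs.
model = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X)
print(model.inertia_)
```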

Reducing Deep Learning Integration Costs and Maximizing Compute Efficiency

Deep learning frameworks use low-level performance libraries to achieve the best execution efficiency. As frameworks and libraries evolve quickly, integrating the latest optimizations for various AI hardware into frameworks has been a significant challenge. oneDNN Graph API extends oneDNN with a graph interface that eases the integration effort for fusion optimizations. The same oneDNN Graph integration can be reused across a variety of AI hardware, including AI accelerators. The talk covers key interface design considerations that allow different implementations to deliver maximum deep learning performance.
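For a sense of what such a framework integration looks like from the user side, the sketch below enables oneDNN Graph fusion through PyTorch's TorchScript path. The flag name and behavior depend on the PyTorch build, and the model is a placeholder, so treat this as an illustration rather than the talk's exact workflow.

```python
# Hedged sketch: enabling oneDNN Graph fusion from a framework (PyTorch/TorchScript).
# The fusion itself happens inside the oneDNN Graph backend, not in user code.
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 64, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(64)
        self.relu = nn.ReLU()

    def forward(self, x):
        # conv -> batchnorm -> relu is a classic fusion candidate
        return self.relu(self.bn(self.conv(x)))

model = ConvBlock().eval()
example = torch.randn(1, 3, 224, 224)

torch.jit.enable_onednn_fusion(True)  # ask TorchScript to use oneDNN Graph partitions
with torch.no_grad():
    traced = torch.jit.trace(model, example)
    traced = torch.jit.freeze(traced)  # folds constants so the graph can be partitioned
    traced(example)                    # first runs trigger compilation of fused partitions
    out = traced(example)
print(out.shape)
```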

Advanced Techniques to Accelerate Model Tuning

This session discusses the algorithms and associated implementations that power SigOpt, a platform for efficiently conducting model development and hyperparameter optimization. We first cover adaptations of black-box optimization methodologies to best serve our customers and their use cases. This is followed by an overview of multi-objective problems and the variety of ways they can be addressed, both conceptually and practically. Finally, we present some of our ongoing model-specific work, as well as other future opportunities.
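To make the black-box setting concrete, the sketch below runs a Bayesian (Gaussian-process) optimization loop using scikit-optimize's gp_minimize rather than SigOpt's own API, which the source does not show. The objective function, search space, and evaluation budget are invented placeholders.

```python
# Hedged sketch of black-box hyperparameter optimization: the optimizer only
# sees (parameters -> scalar objective) evaluations, never the model internals.
from skopt import gp_minimize
from skopt.space import Integer, Real

def objective(params):
    """Stand-in for training/evaluating a model with the given
    hyperparameters; returns a scalar to minimize (e.g., validation loss)."""
    learning_rate, num_leaves = params
    return (learning_rate - 0.05) ** 2 + abs(num_leaves - 40) / 100.0

search_space = [
    Real(1e-4, 1e-1, prior="log-uniform", name="learning_rate"),
    Integer(8, 128, name="num_leaves"),
]

# Gaussian-process-based optimization spends a fixed evaluation budget by
# balancing exploration and exploitation over the search space.
result = gp_minimize(objective, search_space, n_calls=25, random_state=0)
print("best params:", result.x, "best objective:", result.fun)
```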

Learn more about our innovations in data science, machine learning, AI engineering, and IoT at http://software.intel.com/ai.

Performance varies by use, configuration, and other factors. Learn more at www.Intel.com/PerformanceIndex.