3 Quick Practical Examples of OpenMP Offload to GPUs

OpenMP has been around since October 1997—an eternity for any software—and has long been the industry-wide parallel-programming model for high-performance computing.

And it continues to evolve in lockstep with the ever-expanding hardware landscape; the API now supports GPUs and other accelerators.

In this session, Intel Senior Principal Engineer Xinmin Tian will share three examples of how to develop code that exploits GPU resources using the latest OpenMP features, including:

  • An introduction to OpenMP and its GPU-offload support
  • Examples of OpenMP code offloaded to GPUs, including Intel® Xe products (a minimal sketch follows this list)
  • How to take advantage of the Intel® DevCloud for oneAPI to run code samples on the latest Intel® hardware and oneAPI software
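
To give a flavor of what the session covers, here is a minimal sketch of OpenMP GPU offload in C: a vector add whose loop runs on the default target device. The kernel and the build flags below are illustrative assumptions, not material taken from the talk.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        const int n = 1 << 20;
        float *a = malloc(n * sizeof(float));
        float *b = malloc(n * sizeof(float));
        float *c = malloc(n * sizeof(float));
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        /* map() copies a and b to the device and c back to the host;
           teams/distribute/parallel for spreads iterations across GPU threads. */
        #pragma omp target teams distribute parallel for \
            map(to: a[0:n], b[0:n]) map(from: c[0:n])
        for (int i = 0; i < n; ++i)
            c[i] = a[i] + b[i];

        printf("c[0] = %f\n", c[0]);  /* expect 3.000000 */
        free(a); free(b); free(c);
        return 0;
    }

With the Intel® oneAPI compilers, OpenMP offload is typically enabled at build time, for example: icx -fiopenmp -fopenmp-targets=spir64 vadd.c (these flags are shown as an assumption; consult the current oneAPI documentation for your toolchain).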

Resources

  • Sign up for an Intel® DevCloud for oneAPI account—a free development sandbox with access to the latest Intel® hardware and oneAPI software.
  • Explore oneAPI, including developer opportunities and benefits
  • Subscribe to the Code Together podcast, an interview series that explores the challenges at the forefront of cross-architecture development. Each biweekly episode features industry VIPs who are blazing new trails through today’s data-centric world. Available wherever you get your podcasts.


Xinmin Tian, Senior Principal Engineer, Intel Corporation

Xinmin Tian is a Senior Principal Engineer and Compiler Architect responsible for driving compiler OpenMP, offloading, vectorization, and parallelization technologies into current and future Intel® architectures. His current focus is on programming languages, compilers, and application performance tuning for Intel® oneAPI Toolkits targeting CPUs and Xe accelerators. Xinmin holds 27 U.S. patents, has authored over 60 technical papers, and has co-authored three books spanning his areas of expertise. He holds a PhD from the University of San Francisco.

Performance varies by use, configuration, and other factors. Learn more at www.Intel.com/PerformanceIndex.