Get Your Code Future-Ready with Free Webinars

If you’re looking to sharpen your technical skills, get expert answers to specific questions, or dive into an entirely new area of development, you’ve come to the right place.

Sign up for the latest overviews, insights, and how-tos on today's central topics—AI, DC, DL, HPC, IoT, ML, and other essential acronyms—that you can use right away.

Tuesday, December 3, 2019 9:00 am PST
#oneAPI

Introducing oneAPI: A Unified, Cross-Architecture Performance Programming Model

The future of programming is ever-evolving. Which is a good thing. A necessary thing. And a thing Intel is taking the lead on, driving new innovation to developers and the industry.

The drive for compute innovation is as old as computing itself, with each advancement built upon what came before. In 2019 and 2020, a primary focus of next-gen compute innovation is enabling increasingly complex workloads to run on multiple architectures. CPUs and GPUs for sure. But also FPGAs and a myriad of AI accelerators.

The biggest challenge? Programming.

Because historically, writing and deploying code for CPUs and accelerators has required different languages, libraries, and tools. Meaning each hardware platform required a separate software investment of time, resources, and money.

The oneAPI initiative was created to solve this problem.

Join Kent Moffat, software specialist and Intel senior product manager, to find out how. Topics include:

  • An overview of oneAPI Beta—what it is, what it includes, and why it was created
  • How this Intel-driven initiative simplifies development through a common toolset that enables more code reuse
  • How developers can immediately take advantage of oneAPI Beta in their development, from free toolkits to the Intel® DevCloud environment

Come with your questions ready.

Register now.

Get Started

  • Learn more about Intel® oneAPI Beta—Visit the Beta website to learn about this initiative, including downloading free software toolkits like the essential Intel® oneAPI Base Toolkit.
  • Try your code in the Intel® DevCloud—Sign up to develop, test, and run your solution in this free development sandbox with access to the latest Intel® hardware and oneAPI software. No software downloads. No configuration steps. No installations.

Kent Moffat, Sr. Product Marketing Engineer, Intel Corporation

Kent Moffat is a senior product line manager responsible for marketing and driving adoption of software development and data science tools. His software expertise spans machine/deep learning, high performance computing, cloud computing, and IoT. Prior to joining Intel in 2008, Kent held several strategic sales and marketing roles in the technology industry, including at Mentor Graphics and MathStar.

Kent holds a Bachelor of Science in electrical engineering from Stanford University and a Bachelor of Arts in physics from Willamette University.

Wednesday, December 4, 2019 9:00 am PST
#oneAPI

DPC++ Part 1: An Introduction to the New Programming Model

Get an overview of Data Parallel C++ (DPC++)—a new programming language that serves as the (open) backbone of Intel’s oneAPI initiative. This webinar unpacks what it is, what it does, and why you should care.

We’re all familiar with C++. But DPC++?

Indeed.

Shorthand for Data Parallel C++, it’s the new direct programming language of oneAPI—an Intel-led initiative to unify and simplify application development across diverse computing architectures.

DPC++ is based on familiar (and industry-standard) C++, incorporates SYCL* specification 1.2.1 from The Khronos Group*, and includes language extensions developed using an open community process. Purposely designed as an open, cross-industry alternative to single-architecture, proprietary languages, DPC++ enables developers to more easily port their code across CPUs, GPUs, and FPGAs, and also tune performance for a specific accelerator.

Tune in for an overview of this new programming model with Intel software engineer and The Khronos Group contributor Michael Kinsner.

  • Get an introduction to the DPC++ programming model, including execution and memory
  • Dive into the fundamental building blocks of the DPC++ programming model, including default selection and queues, buffers, command group function objects, accessors, device kernels, and more
  • Learn how to use the DPC++ compiler to build heterogeneous applications
  • Explore Intel-specific extensions, such as unified shared memory and subgroups

Register today.

Michael Kinsner, Software Engineer, Intel Corporation

Michael Kinsner is a software engineer working on programming models and high-level design compilers for Intel. Additionally, he is an Intel representative within The Khronos Group, where he contributes to the SYCL* and OpenCL™ industry standards. You can find him at conferences talking about why FPGAs are of interest as accelerators, and why modern frameworks such as SYCL make programming both FPGAs and other devices so much easier.

Prior to joining Intel in 2015, Michael held several key technical engineering roles at Altera and Bristol Aerospace. He holds a Ph.D. in Computer Engineering from McMaster University, Ontario, Canada.

Wednesday, December 11, 2019 9:00 am PST
#oneAPI

DPC++ Part 2: Programming Best Practices

Dive deeper into programming in Data Parallel C++, including best practices you can put to use today.

In Part 2 of our Data Parallel C++ overview series, software engineer Anoop Madhusoodhanan Prabha will walk through best practices for using this language to program oneAPI applications.

To recap, the new language—part of the oneAPI initiative—provides an open, cross-industry alternative to single-architecture, proprietary languages. Based on familiar C++ and incorporating SYCL* from The Khronos Group*, DPC++ lets developers more easily port code across a variety of architectures from an existing application’s code base.

But with that capability come unique considerations, such as how data should be made available on the device side, and the need for synchronization points between compute kernels running across a host and devices to ensure accurate results and deterministic behavior.

Take the next step in learning DPC++ by joining Anoop as he:

  • Covers how to efficiently use buffers, sub-buffers, and unified shared memory
  • Dives into implicit synchronization points in DPC++
  • Explores atomics, mutex, work-group barriers, and work-group mem-fence

Register now.

Anoop Madhusoodhanan Prabha, Staff Software Engineer, Intel Corporation

Anoop has over 10 years’ experience as a software engineer; his work has included application development, system analysis and design, database administration, data migrations, automations, function point analysis, and critical projects in the telecom domain. Since joining Intel in 2009, he has worked on optimizing various customer applications by enabling multi-threading, vectorization, and other microarchitectural tunings. He has experience working with OpenMP, TBB, CUDA*, and more. Today, Anoop focuses on floating point reproducibility across various Intel® architectures, containerized solutions for Intel® Compiler-based workloads, and continuous integration/continuous deployment adoption.

Anoop holds a Master of Science in Electrical Engineering from State University of New York College at Buffalo with an emphasis in high-performance computing, and a Bachelor of Technology in Electronics and Communication Engineering from Malaviya National Institute of Technology in Jaipur.

Tuesday, December 17, 2019 9:00 am PST
#VisualComputing

Introducing a New Tool for Neural Network Profiling & Inference Experiments

Having a workbench is great. Especially when it’s fully stocked with tools and ready to use. That’s what the new Deep Learning Workbench brings to the OpenVINO™ toolkit. Tune in to find out more.

If you use the Intel® Distribution of OpenVINO™ toolkit (even if you don’t … yet), the latest release introduces a new profiler tool to more easily run and optimize deep learning models.

Called Deep Learning Workbench, this production-ready tool enables developers to visualize key performance metrics such as latency, throughput, and performance counters for neural network topologies and their layers. It also streamlines configuration for inference experiments including int8 calibration, accuracy check, and automatic detection of optimal performance settings.

Join senior software engineer Shubha Ramani for an overview and how-to demos of DL Workbench, where she’ll cover:

  • How to download, install, and get started with the tool
  • Its new features, including model analysis, int8 and Winograd optimizations, accuracy, and benchmark data
  • How to run experiments with key parameters such as batch size, parallel streams, and more to determine the optimal configuration for your application

Register now.

Get the software
Be sure to download the latest version of Intel® Distribution of OpenVINO™ toolkit so you can follow along during the webinar.

Shubha Ramani, Senior Software Engineer, Intel Corporation

Shubha is a senior software engineer whose specialties span all facets of deep learning and artificial intelligence. In her current role she focuses on the Intel® Distribution of OpenVINO™ toolkit, including helping customers use its full capabilities and building complex DL prototypes. Additionally, she helps customers embrace Intel’s world-class automated driving SDKs and tools, and develops complex, real-world C++ samples using the Autonomous Driving Library for inclusion in Intel® GO™ automated driving solutions.

Shubha holds an MSEE in Embedded Systems Software from the University of Colorado at Boulder, and a BSEE in Electrical Engineering from Texas A&M University in College Station.


By submitting this form, you are confirming you are an adult 18 years or older and you agree to share your personal information with Intel to stay connected to the latest Intel technologies and industry trends by email and telephone. You can unsubscribe at any time. Intel’s web sites and communications are subject to our Privacy Notice and Terms of Use.


For more complete information about compiler optimizations, see our Optimization Notice.