Gradient boosting has many real-world applications as a general-purpose machine learning technique for regression, classification, and ranking problems. It is a common choice at large problem sizes, yet implementing its training efficiently is complex: multiple kernel dependencies affect execution time, memory access is irregular, and many other issues arise.
If this resonates with you, register for this session to learn about Intel’s optimizations for XGBoost, with specific focus on:
- how to speed up your boosting algorithm workloads with the Intel® AI Analytics Toolkit, powered by oneAPI
- example training workloads that compare the performance of the latest XGBoost implementation on an end-to-end pipeline
Your hosts are Intel AI Technical Engineers Mecit Gungor and Rachel Oberman.
Download the software
- Get the Intel® AI Analytics Toolkit, which features six powerful tools and frameworks for numerical, scientific, and machine learning applications.
- Sign up for an Intel® DevCloud for oneAPI account—a free development sandbox with access to the latest Intel® hardware and oneAPI software.
- Explore oneAPI, including developer opportunities and benefits
- Subscribe to the podcast: Code Together is an interview series that explores the challenges at the forefront of cross-architecture development. Each biweekly episode features industry VIPs who are blazing new trails through today’s data-centric world. Available wherever you get your podcasts.