Probabilistic programming languages (PPLs) continue to receive attention for performing Bayesian inference in complex generative models. Yet PPL-based science applications remain scarce, for three reasons: rewriting complex scientific simulators in a PPL is impractical, inference carries a high computational cost, and scalable implementations are lacking.
Enter Etalumis ("simulation" spelled backwards), a new system that uses machine learning to make Bayesian inference tractable in existing simulators.
In this session, Lei Shao, Intel Deep Learning Software Engineer, presents the novel PPL framework that couples directly to existing scientific simulators through a cross-platform probabilistic execution protocol and provides Markov chain Monte Carlo methods and deep-learning-based inference compilation (IC) engines for tractable inference.
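The coupling idea can be pictured with a minimal, self-contained sketch. This is not the Etalumis API; every name below is illustrative. The key pattern is that the simulator exposes its random choices through a `sample` callback and its conditioning points through an `observe` callback, so an external inference engine can drive the simulator's execution without the simulator being rewritten. Here the engine is plain importance sampling with the prior as proposal, standing in for the system's MCMC and IC engines.

```python
import math
import random

def simulator(sample, observe):
    # Latent variable: an unknown mean with a standard normal prior.
    mu = sample("mu", lambda: random.gauss(0.0, 1.0))
    # Condition on an observed value y = 0.8 with unit observation noise.
    observe("y", loglik=-0.5 * (0.8 - mu) ** 2)
    return mu

def infer(model, num_traces=20000, seed=1):
    """Prior-proposal importance sampling over execution traces."""
    random.seed(seed)
    results, logws = [], []
    for _ in range(num_traces):
        logw = 0.0
        def sample(name, sampler):
            return sampler()          # draw from the prior as the proposal
        def observe(name, loglik):
            nonlocal logw
            logw += loglik            # accumulate observation log-likelihood
        results.append(model(sample, observe))
        logws.append(logw)
    m = max(logws)                    # stabilize the exponentials
    ws = [math.exp(lw - m) for lw in logws]
    return sum(r * w for r, w in zip(results, ws)) / sum(ws)

posterior_mean = infer(simulator)     # analytic posterior mean is 0.4
```

A protocol such as the one in the session would carry the `sample`/`observe` messages across a process boundary, letting a C++ simulator and a Python inference engine cooperate; the control flow, however, is the same as in this toy.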
To make IC-based inference tractable at scale, she:
- Performs distributed training of a dynamic 3D CNN-LSTM architecture with a PyTorch*-MPI-based framework on 1,024 32-core CPU nodes of the Cori supercomputer with a global minibatch size of 128k, achieving 450 Tflop/s through PyTorch enhancements
- Demonstrates a Large Hadron Collider use case with the C++ SHERPA (Simulation of High-Energy Reactions of PArticles) simulator
- Achieves the largest-scale posterior inference in a Turing-complete PPL
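The data-parallel pattern behind the PyTorch*-MPI training above can be sketched without MPI or PyTorch at all. This is a toy illustration, not the Etalumis training code: the global minibatch is sharded across ranks, each rank computes a local gradient on its shard, and an allreduce-style average reproduces the full-batch gradient step.

```python
def local_gradient(w, shard):
    # Gradient of mean squared error 0.5*(w*x - y)^2 over one rank's shard.
    return sum((w * x - y) * x for x, y in shard) / len(shard)

def allreduce_mean(values):
    # Stand-in for MPI_Allreduce followed by division by the world size.
    return sum(values) / len(values)

def distributed_step(w, shards, lr=0.01):
    grads = [local_gradient(w, s) for s in shards]  # one gradient per rank
    return w - lr * allreduce_mean(grads)           # synchronous SGD update

# Global minibatch of 8 points sharded across 4 "ranks" of 2 points each.
data = [(x, 2.0 * x) for x in range(1, 9)]          # targets from w* = 2
shards = [data[i:i + 2] for i in range(0, 8, 2)]

w = 0.0
for _ in range(50):
    w = distributed_step(w, shards)
```

With equal shard sizes, the averaged per-rank gradients equal the full-batch gradient exactly, which is why synchronous data parallelism preserves the semantics of large-minibatch SGD while the work scales out across nodes.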
Download the software
- Intel® Math Kernel Library for Deep Neural Networks
- Intel® Optimization for PyTorch*
- Intel® oneAPI AI Analytics Toolkit, which includes PyTorch* to speed AI development with tools for DL training, inference, and data analytics
- Intel® MPI Library, one of five free Intel® Performance Libraries