Driving scientific AI performance on HPC systems with MLPerf benchmarks

Abstract

AI methods are powerful tools that promise to dramatically change the way we do science on high-performance computing resources. Adoption of these techniques is growing across many domains, with important use cases including analysis of experimental data, acceleration of expensive simulations, and the control or design of experiments. Meanwhile, the computational costs, particularly in the area of training deep neural network models, are growing dramatically as we adopt increasingly large and complex models, tasks, and datasets. To ensure performant solutions that enable tomorrow's scientific discoveries, there is thus a critical need to understand the computational characteristics and architectural requirements of these emerging workloads and to drive innovation in the algorithms, software, and hardware stacks. The MLPerf benchmarks from MLCommons have in recent years solidified their position as the industry-standard measure of AI performance, featuring end-to-end application benchmarks for both training and inference workloads. More recently, these efforts have expanded to cover HPC and scientific AI workloads to address the aforementioned challenges in scientific AI performance. In this talk, I will discuss the MLPerf benchmarks, especially the MLPerf HPC benchmark suite, covering the methodologies and applications that allow us to characterize AI performance on large-scale HPC systems. I will describe some insights already learned from this and related efforts, and will conclude with a discussion of the remaining open challenges in this space and some thoughts on how we might address them.

Steven Farrell

Steven Farrell is a Machine Learning Engineer in the Data and Analytics Services group at the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory (LBNL). He supports machine learning and deep learning workflows on the NERSC supercomputers and contributes to research developing and applying AI solutions to problems in high energy physics, the biosciences, and other domains. He is also co-chair of the HPC working group in MLCommons, which publishes the MLPerf HPC benchmark suite.