Keynote Speakers

Deep Learning In Cancer And Infectious Disease:
Novel Driver Problems For Future HPC Architecture

 Rick Stevens

Argonne National Laboratory 
The University of Chicago



The adoption of machine learning is proving to be a remarkably successful strategy for improving predictive models in cancer and infectious disease research. In this talk I will discuss two projects my group is working on to advance biomedical research through the use of machine learning and HPC. In cancer, machine learning, and deep learning in particular, is being used to advance our ability to diagnose and classify tumors. Recently demonstrated automated systems routinely outperform human experts. Deep learning is also being used to predict patient response to cancer treatments and to screen for new anti-cancer compounds. In basic cancer research it is being used to supervise large-scale multi-resolution molecular dynamics simulations that explore cancer gene signaling pathways. In public health it is being used to interpret millions of medical records to identify optimal treatment strategies. In infectious disease research, machine learning methods are being used to predict antibiotic resistance and to identify novel antibiotic resistance mechanisms that might be present. More generally, machine learning is emerging as a general tool to augment and extend mechanistic models in biology and many other fields, and it is becoming an important component of scientific workloads.

From a computational architecture standpoint, deep neural network (DNN) based scientific applications have some unique requirements. They require high compute density to support matrix-matrix and matrix-vector operations, but they rarely require 64-bit or even 32-bit precision; architects are therefore creating new instructions and new design points to accelerate training. Most current DNNs rely on dense fully connected networks and convolutional networks and are thus reasonably well matched to current HPC accelerators. However, future DNNs may rely less on dense communication patterns.
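The tolerance for reduced precision mentioned above can be illustrated with a minimal NumPy sketch (an illustration only, not code from the projects discussed): the dense matrix products that dominate DNN workloads lose very little accuracy when computed in 32-bit instead of 64-bit arithmetic.

```python
import numpy as np

# Illustrative sketch: DNN workloads are dominated by dense
# matrix-matrix products, which tolerate reduced precision well.
rng = np.random.default_rng(0)
a = rng.standard_normal((256, 256))
b = rng.standard_normal((256, 256))

ref = a @ b                                            # 64-bit reference
approx = a.astype(np.float32) @ b.astype(np.float32)   # 32-bit compute

# The worst-case relative error of the reduced-precision product
# stays tiny, which is why training can use 32-bit (or narrower)
# arithmetic and why vendors add low-precision instructions.
rel_err = np.abs(approx - ref).max() / np.abs(ref).max()
print(rel_err < 1e-4)
```

The same reasoning motivates 16-bit and mixed-precision training units in newer accelerators, trading unneeded precision for higher compute density.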
Like simulation codes, power-efficient DNNs require high-bandwidth memory placed physically close to arithmetic units, to reduce the cost of data motion, and a high-bandwidth communication fabric between (perhaps modest-scale) groups of processors to support network model parallelism. DNNs in general do not exhibit good strong scaling, so to fully exploit large-scale parallelism they rely on a combination of model, data and search parallelism. Deep learning problems also require large quantities of training data to be made available or generated at each node, providing opportunities for NVRAM. Discovering optimal deep learning models often involves a large-scale search over hyperparameters; it is not uncommon to search a space of tens of thousands of model configurations. Naïve searches are outperformed by various intelligent search strategies, including new approaches that use generative neural networks to manage the search space. HPC architectures are needed that can support these large-scale intelligent search methods as well as efficient model training.
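The scale of the hyperparameter search described above can be sketched with a toy example (hypothetical knobs and values, not the actual CANDLE search space): even a handful of hyperparameters multiplies out to a large configuration space, and each configuration can be trained independently, which is the "search parallelism" the abstract refers to.

```python
import itertools
import random

# Hypothetical hyperparameter space for illustration only.
space = {
    "layers":  [2, 4, 8, 16],
    "width":   [64, 128, 256, 512, 1024],
    "lr":      [1e-2, 1e-3, 1e-4],
    "dropout": [0.0, 0.1, 0.3, 0.5],
    "batch":   [32, 64, 128, 256, 512],
}

# Full grid: 4 * 5 * 3 * 4 * 5 = 1200 configurations from just 5 knobs;
# realistic spaces reach tens of thousands of configurations.
grid = list(itertools.product(*space.values()))
print(len(grid))  # 1200

# A naive search trains all of them; intelligent strategies
# (e.g. Bayesian optimisation or generative models over the
# search space) evaluate only a small, adaptively chosen sample.
random.seed(0)
sample = random.sample(grid, 50)
print(len(sample))  # 50
```

Because every sampled configuration is an independent training job, the search maps naturally onto large HPC partitions, provided the architecture can also feed each node its training data efficiently.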


Since 1999, Rick Stevens has been a professor at the University of Chicago and, since 2004, an Associate Laboratory Director at Argonne National Laboratory. He is internationally known for work in high-performance computing, collaboration and visualization technology, and for building computational tools and web infrastructures to support large-scale genome and metagenome analysis for basic science and infectious disease research. He teaches and supervises students in the areas of computer systems and computational biology. He co-leads the DOE national laboratory group that has been developing the national initiative for exascale computing. Stevens is principal investigator for the NIH/NIAID-supported PATRIC Bioinformatics Resource Center, which is developing comparative analysis tools for infectious disease research and serves a large user community. Stevens is also the PI of the Exascale Deep Learning and Simulation Enabled Precision Medicine for Cancer project through the Exascale Computing Project (ECP), which focuses on building a scalable deep neural network code called the CANcer Distributed Learning Environment (CANDLE) to address three top challenges of the National Cancer Institute. Stevens is also one of the PIs for the DOE-NCI Joint Design of Advanced Computing Solutions for Cancer project, part of the Cancer Moonshot initiative. In this role, he leads a pilot project on pre-clinical screening aimed at building machine learning models for cancer drug response that will integrate data from cell line screens and patient-derived xenograft models to improve the range of therapies available to patients. Over the past twenty years, he and his colleagues have developed the SEED, RAST, MG-RAST and ModelSEED genome analysis and bacterial modeling servers, which have been used by tens of thousands of users to annotate and analyze more than 250,000 microbial genomes and metagenomic samples.
At Argonne, Stevens leads the Computing, Environment and Life Sciences (CELS) Directorate, which operates one of the top supercomputers in the world (a 10-petaflop machine called Mira). Prior to that role, he led the Mathematics and Computer Science Division for ten years and the Physical Sciences Directorate. He and his group have won R&D 100 awards for developing advanced collaboration technology (Access Grid). He has published over 200 papers and book chapters and holds several patents. He lectures widely on the opportunities for large-scale computing to impact biological science.

China’s New R&D Project on High Performance Computing


Qian Depei

Professor, Sun Yat-sen University and Beihang University,
Dean of the School of Data and Computer Science, Sun Yat-sen University


After a brief review of HPC research and development under China's high-tech R&D program in past years, this talk will introduce the new key project on high performance computing in the national key R&D program of China under the 13th five-year plan. The major challenges and technical issues in developing an exascale system will be discussed, and the goal, the major activities, and the current status of the new key project will be presented.


Qian Depei is a professor at Sun Yat-sen University and Beihang University, and dean of the School of Data and Computer Science at Sun Yat-sen University.
He has been working on computer architecture and computer networks for many years. His current research interests include high performance computer architecture and implementation technologies, distributed computing, network management and network performance measurement. He has published over 300 papers in journals and conferences.
Since 1996 he has been a member of the expert group and expert committee of the National High-tech Research & Development Program (the 863 program) in information technology. Since 2002 he has served as chief scientist of three 863 key projects on high performance computing. Currently, he is the chief scientist of the 863 key project on the high productivity computer and application service environment.


The Changing Face of Global Numerical Weather Prediction

Willem Deconinck

European Centre for Medium-Range Weather Forecasts (ECMWF)
Reading, United Kingdom


The algorithms underlying numerical weather prediction (NWP) and climate models developed over the past few decades face an increasing challenge from the paradigm shift imposed by hardware vendors towards more energy-efficient devices. On any sustainable path to exascale high-performance computing (HPC), applications become increasingly constrained by energy consumption. As a result, the emerging diverse and complex hardware solutions have a large impact on the programming models traditionally used in NWP software, triggering a rethink of design choices for future massively parallel software frameworks. To this end, ECMWF is leading the ESCAPE project, a European-funded project involving regional NWP centres, universities, HPC centres and hardware vendors. The aim is to combine interdisciplinary expertise to define and co-design the necessary steps towards affordable, exascale high-performance simulations of weather and climate.


Willem Deconinck works at the European Centre for Medium-Range Weather Forecasts (ECMWF) in the Numerical Methods team of the Earth System Modelling section, which researches and maintains the dynamical core of the Integrated Forecasting System (IFS). An aerospace engineer by education (Free University of Brussels), he has expertise in solving partial differential equations (PDEs) on unstructured meshes with high-order discontinuous schemes. At ECMWF, Willem manages the scalability project specific to the IFS. His work involves the development of Atlas, a flexible object-oriented parallel data structure framework that will serve as the foundation for new developments at ECMWF targeted at extreme-scale numerical weather prediction.