APPLICATIONS PROGRAMMING FOR EXASCALE - CHALLENGES & OPPORTUNITIES
As we advance toward exascale-class (HPC) machines, there is a growing realization that major challenges and changes lie ahead for application developers. Some of the key assumptions central to the current HPC programming paradigm will not hold at this scale. Resilience issues and portable performance are likely to require paradigm shifts in the way we develop and run our applications. New programming models, as well as machine and application abstractions, are emerging to address these challenges. These new models and approaches may collectively represent a revolution in the way we develop, deploy, and run applications on our extreme-scale HPC machines. In this talk, we will examine the drivers, trends, and some recent work in order to inform and, hopefully, help guide the audience in their work toward exascale applications development.
Dr. Robert Clay is the manager of the Scalable Modeling and Analysis Systems Department at Sandia National Laboratories in Livermore, CA. He is responsible for research and development in HPC systems resilience and programming models as part of the ASC and ASCR (exascale) programs. He also has responsibility for R&D in discrete system analysis (complex systems, formal methods), scalable data analysis, and engineering workflow and model-building systems. In those roles he provides leadership in a broad range of activities with a core focus on scalable systems design and analysis.
Dr. Clay is a graduate of Carnegie Mellon University (Ph.D.) and the University of Tennessee (B.S.), where he received degrees in Chemical Engineering. His graduate work focused on planning under uncertainty, where he worked on parallel stochastic programming methods and Bayesian inference schemes. Prior to working at Sandia National Labs, he worked at Exxon Research and Engineering in Florham Park, NJ, in the Systems Engineering Division. There he led projects in real-time optimization, advanced computational control, and dynamic system modeling. Dr. Clay also served as Chief Scientist for Terascale LLC, where he was involved in the development of parallel FEM tools, codes, and services.
THE ROLE OF HIGH PERFORMANCE (CLOUD) COMPUTING IN THE "NEXT BIG THING" PEOPLE ARE WAITING FOR: BUILDING WEB 3.X, WITH X REALLY CLOSE TO 9 AND BEYOND
The availability of computing resources and the need for high-quality services are rapidly reshaping the vision of how knowledge is developed, improved, and disseminated. Although Web 1.0, the Internet of Hyperlinks, demanded little in the way of computational resources, it was during this era that technologies such as HPC and grid computing arose. By common consensus, we are now living through the final phase of the Web 2.0 era, the Internet of Social, powered by elastic cloud computing technology.
The Internet of Things is growing up, giving users the power to build their own services by assembling basic tools without having to worry about the computing power required. Web 3.x and beyond will be a playground where high-performance cloud computing provides the life support for astonishing user-created (and shared) component-based applications: the democratization of both the content and the container will turn the key to the next big thing.
Raffaele Montella has worked as a tenured assistant professor in Computer Science at the Department of Applied Science, School of Science and Technology, University of Naples Parthenope, Italy, since 2005. He received his degree (MSc equivalent) in Marine Environmental Science at the Parthenope University of Naples in 1998, defending a thesis on the "Development of a GIS system for marine applications", graduating with laude and an award citation for his academic career. He earned his PhD in Marine Science and Engineering at the University of Naples Federico II with a thesis on "Environmental modeling and Grid Computing techniques". His main research topics and scientific output focus on tools for high performance computing, such as grids, clouds, and GPUs, with applications in computational environmental science (multidimensional data and distributed computing for modeling).
He collaborates with the Computation Institute of the University of Chicago / Mathematics and Computer Science Division of Argonne National Laboratory, and with the Computer Architecture, Communications and Systems group of the Department of Computer Science at the University Carlos III of Madrid.
TOWARDS EXAFLOP SUPERCOMPUTERS
Having recently surpassed the petascale barrier, supercomputer designers and users are now facing the next challenge: a thousandfold performance increase that, if the improvement rate of the last decades continues, will be reached around 2018.
Although power is the main constraint and many hardware challenges remain, software is probably the biggest one. Worldwide cooperative initiatives are being launched to carry out research toward this objective.
The Barcelona Supercomputing Center is involved in such initiatives and carries out the MareIncognito research project, which aims to develop some of the technologies that we consider to be of key relevance on the way to exascale.
The talk will briefly discuss the relevant issues, foreseen architectures, and software approaches that will have to be developed in order to successfully install and operate such machines.
Mateo Valero is a professor in the Computer Architecture Department at UPC, in Barcelona. His research interests focus on high performance architectures. He has published approximately 500 papers, has served in the organization of more than 200 international conferences, and has given more than 300 invited talks. He is the director of the Barcelona Supercomputing Center, the National Supercomputing Center of Spain.
Dr. Valero has been honoured with several awards. Among them are the Eckert-Mauchly Award, the Harry Goode Award, the "King Jaime I" Award in research, and two National Awards, in Informatics and in Engineering. He has been named Honorary Doctor by the Chalmers University of Technology, the University of Belgrade, the Universities of Las Palmas de Gran Canaria and Zaragoza in Spain, and the University of Veracruz in Mexico. He is a "Hall of Fame" member of the IST European Program, selected as one of the 25 most influential European researchers in IT during the period 1983-2008 (Lyon, November 2008).
In December 1994, Professor Valero became a founding member of the Royal Spanish Academy of Engineering. In 2005 he was elected Corresponding Academic of the Spanish Royal Academy of Science, in 2006 a member of the Royal Spanish Academy of Doctors, and in 2008 a member of the Academia Europaea. He is a Fellow of the IEEE, a Fellow of the ACM, and an Intel Distinguished Research Fellow.
PROF. GEOFFREY CHARLES FOX
SCIENCE CLOUDS AND THEIR USE IN DATA INTENSIVE APPLICATIONS
We describe lessons from FutureGrid and commercial clouds on the use of clouds for science, discussing both Infrastructure as a Service and MapReduce applied to bioinformatics applications. We first introduce clouds and discuss the characteristics of problems that run well on them. We try to answer when you need your own cluster, when you need a grid, when a national supercomputer, and when a cloud. We compare "academic" and commercial clouds, and the experience on FutureGrid with Nimbus, Eucalyptus, OpenStack, and OpenNebula. We look at programming models, especially MapReduce and Iterative MapReduce, and their use in data analytics. We conclude by comparing with an Internet of Things application featuring a Sensor Grid controlled by a cloud infrastructure.
Geoffrey Charles Fox is a professor of Computer Science and Informatics at Indiana University.
Fox received a Ph.D. in Theoretical Physics from Cambridge University and is now Distinguished Professor of Informatics and Computing, and Physics, at Indiana University, where he is director of the Digital Science Center and Associate Dean for Research and Graduate Studies at the School of Informatics and Computing. He previously held positions at Caltech, Syracuse University, and Florida State University. He has supervised 64 PhD students and published over 600 papers in physics and computer science, with an h-index of 61 and over 19,500 citations. He currently works on applying computer science to bioinformatics, defense, earthquake and ice-sheet science, particle physics, and chemical informatics. He is principal investigator of FutureGrid, a facility to enable the development of new approaches to computing. He is involved in several projects to enhance the capabilities of Minority Serving Institutions.