Gatlinburg, Tennessee   Aug 31-Sept 2, 2015

Presentations

Click on the titles for the abstracts or download the presentation files (PowerPoint or PDF)

Keynote      
HPC: Powering Deep Learning Bryan Catanzaro   PDF
       
Session 1      
In-Silico Science and Technology: From Atoms to Cognitive Computing Alessandro Curioni   PDF
Computing Drivers for Wicked Problems Nadya Bliss   PDF
Quantum Chemistry on the Supercomputers of Tomorrow Poul Jørgensen   PDF
Strategic Science: Role of HPC and HPC for Cancer Research and Cancer Imaging Larry Clarke & Eric Stahlberg   PDF
Landscape Dynamics, Geographic Data and Scalable Computing: The Oak Ridge Experience Budhendra Bhaduri    
       
Session 2      
The APEX collaboration: New Systems and Future Requirements Katie Antypas PowerPoint
CORAL: the Nation’s Leadership Computers for Open Science from 2017 to 2022 and beyond Susan Coghlan   PDF
HPC Storage and IO Trends, Towards the Exascale Era and Beyond Gary Grider PowerPoint PDF
Trends in Networking - Implications for HPC Eli Dart PowerPoint
       
Session 3      
Department of Energy Office of Science - Overview Bill Harrod   PDF
Data Centric Systems Jim Sexton   PDF
Architectural Directions for the Post Exascale Decade Al Gara   PDF
Accelerating Technologies for Exascale Computing Mike Schulte    
Moving *Forward: Pathfinding to Exascale Larry Kaplan   PDF
Challenges on the Road to Exascale Steve Keckler    
       
Session 4      
Programming Model Challenges for Extreme Scale Computing and Analytics Vivek Sarkar   PDF
Water, Water Everywhere and Not a Drop to Drink Deb Agarwal    
Parallel, High-Performance Min-Cut Solvers and Tall-and-Skinny Matrix-Based Data Analysis David Gleich   PDF
Zero Copy Architecture for In Situ Analytics and Burst Buffer Eng Lim Goh    
Cloud Dataflow for Embarrassing[ly Parallel] Problems and Its Impact on DOE HPC Ron Minnich    
Do You Know What Your I/O is Doing? Bill Gropp   PDF


Posters

Neuroscience-Inspired Dynamic Architectures PDF
Developing big data analytics for socioeconomic characterization PDF
Immersive Visualization using the Oculus Rift: Applications in Material Science PDF
Quantum Computing for Science and Engineering Programming, Architecture, and Applications PDF
Unifying In Silico & Empirical Experiments in CADES PDF
Improving Large-scale Application Performance with ADIOS and BPIO PDF
The Future of Rocket Engine Design PDF
Preparing OpenACC for the Next-Generation HPC systems PDF
Performance and Power Using Aspen PDF
Towards a High-Performance Tensor Algebra Package for Accelerators PDF
Enhancing scalability of Monte Carlo methods on high performance computers for materials science research PDF
Improving Performance of the FLASH code on Summit and Other Architectures: First Steps PDF

HPC: Powering Deep Learning

During the past few years, Deep Learning has made incredible progress towards solving many previously difficult Artificial Intelligence tasks. Although the techniques behind deep learning have been studied for decades, they rely on large datasets and large computational resources, and so have only recently become practical for many problems. Training deep neural networks is very computationally intensive: training one of our models takes tens of exaflops of work, and so HPC techniques are key to creating these models. As in other fields, progress in Artificial Intelligence (AI) is iterative, building on previous ideas. This means that the turnaround time in training one of our models is a key bottleneck to progress in AI: the quicker we can realize an idea as a trainable model, train it on a large dataset, and test it, the quicker we find ways of improving our models. Accordingly, we care a great deal about scaling our model training, and in particular, we need to strongly scale the training process. In this talk, I will discuss the key insights that make deep learning work for many problems, describe the training problem, and detail our use of standard HPC techniques to allow us to rapidly iterate on our models. I will explain how HPC ideas are becoming increasingly central to progress in AI. I will also show several examples of how deep learning is helping us solve difficult AI problems.
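
As background for the strong-scaling discussion, the sketch below shows one common HPC approach to data-parallel training: each MPI rank computes gradients on its own shard of data, and the gradients are summed with an allreduce before every identical weight update. The model, gradient function, and hyperparameters are hypothetical placeholders, not the models described in the talk.

```python
# Minimal sketch: synchronous data-parallel SGD with an MPI allreduce.
# The "model" and its gradient are placeholders for illustration only.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

dim = 1024                                   # placeholder parameter count
weights = np.zeros(dim)                      # parameters replicated on every rank
rng = np.random.default_rng(seed=rank)

def local_gradient(w, batch):
    # Stand-in for backpropagation over this rank's mini-batch shard.
    return batch.mean(axis=0) + 1e-4 * w

for step in range(100):
    batch = rng.standard_normal((32, dim))   # this rank's shard of the mini-batch
    grad = local_gradient(weights, batch)
    total = np.empty_like(grad)
    comm.Allreduce(grad, total, op=MPI.SUM)  # sum gradients across all ranks
    weights -= 0.01 * (total / size)         # every rank applies the same update
```

The allreduce is the step whose cost grows with scale, which is one reason strong scaling of training leans so heavily on fast interconnects and efficient collectives.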

In-Silico Science and Technology:  From Atoms to Cognitive Computing

In the past 20 years, the steady increase in computing power made possible by more scalable, flexible, and energy-efficient supercomputers has extended the reach and accuracy of physics-based simulation methods, rendering Computational Science the third pillar of the scientific enterprise. During my presentation I will showcase some examples of recent successes in the field of molecular simulations and how these applications will continue to drive the need for ever-better supercomputers. Finally, I will discuss the impact of Cognitive Computing and how it is bound to change the whole field of Computational Science.

Computing Drivers for Wicked Problems

Traditionally, computing has been driven by the physical sciences, with the relatively recent addition of bio- and biomedical informatics. With the need to analyze and make decisions about complex, interconnected scenarios (climate impact, world health, economic and political stability), there is an increasing need to bring state-of-the-art computational capabilities to the social sciences, policy, and, more importantly, interdisciplinary decision making. Enabling domain expertise at scale has the potential to transform how we think about, plan for, and anticipate the future - this requires bringing together HPC, interactive environments, applied psychology, and stakeholder engagement along with domain experts.

Quantum Chemistry on Supercomputers of Tomorrow

The modeling and simulation of large molecular systems have become integral parts of experimental investigations in many areas of chemistry and will become increasingly important in the future. To increase the reliability and usefulness of such simulations, better methods and more elaborate calculations have to be carried out. For this reason, it becomes important to be able to exploit the massively parallel architectures of modern supercomputers. However, this introduces algorithmic challenges, as the models currently used in quantum chemistry are not designed for massive parallelism. This talk will describe how the “gold standard” model of quantum chemistry, CCSD(T), can be redesigned for a massively parallel implementation. In fact, the CCSD(T) method will be written in a form that is both linear scaling and designed for massive parallelism. Furthermore, some of the challenges we have encountered during our implementation of the massively parallel, linear-scaling CCSD(T) code will be discussed, as will some of the wishes we, as code developers, have for the configuration of future exascale supercomputers and for the software that will be installed on them.

Strategic Science: Role of HPC and HPC for Cancer Research and Cancer Imaging

High-Performance Computing (HPC) is a rapidly growing area that increasingly impacts cancer research across basic research, prevention, diagnosis, and treatment. New opportunities are emerging for HPC and HPC technologies to advance each of these areas, accelerating discovery, building academic and industry collaborations, and extending the science and understanding of cancer. This presentation provides a summary of the emerging HPC efforts supporting the NCI intramural research program, with a special discussion of opportunities for collaboration. A clinical research example will also be presented, namely cancer imaging and in particular the correlation of imaging phenotypes with genomic signatures, commonly referred to as “radiomics” or “radiogenomics”. These computational methods have demonstrated an important role in predicting and measuring response to drug or radiation therapy, including adaptive therapy trials using different drug combinations. A number of NCI imaging and related informatics initiatives will be presented that demonstrate the potential emerging role of quantitative imaging within the context of clinical decision support and precision medicine, including best practices and standardized methods. The research strategies being implemented include the creation of open-source informatics research resources, open-access imaging archives, and shared software tools, developed to encourage participation by both academia and industry in clinical decision making. These resources have the potential to position industry to adopt advanced imaging methods into clinical trials and eventually clinical practice, and potentially to ease the burden of regulatory approval.

Landscape Dynamics, Geographic Data and Scalable Computing: The Oak Ridge Experience

Understanding change through analysis and visualization of landscape processes often provides the most effective tool for decision support. Analysis of disparate and dynamic geographic data provides an effective component of an information extraction framework for multi-level reasoning, query, and extraction of geospatial-temporal features. With the increasing temporal resolution of geographic data, there is a compelling motivation to couple the powerful modeling and analytical capability of a GIS with spatial-temporal analysis and visualization of dynamic data streams. However, the challenge of processing large volumes of high-resolution earth observation and simulation data with traditional GIS has been compounded by the drive towards real-time applications and decision support. Drawing from our experience at Oak Ridge National Laboratory providing scientific and programmatic support for federal agencies, this presentation will highlight progress and challenges of some emerging computational approaches, including algorithms and high performance computing, illustrated with population and urban dynamics, sustainable energy and mobility, and climate change science.

The APEX collaboration: New Systems and Future Requirements

APEX is the Alliance for Performance at Extreme Scales, a partnership between Sandia, Los Alamos, and Lawrence Berkeley National Labs.   APEX is deploying two Cray systems based on the Intel Knights Landing manycore processor in 2016 and is also beginning a procurement for two 2020 systems now.  Between the 2016 and 2020 system deployments we expect to see processors with deeper memory hierarchies, more on-node concurrency, and systems with new storage technologies.  But using these new hardware features may not be easy, and users of these systems will need high performing software and programming models capable of utilizing the new technologies. 

CORAL: The Nation’s Leadership Computers for Open Science from 2017 to 2022 and Beyond

The Department of Energy’s Leadership Computing Facility has two centers - one at Oak Ridge National Laboratory and one at Argonne National Laboratory. The DOE Leadership Computing Facility contains the most powerful supercomputer in the nation and the second-fastest supercomputer in the world. This past year, a Collaboration of Oak Ridge, Argonne, and Lawrence Livermore National Laboratories (CORAL) ran a joint procurement and acquisition for three supercomputers to be delivered in 2017-2018 and run through 2022. This talk will describe the two diverse architectures selected in the procurement, and will also touch on the second round of CORAL, called CORAL-2, which will deliver the nation’s first exascale systems, and what they may look like.

HPC Storage and IO Trends, Towards the Exascale Era and Beyond

HPC Storage and IO approaches are undergoing an evolution. The types and amounts of new non-volatile storage technology are bewildering, and the Storage/IO/Persistence software layers are struggling to keep up. Additionally, the traditional memory tiers and storage tiers are blurring together. This talk will examine the drivers and directions in HPC Storage and IO, including an introduction to Exascale-related planning in this area.

Trends in Networking—Implications for HPC

This talk will describe several trends and changes taking place in networking, and the potential impact on those changes for HPC facilities and resources.

Department of Energy Office of Science Overview

Overview of DOE Office of Science.

Data Centric Systems

As we approach Exascale-class systems, some key architecture drivers are changing the approach to systems design. Data scales and sustained application performance, rather than traditional FLOPS, are now the dominant elements driving design decisions. This presentation will discuss the data challenges for emerging systems and describe IBM's data centric systems approach to addressing those challenges.

Architectural Directions for the Post Exascale Decade

New memory technologies are changing the way we architect systems and open up the possibility of dramatically addressing the bandwidth challenges we face in today’s systems. To exploit this exciting opportunity, a refactoring of memory and compute appears to be a promising direction.

It is a familiar story that we need to address power. Possible non-evolutionary architectural directions and concepts will be discussed that may enable us to realize better performance/Watt, enabling more performance within a reasonable power envelope. While there are hardware-only approaches to address this, additional options will also be discussed that become possible when we allow for new software directions together with hardware architecture innovations.

As we look for new ways to exploit continued improvements in manufacturing, we will move toward more heterogeneous architectures which can be leveraged to improve workload energy efficiency by utilizing the hardware solution which best optimizes the performance/Watt.

Accelerating Technologies for Exascale Computing

Significant increases in computer performance and programmer productivity are needed to enable key discoveries in diverse fields ranging from medical science to astrophysics and climate modeling. This talk presents AMD’s vision for exascale computing and some of the key technologies that AMD is accelerating with support from the Department of Energy’s FastForward and DesignForward programs. It also describes the expected impact of these technologies on future computing systems, and discusses co-design activities with researchers and computational scientists in the Department of Energy National Laboratories.

*Forward: Pathfinding to Exascale

Cray Inc. has participated in the Design Forward, Fast Forward 2, and Design Forward 2 DOE sponsored research efforts with exciting results for driving the path to Exascale.  Each program has provided specific guidance for future designs, with Design Forward focused on future network protocols and APIs, Fast Forward 2 on future ARM based node architectures, and Design Forward 2 on overall node and system architecture options for Exascale.  Taken together, these studies have provided valuable information on potential future designs across a variety of technologies and vendors.  This talk will provide a high-level overview of Cray's *Forward programs and how they might impact future exascale systems.

Challenges on the Road to Exascale

While an Exascale system could be assembled today with enough money and power, the true challenges lie in building such systems so that they are both economical and useful. Compared to today's high performance computers, Exascale systems are expected to require 50x better energy efficiency and the ability to exploit 1000x more concurrency. Achieving the efficiency goals will demand hardware and software innovations that reduce unnecessary instruction overheads, optimize the movement of data, and reduce the cost of data movement. Enabling programmers to harness the massive concurrency will require software abstractions that allow the expression of all available parallelism, combined with automated mechanisms for mapping the concurrency to a particular machine instance. This talk will present the efficiency and programming challenges and describe the research that NVIDIA is performing to enable capable Exascale high-performance computing systems.

Programming Model Challenges for Extreme Scale Computing and Analytics

It is widely recognized that radical changes are forthcoming in high-end systems for future scientific and commercial computing. These extreme scale systems will contain billions of processor cores/accelerators; their performance will be driven by parallelism, and constrained by energy and data movement; and they will be subject to frequent faults and failures. Unlike previous generations of hardware evolution, these extreme scale systems will have a profound impact on the software stack underlying future applications and algorithms. The software challenges are further compounded by the addition of new application requirements that include, most notably, data-intensive computing and analytics.

The challenges across the entire software stack for extreme scale systems are driven by programmability, portability and performance requirements, and impose new requirements on programming models, languages, compilers, runtime systems, and system software. In this talk, we will focus on the programming model challenges in enabling future applications on future hardware, and the changes needed in the software stack to address these challenges. Examples will be drawn from recent research experiences in the Habanero Extreme Scale Software Research project at Rice University, including the Habanero-C++ and Habanero-Java programming models for scientific and commercial software respectively.
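
The async/finish style of structured task parallelism associated with the Habanero work can be illustrated, very loosely and outside the Habanero APIs, with Python's standard library: tasks are spawned asynchronously and an enclosing scope waits for all of them before continuing. The code below is a generic sketch of that idiom, not Habanero-C++ or Habanero-Java code.

```python
# Generic sketch of async/finish-style task parallelism using only the
# Python standard library (not the Habanero runtime or its APIs).
from concurrent.futures import ThreadPoolExecutor

def process(chunk):
    # Placeholder task body: any independent unit of work.
    return sum(x * x for x in chunk)

data = [list(range(i, i + 1000)) for i in range(0, 8000, 1000)]

# The "with" block acts like a finish scope: it does not exit until every
# spawned task (each submit call playing the role of an async) completes.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(process, chunk) for chunk in data]
    results = [f.result() for f in futures]

print(sum(results))
```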

Water, Water Everywhere and Not a Drop to Drink

Data science has emerged as the fourth paradigm in science. In this talk I will describe the data science ecosystem and challenges faced in addressing data science. This talk will introduce a variety of promising research directions addressing data science challenges in workflows, analysis, data management, and user interaction. Example data management and analysis frameworks from a variety of science disciplines at Berkeley Lab will be described.

Parallel, High-Performance Min-Cut Solvers and Tall-and-Skinny Matrix-Based Data Analysis

In this talk, I'll provide a few perspectives on parallel and high performance computing with respect to some classes of network data and the problems encountered. I'll go into more depth on a parallel min-cut solver we've recently developed and also discuss our experiences with MapReduce based data analysis for tall-and-skinny matrices.
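
For the tall-and-skinny setting mentioned here, one standard building block is a communication-avoiding QR (TSQR): each worker factors its own block of rows, and only the small R factors are combined. The sketch below shows the idea serially with NumPy as an illustrative assumption about the approach, not the speaker's MapReduce implementation.

```python
# Sketch of TSQR for a tall-and-skinny matrix: factor row blocks independently
# (the "map"), then factor the stacked local R factors (the "reduce").
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100_000, 10))       # many rows, few columns

blocks = np.array_split(A, 8, axis=0)        # stand-ins for per-worker row blocks
R_locals = [np.linalg.qr(block, mode="r") for block in blocks]

# Reduce step: one small QR over the stacked local R factors yields the
# R factor of the full matrix (up to row signs).
R = np.linalg.qr(np.vstack(R_locals), mode="r")

R_direct = np.linalg.qr(A, mode="r")         # direct factorization for comparison
print(np.allclose(np.abs(R), np.abs(R_direct)))
```

In a MapReduce setting the per-block factorizations run in parallel on the mappers, and the intermediate data shrinks from millions of rows to a handful of small R factors.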

Zero Copy Architecture for In Situ Analytics and Burst Buffer

As high performance computing moves to exascale and beyond, data volumes will grow correspondingly. As such, it is becoming too costly to keep copies of that data and, perhaps more importantly, too energy intensive to move them. Thus, the novel Zero Copy Architecture (ZCA) was developed, where each compute node writes locally for performance, yet can access others globally. The result is the ability to perform burst buffer operations and in situ analytics and visualization without the need for a data copy or movement.

Cloud Dataflow for Embarrassing[ly Parallel] Problems and Its Impact on DOE HPC

Google is offering a new product, Cloud Dataflow, that lets users write short, simple programs that are automatically parallelized and run on Google's infrastructure. In this talk, I'll show two Dataflow apps: cat (yes, cat) and the Stanford Natural Language Parser. I'll use these examples to motivate a discussion of the possible impact on the DOE HPC model. Can cloud upend the economics of in-house HPC systems in the same way that clusters killed the vector machines in the 90s?
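
For a flavor of the programming model, the sketch below expresses a word count as a short chain of transforms that the runner parallelizes; it uses the present-day Apache Beam Python SDK (a descendant of the Dataflow SDKs) as an assumption, rather than whatever code the talk demonstrated.

```python
# Sketch of the Dataflow/Beam programming model: a word count written as a
# chain of transforms; the runner decides how to parallelize and execute it.
import apache_beam as beam

lines = [
    "the quick brown fox",
    "jumps over the lazy dog",
    "the dog barks",
]

with beam.Pipeline() as p:   # DirectRunner locally; a cloud runner at scale
    (
        p
        | beam.Create(lines)                       # bounded input collection
        | beam.FlatMap(lambda line: line.split())  # one element per word
        | beam.combiners.Count.PerElement()        # (word, count) pairs
        | beam.Map(print)
    )
```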

Do You Know What Your I/O is Doing?

Even though supercomputers are typically described in terms of their floating point performance, science applications also need significant I/O performance for all parts of the science workflow.  This ranges from reading input data, to writing simulation output, to conducting analysis across years of simulation data.  This talk presents recent data on the use of I/O at several supercomputing centers and what that suggests about the challenges and open problems in I/O on HPC systems. Some recent results in applying HPC concepts to an open source graph framework will also be presented, illustrating some of the opportunities in improving data handling capabilities in large-scale systems.
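
As a concrete example of the I/O patterns such measurements capture, the sketch below (using mpi4py, an assumption about tooling rather than anything from the talk) has each rank write one contiguous block of a shared file with a collective MPI-IO call, a pattern parallel file systems generally handle well.

```python
# Sketch: each rank writes its own contiguous block of a shared file
# using a collective MPI-IO write.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

n = 1_000_000                                  # doubles written per rank
data = np.full(n, rank, dtype=np.float64)

fh = MPI.File.Open(comm, "output.dat",
                   MPI.MODE_WRONLY | MPI.MODE_CREATE)
offset = rank * data.nbytes                    # byte offset of this rank's block
fh.Write_at_all(offset, data)                  # collective: all ranks participate
fh.Close()
```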