Advances in High Performance Computing: on the path to Exascale software

Advances in Engineering Software 111 (2017) 1–2

Editorial

Modelling and simulation on high-performance computers is being used to understand and develop solutions to many of the world's most difficult and pressing social, scientific and industrial challenges. From climate change, to the efficient use of limited global resources, to the healthy and active aging of our populations, modelling and simulation is being used as a key tool to develop our understanding of these challenges and to propose new products and services to meet them.

Computing at the Exascale (meaning a single computer capable of delivering a peak performance of 10^18 floating point calculations per second) is a much greater technical challenge than simply joining 1,000 Petaflop/s systems together. A typical Petascale computer today has 50,000-100,000 CPU cores and consumes around 1-2 Megawatts of power. By simple maths, an Exascale computer could therefore be built today with 100 million CPU cores, but it would consume 1-2 Gigawatts of power and would be extremely unreliable. I/O would be a major, if not insurmountable, challenge.

Current near-term and mid-term architecture trends also suggest that the first generation of Exascale computing systems will consist of distributed-memory nodes, where each node is powerful and contains a large number of (possibly heterogeneous) compute cores. The number of parallel threads of execution will likely be on the order of 10^8 or 10^9, split between multiple layers of hardware parallelism (e.g. nodes, cores, hardware threads and SIMD lanes). Programming such systems will be a real challenge: few applications today scale well to 100,000 cores or can cope with significant levels of heterogeneity.
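
The "simple maths" referred to above can be written out explicitly. The lines below merely restate the naive linear scaling from the Petascale figures quoted in the text; they are a back-of-the-envelope sketch, not a projection of any actual Exascale design.

```latex
% Naive linear scaling from one Petaflop/s system to one Exaflop/s system,
% using the ballpark figures quoted in the editorial (assumption, not design).
\[
  1~\mathrm{Exaflop/s} = 10^{18}~\mathrm{flop/s} = 1000 \times 1~\mathrm{Petaflop/s}
\]
\[
  \text{cores} \approx 1000 \times 10^{5} = 10^{8}, \qquad
  \text{power} \approx 1000 \times (1\text{--}2~\mathrm{MW}) = 1\text{--}2~\mathrm{GW}
\]
\[
  \text{parallel threads} \sim 10^{8}\text{--}10^{9}
  \quad (\text{nodes} \times \text{cores} \times \text{hardware threads} \times \text{SIMD lanes})
\]
```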


The Exascale goal therefore represents a tipping point in the development of HPC-enabled modelling and simulation, where the technologies (both hardware and software, including applications) that have become commonplace within the domain will simply not scale to this next generation of computer systems, and new and innovative solutions must be found at all levels.

Traditionally, it has been commonplace for applications to be optimised for an existing HPC architecture and associated software stack. What is rarer is for the applications to be involved in guiding the development of these. The need for new and innovative hardware and software solutions at the extreme scale means there is a significant risk that optimising applications only after the hardware and software stacks have been developed will not succeed. Co-design methodologies have long been an important aspect of the realisation of embedded technologies. More recently, co-design methods have been applied within the HPC community, and many have advocated this as an essential component of developing Exascale technologies. The CRESTA project has been developing applications and "systemware" software for Exascale systems. It was one of the first projects to utilise co-design in practice and one of the first to deliver successful outcomes from this approach. Many other applications have followed, recognising the importance of a co-design approach, and many of the papers in this journal make use of co-design within their development.

While the challenge of programming at the extreme scale cuts across the full software stack, programming models represent the interface between the application and the software stack and have proved an important and challenging development area for extreme-scale programming. The "silver bullet" of parallel programming models is a single API that can address the huge number of parallel threads across all the hardware layers with maximum performance, yet can also be highly productive and intuitive for the programmer. Given the timescales involved in inventing and standardising a programming API, producing robust implementations, and porting large-scale applications, achieving such a "silver bullet" is extremely challenging. For a small number of pioneer Exascale applications, it may be possible to invest the effort in porting them to highly specialised, low-level, system-specific APIs. However, it is likely that the majority of Exascale applications will make use of combinations of existing, though possibly enhanced, programming APIs, where each API is well standardised and specific to one or two layers of hardware parallelism. This is already a well-established practice in the HPC community, with many applications making use of combinations such as MPI and OpenMP, MPI and CUDA, or MPI and OpenACC, as the sketch below illustrates. Developing, enhancing and ensuring the interoperability of high-performance programming APIs is therefore important and is considered by various papers in the journal.
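
To make the hybrid approach concrete, the sketch below combines MPI across nodes with OpenMP threading within a node. It is a minimal illustrative pattern, not code drawn from any of the papers in this issue; the problem size and loop body are placeholders.

```c
/* Minimal MPI + OpenMP hybrid sketch: MPI provides the inter-node (distributed
 * memory) parallelism, OpenMP the intra-node threading. Illustrative only. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const long n = 1000000;          /* placeholder problem size */
    int provided, rank, size;

    /* Request an MPI library that tolerates OpenMP threads
     * (MPI calls are made only outside the parallel region). */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local_sum = 0.0;

    /* Intra-node layer: OpenMP threads share this node's memory. */
    #pragma omp parallel for reduction(+:local_sum)
    for (long i = 0; i < n; i++)
        local_sum += (double)(rank * n + i);   /* placeholder work */

    /* Inter-node layer: MPI combines the per-rank results. */
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum over %d ranks = %e\n", size, global_sum);

    MPI_Finalize();
    return 0;
}
```

In practice such a code would be compiled with the system's MPI wrapper (e.g. mpicc -fopenmp) and launched with one MPI rank per node or NUMA domain, with OpenMP threads filling the cores within it.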

Even if the challenge of executing an application on extreme-scale parallel systems is solved, it is still important to be able to read the executable application and its data into the system and to write the results (and any intermediate checkpoint data) out of the system. However, as core counts have increased, the performance of I/O subsystems has struggled to keep up with computational performance and has, over the past few years, become a key bottleneck on today's largest systems. At the Exascale, the performance demands of such high levels of parallelism (likely to be of the order of 100-500 million computational threads) require massive increases in bandwidth and I/O intensity, whilst at the same time the expected increases in the quantities of data being produced, and hence in the required capacity, mean that today's technologies simply will not deliver the required performance. Addressing the I/O challenge is therefore essential, and this challenge is considered by certain papers in the journal.
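
As a small illustration of the kind of parallel I/O whose bandwidth demands grow with the number of ranks, the sketch below writes a per-rank block of checkpoint data to a single shared file using collective MPI-IO. It is a minimal sketch only; the file name, block size and contiguous per-rank layout are assumptions for illustration.

```c
/* Minimal collective MPI-IO checkpoint sketch: every rank writes its own
 * contiguous block of one shared file. Illustrative only. */
#include <mpi.h>
#include <stdlib.h>

#define BLOCK 1048576   /* doubles per rank, placeholder checkpoint size */

int main(int argc, char **argv)
{
    int rank;
    MPI_File fh;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Placeholder checkpoint data owned by this rank. */
    double *data = malloc(BLOCK * sizeof(double));
    for (int i = 0; i < BLOCK; i++)
        data[i] = rank + i * 1.0e-6;

    /* Each rank writes its block at a rank-dependent offset of a shared file. */
    MPI_Offset offset = (MPI_Offset)rank * BLOCK * sizeof(double);
    MPI_File_open(MPI_COMM_WORLD, "checkpoint.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_File_write_at_all(fh, offset, data, BLOCK, MPI_DOUBLE, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    free(data);
    MPI_Finalize();
    return 0;
}
```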

The need for Exascale platforms is being driven by a set of significant scientific drivers: scientific challenges of global significance that cannot be solved on current large-scale systems. Each of the application papers in this journal represents a key community that needs to compute at the Exascale to deliver its scientific results. The set of papers within the journal covers a broad spectrum of scientific areas, and this is representative of the strong set of scientific drivers motivating efforts to address the challenges of computing at the extreme scale.

Guest Editors

Frédéric Magoulès
CentraleSupélec, University Paris-Saclay, France

Mark Parsons
The University of Edinburgh, United Kingdom

Lorna Smith
The University of Edinburgh, United Kingdom