The International Exascale Software Project roadmap


International Journal of High Performance Computing Applications 2011, 25: 3; originally published online 6 January 2011.
DOI: 10.1177/1094342010391989
The online version of this article can be found at: http://hpc.sagepub.com/content/25/1/3
Published by SAGE Publications: http://www.sagepublications.com
OnlineFirst version of record: 6 January 2011. Version of record: 11 February 2011.

The International Exascale Software
Project roadmap
Jack Dongarra, Pete Beckman, Terry Moore, Patrick Aerts,
Giovanni Aloisio, Jean-Claude Andre, David Barkai,
Jean-Yves Berthou, Taisuke Boku, Bertrand Braunschweig,
Franck Cappello, Barbara Chapman, Xuebin Chi, Alok Choudhary, Sudip Dosanjh,
Thom Dunning, Sandro Fiore, Al Geist, Bill Gropp, Robert Harrison, Mark Hereld,
Michael Heroux, Adolfy Hoisie, Koh Hotta, Zhong Jin, Yutaka Ishikawa, Fred Johnson,
Sanjay Kale, Richard Kenway, David Keyes, Bill Kramer, Jesus Labarta, Alain Lichnewsky,
Thomas Lippert, Bob Lucas, Barney Maccabe, Satoshi Matsuoka, Paul Messina,
Peter Michielse, Bernd Mohr, Matthias S. Mueller, Wolfgang E. Nagel, Hiroshi Nakashima,
Michael E. Papka, Dan Reed, Mitsuhisa Sato, Ed Seidel, John Shalf, David Skinner,
Marc Snir, Thomas Sterling, Rick Stevens, Fred Streitz, Bob Sugar, Shinji Sumimoto,
William Tang, John Taylor, Rajeev Thakur, Anne Trefethen, Mateo Valero,
Aad van der Steen, Jeffrey Vetter, Peg Williams, Robert Wisniewski and Kathy Yelick
Abstract
Over the last 20 years, the open-source community has provided more and more software on which the world's high-performance computing systems depend for performance and productivity. The community has invested millions of dollars and years of effort to build key components. However, although the investments in these separate software elements have been tremendously valuable, a great deal of productivity has also been lost because of the lack of planning, coordination, and key integration of technologies necessary to make them work together smoothly and efficiently, both within individual petascale systems and between different systems. It seems clear that this completely uncoordinated development model will not provide the software needed to support the unprecedented parallelism required for peta/exascale computation on millions of cores, or the flexibility required to exploit new hardware models and features, such as transactional memory, speculative execution, and graphics processing units. This report describes the work of the community to prepare for the challenges of exascale computing, ultimately combining their efforts in a coordinated International Exascale Software Project.
Keywords
exascale computing, high-performance computing, software stack
Table of Contents
1. Introduction
2. Destination of the IESP Roadmap
3. Technology Trends and their Impact on Exascale
3.1 Technology Trends
3.2 Science Trends
University of Tennessee at Knoxville, USA
Corresponding author:
Jack Dongarra, University of Tennessee at Knoxville, 1122 Volunteer Boulevard, Suite 203, Knoxville, TN 37996-3450, USA.
Email: dongarra@cs.utk.edu
The International Journal of High
Performance Computing Applications
25(1) 3–60
© The Author(s) 2011
Reprints and permission:
sagepub.co.uk/journalsPermissions.nav
DOI: 10.1177/1094342010391989
hpc.sagepub.com

3.2.1 Energy Security
3.3 Key Requirements Imposed by Trends on the X-stack
3.4 Relevant Politico-economic Trends
4. Formulating Paths Forward for X-stack Component Technologies
4.1 System Software
4.1.1 Operating Systems
4.1.1.1 Technology Drivers for Operating Systems: Increasing Importance of Effective Management of Increasingly Complex Resources
4.1.1.2 Alternative R&D Strategies for Operating Systems
4.1.1.3 Recommended Research Agenda for Operating Systems
4.1.2 Runtime Systems
4.1.2.1 Technology and Science Drivers for Runtime Systems
4.1.2.2 Alternative R&D Strategies for Runtime Systems
4.1.2.3 Recommended Research Agenda for Runtime Systems
4.1.2.4 Cross-cutting Considerations
4.1.3 I/O Systems
4.1.3.1 Technology and Science Drivers for I/O Systems
4.1.3.2 Alternative R&D Strategies for I/O Systems
4.1.3.3 Recommended Research Agenda for I/O Systems
4.1.3.4 Cross-cutting Considerations
4.1.4 Systems Management
4.1.4.1 Technology and Science Drivers for System Management
4.1.4.2 Alternative R&D Strategies for System Management
4.1.4.3 Recommended Research Agenda for System Management
4.1.4.4 Cross-cutting Considerations
4.1.5 External Environments
4.1.5.1 Technology and Science Drivers for External Environments
4.1.5.2 Alternative R&D Strategies for External Environments
4.1.5.3 Recommended Research Agenda for External Environments
4.1.5.4 Cross-cutting Considerations
4.2 Development Environments
4.2.1 Programming Models
4.2.1.1 Technology and Science Drivers for Programming Models
4.2.1.2 Alternative R&D Strategies for Programming Models
4.2.1.3 Recommended Research Agenda for Programming Models
4.2.1.4 Cross-cutting Considerations
4.2.2 Frameworks
4.2.2.1 Technology and Science Drivers for Frameworks
4.2.2.2 Alternative R&D Strategies for Frameworks
4.2.2.3 Recommended Research Agenda for Frameworks
4.2.2.4 Cross-cutting Considerations
4.2.3 Compilers
4.2.3.1 Technology and Science Drivers for Compilers
4.2.3.2 Alternative R&D Strategies for Compilers
4.2.3.3 Recommended Research Agenda for Compilers
4.2.3.4 Cross-cutting Considerations
4.2.4 Numerical Libraries
4.2.4.1 Technology and Science Drivers for Libraries
4.2.4.2 Alternative R&D Strategies for Libraries
4.2.4.3 Recommended Research Agenda for Libraries
4.2.4.4 Cross-cutting Considerations
4.2.5 Debugging
4.2.5.1 Technology Drivers for Debugging
4.2.5.2 Alternative R&D Strategies for Debugging
4.2.5.3 Recommended Research Agenda for Debugging
4.3 Applications
4.3.1 Application Element: Algorithms

4.3.1.1 Technology and Science Drivers for Algorithms
4.3.1.2 Alternative R&D Strategies for Algorithms
4.3.1.3 Recommended Research Agenda for Algorithms
4.3.1.4 Cross-cutting Considerations
4.3.2 Application Support: Data Analysis and Visualization
4.3.2.1 Technology and Science Drivers for Data Analysis and Visualization
4.3.2.2 Alternative R&D Strategies for Data Analysis and Visualization
4.3.2.3 Recommended Research Agenda for Data Analysis and Visualization
4.3.2.4 Cross-cutting Considerations
4.3.3 Application Support: Scientific Data Management
4.3.3.1 Technology and Science Drivers for Scientific Data Management
4.3.3.2 Alternative R&D Strategies for Scientific Data Management
4.3.3.3 Recommended Research Agenda for Scientific Data Management
4.3.3.4 Cross-cutting Considerations
4.4 Cross-cutting Dimensions
4.4.1 Resilience
4.4.1.1 Technology Drivers for Resilience
4.4.1.2 Gap Analysis
4.4.1.3 Alternative R&D Strategies
4.4.1.4 Recommended Research Agenda for Resilience
4.4.2 Power Management
4.4.2.1 Technology Drivers for Power Management
4.4.2.2 Alternative R&D Strategies for Power Management
4.4.2.3 Recommended Research Agenda for Power Management
4.4.3 Performance Optimization
4.4.3.1 Technology and Science Drivers for Performance Optimization
4.4.3.2 Alternative R&D Strategies for Performance Optimization
4.4.3.3 Recommended Research Agenda for Performance Optimization
4.4.3.4 Cross-cutting Considerations
4.4.4 Programmability
4.4.4.1 Technology and Science Drivers for Programmability
4.4.4.2 Alternative R&D Strategies for Programmability
4.4.4.3 Recommended Research Agenda for Programmability
4.4.4.4 Cross-cutting Considerations
4.5 Summary of X-Stack Priorities
5. Application Perspectives and Co-design Vehicles
5.1 From Here to Exascale: An Application Community View
5.2 IESP Application Co-design Vehicles
5.3 Initial Considerations for Co-design Vehicle Analysis
5.4 Representative Co-design Vehicles
5.4.1 High-energy Physics/QCD
5.4.2 Plasma Physics/Fusion Energy Sciences
5.4.3 Strategic Development of IESP CDVs
5.5 Matrix of Applications and Software Components Needs
6. Perspectives on Cooperation between IESP and HPC Vendor Communities
6.1 Challenging Issues for Vendor/Community Cooperation
6.2 Taxonomy of Development/Support Models
6.3 Requirements and Methods
6.4 Software Testing
6.5 Recommendations
7. IESP Organization and Governance
7.1 Importance of a Business Case
7.2 Application of Current Funding Mechanisms
7.3 Governance Model
7.4 Vendor Interaction
7.5 Timeline

1. Introduction
The technology roadmap presented here is the result of more than a year of coordinated effort within the global software community for high-end scientific computing. It is the product of a set of first steps taken to address a critical challenge that now confronts modern science and is produced by a convergence of three factors: (1) the compelling science case to be made, in both fields of deep intellectual interest and fields of vital importance to humanity, for increasing usable computing power by orders of magnitude as quickly as possible; (2) the clear and widely recognized inadequacy of the current high-end software infrastructure, in all its component areas, for supporting this essential escalation; and (3) the near complete lack of planning and coordination in the global scientific software community in overcoming the formidable obstacles that stand in the way of replacing it. At the beginning of 2009, a large group of collaborators from this worldwide community initiated the International Exascale Software Project (IESP) to carry out the planning and the organization building necessary to solve this vitally important problem.
With seed funding from key government partners in the United States, European Union (EU), and Japan, as well as supplemental contributions from some industry stakeholders, we formed the IESP around the following mission:

The guiding purpose of the IESP is to empower ultra-high resolution and data-intensive science and engineering research through the year 2020 by developing a plan for (1) a common, high-quality computational environment for petascale/exascale systems and (2) catalyzing, coordinating, and sustaining the effort of the international open-source software community to create that environment as quickly as possible.
There exist good reasons to think that such a plan is urgently needed. First and foremost, the magnitude of the technical challenges for software infrastructure that the novel architectures and extreme scale of emerging systems bring with them is daunting (Kogge et al., 2008; Sarkar et al., 2009b). These problems, which are already appearing on the leadership-class systems of the US National Science Foundation (NSF) and Department of Energy (DOE), as well as on systems in Europe and Asia, are more than sufficient to require the wholesale redesign and replacement of the operating systems (OSs), programming models, libraries, and tools on which high-end computing necessarily depends.
Secondly, the complex web of interdependencies and side effects that exists among such software components means that making sweeping changes to this infrastructure will require a high degree of coordination and collaboration. Failure to identify critical holes or potential conflicts in the software environment, to spot opportunities for beneficial integration, or to adequately specify component requirements will tend to retard or disrupt everyone's progress, wasting time that can ill afford to be lost. Since creating a software environment adapted for extreme-scale systems (e.g. the NSF's Blue Waters) will require the collective effort of a broad community, this community must have good mechanisms for internal coordination.
Thirdly, it seems clear that the scope of the effort must be truly international. In terms of its rationale, scientists in nearly every field now depend on the software infrastructure of high-end computing to open up new areas of inquiry (e.g. the very small, very large, very hazardous, and very complex), to dramatically increase their research productivity, and to amplify the social and economic impact of their work. It serves global scientific communities who need to work together on problems of global significance and to leverage distributed resources in transnational configurations. In terms of feasibility, the dimensions of the task (totally redesigning and recreating, in the period of just a few years, the massive software foundation of computational science in order to meet the new realities of extreme-scale computing) are simply too large for any one country, or small consortium of countries, to undertake on its own.
The IESP was formed to help achieve this goal. Beginning in April 2009, we held a series of three international workshops, one each in the United States, Europe, and Asia, in order to work out a plan for doing so. Information about these meetings, and their working products, can be found at the project website, http://www.exascale.org. In developing a plan for producing a new software infrastructure capable of supporting exascale applications, we charted a path that moves through the following sequence of objectives.
1. Make a thorough assessment of needs, issues, and strategies: a successful plan in this arena requires a thorough assessment of the technology drivers for future peta/exascale systems and of the short-term, medium-term, and long-term needs of applications that are expected to use them. The IESP workshops brought together a strong and broad-based contingent of experts in all areas of high-performance computing (HPC) software infrastructure, as well as representatives from application communities and vendors, to provide these assessments. As described in more detail below, we also leveraged the substantial number of reports and other material on future science applications and HPC technology trends that different parts of the community have created in the past three years.
2. Develop a coordinated software roadmap: the results of the group's analysis have been incorporated into a draft of a coordinated roadmap intended to help guide the open-source scientific software infrastructure effort with better coordination and fewer missing components. This document represents the current version of that roadmap.
3. Provide a framework for organizing the software research community: with a reasonably stable and complete version of the roadmap in hand, we will
