Author

Aad J. van der Steen

Other affiliations: University of Tennessee
Bio: Aad J. van der Steen is an academic researcher from Utrecht University. The author has contributed to research in topics: Benchmark (computing) & Computational resource. The author has an h-index of 6 and has co-authored 10 publications receiving 890 citations. Previous affiliations of Aad J. van der Steen include the University of Tennessee.

Papers
Journal ArticleDOI
01 Feb 2011
TL;DR: Describes the work of the community to prepare for the challenges of exascale computing, ultimately combining its efforts in a coordinated International Exascale Software Project.
Abstract: Over the last 20 years, the open-source community has provided more and more software on which the world’s high-performance computing systems depend for performance and productivity. The community has invested millions of dollars and years of effort to build key components. However, although the investments in these separate software elements have been tremendously valuable, a great deal of productivity has also been lost because of the lack of planning, coordination, and key integration of technologies necessary to make them work together smoothly and efficiently, both within individual petascale systems and between different systems. It seems clear that this completely uncoordinated development model will not provide the software needed to support the unprecedented parallelism required for peta-/exascale computation on millions of cores, or the flexibility required to exploit new hardware models and features, such as transactional memory, speculative execution, and graphics processing units. This report describes the work of the community to prepare for the challenges of exascale computing, ultimately combining their efforts in a coordinated International Exascale Software Project.

736 citations

Journal ArticleDOI
TL;DR: The paper gives a preview of the most interesting research results obtained so far in the DAS project.
Abstract: The Distributed ASCI Supercomputer (DAS) is a homogeneous wide-area distributed system consisting of four cluster computers at different locations. DAS has been used for research on communication software, parallel languages and programming systems, schedulers, parallel applications, and distributed applications. The paper gives a preview of the most interesting research results obtained so far in the DAS project.

129 citations

Journal ArticleDOI
TL;DR: Issues that are common to computational frameworks for compute-intensive modeling and either need to be resolved or require design decisions to arrive at a more uniform behavior of such frameworks are discussed.
Abstract: In this paper we discuss issues that are common to computational frameworks for compute-intensive modeling and that either need to be resolved or require design decisions to arrive at more uniform behavior of such frameworks.

14 citations

Journal ArticleDOI
TL;DR: From the results of an evaluation project on three Beowulf-type clusters, answers are derived about the viability of using cluster systems routinely in a multi-user environment with maintenance cost and effort comparable to that of an integrated parallel machine.
Abstract: We report the results of an evaluation project on three Beowulf-type clusters. The purpose of this study was to assess both the performance of the clusters and the availability and quality of the software for cluster management and management of the available resources. This last goal could hardly be achieved because, at the time this project was undertaken, much of the management software was either very immature or not yet available. However, it was possible to assess cluster performance both from the point of view of single-program execution and with respect to throughput, by loading the systems according to a predefined schedule via the available batch systems. To this end a set of application programs, ranging from astronomy to quantum chemistry, was employed together with a synthetic benchmark. From the results we wanted to derive answers about the viability of using cluster systems routinely in a multi-user environment with maintenance cost and effort comparable to that of an integrated parallel machine.
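A minimal sketch of the throughput methodology described above: replay a predefined job-submission schedule against a fixed number of execution slots and measure the makespan. The slot count, schedule, and sleep-based workload are invented stand-ins for the paper's batch systems and application/synthetic benchmarks.

```python
# Illustrative throughput experiment: replay a predefined job schedule
# against a fixed number of "cluster" slots and measure makespan.
# The workload (sleep durations) and slot count are hypothetical stand-ins
# for the benchmarks and batch system used in the actual evaluation.
import time
from concurrent.futures import ThreadPoolExecutor

SLOTS = 4  # pretend the batch system exposes 4 concurrent slots

# (submit_time_s, runtime_s) pairs: the predefined submission schedule
SCHEDULE = [(0.0, 0.5), (0.1, 0.3), (0.2, 0.8), (0.5, 0.2), (0.6, 0.4)]

def run_job(runtime_s: float) -> float:
    start = time.perf_counter()
    time.sleep(runtime_s)          # stand-in for a real benchmark run
    return time.perf_counter() - start

def main() -> None:
    t0 = time.perf_counter()
    with ThreadPoolExecutor(max_workers=SLOTS) as pool:
        futures = []
        for submit_at, runtime in SCHEDULE:
            # honour the predefined submission schedule
            time.sleep(max(0.0, submit_at - (time.perf_counter() - t0)))
            futures.append(pool.submit(run_job, runtime))
        results = [f.result() for f in futures]
    makespan = time.perf_counter() - t0
    print(f"jobs={len(results)} makespan={makespan:.2f}s "
          f"throughput={len(results) / makespan:.2f} jobs/s")

if __name__ == "__main__":
    main()
```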

11 citations

Journal ArticleDOI
TL;DR: The development of a program that aims at achieving the optimal integration of observed data in an oceanographic model describing the water transport phenomena in the Agulhas area at the tip of South Africa is described.
Abstract: In this paper we describe the development of a program that aims at the optimal integration of observed data in an oceanographic model describing the water transport phenomena in the Agulhas area at the tip of South Africa.
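The abstract does not spell out the assimilation scheme, so as a generic illustration of "optimal integration of observed data" here is a scalar Kalman-style update that blends a model forecast with an observation, weighting each by its uncertainty. The numbers and the function itself are hypothetical, not the paper's method.

```python
# Generic data-assimilation step (scalar Kalman update), shown only to
# illustrate "optimal integration of observed data"; the paper's actual
# oceanographic scheme is not specified in this abstract.

def assimilate(forecast: float, var_f: float,
               observation: float, var_o: float) -> tuple[float, float]:
    """Blend a model forecast with an observation, weighting by variance."""
    gain = var_f / (var_f + var_o)            # Kalman gain
    analysis = forecast + gain * (observation - forecast)
    var_a = (1.0 - gain) * var_f              # reduced analysis variance
    return analysis, var_a

# Example: model says current speed 0.8 m/s (var 0.04), a buoy observes
# 1.0 m/s (var 0.01) -> the analysis is pulled toward the observation.
state, var = assimilate(0.8, 0.04, 1.0, 0.01)
print(f"analysis={state:.3f} m/s, variance={var:.4f}")
```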

9 citations


Cited by
Journal ArticleDOI
TL;DR: This work reviews the recent status of methodologies and techniques related to the construction of digital twins mostly from a modeling perspective to provide a detailed coverage of the current challenges and enabling technologies along with recommendations and reflections for various stakeholders.
Abstract: Digital twin can be defined as a virtual representation of a physical asset enabled through data and simulators for real-time prediction, optimization, monitoring, controlling, and improved decision making. Recent advances in computational pipelines, multiphysics solvers, artificial intelligence, big data cybernetics, data processing and management tools bring the promise of digital twins and their impact on society closer to reality. Digital twinning is now an important and emerging trend in many applications. Also referred to as a computational megamodel, device shadow, mirrored system, avatar, or synchronized virtual prototype, a digital twin plays a transformative role not only in how we design and operate cyber-physical intelligent systems, but also in how we advance the modularity of multi-disciplinary systems to tackle fundamental barriers not addressed by the current, evolutionary modeling practices. In this work, we review the recent status of methodologies and techniques related to the construction of digital twins mostly from a modeling perspective. Our aim is to provide a detailed coverage of the current challenges and enabling technologies along with recommendations and reflections for various stakeholders.
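One way to read the definition above is as a loop that keeps a simulator synchronized with sensor data and then uses it to predict ahead. The sketch below is a hypothetical minimal example, not an architecture from the review; the first-order thermal model and the sensor readings are invented for illustration.

```python
# Hypothetical digital-twin loop: synchronize a toy simulator with
# incoming sensor data, then predict ahead for decision making.
# Real twins couple multiphysics solvers and live data streams.

class ThermalTwin:
    """Virtual representation of an asset's temperature state."""

    def __init__(self, temp: float, cooling_rate: float):
        self.temp = temp
        self.cooling_rate = cooling_rate

    def step(self, ambient: float, dt: float) -> None:
        # Newton's law of cooling as a stand-in "simulator"
        self.temp += self.cooling_rate * (ambient - self.temp) * dt

    def sync(self, measured_temp: float, weight: float = 0.5) -> None:
        # nudge the virtual state toward the physical measurement
        self.temp += weight * (measured_temp - self.temp)

    def predict(self, ambient: float, dt: float, steps: int) -> float:
        temp = self.temp
        for _ in range(steps):
            temp += self.cooling_rate * (ambient - temp) * dt
        return temp

twin = ThermalTwin(temp=90.0, cooling_rate=0.1)
for measurement in [88.0, 85.5, 83.2]:   # simulated sensor readings
    twin.step(ambient=20.0, dt=1.0)      # advance the virtual model
    twin.sync(measurement)               # assimilate the real data
print(f"predicted temp in 10 steps: {twin.predict(20.0, 1.0, 10):.1f}")
```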

660 citations

Journal ArticleDOI
TL;DR: This thesis proposes a distributed computational economy as an effective metaphor for the management of resources and application scheduling, together with an architectural framework that supports resource trading and quality-of-service-based scheduling, enabling the regulation of supply and demand for resources.
Abstract: Computational Grids, emerging as an infrastructure for next-generation computing, enable the sharing, selection, and aggregation of geographically distributed resources for solving large-scale problems in science, engineering, and commerce. The resources in the Grid are heterogeneous and geographically distributed, with varying availability, a variety of usage and cost policies for diverse users at different times, and priorities and goals that vary with time. The management of resources and application scheduling in such a large and distributed environment is therefore a complex task. This thesis proposes a distributed computational economy as an effective metaphor for the management of resources and application scheduling. It proposes an architectural framework that supports resource trading and quality-of-service-based scheduling. It enables the regulation of supply and demand for resources, provides an incentive for resource owners to participate in the Grid, and motivates users to trade off between the deadline, budget, and required level of quality of service. The thesis demonstrates the capability of economy-based systems for peer-to-peer distributed computing by developing scheduling strategies and algorithms driven by users' quality-of-service requirements. It demonstrates their effectiveness by performing scheduling experiments on the World-Wide Grid for solving parameter-sweep applications.
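The deadline/budget trade-off described above can be sketched as a greedy scheduler that buys the cheapest resources still able to meet the deadline. This is a toy illustration in the spirit of deadline-and-budget constrained scheduling, not the thesis's actual algorithms; the resource names, prices, and speeds are invented.

```python
# Toy deadline-and-budget constrained scheduler in the spirit of the
# computational-economy idea above: assign jobs to the cheapest resources
# that can still finish before the deadline. All figures are invented.

def schedule(jobs: int, job_len: float, deadline: float, budget: float,
             resources: list[dict]) -> list[tuple[str, int]]:
    """Greedily assign jobs to resources, cheapest first."""
    plan, remaining, spent = [], jobs, 0.0
    for r in sorted(resources, key=lambda r: r["cost_per_job"]):
        if remaining == 0:
            break
        # jobs this resource can finish serially before the deadline
        capacity = int(deadline // (job_len / r["speed"]))
        # jobs the leftover budget can pay for on this resource
        affordable = int((budget - spent) // r["cost_per_job"])
        n = min(remaining, capacity, affordable)
        if n > 0:
            plan.append((r["name"], n))
            remaining -= n
            spent += n * r["cost_per_job"]
    if remaining > 0:
        raise RuntimeError("deadline/budget cannot be met")
    return plan

resources = [
    {"name": "cheap-slow", "speed": 1.0, "cost_per_job": 1.0},
    {"name": "fast-dear",  "speed": 4.0, "cost_per_job": 3.0},
]
print(schedule(jobs=20, job_len=2.0, deadline=10.0, budget=60.0,
               resources=resources))
# -> [('cheap-slow', 5), ('fast-dear', 15)]
```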

579 citations

Journal ArticleDOI
TL;DR: The Einstein Toolkit is a community-driven, freely accessible computational infrastructure for numerical relativity, relativistic astrophysics, and other applications, combining a core set of components needed to simulate astrophysical objects such as black holes, compact objects, and collapsing stars.
Abstract: We describe the Einstein Toolkit, a community-driven, freely accessible computational infrastructure intended for use in numerical relativity, relativistic astrophysics, and other applications. The toolkit, developed by a collaboration involving researchers from multiple institutions around the world, combines a core set of components needed to simulate astrophysical objects such as black holes, compact objects, and collapsing stars, as well as a full suite of analysis tools. The Einstein Toolkit is currently based on the Cactus framework for high-performance computing and the Carpet adaptive mesh refinement driver. It implements spacetime evolution via the BSSN evolution system and general relativistic hydrodynamics in a finite-volume discretization. The toolkit is under continuous development and contains many new code components that have been publicly released for the first time and are described in this paper. We discuss the motivation behind the release of the toolkit, the philosophy underlying its development, and the goals of the project. A summary of the implemented numerical techniques is included, as are results of numerical tests covering a variety of sample astrophysical problems.
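As a loose illustration of the finite-volume discretization mentioned above, here is a one-dimensional finite-volume advection update with an upwind flux. This is emphatically not Einstein Toolkit code, which evolves the BSSN system and relativistic hydrodynamics on Cactus/Carpet meshes; it only shows the cell-average update pattern such schemes share.

```python
# Loose illustration of a finite-volume update (1-D linear advection
# with an upwind flux). NOT Einstein Toolkit code; the real toolkit
# couples Cactus, Carpet, and the BSSN evolution system.
import numpy as np

def advect(u: np.ndarray, speed: float, dx: float, dt: float,
           steps: int) -> np.ndarray:
    """Evolve cell averages u with periodic boundaries (speed > 0)."""
    for _ in range(steps):
        flux = speed * u                       # flux in each cell
        # upwind flux difference across each cell, periodic wrap-around
        u = u - (dt / dx) * (flux - np.roll(flux, 1))
    return u

dx, dt = 0.01, 0.005                           # CFL = speed*dt/dx = 0.5
x = np.arange(0.0, 1.0, dx)
u0 = np.exp(-200 * (x - 0.3) ** 2)             # Gaussian pulse
u1 = advect(u0, speed=1.0, dx=dx, dt=dt, steps=100)
print(f"pulse peak moved to x = {x[np.argmax(u1)]:.2f}")  # ~0.8
```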

479 citations

Journal ArticleDOI
01 May 2014
TL;DR: Presents a report produced by the workshop 'Addressing failures in exascale computing', held in Park City, Utah, 4-11 August 2012, summarizing and building on its discussions of resilience.
Abstract: We present here a report produced by a workshop on 'Addressing failures in exascale computing' held in Park City, Utah, 4-11 August 2012. The charter of this workshop was to establish a common taxonomy about resilience across all the levels in a computing system, discuss existing knowledge on resilience across the various hardware and software layers of an exascale system, and build on those results, examining potential solutions from both a hardware and software perspective and focusing on a combined approach. The workshop brought together participants with expertise in applications, system software, and hardware; they came from industry, government, and academia, and their interests ranged from theory to implementation. The combination allowed broad and comprehensive discussions and led to this document, which summarizes and builds on those discussions.
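Checkpoint/restart is one of the software-level resilience techniques examined in this space; the sketch below shows the basic pattern of periodically saving state and resuming after a failure. It is a generic illustration, not taken from the workshop report; the iterative "solver" and file name are invented.

```python
# Minimal checkpoint/restart sketch, a common software-level resilience
# pattern; the computation and file name are hypothetical placeholders.
import os
import pickle

CHECKPOINT = "state.ckpt"  # hypothetical checkpoint file

def load_checkpoint() -> tuple[int, float]:
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)        # resume after a failure
    return 0, 0.0                         # fresh start

def save_checkpoint(step: int, value: float) -> None:
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump((step, value), f)
    os.replace(tmp, CHECKPOINT)           # atomic rename survives crashes

step, value = load_checkpoint()
while step < 1000:
    value += 1.0 / (step + 1)             # stand-in for real computation
    step += 1
    if step % 100 == 0:                   # checkpoint interval
        save_checkpoint(step, value)
print(f"finished at step {step}, value {value:.4f}")
```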

406 citations

Journal ArticleDOI
TL;DR: This work argues that scientific discovery and engineering innovation require unifying the traditionally separated fields of high-performance computing and big data analytics.
Abstract: Scientific discovery and engineering innovation require unifying traditionally separated high-performance computing and big data analytics.

373 citations