Author

Hasan Abbasi

Bio: Hasan Abbasi is an academic researcher from Oak Ridge National Laboratory. The author has contributed to research topics including Analytics and Petascale computing. The author has an h-index of 22 and has co-authored 58 publications receiving 2,076 citations. Previous affiliations of Hasan Abbasi include the Georgia Institute of Technology and the University of Tennessee.


Papers
Journal ArticleDOI
TL;DR: The startling observations made in the last half decade of I/O research and development are described, and some of the challenges that remain for the coming Exascale era are detailed.
Abstract: Applications running on leadership platforms are more and more bottlenecked by storage input/output (I/O). In an effort to combat the increasing disparity between I/O throughput and compute capability, we created the Adaptable IO System (ADIOS) in 2005. Focusing on putting users first with a service-oriented architecture, we combined cutting-edge research into new I/O techniques with a design effort to create near-optimal I/O methods. As a result, ADIOS provides the highest level of synchronous I/O performance for a number of mission-critical applications at various Department of Energy Leadership Computing Facilities. Meanwhile, ADIOS is leading the push for next-generation techniques including staging and data processing pipelines. In this paper, we describe the startling observations we have made in the last half decade of I/O research and development, and elaborate the lessons we have learned along this journey. We also detail some of the challenges that remain as we look toward the coming Exascale era. Copyright © 2013 John Wiley & Sons, Ltd.
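The design point described above, where applications write through one stable interface while the concrete I/O method behind it can be swapped, can be illustrated with a small sketch. The code below is not ADIOS; the names (IOMethod, SynchronousMethod, StagedMethod, write_step) are hypothetical, and the snippet only shows the general shape of a service-oriented I/O layer in which a blocking method and a staged, asynchronous method are interchangeable.

```python
# Minimal sketch (not the ADIOS API): a service-oriented I/O layer in which
# the application writes through one interface and the concrete method
# (synchronous write vs. staged/asynchronous write) is pluggable.
import queue
import threading


class IOMethod:
    """Abstract I/O method; the application never depends on a concrete one."""
    def write_step(self, step, data):
        raise NotImplementedError

    def close(self):
        pass


class SynchronousMethod(IOMethod):
    """Blocking write, analogous to writing straight to the file system."""
    def __init__(self, path):
        self.f = open(path, "w")

    def write_step(self, step, data):
        self.f.write(f"step {step}: {data}\n")
        self.f.flush()                 # the application waits for the I/O to finish

    def close(self):
        self.f.close()


class StagedMethod(IOMethod):
    """Hands data to a background 'staging' thread and returns immediately."""
    def __init__(self, path):
        self.q = queue.Queue()
        self.t = threading.Thread(target=self._drain, args=(path,))
        self.t.start()

    def _drain(self, path):
        with open(path, "w") as f:
            for step, data in iter(self.q.get, None):
                f.write(f"step {step}: {data}\n")

    def write_step(self, step, data):
        self.q.put((step, data))       # the compute loop is not blocked by storage

    def close(self):
        self.q.put(None)
        self.t.join()


def simulation(io: IOMethod, steps=3):
    for step in range(steps):
        state = [step * 0.1] * 4       # stand-in for simulation output
        io.write_step(step, state)     # same call regardless of method
    io.close()


if __name__ == "__main__":
    simulation(SynchronousMethod("sync.out"))
    simulation(StagedMethod("staged.out"))
```

Swapping SynchronousMethod for StagedMethod changes how and when data reaches storage without touching the simulation loop, which is the property the abstract attributes to ADIOS's service-oriented design.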

201 citations

Journal ArticleDOI
TL;DR: Experimental evaluations of the flexible ‘DataStager’ framework establish both the necessity of intelligent data staging and the high performance of the approach, using the GTC fusion modeling code and benchmarks running on 1000+ processors.
Abstract: Known challenges for petascale machines are that (1) the costs of I/O for high performance applications can be substantial, especially for output tasks like checkpointing, and (2) noise from I/O actions can inject undesirable delays into the runtimes of such codes on individual compute nodes. This paper introduces the flexible 'DataStager' framework for data staging and alternative services within that jointly address (1) and (2). Data staging services moving output data from compute nodes to staging or I/O nodes prior to storage are used to reduce I/O overheads on applications' total processing times, and explicit management of data staging offers reduced perturbation when extracting output data from a petascale machine's compute partition. Experimental evaluations of DataStager on the Cray XT machine at Oak Ridge National Laboratory establish both the necessity of intelligent data staging and the high performance of our approach, using the GTC fusion modeling code and benchmarks running on 1000+ processors.
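To make the staging idea concrete, here is a rough, hypothetical sketch (it is not the DataStager API): compute ranks publish output buffers, and a separate stager pulls them on its own schedule before writing to storage, so extraction is explicitly managed rather than competing with the application at arbitrary times. Threads stand in for compute and staging nodes.

```python
# Conceptual sketch only (not DataStager): compute ranks expose output buffers,
# and a staging node *pulls* them on its own schedule, so extraction can be
# deferred to moments when it perturbs the application least.
import threading
import time
from collections import deque

NUM_RANKS = 4


class ComputeRank:
    def __init__(self, rank):
        self.rank = rank
        self.ready = deque()           # buffers waiting to be extracted
        self.lock = threading.Lock()

    def run(self, steps=3):
        for step in range(steps):
            time.sleep(0.01)           # stand-in for computation
            buf = f"rank{self.rank}-step{step}"
            with self.lock:
                self.ready.append(buf) # publish the buffer; do not write it here

    def take(self):
        with self.lock:
            return self.ready.popleft() if self.ready else None


def stager(ranks, stop, storage):
    # Server-directed scheduling: the stager decides when to pull from whom,
    # here simply round-robin with a small pause between sweeps.
    while not stop.is_set() or any(r.ready for r in ranks):
        for r in ranks:
            buf = r.take()
            if buf is not None:
                storage.append(buf)    # stand-in for writing to the file system
        time.sleep(0.005)


if __name__ == "__main__":
    ranks = [ComputeRank(i) for i in range(NUM_RANKS)]
    storage, stop = [], threading.Event()
    s = threading.Thread(target=stager, args=(ranks, stop, storage))
    s.start()
    workers = [threading.Thread(target=r.run) for r in ranks]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    stop.set()
    s.join()
    print(f"staged {len(storage)} buffers to storage")
```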

199 citations

Proceedings ArticleDOI
10 Nov 2012
TL;DR: The lightweight, flexible framework allows scientists dealing with the data deluge at extreme scale to perform analyses at increased temporal resolutions, mitigate I/O costs, and significantly improve the time to insight.
Abstract: With the onset of extreme-scale computing, I/O constraints make it increasingly difficult for scientists to save a sufficient amount of raw simulation data to persistent storage. One potential solution is to change the data analysis pipeline from a post-process-centric to a concurrent approach based on either in-situ or in-transit processing. In this context, computations are considered in-situ if they utilize the primary compute resources, while in-transit processing refers to offloading computations to a set of secondary resources using asynchronous data transfers. In this paper we explore the design and implementation of three common analysis techniques typically performed on large-scale scientific simulations: topological analysis, descriptive statistics, and visualization. We summarize algorithmic developments, describe a resource scheduling system to coordinate the execution of various analysis workflows, and discuss our implementation using the DataSpaces and ADIOS frameworks that support efficient data movement between in-situ and in-transit computations. We demonstrate the efficiency of our lightweight, flexible framework by deploying it on the Jaguar XK6 to analyze data generated by S3D, a massively parallel turbulent combustion code. Our framework allows scientists dealing with the data deluge at extreme scale to perform analyses at increased temporal resolutions, mitigate I/O costs, and significantly improve the time to insight.
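A minimal sketch of the in-situ versus in-transit split described above, using hypothetical function names rather than the DataSpaces or ADIOS interfaces: a cheap reduction runs synchronously inside the simulation step (in situ), while heavier descriptive statistics are handed off asynchronously to secondary workers (in transit), so the simulation does not wait for them.

```python
# Illustrative sketch (hypothetical names, not the DataSpaces/ADIOS APIs):
# cheap "in situ" reductions run synchronously on the primary resource, while
# heavier analysis is shipped asynchronously to "in transit" workers.
import random
import statistics
from concurrent.futures import ThreadPoolExecutor


def in_situ_reduce(field):
    # Runs inside the simulation's time step on the compute resource:
    # kept cheap so it barely delays the next step.
    return min(field), max(field)


def in_transit_analysis(step, field):
    # Runs on secondary resources after an asynchronous hand-off:
    # can afford heavier descriptive statistics or visualization.
    return step, statistics.mean(field), statistics.pstdev(field)


def simulation(steps=5, cells=10_000):
    results = []
    with ThreadPoolExecutor(max_workers=2) as transit_nodes:
        for step in range(steps):
            field = [random.random() for _ in range(cells)]      # fake solver output
            lo, hi = in_situ_reduce(field)                       # synchronous, cheap
            # Asynchronous hand-off: the simulation does not wait for this result.
            results.append(transit_nodes.submit(in_transit_analysis, step, field))
            print(f"step {step}: in-situ min={lo:.3f} max={hi:.3f}")
        for fut in results:
            step, mean, std = fut.result()
            print(f"step {step}: in-transit mean={mean:.3f} std={std:.3f}")


if __name__ == "__main__":
    simulation()
```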

185 citations

Proceedings ArticleDOI
19 Apr 2010
TL;DR: PreDatA, short for Preparatory Data Analytics, is an approach to preparing and characterizing data while it is being produced by the large scale simulations running on peta-scale machines that enhances the scalability and flexibility of the current I/O stack on HEC platforms.
Abstract: Peta-scale scientific applications running on High End Computing (HEC) platforms can generate large volumes of data. For high performance storage and in order to be useful to science end users, such data must be organized in its layout, indexed, sorted, and otherwise manipulated for subsequent data presentation, visualization, and detailed analysis. In addition, scientists desire to gain insights into selected data characteristics ‘hidden’ or ‘latent’ in these massive datasets while data is being produced by simulations. PreDatA, short for Preparatory Data Analytics, is an approach to preparing and characterizing data while it is being produced by the large scale simulations running on peta-scale machines. By dedicating additional compute nodes on the machine as ‘staging’ nodes and by staging simulations' output data through these nodes, PreDatA can exploit their computational power to perform select data manipulations with lower latency than attainable by first moving data into file systems and storage. Such in-transit manipulations are supported by the PreDatA middleware through asynchronous data movement to reduce write latency, application-specific operations on streaming data that are able to discover latent data characteristics, and appropriate data reorganization and metadata annotation to speed up subsequent data access. PreDatA enhances the scalability and flexibility of the current I/O stack on HEC platforms and is useful for data pre-processing, runtime data analysis and inspection, as well as for data exchange between concurrently running simulations.
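The following sketch illustrates the preparatory-analytics idea in miniature; the names are hypothetical and the code is not the PreDatA middleware. As output chunks pass through a staging stage, each one is annotated with lightweight metadata (here a per-chunk min/max) so that later queries can skip chunks without reading the raw data.

```python
# Rough sketch of the idea (hypothetical names, not PreDatA): as output chunks
# stream through a staging node, annotate each with lightweight metadata so
# later reads can skip chunks without scanning the raw data.
import json
import random


def staging_node(chunk_stream):
    """Consume chunks from the simulation and emit (metadata, chunk) records."""
    for chunk_id, values in chunk_stream:
        meta = {
            "chunk": chunk_id,
            "count": len(values),
            "min": min(values),
            "max": max(values),
        }
        yield meta, values


def query_with_index(records, threshold):
    """Use the per-chunk metadata to skip chunks that cannot match."""
    hits = 0
    for meta, values in records:
        if meta["max"] < threshold:      # characteristic discovered in transit
            continue                     # skip without touching the raw data
        hits += sum(1 for v in values if v >= threshold)
    return hits


if __name__ == "__main__":
    stream = [(i, [random.uniform(0, i + 1) for _ in range(1000)]) for i in range(8)]
    records = list(staging_node(stream))
    print(json.dumps([m for m, _ in records], indent=2)[:200], "...")
    print("values >= 6.0:", query_with_index(records, 6.0))
```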

173 citations

Proceedings ArticleDOI
11 Jun 2009
TL;DR: Experimental evaluations of DataStager on the Cray XT machine at Oak Ridge National Laboratory establish the necessity of intelligent data staging and the high performance of the approach, using the GTC fusion modeling code and benchmarks running on 1000+ processors.
Abstract: Known challenges for petascale machines are that (1) the costs of I/O for high performance applications can be substantial, especially for output tasks like checkpointing, and (2) noise from I/O actions can inject undesirable delays into the runtimes of such codes on individual compute nodes. This paper introduces the flexible 'DataStager' framework for data staging and alternative services within that jointly address (1) and (2). Data staging services moving output data from compute nodes to staging or I/O nodes prior to storage are used to reduce I/O overheads on applications' total processing times, and explicit management of data staging offers reduced perturbation when extracting output data from a petascale machine's compute partition. Experimental evaluations of DataStager on the Cray XT machine at Oak Ridge National Laboratory establish both the necessity of intelligent data staging and the high performance of our approach, using the GTC fusion modeling code and benchmarks running on 1000+ processors.

147 citations


Cited by
01 May 1993
TL;DR: Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems.
Abstract: Three parallel algorithms for classical molecular dynamics are presented. The first assigns each processor a fixed subset of atoms; the second assigns each a fixed subset of inter-atomic forces to compute; the third assigns each a fixed spatial region. The algorithms are suitable for molecular dynamics models which can be difficult to parallelize efficiently: those with short-range forces where the neighbors of each atom change rapidly. They can be implemented on any distributed-memory parallel machine which allows for message-passing of data between independently executing processors. The algorithms are tested on a standard Lennard-Jones benchmark problem for system sizes ranging from 500 to 100,000,000 atoms on several parallel supercomputers: the nCUBE 2, Intel iPSC/860 and Paragon, and Cray T3D. Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems. For large problems, the spatial algorithm achieves parallel efficiencies of 90% and an 1840-node Intel Paragon performs up to 165 times faster than a single Cray C90 processor. Trade-offs between the three algorithms and guidelines for adapting them to more complex molecular dynamics simulations are also discussed.
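A toy illustration of the two simplest decompositions named above (my sketch, not the paper's code): atom decomposition gives each processor a fixed subset of atoms regardless of where they sit, while spatial decomposition gives each processor a fixed region of the simulation box, which is the variant the abstract reports scaling best for short-range forces.

```python
# Toy sketch of two of the decompositions described above (not the paper's
# code): atom decomposition assigns each processor a fixed subset of atoms;
# spatial decomposition assigns each a fixed region of the box.
import random

NUM_PROCS = 4
BOX = 10.0          # cubic box edge length (arbitrary units)


def atom_decomposition(atoms):
    """Processor p owns atoms p, p+P, p+2P, ... regardless of position."""
    return {p: [a for i, a in enumerate(atoms) if i % NUM_PROCS == p]
            for p in range(NUM_PROCS)}


def spatial_decomposition(atoms):
    """Split the box into equal slabs along x; ownership follows position."""
    slab = BOX / NUM_PROCS
    owners = {p: [] for p in range(NUM_PROCS)}
    for a in atoms:
        p = min(int(a[0] // slab), NUM_PROCS - 1)
        owners[p].append(a)
    return owners


if __name__ == "__main__":
    atoms = [tuple(random.uniform(0, BOX) for _ in range(3)) for _ in range(20)]
    for name, decomp in [("atom", atom_decomposition(atoms)),
                         ("spatial", spatial_decomposition(atoms))]:
        counts = {p: len(v) for p, v in decomp.items()}
        print(f"{name} decomposition, atoms per processor: {counts}")
```

Under the spatial variant each processor only needs neighbor information from adjacent regions, which is why it parallelizes short-range force models so well; the trade-off is that ownership must be updated as atoms move.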

29,323 citations

Journal ArticleDOI
TL;DR: This study aims to provide a common basis for CPM climate simulations by giving a holistic review of the topic, and presents the consolidated outcome of studies that addressed the added value of CPM climate simulations compared to LSMs.
Abstract: Regional climate modeling using convection-permitting models (CPMs; horizontal grid spacing < 4 km) has emerged as a promising framework for providing more reliable climate information on regional to local scales than traditionally used large-scale models (LSMs; grid spacing > 10 km). CPMs no longer rely on convection parameterization schemes, which had been identified as a major source of errors and uncertainties in LSMs. Moreover, CPMs allow for a more accurate representation of surface and orography fields. The drawback of CPMs is the high demand on computational resources. For this reason, first CPM climate simulations only appeared a decade ago. In this study, we aim to provide a common basis for CPM climate simulations by giving a holistic review of the topic. The most important components in CPMs such as physical parameterizations and dynamical formulations are discussed critically. An overview of weaknesses and an outlook on required future developments is provided. Most importantly, this review presents the consolidated outcome of studies that addressed the added value of CPM climate simulations compared to LSMs. Improvements are evident mostly for climate statistics related to deep convection, mountainous regions, or extreme events. The climate change signals of CPM simulations suggest an increase in flash floods, changes in hail storm characteristics, and reductions in the snowpack over mountains. In conclusion, CPMs are a very promising tool for future climate research. However, coordinated modeling programs are crucially needed to advance parameterizations of unresolved physics and to assess the full potential of CPMs.

833 citations

Posted Content
TL;DR: Analyzing these data to find the subtle effects missed by previous studies requires algorithms that can deal with huge datasets while detecting very subtle effects: finding both needles in the haystack and very small haystacks that went undetected in previous measurements.
Abstract: This is a thought piece on data-intensive science requirements for databases and science centers. It argues that peta-scale datasets will be housed by science centers that provide substantial storage and processing for scientists who access the data via smart notebooks. Next-generation science instruments and simulations will generate these peta-scale datasets. The need to publish and share data and the need for generic analysis and visualization tools will finally create a convergence on common metadata standards. Database systems will be judged by their support of these metadata standards and by their ability to manage and access peta-scale datasets. The procedural stream-of-bytes-file-centric approach to data analysis is both too cumbersome and too serial for such large datasets. Non-procedural query and analysis of schematized self-describing data is both easier to use and allows much more parallelism.
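The contrast the abstract draws between procedural, file-centric analysis and non-procedural query of schematized data can be shown with a small, hypothetical example (mine, not the paper's): the same count is computed by scanning a flat file and by a declarative query that an engine is free to plan, index, and parallelize.

```python
# Small illustration of the contrast drawn above (my example, not the paper's):
# the same question asked procedurally over a flat file versus declaratively
# over schematized data, where the engine is free to plan and parallelize.
import csv
import io
import sqlite3

RAW = "star,magnitude\nA,3.2\nB,7.9\nC,2.1\nD,8.4\n"


def procedural_count(text, limit):
    # Procedural, stream-of-bytes style: the analysis is welded to the layout.
    count = 0
    for row in csv.DictReader(io.StringIO(text)):
        if float(row["magnitude"]) > limit:
            count += 1
    return count


def declarative_count(text, limit):
    # Non-procedural style: declare what is wanted over a schematized table.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE stars (name TEXT, magnitude REAL)")
    rows = [(r["star"], float(r["magnitude"]))
            for r in csv.DictReader(io.StringIO(text))]
    con.executemany("INSERT INTO stars VALUES (?, ?)", rows)
    (count,) = con.execute(
        "SELECT COUNT(*) FROM stars WHERE magnitude > ?", (limit,)).fetchone()
    con.close()
    return count


if __name__ == "__main__":
    print(procedural_count(RAW, 5.0), declarative_count(RAW, 5.0))  # both print 2
```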

476 citations

Proceedings ArticleDOI
23 Jun 2008
TL;DR: The Adaptable IO System provides an API nearly as simple as POSIX IO that also provides developers with the flexibility of selecting the optimal IO routines for a given platform, without recompilation.
Abstract: Scientific codes are all subject to variation in performance depending on the runtime platform and/or configuration, the output writing API employed, and the file system for output. Since changing the IO routines to match the optimal or desired configuration for a given system can be costly in terms of human time and machine resources, the Adaptable IO System provides an API nearly as simple as POSIX IO that also provides developers with the flexibility of selecting the optimal IO routines for a given platform, without recompilation. As a side effect, we also gain the ability to transparently integrate more tightly with workflow systems like Kepler and Pegasus and visualization systems like VisIt with no runtime impact. We achieve this through our library of highly tuned IO routines and other transport methods selected and configured in an XML file read only at startup. ADIOS-based IO has demonstrated high levels of performance and scalability. For example, we have achieved 20 GB/sec write performance using GTC on the Jaguar Cray XT4 system at Oak Ridge National Labs (about 50% of peak performance). We can change GTC output among MPI-IO synchronous, MPI-IO collective, POSIX IO, no IO (for baseline testing), asynchronous IO using the Georgia Tech DataTap system, and VisIt directly for in situ visualization with no changes to the source code. We designed this initial version of ADIOS based on the data requirements of 7 major scientific codes (GTC, Chimera, GTS, XGC1, XGC0, FLASH, and S3D) and have successfully adapted all of them to use ADIOS for all of their IO needs.
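A hedged sketch of the configuration idea described above, not the actual ADIOS XML schema or API: the transport method is named in a small XML file read once at startup, so switching between methods (or disabling I/O for baseline runs) requires no change to the simulation code and no recompilation.

```python
# Hypothetical sketch of config-driven transport selection (not ADIOS itself):
# the transport is named in a small XML file read once at startup, so switching
# methods requires no recompilation of the simulation.
import xml.etree.ElementTree as ET

CONFIG = """<io><transport method="posix" filename="out.txt"/></io>"""


def write_posix(filename, data):
    with open(filename, "a") as f:
        f.write(repr(data) + "\n")


def write_null(filename, data):
    pass                      # "no IO" method, useful for baseline testing


TRANSPORTS = {"posix": write_posix, "null": write_null}


def load_transport(xml_text):
    node = ET.fromstring(xml_text).find("transport")
    return TRANSPORTS[node.get("method")], node.get("filename")


if __name__ == "__main__":
    write_fn, filename = load_transport(CONFIG)   # read once at startup
    for step in range(3):
        data = [step] * 8                         # stand-in for GTC-like output
        write_fn(filename, data)                  # same call for every method
```

Editing the method attribute in the config (for example from "posix" to "null") changes the I/O behavior of every write call without touching or rebuilding the simulation, which is the property the abstract describes.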

334 citations