Author

Patrick H. Worley

Bio: Patrick H. Worley is an academic researcher at Oak Ridge National Laboratory. His research focuses on parallel algorithms and massively parallel computing. He has an h-index of 26 and has co-authored 86 publications receiving 4,870 citations.


Papers
Journal ArticleDOI
TL;DR: The fourth version of the Community Climate System Model (CCSM4) was recently completed and released to the climate community. This paper describes developments to all CCSM components and documents fully coupled preindustrial control runs compared to the previous version.
Abstract: The fourth version of the Community Climate System Model (CCSM4) was recently completed and released to the climate community. This paper describes developments to all CCSM components, and documents fully coupled preindustrial control runs compared to the previous version, CCSM3. Using the standard atmosphere and land resolution of 1° results in the sea surface temperature biases in the major upwelling regions being comparable to the 1.4°-resolution CCSM3. Two changes to the deep convection scheme in the atmosphere component result in CCSM4 producing El Niño–Southern Oscillation variability with a much more realistic frequency distribution than in CCSM3, although the amplitude is too large compared to observations. These changes also improve the Madden–Julian oscillation and the frequency distribution of tropical precipitation. A new overflow parameterization in the ocean component leads to an improved simulation of the Gulf Stream path and the North Atlantic Ocean meridional overturning circulation.

2,835 citations

Journal ArticleDOI
Jean-Christophe Golaz1, Peter M. Caldwell1, Luke Van Roekel2, Mark R. Petersen2, Qi Tang1, Jonathan Wolfe2, G. W. Abeshu3, Valentine G. Anantharaj4, Xylar Asay-Davis2, David C. Bader1, Sterling Baldwin1, Gautam Bisht5, Peter A. Bogenschutz1, Marcia L. Branstetter4, Michael A. Brunke6, Steven R. Brus2, Susannah M. Burrows7, Philip Cameron-Smith1, Aaron S. Donahue1, Michael Deakin8, Michael Deakin9, Richard C. Easter7, Katherine J. Evans4, Yan Feng10, Mark Flanner11, James G. Foucar9, Jeremy Fyke2, Brian M. Griffin12, Cecile Hannay13, Bryce E. Harrop7, Matthew J. Hoffman2, Elizabeth Hunke2, Robert Jacob10, Douglas W. Jacobsen2, Nicole Jeffery2, Philip W. Jones2, Noel Keen5, Stephen A. Klein1, Vincent E. Larson12, L. Ruby Leung7, Hongyi Li3, Wuyin Lin14, William H. Lipscomb13, William H. Lipscomb2, Po-Lun Ma7, Salil Mahajan4, Mathew Maltrud2, Azamat Mametjanov10, Julie L. McClean15, Renata B. McCoy1, Richard Neale13, Stephen Price2, Yun Qian7, Philip J. Rasch7, J. E. Jack Reeves Eyre6, William J. Riley5, Todd D. Ringler16, Todd D. Ringler2, Andrew Roberts2, Erika Louise Roesler9, Andrew G. Salinger9, Zeshawn Shaheen1, Xiaoying Shi4, Balwinder Singh7, Jinyun Tang5, Mark A. Taylor9, Peter E. Thornton4, Adrian K. Turner2, Milena Veneziani2, Hui Wan7, Hailong Wang7, Shanlin Wang2, Dean N. Williams1, Phillip J. Wolfram2, Patrick H. Worley4, Shaocheng Xie1, Yang Yang7, Jin-Ho Yoon17, Mark D. Zelinka1, Charles S. Zender18, Xubin Zeng6, Chengzhu Zhang1, Kai Zhang7, Yuying Zhang1, X. Zheng1, Tian Zhou7, Qing Zhu5
TL;DR: The Energy Exascale Earth System Model (E3SM) project is a U.S. Department of Energy effort to develop and validate the E3SM Earth system model.
Abstract: Energy Exascale Earth System Model (E3SM) project - U.S. Department of Energy, Office of Science, Office of Biological and Environmental Research; Climate Model Development and Validation activity - Office of Biological and Environmental Research in the US Department of Energy Office of Science; Regional and Global Modeling and Analysis Program of the U.S. Department of Energy, Office of Science, Office of Biological and Environmental Research; National Research Foundation [NRF_2017R1A2b4007480]; Office of Science of the U.S. Department of Energy [DE-AC02-05CH11231]; DOE Office of Science User Facility [DE-AC05-00OR22725]; U.S. Department of Energy by Lawrence Livermore National Laboratory [DE-AC52-07NA27344]; DOE [DE-AC05-76RLO1830]; National Center for Atmospheric Research - National Science Foundation [1852977]; [DE-SC0012778]

437 citations

Journal ArticleDOI
01 Feb 2012
TL;DR: This work gives scalability and performance results from CAM running with three different dynamical core options within the Community Earth System Model, using a pre-industrial time-slice configuration, and focuses on high-resolution simulations.
Abstract: The Community Atmosphere Model (CAM) version 5 includes a spectral element dynamical core option from NCAR's High-Order Method Modeling Environment. It is a continuous Galerkin spectral finite-element method designed for fully unstructured quadrilateral meshes. The current configurations in CAM are based on the cubed-sphere grid. The main motivation for including a spectral element dynamical core is to improve the scalability of CAM by allowing quasi-uniform grids for the sphere that do not require polar filters. In addition, the approach provides other state-of-the-art capabilities such as improved conservation properties. Spectral elements are used for the horizontal discretization, while most other aspects of the dynamical core are a hybrid of well-tested techniques from CAM's finite volume and global spectral dynamical core options. Here we first give an overview of the spectral element dynamical core as used in CAM. We then give scalability and performance results from CAM running with three different dynamical core options within the Community Earth System Model, using a pre-industrial time-slice configuration. We focus on high-resolution simulations, using 1/4 degree, 1/8 degree, and T341 spectral truncation horizontal grids.

331 citations

Journal ArticleDOI
TL;DR: A fully coupled global simulation using the Community Climate System Model (CCSM) was configured with grid resolutions of 0.1° for the ocean and sea ice and 0.25° for the atmosphere and land, and was run under present-day greenhouse gas conditions for 20 years.

122 citations

Journal ArticleDOI
01 Oct 1995
TL;DR: This parallel model is functionally equivalent to the National Center for Atmospheric Research's Community Climate Model, CCM2, but is structured to exploit distributed-memory multicomputers and incorporates parallel spectral transform, semi-Lagrangian transport, and load-balancing algorithms.
Abstract: We describe the design of a parallel global atmospheric circulation model, PCCM2. This parallel model is functionally equivalent to the National Center for Atmospheric Research's Community Climate Model, CCM2, but is structured to exploit distributed-memory multicomputers. PCCM2 incorporates parallel spectral transform, semi-Lagrangian transport, and load balancing algorithms. We present detailed performance results on the IBM SP2 and Intel Paragon. These results provide insights into the scalability of the individual parallel algorithms and of the parallel model as a whole.

97 citations


Cited by
01 May 1993
TL;DR: Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems.
Abstract: Three parallel algorithms for classical molecular dynamics are presented. The first assigns each processor a fixed subset of atoms; the second assigns each a fixed subset of inter-atomic forces to compute; the third assigns each a fixed spatial region. The algorithms are suitable for molecular dynamics models which can be difficult to parallelize efficiently: those with short-range forces where the neighbors of each atom change rapidly. They can be implemented on any distributed-memory parallel machine which allows for message-passing of data between independently executing processors. The algorithms are tested on a standard Lennard-Jones benchmark problem for system sizes ranging from 500 to 100,000,000 atoms on several parallel supercomputers: the nCUBE 2, Intel iPSC/860 and Paragon, and Cray T3D. Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems. For large problems, the spatial algorithm achieves parallel efficiencies of 90% and a 1840-node Intel Paragon performs up to 165 times faster than a single Cray C90 processor. Trade-offs between the three algorithms and guidelines for adapting them to more complex molecular dynamics simulations are also discussed.
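The spatial-decomposition algorithm described above rests on the observation that, with a short-range cutoff, each atom only ever interacts with atoms in its own and adjacent spatial regions. A minimal serial sketch of that cell-list idea for a Lennard-Jones system follows; the function names, reduced units (epsilon = sigma = 1), and box handling are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from itertools import product

def build_cells(pos, box, rc):
    """Bin atoms into cubic cells of side >= the cutoff rc.  This binning is
    the serial core of spatial decomposition: each region only ever needs
    atoms from its own and adjacent regions."""
    n = int(box // rc)                        # cells per dimension
    assert n >= 3, "box must be at least 3*rc for this simple scheme"
    size = box / n
    cells = {}
    for i, p in enumerate(pos):
        key = tuple(int(c) % n for c in p // size)
        cells.setdefault(key, []).append(i)
    return cells, n

def lj_energy(pos, box, rc=2.5):
    """Total Lennard-Jones energy using cell lists: each atom interacts only
    with atoms in its own and the 26 neighboring cells, so the cost grows
    linearly with atom count instead of quadratically."""
    cells, n = build_cells(pos, box, rc)
    e = 0.0
    for key, atoms in cells.items():
        # candidate neighbors: atoms in this cell and all adjacent cells,
        # with periodic wrap-around
        adj = {tuple((key[d] + off[d]) % n for d in range(3))
               for off in product((-1, 0, 1), repeat=3)}
        neigh = [j for k in adj for j in cells.get(k, [])]
        for i in atoms:
            for j in neigh:
                if j <= i:                    # count each pair once
                    continue
                d = pos[i] - pos[j]
                d -= box * np.round(d / box)  # minimum-image convention
                r2 = float(d @ d)
                if r2 < rc * rc:
                    inv6 = (1.0 / r2) ** 3
                    e += 4.0 * (inv6 * inv6 - inv6)
    return e
```

In the parallel version the paper benchmarks, each processor owns one such spatial region and exchanges only the boundary cells with its neighbors each timestep, which is why the method scales well when neighbor lists change rapidly.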

29,323 citations

Journal Article
TL;DR: In this article, the authors present a document, drafted, voted on, and published by the IPCC (Intergovernmental Panel on Climate Change), that summarizes the research carried out on this important topic.
Abstract: Causes, consequences, and mitigation strategies. We present the first of a series of articles in which we will address the current problem of climate change. We present the document drafted, voted on, and published by the IPCC (Intergovernmental Panel on Climate Change), which summarizes the research carried out on this important topic.

4,187 citations

Journal ArticleDOI
17 Aug 2008
TL;DR: This paper shows how to leverage largely commodity Ethernet switches to support the full aggregate bandwidth of clusters consisting of tens of thousands of elements and argues that appropriately architected and interconnected commodity switches may deliver more performance at less cost than available from today's higher-end solutions.
Abstract: Today's data centers may contain tens of thousands of computers with significant aggregate bandwidth requirements. The network architecture typically consists of a tree of routing and switching elements with progressively more specialized and expensive equipment moving up the network hierarchy. Unfortunately, even when deploying the highest-end IP switches/routers, resulting topologies may only support 50% of the aggregate bandwidth available at the edge of the network, while still incurring tremendous cost. Non-uniform bandwidth among data center nodes complicates application design and limits overall system performance. In this paper, we show how to leverage largely commodity Ethernet switches to support the full aggregate bandwidth of clusters consisting of tens of thousands of elements. Similar to how clusters of commodity computers have largely replaced more specialized SMPs and MPPs, we argue that appropriately architected and interconnected commodity switches may deliver more performance at less cost than available from today's higher-end solutions. Our approach requires no modifications to the end host network interface, operating system, or applications; critically, it is fully backward compatible with Ethernet, IP, and TCP.
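The "tens of thousands of elements" claim follows from the arithmetic of the k-ary fat-tree topology built entirely from identical k-port switches. A sketch of those counts (the function name is mine; the formulas are the standard k-ary fat-tree construction, assumed here to match the paper's design):

```python
def fat_tree(k):
    """Host and switch counts for a k-ary fat tree built from identical
    k-port commodity switches.  k pods, each with k/2 edge and k/2
    aggregation switches; (k/2)^2 core switches; every host gets full
    bisection bandwidth."""
    assert k % 2 == 0, "port count k must be even"
    edge = k * (k // 2)          # k pods x k/2 edge switches
    agg = k * (k // 2)           # k pods x k/2 aggregation switches
    core = (k // 2) ** 2
    hosts = k ** 3 // 4          # each edge switch serves k/2 hosts
    return {"hosts": hosts, "switches": edge + agg + core}
```

With 48-port switches this yields 27,648 hosts, which is the scale the abstract alludes to: full aggregate bandwidth without the expensive, oversubscribed high-end gear at the top of a conventional tree.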

3,549 citations

Book Chapter
01 Jan 2013
TL;DR: The authors assess long-term projections of climate change for the end of the 21st century and beyond, where the forced signal depends on the scenario and is typically larger than the internal variability of the climate system.
Abstract: This chapter assesses long-term projections of climate change for the end of the 21st century and beyond, where the forced signal depends on the scenario and is typically larger than the internal variability of the climate system. Changes are expressed with respect to a baseline period of 1986-2005, unless otherwise stated.

2,253 citations

Journal ArticleDOI
TL;DR: The Model of Emissions of Gases and Aerosols from Nature version 2.1 (MEGAN2.1) is an update from previous versions, including MEGAN2.0, which was described for isoprene emissions by Guenther et al. (2006), and MEGAN2.02, which was described for monoterpene and sesquiterpene emissions by Sakulyanontvittaya et al. (2008).
Abstract: The Model of Emissions of Gases and Aerosols from Nature version 2.1 (MEGAN2.1) is a modeling framework for estimating fluxes of biogenic compounds between terrestrial ecosystems and the atmosphere using simple mechanistic algorithms to account for the major known processes controlling biogenic emissions. It is available as an offline code and has also been coupled into land surface and atmospheric chemistry models. MEGAN2.1 is an update from the previous versions including MEGAN2.0, which was described for isoprene emissions by Guenther et al. (2006), and MEGAN2.02, which was described for monoterpene and sesquiterpene emissions by Sakulyanontvittaya et al. (2008). Isoprene comprises about half of the total global biogenic volatile organic compound (BVOC) emission of 1 Pg (1000 Tg or 10^15 g) estimated using MEGAN2.1. Methanol, ethanol, acetaldehyde, acetone, α-pinene, β-pinene, t-β-ocimene, limonene, ethene, and propene together contribute another 30% of the MEGAN2.1 estimated emission. An additional 20 compounds (mostly terpenoids) are associated with the MEGAN2.1 estimates of another 17% of the total emission, with the remaining 3% distributed among >100 compounds. Emissions of 41 monoterpenes and 32 sesquiterpenes together comprise about 15% and 3%, respectively, of the estimated total global BVOC emission. Tropical trees cover about 18% of the global land surface and are estimated to be responsible for ~80% of terpenoid emissions and ~50% of other VOC emissions. Other trees cover about the same area but are estimated to contribute only about 10% of total emissions. The magnitudes of the emissions estimated with MEGAN2.1 are within the range of estimates reported using other approaches, and much of the difference between reported values can be attributed to land cover and meteorological driving variables.
The offline version of MEGAN2.1 source code and driving variables is available from http://bai.acd.ucar.edu/MEGAN/ and the version integrated into the Community Land Model version 4 (CLM4) can be downloaded from http://www.cesm.ucar.edu/ .

2,141 citations