
Showing papers in "Geoscientific Model Development in 2022"


Journal ArticleDOI
TL;DR: The Earth system model EC-Earth3 for contributions to CMIP6 is documented in this paper, with its flexible coupling framework, major model configurations, a methodology for ensuring the simulations are comparable across different high-performance computing (HPC) systems, and with the physical performance of base configurations over the historical period.
Abstract: Abstract. The Earth system model EC-Earth3 for contributions to CMIP6 is documented here, with its flexible coupling framework, major model configurations, a methodology for ensuring the simulations are comparable across different high-performance computing (HPC) systems, and with the physical performance of base configurations over the historical period. The variety of possible configurations and sub-models reflects the broad interests in the EC-Earth community. EC-Earth3 key performance metrics demonstrate physical behavior and biases well within the frame known from recent CMIP models. With improved physical and dynamic features, new Earth system model (ESM) components, community tools, and largely improved physical performance compared to the CMIP5 version, EC-Earth3 represents a clear step forward for the only European community ESM. We demonstrate here that EC-Earth3 is suited for a range of tasks in CMIP6 and beyond.

134 citations


Journal ArticleDOI
TL;DR: The root-mean-squared error (RMSE) and mean absolute error (MAE) are widely used metrics for evaluating models, as mentioned in this paper, and there remains enduring confusion over their use, such that a standard practice is to present both, leaving it to the reader to decide which is more relevant.
Abstract: Abstract. The root-mean-squared error (RMSE) and mean absolute error (MAE) are widely used metrics for evaluating models. Yet, there remains enduring confusion over their use, such that a standard practice is to present both, leaving it to the reader to decide which is more relevant. In a recent reprise of the 200-year debate over their use, Willmott and Matsuura (2005) and Chai and Draxler (2014) give arguments for favoring one metric or the other. However, this comparison can present a false dichotomy. Neither metric is inherently better: RMSE is optimal for normal (Gaussian) errors, and MAE is optimal for Laplacian errors. When errors deviate from these distributions, other metrics are superior.
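
To make the distinction concrete, a minimal sketch (synthetic data, NumPy only): two error samples with the same variance have near-identical RMSE, yet the heavier-tailed Laplacian sample yields a smaller MAE, so the two metrics can rank models differently.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
truth = np.zeros(n)

# Two error samples with equal variance (1.0) but different shapes
gauss_pred = truth + rng.normal(0.0, 1.0, n)
laplace_pred = truth + rng.laplace(0.0, 1.0 / np.sqrt(2.0), n)  # var = 2b^2 = 1

def rmse(pred, obs):
    return np.sqrt(np.mean((pred - obs) ** 2))

def mae(pred, obs):
    return np.mean(np.abs(pred - obs))

for name, pred in [("Gaussian", gauss_pred), ("Laplacian", laplace_pred)]:
    print(f"{name:9s} RMSE = {rmse(pred, truth):.3f}  MAE = {mae(pred, truth):.3f}")
# RMSE is ~1.0 for both samples; MAE differs (~0.80 vs ~0.71)
```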

68 citations


Journal ArticleDOI
TL;DR: PyCO2SYS, as discussed by the authors, is a Python package for solving the marine carbonate system, which uses automatic differentiation to solve the CO2 system and calculate chemical buffer factors, ensuring that the effect of every modelled solute and reaction is accurately included in all its results.
Abstract: Abstract. Oceanic dissolved inorganic carbon (TC) is the largest pool of carbon that substantially interacts with the atmosphere on human timescales. Oceanic TC is increasing through uptake of anthropogenic carbon dioxide (CO2), and seawater pH is decreasing as a consequence. Both the exchange of CO2 between the ocean and atmosphere and the pH response are governed by a set of parameters that interact through chemical equilibria, collectively known as the marine carbonate system. To investigate these processes, at least two of the marine carbonate system's parameters are typically measured – most commonly, two from TC, total alkalinity (AT), pH, and seawater CO2 fugacity (fCO2; or its partial pressure, pCO2, or its dry-air mole fraction, xCO2) – from which the remaining parameters can be calculated and the equilibrium state of seawater solved. Several software tools exist to carry out these calculations, but no fully functional and rigorously validated tool written in Python, a popular scientific programming language, was previously available. Here, we present PyCO2SYS, a Python package intended to fill this capability gap. We describe the elements of PyCO2SYS that have been inherited from the existing CO2SYS family of software and explain subsequent adjustments and improvements. For example, PyCO2SYS uses automatic differentiation to solve the marine carbonate system and calculate chemical buffer factors, ensuring that the effect of every modelled solute and reaction is accurately included in all its results. We validate PyCO2SYS with internal consistency tests and comparisons against other software, showing that PyCO2SYS produces results that are either virtually identical or different for known reasons, with the differences negligible for all practical purposes. We discuss insights that guided the development of PyCO2SYS: for example, the fact that the marine carbonate system cannot be unambiguously solved from certain pairs of parameters. Finally, we consider potential future developments to PyCO2SYS and discuss the outlook for this and other software for solving the marine carbonate system. The code for PyCO2SYS is distributed via GitHub (https://github.com/mvdh7/PyCO2SYS, last access: 23 December 2021) under the GNU General Public License v3, archived on Zenodo (Humphreys et al., 2021), and documented online (https://pyco2sys.readthedocs.io/en/latest/, last access: 23 December 2021).
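A minimal usage sketch of the package's documented top-level pyco2.sys function; the parameter values below are illustrative, and output field names should be checked against the current online documentation.

```python
import PyCO2SYS as pyco2

# Solve the carbonate system from total alkalinity (type 1) and
# dissolved inorganic carbon (type 2); values in umol/kg are illustrative
results = pyco2.sys(
    par1=2300.0, par1_type=1,   # total alkalinity
    par2=2100.0, par2_type=2,   # dissolved inorganic carbon
    salinity=35.0, temperature=25.0,
)
print(results["pH"], results["fCO2"])  # remaining parameters are in the dict
```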

26 citations


Journal ArticleDOI
TL;DR: In this paper, the authors describe a simulation protocol developed by the Lake Sector of the Inter-Sectoral Impact Model Intercomparison Project (ISIMIP) for simulating climate change impacts on lakes using an ensemble of lake models and climate change scenarios.
Abstract: Abstract. Empirical evidence demonstrates that lakes and reservoirs are warming across the globe. Consequently, there is an increased need to project future changes in lake thermal structure and resulting changes in lake biogeochemistry in order to plan for the likely impacts. Previous studies of the impacts of climate change on lakes have often relied on a single model forced with limited scenario-driven projections of future climate for a relatively small number of lakes. As a result, our understanding of the effects of climate change on lakes is fragmentary, based on scattered studies using different data sources and modelling protocols, and mainly focused on individual lakes or lake regions. This has precluded identification of the main impacts of climate change on lakes at global and regional scales and has likely contributed to the lack of lake water quality considerations in policy-relevant documents, such as the Assessment Reports of the Intergovernmental Panel on Climate Change (IPCC). Here, we describe a simulation protocol developed by the Lake Sector of the Inter-Sectoral Impact Model Intercomparison Project (ISIMIP) for simulating climate change impacts on lakes using an ensemble of lake models and climate change scenarios for ISIMIP phases 2 and 3. The protocol prescribes lake simulations driven by climate forcing from gridded observations and different Earth system models under various representative greenhouse gas concentration pathways (RCPs), all consistently bias-corrected on a 0.5° × 0.5° global grid. In ISIMIP phase 2, 11 lake models were forced with these data to project the thermal structure of 62 well-studied lakes where data were available for calibration under historical conditions, and using uncalibrated models for 17 500 lakes defined for all global grid cells containing lakes. In ISIMIP phase 3, this approach was expanded to consider more lakes, more models, and more processes. The ISIMIP Lake Sector is the largest international effort to project future water temperature, thermal structure, and ice phenology of lakes at local and global scales and paves the way for future simulations of the impacts of climate change on water quality and biogeochemistry in lakes.

20 citations


Posted ContentDOI
TL;DR: The Lake Sector of the Inter-Sectoral Impact Model Intercomparison Project (ISIMIP), as discussed by the authors, is the largest effort to project future water temperature, thermal structure, and ice phenology of lakes at local and global scales and paves the way for future simulations of the impacts of climate change on water quality and biogeochemistry in lakes.
Abstract: Abstract. Empirical evidence demonstrates that lakes and reservoirs are warming across the globe. Consequently, there is an increased need to project future changes in lake thermal structure and resulting changes in lake biogeochemistry in order to plan for the likely impacts. Previous studies of the impacts of climate change on lakes have often relied on a single model forced with limited scenario-driven projections of future climate for a relatively small number of lakes. As a result, our understanding of the effects of climate change on lakes is fragmentary, based on scattered studies using different data sources and modelling protocols, and mainly focused on individual lakes or lake regions. This has precluded identification of the main impacts of climate change on lakes at global and regional scales and has likely contributed to the lack of lake water quality considerations in policy-relevant documents, such as the Assessment Reports of the Intergovernmental Panel on Climate Change (IPCC). Here, we describe a simulation protocol developed by the Lake Sector of the Inter-Sectoral Impact Model Intercomparison Project (ISIMIP) for simulating climate change impacts on lakes using an ensemble of lake models and climate change scenarios. The protocol prescribes lake simulations driven by climate forcing from gridded observations and different Earth system models under various Representative Greenhouse Gas Concentration Pathways, all consistently bias-corrected on a 0.5° × 0.5° global grid. In ISIMIP phase 2, 11 lake models were forced with these data to project the thermal structure of 62 well-studied lakes where data were available for calibration under historical conditions, and for nearly 17,500 lakes using uncalibrated models and forcing data from the global grid where lakes are present. In ISIMIP phase 3, this approach was expanded to consider more lakes, more models, and more processes. The ISIMIP Lake Sector is the largest international effort to project future water temperature, thermal structure, and ice phenology of lakes at local and global scales and paves the way for future simulations of the impacts of climate change on water quality and biogeochemistry in lakes.

20 citations


Journal ArticleDOI
TL;DR: In this paper, a deep spatial transformer is added to the latent space of U-NET to capture rotation and scaling transformation in the latent space for spatiotemporal data, and a data-assimilation (DA) algorithm is used to ingest noisy observations and improve the initial conditions for next forecasts.
Abstract: Abstract. There is growing interest in data-driven weather prediction (DDWP), e.g., using convolutional neural networks such as U-NET that are trained on data from models or reanalysis. Here, we propose three components, inspired by physics, to integrate with commonly used DDWP models in order to improve their forecast accuracy. These components are (1) a deep spatial transformer added to the latent space of U-NET to capture rotation and scaling transformation in the latent space for spatiotemporal data, (2) a data-assimilation (DA) algorithm to ingest noisy observations and improve the initial conditions for next forecasts, and (3) a multi-time-step algorithm, which combines forecasts from DDWP models with different time steps through DA, improving the accuracy of forecasts at short intervals. To show the benefit and feasibility of each component, we use geopotential height at 500 hPa (Z500) from ERA5 reanalysis and examine the short-term forecast accuracy of specific setups of the DDWP framework. Results show that the spatial-transformer-based U-NET (U-STN) clearly outperforms the U-NET, e.g., improving the forecast skill by 45 %. Using a sigma-point ensemble Kalman (SPEnKF) algorithm for DA and U-STN as the forward model, we show that stable, accurate DA cycles are achieved even with high observation noise. This DDWP+DA framework substantially benefits from large (O(1000)) ensembles that are inexpensively generated with the data-driven forward model in each DA cycle. The multi-time-step DDWP+DA framework also shows promise; for example, it reduces the average error by factors of 2–3. These results show the benefits and feasibility of these three components, which are flexible and can be used in a variety of DDWP setups. Furthermore, while here we focus on weather forecasting, the three components can be readily adopted for other parts of the Earth system, such as ocean and land, for which there is a rapid growth of data and need for forecast and assimilation.
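
The paper's DA component is a sigma-point ensemble Kalman (SPEnKF) algorithm; as a generic illustration of the ensemble analysis step at the heart of any such DDWP+DA cycle, below is a standard stochastic EnKF update on toy data (a sketch, not the paper's SPEnKF).

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err_std, H):
    """Generic stochastic EnKF analysis step (illustrative, not SPEnKF):
    ensemble is (n_members, n_state); H maps state to observation space."""
    n_members = ensemble.shape[0]
    rng = np.random.default_rng(0)

    Hx = ensemble @ H.T                          # (n_members, n_obs)
    x_mean, hx_mean = ensemble.mean(0), Hx.mean(0)
    Xp, Yp = ensemble - x_mean, Hx - hx_mean     # ensemble anomalies

    # Sample covariances and Kalman gain
    Pxy = Xp.T @ Yp / (n_members - 1)
    Pyy = Yp.T @ Yp / (n_members - 1) + obs_err_std**2 * np.eye(len(obs))
    K = Pxy @ np.linalg.inv(Pyy)

    # Perturbed observations for each member
    obs_pert = obs + rng.normal(0.0, obs_err_std, (n_members, len(obs)))
    return ensemble + (obs_pert - Hx) @ K.T

# Toy example: 3-variable state, 2 observed components, O(1000) members
H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
ens = np.random.default_rng(1).normal(0.0, 1.0, (1000, 3))
analysis = enkf_update(ens, obs=np.array([0.5, -0.2]), obs_err_std=0.1, H=H)
print(analysis.mean(axis=0))
```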

19 citations


Journal ArticleDOI
TL;DR: In this article , the authors evaluate the global mean temperature projections and effective radiative forcing (ERF) characteristics of climate emulators FaIRv1.6.2 and MAGICCv7.5.3 and compare the methods applied in AR6 with the methods used for the CICERO simple climate model (CICERO-SCM) for sensitivity analysis.
Abstract: Abstract. While the Intergovernmental Panel on Climate Change (IPCC) physical science reports usually assess a handful of future scenarios, the Working Group III contribution on climate mitigation to the IPCC's Sixth Assessment Report (AR6 WGIII) assesses hundreds to thousands of future emissions scenarios. A key task in WGIII is to assess the global mean temperature outcomes of these scenarios in a consistent manner, given the challenge that the emissions scenarios from different integrated assessment models (IAMs) come with different sectoral and gas-to-gas coverage and cannot all be assessed consistently by complex Earth system models. In this work, we describe the “climate-assessment” workflow and its methods, including infilling of missing emissions and emissions harmonisation as applied to 1202 mitigation scenarios in AR6 WGIII. We evaluate the global mean temperature projections and effective radiative forcing (ERF) characteristics of climate emulators FaIRv1.6.2 and MAGICCv7.5.3 and use the CICERO simple climate model (CICERO-SCM) for sensitivity analysis. We discuss the implied overshoot severity of the mitigation pathways using overshoot degree years and look at emissions and temperature characteristics of scenarios compatible with one possible interpretation of the Paris Agreement. We find that the lowest class of emissions scenarios that limit global warming to “1.5 °C (with a probability of greater than 50 %) with no or limited overshoot” includes 97 scenarios for MAGICCv7.5.3 and 203 for FaIRv1.6.2. For the MAGICCv7.5.3 results, “limited overshoot” typically implies exceedance of median temperature projections of up to about 0.1 °C for up to a few decades before returning to below 1.5 °C by or before the year 2100. For more than half of the scenarios in this category that comply with three criteria for being “Paris-compatible”, including net-zero or net-negative greenhouse gas (GHG) emissions, median temperatures decline by about 0.3–0.4 °C after peaking at 1.5–1.6 °C in 2035–2055. We compare the methods applied in AR6 with the methods used for SR1.5 and discuss their implications. This article also introduces a “climate-assessment” Python package which allows for fully reproducing the IPCC AR6 WGIII temperature assessment. This work provides a community tool for assessing the temperature outcomes of emissions pathways and provides a basis for further work such as extending the workflow to include downscaling of climate characteristics to a regional level and calculating impacts.

18 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a new modeling protocol to simulate a plausible deployment of stratospheric aerosol injection and evaluate the responses and impacts of solar climate intervention on the Earth system.
Abstract: Abstract. Solar climate intervention using stratospheric aerosol injection is a proposed method of reducing global mean temperatures to reduce the worst consequences of climate change. A detailed assessment of responses and impacts of such an intervention is needed with multiple global models to support societal decisions regarding the use of these approaches to help address climate change. We present a new modeling protocol aimed at simulating a plausible deployment of stratospheric aerosol injection and reproducibility of simulations using other Earth system models: Assessing Responses and Impacts of Solar climate intervention on the Earth system with stratospheric aerosol injection (ARISE-SAI). The protocol and simulations are aimed at enabling community assessment of responses of the Earth system to solar climate intervention. ARISE-SAI simulations are designed to be more policy-relevant than existing large ensembles or multi-model simulation sets. We describe in detail the first set of ARISE-SAI simulations, ARISE-SAI-1.5, which utilize a moderate emissions scenario, introduce stratospheric aerosol injection at ∼21.5 km in the year 2035, and keep global mean surface air temperature near 1.5 °C above the pre-industrial value utilizing a feedback or control algorithm. We present the detailed setup, aerosol injection strategy, and preliminary climate analysis from a 10-member ensemble of these simulations carried out with the Community Earth System Model version 2 with the Whole Atmosphere Community Climate Model version 6 as its atmospheric component.

17 citations


Journal ArticleDOI
TL;DR: In this article, the authors provide an update to regional carbon budgets over the last two decades based on observations for 10 regions covering the globe, with better harmonization than the precursor project, and define the different component fluxes of the net land–atmosphere carbon exchange that should be reported by each research group in charge of each region.
Abstract: Abstract. Regional land carbon budgets provide insights into the spatial distribution of the land uptake of atmospheric carbon dioxide and can be used to evaluate carbon cycle models and to define baselines for land-based additional mitigation efforts. The scientific community has been involved in providing observation-based estimates of regional carbon budgets either by downscaling atmospheric CO2 observations into surface fluxes with atmospheric inversions, by using inventories of carbon stock changes in terrestrial ecosystems, by upscaling local field observations such as flux towers with gridded climate and remote sensing fields, or by integrating data-driven or process-oriented terrestrial carbon cycle models. The first coordinated attempt to collect regional carbon budgets for nine regions covering the entire globe in the RECCAP-1 project has delivered estimates for the decade 2000–2009, but these budgets were not comparable between regions due to different definitions and component fluxes being reported or omitted. The recent recognition of lateral fluxes of carbon by human activities and rivers that connect CO2 uptake in one area with its release in another also requires better definitions and protocols to reach harmonized regional budgets that can be summed up to the global scale and compared with the atmospheric CO2 growth rate and inversion results. In this study, using the international initiative RECCAP-2 coordinated by the Global Carbon Project, which aims to update regional carbon budgets over the last 2 decades based on observations for 10 regions covering the globe, with better harmonization than the precursor project, we provide recommendations for using atmospheric inversion results to match bottom-up carbon accounting and models, and we define the different component fluxes of the net land–atmosphere carbon exchange that should be reported by each research group in charge of each region. Special attention is given to lateral fluxes, inland water fluxes, and land use fluxes.

17 citations


Journal ArticleDOI
TL;DR: In this paper, a unified framework for the process-based evaluation of atmospheric trajectories to infer source–receptor relationships of both moisture and heat is presented. The framework comprises three steps: diagnosing precipitation, surface evaporation, and sensible heat from the Lagrangian simulations and identifying the accuracy and reliability of flux detection criteria; establishing source–receptor relationships through attribution of sources along multi-day backward trajectories; and performing a bias correction of the source–receptor relationships.
Abstract: Abstract. Despite the existing myriad of tools and models to assess atmospheric source–receptor relationships, their uncertainties remain largely unexplored and arguably stem from the scarcity of observations available for validation. Yet, Lagrangian models are increasingly used to determine the origin of precipitation and atmospheric heat by scrutinizing the changes in moisture and temperature along air parcel trajectories. Here, we present a unified framework for the process-based evaluation of atmospheric trajectories to infer source–receptor relationships of both moisture and heat. The framework comprises three steps: (i) diagnosing precipitation, surface evaporation, and sensible heat from the Lagrangian simulations and identifying the accuracy and reliability of flux detection criteria; (ii) establishing source–receptor relationships through the attribution of sources along multi-day backward trajectories; and (iii) performing a bias correction of source–receptor relationships. Applying this framework to simulations from the Lagrangian model FLEXPART, driven with ERA-Interim reanalysis data, allows us to quantify the errors and uncertainties associated with the resulting source–receptor relationships for three cities in different climates (Beijing, Denver, and Windhoek). Our results reveal large uncertainties inherent in the estimation of heat and precipitation origin with Lagrangian models, but they also demonstrate that a source and sink bias correction acts to reduce this uncertainty. The proposed framework paves the way for a cohesive assessment of the dependencies in source–receptor relationships.
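
Step (i) of such frameworks rests on the standard Lagrangian moisture budget, in which the change in a parcel's specific humidity over a time step is read as evaporation into, or precipitation from, the parcel. A toy sketch with an illustrative detection threshold (the paper calibrates such criteria, this code does not):

```python
import numpy as np

def moisture_fluxes(q, threshold=1e-4):
    """Interpret specific-humidity changes along a trajectory as moisture
    uptake (dq > threshold, evaporation into the parcel) or moisture loss
    (dq < -threshold, precipitation); the threshold is illustrative only."""
    dq = np.diff(q)                                   # kg/kg per time step
    uptake = np.where(dq > threshold, dq, 0.0)
    loss = np.where(dq < -threshold, -dq, 0.0)
    return uptake, loss

# Hypothetical 6-hourly parcel history of specific humidity (kg/kg)
q = np.array([8.0, 8.4, 8.2, 9.1, 9.0, 7.5, 7.6, 8.0, 6.9, 7.0, 7.2]) * 1e-3
uptake, loss = moisture_fluxes(q)
print("uptakes:", uptake.round(5))
print("losses: ", loss.round(5))
```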

15 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a chain of computationally efficient Earth system model (ESM) emulators that allow for the translation of any greenhouse gas emission pathway into spatially resolved annual mean temperature anomaly field time series, accounting for both forced climate response and natural variability uncertainty at the local scale.
Abstract: Abstract. Producing targeted climate information at the local scale, including major sources of climate change projection uncertainty for diverse emissions scenarios, is essential to support climate change mitigation and adaptation efforts. Here, we present the first chain of computationally efficient Earth system model (ESM) emulators that allow for the translation of any greenhouse gas emission pathway into spatially resolved annual mean temperature anomaly field time series, accounting for both forced climate response and natural variability uncertainty at the local scale. By combining the global mean, emissions-driven emulator MAGICC with the spatially resolved emulator MESMER, ESM-specific and constrained probabilistic emulated ensembles can be derived. This emulator chain can hence build on and extend large multi-ESM ensembles such as the ones produced within the sixth phase of the Coupled Model Intercomparison Project (CMIP6). The main extensions are threefold. (i) A more thorough sampling of the forced climate response and the natural variability uncertainty is possible, with millions of emulated realizations being readily created. (ii) The same uncertainty space can be sampled for any emission pathway, which is not the case in CMIP6, where only a limited number of scenarios have been explored and some of the most societally relevant strong mitigation scenarios have been run by only a small number of ESMs. (iii) Other lines of evidence to constrain future projections, including observational constraints, can be introduced, which helps to refine projected ranges beyond the multi-ESM ensembles' estimates. In addition to presenting results from the coupled MAGICC–MESMER emulator chain, we carry out an extensive validation of MESMER, which is trained on and applied to multiple emission pathways for the first time in this study. By coupling MAGICC and MESMER, we pave the way for rapid assessments of any emission pathway's regional climate change consequences and the associated uncertainties.

Journal ArticleDOI
TL;DR: In this paper, a performance assessment of the historical global model experiments from CMIP5 and 6 is presented, based on recurring regional atmospheric circulation patterns as defined by the Jenkinson–Collison approach.
Abstract: Abstract. Global climate models are a keystone of modern climate research. In most applications relevant for decision making, they are assumed to provide a plausible range of possible future climate states. However, these models have not been originally developed to reproduce the regional-scale climate, which is where information is needed in practice. To overcome this dilemma, two general efforts have been made since their introduction in the late 1960s. First, the models themselves have been steadily improved in terms of physical and chemical processes, parametrization schemes, resolution and implemented climate system components, giving rise to the term “Earth system model”. Second, the global models' output has been refined at the regional scale using limited area models or statistical methods in what is known as dynamical or statistical downscaling. For both approaches, however, it is difficult to correct errors resulting from a wrong representation of the large-scale circulation in the global model. Dynamical downscaling also has a high computational demand and thus cannot be applied to all available global models in practice. Against this background, there is an ongoing debate in the downscaling community on whether to move away from the “model democracy” paradigm towards a careful selection strategy based on the global models' capacity to reproduce key aspects of the observed climate. The present study aims to support such a selection by providing a performance assessment of the historical global model experiments from CMIP5 and 6 based on recurring regional atmospheric circulation patterns, as defined by the Jenkinson–Collison approach. The latest model generation (CMIP6) is found to perform better on average, which can be partly explained by a moderately strong statistical relationship between performance and horizontal resolution in the atmosphere. A few models rank favourably over almost the entire Northern Hemisphere mid-to-high latitudes. Internal model variability only has a small influence on the model ranks. Reanalysis uncertainty is an issue in Greenland and the surrounding seas, the southwestern United States and the Gobi Desert but is otherwise generally negligible. Throughout the study, the prescribed and interactively simulated climate system components are identified for each applied coupled model configuration, and a simple codification system is introduced to describe model complexity in this sense.

Journal ArticleDOI
TL;DR: In the updated version, DINCAE 2.0, the code was rewritten in Julia and a new type of skip connection has been implemented, which shows superior performance with respect to the previous version.
Abstract: Abstract. DINCAE (Data INterpolating Convolutional Auto-Encoder) is a neural network used to reconstruct missing data (e.g., obscured by clouds or gaps between tracks) in satellite data. Contrary to standard image reconstruction (in-painting) with neural networks, this application requires a method to handle missing data (or data with variable accuracy) already in the training phase. Instead of using a standard L2 (or L1) cost function, the neural network (U-Net type of network) is optimized by minimizing the negative log likelihood assuming a Gaussian distribution (characterized by a mean and a variance). As a consequence, the neural network also provides an expected error variance of the reconstructed field (per pixel and per time instance). In this updated version DINCAE 2.0, the code was rewritten in Julia and a new type of skip connection has been implemented which showed superior performance with respect to the previous version. The method has also been extended to handle multivariate data (an example will be shown with sea surface temperature, chlorophyll concentration and wind fields). The improvement of this network is demonstrated for the Adriatic Sea. Convolutional networks work usually with gridded data as input. This is however a limitation for some data types used in oceanography and in Earth sciences in general, where observations are often irregularly sampled. The first layer of the neural network and the cost function have been modified so that unstructured data can also be used as inputs to obtain gridded fields as output. To demonstrate this, the neural network is applied to along-track altimetry data in the Mediterranean Sea. Results from a 20-year reconstruction are presented and validated. Hyperparameters are determined using Bayesian optimization and minimizing the error relative to a development dataset.
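
A conceptual NumPy sketch of the masked Gaussian negative-log-likelihood cost described above (the actual DINCAE 2.0 code is in Julia): the network outputs a mean and a log variance per pixel, and only observed pixels contribute to the loss.

```python
import numpy as np

def masked_gaussian_nll(mean, log_var, obs, mask):
    """Negative log likelihood of a per-pixel Gaussian, evaluated only where
    observations exist (mask == 1); missing pixels do not contribute."""
    var = np.exp(log_var)
    nll = 0.5 * (np.log(2.0 * np.pi) + log_var + (obs - mean) ** 2 / var)
    return (nll * mask).sum() / mask.sum()

# Toy 4x4 field with roughly half of the pixels missing (e.g., cloud-covered)
rng = np.random.default_rng(0)
obs = rng.normal(20.0, 2.0, (4, 4))
mask = rng.integers(0, 2, (4, 4)).astype(float)
print(masked_gaussian_nll(mean=np.full((4, 4), 20.0),
                          log_var=np.zeros((4, 4)), obs=obs, mask=mask))
```

Minimising this cost, rather than a plain L2 loss, is what lets the network return an expected error variance alongside the reconstruction.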

Journal ArticleDOI
TL;DR: In this paper, a multi-model ensemble analysis is presented, based on the volc-pinatubo-full experiment performed within the Model Intercomparison Project on the climatic response to Volcanic forcing (VolMIP).
Abstract: Abstract. This paper provides initial results from a multi-model ensemble analysis based on the volc-pinatubo-full experiment performed within the Model Intercomparison Project on the climatic response to Volcanic forcing (VolMIP) as part of the sixth phase of the Coupled Model Intercomparison Project (CMIP6). The volc-pinatubo-full experiment is based on an ensemble of volcanic forcing-only climate simulations with the same volcanic aerosol dataset across the participating models (the 1991–1993 Pinatubo period from the CMIP6-GloSSAC dataset). The simulations are conducted within an idealized experimental design where initial states are sampled consistently across models from the CMIP6-piControl simulation providing unperturbed preindustrial background conditions. The multi-model ensemble includes output from an initial set of six participating Earth system models (CanESM5, GISS-E2.1-G, IPSL-CM6A-LR, MIROC-ES2L, MPI-ESM1.2-LR and UKESM1). The results show overall good agreement between the different models on the global and hemispheric scales concerning the surface climate responses, thus demonstrating the overall effectiveness of VolMIP's experimental design. However, small yet significant inter-model discrepancies are found in radiative fluxes, especially in the tropics, that preliminary analyses link with minor differences in forcing implementation; model physics, notably aerosol–radiation interactions; the simulation and sampling of El Niño–Southern Oscillation (ENSO); and, possibly, the simulation of climate feedbacks operating in the tropics. We discuss the volc-pinatubo-full protocol and highlight the advantages of volcanic forcing experiments defined within a carefully designed protocol with respect to emerging modelling approaches based on large ensemble transient simulations. We identify how the VolMIP strategy could be improved in future phases of the initiative to ensure a cleaner sampling protocol with greater focus on the evolving state of ENSO in the pre-eruption period.

Journal ArticleDOI
TL;DR: In this paper, a three-dimensional variational (3DVAR) data assimilation (DA) system for aerosol optical properties, including aerosol optical thickness (AOT) retrievals and lidar-based aerosol profiles, developed for the Model for Simulating Aerosol Interactions and Chemistry (MOSAIC) within the Weather Research and Forecasting model coupled to Chemistry (WRF-Chem) model, is presented.
Abstract: Abstract. This paper presents a three-dimensional variational (3DVAR) data assimilation (DA) system for aerosol optical properties, including aerosol optical thickness (AOT) retrievals and lidar-based aerosol profiles, developed for the Model for Simulating Aerosol Interactions and Chemistry (MOSAIC) within the Weather Research and Forecasting model coupled to Chemistry (WRF-Chem) model. For computational efficiency, 32 model variables in the MOSAIC_4bin scheme are lumped into 20 aerosol state variables that are representative of mass concentrations in the DA system. To directly assimilate aerosol optical properties, an observation operator based on the Mie scattering theory was employed, which was obtained by simplifying the optical module in WRF-Chem. The tangent linear (TL) and adjoint (AD) operators were then established and passed the TL/AD sensitivity test. The Himawari-8 derived AOT data were assimilated to validate the system and investigate the effects of assimilation on both AOT and PM2.5 simulations. Two comparative experiments were performed with a cycle of 24 h from 23 to 29 November 2018, during which a heavy air pollution event occurred in northern China. The DA performances of the model simulation were evaluated against independent aerosol observations, including the Aerosol Robotic Network (AERONET) AOT and surface PM2.5 measurements. The results show that Himawari-8 AOT assimilation can significantly improve model AOT analyses and forecasts. Generally, the control experiments without assimilation seriously underestimated AOTs compared with observed values and were therefore unable to describe real aerosol pollution. The analysis fields closer to observations improved AOT simulations, indicating that the system successfully assimilated AOT observations into the model. In terms of statistical metrics, assimilating Himawari-8 AOTs only limitedly improved PM2.5 analyses in the inner simulation domain (D02); however, the positive effect can last for over 24 h. Assimilation effectively enlarged the underestimated PM2.5 concentrations to be closer to the real distribution in northern China, which is of great value for studying heavy air pollution events.
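
For orientation, the generic 3DVAR cost function that the tangent linear and adjoint operators serve can be sketched in a few lines; this toy version uses a linear observation operator, whereas the paper's operator is the Mie-theory-based optical module of WRF-Chem.

```python
import numpy as np
from scipy.optimize import minimize

def make_3dvar_cost(xb, B_inv, y, R_inv, H):
    """3DVAR: J(x) = 1/2 (x-xb)^T B^-1 (x-xb) + 1/2 (Hx-y)^T R^-1 (Hx-y),
    with a linear(ised) observation operator H for illustration."""
    def cost(x):
        dxb = x - xb
        dy = H @ x - y
        return 0.5 * dxb @ B_inv @ dxb + 0.5 * dy @ R_inv @ dy
    def grad(x):
        # The adjoint (H.T) enters through the gradient of the obs term
        return B_inv @ (x - xb) + H.T @ (R_inv @ (H @ x - y))
    return cost, grad

# Toy problem: 3 state variables, 2 observations
xb = np.array([1.0, 2.0, 3.0])                   # background state
H = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
y = np.array([1.5, 2.5])                         # observations
cost, grad = make_3dvar_cost(xb, np.eye(3), y, 4.0 * np.eye(2), H)
x_analysis = minimize(cost, xb, jac=grad, method="L-BFGS-B").x
print(x_analysis)
```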

Journal ArticleDOI
TL;DR: In this article, the authors used convolutional neural networks (CNNs) to predict the footprints of the warm conveyor belt (WCB) inflow, ascent, and outflow stages over the Northern Hemisphere from instantaneous gridded fields.
Abstract: Abstract. Physical processes on the synoptic scale are important modulators of the large-scale extratropical circulation. In particular, rapidly ascending airstreams in extratropical cyclones, so-called warm conveyor belts (WCBs), modulate the upper-tropospheric Rossby wave pattern and are sources and magnifiers of forecast uncertainty. Thus, from a process-oriented perspective, numerical weather prediction (NWP) and climate models should adequately represent WCBs. The identification of WCBs usually involves Lagrangian air parcel trajectories that ascend from the lower to the upper troposphere within 2 d. This requires expensive computations and numerical data with high spatial and temporal resolution, which are often not available from standard output. This study introduces a novel framework that aims to predict the footprints of the WCB inflow, ascent, and outflow stages over the Northern Hemisphere from instantaneous gridded fields using convolutional neural networks (CNNs). With its comparably low computational costs and relying on standard model output alone, the new diagnostic enables the systematic investigation of WCBs in large data sets such as ensemble reforecast or climate model projections, which are mostly not suited for trajectory calculations. Building on the insights from a logistic regression approach of a previous study, the CNNs are trained using a combination of meteorological parameters as predictors and trajectory-based WCB footprints as predictands. Validation of the networks against the trajectory-based data set confirms that the CNN models reliably replicate the climatological frequency of WCBs as well as their footprints at instantaneous time steps. The CNN models significantly outperform previously developed logistic regression models. Including time-lagged information on the occurrence of WCB ascent as a predictor for the inflow and outflow stages further improves the models' skill considerably. A companion study demonstrates versatile applications of the CNNs in different data sets including the verification of WCBs in ensemble forecasts. Overall, the diagnostic demonstrates how deep learning methods may be used to investigate the representation of weather systems and their related processes in NWP and climate models in order to shed light on forecast uncertainty and systematic biases from a process-oriented perspective.
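
As a schematic of the approach, a minimal fully convolutional network mapping gridded predictor fields to per-grid-point footprint probabilities can be written in a few lines of PyTorch; the architecture, sizes, and data below are illustrative, not those of the paper.

```python
import torch
import torch.nn as nn

class FootprintCNN(nn.Module):
    """Minimal fully convolutional sketch: gridded predictor fields in,
    per-grid-point footprint logits out (binary segmentation)."""
    def __init__(self, n_predictors=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_predictors, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=1),     # logits per grid point
        )
    def forward(self, x):
        return self.net(x)

model = FootprintCNN()
fields = torch.randn(8, 5, 90, 180)              # batch of predictor fields
logits = model(fields)
# Trajectory-based footprints would serve as binary targets during training
targets = torch.randint(0, 2, (8, 1, 90, 180)).float()
loss = nn.BCEWithLogitsLoss()(logits, targets)
print(logits.shape, loss.item())
```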

Journal ArticleDOI
TL;DR: In this article, a widely used reservoir parametrization is implemented in the global river-routing model mizuRoute and evaluated by comparing simulations with managed reservoirs to simulations ignoring inland waters and to simulations with reservoirs represented as natural lakes.
Abstract: Abstract. Human-controlled reservoirs have a large influence on the global water cycle. While global hydrological models use generic parameterizations to model dam operations, the representation of reservoir regulation is still lacking in many Earth system models. Here we implement and evaluate a widely used reservoir parametrization in the global river-routing model mizuRoute, which operates on a vector-based river network resolving individual lakes and reservoirs and is currently being coupled to an Earth system model. We develop an approach to determine the downstream area over which to aggregate irrigation water demand per reservoir. The implementation of managed reservoirs is evaluated by comparing them to simulations ignoring inland waters and simulations with reservoirs represented as natural lakes using (i) local simulations for 26 individual reservoirs driven by observed inflows and (ii) global-domain simulations driven by runoff from the Community Land Model. The local simulations show the clear added value of the reservoir parametrization, especially for simulating storage for large reservoirs with a multi-year storage capacity. In the global-domain application, the implementation of reservoirs shows an improvement in outflow and storage compared to the no-reservoir simulation, but a similar performance is found compared to the natural lake parametrization. The limited impact of reservoirs on skill statistics could be attributed to biases in simulated river discharge, mainly originating from biases in simulated runoff from the Community Land Model. Finally, the comparison of modelled monthly streamflow indices against observations highlights that including dam operations improves the streamflow simulation compared to ignoring lakes and reservoirs. This study overall underlines the need to further develop and test runoff simulations and water management parameterizations in order to improve the representation of anthropogenic interference of the terrestrial water cycle in Earth system models.
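
The "widely used parametrization" family referred to here follows demand-driven annual release rules in the spirit of Hanasaki-type schemes. The sketch below is a loose illustration of that logic, with invented coefficients and a simplified form, not the mizuRoute implementation:

```python
def reservoir_release(inflow_mean, storage, capacity, demand, demand_mean,
                      alpha=0.85):
    """Loose sketch of a demand-driven release rule: a provisional release
    follows mean inflow adjusted by the current-to-mean demand ratio, then
    is scaled by the start-of-period storage ratio. Coefficients and the
    exact functional form are illustrative, not the paper's scheme."""
    if demand_mean > 0.0:
        # Partially demand-controlled provisional release
        provisional = 0.5 * inflow_mean + 0.5 * inflow_mean * demand / demand_mean
    else:
        provisional = inflow_mean
    kappa = storage / (alpha * capacity)         # storage availability factor
    return kappa * provisional

print(reservoir_release(inflow_mean=100.0, storage=425.0, capacity=1000.0,
                        demand=40.0, demand_mean=50.0))
```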

Journal ArticleDOI
TL;DR: SciKit-GStat is an open-source Python package for variogram estimation that fits well into established frameworks for scientific computing and puts the focus on the variogram before more sophisticated methods are applied.
Abstract: Abstract. Geostatistical methods are widely used in almost all geoscientific disciplines, e.g., for interpolation, rescaling, data assimilation or modeling. At its core, geostatistics aims to detect, quantify, describe, analyze and model spatial covariance of observations. The variogram, a tool to describe this spatial covariance in a formalized way, is at the heart of every such method. Unfortunately, many applications of geostatistics focus on the interpolation method or the result rather than the quality of the estimated variogram, not least because estimating a variogram is commonly left as a task for computers, and some software implementations do not even show the variogram to the user. This is an oversight, because the quality of the variogram largely determines whether the application of geostatistics makes sense at all. Furthermore, until a few years ago the Python programming language was missing a mature, well-established and tested package for variogram estimation. Here I present SciKit-GStat, an open-source Python package for variogram estimation that fits well into established frameworks for scientific computing and puts the focus on the variogram before more sophisticated methods are applied. SciKit-GStat is written in a mutable, object-oriented way that mimics the typical geostatistical analysis workflow. Its main strength is its ease of use and interactivity, and it is therefore usable with little or even no knowledge of Python. During the last few years, other libraries covering geostatistics for Python developed alongside SciKit-GStat; today, the most important ones can be interfaced by SciKit-GStat. Additionally, established data structures for scientific computing are reused internally, so that the user does not have to learn complex data models just to use SciKit-GStat. Common data structures along with powerful interfaces enable the user to combine SciKit-GStat with other packages in established workflows rather than being forced to adopt the author's programming paradigms. SciKit-GStat ships with a large number of predefined procedures, algorithms and models, such as variogram estimators, theoretical spatial models and binning algorithms. Common approaches to estimating variograms are covered and can be used out of the box. At the same time, the base class is very flexible and can be adjusted to less common problems as well. Last but not least, care was taken to aid the user in implementing new procedures or even extending the core functionality, so that SciKit-GStat can be extended to uncovered use cases. With broad documentation, a user guide, tutorials and good unit-test coverage, SciKit-GStat enables the user to focus on variogram estimation rather than implementation details.
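
A minimal SciKit-GStat session on synthetic data, following the package's documented Variogram interface (data and settings are illustrative):

```python
import numpy as np
import skgstat as skg

# Synthetic 2D observations with spatial structure
rng = np.random.default_rng(42)
coords = rng.uniform(0, 100, (300, 2))
values = np.sin(coords[:, 0] / 20.0) + rng.normal(0.0, 0.1, 300)

# Estimate the empirical variogram and fit a spherical model
V = skg.Variogram(coords, values, n_lags=15, model='spherical')
print(V.describe())   # fitted range, sill and nugget parameters
```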

Journal ArticleDOI
TL;DR: In this paper, the authors propose a metric to assess the added value of the EURO-CORDEX simulations for the maximum and minimum temperature over the Iberian Peninsula.
Abstract: Abstract. In the recent past, an increase in computation resources led to the development of regional climate models with increasing domains and resolutions, spanning larger temporal periods. A good example is the World Climate Research Program – Coordinated Regional Climate Downscaling Experiment for the European domain (EURO-CORDEX). This set of regional models encompasses the entire European continent for a 130-year common period until the end of the 21st century, while having a 12 km horizontal resolution. Such simulations are computationally demanding, while at the same time not always showing added value. This study considers a recently proposed metric in order to assess the added value of the EURO-CORDEX hindcast (1989–2008) and historical (1971–2005) simulations for the maximum and minimum temperature over the Iberian Peninsula. This approach allows an evaluation of the higher against the driving lower resolutions relative to the performance of the whole or partial probability density functions by having an observational regular gridded dataset as a reference. Overall, the gains for maximum temperature are more relevant in comparison to minimum temperature, partially due to known problems derived from the snow–albedo–atmosphere feedback. For more local scales, areas near the coast reveal higher added value in comparison with the interior, which displays limited gains and sometimes notable detrimental effects with values around −30 %. At the same time, the added value for temperature extremes reveals a similar range, although with larger gains in coastal regions and in locations from the interior for maximum temperature, contrasting with the losses for locations in the interior of the domain for the minimum temperature.

Journal ArticleDOI
TL;DR: The Integrated Methane Inversion optimal estimation workflow (IMI 1.0), as mentioned in this paper, is a cloud-based facility for quantifying methane emissions with 0.25° × 0.3125° (≈ 25 km) resolution by inverse analysis of satellite observations from the TROPOspheric Monitoring Instrument (TROPOMI).
Abstract: Abstract. We present a user-friendly, cloud-based facility for quantifying methane emissions with 0.25° × 0.3125° (≈ 25 km × 25 km) resolution by inverse analysis of satellite observations from the TROPOspheric Monitoring Instrument (TROPOMI). The facility is built on an Integrated Methane Inversion optimal estimation workflow (IMI 1.0) and supported for use on the Amazon Web Services (AWS) cloud. It exploits the GEOS-Chem chemical transport model and TROPOMI data already resident on AWS, thus avoiding cumbersome big-data download. Users select a region and period of interest, and the IMI returns an analytical solution for the Bayesian optimal estimate of period-average emissions on the 0.25° × 0.3125° grid including error statistics, information content, and visualization code for inspection of results. The inversion uses an advanced research-grade algorithm fully documented in the literature. An out-of-the-box inversion with rectilinear grid and default prior emission estimates can be conducted with no significant learning curve. Users can also configure their inversions to infer emissions for irregular regions of interest, swap in their own prior emission inventories, and modify inversion parameters. Inversion ensembles can be generated at minimal additional cost once the Jacobian matrix for the analytical inversion has been constructed. A preview feature allows users to determine the TROPOMI information content for their region and time period of interest before actually performing the inversion. The IMI is heavily documented and is intended to be accessible by researchers and stakeholders with no expertise in inverse modelling or high-performance computing. We demonstrate the IMI's capabilities by applying it to estimate methane emissions from the US oil-producing Permian Basin in May 2018.
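
The analytical solution referred to here is the standard linear-Gaussian optimal estimate; a toy NumPy sketch of the posterior mean, posterior covariance, and averaging-kernel information content follows (dimensions are illustrative, the IMI solves the same algebra at far larger size).

```python
import numpy as np

def analytical_inversion(K, y, x_a, S_a, S_o):
    """Analytical Bayesian solution for a linear forward model y = Kx:
    posterior mean, posterior error covariance, and averaging kernel
    (standard optimal estimation; toy dimensions)."""
    S_o_inv = np.linalg.inv(S_o)
    S_a_inv = np.linalg.inv(S_a)
    S_post = np.linalg.inv(K.T @ S_o_inv @ K + S_a_inv)
    x_hat = x_a + S_post @ K.T @ S_o_inv @ (y - K @ x_a)
    A = np.eye(len(x_a)) - S_post @ S_a_inv      # averaging kernel matrix
    return x_hat, S_post, A

# Toy example: 3 emission elements, 5 observations
rng = np.random.default_rng(0)
K = rng.uniform(0.0, 1.0, (5, 3))                # Jacobian matrix
x_true = np.array([1.2, 0.8, 1.5])
y = K @ x_true + rng.normal(0.0, 0.05, 5)
x_hat, S_post, A = analytical_inversion(K, y, x_a=np.ones(3),
                                        S_a=0.25 * np.eye(3),
                                        S_o=0.05**2 * np.eye(5))
print(x_hat, np.trace(A))   # estimate and DOFS (information content)
```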

Journal ArticleDOI
TL;DR: In this article, a set of residual deep neural networks (ResDNNs) with a strong nonlinear fitting ability is designed to emulate a super-parameterization (SP) with different outputs in a hybrid ML–physical general circulation model (GCM).
Abstract: Abstract. In climate models, subgrid parameterizations of convection and clouds are one of the main causes of the biases in precipitation and atmospheric circulation simulations. In recent years, due to the rapid development of data science, machine learning (ML) parameterizations for convection and clouds have been demonstrated to have the potential to perform better than conventional parameterizations. Most previous studies were conducted on aqua-planet and idealized models, and the problems of simulation instability and climate drift still exist. Developing an ML parameterization scheme remains a challenging task in realistically configured models. In this paper, a set of residual deep neural networks (ResDNNs) with a strong nonlinear fitting ability is designed to emulate a super-parameterization (SP) with different outputs in a hybrid ML–physical general circulation model (GCM). It can sustain stable simulations for over 10 years under real-world geographical boundary conditions. We explore the relationship between the accuracy and stability by validating multiple deep neural network (DNN) and ResDNN sets in prognostic runs. In addition, there are significant differences in the prognostic results of the stable ResDNN sets. Therefore, trial and error is used to acquire the optimal ResDNN set for both high skill and long-term stability, which we name the neural network (NN) parameterization. In offline validation, the neural network parameterization can emulate the SP in mid- to high-latitude regions with a high accuracy. However, its prediction skill over tropical ocean areas still needs improvement. In the multi-year prognostic test, the hybrid ML–physical GCM simulates the tropical precipitation well over land and significantly improves the frequency of the precipitation extremes, which are vastly underestimated in the Community Atmospheric Model version 5 (CAM5), with a horizontal resolution of 1.9° × 2.5°. Furthermore, the hybrid ML–physical GCM simulates the robust signal of the Madden–Julian oscillation with a more reasonable propagation speed than CAM5. However, there are still substantial biases with the hybrid ML–physical GCM in the mean states, including the temperature field in the tropopause and at high latitudes and the precipitation over tropical oceanic regions, which are larger than those in CAM5. This study is a pioneer in achieving multi-year stable climate simulations using a hybrid ML–physical GCM under actual land–ocean boundary conditions, running over 30 times faster than the target SP. It demonstrates the emerging potential of using ML parameterizations in climate simulations.
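
For illustration, a residual (skip-connection) block for a fully connected column emulator takes only a few lines of PyTorch; widths, depth, and input/output sizes below are invented, not the paper's ResDNN configuration.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """A residual block for a fully connected emulator: the block learns a
    correction to its input, which eases the optimization of deep networks."""
    def __init__(self, width=256):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, width),
        )
        self.act = nn.ReLU()
    def forward(self, x):
        return self.act(x + self.body(x))        # skip connection

# Column emulator sketch: atmospheric-state profile in, SP-like tendencies out
emulator = nn.Sequential(
    nn.Linear(128, 256), nn.ReLU(),
    *[ResidualBlock(256) for _ in range(8)],
    nn.Linear(256, 64),
)
print(emulator(torch.randn(32, 128)).shape)      # torch.Size([32, 64])
```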

Posted ContentDOI
TL;DR: This work provides an efficient numerical implementation of pseudo-transient solvers on graphical processing units (GPUs) using the Julia language and shows that this method is well suited to tackle strongly nonlinear problems such as shear-banding in a visco-elasto-plastic medium.
Abstract: Abstract. The development of highly efficient, robust and scalable numerical algorithms lags behind the rapid increase in massive parallelism of modern hardware. We address this challenge with the accelerated pseudo-transient iterative method and present here a physically motivated derivation. We analytically determine optimal iteration parameters for a variety of basic physical processes and confirm the validity of theoretical predictions with numerical experiments. We provide an efficient numerical implementation of pseudo-transient solvers on graphical processing units (GPUs) using the Julia language. We achieve a parallel efficiency of over 96 % on 2197 GPUs in distributed memory parallelisation weak scaling benchmarks. 2197 GPUs allow for unprecedented terascale solutions of 3D variable viscosity Stokes flow on 4995³ grid cells involving over 1.2 trillion degrees of freedom. We verify the robustness of the method by handling contrasts up to 9 orders of magnitude in material parameters such as viscosity, and arbitrary distribution of viscous inclusions for different flow configurations. Moreover, we show that this method is well suited to tackle strongly nonlinear problems such as shear-banding in a visco-elasto-plastic medium. A GPU-based implementation can outperform CPU-based direct-iterative solvers in terms of wall-time even at relatively low resolution. We additionally motivate the accessibility of the method by its conciseness, flexibility, physically motivated derivation and ease of implementation. This solution strategy thus has great potential for future high-performance computing applications, and for paving the road to exascale in the geosciences and beyond.
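
The accelerated pseudo-transient idea can be conveyed with a toy 1D Poisson problem: a damped "inertial" term turns the simple fixed-point iteration into a wave-like one with much faster convergence. A Python sketch with textbook parameter heuristics follows (the paper derives optimal parameters and implements the method in Julia on GPUs):

```python
import numpy as np

# Accelerated pseudo-transient iteration for -u'' = f on (0, 1), u(0) = u(1) = 0.
# The damping term turns the plain fixed-point update into a second-order,
# wave-like iteration; parameters below are common heuristics, not optimal.
n = 256
dx = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)        # exact solution: u = sin(pi x)

u = np.zeros(n)
du = np.zeros(n)                        # pseudo-velocity
dtau = 0.9 * dx                         # pseudo-time step (CFL-like)
damp = 1.0 - 2.0 * np.pi / n            # damping parameter

for it in range(20 * n):
    res = np.zeros(n)
    res[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2 + f[1:-1]
    du = damp * du + dtau * res         # accelerated (damped-wave) update
    u += dtau * du
    if np.max(np.abs(res)) < 1e-8:
        break

# Converges in O(n) iterations; remaining error is the O(dx^2) discretization
print(it, np.max(np.abs(u - np.sin(np.pi * x))))
```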

Journal ArticleDOI
TL;DR: In this article, the distribution added value (DAV) metric is used to assess the precipitation of all available EURO-CORDEX hindcast (1989–2008) and historical (1971–2005) simulations.
Abstract: Abstract. Over the years, higher-resolution regional climate model simulations have emerged owing to the large increase in computational resources. The 12 km resolution from the Coordinated Regional Climate Downscaling Experiment for the European domain (EURO-CORDEX) is a reference, which includes a larger multi-model ensemble at a continental scale while spanning at least a 130-year period. These simulations are computationally demanding but do not always reveal added value. In this study, a recently developed regular gridded dataset and a new metric for added value quantification, the distribution added value (DAV), are used to assess the precipitation of all available EURO-CORDEX hindcast (1989–2008) and historical (1971–2005) simulations. This approach enables a direct comparison of the higher-resolution regional model runs against their forcing global model or ERA-Interim reanalysis with respect to their probability density functions. This assessment is performed for the Iberian Peninsula. Overall, important gains are found for most cases, particularly in precipitation extremes. Most hindcast models reveal gains above 15 %, namely for wintertime, while for precipitation extremes values above 20 % are reached for the summer and autumn. As for the historical models, although most pairs display gains, regional models forced by two general circulation models (GCMs) reveal losses, sometimes around −5 % or lower, for the entire year. However, the spatial pattern of the DAV clearly shows added value for precipitation, particularly for precipitation extremes, with gains well above 100 %.
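
The DAV is built on a PDF-overlap (Perkins-type) skill score comparing each model's distribution with observations. A sketch of the metric's general form on synthetic data follows; details may differ from the paper's exact definition.

```python
import numpy as np

def perkins_skill_score(model, obs, bins=50):
    """PDF overlap between model and observations: 1 means identical PDFs."""
    lo = min(model.min(), obs.min())
    hi = max(model.max(), obs.max())
    pm, _ = np.histogram(model, bins=bins, range=(lo, hi))
    po, _ = np.histogram(obs, bins=bins, range=(lo, hi))
    return np.minimum(pm / pm.sum(), po / po.sum()).sum()

def distribution_added_value(rcm, gcm, obs):
    """DAV in percent: relative gain of the high-resolution model's PDF
    skill over its forcing model's skill (positive = added value)."""
    s_rcm = perkins_skill_score(rcm, obs)
    s_gcm = perkins_skill_score(gcm, obs)
    return 100.0 * (s_rcm - s_gcm) / s_gcm

# Toy example: the 'RCM' matches the observed distribution better
rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 3.0, 10_000)        # observed daily precipitation
rcm = rng.gamma(2.0, 3.2, 10_000)        # close to the observed distribution
gcm = rng.gamma(1.5, 4.5, 10_000)        # more strongly biased distribution
print(f"DAV = {distribution_added_value(rcm, gcm, obs):.1f} %")
```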

Journal ArticleDOI
TL;DR: In this article, a significantly improved global atmospheric simulation can be achieved by focusing on the realism of process assumptions in cloud calibration and subgrid effects, using the Energy Exascale Earth System Model (E3SM) Atmosphere Model version 1 (EAMv1).
Abstract: Abstract. Realistic simulation of the Earth's mean-state climate remains a major challenge, and yet it is crucial for predicting the climate system in transition. Deficiencies in models' process representations, propagation of errors from one process to another, and associated compensating errors can often confound the interpretation and improvement of model simulations. These errors and biases can also lead to unrealistic climate projections and incorrect attribution of the physical mechanisms governing past and future climate change. Here we show that a significantly improved global atmospheric simulation can be achieved by focusing on the realism of process assumptions in cloud calibration and subgrid effects using the Energy Exascale Earth System Model (E3SM) Atmosphere Model version 1 (EAMv1). The calibration of clouds and subgrid effects informed by our understanding of physical mechanisms leads to significant improvements in clouds and precipitation climatology, reducing common and long-standing biases across cloud regimes in the model. The improved cloud fidelity in turn reduces biases in other aspects of the system. Furthermore, even though the recalibration does not change the global mean aerosol and total anthropogenic effective radiative forcings (ERFs), the sensitivity of clouds, precipitation, and surface temperature to aerosol perturbations is significantly reduced. This suggests that it is possible to achieve improvements to the historical evolution of surface temperature over EAMv1 and that precise knowledge of global mean ERFs is not enough to constrain historical or future climate change. Cloud feedbacks are also significantly reduced in the recalibrated model, suggesting that there would be a lower climate sensitivity when it is run as part of the fully coupled E3SM. This study also compares results from incremental changes to cloud microphysics, turbulent mixing, deep convection, and subgrid effects to understand how assumptions in the representation of these processes affect different aspects of the simulated atmosphere as well as its response to forcings. We conclude that the spectral composition and geographical distribution of the ERFs and cloud feedback, as well as the fidelity of the simulated base climate state, are important for constraining the climate in the past and future.

Journal ArticleDOI
TL;DR: In this article , a simple recurrent neural network with convolutional filters, called ConvLSTM, and an advanced generative network, the Stochastic Adversarial Video Prediction (SAVP) model, are applied to create hourly forecasts of the 2'm'temperature for the next 12'h over Europe.
Abstract: Abstract. Numerical weather prediction (NWP) models solve a system of partial differential equations based on physical laws to forecast the future state of the atmosphere. These models are deployed operationally, but they are computationally very expensive. Recently, the potential of deep neural networks to generate bespoke weather forecasts has been explored in a couple of scientific studies inspired by the success of video frame prediction models in computer vision. In this study, a simple recurrent neural network with convolutional filters, called ConvLSTM, and an advanced generative network, the Stochastic Adversarial Video Prediction (SAVP) model, are applied to create hourly forecasts of the 2 m temperature for the next 12 h over Europe. We make use of 13 years of data from the ERA5 reanalysis, of which 11 years are utilized for training and 1 year each is used for validating and testing. We choose the 2 m temperature, total cloud cover, and the 850 hPa temperature as predictors and show that both models attain predictive skill by outperforming persistence forecasts. SAVP is superior to ConvLSTM in terms of several evaluation metrics, confirming previous results from computer vision that larger, more complex networks are better suited to learn complex features and to generate better predictions. The 12 h forecasts of SAVP attain a mean squared error (MSE) of about 2.3 K², an anomaly correlation coefficient (ACC) larger than 0.85, a structural similarity index (SSIM) of around 0.72, and a gradient ratio (rG) of about 0.82. The ConvLSTM yields a higher MSE (3.6 K²), a smaller ACC (0.80) and SSIM (0.65), and a slightly larger rG (0.84). The superior performance of SAVP in terms of MSE, ACC, and SSIM can be largely attributed to the generator. A sensitivity study shows that a larger weight of the generative adversarial network (GAN) component in the SAVP loss leads to even better preservation of spatial variability at the cost of a somewhat increased MSE (2.5 K²). Including the 850 hPa temperature as an additional predictor enhances the forecast quality, and the model also benefits from a larger spatial domain. By contrast, adding the total cloud cover as predictor or reducing the amount of training data to 8 years has only small effects. Although the temperature forecasts obtained in this way are still less powerful than contemporary NWP models, this study demonstrates that sophisticated deep neural networks may achieve considerable forecast quality beyond the nowcasting range in a purely data-driven way.
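
Two of the evaluation metrics quoted above have exact, compact definitions; a small NumPy sketch of MSE and the anomaly correlation coefficient on toy 2 m temperature fields (the climatology field here is invented):

```python
import numpy as np

def mse(forecast, truth):
    return np.mean((forecast - truth) ** 2)

def acc(forecast, truth, climatology):
    """Anomaly correlation coefficient: correlation between forecast and
    observed anomalies with respect to a climatology field."""
    fa = (forecast - climatology).ravel()
    ta = (truth - climatology).ravel()
    return fa @ ta / np.sqrt((fa @ fa) * (ta @ ta))

# Toy 2 m temperature fields in kelvin on a 64x64 grid
rng = np.random.default_rng(3)
clim = 285.0 + rng.normal(0.0, 5.0, (64, 64))
truth = clim + rng.normal(0.0, 2.0, (64, 64))
forecast = truth + rng.normal(0.0, 1.5, (64, 64))
print(f"MSE = {mse(forecast, truth):.2f} K^2, ACC = {acc(forecast, truth, clim):.2f}")
```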

Journal ArticleDOI
TL;DR: The Water Ecosystems Tool (WET) as mentioned in this paper is a modularized aquatic ecosystem model developed in the syntax of the Framework for Aquatic Biogeochemical Models (FABM), which enables coupling to multiple physical models ranging from zero to three dimensions.
Abstract: Abstract. We present the Water Ecosystems Tool (WET), a new-generation, open-source, highly customizable aquatic ecosystem model. WET is a completely modularized aquatic ecosystem model developed in the syntax of the Framework for Aquatic Biogeochemical Models (FABM), which enables coupling to multiple physical models ranging from zero to three dimensions, and is based on the FABM–PCLake model. The WET model has been extensively modularized, giving users flexible control over food-web configurations, and incorporates model features from other state-of-the-art models, with new options for nitrogen fixation and vertical migration. With its new structure, features, and flexible customization options, WET is suitable for a wide range of aquatic ecosystem applications. We demonstrate these new features and their impacts on model behavior for a temperate lake for which a calibration of the FABM–PCLake model was previously published and discuss the benefits of the new model.
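To illustrate what a modularized food-web configuration means in practice, the following is a conceptual mock-up, not WET's actual configuration syntax or API: biological groups are instances of reusable modules, and trophic links and optional traits (such as the new nitrogen fixation and vertical migration options) are expressed as data rather than code.

```python
# Conceptual mock-up (not WET's configuration format): a food web described
# as module instances plus explicit couplings, in the spirit of FABM-style
# model composition. All names and options here are hypothetical.
food_web = {
    "diatoms":       {"module": "phytoplankton",
                      "couplings": {"nutrient": "phosphate"}},
    "cyanobacteria": {"module": "phytoplankton",
                      "couplings": {"nutrient": "phosphate"},
                      "options": {"nitrogen_fixation": True}},   # WET-style option
    "zooplankton":   {"module": "consumer",
                      "couplings": {"prey": ["diatoms", "cyanobacteria"]},
                      "options": {"vertical_migration": True}},  # WET-style option
}

# Reconfiguring the food web is then a data change, not a code change:
food_web["fish"] = {"module": "consumer", "couplings": {"prey": ["zooplankton"]}}
```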

Journal ArticleDOI
TL;DR: In this paper, the dynamic vegetation model LPJ-GUESS (Lund–Potsdam–Jena General Ecosystem Simulator) is used to simulate the legume N fixation process.
Abstract: Abstract. Biological nitrogen fixation (BNF) from grain legumes is of significant importance in global agricultural ecosystems. Crops with BNF capability are expected to support the need to increase food production while reducing nitrogen (N) fertilizer input for agricultural sustainability, but quantification of N fixing rates and BNF crop yields remains inadequate on a global scale. Here we incorporate two legume crops (soybean and faba bean) with BNF into the dynamic vegetation model LPJ-GUESS (Lund–Potsdam–Jena General Ecosystem Simulator). The performance of this new implementation is evaluated against observations from a range of water and N management trials. LPJ-GUESS generally captures the observed response to these management practices for legume biomass production, soil N uptake, and N fixation, despite deviations from observations in some cases. Globally, simulated BNF is controlled mainly by soil moisture and temperature, as well as by N fertilizer addition. Annual inputs through BNF are modeled to be 11.6±2.2 Tg N for soybean and 5.6±1.0 Tg N for all pulses, with a total fixation of 17.2±2.9 Tg N yr−1 for all grain legumes during the period 1981–2016 on a global scale. Our estimates show good agreement with some previous statistical estimates but are relatively high compared to some other estimates for pulses. This study highlights the importance of accounting for the legume N fixation process when modeling C–N interactions in agricultural ecosystems, particularly when it comes to accounting for the combined effects of climate and land-use change on the global terrestrial N cycle.
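As a rough illustration of how soil moisture and temperature can dominate simulated BNF, the sketch below scales a plant-demand-driven potential fixation rate by simple limitation factors. This is a generic functional form with assumed shapes and constants, not the LPJ-GUESS implementation.

```python
# Minimal illustrative sketch (not LPJ-GUESS code): a daily symbiotic N
# fixation rate as a potential rate scaled by temperature and soil-moisture
# limitation factors. All thresholds below are assumptions.
import numpy as np

def f_temperature(t_soil, t_min=0.0, t_opt1=20.0, t_opt2=30.0, t_max=40.0):
    """Trapezoidal response: 0 outside [t_min, t_max], 1 on [t_opt1, t_opt2]."""
    return np.clip(np.minimum((t_soil - t_min) / (t_opt1 - t_min),
                              (t_max - t_soil) / (t_max - t_opt2)), 0.0, 1.0)

def f_moisture(theta, theta_wilt=0.05, theta_opt=0.25):
    """Linear ramp from wilting point to an optimal soil water content."""
    return np.clip((theta - theta_wilt) / (theta_opt - theta_wilt), 0.0, 1.0)

def bnf_rate(potential_rate, t_soil, theta):
    """Realized fixation (g N m-2 d-1) under temperature and water limitation."""
    return potential_rate * f_temperature(t_soil) * f_moisture(theta)

print(bnf_rate(0.4, t_soil=25.0, theta=0.20))  # warm, moderately moist soil -> 0.3
```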

Journal ArticleDOI
TL;DR: This paper compares in detail four trackers with very different formulations and shows that traditional metrics can be very sensitive to the choice of tracker; this is especially true for TC frequencies and durations.
Abstract: Abstract. The assessment of tropical cyclone (TC) statistics requires the direct, objective, and automatic detection and tracking of TCs in reanalyses and model simulations. Research groups have independently developed numerous algorithms during recent decades in order to answer that need. Today, there is a large number of trackers that aim to detect the positions of TCs in gridded datasets. The questions we ask here are the following: does the choice of tracker impact the climatology obtained? And, if it does, how should we deal with this issue? This paper compares four trackers with very different formulations in detail. We assess their performances by tracking TCs in the ERA5 reanalysis and by comparing the outcome to the IBTrACS observations database. We find typical detection rates of around 80 % for the trackers. At the same time, false alarm rates (FARs) vary greatly across the four trackers, and the number of false alarms can sometimes exceed the number of genuine cyclones detected. Based on the finding that many of these false alarms (FAs) are extra-tropical cyclones (ETCs), we adapt two existing filtering methods common to all trackers. Both post-treatments dramatically impact FARs, which range from 9 % to 36 % in our final catalogs of TC tracks. We then show that different traditional metrics can be very sensitive to the particular choice of tracker, which is particularly true for TC frequencies and their durations. By contrast, all trackers identify a robust negative bias in ERA5 TC intensities, a result already noted in previous studies. We conclude by advising against the strategy of using as many trackers as possible and averaging their results; selecting one or a few trackers with well-known and complementary properties is more efficient.
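The detection-rate and FAR statistics above come from matching tracker output against IBTrACS track points. The sketch below shows one simple way such matching-based scores can be computed; the 300 km radius and 6 h window are illustrative assumptions, not the paper's settings.

```python
# Illustrative sketch: a tracker detection counts as a hit if it falls
# within a space-time window of an IBTrACS point; leftover detections are
# false alarms (FAs). Matching thresholds are assumed, not the paper's.
import numpy as np

def score_tracker(detections, observed, max_dist_km=300.0, max_dt_h=6.0):
    """detections/observed: lists of (time_h, lat, lon); returns (POD, FAR)."""
    hits = 0
    matched_obs = set()
    for t, lat, lon in detections:
        for j, (to, lato, lono) in enumerate(observed):
            # Crude flat-Earth distance, ~111 km per degree of latitude.
            dist = 111.0 * np.hypot(lat - lato,
                                    (lon - lono) * np.cos(np.deg2rad(lato)))
            if j not in matched_obs and abs(t - to) <= max_dt_h and dist <= max_dist_km:
                hits += 1
                matched_obs.add(j)
                break
    pod = len(matched_obs) / len(observed)            # detection rate
    far = (len(detections) - hits) / len(detections)  # false alarm rate
    return pod, far

obs = [(0.0, 15.0, -45.0), (6.0, 16.0, -46.0)]
det = [(0.0, 15.2, -45.3), (12.0, 40.0, 10.0)]  # one hit, one mid-latitude FA
print(score_tracker(det, obs))                  # -> (0.5, 0.5)
```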

Journal ArticleDOI
TL;DR: In this paper, the authors use an atmospheric component with tropospheric and stratospheric chemistry (CAM6-Chem) of the state-of-the-art global climate model CESM2 to simulate the observed spatial distribution of total gaseous mercury (TGM) in both polluted and non-polluted regions.
Abstract: Abstract. Most global atmospheric mercury models use offline, reanalyzed meteorological fields, an approach that offers higher accuracy and lower computational cost than online models. However, these meteorological products need past and/or near-real-time observational data and cannot predict the future. Here, we use the atmospheric component with tropospheric and stratospheric chemistry (CAM6-Chem) of the state-of-the-art global climate model CESM2, adding mercury as a new set of species to simulate atmospheric mercury cycling. Our results show that the newly developed online model is able to simulate the observed spatial distribution of total gaseous mercury (TGM) in both polluted and non-polluted regions with high correlation coefficients in eastern Asia (r=0.67) and North America (r=0.57). The calculated lifetime of TGM against deposition is 5.3 months, and the model reproduces the observed interhemispheric gradient of TGM, with a peak value at northern mid-latitudes. Our model reproduces the observed spatial distribution of HgII wet deposition over North America (r=0.80) and captures the magnitude of the maximum over the Florida Peninsula. The simulated wet deposition fluxes in eastern Asia show a spatial pattern that is low in the northwest and high in the southeast. The online model is in line with the observed seasonal variations of TGM at northern mid-latitudes as well as in the Southern Hemisphere, where the amplitude is lower. We further examine the factors that affect the seasonal variations of atmospheric mercury and find that both Hg0 dry deposition and HgII dry and wet deposition contribute to them.
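The quoted lifetime is a standard burden-over-sink diagnostic: the TGM lifetime against deposition equals the global atmospheric burden divided by the total deposition flux. The numbers below are plausible placeholders chosen for illustration, not the paper's diagnosed values.

```python
# Back-of-the-envelope sketch of the lifetime diagnostic quoted above:
# lifetime of TGM against deposition = global atmospheric burden / total
# deposition flux. Burden and flux values are illustrative placeholders.
burden_gg = 4.4              # hypothetical global TGM burden, Gg
deposition_gg_per_yr = 10.0  # hypothetical total (dry + wet) deposition, Gg yr-1

lifetime_months = 12.0 * burden_gg / deposition_gg_per_yr
print(f"TGM lifetime against deposition: {lifetime_months:.1f} months")  # 5.3
```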

Journal ArticleDOI
TL;DR: In this paper, the authors describe and evaluate a new downscaling scheme that specifically addresses the need for hillslope-scale atmospheric-forcing time series for modelling the local impact of regional climate change projections on the land surface in complex terrain.
Abstract: Abstract. This study describes and evaluates a new downscaling scheme that specifically addresses the need for hillslope-scale atmospheric-forcing time series for modelling the local impact of regional climate change projections on the land surface in complex terrain. The method has a global scope in that it does not rely directly on surface observations and is able to generate the full suite of model forcing variables required for hydrological and land surface modelling in hourly time steps. It achieves this by utilizing the previously published TopoSCALE scheme to generate synthetic observations of the current climate at the hillslope scale while accounting for a broad range of surface–atmosphere interactions. These synthetic observations are then used to debias (downscale) CORDEX climate variables using the quantile mapping method. A further temporal disaggregation step produces sub-daily fields. This approach has the advantages of other empirical–statistical methods, including speed of use, while it avoids the need for ground data, which are often limited. It is therefore a suitable method for a wide range of remote regions where ground data are absent, incomplete, or of insufficient length. The approach is evaluated using a network of high-elevation stations across the Swiss Alps and through a test application in which the impacts of climate change on Alpine snow cover are modelled.
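The quantile mapping step at the heart of the scheme can be sketched as empirical CDF matching: each climate-model value is assigned its non-exceedance probability in the model distribution and replaced by the synthetic-observation value at the same probability. The sketch below is a minimal one-variable version with synthetic data; real applications would apply it per variable, season, and location.

```python
# Minimal sketch of empirical quantile mapping: map each model value through
# the model CDF onto the CDF of the synthetic (TopoSCALE-derived)
# observations, removing distributional bias. Data here are synthetic.
import numpy as np

def quantile_map(model_hist, synthetic_obs, model_values):
    """Debias model_values by CDF matching against synthetic observations."""
    quantiles = np.linspace(0.0, 1.0, 101)
    model_q = np.quantile(model_hist, quantiles)   # inverse CDF of the model
    obs_q = np.quantile(synthetic_obs, quantiles)  # inverse CDF of the "obs"
    # Non-exceedance probability of each value in the model distribution,
    # then the observation value at the same probability.
    p = np.interp(model_values, model_q, quantiles)
    return np.interp(p, quantiles, obs_q)

rng = np.random.default_rng(2)
obs = rng.normal(0.0, 3.0, 5000)   # synthetic hillslope-scale "observations"
mod = rng.normal(2.0, 2.0, 5000)   # biased model series
corrected = quantile_map(mod, obs, mod)
print(round(corrected.mean(), 2), round(corrected.std(), 2))  # ~0.0 and ~3.0
```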