
Showing papers by "Geophysical Fluid Dynamics Laboratory published in 2023"


Posted Content · DOI
30 Jan 2023
TL;DR: The authors increased the value of the minimum thickness depth, an arbitrary parameter introduced to prevent extremely thin model layers and thus numerical instability; this change resolved the model's instability with only a small impact on forecast skill.
Abstract: The National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS) version 16 encountered a few model instability failures during the pre-operational real-time parallel runs. The model forecasts failed when an extremely small thickness depth appeared at the model's lowest layer as strong tropical cyclones made landfall. A quick solution was to increase the value of the minimum thickness depth, an arbitrary parameter introduced to prevent the occurrence of extremely thin model layers and thus numerical instability. This modification solved the model's numerical instability with a small impact on forecast skill. It was adopted in GFSv16 to help implement this version of the operational system as planned. Further investigation showed that the extremely small thickness depth occurred after the advection of geopotential heights at the interfaces of model layers. In the FV3 dynamical core, the horizontal winds at interfaces for advection are calculated from the layer-mean values by solving a tridiagonal system of equations over the entire vertical column, based on the Parabolic Spline Method (PSM) with high-order boundary conditions (BCs). We replaced the high-order BCs with zero-gradient BCs for the interface-wind reconstruction. The impact of the zero-gradient BCs was investigated through sensitivity experiments with GFSv16, idealized mountain ridge tests, and the Rapid Refresh Forecast System (RRFS). The results showed that zero-gradient BCs fundamentally solve the instability and have little impact on forecast performance and on the numerical solution of the idealized mountain tests. This option has been added to FV3 and will be used in the GFS (GFSv17/GEFSv13) and RRFS for operations in 2024.
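The interface reconstruction described above reduces, per vertical column, to a tridiagonal linear system, which is typically solved with the Thomas algorithm. The following is a generic Python sketch of that solver, not FV3's actual Fortran implementation; it assumes the sub-, main-, and super-diagonals have already been assembled from the PSM coefficients and the chosen boundary conditions.

```python
import numpy as np

def solve_tridiagonal(a, b, c, d):
    """Thomas algorithm for a tridiagonal system A x = d.

    a: sub-diagonal   (length n, a[0] unused)
    b: main diagonal  (length n)
    c: super-diagonal (length n, c[-1] unused)
    d: right-hand side (length n)
    Returns the solution vector x of length n in O(n) operations.
    """
    n = len(d)
    cp = np.zeros(n)
    dp = np.zeros(n)
    # Forward sweep: eliminate the sub-diagonal.
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    # Back substitution.
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

In the PSM context, one such system is solved per column to recover interface winds from layer means; the boundary-condition choice (high-order vs. zero-gradient) only changes the first and last rows of the system.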

1 citation


Posted Content · DOI
24 Mar 2023
TL;DR: The authors used the decadal trend and variability of observed organic aerosol (OA) in the southeast US, combined with a global chemistry-climate model, to better constrain the anthropogenic impact on biogenic SOA production (AIBS).
Abstract: Biogenic secondary organic aerosols (SOA) contribute a large fraction of fine aerosols globally, impacting air quality and climate. The formation of biogenic SOA depends not only on emissions of biogenic volatile organic compounds (BVOCs) but also on anthropogenic pollutants, including primary organic aerosol, sulfur dioxide (SO2), and nitrogen oxides (NOx). However, the anthropogenic impact on biogenic SOA production (AIBS) remains unclear. Here we use the decadal trend and variability of observed organic aerosol (OA) in the southeast US, combined with a global chemistry-climate model, to better constrain AIBS. We show that the reduction in SO2 emissions can explain only 40 % of the decreasing decadal trend of OA in this region, constrained by the low summertime month-to-month variability of surface OA. We hypothesize that the rest of the decreasing OA trend is largely due to the reduction in NOx emissions. By implementing a scheme for monoterpene SOA with enhanced sensitivity to NOx, our model can reproduce the decadal trend and variability of OA in this region. Extending to the centennial scale, our model shows that global SOA production increases by 36 % from the preindustrial period to the present day despite BVOC reductions, largely amplified by AIBS. Our work suggests a strong coupling between anthropogenic and biogenic emissions in biogenic SOA production that is missing from current climate models.

Posted Content · DOI
15 May 2023
TL;DR: The authors implement a new edge/corner handling method that greatly reduces grid imprinting in GFDL's dynamical core FV3, which is especially useful for coarse-grid models, such as climate models, in which the cube edges are most noticeable.
Abstract: The current edge handling of the cubed-sphere grid introduces numerical errors in simulations and creates grid imprinting due to the discontinuity of the great-circle grid lines between two adjacent tiles. In this work, we implement a new edge/corner handling method to greatly reduce the grid imprinting in GFDL's dynamical core FV3. First, we extend the duo-grid method (Chen 2021) to support halo updates of staggered variables. Second, we implement a corner handling algorithm that fills the corner regions using Lagrangian polynomial interpolation. Results of idealized shallow-water test cases show that the new halo update methods reduce the numerical noise at the edges/corners and thus reduce the grid imprinting in the numerical solution. This improvement is especially useful for coarse-grid models, such as climate models, in which the cube edges are most noticeable.

Peer Review · DOI
03 Apr 2023
TL;DR: The authors explore the sensitivity of modelled tropospheric hydroxyl (OH) concentration trends to meteorology and near-term climate forcers (NTCFs), namely methane (CH4), nitrogen oxides (NOx = NO2 + NO), carbon monoxide (CO), non-methane volatile organic compounds (NMVOCs), and ozone-depleting substances (ODS), using GFDL's atmospheric chemistry-climate model AM4.1, driven by CMIP6 emissions inventories and forced by observed sea surface temperatures and sea ice prepared for the CMIP6 Atmospheric Model Intercomparison Project (AMIP) simulations.
Abstract: We explore the sensitivity of modelled tropospheric hydroxyl (OH) concentration trends to meteorology and near-term climate forcers (NTCFs), namely methane (CH4), nitrogen oxides (NOx = NO2 + NO), carbon monoxide (CO), non-methane volatile organic compounds (NMVOCs), and ozone-depleting substances (ODS), using the Geophysical Fluid Dynamics Laboratory's (GFDL) atmospheric chemistry-climate model, Atmospheric Model version 4.1 (AM4.1), driven by emissions inventories developed for the Sixth Coupled Model Intercomparison Project (CMIP6) and forced by observed sea surface temperatures and sea ice prepared in support of the CMIP6 Atmospheric Model Intercomparison Project (AMIP) simulations. We find that the modelled tropospheric airmass-weighted mean [OH] has increased by ~5 % globally from 1980 to 2014. NOx emissions and CH4 concentrations dominate the modelled global trend, while CO emissions and meteorology were also important in driving regional trends. Modelled tropospheric NO2 column trends are largely consistent with those retrieved from the Ozone Monitoring Instrument (OMI) satellite, but simulated CO column trends generally overestimate those retrieved from the Measurements of Pollution in The Troposphere (MOPITT) satellite, possibly reflecting biases in the input anthropogenic emission inventories, especially over China and South Asia.
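The headline metric above, an airmass-weighted mean [OH] and its percent change over 1980-2014, can be illustrated with a short sketch. This is a generic illustration, not the AM4.1 diagnostic code; the function names are assumptions.

```python
import numpy as np

def airmass_weighted_mean(field, airmass):
    """Air-mass-weighted mean of a gridded field such as [OH].

    Each grid cell's value is weighted by the mass of air it contains,
    so dense lower-tropospheric cells count more than thin upper ones.
    """
    field = np.asarray(field, dtype=float)
    airmass = np.asarray(airmass, dtype=float)
    return (field * airmass).sum() / airmass.sum()

def percent_trend(years, annual_means):
    """Linear trend over the period, expressed as a percent change
    relative to the fitted value in the first year."""
    slope, intercept = np.polyfit(years, annual_means, 1)
    start = slope * years[0] + intercept
    return 100.0 * slope * (years[-1] - years[0]) / start
```

With uniform weights the weighted mean reduces to the plain mean, which is a quick sanity check when wiring such a diagnostic into a model post-processing chain.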

Posted Content · DOI
08 May 2023
TL;DR: The authors introduce a scalable approach that leverages advances in machine learning, radiative transfer modeling, and in-situ observations to assimilate satellite observations into unstructured tile-based land surface models.
Abstract: Due to soil moisture and vegetation's critical role in controlling land-atmosphere interactions, detailed and accurate hydrological and ecological information is essential to understand, monitor, and predict hydroclimate extremes (e.g., droughts and floods), natural hazards (e.g., wildfires and landslides), irrigation demands, weather, and climate dynamics. While in-situ soil moisture and vegetation biomass measurements can provide detailed information, their representativeness is limited, and networks of sensors are not widely available. Multispectral satellite observations offer global coverage, but retrievals can be infrequent or too coarse to capture local extremes. This observational data gap limits the use of such information to adequately represent land surface processes and their initialization conditions for seasonal to sub-seasonal (S2S) prediction models. To bridge this gap, the assimilation of remote sensing observations into land surface models at hyper-resolution spatial scales (< 100 meters) provides a pathway to (i) reconcile model and observation scales and (ii) enhance S2S hydroclimate predictability in Earth System Models.
To this aim, we introduce a scalable approach that leverages advances in machine learning, radiative transfer modeling, and in-situ observations to assimilate satellite observations into unstructured tile-based land surface models. In this approach, a machine learning model is trained to harness information from big environmental datasets and in-situ observations to learn how the physical model and satellite biases are related to specific hydrologic conditions and landscape characteristics and how these biases evolve over time.
We demonstrate the added value of this approach for improving soil moisture and vegetation dynamics at hyper-resolution scales by assimilating MODIS Leaf Area Index and NASA's SMAP brightness temperature observations into LM4.0, the land model component of the NOAA-GFDL Earth System Model. To this end, we performed stand-alone LM4.0 simulations from 2000 to 2021 over the Continental United States, with the MODIS and SMAP assimilation performed from 2002 and 2015, respectively, until the present day. Soil moisture estimates are evaluated against independent in-situ observations. To quantify the approach's added value for S2S predictability, we compare the impact of soil moisture and vegetation data assimilation on root-zone soil moisture, runoff, vegetation biomass, surface temperature, and evapotranspiration.

Posted Content · DOI
15 May 2023
TL;DR: Based on ensembles generated by GFDL's (Geophysical Fluid Dynamics Laboratory) SPEAR (Seamless System for Prediction and EArth System Research) models, the authors show that a model with 25 km horizontal resolution produces a much more realistic simulation of extreme precipitation than comparable models with 50 or 100 km resolution.
Abstract: The Northeast United States (NEUS) has faced the most rapidly increasing occurrences of extreme precipitation within the US in the past few decades. Understanding the physics leading to long-term trends in regional extreme precipitation is essential to adaptation and mitigation planning. Simulating regional extreme precipitation, however, remains challenging, partially limited by climate models' horizontal resolution. Our recent work shows that a model with 25 km horizontal resolution produces a much more realistic simulation of extreme precipitation than comparable models with 50 or 100 km resolution, including its frequency, amplitude, and temporal variability, based on ensembles generated by GFDL (Geophysical Fluid Dynamics Laboratory) SPEAR (Seamless System for Prediction and EArth System Research) models. The 25-km GFDL-SPEAR ensemble also simulates a trend of NEUS extreme precipitation quantitatively consistent with the observed trend over recent decades, as the observed trend falls within the ensemble spread. We therefore leverage multiple ensembles and various simulations (with historical radiative forcing and projected forcing following the SSP2-4.5 and SSP5-8.5 scenarios) to detect and project the trend of extreme precipitation. The 10-member GFDL-SPEAR 25-km ensemble projects unprecedented rainfall events over the NEUS, driven by increasing anthropogenic radiative forcing and distinguishable from natural variability, by the mid-21st century. Furthermore, very extreme events (99.9th-percentile events) may be six times more likely by 2100 than in the early 21st century. We further conduct a process-oriented study, assessing the physical factors that have contributed to the increasing extreme precipitation over the NEUS. We categorize September-to-November extreme precipitation days, based on daily cumulative precipitation over the NEUS, into weather types, including atmospheric river (AR), tropical cyclone (TC), and others.
In observations, most extreme precipitation days were AR and/or TC days. The number of extreme precipitation days related to pure AR events (without any TC-related event in the vicinity) increased slightly from 1959 to 2020. The larger contribution to the increasing extreme precipitation came from TC-related events, especially extratropical transitions. Extreme precipitation days related to extratropical transitions were 2.5 times more frequent in the 1990-2020 period than in the 1959-1989 period. We apply the same analysis to the GFDL-SPEAR 25-km simulations. Similar to observations, the increase in extreme precipitation days was mainly caused by TC-related events, with a smaller influence from pure AR events. However, the increasing number of TC-related days was dominated by hurricane and tropical storm events, while the number of extratropical transitions near the NEUS changed very little from 1959 to 2020. These results differ from the observational results. Ongoing work focuses on the discrepancy between observations and the SPEAR simulations; for example, we are assessing whether the prominent increase in extratropical transitions since the 1990s in observations was the result of limited sample size or of decadal variability.
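Statements of the form "99.9th-percentile events may be six times more likely" reduce to a ratio of exceedance frequencies over a threshold fixed in the base period. The function below is a generic sketch of that calculation under simple assumptions (daily totals as 1-D arrays; the function name is illustrative), not the authors' analysis code.

```python
import numpy as np

def extreme_frequency_ratio(precip_future, precip_base, q=99.9):
    """How much more often the base-period q-th percentile of daily
    precipitation is exceeded in a future period than in the base period.

    The threshold is computed once, from the base period only, so the
    ratio measures a genuine shift in the tail of the distribution.
    """
    precip_base = np.asarray(precip_base, dtype=float)
    precip_future = np.asarray(precip_future, dtype=float)
    threshold = np.percentile(precip_base, q)
    base_rate = (precip_base > threshold).mean()
    future_rate = (precip_future > threshold).mean()
    return future_rate / base_rate
```

A returned value of 6.0 would correspond to the "six times more likely" statement; in ensemble work the ratio would typically be computed per member and then summarized across the spread.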

Posted Content · DOI
24 May 2023
TL;DR: Tang et al. compiled a database of nitrification rates and nitrifier abundance in the global ocean from published literature and unpublished datasets, comprising 2393 and 1006 measurements of ammonia oxidation and nitrite oxidation rates, and 2187 and 631 quantifications of ammonia oxidizers and nitrite oxidizers, respectively.
Abstract: As a key biogeochemical pathway in the marine nitrogen cycle, nitrification (ammonia oxidation and nitrite oxidation) converts the most reduced form of nitrogen, ammonium/ammonia (NH4+/NH3), into the oxidized species nitrite (NO2−) and nitrate (NO3−). In the ocean, these processes are mainly performed by ammonia-oxidizing archaea (AOA) and bacteria (AOB), and nitrite-oxidizing bacteria (NOB). By transforming nitrogen speciation and providing substrates for nitrogen removal, nitrification affects microbial community structure, marine productivity (including chemoautotrophic carbon fixation), and the production of a powerful greenhouse gas, nitrous oxide (N2O). Nitrification is hypothesized to be regulated by temperature, oxygen, light, substrate concentration, substrate flux, pH, and other environmental factors. Although the number of field observations from various oceanic regions has increased considerably over the last few decades, a global synthesis is lacking, and understanding of how environmental factors control nitrification remains elusive. Therefore, we have compiled a database of nitrification rates and nitrifier abundance in the global ocean from published literature and unpublished datasets. This database includes 2393 and 1006 measurements of ammonia oxidation and nitrite oxidation rates, and 2187 and 631 quantifications of ammonia oxidizers and nitrite oxidizers, respectively. This community effort confirms and enhances our understanding of the spatial distribution of nitrification and nitrifiers, and of their corresponding drivers, such as the important role of substrate concentration in controlling nitrification rates and nitrifier abundance. Some conundrums are also revealed, including inconsistent observations of light limitation and high rates of nitrite oxidation reported from anoxic waters.
This database can be used to constrain the distribution of marine nitrification, to evaluate and improve biogeochemical models of nitrification, and to quantify the impact of nitrification on ecosystem functions like marine productivity and N2O production. This database additionally sets a baseline for comparison with future observations and guides future exploration (e.g., measurements in the poorly sampled regions such as the Indian Ocean; method comparison/standardization). The database is publicly available at Zenodo repository: https://doi.org/10.5281/zenodo.7942922 (Tang et al., 2023).

Posted Content · DOI
27 Feb 2023
TL;DR: The authors revisit radiative-convective equilibrium (RCE), present a simple model of RCE suitable for blackboard exposition, and combine it with simplified treatments of CO2 and H2O radiative transfer to obtain analytic formulae for the radiative forcing from CO2 and the water vapor feedback, enabling a chalkboard estimate of climate sensitivity.
Abstract: Earth's climate sensitivity, or its temperature response to a given change in radiative forcing, is a central quantity in climate science and governs the severity of global warming. For decades it has been estimated at around 3 K per doubling of CO2, primarily using a succession of numerical models whose complexity has increased dramatically over time. It was first credibly estimated, however, by Manabe and Wetherald in 1967 using a relatively simple one-dimensional representation of Earth's climate known as radiative-convective equilibrium. It was largely for this work that Manabe received part of the 2021 Nobel Prize in Physics. Here we revisit the notion of radiative-convective equilibrium (RCE) and present a simple model of RCE suitable for blackboard exposition. We then combine this RCE model with simplified treatments of CO2 and H2O radiative transfer to obtain analytic formulae for the radiative forcing from CO2, as well as the water vapor feedback, thus enabling a chalkboard estimate of climate sensitivity in the context of radiative-convective equilibrium. Along the way we introduce key paradigms in climate dynamics and greenhouse gas physics, including the emission level approximation, the forcing-feedback decomposition of climate sensitivity, and 'Simpson's Law', which states that thermal emission from atmospheric water vapor is insensitive to surface temperature. We close by discussing the many important phenomena unaccounted for in this radiative-convective equilibrium framework, such as clouds, changes in absorbed solar radiation, and carbon cycle dynamics, which may be ripe for their own chalkboard treatments.
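A chalkboard estimate of this kind can be reproduced in a few lines. The sketch below uses the standard simplified logarithmic fit for CO2 forcing (the 5.35 W m^-2 coefficient from Myhre et al., 1998) and an illustrative net feedback parameter of 1.2 W m^-2 K^-1; neither value is taken from this paper, and the paper's own analytic formulae would differ in detail.

```python
import numpy as np

# Illustrative chalkboard estimate: forcing-feedback decomposition,
# dT = F / lambda, with a logarithmic CO2 forcing fit.

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Radiative forcing (W m^-2) from a CO2 concentration change,
    using the simplified logarithmic fit F = 5.35 ln(C/C0)."""
    return 5.35 * np.log(c_ppm / c0_ppm)

def equilibrium_warming(forcing, feedback=1.2):
    """Equilibrium temperature response dT = F / lambda (K), where
    lambda (W m^-2 K^-1) is the net climate feedback parameter."""
    return forcing / feedback

f_2x = co2_forcing(560.0)           # forcing per CO2 doubling, ~3.7 W m^-2
dT_2x = equilibrium_warming(f_2x)   # equilibrium warming per doubling, ~3 K
```

The logarithmic form is why each successive doubling of CO2 contributes roughly the same forcing, and dividing by the feedback parameter recovers the canonical ~3 K sensitivity.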

Posted Content · DOI
15 May 2023
TL;DR: The authors present Version 2 of their 1-km Köppen-Geiger climate classification maps for historical and future climate conditions, evaluating 64 climate models from the Coupled Model Intercomparison Project phase 6 (CMIP6) and retaining a subset of 40 with the most plausible CO2-induced warming rates.
Abstract: We present Version 2 of our widely used 1-km Köppen-Geiger climate classification maps for historical and future climate conditions. The historical maps (1901–1930, 1931–1960, 1961–1990, 1991–2020) are based on high-resolution, observation-based climatologies, while the future maps (2041–2070 and 2071–2099) are based on downscaled and bias-corrected climate projections for seven shared socio-economic pathways (SSPs). We evaluated 64 climate models from the Coupled Model Intercomparison Project phase 6 (CMIP6) and kept a subset of 40 with the most plausible CO2-induced warming rates. Under the “middle of the road” scenario SSP2-4.5, the global land surface area (excluding Antarctica) with suitable climatic conditions for tropical, arid, temperate, cold, and polar vegetation is projected to show a net change of +9 %, +3 %, −3 %, −2 %, −33 %, respectively, in 2071–2099 (with respect to 1991–2020). The Köppen-Geiger maps, including associated confidence estimates, the underlying monthly air temperature and precipitation data, and sensitivity metrics for CMIP6 climate models are available at www.gloh2o.org/koppen.
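The main Köppen-Geiger classes are decided by a handful of rules on monthly temperature and precipitation. The following is a deliberately simplified sketch of those rules: it ignores the precipitation-seasonality adjustment to the aridity threshold (always using the intermediate +140 mm term) and variant class boundaries, so it is not the full Version 2 algorithm behind the maps, and results can differ from it.

```python
import numpy as np

def koppen_main_class(t_monthly, p_monthly):
    """Simplified Köppen-Geiger main class (A/B/C/D/E) from 12 monthly
    mean temperatures (deg C) and monthly precipitation totals (mm).

    Simplifications: the dryness threshold omits the precipitation-
    seasonality adjustment, and the temperate/cold boundary uses the
    -3 deg C convention (some schemes use 0 deg C).
    """
    t = np.asarray(t_monthly, dtype=float)
    p = np.asarray(p_monthly, dtype=float)
    p_annual = p.sum()
    dry_threshold = 20.0 * t.mean() + 140.0  # simplified, in mm
    if p_annual < dry_threshold:
        return "B"        # arid
    if t.max() < 10.0:
        return "E"        # polar
    if t.min() >= 18.0:
        return "A"        # tropical
    if t.min() > -3.0:
        return "C"        # temperate
    return "D"            # cold
```

For example, a station with every month at 26 deg C and 200 mm of rain classifies as tropical (A), while one with months spanning -20 to 0 deg C classifies as polar (E).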

Posted Content · DOI
15 May 2023
TL;DR: The authors introduce a new process-oriented phase space that reduces the dimensionality of the problem while preserving (and emphasizing) the mechanistic relations between variables, and show that simulations from 16 different CMIP models exhibit coherent patterns of change in the climatological aridity index (AI) and daily soil moisture (SM) percentiles.
Abstract: Climate model predictions of land hydroclimate changes show large geographic heterogeneity, and differences between models are large. We introduce a new process-oriented phase space that reduces the dimensionality of the problem but preserves (and emphasizes) the mechanistic relations between variables. This transform from geographical space to climatological aridity index (AI) and daily soil moisture (SM) percentiles allows for interpretation of local, daily mechanistic relations between the key hydroclimatic variables in the context of time-mean and/or global-mean energetic constraints and the wet-get-wetter/dry-get-drier paradigm. Focusing on the tropics (30°S-30°N), we show that simulations from 16 different CMIP models exhibit coherent patterns of change in the AI/SM phase space that are aligned with the established soil-moisture/evapotranspiration regimes. Results indicate the need to introduce an active-rain regime as a special case of the energy-limited regime. In response to CO2-induced warming, rainfall increases only in this regime, and this temporal rainfall repartitioning is reflected in an overall decrease in soil moisture. Consequently, the regimes where SM constrains evapotranspiration become more frequently occupied, and hydroclimatic changes align with the position of the critical soil moisture value in the AI/SM phase space. Analysis of land hydroclimate changes in CMIP6 historical simulations in the AI/SM phase space reveals the very different impacts of CO2 forcing and aerosol forcing. CESM2 Single Forcing Large Ensemble Experiments are used to understand their roles.
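The transform from geographical space into an AI/SM phase space amounts to two-dimensional binning of daily samples by climatological aridity index and soil-moisture percentile, then averaging a variable of interest within each bin. The sketch below is a generic illustration of that binning; the bin counts, edge choices, and function name are assumptions, not the authors' implementation.

```python
import numpy as np

def phase_space_mean(ai, sm_percentile, value, n_ai=10, n_sm=10):
    """Bin daily samples into an (aridity index, soil-moisture percentile)
    phase space and return the mean of `value` in each bin.

    AI bins are quantile-based so each AI band holds a similar number of
    samples; SM bins are uniform in percentile (0-100). Empty bins are NaN.
    """
    ai = np.asarray(ai, dtype=float)
    sm = np.asarray(sm_percentile, dtype=float)
    value = np.asarray(value, dtype=float)
    ai_edges = np.quantile(ai, np.linspace(0.0, 1.0, n_ai + 1))
    sm_edges = np.linspace(0.0, 100.0, n_sm + 1)
    i = np.clip(np.searchsorted(ai_edges, ai, side="right") - 1, 0, n_ai - 1)
    j = np.clip(np.searchsorted(sm_edges, sm, side="right") - 1, 0, n_sm - 1)
    total = np.zeros((n_ai, n_sm))
    count = np.zeros((n_ai, n_sm))
    np.add.at(total, (i, j), value)   # unbuffered accumulation per bin
    np.add.at(count, (i, j), 1.0)
    with np.errstate(divide="ignore", invalid="ignore"):
        return total / count
```

Comparing such binned means between two periods (or two forcing experiments) gives the phase-space change patterns described above, independent of where on the map each sample came from.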

Posted Content · DOI
05 May 2023
TL;DR: Smith et al. apply the assessment methods of the IPCC Sixth Assessment Report (AR6) Working Group One (WGI) report, updating the monitoring datasets to produce current estimates for key climate indicators, including emissions, greenhouse gas concentrations, radiative forcing, surface temperature changes, Earth's energy imbalance, warming attributed to human activities, the remaining carbon budget, and global temperature extremes.
Abstract: Intergovernmental Panel on Climate Change (IPCC) assessments are the trusted source of scientific evidence for climate negotiations taking place under the United Nations Framework Convention on Climate Change (UNFCCC), including the first global stocktake under the Paris Agreement that will conclude at COP28 in December 2023. Evidence-based decision making needs to be informed by up-to-date and timely information on key indicators of the state of the climate system and of the human influence on the global climate system. However, successive IPCC reports are published at intervals of 5-10 years, creating potential for an information gap between report cycles. We base this update on the assessment methods used in the IPCC Sixth Assessment Report (AR6) Working Group One (WGI) report, updating the monitoring datasets to produce updated estimates for key climate indicators including emissions, greenhouse gas concentrations, radiative forcing, surface temperature changes, the Earth's energy imbalance, warming attributed to human activities, the remaining carbon budget, and estimates of global temperature extremes. The purpose of this effort, grounded in an open data, open science approach, is to make annually updated reliable global climate indicators available in the public domain (https://doi.org/10.5281/zenodo.7883758, Smith et al., 2023). As they are traceable and consistent with IPCC report methods, they can be trusted by all parties involved in UNFCCC negotiations and help convey wider understanding of the latest knowledge of the climate system and its direction of travel. The indicators show that human-induced warming reached 1.14 [0.9 to 1.4] °C over the 2013-2022 period and 1.26 [1.0 to 1.6] °C in 2022. Human-induced warming is increasing at an unprecedented rate of over 0.2 °C per decade.
This high rate of warming is caused by a combination of greenhouse gas emissions being at an all-time high of 57 ± 5.6 GtCO2e over the last decade, as well as reductions in the strength of aerosol cooling. Despite this, there are signs that emission levels are starting to stabilise, and we can hope that a continued series of these annual updates might track a real-world change of direction for the climate over this critical decade.