
Showing papers in "Journal of Geophysical Research in 2012"


Journal ArticleDOI
TL;DR: EGM2008 as mentioned in this paper is a spherical harmonic model of the Earth's gravitational potential, developed by a least squares combination of the ITG-GRACE03S gravitational model and its associated error covariance matrix, with the gravitational information obtained from a global set of area-mean free-air gravity anomalies defined on a 5 arc-minute equiangular grid.
Abstract: [1] EGM2008 is a spherical harmonic model of the Earth's gravitational potential, developed by a least squares combination of the ITG-GRACE03S gravitational model and its associated error covariance matrix, with the gravitational information obtained from a global set of area-mean free-air gravity anomalies defined on a 5 arc-minute equiangular grid. This grid was formed by merging terrestrial, altimetry-derived, and airborne gravity data. Over areas where only lower resolution gravity data were available, their spectral content was supplemented with gravitational information implied by the topography. EGM2008 is complete to degree and order 2159, and contains additional coefficients up to degree 2190 and order 2159. Over areas covered with high quality gravity data, the discrepancies between EGM2008 geoid undulations and independent GPS/Leveling values are on the order of ±5 to ±10 cm. EGM2008 vertical deflections over USA and Australia are within ±1.1 to ±1.3 arc-seconds of independent astrogeodetic values. These results indicate that EGM2008 performs comparably with contemporary detailed regional geoid models. EGM2008 performs equally well with other GRACE-based gravitational models in orbit computations. Over EGM96, EGM2008 represents improvement by a factor of six in resolution, and by factors of three to six in accuracy, depending on gravitational quantity and geographic area. EGM2008 represents a milestone and a new paradigm in global gravity field modeling, by demonstrating for the first time ever that, given accurate and detailed gravimetric data, a single global model may satisfy the requirements of a very wide range of applications.
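For reference, a model of this kind evaluates the standard fully normalized spherical harmonic expansion of the gravitational potential; the form below is the conventional one (with N = 2190 for EGM2008) and is implied rather than stated in the abstract:

V(r,\theta,\lambda) = \frac{GM}{r}\left[1 + \sum_{n=2}^{N}\left(\frac{a}{r}\right)^{n}\sum_{m=0}^{n}\left(\bar{C}_{nm}\cos m\lambda + \bar{S}_{nm}\sin m\lambda\right)\bar{P}_{nm}(\cos\theta)\right]

where GM and a are the model's scaling constants, C̄nm and S̄nm the fully normalized coefficients the model distributes, and P̄nm the fully normalized associated Legendre functions.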

1,755 citations


Journal ArticleDOI
TL;DR: HadCRUT4 as mentioned in this paper is a new data set of global and regional temperature evolution from 1850 to the present, which includes the addition of newly digitized measurement data, both over land and sea, new sea-surface temperature bias adjustments and a more comprehensive error model for describing uncertainties in sea surface temperature measurements.
Abstract: [1] Recent developments in observational near-surface air temperature and sea-surface temperature analyses are combined to produce HadCRUT4, a new data set of global and regional temperature evolution from 1850 to the present. This includes the addition of newly digitized measurement data, both over land and sea, new sea-surface temperature bias adjustments and a more comprehensive error model for describing uncertainties in sea-surface temperature measurements. An ensemble approach has been adopted to better describe complex temporal and spatial interdependencies of measurement and bias uncertainties and to allow these correlated uncertainties to be taken into account in studies that are based upon HadCRUT4. Climate diagnostics computed from the gridded data set broadly agree with those of other global near-surface temperature analyses. Fitted linear trends in temperature anomalies are approximately 0.07°C/decade from 1901 to 2010 and 0.17°C/decade from 1979 to 2010 globally. Northern/southern hemispheric trends are 0.08/0.07°C/decade over 1901 to 2010 and 0.24/0.10°C/decade over 1979 to 2010. Linear trends in other prominent near-surface temperature analyses agree well with the range of trends computed from the HadCRUT4 ensemble members.

1,311 citations


Journal ArticleDOI
TL;DR: Slab1.0 as mentioned in this paper describes the detailed, non-planar, three-dimensional geometry of approximately 85% of subduction zones worldwide. The model focuses on the detailed form of each slab from its trench through the seismogenic zone, where it combines data sets from active source and passive seismology, providing a uniform approach to the definition of the entire seismically active slab geometry.
Abstract: [1] We describe and present a new model of global subduction zone geometries, called Slab1.0. An extension of previous efforts to constrain the two-dimensional non-planar geometry of subduction zones around the focus of large earthquakes, Slab1.0 describes the detailed, non-planar, three-dimensional geometry of approximately 85% of subduction zones worldwide. While the model focuses on the detailed form of each slab from their trenches through the seismogenic zone, where it combines data sets from active source and passive seismology, it also continues to the limits of their seismic extent in the upper-mid mantle, providing a uniform approach to the definition of the entire seismically active slab geometry. Examples are shown for two well-constrained global locations; models for many other regions are available and can be freely downloaded in several formats from our new Slab1.0 website, http://on.doi.gov/d9ARbS. We describe improvements in our two-dimensional geometry constraint inversion, including the use of ‘average’ active source seismic data profiles in the shallow trench regions where data are otherwise lacking, derived from the interpolation between other active source seismic data along-strike in the same subduction zone. We include several analyses of the uncertainty and robustness of our three-dimensional interpolation methods. In addition, we use the filtered, subduction-related earthquake data sets compiled to build Slab1.0 in a reassessment of previous analyses of the deep limit of the thrust interface seismogenic zone for all subduction zones included in our global model thus far, concluding that the width of these seismogenic zones is on average 30% larger than previous studies have suggested.

865 citations


Journal ArticleDOI
TL;DR: In this paper, the authors used structure-from-motion (SfM) and multi-view-stereo (MVS) algorithms to estimate erosion rates along a ~50-m-long coastal cliff.
Abstract: Topographic measurements for detailed studies of processes such as erosion or mass movement are usually acquired by expensive laser scanners or rigorous photogrammetry. Here, we test and use an alternative technique based on freely available computer vision software which allows general geoscientists to easily create accurate 3D models from field photographs taken with a consumer-grade camera. The approach integrates structure-from-motion (SfM) and multi-view-stereo (MVS) algorithms and, in contrast to traditional photogrammetry techniques, it requires little expertise and few control measurements, and processing is automated. To assess the precision of the results, we compare SfM-MVS models spanning spatial scales of centimeters (a hand sample) to kilometers (the summit craters of Piton de la Fournaise volcano) with data acquired from laser scanning and formal close-range photogrammetry. The relative precision ratio achieved by SfM-MVS (measurement precision : observation distance) is limited by the straightforward camera calibration model used in the software, but generally exceeds 1:1000 (i.e., centimeter-level precision over measurement distances of tens of meters). We apply SfM-MVS at an intermediate scale, to determine erosion rates along a ~50-m-long coastal cliff. Seven surveys carried out over a year indicate an average retreat rate of 0.70 ± 0.05 m a⁻¹. Sequential erosion maps (at ~0.05 m grid resolution) highlight the spatio-temporal variability in the retreat, with semivariogram analysis indicating a correlation between volume loss and length scale. Compared with a laser scanner survey of the same site, SfM-MVS produced comparable data and reduced data collection time by ~80%.

859 citations


Journal ArticleDOI
TL;DR: In this paper, an extensive revision of the Climatic Research Unit (CRU) land station temperature database has been used to produce a grid-box data set of 5° latitude × 5° longitude temperature anomalies.
Abstract: [1] This study is an extensive revision of the Climatic Research Unit (CRU) land station temperature database that has been used to produce a grid-box data set of 5° latitude × 5° longitude temperature anomalies. The new database (CRUTEM4) comprises 5583 station records of which 4842 have enough data for the 1961–1990 period to calculate or estimate the average temperatures for this period. Many station records have had their data replaced by newly homogenized series that have been produced by a number of studies, particularly from National Meteorological Services (NMSs). Hemispheric temperature averages for land areas developed with the new CRUTEM4 data set differ slightly from their CRUTEM3 equivalent. The inclusion of much additional data from the Arctic (particularly the Russian Arctic) has led to estimates for the Northern Hemisphere (NH) being warmer by about 0.1°C for the years since 2001. The NH/Southern Hemisphere (SH) warms by 1.12°C/0.84°C over the period 1901–2010. The robustness of the hemispheric averages is assessed by producing five different analyses, each including a different subset of 20% of the station time series and by omitting some large countries. CRUTEM4 is also compared with hemispheric averages produced by reanalyses undertaken by the European Centre for Medium-Range Weather Forecasts (ECMWF): ERA-40 (1958–2001) and ERA-Interim (1979–2010) data sets. For the NH, agreement is good back to 1958 and excellent from 1979 at monthly, annual, and decadal time scales. For the SH, agreement is poorer, but if the area is restricted to the SH north of 60°S, the agreement is dramatically improved from the mid-1970s.

821 citations


Journal ArticleDOI
TL;DR: Results from the second phase of the multi-institution North American Land Data Assimilation System (NLDAS-2) research partnership are presented in this article, in which four land surface models (Noah, Variable Infiltration Capacity, Sacramento Soil Moisture Accounting, and Mosaic) are executed over the conterminous U.S. (CONUS) in real-time and retrospective modes.
Abstract: [1] Results are presented from the second phase of the multi-institution North American Land Data Assimilation System (NLDAS-2) research partnership. In NLDAS, the Noah, Variable Infiltration Capacity, Sacramento Soil Moisture Accounting, and Mosaic land surface models (LSMs) are executed over the conterminous U.S. (CONUS) in real-time and retrospective modes. These runs support the drought analysis, monitoring and forecasting activities of the National Integrated Drought Information System, as well as efforts to monitor large-scale floods. NLDAS-2 builds upon the framework of the first phase of NLDAS (NLDAS-1) by increasing the accuracy and consistency of the surface forcing data, upgrading the land surface model code and parameters, and extending the study from a 3-year (1997–1999) to a 30-year (1979–2008) time window. As the first of two parts, this paper details the configuration of NLDAS-2, describes the upgrades to the forcing, parameters, and code of the four LSMs, and explores overall model-to-model comparisons of land surface water and energy flux and state variables over the CONUS. Focusing on model output rather than on observations, this study seeks to highlight the similarities and differences between models, and to assess changes in output from that seen in NLDAS-1. The second part of the two-part article focuses on the validation of model-simulated streamflow and evaporation against observations. The results depict a higher level of agreement among the four models over much of the CONUS than was found in the first phase of NLDAS. This is due, in part, to recent improvements in the parameters, code, and forcing of the NLDAS-2 LSMs that were initiated following NLDAS-1. However, large inter-model differences still exist in the northeast, Lake Superior, and western mountainous regions of the CONUS, which are associated with cold season processes. In addition, variations in the representation of sub-surface hydrology in the four LSMs lead to large differences in modeled evaporation and subsurface runoff. These issues are important targets for future research by the land surface modeling community. Finally, improvement from NLDAS-1 to NLDAS-2 is summarized by comparing the streamflow measured from U.S. Geological Survey stream gauges with that simulated by the four NLDAS models over 961 small basins.

804 citations


Journal ArticleDOI
TL;DR: In this paper, a color index (CI) was proposed to estimate surface chlorophyll-a concentrations (Chl) in the global ocean for Chl less than or equal to 0.25 milligrams per cubic meter.
Abstract: A new empirical algorithm is proposed to estimate surface chlorophyll-a concentrations (Chl) in the global ocean for Chl less than or equal to 0.25 mg m⁻³ (approximately 77% of the global ocean area). The algorithm is based on a color index (CI), defined as the difference between the remote sensing reflectance (Rrs, sr⁻¹) in the green and a reference formed linearly between Rrs in the blue and red. For low-Chl waters, in situ data showed a tighter (and therefore better) relationship between CI and Chl than between traditional band ratios and Chl, which was further validated using global data collected concurrently by ship-borne and SeaWiFS satellite instruments. Model simulations showed that for low-Chl waters, compared with the band-ratio algorithm, the CI-based algorithm (CIA) was more tolerant to changes in the chlorophyll-specific backscattering coefficient, and performed similarly for different relative contributions of non-phytoplankton absorption. Simulations using existing atmospheric correction approaches further demonstrated that the CIA was much less sensitive than band-ratio algorithms to various errors induced by instrument noise and imperfect atmospheric correction (including sun glint and whitecap corrections). Image and time-series analyses of SeaWiFS and MODIS/Aqua data also showed improved performance in terms of reduced image noise, more coherent spatial and temporal patterns, and consistency between the two sensors. The reduction in noise and other errors is particularly useful to improve the detection of various ocean features such as eddies. Preliminary tests over MERIS and CZCS data indicate that the new approach should be generally applicable to all existing and future ocean color instruments.
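A minimal Python sketch of the band-difference idea, assuming the SeaWiFS band centers (443, 555, 670 nm); the CI-to-Chl coefficients a0 and a1 below are illustrative placeholders rather than the published fit:

import numpy as np

def color_index(rrs443, rrs555, rrs670):
    # CI: green Rrs minus a linear reference drawn between blue and red Rrs.
    reference = rrs443 + (555.0 - 443.0) / (670.0 - 443.0) * (rrs670 - rrs443)
    return rrs555 - reference

def chl_from_ci(ci, a0=-0.49, a1=191.7):
    # Empirical CI-to-chlorophyll fit; a0 and a1 are placeholder values.
    return 10.0 ** (a0 + a1 * ci)

# Example: typical oligotrophic-water reflectances (sr^-1).
ci = color_index(rrs443=0.006, rrs555=0.002, rrs670=0.0003)
print(chl_from_ci(ci))  # ~0.19 mg m^-3, inside the CIA's validity range

Because CI is a band difference rather than a band ratio, residual additive errors from imperfect atmospheric correction largely cancel, which is the source of the noise reduction described above.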

684 citations


Journal ArticleDOI
TL;DR: In this paper, a method for combining 1-km thermal anomalies (active fires) and 500 m burned area observations from the Moderate Resolution Imaging Spectroradiometer (MODIS) was developed to estimate the influence of small fires that fall below the detection limit of global burned area products.
Abstract: In several biomes, including croplands, wooded savannas, and tropical forests, many small fires occur each year that are well below the detection limit of the current generation of global burned area products derived from moderate resolution surface reflectance imagery. Although these fires often generate thermal anomalies that can be detected by satellites, their contributions to burned area and carbon fluxes have not been systematically quantified across different regions and continents. Here we developed a preliminary method for combining 1-km thermal anomalies (active fires) and 500 m burned area observations from the Moderate Resolution Imaging Spectroradiometer (MODIS) to estimate the influence of these fires. In our approach, we calculated the number of active fires inside and outside of 500 m burn scars derived from reflectance data. We estimated small fire burned area by computing the differenced normalized burn ratio (dNBR) for these two sets of active fires and then combining these observations with other information. In a final step, we used the Global Fire Emissions Database version 3 (GFED3) biogeochemical model to estimate the impact of these fires on biomass burning emissions. We found that the spatial distributions of active fires and 500 m burned areas were in close agreement in ecosystems that experience large fires, including savannas across southern Africa and Australia and boreal forests in North America and Eurasia. In other areas, however, we observed many active fires outside of burned area perimeters. Fire radiative power was lower for this class of active fires. Small fires substantially increased burned area in several continental-scale regions, including Equatorial Asia (157%), Central America (143%), and Southeast Asia (90%) during 2001–2010. Globally, accounting for small fires increased total burned area by approximately 35%, from 345 Mha/yr to 464 Mha/yr. A formal quantification of uncertainties was not possible, but sensitivity analyses of key model parameters caused estimates of the global burned area increase from small fires to vary between 24% and 54%. Biomass burning carbon emissions increased by 35% at a global scale when small fires were included in GFED3, from 1.9 Pg C/yr to 2.5 Pg C/yr. The contribution of tropical forest fires to year-to-year variability in carbon fluxes increased because small fires amplified emissions from Central America, South America, and Southeast Asia, regions where drought stress and burned area varied considerably from year to year in response to the El Niño–Southern Oscillation and other climate modes.
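One plausible reading of the combination step, as a schematic Python sketch; the variable names and the simple proportional scaling are assumptions for illustration, and the actual implementation (dNBR screening, regional statistics, coupling to GFED3) is considerably more involved:

def small_fire_burned_area(ba_500m, af_inside, af_outside, dnbr_ratio):
    # Burned area per active fire inside mapped 500 m burn scars, scaled by
    # the outside/inside dNBR ratio and applied to fires outside the scars.
    ba_per_fire = ba_500m / af_inside       # km^2 per active fire, inside scars
    return af_outside * ba_per_fire * dnbr_ratio

# Example: 1000 fires inside scars covering 500 km^2, 400 fires outside
# scars, and outside-scar dNBR about 60% of the inside-scar value.
print(small_fire_burned_area(500.0, 1000, 400, 0.6))  # 120 km^2 additional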

577 citations


Journal ArticleDOI
TL;DR: The method uses Bayesian transdimensional Markov Chain Monte Carlo and allows a wide range of possible thermal history models to be considered as general prior information on time, temperature (and temperature offset for multiple samples in a vertical profile).
Abstract: [1] A new approach for inverse thermal history modeling is presented. The method uses Bayesian transdimensional Markov Chain Monte Carlo and allows us to specify a wide range of possible thermal history models to be considered as general prior information on time, temperature (and temperature offset for multiple samples in a vertical profile). We can also incorporate more focused geological constraints in terms of more specific priors. The Bayesian approach naturally prefers simpler thermal history models (which provide an adequate fit to the observations), and so reduces the problems associated with over-interpretation of inferred thermal histories. The output of the method is a collection or ensemble of thermal histories, which quantifies the range of accepted models in terms of a (posterior) probability distribution. Individual models, such as the best data fitting (maximum likelihood) model or the expected model (effectively the weighted mean from the posterior distribution), can be examined. Different data types (e.g., fission track, U-Th/He, 40Ar/39Ar) can be combined, requiring just a data-specific predictive forward model and data fit (likelihood) function. To demonstrate the main features and implementation of the approach, examples are presented using both synthetic and real data.
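A toy Python sketch of the transdimensional (birth/death) sampling idea, fitting a piecewise-linear time-temperature history to synthetic data. The forward model, priors, proposal scheme, and the simplified acceptance rule are illustrative assumptions, not the paper's implementation; with birth proposals drawn from the prior and a uniform prior on the number of nodes, the reversible-jump acceptance ratio reduces to the likelihood ratio:

import numpy as np

rng = np.random.default_rng(42)

# Synthetic "thermochronological" data: temperature (C) sampled at times (Ma).
obs_times = np.linspace(0.0, 100.0, 20)
obs = 20.0 + 1.2 * obs_times + rng.normal(0.0, 5.0, obs_times.size)
SIGMA = 5.0  # assumed data noise

def log_like(times, temps):
    # Piecewise-linear history evaluated at the observation times --
    # a stand-in for a real predictive model (fission track, U-Th/He, Ar/Ar).
    order = np.argsort(times)
    pred = np.interp(obs_times, times[order], temps[order])
    return -0.5 * np.sum(((obs - pred) / SIGMA) ** 2)

times, temps = np.array([0.0, 100.0]), np.array([10.0, 150.0])  # 2-node start
ensemble = []
for it in range(20000):
    t_new, T_new, move = times.copy(), temps.copy(), rng.integers(3)
    if move == 0 and len(t_new) < 20:      # birth: new node drawn from the prior
        t_new = np.append(t_new, rng.uniform(0.0, 100.0))
        T_new = np.append(T_new, rng.uniform(0.0, 250.0))
    elif move == 1 and len(t_new) > 2:     # death: delete a random node
        k = rng.integers(len(t_new))
        t_new, T_new = np.delete(t_new, k), np.delete(T_new, k)
    else:                                  # perturb one node's temperature
        T_new[rng.integers(len(T_new))] += rng.normal(0.0, 5.0)
    if np.any((T_new < 0.0) | (T_new > 250.0)):
        continue                           # uniform prior bounds
    if np.log(rng.uniform()) < log_like(t_new, T_new) - log_like(times, temps):
        times, temps = t_new, T_new        # Metropolis acceptance
    if it > 5000 and it % 50 == 0:
        ensemble.append((times.copy(), temps.copy()))  # posterior ensemble

print(f"{len(ensemble)} posterior samples; current model has {len(times)} nodes")

The ensemble, rather than any single accepted model, is the result: summarizing it (e.g., pointwise means and credible intervals) gives the posterior characterization of the thermal history described above.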

514 citations


Journal ArticleDOI
TL;DR: There is no correct baseline determination technique, since no set of ground-truth observations exists with which to make an objective evaluation; the user must therefore keep in mind the assumptions on which the baseline was determined and draw conclusions accordingly.
Abstract: [1] In this paper I outline the data processing technique which is used in the SuperMAG initiative. SuperMAG is a worldwide collaboration of organizations and national agencies that currently operate more than 300 ground based magnetometers. SuperMAG provides easy access to validated ground magnetic field perturbations in the same coordinate system, with identical time resolution, and with a common baseline removal approach. The purpose of SuperMAG is to provide scientists, teachers, students and the general public easy access to measurements of the magnetic field at the surface of the Earth. Easy access to data, plots and derived products maximizes the utilization of this unique data set. It is outlined how SuperMAG processes the observations obtained from the individual data providers. Data are rotated into a local magnetic coordinate system by determining a time dependent declination angle. This angle displays a slow gradual change and a yearly periodic variation attributed to changes in the Earth's main field and seasonal temperature variations. The baseline is determined from the data itself in a three-step process: (1) a daily baseline, (2) a yearly trend, and (3) a residual offset. This technique does not require so-called quiet days and thus it avoids all the well-known problems associated with their identification. The residual offset for the N- and Z-components shows a distinct latitudinal dependence while the E-component is independent of the latitude. This result is interpreted as being due to a weak ring current (likely asymmetric) which is present even during official quiet days. For the purpose of M-I research using 1-min data I find no difference between observatories and variometers. I finally argue that there is no correct baseline determination technique, since we do not have a set of ground-truth observations required to make an objective evaluation. Instead, the user must keep in mind the assumptions on which the baseline was determined and draw conclusions accordingly.
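A schematic Python sketch of the three-step baseline removal; the specific filters chosen here (daily median, 30-day centered rolling mean) are illustrative assumptions, not SuperMAG's exact processing:

import numpy as np
import pandas as pd

def remove_baseline(values, index):
    # Three-step baseline: (1) daily baseline, (2) yearly trend,
    # (3) residual offset -- no "quiet day" selection required.
    s = pd.Series(values, index=index)
    daily = s.resample("1D").median()                             # step 1
    trend = daily.rolling(30, center=True, min_periods=1).mean()  # step 2
    offset = (daily - trend).median()                             # step 3
    baseline = (trend + offset).reindex(index).interpolate(limit_direction="both")
    return s - baseline

# Example: one year of synthetic 1-min N-component data (nT) with a slow drift.
idx = pd.date_range("2010-01-01", periods=525600, freq="1min")
rng = np.random.default_rng(1)
raw = 50.0 + 0.01 * np.arange(idx.size) / 1440.0 + rng.normal(0.0, 2.0, idx.size)
print(remove_baseline(raw, idx).describe())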

506 citations


Journal ArticleDOI
TL;DR: In this article, the authors represent these and other depth-varying seismic characteristics with four distinct failure domains extending along the megathrust from the trench to the downdip edge of the seismogenic zone.
Abstract: Subduction zone plate boundary megathrust faults accommodate relative plate motions with spatially varying sliding behavior. The 2004 Sumatra-Andaman (M_w 9.2), 2010 Chile (M_w 8.8), and 2011 Tohoku (M_w 9.0) great earthquakes had similar depth variations in seismic wave radiation across their wide rupture zones – coherent teleseismic short-period radiation preferentially emanated from the deeper portion of the megathrusts, whereas the largest fault displacements occurred at shallower depths but produced relatively little coherent short-period radiation. We represent these and other depth-varying seismic characteristics with four distinct failure domains extending along the megathrust from the trench to the downdip edge of the seismogenic zone. We designate the portion of the megathrust less than 15 km below the ocean surface as domain A, the region of tsunami earthquakes. From 15 to ∼35 km deep, large earthquake displacements occur over large-scale regions with only modest coherent short-period radiation, in what we designate as domain B. Rupture of smaller isolated megathrust patches dominates in domain C, which extends from ∼35 to 55 km deep. These isolated patches produce bursts of coherent short-period energy both in great ruptures and in smaller, sometimes repeating, moderate-size events. For the 2011 Tohoku earthquake, the sites of coherent teleseismic short-period radiation are close to areas where local strong ground motions originated. Domain D, found at depths of 30–45 km in subduction zones where relatively young oceanic lithosphere is being underthrust with shallow plate dip, is represented by the occurrence of low-frequency earthquakes, seismic tremor, and slow slip events in a transition zone to stable sliding or ductile flow below the seismogenic zone.

Journal ArticleDOI
TL;DR: In this paper, the authors used GPS time series from 30 stations in Nepal and southern Tibet, in addition to previously published campaign GPS points and leveling data, and determined the pattern of interseismic coupling on the Main Himalayan Thrust fault (MHT).
Abstract: We document geodetic strain across the Nepal Himalaya using GPS time series from 30 stations in Nepal and southern Tibet, in addition to previously published campaign GPS points and leveling data, and determine the pattern of interseismic coupling on the Main Himalayan Thrust fault (MHT). The noise on the daily GPS positions is modeled as a combination of white and colored noise, in order to infer secular velocities at the stations with consistent uncertainties. We then locate the pole of rotation of the Indian plate in the ITRF 2005 reference frame at longitude = −1.34° ± 3.31°, latitude = 51.4° ± 0.3°, with an angular velocity of Ω = 0.5029 ± 0.0072°/Myr. The pattern of coupling on the MHT is computed on a fault dipping 10° to the north and whose strike roughly follows the arcuate shape of the Himalaya. The model indicates that the MHT is locked from the surface to a distance of approximately 100 km down dip, corresponding to a depth of 15 to 20 km. In map view, the transition zone between the locked portion of the MHT and the portion which is creeping at the long-term slip rate seems to be at most a few tens of kilometers wide and coincides with the belt of midcrustal microseismicity underneath the Himalaya. According to a previous study based on thermokinematic modeling of thermochronological and thermobarometric data, this transition seems to happen in a zone where the temperature reaches 350°C. The convergence between India and South Tibet proceeds at a rate of 17.8 ± 0.5 mm/yr in central and eastern Nepal and 20.5 ± 1 mm/yr in western Nepal. The moment deficit due to locking of the MHT in the interseismic period accrues at a rate of 6.6 ± 0.4 × 10^(19) Nm/yr on the MHT underneath Nepal. For comparison, the moment released by the seismicity over the past 500 years, including 14 M_w ≥ 7 earthquakes with moment magnitudes up to 8.5, amounts to only 0.9 × 10^(19) Nm/yr, indicating a large deficit of seismic slip over that period or very infrequent large slow slip events. However, no large slow slip event has been observed over the 20 years covered by geodetic measurements in the Nepal Himalaya. We discuss the magnitude and return period of M > 8 earthquakes required to balance the long-term slip budget on the MHT.
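A small Python sketch of the rigid-plate arithmetic implied here (v = ω × r), using the India pole quoted in the abstract; the spherical Earth and the example site (Kathmandu) are assumptions of this illustration:

import numpy as np

def plate_velocity_mm_yr(pole_lat, pole_lon, omega_deg_myr, site_lat, site_lon):
    # Surface velocity v = omega x r for a rigid plate on a spherical Earth.
    R = 6371e3                                   # mean Earth radius (m)
    w = np.deg2rad(omega_deg_myr) / 1e6          # rotation rate (rad/yr)
    plat, plon = np.deg2rad([pole_lat, pole_lon])
    slat, slon = np.deg2rad([site_lat, site_lon])
    omega = w * np.array([np.cos(plat) * np.cos(plon),
                          np.cos(plat) * np.sin(plon),
                          np.sin(plat)])         # rotation vector (rad/yr)
    r = R * np.array([np.cos(slat) * np.cos(slon),
                      np.cos(slat) * np.sin(slon),
                      np.sin(slat)])             # site position (ECEF, m)
    return np.linalg.norm(np.cross(omega, r)) * 1e3   # speed (mm/yr)

# India-ITRF2005 pole from the abstract; Kathmandu (27.7N, 85.3E) as example.
print(plate_velocity_mm_yr(51.4, -1.34, 0.5029, 27.7, 85.3))  # ~50 mm/yr

The ~50 mm/yr result is India's full absolute plate speed in the reference frame; the 17.8-20.5 mm/yr figures above are the much smaller India-South Tibet convergence accommodated across the MHT.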

Journal ArticleDOI
TL;DR: In this paper, a multispecies analysis of daily air samples collected at the NOAA Boulder Atmospheric Observatory (BAO) in Weld County in northeastern Colorado since 2007 shows highly correlated alkane enhancements caused by a regionally distributed mix of sources in the Denver-Julesburg Basin.
Abstract: [1] The multispecies analysis of daily air samples collected at the NOAA Boulder Atmospheric Observatory (BAO) in Weld County in northeastern Colorado since 2007 shows highly correlated alkane enhancements caused by a regionally distributed mix of sources in the Denver-Julesburg Basin. To further characterize the emissions of methane and non-methane hydrocarbons (propane, n-butane, i-pentane, n-pentane and benzene) around BAO, a pilot study involving automobile-based surveys was carried out during the summer of 2008. A mix of venting emissions (leaks) of raw natural gas and flashing emissions from condensate storage tanks can explain the alkane ratios we observe in air masses impacted by oil and gas operations in northeastern Colorado. Using the WRAP Phase III inventory of total volatile organic compound (VOC) emissions from oil and gas exploration, production and processing, together with flashing and venting emission speciation profiles provided by State agencies or the oil and gas industry, we derive a range of bottom-up speciated emissions for Weld County in 2008. We use the observed ambient molar ratios and flashing and venting emissions data to calculate top-down scenarios for the amount of natural gas leaked to the atmosphere and the associated methane and non-methane emissions. Our analysis suggests that the emissions of the species we measured are most likely underestimated in current inventories and that the uncertainties attached to these estimates can be as high as a factor of two.

Journal ArticleDOI
TL;DR: In this article, the authors present a new thermomechanical finite element model of ice flow named ISSM (Ice Sheet System Model) that includes higher-order stresses, high spatial resolution capability and data assimilation techniques to better capture ice dynamics and produce realistic simulations of ice sheet flow at the continental scale.
Abstract: Ice flow models used to project the mass balance of ice sheets in Greenland and Antarctica usually rely on the Shallow Ice Approximation (SIA) and the Shallow-Shelf Approximation (SSA), sometimes combined into so-called hybrid models. Such models, while computationally efficient, are based on a simplified set of physical assumptions about the mechanical regime of the ice flow, which does not uniformly apply everywhere on the ice sheet/ice shelf system, especially near grounding lines, where rapid changes are taking place at present. Here, we present a new thermomechanical finite element model of ice flow named ISSM (Ice Sheet System Model) that includes higher-order stresses, high spatial resolution capability and data assimilation techniques to better capture ice dynamics and produce realistic simulations of ice sheet flow at the continental scale. ISSM includes several approximations of the momentum balance equations, ranging from the two-dimensional SSA to the three-dimensional full-Stokes formulation. It also relies on a massively parallelized architecture and state-of-the-art scalable tools. ISSM employs data assimilation techniques, at all levels of approximation of the momentum balance equations, to infer basal drag at the ice-bed interface from satellite radar interferometry-derived observations of ice motion. Following a validation of ISSM with standard benchmarks, we present a demonstration of its capability in the case of the Greenland Ice Sheet. We show ISSM is able to simulate the ice flow of an entire ice sheet realistically at a high spatial resolution, with higher-order physics, thereby providing a pathway for improving projections of ice sheet evolution in a warming climate.

Journal ArticleDOI
TL;DR: In this paper, a generalized two-phase debris flow model is proposed that includes many essential physical phenomena, including viscous drag, buoyancy, and virtual mass. The model employs Mohr-Coulomb plasticity for the solid stress, while the fluid stress is modeled as a solid-volume-fraction-gradient-enhanced non-Newtonian viscous stress.
Abstract: [1] This paper presents a new, generalized two-phase debris flow model that includes many essential physical phenomena. The model employs the Mohr-Coulomb plasticity for the solid stress, and the fluid stress is modeled as a solid-volume-fraction-gradient-enhanced non-Newtonian viscous stress. The generalized interfacial momentum transfer includes viscous drag, buoyancy, and virtual mass. A new, generalized drag force is proposed that covers both solid-like and fluid-like contributions, and can be applied to drag ranging from linear to quadratic. Strong coupling between the solid- and the fluid-momentum transfer leads to simultaneous deformation, mixing, and separation of the phases. Inclusion of the non-Newtonian viscous stresses is important in several aspects. The evolution, advection, and diffusion of the solid-volume fraction plays an important role. The model, which includes three innovative, fundamentally new, and dominant physical aspects (enhanced viscous stress, virtual mass, generalized drag) constitutes the most generalized two-phase flow model to date, and can reproduce results from most previous simple models that consider single- and two-phase avalanches and debris flows as special cases. Numerical results indicate that the model can adequately describe the complex dynamics of subaerial two-phase debris flows, particle-laden and dispersive flows, sediment transport, and submarine debris flows and associated phenomena.

Journal ArticleDOI
TL;DR: This work presents a novel method for joint inversion of receiver functions and surface wave dispersion data, using a transdimensional Bayesian formulation and shows that the Hierarchical Bayes procedure is a powerful tool in this situation, able to evaluate the level of information brought by different data types in the misfit, thus removing the arbitrary choice of weighting factors.
Abstract: We present a novel method for joint inversion of receiver functions and surface wave dispersion data, using a transdimensional Bayesian formulation. This class of algorithm treats the number of model parameters (e.g., the number of layers) as an unknown in the problem. The dimension of the model space is variable and a Markov chain Monte Carlo (McMC) scheme is used to provide a parsimonious solution that fully quantifies the degree of knowledge one has about seismic structure (i.e., constraints on the model, resolution, and trade-offs). The level of data noise (i.e., the covariance matrix of data errors) effectively controls the information recoverable from the data and here it naturally determines the complexity of the model (i.e., the number of model parameters). However, it is often difficult to quantify the data noise appropriately, particularly in the case of seismic waveform inversion where data errors are correlated. Here we address the issue of noise estimation using an extended Hierarchical Bayesian formulation, which allows both the variance and covariance of data noise to be treated as unknowns in the inversion. In this way it is possible to let the data infer the appropriate level of data fit. In the context of joint inversions, assessment of uncertainty for different data types becomes crucial in the evaluation of the misfit function. We show that the Hierarchical Bayes procedure is a powerful tool in this situation, because it is able to evaluate the level of information brought by different data types in the misfit, thus removing the arbitrary choice of weighting factors. After illustrating the method with synthetic tests, a real data application is shown where teleseismic receiver functions and ambient noise surface wave dispersion measurements from the WOMBAT array (South-East Australia) are jointly inverted to provide a probabilistic 1D model of shear-wave velocity beneath a given station.
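In the hierarchical formulation, the noise scale becomes part of the model. For Gaussian errors with covariance C_e(σ, r), parameterized by a magnitude σ and (for correlated waveform noise) a correlation parameter r, the sampled likelihood takes the standard form below; the notation is assumed here for illustration:

p(\mathbf{d}\mid\mathbf{m},\sigma,r) = \frac{1}{\sqrt{(2\pi)^{N}\,\lvert\mathbf{C}_e(\sigma,r)\rvert}}\,\exp\!\left[-\tfrac{1}{2}\,\big(\mathbf{d}-g(\mathbf{m})\big)^{\mathsf{T}}\,\mathbf{C}_e(\sigma,r)^{-1}\,\big(\mathbf{d}-g(\mathbf{m})\big)\right]

Sampling σ and r alongside the earth model m lets each data type settle at its own level of fit, which is what removes the manual weighting between receiver functions and dispersion data.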

Journal ArticleDOI
TL;DR: In this paper, the relationship between dissolved organic carbon (DOC) concentration and chromophoric dissolved organic matter (CDOM) parameters was measured over a range of discharge in 30 U.S. rivers.
Abstract: [1] Dissolved organic carbon (DOC) concentration and chromophoric dissolved organic matter (CDOM) parameters were measured over a range of discharge in 30 U.S. rivers, covering a diverse assortment of fluvial ecosystems in terms of watershed size and landscape drained. Relationships between CDOM absorption at a range of wavelengths (a254, a350, a440) and DOC in the 30 watersheds were found to correlate strongly and positively for the majority of U.S. rivers. However, four rivers (Colorado, Columbia, Rio Grande and St. Lawrence) exhibited statistically weak relationships between CDOM absorption and DOC. These four rivers are atypical, as they either drain from the Great Lakes or experience significant impoundment of water within their watersheds, and they exhibited values for dissolved organic matter (DOM) parameters indicative of autochthonous or anthropogenic sources or photochemically degraded allochthonous DOM, and thus a decoupling between CDOM and DOC. CDOM quality parameters in the 30 rivers were found to be strongly correlated to DOM compositional metrics derived via XAD fractionation, highlighting the potential for examining DOM biochemical quality from CDOM measurements. This study establishes the ability to derive DOC concentration from CDOM absorption for the majority of U.S. rivers, describes characteristics of riverine systems where such an approach is not valid, and emphasizes the possibility of examining DOM composition and thus biogeochemical function via CDOM parameters. The usefulness of CDOM measurements, both laboratory-based analyses and in situ instrumentation, for improving the spatial and temporal resolution of DOC fluxes and DOM dynamics is therefore considerable for a range of future biogeochemical studies.

Journal ArticleDOI
TL;DR: In this paper, the authors evaluate the accuracy of cloud water content (CWC) and water vapor mixing ratio (H2O) outputs from 19 climate models submitted to the Phase 5 of Coupled Model Intercomparison Project (CMIP5), and assess improvements relative to their counterparts for the earlier CMIP3.
Abstract: [1] Using NASA's A-Train satellite measurements, we evaluate the accuracy of cloud water content (CWC) and water vapor mixing ratio (H2O) outputs from 19 climate models submitted to Phase 5 of the Coupled Model Intercomparison Project (CMIP5), and assess improvements relative to their counterparts for the earlier CMIP3. We find more than half of the models show improvements from CMIP3 to CMIP5 in simulating column-integrated cloud amount, while changes in water vapor simulation are insignificant. For the 19 CMIP5 models, the model spreads and their differences from the observations are larger in the upper troposphere (UT) than in the lower or middle troposphere (L/MT). The modeled mean CWCs over tropical oceans range from ~3% to ~15× the observed values in the UT, and from 40% to 2× the observed values in the L/MT. For modeled H2O, the mean values over tropical oceans range from ~1% to 2× the observed values in the UT, and lie within 10% of the observations in the L/MT. The spatial distributions of clouds at 215 hPa are relatively well-correlated with observations, noticeably better than those for the L/MT clouds. Although both water vapor and clouds are better simulated in the L/MT than in the UT, there is no apparent correlation between the model biases in clouds and water vapor. Numerical scores are used to compare different model performances in regards to spatial mean, variance and distribution of CWC and H2O over tropical oceans. Model performances at each pressure level are ranked according to the average of all the relevant scores for that level.

Journal ArticleDOI
TL;DR: In this paper, spectral aerosol optical depth (τ) and single scattering albedo (ω0) from AERONET measurements are used to form absorption and size relationships to infer dominant aerosol types.
Abstract: Partitioning of mineral dust, pollution, smoke, and mixtures using remote sensing techniques can help improve accuracy of satellite retrievals and assessments of the aerosol radiative impact on climate. Spectral aerosol optical depth (τ) and single scattering albedo (ω0) from Aerosol Robotic Network (AERONET) measurements are used to form absorption [i.e., ω0 and absorption Ångström exponent (αabs)] and size [i.e., extinction Ångström exponent (αext) and fine mode fraction of τ] relationships to infer dominant aerosol types. Using the long-term AERONET data set (1999–2010), 19 sites are grouped by aerosol type based on known source regions to: (1) determine the average ω0 and αabs at each site (expanding upon previous work); (2) perform a sensitivity study on αabs by varying the spectral ω0; and (3) test the ability of each absorption and size relationship to distinguish aerosol types. The spectral ω0 averages indicate slightly more aerosol absorption (i.e., a 0.00 < Δω0 ≤ 0.02 decrease) than in previous work, and optical mixtures of pollution and smoke with dust show stronger absorption than dust alone. Frequency distributions of αabs show significant overlap among aerosol type categories, and at least 10% of the αabs retrievals in each category are below 1.0. Perturbing the spectral ω0 by ±0.03 induces significant αabs changes from the unperturbed value by at least approximately ±0.6 for Dust, ±0.2 for Mixed, and ±0.1 for Urban/Industrial and Biomass Burning. The ω0(440 nm) and αext(440–870 nm) relationship shows the best separation among aerosol type clusters, providing a simple technique for determining aerosol type from surface- and future space-based instrumentation.
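For reference, the two exponents are two-wavelength slopes in log-log space, with absorption optical depth τabs = (1 − ω0)τ; these are the standard definitions rather than anything specific to this paper:

\alpha_{\mathrm{ext}} = -\frac{\ln\!\left[\tau(\lambda_1)/\tau(\lambda_2)\right]}{\ln(\lambda_1/\lambda_2)}, \qquad \alpha_{\mathrm{abs}} = -\frac{\ln\!\left[\tau_{\mathrm{abs}}(\lambda_1)/\tau_{\mathrm{abs}}(\lambda_2)\right]}{\ln(\lambda_1/\lambda_2)}

with, for example, λ1 = 440 nm and λ2 = 870 nm as in the ω0(440 nm) versus αext(440–870 nm) clustering above.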

Journal ArticleDOI
TL;DR: In this paper, the authors evaluated simulated, daily average gross primary productivity (GPP) from 26 models against estimated GPP at 39 eddy covariance flux tower sites across the United States and Canada.
Abstract: [1] Accurately simulating gross primary productivity (GPP) in terrestrial ecosystem models is critical because errors in simulated GPP propagate through the model to introduce additional errors in simulated biomass and other fluxes. We evaluated simulated, daily average GPP from 26 models against estimated GPP at 39 eddy covariance flux tower sites across the United States and Canada. None of the models in this study match estimated GPP within observed uncertainty. On average, models overestimate GPP in winter, spring, and fall, and underestimate GPP in summer. Models overpredicted GPP under dry conditions and for temperatures below 0°C. Improvements in simulated soil moisture and ecosystem response to drought or humidity stress will improve simulated GPP under dry conditions. Adding a low-temperature response to shut down GPP for temperatures below 0°C will reduce the positive bias in winter, spring, and fall and improve simulated phenology. The negative bias in summer and poor overall performance resulted from mismatches between simulated and observed light use efficiency (LUE). Improving simulated GPP requires better leaf-to-canopy scaling and better values of model parameters that control the maximum potential GPP, such as εmax (LUE), Vcmax (unstressed Rubisco catalytic capacity) or Jmax (the maximum electron transport rate).
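For context, the LUE logic invoked here is commonly written in the generic form below; the particular stress functions are an assumption of this sketch, and individual models differ:

\mathrm{GPP} = \varepsilon_{\max}\; f(T)\; f(\mathrm{VPD})\; \mathrm{fPAR}\; \mathrm{PAR}

so simulated GPP can be biased through the maximum-potential parameters (εmax, Vcmax, Jmax) or through the temperature and moisture stress functions f(T) and f(VPD), mirroring the two error sources identified above.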

Journal ArticleDOI
TL;DR: In this paper, the authors focus on the validation of simulated streamflow from four land surface models (Noah, Mosaic, Sacramento Soil Moisture Accounting (SAC-SMA), and Variable Infiltration Capacity (VIC) and their ensemble mean.
Abstract: [1] This is the second part of a study on continental-scale water and energy flux analysis and validation conducted in phase 2 of the North American Land Data Assimilation System project (NLDAS-2). The first part concentrates on a model-by-model comparison of mean annual and monthly water fluxes, energy fluxes and state variables. In this second part, the focus is on the validation of simulated streamflow from four land surface models (Noah, Mosaic, Sacramento Soil Moisture Accounting (SAC-SMA), and Variable Infiltration Capacity (VIC) models) and their ensemble mean. Comparisons are made against 28 years (1 October 1979 to 30 September 2007) of United States Geological Survey observed streamflow for 961 small basins and 8 major basins over the conterminous United States (CONUS). Relative bias, anomaly correlation and Nash-Sutcliffe Efficiency (NSE) statistics at daily to annual time scales are used to assess model-simulated streamflow. The Noah (the Mosaic) model overestimates (underestimates) mean annual runoff and underestimates (overestimates) mean annual evapotranspiration. The SAC-SMA and VIC models simulate the mean annual runoff and evapotranspiration well when compared with the observations. The ensemble mean is closer to the mean annual observed streamflow for both the 961 small basins and the 8 major basins than is the mean from any individual model. All of the models, as well as the ensemble mean, have large daily, weekly, monthly, and annual streamflow anomaly correlations for most basins over the CONUS, implying strong simulation skill. However, the daily, weekly, and monthly NSE analysis results are not necessarily encouraging, in particular for daily streamflow. The Noah and Mosaic models are useful (NSE > 0.4) only for about 10% of the 961 small basins, the SAC-SMA and VIC models are useful for about 30% of the 961 small basins, and the ensemble mean is useful for about 42% of the 961 small basins. As the time scale increases, the NSE increases as expected. However, even for monthly streamflow, the ensemble mean is useful for only 75% of the 961 small basins.
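The usefulness threshold quoted here (NSE > 0.4) refers to the standard Nash-Sutcliffe Efficiency; a minimal Python version, with illustrative numbers:

import numpy as np

def nse(sim, obs):
    # Nash-Sutcliffe Efficiency: 1 = perfect match, 0 = no better than
    # the observed mean, negative = worse than the mean.
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([10.0, 12.0, 30.0, 22.0, 15.0])   # observed flow (m^3/s)
sim = np.array([11.0, 13.0, 26.0, 20.0, 14.0])   # simulated flow (m^3/s)
print(nse(sim, obs))  # ~0.91, "useful" by the NSE > 0.4 criterion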

Journal ArticleDOI
TL;DR: In this article, a physically based approach for calculating glacier ice thickness distribution and volume is presented and applied to all glaciers and ice caps worldwide, combining glacier outlines of the globally complete Randolph Glacier Inventory with terrain elevation models (Shuttle Radar Topography Mission/Advanced Spaceborne Thermal Emission and Reflection Radiometer).
Abstract: [1] A new physically based approach for calculating glacier ice thickness distribution and volume is presented and applied to all glaciers and ice caps worldwide. Combining glacier outlines of the globally complete Randolph Glacier Inventory with terrain elevation models (Shuttle Radar Topography Mission/Advanced Spaceborne Thermal Emission and Reflection Radiometer), we use a simple dynamic model to obtain spatially distributed thickness of individual glaciers by inverting their surface topography. Results are validated against a comprehensive set of thickness observations for 300 glaciers from most glacierized regions of the world. For all mountain glaciers and ice caps outside of the Antarctic and Greenland ice sheets we find a total ice volume of 170 × 10³ ± 21 × 10³ km³, or 0.43 ± 0.06 m of potential sea level rise.
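For a sense of the inversion, the textbook slope-to-thickness relation below is the simplest member of the family of methods involved; the paper's approach additionally conserves mass-balance fluxes along the glacier, and the parameter values here are illustrative assumptions:

import numpy as np

RHO_ICE, G = 900.0, 9.81   # ice density (kg/m^3), gravity (m/s^2)

def thickness_from_slope(slope_deg, tau_pa=1.0e5, shape_factor=0.8):
    # Local thickness h = tau / (f * rho * g * sin(alpha)), assuming the
    # driving stress equals a basal yield stress tau; very flat slopes are
    # floored to avoid unphysical blow-up.
    alpha = np.deg2rad(np.maximum(slope_deg, 1.5))
    return tau_pa / (shape_factor * RHO_ICE * G * np.sin(alpha))

print(thickness_from_slope(10.0))  # ~82 m for a 10-degree surface slope

Summing h over each glacier's area then yields the volume; dividing the total volume (as water equivalent) by the ocean area gives the ~0.43 m potential sea-level figure quoted above.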

Journal ArticleDOI
TL;DR: In this article, a new global moving hot spot reference frame (GMHRF) was defined using a comprehensive set of radiometric dates from arguably the best-studied hot spot tracks, refined plate circuit reconstructions, a new plate polygon model, and an iterative approach for estimating hot spot motions from numerical models of whole mantle convection and advection of plume conduits in the mantle flow that ensures their consistency with surface plate motions.
Abstract: [1] We defined a new global moving hot spot reference frame (GMHRF), using a comprehensive set of radiometric dates from arguably the best-studied hot spot tracks, refined plate circuit reconstructions, a new plate polygon model, and an iterative approach for estimating hot spot motions from numerical models of whole mantle convection and advection of plume conduits in the mantle flow that ensures their consistency with surface plate motions. Our results show that with the appropriate choice of a chain of relative motion linking the Pacific plate to the plates of the Indo-Atlantic hemisphere, the observed geometries and ages of the Pacific and Indo-Atlantic hot spot tracks were accurately reproduced by a combination of absolute plate motion and hot spot drift back to the Late Cretaceous (∼80 Ma). Similarly good fits were observed for Indo-Atlantic tracks for earlier time (to ∼130 Ma). In contrast, attempts to define a fixed hot spot frame resulted in unacceptable misfits for the Late Cretaceous to Paleogene (80–50 Ma), highlighting the significance of relative motion between the Pacific and Indo-Atlantic hot spots during this period. A comparison of absolute reconstructions using the GMHRF and the most recent global paleomagnetic frame reveals substantial amounts of true polar wander at rates varying between ∼0.1°/Ma and 1°/Ma. Two intriguing, nearly equal and antipodal rotations of the Earth relative to its spin axis are suggested for the 90–60 Ma and 60–40 Ma intervals (∼9° at a 0.3–0.5°/Ma rate); these predictions have yet to be tested by geodynamic models.

Journal ArticleDOI
TL;DR: In this article, a multimodel ensemble projection of midlatitude storm track changes has been examined, quantified by temporal variance of meridional wind and sea level pressure (psl), as well as cyclone track statistics.
Abstract: [1] CMIP5 multimodel ensemble projection of midlatitude storm track changes has been examined. Storm track activity is quantified by temporal variance of meridional wind and sea level pressure (psl), as well as cyclone track statistics. For the Southern Hemisphere (SH), CMIP5 models project clear poleward migration, upward expansion, and intensification of the storm track. For the Northern Hemisphere (NH), the models also project some poleward shift and upward expansion of the storm track in the upper troposphere/lower stratosphere, but mainly weakening of the storm track toward its equatorward flank in the troposphere. Consistent with these, CMIP5 models project significant increase in the frequency of extreme cyclones during the SH cool season, but significant decrease in such events in the NH. Comparisons with CMIP3 projections indicate high degrees of consistency for SH projections, but significant differences are found in the NH. Overall, CMIP5 models project larger decrease in storm track activity in the NH troposphere, especially over North America in winter, where psl variance as well as cyclone frequency and amplitude are all projected to decrease significantly. In terms of climatology, similar to CMIP3, most CMIP5 models simulate storm tracks that are too weak and display equatorward biases in their latitude. These biases have also been related to future projections. In the NH, the strength of a model's climatological storm track is negatively correlated with its projected amplitude change under global warming, while in the SH, models with large equatorward biases in storm track latitude tend to project larger poleward shifts.

Journal ArticleDOI
TL;DR: In this paper, a large number of isotopic data sets (four satellite, sixteen ground-based remote-sensing, five surface in situ and three aircraft data sets) are analyzed to determine how H2O and HDO measurements in water vapor can be used to detect and diagnose biases in the representation of processes controlling tropospheric humidity in atmospheric general circulation models (GCMs).
Abstract: The goal of this study is to determine how H2O and HDO measurements in water vapor can be used to detect and diagnose biases in the representation of processes controlling tropospheric humidity in atmospheric general circulation models (GCMs). We analyze a large number of isotopic data sets (four satellite, sixteen ground-based remote-sensing, five surface in situ and three aircraft data sets) that are sensitive to different altitudes throughout the free troposphere. Despite significant differences between data sets, we identify some observed HDO/H2O characteristics that are robust across data sets and that can be used to evaluate models. We evaluate the isotopic GCM LMDZ, accounting for the effects of spatiotemporal sampling and instrument sensitivity. We find that LMDZ reproduces the spatial patterns in the lower and mid troposphere remarkably well. However, it underestimates the amplitude of seasonal variations in isotopic composition at all levels in the subtropics and in midlatitudes, and this bias is consistent across all data sets. LMDZ also underestimates the observed meridional isotopic gradient and the contrast between dry and convective tropical regions compared to satellite data sets. Comparison with six other isotope-enabled GCMs from the SWING2 project shows that biases exhibited by LMDZ are common to all models. The SWING2 GCMs show a very large spread in isotopic behavior that is not obviously related to that of humidity, suggesting water vapor isotopic measurements could be used to expose model shortcomings. In a companion paper, the isotopic differences between models are interpreted in terms of biases in the representation of processes controlling humidity.

Journal ArticleDOI
TL;DR: A global perspective is developed on a number of high impact climate extremes in 2010 through diagnostic studies of the anomalies, diabatic heating and global energy and water cycles that demonstrate relationships among variables and across events as mentioned in this paper.
Abstract: [1] A global perspective is developed on a number of high impact climate extremes in 2010 through diagnostic studies of the anomalies, diabatic heating, and global energy and water cycles that demonstrate relationships among variables and across events. Natural variability, especially ENSO, and global warming from human influences together resulted in very high sea surface temperatures (SSTs) in several places that played a vital role in subsequent developments. Record high SSTs in the Northern Indian Ocean in May 2010, the Gulf of Mexico in August 2010, the Caribbean in September 2010, and north of Australia in December 2010 provided a source of unusually abundant atmospheric moisture for nearby monsoon rains and flooding in Pakistan, Colombia, and Queensland. The resulting anomalous diabatic heating in the northern Indian and tropical Atlantic Oceans altered the atmospheric circulation by forcing quasi-stationary Rossby waves and altering monsoons. The anomalous monsoonal circulations had direct links to higher latitudes: from Southeast Asia to southern Russia, and from Colombia to Brazil. Strong convection in the tropical Atlantic in northern summer 2010 was associated with a Rossby wave train that extended into Europe creating anomalous cyclonic conditions over the Mediterranean area while normal anticyclonic conditions shifted downstream where they likely interacted with an anomalously strong monsoon circulation, helping to support the persistent atmospheric anticyclonic regime over Russia. This set the stage for the “blocking” anticyclone and associated Russian heat wave and wild fires. Attribution is limited by shortcomings in models in replicating monsoons, teleconnections and blocking.

Journal ArticleDOI
TL;DR: In this article, the authors presented a second generation of homogenized monthly mean surface air temperature data set for Canadian climate trend analysis, which was used to detect non-climatic shifts in de-seasonalised monthly mean temperatures: a multiple linear regression based test and a penalized maximal t test.
Abstract: [1] This study presents a second generation of homogenized monthly mean surface air temperature data set for Canadian climate trend analysis. Monthly means of daily maximum and of daily minimum temperatures were examined at 338 Canadian locations. Data from co-located observing sites were sometimes combined to create longer time series for use in trend analysis. Time series of observations were then adjusted to account for nation-wide change in observing time in July 1961, affecting daily minimum temperatures recorded at 120 synoptic stations; these were adjusted using hourly temperatures at the same sites. Next, homogeneity testing was performed to detect and adjust for other discontinuities. Two techniques were used to detect non-climatic shifts in de-seasonalized monthly mean temperatures: a multiple linear regression based test and a penalized maximal t test. These discontinuities were adjusted using a recently developed quantile-matching algorithm: the adjustments were estimated with the use of a reference series. Based on this new homogenized temperature data set, annual and seasonal temperature trends were estimated for Canada for 1950–2010 and Southern Canada for 1900–2010. Overall, temperature has increased at most locations. For 1950–2010, the annual mean temperature averaged over the country shows a positive trend of 1.5°C for the past 61 years. This warming is slightly more pronounced in the minimum temperature than in the maximum temperature; seasonally, the greatest warming occurs in winter and spring. The results are similar for Southern Canada although the warming is considerably greater in the minimum temperature compared to the maximum temperature over the period 1900–2010.

Journal ArticleDOI
TL;DR: In this paper, an optimal regional scaling algorithm for CTMs is presented that fits the lightning NOx source to satellite lightning data in a way that preserves the coupling to deep convective transport.
Abstract: [1] Nitrogen oxides (NOx ≡ NO + NO2) produced by lightning make a major contribution to the global production of tropospheric ozone and OH. Lightning distributions inferred from standard convective parameterizations in global chemical transport models (CTMs) fail to reproduce observations from the Lightning Imaging Sensor (LIS) and the Optical Transient Detector (OTD) satellite instruments. We present an optimal regional scaling algorithm for CTMs to fit the lightning NOx source to the satellite lightning data in a way that preserves the coupling to deep convective transport. We show that applying monthly scaling factors over ∼37 regions globally significantly improves the tropical ozone simulation in the GEOS-Chem CTM as compared to a simulation unconstrained by the satellite data, and performs as well as a simulation with local scaling. The coarse regional scaling preserves sufficient statistics in the satellite data to constrain the interannual variability (IAV) of lightning. After processing the LIS data to remove their diurnal sampling bias, we construct a monthly time series of lightning flash rates for 1998–2010 and 35°S–35°N. We find a correlation of IAV in total tropical lightning with El Niño but not with the solar cycle or the quasi-biennial oscillation. The global lightning NOx source ± IAV standard deviation in GEOS-Chem is 6.0 ± 0.5 Tg N yr−1, compared to 5.5 ± 0.8 Tg N yr−1 for the biomass burning source. Lightning NOx could have a large influence on the IAV of tropospheric ozone because it is released in the upper troposphere where ozone production is most efficient.
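A schematic of the regional rescaling in Python; the grid shape, region map, and flash fields below are fabricated placeholders to show the bookkeeping, not GEOS-Chem data:

import numpy as np

def regional_scale_factors(f_obs, f_model, region_id, n_regions=37):
    # Monthly scale factor per region: observed (LIS/OTD) over modeled flash
    # totals. Scaling whole regions preserves the modeled convective pattern
    # (and hence the coupling to deep convective transport) within each region.
    scale = np.ones(n_regions)
    for r in range(n_regions):
        m = region_id == r
        if f_model[m].sum() > 0.0:
            scale[r] = f_obs[m].sum() / f_model[m].sum()
    return scale

rng = np.random.default_rng(0)
region_id = rng.integers(0, 37, size=(46, 72))     # 4x5-degree grid, 37 regions
f_model = rng.gamma(2.0, 1.0, size=(46, 72))       # modeled flashes per cell
f_obs = f_model * rng.uniform(0.5, 2.0, size=(46, 72))
scaled = f_model * regional_scale_factors(f_obs, f_model, region_id)[region_id]
print(np.allclose(scaled.sum(), f_obs.sum()))      # regional totals now match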

Journal ArticleDOI
TL;DR: In this paper, the authors evaluated six reanalysis products (MERRA, NCEP/NCAR-1, CFSR, ERA-40, ERA-Interim, and GLDAS) using in situ measurements at 63 weather stations over the Tibetan Plateau from the Chinese Meteorological Administration (CMA) for 1992–2001 and at nine stations from field campaigns (CAMP/Tibet) for 2002–2004.
Abstract: [1] As the highest plateau in the world, the Tibetan Plateau (TP) strongly affects regional weather and climate as well as global atmospheric circulations. Here six reanalysis products (i.e., MERRA, NCEP/NCAR-1, CFSR, ERA-40, ERA-Interim, and GLDAS) are evaluated using in situ measurements at 63 weather stations over the TP from the Chinese Meteorological Administration (CMA) for 1992–2001 and at nine stations from field campaigns (CAMP/Tibet) for 2002–2004. The measurement variables include daily and monthly precipitation and air temperature at all CMA and CAMP/Tibet stations as well as radiation (downward and upward shortwave and longwave), wind speed, humidity, and surface pressure at CAMP stations. Four statistical quantities (correlation coefficient, ratio of standard deviations, standard deviation of differences, and bias) are computed, and a ranking approach is also utilized to quantify the relative performance of reanalyses with respect to each variable and each statistical quantity. Compared with measurements at the 63 CMA stations, ERA-Interim has the best overall performance in both daily and monthly air temperatures, while MERRA has a high correlation with observations. GLDAS has the best overall performance in both daily and monthly precipitation because it is primarily based on the merged precipitation product from surface measurements and satellite remote sensing, while ERA-40 and MERRA have the highest correlation coefficients for daily and monthly precipitation, respectively. Compared with measurements at the nine CAMP stations, CFSR shows the best overall performance, followed by GLDAS, although the best ranking scores are different for different variables. It is also found that NCEP/NCAR-1 reanalysis shows the worst overall performance compared with both CMA and CAMP data. Since no reanalysis product is superior to others in all variables at both daily and monthly time scales, various reanalysis products should be combined for the study of weather and climate over the TP.
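A Python sketch of the four statistics and the ranking idea; the rank-averaging scheme below is a plausible reading of the approach, not the paper's exact scoring:

import numpy as np

def eval_stats(sim, obs):
    # Correlation, ratio of standard deviations, SD of differences, and bias.
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return {"corr": np.corrcoef(sim, obs)[0, 1],
            "sd_ratio": sim.std() / obs.std(),
            "sd_diff": (sim - obs).std(),
            "bias": (sim - obs).mean()}

def rank_products(stats_by_product):
    # Rank each product on each statistic's distance from its ideal value
    # (1 = best), then average the ranks across statistics.
    ideal = {"corr": 1.0, "sd_ratio": 1.0, "sd_diff": 0.0, "bias": 0.0}
    names = list(stats_by_product)
    ranks = np.zeros(len(names))
    for key, best in ideal.items():
        errs = [abs(stats_by_product[n][key] - best) for n in names]
        ranks += np.argsort(np.argsort(errs)) + 1
    return dict(zip(names, ranks / len(ideal)))

obs = np.sin(np.linspace(0.0, 12.0, 365)) * 10.0 + 5.0   # synthetic daily T (C)
products = {"A": obs + np.random.default_rng(0).normal(0.0, 1.0, 365),
            "B": obs * 1.3 - 2.0}
stats = {name: eval_stats(sim, obs) for name, sim in products.items()}
print(rank_products(stats))  # lower mean rank = better overall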

Journal ArticleDOI
TL;DR: In this paper, an absolute tectonic plate motion model made up of 14 major plates is estimated, using velocities of 206 sites of high geodetic quality (far from plate boundaries, deformation zones, and Glacial Isostatic Adjustment (GIA) regions), derived from and consistent with ITRF2008.
Abstract: [1] The ITRF2008 velocity field is demonstrated to be of higher quality and more precise than past ITRF solutions. We estimated an absolute tectonic plate motion model made up of 14 major plates, using velocities of 206 sites of high geodetic quality (far from plate boundaries, deformation zones and Glacial Isostatic Adjustment (GIA) regions), derived from and consistent with ITRF2008. The precision of the estimated model is evaluated to be at the level of 0.3 mm/a WRMS. No GIA corrections were applied to site velocities prior to estimating plate rotation poles, as our selected sites are outside the Fennoscandia regions where the GIA models we tested are performing reasonably well, and far from GIA areas where the models would degrade the fit (Antarctica and North America). Our selected velocity field has small origin rate bias components along the three axes (X, Y, Z) of, respectively, 0.41 ± 0.54, 0.22 ± 0.64 and 0.41 ± 0.60 (95 per cent confidence limits). Comparing our model to NNR-NUVEL-1A and the newly available NNR-MORVEL56, we found better agreement with NNR-MORVEL56 than with NNR-NUVEL-1A for all plates, except for Australia where we observe an average residual rotation rate of 4 mm/a. Using our selection of sites, we found large global X-rotation rates between the two models (0.016°/Ma) and between our model and NNR-MORVEL56 (0.023°/Ma), the latter equivalent to 2.5 mm/a at the Earth's surface.