Journal ArticleDOI

Empirical‐statistical downscaling and error correction of daily precipitation from regional climate models

01 Aug 2011-International Journal of Climatology (John Wiley & Sons, Ltd.)-Vol. 31, Iss: 10, pp 1530-1544
TL;DR: In this paper, an ensemble of seven empirical-statistical downscaling and error correction methods (DECMs) is applied to post-process daily precipitation sums of a high-resolution regional climate hindcast simulation over the Alpine region; their error characteristics are analyzed and compared to the raw RCM results.
Abstract: Although regional climate models (RCMs) are powerful tools for describing regional and even smaller scale climate conditions, they still feature severe systematic errors. In order to provide optimized climate scenarios for climate change impact research, this study merges linear and nonlinear empirical-statistical downscaling techniques with bias correction methods and investigates their ability for reducing RCM error characteristics. An ensemble of seven empirical-statistical downscaling and error correction methods (DECMs) is applied to post-process daily precipitation sums of a high-resolution regional climate hindcast simulation over the Alpine region, their error characteristics are analysed and compared to the raw RCM results. Drastic reductions in error characteristics due to application of DECMs are demonstrated. Direct point-wise methods like quantile mapping and local intensity scaling as well as indirect spatial methods such as nonlinear analogue methods yield systematic improvements in median, variance, frequency, intensity and extremes of daily precipitation. Multiple linear regression methods, even if optimized by predictor selection, transformation and randomization, exhibit significant shortcomings for modelling daily precipitation due to their linear framework. Comparing the well-performing methods to each other, quantile mapping shows the best performance, particularly at high quantiles, which is advantageous for applications related to extreme precipitation events. The improvements are obtained regardless of season and region, which indicates the potential transferability of these methods to other regions. Copyright © 2010 Royal Meteorological Society
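The quantile mapping idea favoured by the study can be illustrated with a minimal empirical (nonparametric) sketch. The function name, the toy gamma-distributed data, and the single-sample setup below are illustrative assumptions; the paper's actual implementation (calibration windows, drizzle treatment, extrapolation for new extremes) is more involved.

```python
import numpy as np

def quantile_map(model_cal, obs_cal, model_new, n_quantiles=100):
    """Empirical (nonparametric) quantile mapping.

    A transfer function is built from the empirical quantiles of model
    and observations in a calibration period and applied to new model
    output. Values beyond the calibrated range are clipped to the
    outermost observed quantiles here (a simplification; constant
    extrapolation of the correction is an alternative).
    """
    probs = np.linspace(0.0, 1.0, n_quantiles + 1)
    q_model = np.quantile(model_cal, probs)  # modelled quantiles
    q_obs = np.quantile(obs_cal, probs)      # observed quantiles
    return np.interp(model_new, q_model, q_obs)

# Toy data: a model that is systematically too wet by a factor of ~1.5.
rng = np.random.default_rng(42)
obs = rng.gamma(shape=0.8, scale=5.0, size=5000)
model = 1.5 * rng.gamma(shape=0.8, scale=5.0, size=5000)
corrected = quantile_map(model, obs, model)
print(np.median(model) / np.median(obs))      # ~1.5 (raw bias)
print(np.median(corrected) / np.median(obs))  # ~1.0 (after mapping)
```

Because the mapping is built quantile by quantile, it corrects not just the mean but the whole distribution, which is why it performs well at high quantiles.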
Citations
Journal ArticleDOI
TL;DR: In this paper, the authors integrate perspectives from meteorologists, climatologists, statisticians, and hydrologists to identify generic end user (in particular, impact modeler) needs and to discuss downscaling capabilities and gaps.
Abstract: Precipitation downscaling improves the coarse resolution and poor representation of precipitation in global climate models and helps end users to assess the likely hydrological impacts of climate change. This paper integrates perspectives from meteorologists, climatologists, statisticians, and hydrologists to identify generic end user (in particular, impact modeler) needs and to discuss downscaling capabilities and gaps. End users need a reliable representation of precipitation intensities and temporal and spatial variability, as well as physical consistency, independent of region and season. In addition to presenting dynamical downscaling, we review perfect prognosis statistical downscaling, model output statistics, and weather generators, focusing on recent developments to improve the representation of space-time variability. Furthermore, evaluation techniques to assess downscaling skill are presented. Downscaling adds considerable value to projections from global climate models. Remaining gaps are uncertainties arising from sparse data; representation of extreme summer precipitation, subdaily precipitation, and full precipitation fields on fine scales; capturing changes in small-scale processes and their feedback on large scales; and errors inherited from the driving global climate model.

1,443 citations

Journal ArticleDOI
TL;DR: The bias correction method that was developed within ISI-MIP, the first Inter-Sectoral Impact Model Intercomparison Project, was presented in this paper.
Abstract: Statistical bias correction is commonly applied within climate impact modelling to correct climate model data for systematic deviations of the simulated historical data from observations. Methods are based on transfer functions generated to map the distribution of the simulated historical data to that of the observations. Those are subsequently applied to correct the future projections. Here, we present the bias correction method that was developed within ISI-MIP, the first Inter-Sectoral Impact Model Intercomparison Project. ISI-MIP is designed to synthesise impact projections in the agriculture, water, biome, health, and infrastructure sectors at different levels of global warming. Bias-corrected climate data that are used as input for the impact simulations could only be provided over land areas. To ensure consistency with the global (land + ocean) temperature information, the bias correction method has to preserve the warming signal. Here we present the applied method that preserves the absolute changes in monthly temperature, and relative changes in monthly values of precipitation and the other variables needed for ISI-MIP. The proposed methodology represents a modification of the transfer function approach applied in the Water Model Intercomparison Project (Water-MIP). Correction of the monthly mean is followed by correction of the daily variability about the monthly mean. Besides the general idea and technical details of the ISI-MIP method, we show and discuss the potential and limitations of the applied bias correction. In particular, while the trend and the long-term mean are well represented, limitations with regard to the adjustment of the variability persist, which may affect, e.g., small-scale features or extremes.
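The trend-preserving idea behind the ISI-MIP monthly-mean step (additive for temperature, multiplicative for precipitation) can be sketched as follows. Function names and toy data are illustrative assumptions, and the real method additionally corrects the daily variability about the monthly mean.

```python
import numpy as np

def correct_temperature(sim_hist, obs_hist, sim_fut):
    """Additive monthly-mean correction: preserves absolute changes."""
    offset = np.mean(obs_hist) - np.mean(sim_hist)
    return sim_fut + offset

def correct_precipitation(sim_hist, obs_hist, sim_fut):
    """Multiplicative monthly-mean correction: preserves relative changes."""
    factor = np.mean(obs_hist) / np.mean(sim_hist)
    return sim_fut * factor

# Temperature: a +2 K change in the raw model survives the correction.
t_sim = np.array([10.0, 12.0, 11.0])
t_obs = np.array([9.0, 10.5, 10.0])
t_fut = t_sim + 2.0
signal = np.mean(correct_temperature(t_sim, t_obs, t_fut)) \
         - np.mean(correct_temperature(t_sim, t_obs, t_sim))
print(signal)  # ≈ 2.0

# Precipitation: a +20 % relative change likewise survives.
p_sim = np.array([2.0, 4.0, 3.0])
p_obs = np.array([2.5, 3.5, 3.0])
p_fut = 1.2 * p_sim
ratio = np.mean(correct_precipitation(p_sim, p_obs, p_fut)) \
        / np.mean(correct_precipitation(p_sim, p_obs, p_sim))
print(ratio)  # ≈ 1.2
```

The additive offset cancels out of any temperature difference and the multiplicative factor cancels out of any precipitation ratio, which is exactly why absolute and relative change signals, respectively, are preserved.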

961 citations


Cites methods from "Empirical‐statistical downscaling a..."

  • ...Hence, in many cases differences in the variance or even higher moments of the simulated data are adjusted to the observations by parametric or non-parametric (empirical) quantile mapping (Boe et al., 2007; Piani et al., 2010; Themeßl et al., 2011)....

Journal ArticleDOI
TL;DR: In this paper, statistical transformations for post-processing regional climate models (RCMs) are reviewed and classified into distribution derived transformations, parametric transformations and nonparametric transformations, each differing with respect to their underlying assumptions.
Abstract: The impact of climate change on water resources is usually assessed at the local scale. However, regional climate models (RCMs) are known to exhibit systematic biases in precipitation. Hence, RCM simulations need to be post-processed in order to produce reliable estimates of local scale climate. Popular post-processing approaches are based on statistical transformations, which attempt to adjust the distribution of modelled data such that it closely resembles the observed climatology. However, the diversity of suggested methods renders the selection of optimal techniques difficult and therefore there is a need for clarification. In this paper, statistical transformations for post-processing RCM output are reviewed and classified into (1) distribution derived transformations, (2) parametric transformations and (3) nonparametric transformations, each differing with respect to their underlying assumptions. A real world application, using observations of 82 precipitation stations in Norway, showed that nonparametric transformations have the highest skill in systematically reducing biases in RCM precipitation.
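A parametric transformation in the sense of class (2) can be sketched by fitting gamma distributions to wet-day amounts and mapping through the two CDFs, x_corrected = F_obs⁻¹(F_model(x)). This is an illustrative sketch under the gamma assumption, not the exact procedure evaluated on the Norwegian stations.

```python
import numpy as np
from scipy import stats

def gamma_quantile_map(model_vals, obs_vals, new_vals):
    """Parametric transformation: x_corrected = F_obs^-1(F_model(x)),
    with both CDFs taken from fitted gamma distributions."""
    a_m, _, scale_m = stats.gamma.fit(model_vals, floc=0)  # location fixed
    a_o, _, scale_o = stats.gamma.fit(obs_vals, floc=0)    # at 0 (precip >= 0)
    p = stats.gamma.cdf(new_vals, a_m, scale=scale_m)
    return stats.gamma.ppf(p, a_o, scale=scale_o)

# Toy wet-day amounts: the model doubles the scale of the observations.
rng = np.random.default_rng(0)
obs = rng.gamma(shape=0.9, scale=4.0, size=3000)
model = rng.gamma(shape=0.9, scale=8.0, size=3000)
corrected = gamma_quantile_map(model, obs, model)
print(model.mean() / obs.mean())      # ~2 (raw bias)
print(corrected.mean() / obs.mean())  # ~1 (after transformation)
```

A nonparametric transformation replaces the fitted gamma CDFs with empirical quantiles; the paper's Norwegian case study finds that this avoids the distributional assumption and reduces biases more reliably.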

773 citations


Cites background or methods or result from "Empirical‐statistical downscaling a..."

  • ...The two nonparametric methods SSPLINE and QUANT have on average the best skill in reducing systematic errors, also for very high (extreme) percentiles, being in line with other studies (Themeßl et al., 2011; Teutschbein and Seibert, 2012)....

  • ...…techniques, aiming at providing reliable estimators of observed precipitation climatologies given RCM output (e.g.Ines and Hansen, 2006; Engen-Skaugen, 2007; Schmidli et al., 2007; Dosio and Paruolo, 2011; Themeßl et al., 2011; Turco et al., 2011; Chen et al., 2011b; Teutschbein and Seibert, 2012)....

  • ...…that aim to adjust (selected aspects of) the distribution of RCM (e.g.Ashfaq et al., 2010; Dosio and Paruolo, 2011; Rojas et al., 2011; Themeßl et al., 2011; Sunyer et al., 2012) and global circulation model (GCM) (e.g.Wood et al., 2004; Ines and Hansen, 2006; Boé et al., 2007; Li et…...

  • ...Other suggested scores assess specific moments of the distribution including the mean (Engen-Skaugen, 2007; Li et al., 2010; Dosio and Paruolo, 2011; Themeßl et al., 2011; Turco et al., 2011; Teutschbein and Seibert, 2012), the standard deviation (Engen-Skaugen, 2007; Li et al., 2010; Themeßl et…...

  • ...Empirical investigations indicate that the impact of statistical transformations on the projected changes in mean conditions is comparably small but may systematically alter changes in nonlinearly derived measures, including characteristics of extreme events (Themeßl et al., 2011)....

Journal ArticleDOI
TL;DR: It is argued that BC is currently often used in an invalid way: it is added to the GCM/RCM model chain without sufficient proof that the consistency of the latter is maintained, and narrows the uncertainty range of simulations and predictions without, however, providing a satisfactory physical justification.
Abstract: Despite considerable progress in recent years, output of both global and regional circulation models is still afflicted with biases to a degree that precludes its direct use, especially in climate change impact studies. This is well known, and to overcome this problem, bias correction (BC; i.e. the correction of model output towards observations in a post-processing step) has now become a standard procedure in climate change impact studies. In this paper we argue that BC is currently often used in an invalid way: it is added to the GCM/RCM model chain without sufficient proof that the consistency of the latter (i.e. the agreement between model dynamics/model output and our judgement) as well as the generality of its applicability increases. BC methods often impair the advantages of circulation models by altering spatiotemporal field consistency, relations among variables and by violating conservation principles. Currently used BC methods largely neglect feedback mechanisms, and it is unclear whether they are time-invariant under climate change conditions. Applying BC increases agreement of climate model output with observations in hindcasts and hence narrows the uncertainty range of simulations and predictions without, however, providing a satisfactory physical justification. This is in most cases not transparent to the end user. We argue that this hides rather than reduces uncertainty, which may lead to avoidable forejudging of end users and decision makers. We present here a brief overview of state-of-the-art bias correction methods, discuss the related assumptions and implications, draw conclusions on the validity of bias correction and propose ways to cope with biased output of circulation models in the short term and how to reduce the bias in the long term. 
The most promising strategy for improved future global and regional circulation model simulations is the increase in model resolution to the convection-permitting scale in combination with ensemble predictions based on sophisticated approaches for ensemble perturbation. With this article, we advocate communicating the entire uncertainty range associated with climate change predictions openly and hope to stimulate a lively discussion on bias correction among the atmospheric and hydrological community and end users of climate change impact studies.

555 citations

01 Jan 2012
TL;DR: The authors argued that bias correction, which has a considerable influence on the results of impact studies, is not a valid procedure in the way it is currently used: it impairs the advantages of Circulation Models, which are based on established physical laws, by altering spatiotemporal field consistency.
Abstract: Despite considerable progress in recent years, output of both Global and Regional Circulation Models is still afflicted with biases to a degree that precludes its direct use, especially in climate change impact studies. This is well known, and to overcome this problem bias correction (BC), i.e. the correction of model output towards observations in a post-processing step for its subsequent application in climate change impact studies, has now become a standard procedure. In this paper we argue that bias correction, which has a considerable influence on the results of impact studies, is not a valid procedure in the way it is currently used: it impairs the advantages of Circulation Models, which are based on established physical laws, by altering spatiotemporal field consistency…

471 citations

References
Journal ArticleDOI
TL;DR: ERA-40 is a re-analysis of meteorological observations from September 1957 to August 2002 produced by the European Centre for Medium-Range Weather Forecasts (ECMWF) in collaboration with many institutions as mentioned in this paper.
Abstract: ERA-40 is a re-analysis of meteorological observations from September 1957 to August 2002 produced by the European Centre for Medium-Range Weather Forecasts (ECMWF) in collaboration with many institutions. The observing system changed considerably over this re-analysis period, with assimilable data provided by a succession of satellite-borne instruments from the 1970s onwards, supplemented by increasing numbers of observations from aircraft, ocean-buoys and other surface platforms, but with a declining number of radiosonde ascents since the late 1980s. The observations used in ERA-40 were accumulated from many sources. The first part of this paper describes the data acquisition and the principal changes in data type and coverage over the period. It also describes the data assimilation system used for ERA-40. This benefited from many of the changes introduced into operational forecasting since the mid-1990s, when the systems used for the 15-year ECMWF re-analysis (ERA-15) and the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) re-analysis were implemented. Several of the improvements are discussed. General aspects of the production of the analyses are also summarized. A number of results indicative of the overall performance of the data assimilation system, and implicitly of the observing system, are presented and discussed. The comparison of background (short-range) forecasts and analyses with observations, the consistency of the global mass budget, the magnitude of differences between analysis and background fields and the accuracy of medium-range forecasts run from the ERA-40 analyses are illustrated. Several results demonstrate the marked improvement that was made to the observing system for the southern hemisphere in the 1970s, particularly towards the end of the decade. 
In contrast, the synoptic quality of the analysis for the northern hemisphere is sufficient to provide forecasts that remain skilful well into the medium range for all years. Two particular problems are also examined: excessive precipitation over tropical oceans and a too strong Brewer-Dobson circulation, both of which are pronounced in later years. Several other aspects of the quality of the re-analyses revealed by monitoring and validation studies are summarized. Expectations that the ‘second-generation’ ERA-40 re-analysis would provide products that are better than those from the first-generation ERA-15 and NCEP/NCAR re-analyses are found to have been met in most cases. © Royal Meteorological Society, 2005. The contributions of N. A. Rayner and R. W. Saunders are Crown copyright.

7,110 citations


"Empirical‐statistical downscaling a..." refers background in this paper

  • ...…study originate from the Austrian project ‘reclip:more – Research for Climate Protection: Model Run Evaluation’ (Loibl et al., 2007), in which parts of the ERA-40 reanalysis (Uppala et al., 2005) were dynamically downscaled (hindcast simulation) in a two-step nesting approach (Gobiet et al., 2006)....

Journal ArticleDOI
TL;DR: In this article, a diagram has been devised that can provide a concise statistical summary of how well patterns match each other in terms of their correlation, their root-mean-square difference, and the ratio of their variances.
Abstract: A diagram has been devised that can provide a concise statistical summary of how well patterns match each other in terms of their correlation, their root-mean-square difference, and the ratio of their variances. Although the form of this diagram is general, it is especially useful in evaluating complex models, such as those used to study geophysical phenomena. Examples are given showing that the diagram can be used to summarize the relative merits of a collection of different models or to track changes in performance of a model as it is modified. Methods are suggested for indicating on these diagrams the statistical significance of apparent differences and the degree to which observational uncertainty and unforced internal variability limit the expected agreement between model-simulated and observed behaviors. The geometric relationship between the statistics plotted on the diagram also provides some guidance for devising skill scores that appropriately weight among the various measures of pattern correspondence.
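The geometric relationship that makes the Taylor diagram work is a law of cosines linking three of the statistics above: E′² = σ_f² + σ_r² − 2·σ_f·σ_r·R, where E′ is the centred RMS difference between test field f and reference field r, σ are their standard deviations, and R is their correlation. A small numerical check (with synthetic fields, as an illustration):

```python
import numpy as np

# Synthetic test of the Taylor-diagram identity
#   E'^2 = sigma_f^2 + sigma_r^2 - 2 * sigma_f * sigma_r * R
rng = np.random.default_rng(1)
r = rng.normal(size=1000)                        # reference field
f = 0.7 * r + rng.normal(scale=0.5, size=1000)   # imperfect test field

sigma_f, sigma_r = f.std(), r.std()
R = np.corrcoef(f, r)[0, 1]
# Centred (mean-removed) RMS difference, squared:
E2 = np.mean(((f - f.mean()) - (r - r.mean())) ** 2)
print(np.isclose(E2, sigma_f**2 + sigma_r**2 - 2 * sigma_f * sigma_r * R))  # True
```

This identity is why the diagram can plot each model as a single point (radius = standard deviation, angle = correlation) while the distance to the reference point simultaneously encodes the centred RMS difference.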

5,762 citations


"Empirical‐statistical downscaling a..." refers background in this paper

  • ...Standardized Taylor diagrams (Figure 8; Taylor, 2001) show the normalized centred root-mean-square (RMS) difference of the different DECMs compared to observations as the distance to point 1 on the abscissa, the variance ratio between models and observations as the radial distance to the zero…...

Journal ArticleDOI
TL;DR: There is a need for a move away from comparison studies into the provision of decision-making tools for planning and management that are robust to future uncertainties; with examination and understanding of uncertainties within the modelling system.
Abstract: There is now a large published literature on the strengths and weaknesses of downscaling methods for different climatic variables, in different regions and seasons. However, little attention is given to the choice of downscaling method when examining the impacts of climate change on hydrological systems. This review paper assesses the current downscaling literature, examining new developments in the downscaling field specifically for hydrological impacts. Sections focus on the downscaling concept; new methods; comparative methodological studies; the modelling of extremes; and the application to hydrological impacts. Consideration is then given to new developments in climate scenario construction which may offer the most potential for advancement within the ‘downscaling for hydrological impacts’ community, such as probabilistic modelling, pattern scaling and downscaling of multiple variables and suggests ways that they can be merged with downscaling techniques in a probabilistic climate change scenario framework to assess the uncertainties associated with future projections. Within hydrological impact studies there is still little consideration given to applied research; how the results can be best used to enable stakeholders and managers to make informed, robust decisions on adaptation and mitigation strategies in the face of many uncertainties about the future. It is suggested that there is a need for a move away from comparison studies into the provision of decision-making tools for planning and management that are robust to future uncertainties; with examination and understanding of uncertainties within the modelling system. Copyright © 2007 Royal Meteorological Society

2,015 citations


"Empirical‐statistical downscaling a..." refers background in this paper

  • ...Further, the frequently claimed integration of humidity as predictor (e.g. Giorgi and Mearns, 1991; Wilby and Wigley, 2000; Fowler et al., 2007) is supported....

  • ...Since MLR models reduce variability because the regression line is fitted to pass through the centroid of the data (Helsel and Hirsch, 2002) and only a part of local climate variability is related to larger-scale variability in predictors (Fowler et al., 2007), von Storch (1999) proposed randomization of time series to recover their original variability....

Book
04 Mar 2002
TL;DR: This book discusses statistical concepts in climate research, as well as time series and stochastic processes, and some of the techniques used to estimate covariance functions and spectra.
Abstract: 1. Introduction. Part I. Fundamentals: 2. Probability theory; 3. Distributions of climate variables; 4. Concepts in statistical inference; 5. Estimation. Part II. Confirmation and Analysis: 6. The statistical test of a hypothesis; 7. Analysis of atmospheric circulation problems. Part III. Fitting Statistical Models: 8. Regression; 9. Analysis of variance. Part IV. Time Series: 10. Time series and stochastic processes; 11. Parameters of univariate and bivariate time series; 12. Estimating covariance functions and spectra. Part V. Eigen Techniques: 13. Empirical orthogonal functions; 14. Canonical correlation analysis; 15. POP analysis; 16. Complex eigentechniques. Part VI. Other Topics: 17. Specific statistical concepts in climate research; 18. Forecast quality evaluation. Part VII. Appendices.

1,915 citations