Journal ArticleDOI

Post-blackening approach for modeling dependent annual streamflows

28 Apr 2000-Journal of Hydrology (Elsevier)-Vol. 230, Iss: 1, pp 86-126
TL;DR: The post-blackening (PB) approach is introduced for modeling annual streamflows that exhibit significant dependence and seems to offer considerable scope for improvement in hydrologic time series modeling and its applications to water resources planning.
About: This article was published in the Journal of Hydrology on 2000-04-28 and has received 41 citations to date. The article focuses on the topics: Parametric model & Nonparametric statistics.
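The TL;DR above names the core idea: blend a parametric model with a nonparametric bootstrap. As a rough illustration only (the AR(1) prewhitening step, the moving-block resampling, and all function names here are my assumptions, not the paper's exact algorithm), a post-blackening step can be sketched as:

```python
import random
import statistics

def fit_ar1(x):
    """Estimate the lag-1 autocorrelation (AR(1) coefficient) of a series."""
    m = statistics.fmean(x)
    num = sum((x[t] - m) * (x[t - 1] - m) for t in range(1, len(x)))
    den = sum((v - m) ** 2 for v in x)
    return num / den

def post_blacken(x, block_len=5, seed=0):
    """One synthetic series: prewhiten with AR(1), moving-block-bootstrap
    the residuals, then 'post-blacken' by filtering them back through AR(1)."""
    rng = random.Random(seed)
    phi = fit_ar1(x)
    m = statistics.fmean(x)
    # Prewhitening: residuals e_t = (x_t - m) - phi * (x_{t-1} - m)
    resid = [(x[t] - m) - phi * (x[t - 1] - m) for t in range(1, len(x))]
    # Moving-block bootstrap keeps any short-range structure left in residuals
    n = len(resid)
    blocks = [resid[i:i + block_len] for i in range(n - block_len + 1)]
    boot = []
    while len(boot) < n:
        boot.extend(rng.choice(blocks))
    boot = boot[:n]
    # Post-blackening: y_t = m + phi * (y_{t-1} - m) + e*_t
    y = [x[0]]
    for e in boot:
        y.append(m + phi * (y[-1] - m) + e)
    return y
```

The resampled series inherits the AR(1) dependence through the final filtering pass, while the block bootstrap avoids assuming a distribution for the residuals.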
Citations
Journal ArticleDOI
TL;DR: In this article, Monte Carlo simulation is used to investigate the effect of serial correlation on the Mann-Kendall (MK) test in trend-detection studies of hydrological time series.
Abstract: [1] Prewhitening has been used to eliminate the influence of serial correlation on the Mann-Kendall (MK) test in trend-detection studies of hydrological time series. However, its ability to accomplish this task has not been well documented. This study investigates the issue by Monte Carlo simulation. The simulated time series consist of a linear trend and a lag-1 autoregressive (AR(1)) process with noise. Simulation results demonstrate that when a trend exists in a time series, the effect of positive/negative serial correlation on the MK test depends on the sample size, the magnitude of the serial correlation, and the magnitude of the trend. When the sample size and the magnitude of the trend are large enough, serial correlation no longer significantly affects the MK test statistics. Removing a positive AR(1) component from a time series by prewhitening removes a portion of the trend and hence reduces the probability of rejecting the null hypothesis when it is false. Conversely, removing a negative AR(1) component by prewhitening inflates the trend and increases the probability of rejecting the null hypothesis when it is true. Therefore, prewhitening is not suitable for eliminating the effect of serial correlation on the MK test when a trend exists within a time series.
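The simulation design in the abstract (linear trend plus AR(1) noise, MK statistic computed before and after prewhitening) can be sketched in a few lines. This is a hedged illustration, not the authors' code; the parameter values are arbitrary:

```python
import random

def mann_kendall_s(x):
    """Mann-Kendall S statistic: concordant minus discordant pairs."""
    s = 0
    for i in range(len(x) - 1):
        for j in range(i + 1, len(x)):
            s += (x[j] > x[i]) - (x[j] < x[i])
    return s

def prewhiten(x, phi):
    """Remove an AR(1) component: w_t = x_t - phi * x_{t-1}."""
    return [x[t] - phi * x[t - 1] for t in range(1, len(x))]

# Trend plus AR(1) noise, echoing the simulation design described above
random.seed(0)
n, phi, slope, reps = 100, 0.4, 0.05, 100
raw, pw = [], []
for _ in range(reps):
    e = [random.gauss(0, 1)]
    for _ in range(n - 1):
        e.append(phi * e[-1] + random.gauss(0, 1))
    x = [slope * t + e[t] for t in range(n)]
    raw.append(mann_kendall_s(x))
    pw.append(mann_kendall_s(prewhiten(x, phi)))
# On average, S shrinks after prewhitening: the trend is scaled by (1 - phi)
```

Comparing the mean of `raw` and `pw` reproduces the paper's qualitative finding: with a positive AR(1) component, prewhitening removes part of the trend and deflates the MK statistic.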

553 citations


Cites methods from "Post-blackening approach for modeli..."

  • ...Prewhitening has also been proposed to remove an AR process from a time series in the bootstrap postblackening approach [e.g., Davison and Hinkley, 1997; Srinivas and Srinivasan, 2000]....


Journal ArticleDOI
TL;DR: Meta-elliptical copulas were used to model the dependence of periodic hydrologic data; the meta-Gaussian copula was ultimately employed, and the results were found satisfactory.
Abstract: This study aims to model the joint probability distribution of periodic hydrologic data using meta-elliptical copulas. Monthly precipitation data from a gauging station (410120) in Texas, US, was used to illustrate parameter estimation and goodness-of-fit for univariate drought distributions using the chi-square test, the Kolmogorov–Smirnov test, the Cramer-von Mises statistic, the Anderson-Darling statistic, the modified weighted Watson statistic, and the Liao and Shimokawa statistic. Pearson’s classical correlation coefficient rn, Spearman’s ρn, Kendall’s τ, Chi-Plots, and K-Plots were employed to assess the dependence of drought variables. Several meta-elliptical copulas and the Gumbel-Hougaard, Ali-Mikhail-Haq, Frank, and Clayton copulas were tested to determine the best-fit copula. Based on the root mean square error and the Akaike information criterion, the meta-Gaussian and t copulas gave a better fit. A bootstrap version based on Rosenblatt’s transformation was employed to test the goodness-of-fit of the meta-Gaussian and t copulas. It was found that neither the meta-Gaussian nor the t copula could be rejected at the given significance level. The meta-Gaussian copula was employed to model the dependence, and the results were found satisfactory.
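A meta-Gaussian copula couples arbitrary margins through correlated normal scores. A minimal sketch of sampling from a bivariate Gaussian copula (the function names and the bivariate restriction are illustrative; the study fits higher-dimensional meta-elliptical copulas to real data):

```python
import math
import random

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def gaussian_copula_sample(rho, n, seed=0):
    """Draw n dependent uniform pairs (u, v) from a Gaussian (meta-Gaussian)
    copula: correlated standard normals pushed through the normal CDF."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        pairs.append((norm_cdf(z1), norm_cdf(z2)))
    return pairs
```

Each marginal is uniform on (0, 1) while the dependence strength is controlled by rho; hydrologic margins (e.g. fitted precipitation distributions) would then be imposed through their inverse CDFs.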

190 citations


Cites methods from "Post-blackening approach for modeli..."

  • ...Based on the statistical properties of historical data, they used condensed disaggregation models (Lane 1979; Srinivas and Srinivasan 2000; Shiau and Shen 2001) to generate synthetic series and obtained values of a, b and c....


Journal ArticleDOI
TL;DR: In temperate and semi-arid climates, 60 monthly observations are sufficient to forecast the following year’s rainfall, while in arid and humid climates the accuracy of the forecasting models increased with the length of the observed record.
Abstract: This paper reports a study of the effect of the length of the recorded data used for monthly rainfall forecasting. Monthly rainfall data for three periods of 5, 10, and 49 years were collected from the Kermanshah, Mashhad, Ahvaz, and Babolsar stations and used to calibrate time series models. The accuracy of the forecasting models was then assessed against the following year’s data. The following was concluded: in temperate and semi-arid climates, 60 monthly observations are sufficient to forecast the following year’s rainfall; in arid and humid climates, the accuracy of the time series models increased with increasing record length; and time series models are appropriate tools for forecasting monthly rainfall in semi-arid climates. Determining the most critical rainfall month in each climate for agricultural scheduling is a recommended aim for future studies.

143 citations


Cites background from "Post-blackening approach for modeli..."

  • ...[27] studied the impact of the length of observed records on the performance of ANN and of conceptual parsimonious rainfall-runoff forecasting models....


Journal ArticleDOI
TL;DR: Evaluating six published data fusion strategies for hydrological forecasting on two contrasting catchments reveals unequal aptitudes for fixing different categories of problematic catchment behaviour; in such cases, the best methods were a good deal better than their closest rivals.
Abstract: This paper evaluates six published data fusion strategies for hydrological forecasting based on two contrasting catchments: the River Ouse and the Upper River Wye. The input level and discharge estimates for each river comprised a mixed set of single-model forecasts. Data fusion was performed using: arithmetic averaging, a probabilistic method in which the best model from the last time step is used to generate the current forecast, two different neural network operations, and two different soft computing methodologies. The results from this investigation are compared and contrasted using statistical and graphical evaluation. Each location demonstrated several options and potential advantages for using data fusion tools to construct superior hydrological forecasts. Fusion operations were better in overall terms than their individual modelling counterparts, and two clear winners emerged. Indeed, the six different mechanisms on test revealed unequal aptitudes for fixing different categories of problematic catchment behaviour and, in such cases, the best method(s) were a good deal better than their closest rival(s). Neural network fusion of differenced data provided the best solution for a stable regime (with neural network fusion of original data being somewhat similar), whereas a fuzzified probabilistic mechanism produced a superior output in a more volatile environment. The need for a data fusion research agenda within the hydrological sciences is discussed and some initial suggestions are presented. Keywords: data fusion, fuzzy logic, neural network, hydrological modelling
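Two of the simpler fusion strategies mentioned, arithmetic averaging and reusing the best model from the last time step, can be sketched directly (the function names and the tie-breaking details are illustrative assumptions, not the paper's implementations):

```python
def fuse_average(forecasts):
    """Arithmetic-average fusion: elementwise mean across model forecasts."""
    return [sum(vals) / len(vals) for vals in zip(*forecasts)]

def fuse_best_recent(forecasts, observed):
    """Simplified 'best model from the last time step' fusion: at each step,
    reuse the model whose previous forecast had the smallest absolute error."""
    fused = [forecasts[0][0]]  # no error history at t=0; default to model 0
    for t in range(1, len(observed)):
        errs = [abs(f[t - 1] - observed[t - 1]) for f in forecasts]
        fused.append(forecasts[errs.index(min(errs))][t])
    return fused
```

Averaging hedges across all models at every step, while the best-recent rule switches between models, which is closer in spirit to the probabilistic mechanism evaluated in the paper.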

131 citations


Cites methods from "Post-blackening approach for modeli..."

  • ...…manner of: (i) estimation and addition of residuals; (ii) building modular assemblies of expert subunits (Zhang and Govindaraju, 2000); (iii) performing weighted combination of individual forecasters (Shamseldin et al., 1997); or (iv) bootstrapping operations (Srinivas and Srinivasan, 2000, 2001)....


Journal ArticleDOI
TL;DR: It was determined that the ARIMA model can forecast inflow to the Dez reservoir 12 months ahead with lower error than the ARMA model.
Abstract: In this study, the ability of Autoregressive Moving Average (ARMA) and Autoregressive Integrated Moving Average (ARIMA) models to forecast the monthly inflow of the Dez dam reservoir, located at the Teleh Zang station upstream of the Dez dam, is estimated. The ARIMA model has found widespread application in many practical sciences. In addition, dam reservoir inflow forecasting is done by methods such as ordinary linear regression, ARMA, and artificial neural networks. On the other hand, the simultaneous application of both ARMA and ARIMA models to compare their ability in autoregressive forecasting of monthly dam reservoir inflow has not been carried out in previous research. Therefore, this paper attempts to forecast the inflow of the Dez dam reservoir using ARMA and ARIMA models, increasing the number of parameters to four in order to improve forecast accuracy, and comparing the two. In the ARMA and ARIMA models, polynomials were derived with four and six parameters, respectively, to forecast the inflow. By comparing the root mean square error of the models, it was determined that the ARIMA model can forecast inflow to the Dez reservoir 12 months ahead with lower error than the ARMA model.
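The ARMA-versus-ARIMA comparison rests on one mechanical difference: ARIMA first differences the series before fitting the autoregressive part. A stripped-down ARIMA(1,1,0) sketch (the study's models use four and six parameters; this minimal p=1 version is illustrative only, and the function names are mine):

```python
import math
import statistics

def difference(x):
    """The 'I' in ARIMA: first differences remove nonstationary level/trend."""
    return [x[t] - x[t - 1] for t in range(1, len(x))]

def ar1_forecast(x, steps):
    """Fit AR(1) via the lag-1 sample autocorrelation, then iterate the
    recursion x_t = m + phi * (x_{t-1} - m) forward `steps` times."""
    m = statistics.fmean(x)
    num = sum((x[t] - m) * (x[t - 1] - m) for t in range(1, len(x)))
    den = sum((v - m) ** 2 for v in x)
    phi = num / den
    preds, last = [], x[-1]
    for _ in range(steps):
        last = m + phi * (last - m)
        preds.append(last)
    return preds

def arima_110_forecast(x, steps):
    """ARIMA(1,1,0) sketch: forecast the differenced series, integrate back."""
    dpreds = ar1_forecast(difference(x), steps)
    out, level = [], x[-1]
    for dp in dpreds:
        level += dp
        out.append(level)
    return out

# Example: trend plus a seasonal wiggle; the forecast should keep climbing
series = [0.5 * t + math.sin(t) for t in range(60)]
ahead = arima_110_forecast(series, 12)
```

The differencing step is what lets the model track a drifting level that a plain ARMA fit would have to absorb into its mean.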

104 citations


Cites methods from "Post-blackening approach for modeli..."

  • ...This study confirmed the superiority of ARMA model to the TDNN. Toth et al. (2000) used the artificial neural network and ARMA models to forecast rainfall....


References
Book
01 Jan 1993
TL;DR: This article presents bootstrap methods for estimation, using simple arguments, with Minitab macros for implementing these methods, as well as some examples of how these methods could be used for estimation purposes.
Abstract: This article presents bootstrap methods for estimation, using simple arguments. Minitab macros for implementing these methods are given.

37,183 citations

Book
01 Jan 1970
TL;DR: In this article, a complete revision of a classic, seminal, and authoritative book that has been the model for most books on the topic written since 1970 is presented, focusing on practical techniques throughout, rather than a rigorous mathematical treatment of the subject.
Abstract: From the Publisher: This is a complete revision of a classic, seminal, and authoritative book that has been the model for most books on the topic written since 1970. It focuses on practical techniques throughout, rather than a rigorous mathematical treatment of the subject. It explores the building of stochastic (statistical) models for time series and their use in important areas of application: forecasting; model specification, estimation, and checking; transfer function modeling of dynamic relationships; modeling the effects of intervention events; and process control. Features sections on: recently developed methods for model specification, such as canonical correlation analysis and the use of model selection criteria; results on testing for unit root nonstationarity in ARIMA processes; the state space representation of ARMA models and its use for likelihood estimation and forecasting; score test for model checking; and deterministic components and structural components in time series models and their estimation based on regression-time series model methods.

19,748 citations

BookDOI
01 Jan 1986
TL;DR: The Kernel Method for Multivariate Data: Three Important Methods and Density Estimation in Action.
Abstract: Introduction. Survey of Existing Methods. The Kernel Method for Univariate Data. The Kernel Method for Multivariate Data. Three Important Methods. Density Estimation in Action.

15,499 citations

Journal ArticleDOI
TL;DR: In this article, the authors discuss the problem of estimating the sampling distribution of a pre-specified random variable R(X, F) on the basis of the observed data x.
Abstract: We discuss the following problem: given a random sample X = (X1, X2, …, Xn) from an unknown probability distribution F, estimate the sampling distribution of some prespecified random variable R(X, F) on the basis of the observed data x. (Standard jackknife theory gives an approximate mean and variance in the case R(X, F) = θ(F̂) − θ(F), where θ is some parameter of interest.) A general method, called the “bootstrap”, is introduced and shown to work satisfactorily on a variety of estimation problems. The jackknife is shown to be a linear approximation method for the bootstrap. The exposition proceeds by a series of examples: variance of the sample median, error rates in a linear discriminant analysis, ratio estimation, estimating regression parameters, etc.
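The abstract's first worked example, the variance of the sample median, is easy to reproduce with the bootstrap as described (a sketch under my own naming; the resampling scheme itself is the one the abstract defines):

```python
import random
import statistics

def bootstrap_dist(data, stat, n_boot=1000, seed=0):
    """Approximate the sampling distribution of `stat` by resampling
    the observed data with replacement (Efron's bootstrap)."""
    rng = random.Random(seed)
    n = len(data)
    return [stat([data[rng.randrange(n)] for _ in range(n)])
            for _ in range(n_boot)]

# Example from the abstract: variability of the sample median
random.seed(42)
data = [random.gauss(0.0, 1.0) for _ in range(50)]
reps = bootstrap_dist(data, statistics.median, n_boot=1000, seed=7)
se_median = statistics.stdev(reps)
```

The spread of `reps` estimates the standard error of the median without any distributional formula, which is the method's central appeal.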

14,483 citations