
Journal Article

River flow forecasting through conceptual models part I — A discussion of principles

01 Apr 1970 · Journal of Hydrology (Elsevier) · Vol. 10, Iss. 3, pp. 282-290

Abstract: The principles governing the application of the conceptual model technique to river flow forecasting are discussed. The necessity for a systematic approach to the development and testing of the model is explained and some preliminary ideas suggested.
Topics: Conceptual model (58%), Flood forecasting (52%)
Citations

Journal Article
Abstract: Watershed models are powerful tools for simulating the effect of watershed processes and management on soil and water resources. However, no comprehensive guidance is available to facilitate model evaluation in terms of the accuracy of simulated data compared to measured flow and constituent values. Thus, the objectives of this research were to: (1) determine recommended model evaluation techniques (statistical and graphical), (2) review reported ranges of values and corresponding performance ratings for the recommended statistics, and (3) establish guidelines for model evaluation based on the review results and project-specific considerations; all of these objectives focus on simulation of streamflow and transport of sediment and nutrients. These objectives were achieved with a thorough review of relevant literature on model application and recommended model evaluation methods. Based on this analysis, we recommend that three quantitative statistics, Nash-Sutcliffe efficiency (NSE), percent bias (PBIAS), and ratio of the root mean square error to the standard deviation of measured data (RSR), in addition to the graphical techniques, be used in model evaluation. The following model evaluation performance ratings were established for each recommended statistic. In general, model simulation can be judged as satisfactory if NSE > 0.50 and RSR < 0.70, and if PBIAS ± 25% for streamflow, PBIAS ± 55% for sediment, and PBIAS ± 70% for N and P. For PBIAS, constituent-specific performance ratings were determined based on uncertainty of measured data. Additional considerations related to model evaluation guidelines are also discussed. These considerations include: single-event simulation, quality and quantity of measured data, model calibration procedure, evaluation time step, and project scope and magnitude. A case study illustrating the application of the model evaluation guidelines is also provided.
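
As a concrete illustration of the three recommended statistics, the sketch below computes NSE, PBIAS, and RSR for a pair of observed and simulated series using their standard definitions (residual sum of squares relative to the observed variance for NSE, percent deviation of simulated from observed volume for PBIAS, and RMSE divided by the standard deviation of the observations for RSR). The data and function names are illustrative, not taken from the paper.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 minus residual variance over observed variance."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias: average tendency of the simulation to under- or over-estimate."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

def rsr(obs, sim):
    """RMSE divided by the standard deviation of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.sum((obs - sim) ** 2)) / np.sqrt(np.sum((obs - obs.mean()) ** 2))

# Illustrative daily streamflow values (m^3/s), checked against the thresholds above
# (NSE > 0.50, RSR < 0.70, |PBIAS| < 25% for streamflow).
obs = np.array([12.0, 15.0, 30.0, 22.0, 18.0, 14.0])
sim = np.array([11.0, 16.0, 27.0, 24.0, 17.0, 15.0])
print(f"NSE = {nse(obs, sim):.2f}, PBIAS = {pbias(obs, sim):.1f}%, RSR = {rsr(obs, sim):.2f}")
```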

7,499 citations


Cites background from "River flow forecasting through conceptual models part I — A discussion of principles"

  • ...Nash-Sutcliffe efficiency (NSE): The Nash-Sutcliffe efficiency (NSE) is a normalized statistic that determines the relative magnitude of the residual variance (“noise”) compared to the measured data variance (“information”) (Nash and Sutcliffe, 1970)....



Journal Article
Abstract: Correlation and correlation-based measures (e.g., the coefficient of determination) have been widely used to evaluate the “goodness-of-fit” of hydrologic and hydroclimatic models. These measures are oversensitive to extreme values (outliers) and are insensitive to additive and proportional differences between model predictions and observations. Because of these limitations, correlation-based measures can indicate that a model is a good predictor, even when it is not. In this paper, useful alternative goodness-of-fit or relative error measures (including the coefficient of efficiency and the index of agreement) that overcome many of the limitations of correlation-based measures are discussed. Modifications to these statistics to aid in interpretation are presented. It is concluded that correlation and correlation-based measures should not be used to assess the goodness-of-fit of a hydrologic or hydroclimatic model and that additional evaluation measures (such as summary statistics and absolute error measures) should supplement model evaluation tools.
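
To make the limitation concrete, the short sketch below (illustrative data and function names, not from the paper) contrasts the squared correlation with the coefficient of efficiency and Willmott's index of agreement for a simulation that differs from the observations only by a constant additive offset: r² stays perfect while the other two measures expose the poor fit.

```python
import numpy as np

def coefficient_of_efficiency(obs, sim):
    """Nash-Sutcliffe coefficient of efficiency E."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def index_of_agreement(obs, sim):
    """Willmott's index of agreement d."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    denom = np.sum((np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return 1.0 - np.sum((obs - sim) ** 2) / denom

obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
sim = obs + 3.0                                    # purely additive error
r2 = np.corrcoef(obs, sim)[0, 1] ** 2              # correlation-based measure
print(f"r^2 = {r2:.2f}, E = {coefficient_of_efficiency(obs, sim):.2f}, "
      f"d = {index_of_agreement(obs, sim):.2f}")   # r^2 = 1.00 despite the offset
```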

3,462 citations


Journal Article
Abstract: The successful application of a conceptual rainfall-runoff (CRR) model depends on how well it is calibrated. Despite the popularity of CRR models, reports in the literature indicate that it is typically difficult, if not impossible, to obtain unique optimal values for their parameters using automatic calibration methods. Unless the best set of parameters associated with a given calibration data set can be found, it is difficult to determine how sensitive the parameter estimates (and hence the model forecasts) are to factors such as input and output data error, model error, quantity and quality of data, objective function used, and so on. Results are presented that establish clearly the nature of the multiple optima problem for the research CRR model SIXPAR. These results suggest that the CRR model optimization problem is more difficult than had been previously thought and that currently used local search procedures have a very low probability of successfully finding the optimal parameter sets. Next, the performance of three existing global search procedures is evaluated on the model SIXPAR. Finally, a powerful new global optimization procedure is presented, entitled the shuffled complex evolution (SCE-UA) method, which was able to consistently locate the global optimum of the SIXPAR model, and appears to be capable of efficiently and effectively solving the CRR model optimization problem.
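
The SCE-UA algorithm itself is not reproduced here (Python implementations are available in, for example, the SPOTPY package). The sketch below only illustrates the underlying idea of treating calibration as a global minimisation of an objective over the feasible parameter space, using SciPy's differential_evolution as a stand-in population-based global search; the two-parameter "model" and the synthetic data are purely illustrative.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Synthetic calibration data for a toy two-parameter runoff response q = a * p ** b.
rng = np.random.default_rng(42)
precip = np.linspace(1.0, 20.0, 50)
q_obs = 0.4 * precip ** 1.3 + rng.normal(0.0, 0.5, precip.size)

def objective(params):
    """Sum of squared errors between simulated and observed runoff for parameters (a, b)."""
    a, b = params
    q_sim = a * precip ** b
    return np.sum((q_obs - q_sim) ** 2)

# Population-based global search over the feasible ranges, so the result does not depend
# on a single starting point the way a local (gradient or simplex) search would.
result = differential_evolution(objective, bounds=[(0.01, 5.0), (0.1, 3.0)], seed=1)
a_hat, b_hat = result.x
print(f"a = {a_hat:.2f}, b = {b_hat:.2f}, SSE = {result.fun:.2f}")
```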

2,799 citations


Journal Article
Abstract: The Soil and Water Assessment Tool (SWAT) model is a continuation of nearly 30 years of modeling efforts conducted by the USDA Agricultural Research Service (ARS). SWAT has gained international acceptance as a robust interdisciplinary watershed modeling tool as evidenced by international SWAT conferences, hundreds of SWAT-related papers presented at numerous other scientific meetings, and dozens of articles published in peer-reviewed journals. The model has also been adopted as part of the U.S. Environmental Protection Agency (USEPA) Better Assessment Science Integrating Point and Nonpoint Sources (BASINS) software package and is being used by many U.S. federal and state agencies, including the USDA within the Conservation Effects Assessment Project (CEAP). At present, over 250 peer-reviewed published articles have been identified that report SWAT applications, reviews of SWAT components, or other research that includes SWAT. Many of these peer-reviewed articles are summarized here according to relevant application categories such as streamflow calibration and related hydrologic analyses, climate change impacts on hydrology, pollutant load assessments, comparisons with other models, and sensitivity analyses and calibration techniques. Strengths and weaknesses of the model are presented, and recommended research needs for SWAT are also provided.

2,116 citations


Journal Article
TL;DR: A diagnostically interesting decomposition of NSE is presented, which facilitates analysis of the relative importance of its different components in the context of hydrological modelling, and it is shown how model calibration problems can arise due to interactions among these components.
Abstract: The mean squared error (MSE) and the related normalization, the Nash-Sutcliffe efficiency (NSE), are the two criteria most widely used for calibration and evaluation of hydrological models with observed data. Here, we present a diagnostically interesting decomposition of NSE (and hence MSE), which facilitates analysis of the relative importance of its different components in the context of hydrological modelling, and show how model calibration problems can arise due to interactions among these components. The analysis is illustrated by calibrating a simple conceptual precipitation-runoff model to daily data for a number of Austrian basins having a broad range of hydro-meteorological characteristics. Evaluation of the results clearly demonstrates the problems that can be associated with any calibration based on the NSE (or MSE) criterion. While we propose and test an alternative criterion that can help to reduce model calibration problems, the primary purpose of this study is not to present an improved measure of model performance. Instead, we seek to show that there are systematic problems inherent with any optimization based on formulations related to the MSE. The analysis and results have implications to the manner in which we calibrate and evaluate environmental models; we discuss these and suggest possible ways forward that may move us towards an improved and diagnostically meaningful approach to model performance evaluation and identification.
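
The decomposition discussed here can be written as NSE = 2αr − α² − β_n², where r is the linear correlation between simulated and observed flows, α the ratio of their standard deviations, and β_n the mean bias normalised by the observed standard deviation (Gupta et al., 2009). The sketch below, with illustrative data, computes the three components and verifies that they reproduce NSE calculated directly.

```python
import numpy as np

def nse_components(obs, sim):
    """Return (r, alpha, beta_n) such that NSE = 2*alpha*r - alpha**2 - beta_n**2."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]                  # linear correlation
    alpha = sim.std() / obs.std()                    # variability ratio
    beta_n = (sim.mean() - obs.mean()) / obs.std()   # bias normalised by observed std
    return r, alpha, beta_n

obs = np.array([3.0, 5.0, 9.0, 12.0, 8.0, 4.0])
sim = np.array([2.5, 6.0, 8.0, 13.5, 7.0, 5.0])
r, alpha, beta_n = nse_components(obs, sim)
nse_from_parts = 2.0 * alpha * r - alpha ** 2 - beta_n ** 2
nse_direct = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
print(f"r = {r:.3f}, alpha = {alpha:.3f}, beta_n = {beta_n:.3f}")
print(f"NSE (decomposed) = {nse_from_parts:.3f}, NSE (direct) = {nse_direct:.3f}")
```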

2,099 citations


Cites methods from "River flow forecasting through conceptual models part I — A discussion of principles"

  • ...The mean squared error (MSE) criterion and its related normalization, the Nash–Sutcliffe efficiency (NSE, defined by Nash and Sutcliffe, 1970) are the two criteria most widely used for calibration and evaluation of hydrological models with observed data....



Performance Metrics

No. of citations received by the paper in previous years

Year    Citations
2022    37
2021    1,487
2020    1,449
2019    1,354
2018    1,301
2017    1,255