Author

J.V. Sutcliffe

Bio: J.V. Sutcliffe is an academic researcher from the National University of Ireland, Galway. The author has contributed to research on the topics of Conceptual model and Flood forecasting, has an h-index of 1, and has co-authored 1 publication receiving 17,307 citations.

Papers
Journal ArticleDOI
TL;DR: In this article, the principles governing the application of the conceptual model technique to river flow forecasting are discussed; the necessity for a systematic approach to the development and testing of the model is explained, and some preliminary ideas are suggested.

19,601 citations
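The efficiency measure introduced in this paper is now known as the Nash-Sutcliffe efficiency (NSE) and is commonly written as:

```latex
\mathrm{NSE} = 1 - \frac{\sum_{t=1}^{T} \left( Q_o^t - Q_m^t \right)^2}{\sum_{t=1}^{T} \left( Q_o^t - \overline{Q}_o \right)^2}
```

where \(Q_o^t\) is the observed discharge at time \(t\), \(Q_m^t\) the modelled discharge, and \(\overline{Q}_o\) the mean of the observations. NSE = 1 indicates a perfect fit, while NSE ≤ 0 means the model predicts no better than the observed mean.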


Cited by
Journal ArticleDOI
TL;DR: In this paper, the authors present guidelines for watershed model evaluation based on the review results and project-specific considerations, including single-event simulation, quality and quantity of measured data, model calibration procedure, evaluation time step, and project scope and magnitude.
Abstract: Watershed models are powerful tools for simulating the effect of watershed processes and management on soil and water resources. However, no comprehensive guidance is available to facilitate model evaluation in terms of the accuracy of simulated data compared to measured flow and constituent values. Thus, the objectives of this research were to: (1) determine recommended model evaluation techniques (statistical and graphical), (2) review reported ranges of values and corresponding performance ratings for the recommended statistics, and (3) establish guidelines for model evaluation based on the review results and project-specific considerations; all of these objectives focus on simulation of streamflow and transport of sediment and nutrients. These objectives were achieved with a thorough review of relevant literature on model application and recommended model evaluation methods. Based on this analysis, we recommend that three quantitative statistics, Nash-Sutcliffe efficiency (NSE), percent bias (PBIAS), and ratio of the root mean square error to the standard deviation of measured data (RSR), in addition to the graphical techniques, be used in model evaluation. The following model evaluation performance ratings were established for each recommended statistic. In general, model simulation can be judged as satisfactory if NSE > 0.50 and RSR < 0.70, and if PBIAS ± 25% for streamflow, PBIAS ± 55% for sediment, and PBIAS ± 70% for N and P. For PBIAS, constituent-specific performance ratings were determined based on uncertainty of measured data. Additional considerations related to model evaluation guidelines are also discussed. These considerations include: single-event simulation, quality and quantity of measured data, model calibration procedure, evaluation time step, and project scope and magnitude. A case study illustrating the application of the model evaluation guidelines is also provided.

9,386 citations
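The three recommended statistics and the streamflow thresholds quoted in the abstract can be sketched as follows (a minimal illustration; the function names are ours and NumPy is assumed):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; values <= 0 mean the
    model predicts no better than the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias: positive values indicate an underestimation bias."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

def rsr(obs, sim):
    """Ratio of the RMSE to the standard deviation of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.sum((obs - sim) ** 2)) / np.sqrt(np.sum((obs - obs.mean()) ** 2))

def satisfactory_streamflow(obs, sim):
    """'Satisfactory' streamflow rating per the thresholds quoted above:
    NSE > 0.50, RSR <= 0.70, and |PBIAS| <= 25%."""
    return nse(obs, sim) > 0.50 and rsr(obs, sim) <= 0.70 and abs(pbias(obs, sim)) <= 25.0
```

Note that with these definitions RSR = sqrt(1 − NSE), so the NSE and RSR thresholds are closely related; PBIAS adds an independent check on systematic bias.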

Journal ArticleDOI
TL;DR: In this paper, alternative goodness-of-fit or relative error measures (including the coefficient of efficiency and the index of agreement) that overcome many of the limitations of correlation-based measures are discussed.
Abstract: Correlation and correlation-based measures (e.g., the coefficient of determination) have been widely used to evaluate the “goodness-of-fit” of hydrologic and hydroclimatic models. These measures are oversensitive to extreme values (outliers) and are insensitive to additive and proportional differences between model predictions and observations. Because of these limitations, correlation-based measures can indicate that a model is a good predictor, even when it is not. In this paper, useful alternative goodness-of-fit or relative error measures (including the coefficient of efficiency and the index of agreement) that overcome many of the limitations of correlation-based measures are discussed. Modifications to these statistics to aid in interpretation are presented. It is concluded that correlation and correlation-based measures should not be used to assess the goodness-of-fit of a hydrologic or hydroclimatic model and that additional evaluation measures (such as summary statistics and absolute error measures) should supplement model evaluation tools.

3,891 citations
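The abstract's claim that correlation-based measures can rate a poor model as good is easy to demonstrate: a simulation with a purely proportional error has perfect correlation yet a strongly negative coefficient of efficiency. A minimal sketch (function names ours; the index of agreement is Willmott's d as commonly defined):

```python
import numpy as np

def coefficient_of_efficiency(obs, sim):
    # Nash-Sutcliffe coefficient of efficiency, E (unbounded below).
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def index_of_agreement(obs, sim):
    # Willmott's index of agreement, d, bounded on [0, 1].
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    denom = np.sum((np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return 1.0 - np.sum((obs - sim) ** 2) / denom

# A simulation with a purely proportional error (sim = 2 * obs):
obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
sim = 2.0 * obs
r = np.corrcoef(obs, sim)[0, 1]          # 1.0 -- correlation calls this perfect
e = coefficient_of_efficiency(obs, sim)  # -4.5 -- far worse than the observed mean
d = index_of_agreement(obs, sim)         # ~0.60 -- also flags the poor fit
```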

Journal ArticleDOI
TL;DR: A diagnostically interesting decomposition of NSE is presented, which facilitates analysis of the relative importance of its different components in the context of hydrological modelling, and it is shown how model calibration problems can arise due to interactions among these components.

3,147 citations
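The decomposition referred to here expresses NSE through three components: the linear correlation r, the relative variability α = σ_sim/σ_obs, and the normalised bias β_n = (μ_sim − μ_obs)/σ_obs, via NSE = 2αr − α² − β_n². A small numerical check of the identity (population standard deviations, ddof = 0, assumed):

```python
import numpy as np

def nse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def nse_decomposed(obs, sim):
    """NSE = 2*alpha*r - alpha**2 - beta_n**2, where
    alpha  = sigma_sim / sigma_obs          (relative variability),
    beta_n = (mu_sim - mu_obs) / sigma_obs  (bias normalised by sigma_obs),
    r      = linear correlation coefficient."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    alpha = sim.std() / obs.std()
    beta_n = (sim.mean() - obs.mean()) / obs.std()
    r = np.corrcoef(obs, sim)[0, 1]
    return 2.0 * alpha * r - alpha ** 2 - beta_n ** 2
```

The identity makes the calibration interactions visible: a model with damped variability (α < 1) or a systematic bias (β_n ≠ 0) is penalised even when the correlation is perfect.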

Journal ArticleDOI
TL;DR: In this article, a shuffled complex evolution (SCE-UA) method was proposed to solve the multiple optima problem for the conceptual rainfall runoff (CRR) model SIXPAR.
Abstract: The successful application of a conceptual rainfall-runoff (CRR) model depends on how well it is calibrated. Despite the popularity of CRR models, reports in the literature indicate that it is typically difficult, if not impossible, to obtain unique optimal values for their parameters using automatic calibration methods. Unless the best set of parameters associated with a given calibration data set can be found, it is difficult to determine how sensitive the parameter estimates (and hence the model forecasts) are to factors such as input and output data error, model error, quantity and quality of data, objective function used, and so on. Results are presented that establish clearly the nature of the multiple optima problem for the research CRR model SIXPAR. These results suggest that the CRR model optimization problem is more difficult than had been previously thought and that currently used local search procedures have a very low probability of successfully finding the optimal parameter sets. Next, the performance of three existing global search procedures are evaluated on the model SIXPAR. Finally, a powerful new global optimization procedure is presented, entitled the shuffled complex evolution (SCE-UA) method, which was able to consistently locate the global optimum of the SIXPAR model, and appears to be capable of efficiently and effectively solving the CRR model optimization problem.

2,988 citations
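SIXPAR and the full SCE-UA algorithm are not reproduced here, but the multiple-optima problem the paper motivates is easy to illustrate on a toy objective: a single-start local search finds whichever optimum its starting point falls near, while a multi-start strategy (the intuition behind population-based global methods such as SCE-UA) recovers the global optimum. The objective and all names below are illustrative assumptions:

```python
import numpy as np

# Toy 1-D "calibration" surface with a local optimum near x = -0.76 and the
# global optimum at x = 2 (where the objective equals 0).
def objective(x):
    return 0.5 * (x + 1.0) ** 2 * (x - 2.0) ** 2 + 0.3 * (x - 2.0) ** 2

def local_search(x0, lr=0.01, steps=5000, h=1e-6):
    """Plain gradient descent with a numerical derivative: a purely local method."""
    x = float(x0)
    for _ in range(steps):
        grad = (objective(x + h) - objective(x - h)) / (2.0 * h)
        x -= lr * grad
    return x

# Single-start local search: the answer depends entirely on the start point.
trapped = local_search(-2.0)   # converges to the local optimum near -0.76
found = local_search(3.0)      # converges to the global optimum at 2.0

# Multi-start search over the feasible range recovers the global optimum.
best = min((local_search(s) for s in np.linspace(-3.0, 4.0, 8)), key=objective)
```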

Journal ArticleDOI
TL;DR: In this paper, the utility of several efficiency criteria is investigated in three examples using a simple observed streamflow hydrograph; the selection and use of specific efficiency criteria and the interpretation of the results can challenge even the most experienced hydrologist, since each criterion may place different emphasis on different types of simulated and observed behaviours.
Abstract: The evaluation of hydrologic model behaviour and performance is commonly made and reported through comparisons of simulated and observed variables. Frequently, comparisons are made between simulated and measured streamflow at the catchment outlet. In distributed hydrological modelling approaches, additional comparisons of simulated and observed measurements for multi-response validation may be integrated into the evaluation procedure to assess overall modelling performance. In both approaches, single and multi-response, efficiency criteria are commonly used by hydrologists to provide an objective assessment of the "closeness" of the simulated behaviour to the observed measurements. While there are a few efficiency criteria such as the Nash-Sutcliffe efficiency, coefficient of determination, and index of agreement that are frequently used in hydrologic modeling studies and reported in the literature, there are a large number of other efficiency criteria to choose from. The selection and use of specific efficiency criteria and the interpretation of the results can be a challenge for even the most experienced hydrologist since each criterion may place different emphasis on different types of simulated and observed behaviours. In this paper, the utility of several efficiency criteria is investigated in three examples using a simple observed streamflow hydrograph.

2,375 citations
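The point that each criterion emphasises different behaviours can be seen on a small synthetic hydrograph: a simulation with a constant additive offset is scored as perfect by correlation (r = 1) yet penalised by NSE, while a simulation that only misses the peak is punished more heavily by NSE's squared errors. A minimal sketch (all values and names illustrative):

```python
import numpy as np

def nse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# A small synthetic hydrograph with a single peak.
obs = np.array([1.0, 2.0, 8.0, 4.0, 2.0, 1.0])
sim_offset = obs + 1.0                                # constant +1 additive bias
sim_peak = np.array([1.0, 2.0, 5.0, 4.0, 2.0, 1.0])  # misses the peak only

r_offset = np.corrcoef(obs, sim_offset)[0, 1]  # 1.0: correlation ignores the bias
nse_offset = nse(obs, sim_offset)              # ~0.83: NSE penalises the bias
nse_peak = nse(obs, sim_peak)                  # 0.75: squared errors punish the peak miss
```

No single number tells the whole story, which is why combining criteria (and inspecting the hydrograph itself) is generally advised.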