
Showing papers in "Journal of Hydroinformatics in 2007"


Journal ArticleDOI
TL;DR: OpenMI as discussed by the authors provides a standardized interface to define, describe and transfer data on a time basis between software components that run simultaneously, thus supporting systems where feedback between the modelled processes is necessary in order to achieve physically sound results.
Abstract: Management issues in many sectors of society demand integrated analysis that can be supported by integrated modelling. Since all-inclusive modelling software is difficult to achieve, and possibly even undesirable, integrated modelling requires the linkage of individual models or model components that address specific domains. Emerging from the water sector, the OpenMI has been developed with the purpose of being the glue that can link together model components from various origins. The OpenMI provides a standardized interface to define, describe and transfer data on a time basis between software components that run simultaneously, thus supporting systems where feedback between the modelled processes is necessary in order to achieve physically sound results. The OpenMI allows the linking of models with different spatial and temporal representations: for example, linking river models and groundwater models, where the river model typically uses a one-dimensional grid and a short timestep and the groundwater model uses a two- or three-dimensional grid and a longer timestep. The OpenMI is designed to accommodate the easy migration of existing modelling systems, since their re-implementation may not be economically feasible due to the large investments that have been put into the development and testing of these systems.

368 citations
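
To make the pull-driven linking idea concrete, here is a minimal Python sketch of two components exchanging values on a time basis with different timesteps. The class and method names are illustrative only; the actual OpenMI standard defines richer interfaces (e.g. ILinkableComponent) for .NET and Java, with explicit exchange-item metadata and spatial/temporal interpolation.

```python
# Minimal sketch of OpenMI-style pull-driven linking (hypothetical names; the
# real OpenMI standard specifies far richer interfaces than shown here).

class LinkableComponent:
    """A model component exposing values on request for a given time."""
    def get_values(self, quantity, time):
        raise NotImplementedError

class GroundwaterModel(LinkableComponent):
    def __init__(self):
        self.time, self.dt, self.head = 0.0, 1.0, 2.0     # longer timestep (d)

    def get_values(self, quantity, time):
        while self.time < time:                           # advance to request
            self.head -= self.dt * 0.01 * self.head       # toy recession
            self.time += self.dt
        return self.head          # real OpenMI would interpolate in time/space

class RiverModel(LinkableComponent):
    def __init__(self, groundwater):
        self.gw = groundwater                             # linked provider
        self.time, self.dt, self.stage = 0.0, 0.25, 1.0   # short timestep (d)

    def get_values(self, quantity, time):
        while self.time < time:
            head = self.gw.get_values("head", self.time)  # pull across link
            self.stage += self.dt * 0.1 * (head - self.stage)
            self.time += self.dt
        return self.stage

river = RiverModel(GroundwaterModel())
print(river.get_values("stage", time=10.0))
```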


Journal ArticleDOI
TL;DR: It is shown that linear and nonlinear kernel functions (i.e. RBF) can each outperform the other under different circumstances in the same catchment, and that optimum selection among a large number of input combinations and parameters is a real challenge for any modeller using SVMs.
Abstract: This paper describes an application of SVM over the Bird Creek catchment and addresses some important issues in developing and applying SVM in flood forecasting. It has been found that, like artificial neural network models, SVM also suffers from over-fitting and under-fitting problems, and that over-fitting is more damaging than under-fitting. This paper illustrates that an optimum selection among a large number of various input combinations and parameters is a real challenge for any modeller using SVMs. A comparison with some benchmarking models has been made, i.e. Transfer Function, Trend and Naive models. It demonstrates that SVM is able to surpass all of them in the test data series, at the expense of a huge amount of time and effort. Unlike previously published results, this paper shows that linear and nonlinear kernel functions (i.e. RBF) can each outperform the other under different circumstances in the same catchment. The study also shows an interesting result in the SVM response to different rainfall inputs: lighter rainfall generates very different responses from heavier rainfall, which is a very useful way to reveal the behaviour of an SVM model.

175 citations
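
A hedged illustration of the kernel comparison: the sketch below fits support vector regression with linear and RBF kernels to synthetic lagged-rainfall/runoff data using scikit-learn. The data, inputs and hyperparameters are assumptions for demonstration, not the paper's Bird Creek setup.

```python
# Sketch: comparing linear and RBF kernels for flood forecasting with SVR
# (synthetic data; the paper's Bird Creek inputs and tuning are not reproduced).
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
rain = rng.gamma(2.0, 5.0, size=(500, 3))          # lagged rainfall inputs
flow = (0.6 * rain[:, 0] + 0.3 * rain[:, 1]
        + 0.1 * rain[:, 2] ** 1.2                  # mildly nonlinear response
        + rng.normal(0, 1.0, 500))

X_tr, X_te, y_tr, y_te = train_test_split(rain, flow, random_state=0)
for kernel in ("linear", "rbf"):
    # C and epsilon control the over-/under-fitting trade-off the paper stresses
    model = SVR(kernel=kernel, C=10.0, epsilon=0.5).fit(X_tr, y_tr)
    print(kernel, "test R^2 =", round(model.score(X_te, y_te), 3))
```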


Journal ArticleDOI
TL;DR: ParaSol, a method that performs optimization and uncertainty analysis for complex models such as distributed water quality models, is presented; the SCE-UA sampling used by ParaSol proved more effective and efficient than Monte Carlo sampling, as none of the Monte Carlo samples were close to the minimum or even within the confidence region defined by ParaSol.
Abstract: Catchment water quality models have many parameters, several output variables and a complex structure leading to multiple minima in the objective function. General uncertainty/optimization methods based on random sampling (e.g. GLUE) or local methods (e.g. PEST) are often not applicable for theoretical or practical reasons. This paper presents “ParaSol”, a method that performs optimization and uncertainty analysis for complex models such as distributed water quality models. Optimization is done by adapting the Shuffled Complex Evolution algorithm (SCE-UA) to account for multi-objective problems and for large numbers of parameters. The simulations performed by the SCE-UA are used further for uncertainty analysis, thereby focusing the uncertainty analysis on solutions near the optimum/optima. Two methods have been developed that select “good” results out of these simulations based on an objective threshold. The first method is based on χ² statistics to delineate the confidence regions around the optimum/optima and the second method uses Bayesian statistics to define high-probability regions. The ParaSol method was applied to a simple bucket model and to a Soil and Water Assessment Tool (SWAT) model of Honey Creek, OH, USA. The bucket model case showed the success of the method in finding the minimum and the applicability of the statistics under importance sampling. Both cases showed that the confidence regions are very small when the χ² statistics are used and even smaller when using the Bayesian statistics. By comparing the ParaSol uncertainty results to those derived from 500,000 Monte Carlo simulations, it was shown that the SCE-UA sampling used for ParaSol was more effective and efficient, as none of the Monte Carlo samples were close to the minimum or even within the confidence region defined by ParaSol.

127 citations
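
The χ²-threshold selection can be sketched as follows. The code assumes a common form of the threshold (the minimum objective scaled by a χ² quantile) and a stored sample of simulations standing in for the SCE-UA history; details may differ from ParaSol's exact formulation.

```python
# Sketch: selecting "good" parameter sets from stored optimization samples with
# a chi-squared objective threshold, in the spirit of ParaSol (this particular
# threshold form is an assumption).
import numpy as np
from scipy.stats import chi2

def confidence_region(params, sse, n_obs, n_par, alpha=0.95):
    """Keep samples whose SSE lies below the chi-squared-scaled minimum."""
    sse = np.asarray(sse)
    threshold = sse.min() * (1.0 + chi2.ppf(alpha, n_par) / (n_obs - n_par))
    return params[sse <= threshold], threshold

# params: e.g. every parameter set tried by SCE-UA; sse: its objective value
params = np.random.default_rng(1).uniform(0, 1, size=(5000, 4))
sse = 10.0 + ((params - 0.5) ** 2).sum(axis=1) * 50    # toy objective surface
kept, thr = confidence_region(params, sse, n_obs=200, n_par=4)
print(len(kept), "samples inside the confidence region, threshold =",
      round(thr, 2))
```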


Journal ArticleDOI
TL;DR: In this paper, the authors presented a new approach for the real-time, near-optimal control of water-distribution networks based on the combined use of an artificial neural network for predicting the consequences of different control settings and a genetic algorithm for selecting the best combination.
Abstract: This paper presents a new approach for the real-time, near-optimal control of water-distribution networks, which forms an integral part of the POWADIMA research project. The process is based on the combined use of an artificial neural network for predicting the consequences of different control settings and a genetic algorithm for selecting the best combination. By this means, it is possible to find the optimal, or at least near-optimal, pump and valve settings for the present time-step as well as those up to a selected operating horizon, taking account of the short-term demand fluctuations, the electricity tariff structure and operational constraints such as minimum delivery pressures, etc. Thereafter, the near-optimal control settings for the present time-step are implemented. Having reconciled any discrepancies between the previously predicted and measured storage levels at the next update of the monitoring facilities, the whole process is repeated on a rolling basis and a new operating strategy is computed. Contingency measures for dealing with pump failures, pipe bursts, etc., have also been included. The novelty of this approach is illustrated by the application to a small, hypothetical network. Its relevance to real networks is discussed in the subsequent papers on case studies.

107 citations
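
The following toy sketch shows the shape of the GA-over-surrogate loop: candidate pump schedules over an operating horizon are scored by a fast stand-in for the trained ANN, with tariff costs and a pressure-constraint penalty. All numbers and the surrogate itself are invented for illustration; the POWADIMA components are far richer.

```python
# Sketch of the GA-over-ANN idea: a genetic algorithm searches pump settings,
# scoring each candidate with a fast surrogate instead of a hydraulic solver.
import random

N_PUMPS, HORIZON = 3, 4            # pump on/off decisions per future time-step
TARIFF = [0.05, 0.05, 0.12, 0.12]  # electricity price per time-step (assumed)

def ann_surrogate(schedule):
    """Stand-in for the trained ANN: returns (energy_cost, min_pressure)."""
    cost = sum(TARIFF[t] * sum(schedule[t]) * 30.0 for t in range(HORIZON))
    pressure = 20.0 + 4.0 * min(sum(s) for s in schedule)   # toy relation
    return cost, pressure

def fitness(schedule):
    cost, p_min = ann_surrogate(schedule)
    penalty = 100.0 * max(0.0, 24.0 - p_min)   # minimum-pressure constraint
    return -(cost + penalty)                   # GA maximizes fitness

def random_schedule():
    return [[random.randint(0, 1) for _ in range(N_PUMPS)]
            for _ in range(HORIZON)]

pop = [random_schedule() for _ in range(40)]
for _ in range(60):                            # generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    children = []
    for _ in range(30):
        a, b = random.sample(parents, 2)
        child = [random.choice(rows)[:] for rows in zip(a, b)]  # crossover
        if random.random() < 0.2:                               # mutation
            t = random.randrange(HORIZON); p = random.randrange(N_PUMPS)
            child[t][p] ^= 1
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print("best schedule:", best, "fitness:", round(fitness(best), 2))
```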


Journal ArticleDOI
TL;DR: In this article, an artificial neural network (ANN) is used to predict the consequences of different control settings on the performance of the water-distribution network, in the context of real-time, near-optimal control.
Abstract: As part of the POWADIMA research project, this paper describes the technique used to predict the consequences of different control settings on the performance of the water-distribution network, in the context of real-time, near-optimal control. Since the use of a complex hydraulic simulation model is somewhat impractical for real-time operations as a result of the computational burden it imposes, the approach adopted has been to capture its domain knowledge in a far more efficient form by means of an artificial neural network (ANN). The way this is achieved is to run the hydraulic simulation model off-line, with a large number of different combinations of initial tank-storage levels, demands, pump and valve settings, to predict future tank-storage water levels, hydrostatic pressures and flow rates at critical points throughout the network. These input/output data sets are used to train an ANN, which is then verified using testing sets. Thereafter, the ANN is employed in preference to the hydraulic simulation model within the optimization process. For experimental purposes, this technique was initially applied to a small, hypothetical water-distribution network, using EPANET as the hydraulic simulation package. The application to two real networks is described in subsequent papers of this series.

85 citations
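
A minimal sketch of the metamodelling step, with a toy function standing in for the off-line EPANET runs: generate many (state, control) to (next tank level) pairs, train an ANN on them, and verify on held-out testing sets. The variables and network size are assumptions.

```python
# Sketch of ANN metamodelling: sample a simulator off-line over many control
# settings, then train a neural network on the input/output pairs.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def hydraulic_simulator(X):
    """Stand-in for off-line EPANET runs: next tank level from
    (initial level, demand, pumps running)."""
    tank0, demand, pumps_on = X.T
    return tank0 + 0.8 * pumps_on - 0.05 * demand

X = np.column_stack([rng.uniform(1, 5, 20000),     # initial tank level (m)
                     rng.uniform(10, 80, 20000),   # demand (L/s)
                     rng.integers(0, 4, 20000)])   # pumps running
y = hydraulic_simulator(X) + rng.normal(0, 0.02, len(X))  # sensor-like noise

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=500,
                   random_state=0).fit(X_tr, y_tr)
print("verification R^2 on the testing set:", round(ann.score(X_te, y_te), 3))
```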


Journal ArticleDOI
TL;DR: The objective of the POWADIMA research project was to determine the feasibility and efficacy of introducing real-time, near-optimal control for water-distribution networks; the methodology includes replicating the hydraulic model by an artificial neural network which, computationally, is far more efficient.
Abstract: This paper is intended to serve as an introduction to the POWADIMA research project, whose objective was to determine the feasibility and efficacy of introducing real-time, near-optimal control for water-distribution networks. With that in mind, its contents include the current state-of-the-art and some of the difficulties that would need to be addressed if the goal of near-optimal control was to be achieved. Subsequently, the approach adopted is outlined, together with the reasons for the choice. Since it would be somewhat impractical to use a conventional hydraulic simulation model for real-time, near-optimal control, the methodology includes replicating the model by an artificial neural network which, computationally, is far more efficient. Thereafter, the latter is embedded in a dynamic genetic algorithm, designed specifically for real-time use. In this way, the near-optimal control settings to meet the current demands and minimize the overall pumping costs up to the operating horizon can be derived. The programme of work undertaken in achieving this end is then described. By way of conclusion, the potential benefits arising from implementing the control system developed are briefly reviewed, as are the possibilities of using the same approach for other application areas.

79 citations


Journal ArticleDOI
TL;DR: In this paper, the authors presented the results obtained for a full (simulated) year of operation in which an energy cost saving of some 25% was achieved in comparison to the corresponding cost of current practice.
Abstract: Haifa-A is the first of two case studies relating to the POWADIMA research project. It comprises about 20% of the city's water-distribution network and serves a population of some 60,000 from two sources. The hydraulic simulation model of the network has 126 pipes, 112 nodes, 9 storage tanks, 1 operating valve and 17 pumps in 5 discrete pumping stations. The complex energy tariff structure changes with hours of the day and days of the year. For a dynamically rolling operational horizon of 24 h ahead, the real-time, near-optimal control strategy is calculated by a software package that combines a genetic algorithm (GA) optimizer with an artificial neural network (ANN) predictor, the latter having replaced a conventional hydraulic simulation model to achieve the computational efficiency required for real-time use. This paper describes the Haifa-A hydraulic network, the ANN predictor, the GA optimizer and the demand-forecasting model that were used. Thereafter, it presents and analyses the results obtained for a full (simulated) year of operation in which an energy cost saving of some 25% was achieved in comparison to the corresponding cost of current practice. Conclusions are drawn regarding the achievement of aims and future prospects.

77 citations


Journal ArticleDOI
TL;DR: Box plots and factor analysis of Tolo Harbour monitoring data reveal pronounced spatial heterogeneity and a considerable decline in nutrient levels in recent years, while indicating that nutrients are still being supplied from sources other than the known point sources, possibly the sediments.
Abstract: In this study, data mining using box plots and multivariate statistical analysis using factor analysis are employed for a spatio-temporal analysis of coastal water quality data from Tolo Harbour, Hong Kong. The analysis of box plots reveals pronounced spatial heterogeneity of the parameters studied. The spatial analysis clearly shows monitoring station TM2 in the Harbour Subzone to be most susceptible to eutrophication, with the highest nutrient and algal biomass concentrations. The factor analysis brings to light the parameters dominant in the ecological system of the coastal marine environment. The temporal analysis confirms the considerable decline in nutrient levels in recent years. In spite of this decline, the factor analysis indicates that nutrient processes still play an important role, suggesting an adequate supply of nutrients. It seems that they are being released from sources other than the known point sources, possibly from nutrients accumulated in the sediments, necessitating steps to be undertaken for their control as well. This study demonstrates the use of data mining techniques in the ecological system of Tolo Harbour.

77 citations
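
For readers unfamiliar with the technique, here is a small factor analysis sketch on synthetic water quality samples using scikit-learn's FactorAnalysis: with several correlated nutrient-related columns, a dominant "eutrophication" factor emerges, as in the study. The data here are invented, not the Tolo Harbour records.

```python
# Sketch: factor analysis of a (samples x parameters) water quality matrix.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
latent = rng.normal(size=200)                     # hidden nutrient driver
samples = np.column_stack([
    1.0 * latent + rng.normal(0, 0.3, 200),       # total nitrogen
    0.9 * latent + rng.normal(0, 0.3, 200),       # total phosphorus
    0.8 * latent + rng.normal(0, 0.5, 200),       # chlorophyll-a
    rng.normal(size=200),                         # dissolved oxygen (indep.)
    rng.normal(size=200),                         # salinity (independent)
])
fa = FactorAnalysis(n_components=2, random_state=0).fit(samples)
print("loadings (factors x parameters):")
print(np.round(fa.components_, 2))  # a dominant 'eutrophication' factor emerges
```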


Journal ArticleDOI
TL;DR: The optimal control and operation of an irrigation pumping station system is achieved with the honey-bees mating optimization (HBMO) algorithm, tested on a practical design; the results prove the ability of combining a dynamic penalty function with the HBMO algorithm to solve combinatorial design–operation optimization problems.
Abstract: Because of the complexity of some optimization problems, evolutionary and meta-heuristic algorithms are sometimes more applicable than traditional optimization methods. Some difficulties in solving design-operation problems in the field of engineering are due to the multi-modality of the solution region of these problems. Since the design variables are usually specified as discrete variables and the other, continuous decision variables have to be set according to the range of the discrete ones, the possibility of the final solution being trapped in some local optimum increases. In such cases, the capability of both traditional and evolutionary algorithms decreases. Thus, the development of a strategy to overcome this problem is the subject of this paper. For water utilities, one of the greatest potential areas for energy cost-savings is the effective scheduling of daily pump operations. Optimum design and operation of pumping stations is a prominent problem in this area, for which a wide background of solutions using different methods exists. Computation in all methods is driven by an objective function that includes operating and capital costs, subject to various performance and hydraulic constraints. This paper achieves the optimal control and operation of an irrigation pumping station system with one of the latest tools used in optimization problems, the honey-bees mating optimization (HBMO) algorithm, tested on a practical design. The HBMO algorithm with a dynamic penalty function is presented and compared with two other well-known optimization tools, the Lagrange multipliers (LM) method and genetic algorithms (GA), as well as with previous results of the HBMO algorithm with a constant penalty function for the same problem. The LM, GA and HBMO approaches simultaneously determine the least total annual cost of the pumping station and its operation. The solution includes the selection of pump type, capacity and number of units, as well as scheduling the operation of the irrigation pumps, resulting in minimum design and operating cost for a set of water demand curves. In this paper, the HBMO algorithm is applied and the dynamic penalty function is tested to demonstrate the efficiency of this combination. The results are very promising and prove the ability of combining the dynamic penalty function with the HBMO algorithm for solving combinatorial design–operation optimization problems. Application of these models to a real-world project shows not only considerable savings in cost and energy but also highlights the efficiency and capability of the dynamic penalty function in the HBMO algorithm for solving complex problems of this type.

58 citations
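
The dynamic penalty idea can be sketched generically: constraint violations are penalised more heavily as iterations progress, so early exploration is cheap but final solutions must be feasible. The form below is the well-known Joines and Houck schedule, given as an assumption; the paper's exact penalty schedule may differ.

```python
# Sketch of a dynamic penalty: the weight on constraint violations grows with
# the iteration counter (generic Joines & Houck form, an assumption here).
def dynamic_penalty(violations, iteration, c=0.5, alpha=2.0, beta=2.0):
    """(c * t)^alpha * sum(v^beta) over all constraint violations v >= 0."""
    return (c * iteration) ** alpha * sum(v ** beta for v in violations)

def penalised_cost(raw_cost, violations, iteration):
    return raw_cost + dynamic_penalty(violations, iteration)

# Same violation, growing penalty as the search progresses:
print(penalised_cost(100.0, [0.2], iteration=1))    # 100.01
print(penalised_cost(100.0, [0.2], iteration=100))  # 200.0: penalty dominates
```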


Journal ArticleDOI
TL;DR: A comprehensive investigation of the GA operators in a high-dimensional search space was conducted and it was found that a uniform crossover operation was superior to both one-point and two-point crossover operations over the whole range of crossover probabilities.
Abstract: Successful implementation of a catchment modelling system requires careful consideration of the system calibration which involves evaluation of many spatially and temporally variable control parameters. Evaluation of spatially variable control parameters has been an issue of increasing concern arising from an increased awareness of the inappropriateness of assuming catchment averaged values. Presented herein is the application of a real-value coding genetic algorithm (GA) for evaluation of spatially variable control parameters for implementation with the Storm Water Management Model (SWMM). It was found that a real-value coding GA using multiple storms calibration was a robust search technique that was capable of identifying the most promising range of values for spatially variable control parameters. As the selection of appropriate GA operators is an important aspect of the GA efficiency, a comprehensive investigation of the GA operators in a high-dimensional search space was conducted. It was found that a uniform crossover operation was superior to both one-point and two-point crossover operations over the whole range of crossover probabilities, and the optimal uniform crossover and mutation probabilities for the complex system considered were in the range of 0.75–0.90 and 0.01–0.1, respectively.

40 citations
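
A short sketch of the compared operators for a real-value coded chromosome follows; the crossover and mutation probabilities echo the ranges the paper recommends, while everything else is generic rather than SWMM-specific.

```python
# Sketch: one-point vs uniform crossover for a real-value coded GA chromosome
# of spatially variable control parameters (generic, not SWMM-specific).
import random

def one_point_crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def uniform_crossover(a, b, swap_prob=0.5):
    c1, c2 = a[:], b[:]
    for i in range(len(a)):
        if random.random() < swap_prob:     # each gene swapped independently
            c1[i], c2[i] = c2[i], c1[i]
    return c1, c2

def mutate(chrom, p_mut=0.05, scale=0.1):   # paper's optimum p_mut: 0.01-0.1
    return [g + random.gauss(0, scale) if random.random() < p_mut else g
            for g in chrom]

random.seed(1)
parent1 = [0.2, 0.5, 0.8, 0.3, 0.9]         # e.g. subcatchment parameters
parent2 = [0.6, 0.1, 0.4, 0.7, 0.2]
print("one-point:", one_point_crossover(parent1, parent2))
print("uniform:  ", uniform_crossover(parent1, parent2))  # crossover prob
print("mutated:  ", mutate(parent1))                      # ~0.75-0.90 in paper
```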


Journal ArticleDOI
TL;DR: The use of a Geographic Information System, as an integration framework for the water modeling systems, together with object-oriented data modeling and programming schemes is explained, with a case study in which the HEC-HMS hydrologic model and the HEC-RAS hydraulic model are integrated into an automated floodplain mapping application on a GIS.
Abstract: The sustainable and equitable management of water requires integrated analysis which includes the integration of a multitude of modeling systems at the core. The linkage of the modeling systems and components is the main bottleneck to achieve the integrated modeling solutions that maintain the integrity of the entire environmental system for comprehensive analysis, planning and management. In this paper, the use of a Geographic Information System (GIS), as an integration framework for the water modeling systems, together with object-oriented data modeling and programming schemes is explained. Integration of the modeling systems on a GIS platform, through a surface-water-specific GIS data model, Arc Hydro, and interface data models as data repositories for common water features, hydrologic and hydraulic modeling elements, is presented with a case study. Arc Hydro served as an integration data model for the simulation models of concern. Time series data transfer between modeling system at the information exchange points is facilitated using object-oriented linkage programs, and relationships among the modeling elements are established through Arc Hydro. In the case study, the HEC-HMS hydrologic model and the HEC-RAS hydraulic model are integrated into an automated floodplain mapping application on a GIS. The implementation of the integration methodology is presented.
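
A minimal sketch of the linkage idea: an exchange-point object keyed by a shared feature identifier passes a simulated hydrograph from the hydrologic model to the hydraulic model. Class and attribute names are illustrative, not the Arc Hydro or HEC APIs.

```python
# Sketch of time series transfer at an information exchange point, keyed by a
# shared feature ID as in an Arc Hydro-style data model (illustrative names).
from dataclasses import dataclass, field

@dataclass
class ExchangePoint:
    hydro_id: int                                # shared feature identifier
    series: dict = field(default_factory=dict)   # time -> discharge (m3/s)

    def write(self, time, value):                # filled by hydrologic model
        self.series[time] = value

    def read(self):                              # consumed by hydraulic model
        return sorted(self.series.items())

outlet = ExchangePoint(hydro_id=4211)
for t, q in enumerate([2.0, 5.5, 14.2, 9.8, 4.1]):  # toy HEC-HMS-style output
    outlet.write(t, q)
print("boundary inflow for the hydraulic model:", outlet.read())
```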

Journal ArticleDOI
TL;DR: An interactive digital atlas of Colombia, HidroSIG, has been developed with distributed maps and time series of monthly and long-term average hydro-climatological variables, as part of a more comprehensive geographical information system (GIS) and database as mentioned in this paper.
Abstract: An interactive digital hydro-climatologic atlas of Colombia, HidroSIG, has been developed with distributed maps and time series of monthly and long-term average hydro-climatological variables, as part of a more comprehensive geographical information system (GIS) and database. Maps were developed so as to capture the spatial variability of the diverse geophysical fields resulting from major geographic, topographic and climatic controls. HidroSIG contains modules that perform diverse hydrological and geomorphological estimations including: (i) extraction of geomorphological parameters of drainage channel networks and river basins from Digital Elevation Maps (DEM), (ii) estimation of long-term and monthly water balances and other hydro-climatic variables in river basins, (iii) estimation of extreme flows (floods and low flows) of different return periods along the river network of Colombia by combining long-term water balance with scaling methods, (iv) interpolation of geophysical fields, (v) temporal analysis of hydrological time series including standardization, autocorrelation function, Fourier spectrum and cross-correlations analysis with macro-climatic indices, and (vi) simulation of rainfall–runoff processes using a hydrologic distributed model. The most relevant features of HidroSIG are described in terms of the methods used for hydrologic estimations, visualization capabilities, tools for analysis and interpolation of hydro-climatic variables in space and time, geomorphologic analysis and estimation from DEMs, and other features. Water resources planning and management and diverse socio-economic sectors benefit from this freely available database and computational tool.

Journal ArticleDOI
TL;DR: Empirical evidence is provided that genetic programming can greatly benefit from this approach in forecasting and simulating physical phenomena, and a specific application of ensemble modeling to hydrological forecasts is introduced.
Abstract: This paper introduces an application of machine learning on real data. It deals with Ensemble Modeling, a simple averaging method for obtaining more reliable approximations using symbolic regression. Considerations on the contribution of bias and variance to the total error, and on ensemble methods to reduce errors due to variance, are tackled together with a specific application of ensemble modeling to hydrological forecasts. This work provides empirical evidence that genetic programming can greatly benefit from this approach in forecasting and simulating physical phenomena. Further considerations have been taken into account, such as the influence of Genetic Programming parameter settings on the model's performance.
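
A small demonstration of the variance-reduction argument, using bootstrapped regression trees as stand-ins for GP runs with different seeds: averaging the ensemble members lowers test error relative to a single member. Data and models are synthetic assumptions, not the paper's hydrological series.

```python
# Sketch of ensemble modelling by simple averaging: several independently
# trained high-variance models are averaged, reducing the variance error term.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 6, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.3, 300)          # noisy signal

X_test = np.linspace(0, 6, 200)[:, None]
truth = np.sin(X_test[:, 0])

members = []
for seed in range(10):                                 # 10 'GP runs'
    idx = rng.integers(0, 300, 300)                    # bootstrap resample
    members.append(DecisionTreeRegressor(random_state=seed)
                   .fit(X[idx], y[idx]).predict(X_test))

single_rmse = np.sqrt(np.mean((members[0] - truth) ** 2))
ensemble_rmse = np.sqrt(np.mean((np.mean(members, axis=0) - truth) ** 2))
print(f"single model RMSE {single_rmse:.3f} vs ensemble RMSE {ensemble_rmse:.3f}")
```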

Journal ArticleDOI
TL;DR: This study investigates the development of upscaled solute transport models using genetic programming (GP), a domain-independent modeling tool that searches the space of mathematical equations for one or more equations that describe a set of training data.
Abstract: Due to the considerable computational demands of modeling solute transport in heterogeneous porous media, there is a need for upscaled models that do not require explicit resolution of the small-scale heterogeneity. This study investigates the development of upscaled solute transport models using genetic programming (GP), a domain-independent modeling tool that searches the space of mathematical equations for one or more equations that describe a set of training data. An upscaling methodology is developed that facilitates both the GP search and the implementation of the resulting models. A case study is performed that demonstrates this methodology by developing vertically averaged equations of solute transport in perfectly stratified aquifers. The solute flux models developed for the case study were analyzed for parsimony and physical meaning, resulting in an upscaled model of the enhanced spreading of the solute plume, due to aquifer heterogeneity, as a process that changes from predominantly advective to Fickian. This case study not only demonstrates the use and efficacy of GP as a tool for developing upscaled solute transport models, but it also provides insight into how to approach more realistic multi-dimensional problems with this methodology.

Journal ArticleDOI
TL;DR: POTOMAC (Pyrite Oxidation products Transport: Object-oriented Model for Abandoned Colliery sites) as discussed by the authors is a physically based contaminant transport model for mine spoil heaps.
Abstract: A physically based contaminant transport model, POTOMAC (Pyrite Oxidation products Transport: Object-oriented Model for Abandoned Colliery sites), has been developed to simulate the pyrite oxidation process in mine spoil heaps and the subsequent transport of the reaction products. This is believed to represent the first particle tracking model created using object-oriented technology and has proved capable of simulating the large time scales (on the order of centuries) required for this application. The model conceptualises a spoil heap as a series of ‘columns’, each representing a portion of the unsaturated zone, where active weathering and precipitation of secondary minerals takes place. The columns are then connected to a saturated zone, beneath the water table, where the contaminants are transported to the heap discharge. A form of particle tracking, the ‘random walk method’, is used to transport both the oxidant, oxygen, and the products, ferrous iron and sulfate. The subsequent oxidation of ferrous iron and precipitation of ferric oxyhydroxide is incorporated to provide an iron ‘sink’, where iron is effectively removed from the transport process. The application of POTOMAC to a case study, the Morrison Busty spoil heap in County Durham, UK, has produced encouraging results.
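
The "random walk method" reduces, in its simplest one-dimensional constant-coefficient form, to advecting each particle with the mean flow plus a Gaussian diffusive step; the sketch below shows that form only, omitting POTOMAC's column structure, chemistry and iron sink.

```python
# Sketch of random walk particle tracking for one transported species
# (1D, constant coefficients; far simpler than POTOMAC itself).
import numpy as np

rng = np.random.default_rng(0)
n, steps, dt = 5000, 200, 0.1     # particles, time-steps, step size (yr)
u, D = 1.0, 0.05                  # mean velocity (m/yr), dispersion (m2/yr)

x = np.zeros(n)                   # all particles released at the source
for _ in range(steps):
    x += u * dt + rng.normal(0.0, np.sqrt(2.0 * D * dt), n)  # advect + diffuse

print("plume centre:", round(x.mean(), 2), "m (expected", u * steps * dt, "m)")
print("plume spread (std):", round(x.std(), 2),
      "m (expected", round(np.sqrt(2 * D * steps * dt), 2), "m)")
```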

Journal ArticleDOI
TL;DR: In this article, data from 105 soil and groundwater remediation projects at BP gasoline service stations located in the state of Illinois were mined for lessons to reduce cost and improve management of remediation sites.
Abstract: In this paper, data from 105 soil and groundwater remediation projects at BP gasoline service stations located in the state of Illinois were mined for lessons to reduce cost and improve management of remediation sites. Data mining software called D2K was used to train decision tree, stepwise linear regression and instance-based weighting models that relate hydrogeologic, sociopolitical, temporal and remedial factors in the site closure reports to remediation cost. The most important factors influencing cost were found to be the amount of soil excavated and the number of groundwater monitoring wells installed, suggesting that better management of excavation and well placement could result in significant cost savings. The best model for predicting cost classes (low, medium and high cost) was the decision tree, which had a prediction accuracy of approximately 73%. The misclassification of approximately 27% of the sites by even the best model suggests that remediation costs at service stations are influenced by other site-specific factors that may be difficult to accurately predict in advance.
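
A hedged sketch of the decision tree workflow on synthetic site records (the BP data are not public here): cost is binned into low/medium/high classes and a shallow tree is cross-validated, mirroring the paper's use of excavation volume and well counts as dominant predictors.

```python
# Sketch: decision tree classification of remediation cost classes from site
# factors (synthetic stand-in data, not the Illinois service-station records).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 105
soil_excavated = rng.gamma(2.0, 500.0, n)       # m3
monitoring_wells = rng.integers(1, 15, n)
depth_to_gw = rng.uniform(1, 10, n)             # m

cost = 50 * soil_excavated + 8000 * monitoring_wells + rng.normal(0, 2e4, n)
classes = np.digitize(cost, np.quantile(cost, [1/3, 2/3]))  # low/med/high

X = np.column_stack([soil_excavated, monitoring_wells, depth_to_gw])
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
print("CV accuracy:", cross_val_score(tree, X, classes, cv=5).mean().round(2))
```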

Journal ArticleDOI
TL;DR: In this paper, the authors developed a tool that calculates rates of change in bathymetric data by incorporating a linear regression formula; the results are plotted into GIS as a DTM in which each data cell represents a rate of change (time trend) at that location, allowing a clear temporal analysis to be presented for the whole area.
Abstract: A temporal function (i.e. the ability to analyse data over time) has been incorporated into the spatial environment of GIS and applied in the assessment of the evolution of Nash Bank, South Wales. A stringent programme of monitoring, including annual bathymetric surveys of the bank, is carried out as part of the requirements of aggregate dredging and provides suitable data to analyse changes in bank levels over time. Traditionally, GIS has been used to assess the evolution of such coastal landforms by creating a digital terrain model (DTM) for each of the bathymetric datasets and then performing a simple calculation whereby one DTM is subtracted from an earlier one. However, a simple difference in levels between any two snapshots in time can be misleading when trying to evaluate long-term rates of change. A new GIS tool has been developed that calculates such rates (i.e. the time trend) by incorporating a linear regression formula. The results are then plotted by the tool into GIS as a DTM, in which each individual data cell represents a rate of change (time trend) at that particular location, allowing a clear temporal analysis to be presented for the whole area.
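
The per-cell time-trend calculation is a linear regression slope through each cell of the DTM stack; in NumPy it is a few lines, shown below on a synthetic stack (a GIS implementation would add raster input/output).

```python
# Sketch of the time-trend tool: fit a regression slope through each grid cell
# of a stack of annual bathymetric DTMs; the slope grid is the rate-of-change
# surface (synthetic stack; rasters would be read from GIS in practice).
import numpy as np

years = np.array([2000, 2001, 2002, 2003, 2004], dtype=float)
rng = np.random.default_rng(0)
# stack shape: (surveys, rows, cols); toy bank lowering at 0.1 m/yr plus noise
dtms = (-10.0 - 0.1 * (years - years[0])[:, None, None]
        + rng.normal(0, 0.05, (5, 50, 50)))

t = years - years.mean()                       # centred time axis
slope = ((t[:, None, None] * (dtms - dtms.mean(axis=0))).sum(axis=0)
         / (t ** 2).sum())                     # least-squares slope per cell
print("mean rate of change:", round(slope.mean(), 3), "m/yr")   # about -0.1
```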

Journal ArticleDOI
TL;DR: The approach used in the development of the NOAH modelling systems (Newcastle Object-oriented Advanced Hydroinformatics), developed entirely within the object-oriented paradigm, has made the NOAH modelling systems computationally highly efficient and yet easy to maintain and extend.
Abstract: Over the past 40 years many hydraulic modelling systems for free-surface flows have been developed and successfully used in research and engineering practice. These systems were, in general, developed using sequential programming techniques while object-oriented programming approaches have only been used in the development of their visual parts. This paper outlines the approach used in the development of the NOAH modelling systems (Newcastle Object-oriented Advanced Hydroinformatics), developed entirely within the object-oriented paradigm. This novel approach has made NOAH modelling systems computationally highly efficient and yet easy to maintain and extend. NOAH 1D and NOAH 2D are designed to model free-surface flows in one and two dimensions, respectively. NOAH 1D is based on the full de Saint-Venant equations while NOAH 2D is based on the Shallow Water equations. Beside the basic ideas behind the development of NOAH modelling systems this paper also presents their main features and discusses general benefits of the application of the object-oriented programming approach in the development of numerical codes.
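
An illustrative sketch of the object-oriented structure the paper advocates: a shared time-marching skeleton with the governing equations supplied by subclasses. Names and the placeholder updates are invented; NOAH's actual class design is not reproduced here.

```python
# Sketch: OO numerical-code structure with a common solver skeleton and the
# governing equations in subclasses (illustrative, not NOAH's real design).
from abc import ABC, abstractmethod

class FreeSurfaceSolver(ABC):
    def run(self, state, t_end, dt):
        t = 0.0
        while t < t_end:                    # shared time-marching loop
            state = self.step(state, dt)
            t += dt
        return state

    @abstractmethod
    def step(self, state, dt):              # equations live in subclasses
        ...

class SaintVenant1D(FreeSurfaceSolver):
    def step(self, state, dt):
        h, q = state
        return (h - 0.01 * dt * q, q)       # placeholder update, not the PDE

class ShallowWater2D(FreeSurfaceSolver):
    def step(self, state, dt):
        return state                        # 2D update would go here

print(SaintVenant1D().run((2.0, 1.0), t_end=1.0, dt=0.1))
```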

Journal ArticleDOI
TL;DR: In this article, a geographic information system (GIS) was developed for retrieval and display of hydrodynamic and water quality data for marine applications, which not only bridges together a GIS and a database of various physical, chemical and biological geographically based data for efficient retrieval and management of information, but also incorporates advanced display tools designed specifically for marine data.
Abstract: A geographic information system (GIS) was developed for retrieval and display of hydrodynamic and water quality data. To establish such a system, two of the most important challenges are: (1) to establish a rigorous model which captures the three-dimensional and continuously changing characteristics of marine data and (2) to develop interpolation techniques to accommodate the temporally and spatially scattered distribution of the collected data. The developed system not only bridges together a GIS and a database of various physical, chemical and biological geographically based data for efficient retrieval and management of information, but also incorporates advanced display tools designed specifically for marine data. The initial intention of extending GIS for marine application is to mitigate the deteriorating water quality situation in the Pearl River Estuary (PRE).
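
One simple way to interpolate temporally and spatially scattered observations is inverse-distance weighting with time folded in as a scaled fourth coordinate; the sketch below is such an assumption, not necessarily the technique developed in the paper.

```python
# Sketch: inverse-distance weighting in (x, y, depth, time), one simple way to
# interpolate scattered marine observations (an assumed technique).
import numpy as np

def idw_spacetime(obs_xyzt, obs_vals, query, time_scale=1000.0, power=2.0):
    """Interpolate at query=(x, y, z, t); time is scaled into metres."""
    pts = np.asarray(obs_xyzt, dtype=float)
    pts[:, 3] *= time_scale                 # 1 day ~ time_scale metres
    q = np.asarray(query, dtype=float)
    q[3] *= time_scale
    d = np.linalg.norm(pts - q, axis=1)
    if d.min() < 1e-9:
        return float(obs_vals[int(d.argmin())])   # exact hit on an observation
    w = 1.0 / d ** power
    return float(np.dot(w, obs_vals) / w.sum())

# observations: (x, y, depth, day) -> salinity
obs = [(0, 0, 1, 0), (500, 0, 1, 0), (0, 500, 5, 1), (500, 500, 5, 2)]
vals = np.array([30.0, 28.5, 31.2, 29.0])
print(idw_spacetime(obs, vals, query=(250, 250, 3, 1)))
```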

Journal ArticleDOI
TL;DR: In this paper, an alternative probabilistic approach to cope with percentile-based water quality standards, based on Monte Carlo simulation, is presented and compared to the deterministic approach, providing a deeper insight into the issue of uncertainty and emphasizing the importance of handling water quality standards and TMDLs in terms of magnitude and frequency rather than a single-valued approach.
Abstract: The most commonly used deterministic approach to the development of total maximum daily loads (TMDLs) fails to explicitly address issues related to a margin of safety and inherent variability of streamflows in the process of TMDL development. In this paper, the deterministic approach to pH TMDL development for Beech Creek watershed, Muhlenberg County, Kentucky, proposed by Ormsbee, Elshorbagy and Zechman is discussed. The shortcomings and the limitations of the assumptions associated with the deterministic approach are highlighted. An alternative probabilistic approach, to cope with the percentile-based water quality standards based on Monte Carlo simulation, is presented and compared to the deterministic approach. The proposed probabilistic approach provides a deeper insight into the issue of uncertainty and emphasizes the importance of handling the water quality standards and TMDLs in terms of magnitude and frequency rather than a single-valued approach. Expected exceedances and the confidence of compliance with percentile-based standards are estimated. Accordingly, an objective method of estimating the margin of safety for pH TMDLs is proposed.
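
The probabilistic approach can be sketched with a plain Monte Carlo loop: simulate many realisations, compute the response, and report the frequency of violating a percentile-based standard. The distributions and pH proxy below are toy assumptions, not the Beech Creek model.

```python
# Sketch: Monte Carlo estimate of the probability of violating a
# percentile-based water quality standard (toy distributions and response).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
flow = rng.lognormal(mean=1.0, sigma=0.8, size=n)      # streamflow realisations
load = rng.lognormal(mean=0.5, sigma=0.5, size=n)      # acidity load
ph_proxy = 7.0 - load / flow                           # toy response

standard, allowed_fraction = 6.0, 0.10   # e.g. pH >= 6 at least 90% of the time
exceedance = np.mean(ph_proxy < standard)
print(f"P(violation) = {exceedance:.3f};",
      "complies" if exceedance <= allowed_fraction else "does not comply")
```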

Journal ArticleDOI
TL;DR: Two modifications to achieve optimised results for a Tank Model in less computational time are presented and it is revealed that the supercomputer enhances the swiftness of the GA and achieves its objective within a couple of hours.
Abstract: Parameter optimisation is a significant but time-consuming process that is inherent in conceptual hydrological models representing rainfall-runoff processes. This study presents two modifications to achieve optimised results for a Tank Model in less computational time. Firstly, a modified genetic algorithm (GA) is developed to enhance the fitness of the population of possible solutions in each generation. Then the parallel processing capabilities of an IBM 9076 SP2 computer are used to expedite implementation of the GA. A comparison of processing time between a serial IBM RS/6000 390 computer and an IBM 9076 SP2 supercomputer reveals that the latter can be up to 8 times faster. The effectiveness of the modified GA is tested with two Tank Models, for a hypothetical catchment and a real catchment. The former showed that the parallel GA reaches a lower overall error in reduced time: the overall RMSE, expressed as a percentage of actual mean flow rate, improves from 31.8% on the serial computer to 29.5% on the SP2 supercomputer. The case of the real catchment, the Shek-Pi-Tau Catchment in Hong Kong, reveals that the supercomputer enhances the swiftness of the GA and achieves its objective within a couple of hours.
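
Fitness evaluation is the natural place to parallelise a GA, since each candidate's model run is independent. The sketch below uses Python multiprocessing in place of the SP2's message passing, with a toy error surface standing in for the Tank Model.

```python
# Sketch: parallel fitness evaluation for a GA population; each candidate's
# (stand-in) Tank Model run is independent, so they map cleanly onto workers.
import numpy as np
from multiprocessing import Pool

def tank_model_error(params):
    """Stand-in for one Tank Model run returning an RMSE-like score."""
    a, b = params
    return float((a - 0.3) ** 2 + (b - 0.7) ** 2)   # toy error surface

def evaluate_population(pop, workers=4):
    with Pool(workers) as pool:
        return pool.map(tank_model_error, pop)      # parallel fitness

if __name__ == "__main__":                          # required on spawn platforms
    rng = np.random.default_rng(0)
    pop = [tuple(rng.uniform(0, 1, 2)) for _ in range(64)]
    errors = evaluate_population(pop)
    print("best candidate:", pop[int(np.argmin(errors))])
```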

Journal ArticleDOI
TL;DR: A fourth-order two-point advective transport scheme was developed (or, as the Discusser puts it, invented) to be coherent with a two-point unsteady flow simulation scheme, and it is disappointing that the Author neither mentioned nor referenced that development in his bibliography.
Abstract: The Discusser would like to congratulate the Author for the rigorous development and complete treatment of the numerical characteristics of finite difference schemes derived from the concept of polynomial interpolation of advected (concentration) values at the foot of the backwards characteristic. It is important that such a complete treatment is published nowadays, because the current practice of using market-available codes without knowledge of the underlying numerical problems not only leads to the blind application of "encapsulated methods" but also does not help to awaken a vocation and liking for numerical analysis in younger generations. This said, the Discusser is disappointed to see that the paper seems to imply that the only way to obtain high-order schemes is to develop them from piece-wise algebraic interpolation polynomials such as that given by Eq. 2. Such a procedure leads automatically, when higher-order approximations are concerned, to the involvement of more than two computational points in space. Thus the upwind difference formulas needed to compute the advected values at point x_i^{n+1} involve the computational points x_i, x_{i-1}, x_{i-2}, and even x_{i-3} at time level t^n. While such an approach is perfectly correct for functions that do not present large variations in space, it is awkward to apply in simulation systems for real rivers and streams. Indeed, not only do boundary conditions require special procedures (as the Author shows in Appendix B), but it is very difficult and even tricky to include internal special conditions such as weirs, abrupt variations of cross-sections, bifurcations, etc. Such difficulties are well known to all developers of schemes that involve more than two space computational points (or more than one space interval) when applied to simulate unsteady flows. For that reason a fourth-order two-point advective transport scheme was developed, and it is disappointing that the Author neither mentioned nor referenced that development in his bibliography. This fourth-order two-point scheme is based on Hermitian interpolation of the advected values at the foot of the backwards characteristic, and its originality comes from this approach: instead of following the classic algebraic polynomial procedure, a new finite difference scheme was developed, or should we say invented, to be coherent with a two-point unsteady flow simulation scheme. This scheme virtually eliminates numerical diffusion and thus permits modelling physical convection and physical diffusion without having to compensate for numerical inadequacy. The development of the fourth-order two-point scheme was carried out by F.M. Holly, Jr and A. Preissmann (1977). The description and details can be found in that paper and other referenced papers (Cunge et al. 1980; Holly & Usseglio-Polatera 1984; Holly & Toda 1984).
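
To make the two-point Hermite idea concrete: for constant-velocity advection, both the concentration and its spatial derivative are carried along characteristics and interpolated with a cubic Hermite polynomial between just two nodes. The sketch below implements that simplified case on a periodic grid; the Holly-Preissmann (1977) scheme additionally handles variable velocity and source terms.

```python
# Sketch of a two-point Hermite (Holly-Preissmann-type) advection step with
# constant velocity on a periodic grid: concentration c and its derivative cx
# are both transported, interpolating at the foot of the characteristic.
import numpy as np

nx, L, u, cfl = 101, 100.0, 1.0, 0.4
dx = L / (nx - 1); dt = cfl * dx / u
x = np.linspace(0, L, nx)
c = np.exp(-0.5 * ((x - 20.0) / 3.0) ** 2)       # Gaussian profile
cx = -(x - 20.0) / 9.0 * c                       # its exact derivative

a = cfl                            # foot of characteristic: a*dx upstream of i
for _ in range(50):
    cm, cxm = np.roll(c, 1), np.roll(cx, 1)      # node i-1 values (periodic)
    h1 = a * a * (3 - 2 * a)                     # cubic Hermite weights at a
    h2 = 1 - h1
    h3 = a * a * (1 - a) * dx
    h4 = -a * (1 - a) ** 2 * dx
    c_new = h1 * cm + h2 * c + h3 * cxm + h4 * cx
    # derivative transported with the Hermite polynomial's derivative:
    cx_new = (6 * a * (a - 1) / dx * (cm - c)
              + a * (3 * a - 2) * cxm + (a - 1) * (3 * a - 1) * cx)
    c, cx = c_new, cx_new

print("peak after advection:", round(c.max(), 4), "(initial peak 1.0)")
```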