
Showing papers in "Journal of Hydroinformatics in 2001"


Journal ArticleDOI
TL;DR: In this paper, a simple and efficient prediction technique based on Singular Spectrum Analysis (SSA) coupled with Support Vector Machine (SVM) is proposed and applied to the Tryggevaelde catchment runoff data (Denmark) and the Singapore rainfall data as case studies.
Abstract: Real time operation studies, such as reservoir operation and flood forecasting, necessitate good forecasts of the associated hydrologic variable(s). A significant improvement in such forecasting can be obtained by suitable pre-processing. In this study, a simple and efficient prediction technique based on Singular Spectrum Analysis (SSA) coupled with Support Vector Machine (SVM) is proposed. While SSA decomposes the original time series into a set of high and low frequency components, SVM helps in efficiently dealing with the computational and generalization performance in a high-dimensional input space. The proposed technique is applied to predict the Tryggevaelde catchment runoff data (Denmark) and the Singapore rainfall data as case studies. The results are compared with those of the non-linear prediction (NLP) method. The comparisons show that the proposed technique yields significantly higher prediction accuracy than NLP.

183 citations
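
As an illustration of the SSA-plus-SVM idea, the sketch below (a minimal Python rendering, not the paper's implementation) decomposes a series via a Hankel trajectory matrix and SVD, then fits one support vector regressor per component on lagged values; the window length, component count, lag depth and scikit-learn SVR defaults are all illustrative assumptions.

    import numpy as np
    from sklearn.svm import SVR

    def ssa_decompose(x, window, n_components):
        """Decompose a 1-D series into additive components via singular
        spectrum analysis (embedding + SVD + diagonal averaging)."""
        n = len(x)
        k = n - window + 1
        # Trajectory (Hankel) matrix: each column is a lagged window of x.
        traj = np.column_stack([x[i:i + window] for i in range(k)])
        u, s, vt = np.linalg.svd(traj, full_matrices=False)
        components = []
        for j in range(n_components):
            elem = s[j] * np.outer(u[:, j], vt[j])   # rank-1 elementary matrix
            # Diagonal averaging (Hankelisation) back to a series of length n.
            comp = np.array([np.mean(elem[::-1].diagonal(i - window + 1))
                             for i in range(n)])
            components.append(comp)
        return components

    def fit_predict(x, window=30, n_components=4, lags=5):
        """Fit one SVR per SSA component on lagged values; sum the
        in-sample predictions (illustration only, no hold-out split)."""
        total = np.zeros(len(x) - lags)
        for comp in ssa_decompose(x, window, n_components):
            X = np.column_stack([comp[i:len(comp) - lags + i] for i in range(lags)])
            y = comp[lags:]
            total += SVR(kernel="rbf").fit(X, y).predict(X)
        return total

    x = np.sin(np.linspace(0, 20, 300)) + 0.1 * np.random.default_rng(0).standard_normal(300)
    print(fit_predict(x).shape)   # (295,)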


Journal ArticleDOI
TL;DR: In this paper, a feed-forward artificial neural network (ANN) was used to predict the depth of scour at culvert outlets with a greater accuracy than existing empirical formulae and over a wider range of conditions.
Abstract: Scour at culvert outlets is a phenomenon encountered world-wide. Research into the problem has mainly been of an experimental nature, with equations being derived for particular circumstances. These traditional scour prediction equations, although offering the engineer some guidance on the likely magnitude of maximum scour depth, are applicable only to a limited range of situations. A model for the prediction of scouring that is generally applicable to all circumstances is not currently available. However, there is a substantial amount of data available from research over many years in this area. This paper compares current prediction equations with results obtained from two Artificial Neural Network models (ANN). The development of a basic feed forward artificial neural network trained by back-propagation to model scour downstream of culvert outlets is described. A supervised training algorithm is used with data collected from published studies and the authors' own experimental work. The results show that the ANN can successfully predict the depth of scour with a greater accuracy than existing empirical formulae and over a wider range of conditions.

85 citations
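
A minimal sketch of the kind of feed-forward network the paper describes, using scikit-learn's MLPRegressor in place of the authors' back-propagation code; the three input features and the synthetic training data are hypothetical placeholders, not the published dataset.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    # Hypothetical inputs, e.g. discharge intensity, tailwater depth,
    # median sediment size; target is relative scour depth.
    X = rng.random((200, 3))
    y = 0.5 * X[:, 0] + 0.3 * X[:, 1] - 0.2 * X[:, 2] + 0.05 * rng.standard_normal(200)

    scaler = StandardScaler().fit(X)
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
    net.fit(scaler.transform(X), y)
    print(net.score(scaler.transform(X), y))   # in-sample fit, illustration only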


Journal ArticleDOI
TL;DR: In this paper, the authors presented a methodology for quantifying the tradeoffs between sampling costs and local concentration estimation errors in an existing groundwater monitoring network by combining nonlinear spatial interpolation with the nondominated sorted genetic algorithm (NSGA) to identify the tradeoff curve (or Pareto frontier).
Abstract: This study presents a methodology for quantifying the tradeoffs between sampling costs and local concentration estimation errors in an existing groundwater monitoring network. The method utilizes historical data at a single snapshot in time to identify potential spatial redundancies within a monitoring network. Spatially redundant points are defined to be monitoring locations that do not appreciably increase local estimation errors if they are not sampled. The study combines nonlinear spatial interpolation with the nondominated sorted genetic algorithm (NSGA) to identify the tradeoff curve (or Pareto frontier) between sampling costs and local concentration estimation errors. Guidelines are given for using theoretical relationships from the field of genetic and evolutionary computation for population sizing and niching to ensure that the NSGA is competently designed to navigate the problem's decision space. Additionally, both a selection pressure analysis and a niching-based elitist enhancement of the NSGA are presented, which were integral to the algorithm's efficiency in quantifying the Pareto frontier for costs and estimation errors. The elitist NSGA identified 34 of 36 members of the Pareto optimal set attained from enumerating the monitoring application's decision space; this represents a substantial improvement over the standard NSGA, which found at most 21 of 36 members.

84 citations
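
The core notion of the tradeoff curve can be illustrated with plain nondominated filtering over candidate monitoring designs (both objectives minimised); this is only the Pareto-dominance test, not the NSGA itself with its niching and elitism.

    def pareto_front(points):
        """Return the nondominated subset of (cost, error) tuples,
        where lower is better in both objectives."""
        front = []
        for p in points:
            dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                            for q in points)
            if not dominated:
                front.append(p)
        return front

    # Hypothetical (sampling cost, estimation error) pairs.
    designs = [(10, 0.9), (12, 0.4), (15, 0.35), (15, 0.2), (20, 0.21)]
    print(pareto_front(designs))   # [(10, 0.9), (12, 0.4), (15, 0.2)]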


Journal ArticleDOI
TL;DR: The coefficient of efficiency, which is closely allied in form to the coefficient of determination, has been widely adopted in many data mining and modelling exercises, and values of this coefficient close to unity are taken as evidence of good matching between observed and computed flows.
Abstract: Despite almost five decades of activity on the computer modelling of input–output relationships, little general agreement has emerged on appropriate indices for the goodness-of-fit of a model to a set of observations of the pertinent variables. The coefficient of efficiency, which is closely allied in form to the coefficient of determination, has been widely adopted in many data mining and modelling exercises. Values of this coefficient close to unity are taken as evidence of good matching between observed and computed flows. However, studies using synthetic data have demonstrated that negative values of the coefficient of efficiency can occur both in the presence of bias in computed outputs, and when the computed volume of flow greatly exceeds the observed volume of flow. In contrast, the coefficient of efficiency lacks discrimination for cases close to perfect reproduction. In the latter case, a coefficient based upon the first differences of the data proves to be more helpful.

80 citations
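
For reference, the coefficient of efficiency discussed here is usually written in the Nash-Sutcliffe form below (the standard textbook definition, not a formula quoted from the paper), and a first-difference coefficient of the kind the paper favours near perfect reproduction can be written analogously:

    E = 1 - \frac{\sum_{i=1}^{n} (O_i - P_i)^2}{\sum_{i=1}^{n} (O_i - \bar{O})^2},
    \qquad
    E_{\Delta} = 1 - \frac{\sum_{i=2}^{n} \left[ (O_i - O_{i-1}) - (P_i - P_{i-1}) \right]^2}{\sum_{i=2}^{n} \left[ (O_i - O_{i-1}) - \overline{\Delta O} \right]^2}

where O_i are observed and P_i computed flows, \bar{O} is the observed mean and \overline{\Delta O} the mean observed first difference.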


Journal ArticleDOI
TL;DR: State-of-the-art variants of these competing methods, non-linear transfer functions and modified recurrent cascade-correlation artificial neural networks, are presented, and their forecasting performance is objectively compared using a case study based on the UK River Trent.
Abstract: Data-based methods of flow forecasting are becoming increasingly popular due to their rapid development times, minimum information requirements, and ease of real-time implementation, with transfer function and artificial neural network methods the most commonly applied methods in practice. There is much antagonism between advocates of these two approaches that is fuelled by comparison studies where a state-of-the-art example of one method is unfairly compared with an out-of-date variant of the other technique. This paper presents state-of-the-art variants of these competing methods, non-linear transfer functions and modified recurrent cascade-correlation artificial neural networks, and objectively compares their forecasting performance using a case study based on the UK River Trent. Two methods of real-time error-based updating applicable to both the transfer function and artificial neural network methods are also presented. Comparison results reveal that both methods perform equally well in this case, and that the use of an updating technique can improve forecasting performance considerably, particularly if the forecast model is poor.

52 citations
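
One generic form of real-time error-based updating (the paper's two schemes are not specified in the abstract) corrects the next forecast with a persistence model of the latest error; the coefficient is illustrative.

    def updated_forecast(raw_forecast, last_error, phi=0.8):
        """Correct a raw model forecast with an AR(1) model of the
        most recent forecast error; phi is illustrative."""
        return raw_forecast + phi * last_error

    # e.g. the model under-predicted the last step by 3.0 m3/s:
    print(updated_forecast(raw_forecast=44.0, last_error=3.0))   # 46.4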


Journal ArticleDOI
TL;DR: This paper examines the use of genetic algorithm (GA) optimization to identify water delivery schedules for an open-channel irrigation system; suitable representations of the significant objectives and important constraints within the GA framework are developed.
Abstract: This paper examines the use of genetic algorithm (GA) optimization to identify water delivery schedules for an open-channel irrigation system. Significant objectives and important constraints are identified for this system, and suitable representations of these within the GA framework are developed. Objectives include maximizing the number of orders that are scheduled to be delivered at the requested time and minimizing variations in the channel flow rate. If, however, an order is to be shifted, the irrigator preference for this to be by ±24 h rather than ±12 h is accounted for. Constraints include avoiding exceedance of channel capacity. The GA approach is demonstrated for an idealized system of five irrigators on a channel spur. In this case study, the GA technique efficiently identified the optimal schedule that was independently verified using full enumeration of the entire search space of possible order schedules. Results have shown great promise in the ability of GA techniques to identify good irrigation order schedules.

50 citations
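
A sketch of a fitness function in the spirit of the objectives and constraints listed above; the rewards, the preference weighting of ±24 h over ±12 h shifts, and the capacity penalty are placeholder numbers, not the authors' formulation.

    def fitness(shifts_h, flows, capacity=100.0):
        """Score a candidate schedule: shifts_h holds each order's
        shift from its requested time in hours (0, +/-12, +/-24)."""
        score = 0.0
        for s in shifts_h:
            if s == 0:
                score += 10.0   # delivered at the requested time
            elif abs(s) == 24:
                score += 5.0    # preferred shift
            elif abs(s) == 12:
                score += 2.0    # less preferred shift
        # Channel-capacity constraint handled as a penalty on peak flow.
        if max(flows) > capacity:
            score -= 1000.0
        return score

    print(fitness([0, 24, -12, 0, 0], flows=[80, 95, 90]))   # 37.0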


Journal ArticleDOI
TL;DR: In this article, an adaptive neuro-fuzzy system with autoregressive exogenous input (ARX) structure is proposed and an application is presented for the modelling of rainfall-runoff processes in the Sieve basin in Italy.
Abstract: Two important applications of rainfall-runoff models are forecasting and simulation. At present, rainfall-runoff models based on artificial intelligence methods are built basically for short-term forecasting purposes and these models are not very effective for simulation purposes. This study explores the applicability and effectiveness of adaptive neuro-fuzzy-system-based rainfall-runoff models for both forecasting and simulation. For this purpose, an adaptive neuro-fuzzy system with autoregressive exogenous input (ARX) structure is proposed and an application is presented for the modelling of rainfall-runoff processes in the Sieve basin in Italy.

49 citations
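
The ARX structure referred to here amounts to regressing current runoff on lagged runoff and lagged rainfall; below is a minimal sketch of building those regressors (orders na and nb are illustrative) before handing them to a neuro-fuzzy model.

    import numpy as np

    def arx_dataset(y, u, na=2, nb=2):
        """Rows: [y(t-1)..y(t-na), u(t-1)..u(t-nb)]; target: y(t)."""
        start = max(na, nb)
        X, target = [], []
        for t in range(start, len(y)):
            X.append(np.r_[y[t - na:t][::-1], u[t - nb:t][::-1]])
            target.append(y[t])
        return np.array(X), np.array(target)

    # Toy runoff y and rainfall u series; any regressor (here, an
    # ANFIS-like model in the paper) would be trained on (X, t).
    y = np.sin(np.linspace(0, 6, 50))
    u = np.cos(np.linspace(0, 6, 50))
    X, t = arx_dataset(y, u)
    print(X.shape, t.shape)   # (48, 4) (48,)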


Journal ArticleDOI
TL;DR: In this article, a three-dimensional Computational Fluid Dynamics (CFD) procedure is used for the analysis of a UV disinfection system and four different configurations of the apparatus are evaluated in terms of maximum dosage, flow patterns, particle tracks and transient dosage.
Abstract: UV disinfection is now widely used for the treatment of water for consumption and wastewater in many countries. It offers advantages over other techniques in specific circumstances. Analysis of these systems has been carried out using a three-dimensional Computational Fluid Dynamics (CFD) procedure. This allows for efficient testing of prototypes. Sensitivity tests are shown for grid size, discretisation and turbulence model. Four different configurations of the apparatus are evaluated in terms of maximum dosage, flow patterns, particle tracks and transient dosage. This leads to conclusions about the most efficient design and shows that significant improvements can be achieved with minor changes to the design. Further conclusions are drawn about the CFD procedure itself. This work opens up the possibility of an internet-based design tool for small- and medium-sized enterprises.

37 citations
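
The transient-dosage idea, integrating UV intensity along a particle track, can be sketched as follows; the inverse-square lamp field is a toy stand-in for the fluence-rate field a CFD run would supply.

    import numpy as np

    def uv_dose(track_xyz, dt, lamp_xyz, power=1.0):
        """Accumulate dose = sum(I * dt) along a particle track.
        Toy 1/r^2 intensity field from a single point lamp."""
        r2 = np.sum((track_xyz - lamp_xyz) ** 2, axis=1)
        intensity = power / np.maximum(r2, 1e-6)
        return float(np.sum(intensity * dt))

    # A straight track past a lamp offset from the flow axis.
    track = np.column_stack([np.linspace(0, 1, 100),
                             np.zeros(100), np.zeros(100)])
    print(uv_dose(track, dt=0.01, lamp_xyz=np.array([0.5, 0.1, 0.0])))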


Journal ArticleDOI
TL;DR: In this paper, the authors propose a knowledge self-management system for the water sector in the 'third world', in which 'scientific discourse' at the centre and 'narrative discourse' at the outer periphery set the overall specification of the inner periphery.
Abstract: The paper introduces an inversion of the structure of the decision making process in the water sector that has been so far followed in most countries, and in almost all so-called ‘third world’ societies. It is commonly observed that the general population becomes alienated and effectively disempowered through this existing ‘top-down’ knowledge transmission process. The disenfranchisement of large parts of the general population and the grievous harm done to these parts through their resulting disempowerment have led to an outcry against the water professionals, who are seen at the very least as accomplices, and often as initiators, in ‘crimes against humanity’. Empowering the population as a whole as genuine stakeholders in water resources then becomes the basic objective of water professionals in introducing an alternative paradigm, as exemplified in the second part of this paper (this issue, pp. 35–48) by the design of a new system capable of supporting ‘knowledge-intensive’ agricultural practices. What is being proposed here thus corresponds to an inversion of the so-far established order of knowledge/power in that it corresponds to an inversion in power relations that is realised by an inversion in knowledge relations. The system proposed here by way of an example is then primarily a means of realising this inversion. The economic sustainability of such a system within a ‘third world’ context necessitates the consideration also of knowledge/value relations, and these are also briefly introduced. The system itself is essentially a knowledge self-management system comprising three principal components. The use of ‘scientific discourse’ at the centre and of ‘narrative discourse’ at the outer periphery sets the overall specification of the inner periphery.

26 citations


Journal ArticleDOI
TL;DR: The motorway and railway connection between Denmark and Sweden, opened on 1 July 2000, when taken together with the connection across the Great Belt between the largest Danish islands, now provides a direct link between the Scandinavian peninsula and the rest of Europe as mentioned in this paper.
Abstract: The motorway and railway connection between Denmark and Sweden, opened on 1 July 2000, when taken together with the connection across the Great Belt between the largest Danish islands, now provides a direct link between the Scandinavian peninsula and the rest of Europe. At a total cost of some 8 billion US dollars, these projects represented the largest infrastructural investments of their kind in Europe. Although backed by strong political and economic interests, these projects were also opposed by a part of the public and especially by political and environmental interest groups. This opposition was particularly pronounced in the case of the Denmark-Sweden link, partly owing to its location in a densely populated area and partly due to the potential impacts of the proposed link on the very sensitive local and regional marine environment. Thus, alongside the task of designing and constructing the physical link, the consortium that was responsible for its realisation, Oresundsbro Konsortiet, had to find ways to satisfy these many diverse interests. This paper describes how Oresundsbro Konsortiet, being an owner that valued constructive partnership, took up these challenges in their management, and how the environmental concerns were accommodated in the design and construction methods. Furthermore, it describes how the socio-technical approaches already taken up and developed within hydroinformatics in earlier projects were taken much further in the case of the Denmark-Sweden link. Finally, the paper describes the role of hydroinformatics in the various phases of the project and its significance in achieving the successful completion. The role of hydroinformatics as an important technology in facilitating stakeholder dialogue is thereby also clearly illustrated.

20 citations


Journal ArticleDOI
Hoi Yeung
TL;DR: In this paper, a multi-channel reactor model was developed to represent the recirculations in service reservoirs; this simple model can be used to characterise the flow behaviour of service reservoirs from tracer test results.
Abstract: Service reservoirs were built to provide the dual function of balancing supply with demand and provision of adequate head to maintain pressure throughout the distribution network. Changing demographics in the UK and reducing leakage have led to significant increases in water age and hence increased risk of poor water quality. Computational fluid mechanics has been used to study the behaviour of a range of service reservoirs with a rectangular plan form. Detailed analysis of flow distribution and water age suggests that tanks with horizontal inlets are better mixed when compared with vertical top water level inlets. With increasing length to width ratio, the flow characteristics of tanks with vertical inlets increasingly resemble plug flow. A new multi-channel reactor model was developed to model the recirculations in service reservoirs. This simple model can be used to characterise the flow characteristics of service reservoirs from tracer test results.
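
A generic parallel-channel, tanks-in-series construction of the sort that could reproduce tracer-test residence time distributions; this is an assumption-laden sketch, not the authors' published model equations.

    import numpy as np
    from math import factorial

    def tis_rtd(t, tau, n):
        """Tanks-in-series RTD with mean residence time tau and n tanks."""
        return (n / tau) ** n * t ** (n - 1) * np.exp(-n * t / tau) / factorial(n - 1)

    def multi_channel_rtd(t, fractions, taus, tanks):
        """Flow-weighted sum of channel RTDs (fractions sum to 1)."""
        return sum(f * tis_rtd(t, tau, n)
                   for f, tau, n in zip(fractions, taus, tanks))

    # Two hypothetical channels: a fast, well-mixed path and a slow one.
    t = np.linspace(0.01, 10, 500)
    rtd = multi_channel_rtd(t, fractions=[0.7, 0.3], taus=[2.0, 5.0], tanks=[5, 2])
    print(rtd[:3])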

Journal ArticleDOI
TL;DR: A computer infrastructure which integrates a geographic information system, a hydrodynamic and water quality model system, and visualization and network applications was developed and applied to the Tamshui River, Taiwan; the GIS-based system exhibits great potential in data visualization capabilities and the improvement of water management.
Abstract: A computer infrastructure which integrates the geographic information system (GIS), hydrodynamic model, visualization and network applications was developed and applied in Tamshui River, Taiwan. A digital terrain model (DTM) of the study area was first generated. We used it as a basis to construct the computation grids, conduct the flow simulations, and visualize the predicted flow scenarios in the virtual world. The three-dimensional hydrodynamic and water quality model system WQMAP was employed to simulate flows under the tidal forcing, upstream river inflows and seawater–freshwater interactions of Tamshui River. Model predictions were generally in good agreement with other simulations. Computed results were visualized in both the innovative virtual reality (VR) environment and Internet-based collaborative visualization environment (CVE). The VR environment enabled us to observe firsthand the complicated flow phenomena in the virtual world. Internet-based CVE supports distributed visualization and collaboration. The GIS-based system exhibits great potential in data visualization capabilities and the improvement of water management. We anticipate that these computer technologies will be widely applied to hydroinformatics and other related domains in the foreseeable future.

Journal ArticleDOI
TL;DR: In this article, the authors explored the effect of system noise on estimated parameters and compared the estimated parameters with calibrated results using artificial and experimental data, finding that hydraulic conductivity estimates do not reach the same level of accuracy as estimates of the longitudinal dispersion coefficient.
Abstract: Real world groundwater aquifers are heterogeneous and system variables are not uniformly distributed across the aquifer. Therefore, in the modelling of the contaminant transport, we need to consider the uncertainty associated with the system. Unny presented a method to describe the system by stochastic differential equations and then to estimate the parameters by using the maximum likelihood approach. In this paper, this method was explored by using artificial and experimental data. First a set of data was used to explore the effect of system noise on estimated parameters. The experimental data was used to compare the estimated parameters with the calibrated results. Estimates obtained from artificial data show reasonable accuracy when the system noise is present. The accuracy of the estimates has an inverse relationship to the noise. Hydraulic conductivity estimates in a one-parameter situation give more accurate results than in a two-parameter situation. The effect of the noise on estimates of the longitudinal dispersion coefficient is less compared to the effect on hydraulic conductivity estimates. Comparison of the results of the experimental dataset shows that estimates of the longitudinal dispersion coefficient are similar to the aquifer calibrated results. However, hydraulic conductivity does not provide a similar level of accuracy. The main advantage of the estimation method presented here is its direct dependence on field observations in the presence of reasonably large noise levels.
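
In spirit, the estimation step is maximum-likelihood fitting of transport parameters to noisy observations; the sketch below uses a toy exponential-decay model and a Gaussian likelihood, both illustrative stand-ins for the stochastic transport equations.

    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(1)
    x = np.linspace(0, 5, 40)
    true_theta = 0.7
    # Noisy "observations" of a toy transport-like response.
    obs = np.exp(-true_theta * x) + 0.05 * rng.standard_normal(x.size)

    def neg_log_likelihood(theta, sigma=0.05):
        """Gaussian negative log-likelihood (up to a constant)."""
        resid = obs - np.exp(-theta * x)
        return 0.5 * np.sum(resid ** 2) / sigma ** 2

    est = minimize_scalar(neg_log_likelihood, bounds=(0.01, 5.0), method="bounded")
    print(est.x)   # close to 0.7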

Journal ArticleDOI
TL;DR: A synthesis of work in the corpus-based studies of language and in the literature on Language for Special Purposes forms the basis of semi-automatic analysis of texts for extracting terms, elaborating terms and identifying heuristics.
Abstract: A method for systematically analysing text is outlined for use in the acquisition of specialist knowledge. Such knowledge can typically be engineered in the knowledge bases of hydroinformatic systems. A synthesis of work in the corpus-based studies of language and in the literature on Language for Special Purposes is presented. This synthesis forms the basis of semi-automatic analysis of texts for extracting terms, elaborating terms and identifying heuristics.
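
One common corpus-based heuristic for term extraction compares word frequencies in a specialist corpus against a reference corpus (a "weirdness" ratio); the paper's procedure is richer, but the sketch conveys the frequency-contrast idea.

    from collections import Counter

    def candidate_terms(special_tokens, reference_tokens, top=5):
        """Rank words by relative frequency in the specialist corpus
        versus a reference corpus (add-one smoothing on the reference)."""
        sp, ref = Counter(special_tokens), Counter(reference_tokens)
        n_sp, n_ref = len(special_tokens), len(reference_tokens)
        ratio = {w: (sp[w] / n_sp) / ((ref[w] + 1) / n_ref) for w in sp}
        return sorted(ratio, key=ratio.get, reverse=True)[:top]

    special = "aquifer recharge model aquifer head boundary model".split()
    general = "the model of a the and head of the".split()
    print(candidate_terms(special, general))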

Journal ArticleDOI
TL;DR: This technical note describes a pan-European education experiment in which students from five European universities solved a given engineering task in distributed teams over the Internet, based on the principle of ‘information sharing’ and using a Web based project platform.
Abstract: Modern information and communication technology enables new technical solutions to support collaboration in engineering over distance. The application of Web based project platforms and collaboration methods requires new kinds of soft skills, knowledge and experience—a task for education and training in hydroinformatics. This technical note describes a pan-European education experiment, where students from five European universities have solved a given engineering task in distributed teams over the Internet. The collaboration was based on the principle of ‘information sharing’ using a Web based project platform. In this course the students acquired experience in interdisciplinary teamwork, net based project co-ordination and Web based reporting. They strengthened their social competence to collaborate in heterogeneous teams with different habits, nationalities, ages and educational backgrounds. The described experiment might be the basis for introducing Web based collaborative engineering in the regular course programme of water related curricula.

Journal ArticleDOI
TL;DR: In this article, the authors present the development of an object-oriented software system for water-quality management, and discuss the results from its application to the study of the Upper Mersey river system in the United Kingdom.
Abstract: The aim of this paper is to present the recent advances in the development of an object-oriented software system for water-quality management, and discuss the results from its application to the study of the Upper Mersey river system in the United Kingdom. The software has been extended and includes tools for the construction of flow duration and low-flow frequency curves using different methods, the sensitivity analysis and parameter estimation of the water-quality model, and the stochastic simulation of the mass balance at the discharge points of point-source effluents. The application of object-orientation has facilitated the extension of the software, and supported the integration of different models in it. The results of the case study are in general agreement with published values. They also include low flow estimates at the ungauged river sites based on actual data for the artificial sources, and water-quality simulation results, which have not been presented earlier in the literature for the Upper Mersey system.
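
Flow duration curve construction, one of the tools mentioned, can be sketched with the Weibull plotting position (a standard method; the package's own implementation is not reproduced here).

    import numpy as np

    def flow_duration_curve(flows):
        """Return (exceedance probability in %, flows sorted descending)."""
        q = np.sort(np.asarray(flows))[::-1]
        rank = np.arange(1, q.size + 1)
        exceedance = 100.0 * rank / (q.size + 1)   # Weibull plotting position
        return exceedance, q

    p, q = flow_duration_curve([3.2, 1.1, 7.5, 2.4, 0.9, 5.0])
    print(list(zip(p.round(1), q)))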

Journal ArticleDOI
TL;DR: Modelling the complex turbulent fluxes across strong shear layers, such as exist between the channel and floodplain flows in an overbank flood flow, using Adaptive Neuro-Fuzzy Inference Systems (ANFIS) to make fuzzy mappings between the fluxes and different data types.
Abstract: This study is divided into three parts centred around modelling the complex turbulent fluxes across strong shear layers, such as exist between the channel and floodplain flow in an overbank flood flow. The three stages utilize Adaptive Neuro-Fuzzy Inference Systems (ANFIS) to make fuzzy mappings between the fluxes and different data types. The de-fuzzification stage commonly used in Fuzzy Inference Systems is adapted to avoid the generation of crisp outputs, a process which tends to hide the underlying uncertainty implicit in the fuzzy relationship. Each stage of the study utilizes conditioning data that makes the fuzzy mappings more tenuously linked with what would normally be considered physically based relationships. The need to make such mappings in distributed models of complex systems, such as flood models, stems from the sparsity of available distributed information (e.g. roughness) with which to condition the models. If patterns in distributed observables which clearly affect, or are affected by, the river hydraulics can be linked to the local fluxes, then the conditioning of the model would improve. Mappings such as these often suffer from scaling effects, an issue addressed here through training the fuzzy rules on the basis of both laboratory and field collected data.
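
The adaptation described, retaining fuzzy outputs rather than collapsing them to crisp numbers, can be sketched with a one-input rule base whose activations are returned directly; membership functions and rule labels are illustrative placeholders.

    def tri(x, a, b, c):
        """Triangular membership function on [a, c] peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def fuzzy_flux(shear):
        """Return rule activations over output sets instead of a single
        defuzzified value, preserving the underlying uncertainty."""
        rules = {
            "low flux":  tri(shear, 0.0, 0.2, 0.5),
            "mid flux":  tri(shear, 0.3, 0.5, 0.7),
            "high flux": tri(shear, 0.5, 0.8, 1.0),
        }
        return rules

    print(fuzzy_flux(0.45))   # approx {'low flux': 0.17, 'mid flux': 0.75, 'high flux': 0.0}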

Journal ArticleDOI
TL;DR: Open Source is a growing phenomenon in the computer software world, and could have significant implications for Hydroinformatics, as discussed in a paper in this issue.
Abstract: Open Source is a growing phenomenon in the computer software world, and could have significant implications for Hydroinformatics, as discussed in a paper in this issue. The two primary sites containing information about Free and Open Source Software are http://www.fsf.org/ (the Free Software Foundation, or FSF) and http://www.opensource.org/ (the Open Source Initiative, OSI). The FSF were the originators of the GNU project to develop a complete Unix-compatible operating system. The GNU project of the FSF can be found at http://www.gnu.org/. The FSF and the OSI maintain lists of licenses at http://www.fsf.org/philosophy/license-list.html and http://www.opensource.org/licenses/index.html respectively. The FSF page includes comments on the pros and cons of each listed license from a Free Software perspective, whereas the OSI simply list all the licenses they have approved as fulfilling all the criteria of the Open Source Definition (available at http://www.opensource.org/docs/definition.html).

High-profile open source projects are http://www.apache.org/ (home of the Apache web server software), http://www.linux.org/ (merely one of many, many sites dedicated to the open source operating system GNU/Linux), http://www.freebsd.org/, http://www.netbsd.org/ and http://www.openbsd.com/ (three variants of the open source BSD Unix operating system, from which Microsoft took some of the networking code now integrated into Windows). http://www.kde.org/ and http://www.gnome.org/ are two groups dedicated to turning Linux into a usable desktop operating system, with considerable success.

Open Source software for engineering and science is also available. MOUSE (http://www.vug.uni-duisburg.de/MOUSE/) is an open source CFD code. SLFFEA (http://slffea.sourceforge.net/index.html) is a software package and OFELI (http://wwwlma.univ-bpclermont.fr/touzani/ofeli/) a library for finite element analysis. EcoLab (http://parallel.hpc.unsw.edu.au/rks/ecolab/) is an object oriented simulation environment. GRASS (http://www.baylor.edu/grass/) is a Geographical Information System which includes a number of hydrological modelling modules. Check http://www.freshmeat.net/, http://www.openscience.org/ or http://sal.kachinatech.com/ for more. Note that the last lists all types of applications for Linux, including closed source and non-free. Check the licence to make sure software is really open source.

Companies involved with open source software include the leader in GNU/Linux packaging and distribution, Red Hat (http://www.redhat.com/). Zope Corporation (http://www.zope.com/) released the source code to their web application server software (to be found at http://www.zope.org/) at the suggestion of their venture capitalist in order to focus on providing services. IBM, investing large sums of money in open source software, has assisted in porting Linux to its high-end hardware such as the S/390 mainframe (http://www-1.ibm.com/linux/), and is involved with many open source projects (http://oss.software.ibm.com/) including Apache (http://www.apache.org/). The Open Source Development Lab (http://www.osdlab.com/) provides computing facilities and support to open source developers with a focus on improving Linux and Linux-based software for enterprise applications. It was founded with the support of a veritable who's who of the IT industry (details of the lab sponsors at http://osdlab.org/sponsors/).

http://www.freshmeat.net/ is the nearest thing to a comprehensive database of open source software. http://www.slashdot.org/ is a web site dedicated to the dissemination of news related to open source, freedom of information, intellectual property, and many related issues. The front page consists of stories submitted by readers and edited by the site managers; each story then has an attached discussion area. http://www.sourceforge.org/ is a free hosting site for open source projects. A full set of tools for managing software development is provided, including revision control (through CVS), bug and request tracking, mailing lists and discussion forums, and web site hosting.