
Showing papers in "Environmental Modelling and Software in 2008"


Journal ArticleDOI
TL;DR: It is concluded that SOM is a promising technique suitable to investigate, model, and control many types of water resources processes and systems.
Abstract: The use of artificial neural networks (ANNs) in problems related to water resources has received steadily increasing interest over the last decade or so. The related method of the self-organizing map (SOM) is an unsupervised learning method for analyzing, clustering, and modeling various types of large databases. There is, however, still a notable lack of a comprehensive literature review of SOM, covering its training and data handling procedures and its potential applicability. Consequently, the present paper aims firstly to explain the algorithm and secondly to review published applications, with the main emphasis on water resources problems, in order to assess how well SOM can be used to solve a particular problem. It is concluded that SOM is a promising technique, suitable for investigating, modeling, and controlling many types of water resources processes and systems. Unsupervised learning methods have not yet been tested fully and comprehensively within, for example, water resources engineering. However, over the years SOM has displayed a steady increase in the number of water resources applications, owing to the robustness of the method.
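
As a rough illustration of the algorithm reviewed here, the sketch below trains a toy SOM in Python using the classic best-matching-unit search and Gaussian neighbourhood update. All names and parameter values are invented for the example; this is not the authors' code.

    import numpy as np

    def train_som(data, rows=5, cols=5, epochs=20, lr0=0.5, sigma0=2.0):
        """Minimal SOM: find the best-matching unit (BMU) for each sample
        and pull map units near the BMU towards that sample."""
        rng = np.random.default_rng(0)
        weights = rng.random((rows, cols, data.shape[1]))
        grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                    indexing="ij"), axis=-1)
        for epoch in range(epochs):
            lr = lr0 * (1 - epoch / epochs)                 # decaying learning rate
            sigma = sigma0 * (1 - epoch / epochs) + 0.5     # shrinking neighbourhood
            for x in data:
                dist = np.linalg.norm(weights - x, axis=2)
                bmu = np.unravel_index(np.argmin(dist), dist.shape)
                d2 = np.sum((grid - np.array(bmu)) ** 2, axis=2)
                h = np.exp(-d2 / (2 * sigma ** 2))          # Gaussian neighbourhood
                weights += lr * h[..., None] * (x - weights)
        return weights

    # e.g. cluster 200 samples of 3 hypothetical water-quality variables
    som = train_som(np.random.rand(200, 3))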

411 citations


Journal ArticleDOI
TL;DR: The main aspects of what has been learned in the process of supporting sustainable water resources planning and management in the semi-arid southwestern United States by means of integrated modeling are described and can be useful to other scientific efforts in the broader area of linking environmental science with decision making.
Abstract: The call for more effective integration of science and decision making is ubiquitous in environmental management. While scientists often complain that their input is ignored by decision makers, the latter have also expressed dissatisfaction that critical information for their decision making is often not readily available or accessible to them, or not presented in a usable form. It has been suggested that scientists need to produce more "usable" information with enhanced credibility, legitimacy, and saliency to ensure the adoption of research results. In basin-scale management of coupled human-water systems, water resources managers, like other decision makers, are frequently confronted with the need to make major decisions in the face of high system complexity and uncertainty. The integration of useful and relevant scientific information is necessary and critical to enable informed decision-making. This paper describes the main aspects of what has been learned in the process of supporting sustainable water resources planning and management in the semi-arid southwestern United States by means of integrated modeling. Our experience indicates that particular attention must be paid to the proper definition of focus questions, explicit conceptual modeling, a suitable modeling strategy, and a formal scenario analysis approach in order to facilitate the development of "usable" scientific information. We believe that these lessons and insights can be useful to other scientific efforts in the broader area of linking environmental science with decision making.

369 citations


Journal ArticleDOI
TL;DR: Ichthyop is a free Java tool designed to study the effects of physical and biological factors on ichthyoplankton dynamics and generates output files that can be post-processed easily using graphic and statistical software.
Abstract: Ichthyop is a free Java tool designed to study the effects of physical and biological factors on ichthyoplankton dynamics. It incorporates the most important processes involved in fish early life: spawning, movement, growth, mortality and recruitment. The tool uses as input time series of velocity, temperature and salinity fields archived from ROMS or MARS oceanic models. It runs with a user-friendly graphic interface and generates output files that can be post-processed easily using graphic and statistical software.
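
The Lagrangian transport at the core of such tools can be pictured as particles advected through archived velocity fields. The toy Python sketch below uses nearest-grid-point sampling and forward-Euler stepping; it is a deliberate simplification, not Ichthyop's actual numerical scheme, and all values are invented.

    import numpy as np

    def advect(particles, u, v, dx, dy, dt):
        """Move (x, y) particles one step through gridded velocities u, v
        using nearest-grid-point sampling and forward Euler."""
        i = np.clip((particles[:, 1] / dy).astype(int), 0, u.shape[0] - 1)
        j = np.clip((particles[:, 0] / dx).astype(int), 0, u.shape[1] - 1)
        particles[:, 0] += u[i, j] * dt
        particles[:, 1] += v[i, j] * dt
        return particles

    # 100 virtual eggs in a 50 x 50 cell domain with a uniform 0.2 m/s current
    p = np.random.rand(100, 2) * 5000.0      # positions in metres
    u = np.full((50, 50), 0.2)
    v = np.zeros((50, 50))
    for _ in range(48):                      # 48 half-hour steps
        p = advect(p, u, v, dx=100.0, dy=100.0, dt=1800.0)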

308 citations


Journal ArticleDOI
TL;DR: It is suggested that "optioneering" tools will increasingly become part of urban water management planning toolkits as practice moves towards more decentralised, integrated, context-specific solutions to address issues of sustainability.
Abstract: Conventional urban water management practices aim to meet water supply-demands while conveying wastewater and stormwater away from urban settings. Alternative approaches which consider water demands to be manageable and wastewater and stormwater as valuable resources, although being increasingly sought, lack reliable site-specific implementation methodologies. This paper describes the development of a decision support tool (termed the Urban Water Optioneering Tool (UWOT)) to facilitate the selection of combinations of water saving strategies and technologies and to support the delivery of integrated, sustainable water management for new developments. The tool is based on a water balance model which allows the investigation of interactions between the major urban water cycle streams. The model is informed by a knowledge library which is populated with technological options and information on their major characteristics and performance. The technology selection is driven by a genetic algorithm (GA), allowing efficient exploration of the decision space. Quantitative and qualitative sustainability criteria and indicators are used to compare alternative composite water management strategies while preserving the multiobjective nature of the problem. The tool has been successfully tested on a case study site in the UK, and the results are presented and discussed. It is suggested that "optioneering" tools will increasingly become part of urban water management planning toolkits as practice moves towards more decentralised, integrated, context-specific solutions to address issues of sustainability.

301 citations


Journal ArticleDOI
TL;DR: An automated statistical downscaling (ASD) regression-based approach inspired by the SDSM method is presented and assessed for reconstructing observed climate extremes, as well as the mean state, in eastern Canada; results indicate that the agreement of simulations with observations depends on the GCM atmospheric variables used as "predictors" in the regression-based approach.
Abstract: Many impact studies require climate change information at a finer resolution than that provided by Global Climate Models (GCMs). In the last 10 years, downscaling techniques, both dynamical (i.e. Regional Climate Models) and statistical methods, have been developed to obtain fine-resolution climate change scenarios. In this study, an automated statistical downscaling (ASD) regression-based approach, inspired by the SDSM method (statistical downscaling model) developed by Wilby, R.L., Dawson, C.W., Barrow, E.M. [2002. SDSM - a decision support tool for the assessment of regional climate change impacts, Environmental Modelling and Software 17, 147-159], is presented and assessed for reconstructing observed climate extremes, as well as the mean state, in eastern Canada. In the ASD model, automatic predictor selection is based on backward stepwise regression and partial correlation coefficients. The ASD model also allows the use of ridge regression to alleviate the effect of the non-orthogonality of predictor vectors. Outputs from the first generation Canadian Coupled Global Climate Model (CGCM1) and the third version of the coupled global Hadley Centre Climate Model (HadCM3) are used to test this approach over the current period (i.e. 1961-1990), and results are compared with observed temperature and precipitation from 10 meteorological stations of Environment Canada located in eastern Canada. All ASD and SDSM models, as these two models are evaluated and inter-compared, are calibrated using NCEP (National Centers for Environmental Prediction) reanalysis data before GCM atmospheric fields are used as input variables. The results underline certain limitations of the approach in downscaling the precipitation regime and its strength in downscaling the temperature regime. When modeling precipitation, the most common combination of predictor variables was relative and specific humidity at 500 hPa, surface airflow strength, 850 hPa zonal velocity and 500 hPa geopotential height. For modeling temperature, mean sea level pressure, surface vorticity and 850 hPa geopotential height were the most dominant variables. To evaluate the performance of the statistical downscaling approach, several climatic and statistical indices were developed. Results indicate that the agreement of simulations with observations depends on the GCM atmospheric variables used as "predictors" in the regression-based approach, and that the performance of the statistical downscaling model varies for different stations and seasons. The comparison of SDSM and ASD models indicated that neither could perform well for all seasons and months. However, using different statistical downscaling models and multi-source GCM data can provide a better range of uncertainty for climatic and statistical indices.
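
The ridge regression option mentioned above counteracts near-collinear predictors by penalising the regression coefficients. A minimal closed-form version in Python (illustrative only, not the ASD code; predictor and predictand names are assumptions) is:

    import numpy as np

    def ridge_fit(X, y, alpha=1.0):
        """Closed-form ridge regression: beta = (X'X + alpha*I)^-1 X'y.
        X holds standardized large-scale predictors, y the local predictand."""
        return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)

    rng = np.random.default_rng(1)
    X = rng.standard_normal((360, 5))   # e.g. 30 years x 12 months, 5 predictors
    y = X @ np.array([0.8, 0.0, -0.5, 0.2, 0.0]) + 0.1 * rng.standard_normal(360)
    beta = ridge_fit(X, y, alpha=10.0)  # shrunk coefficients, stable under collinearity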

273 citations


Journal ArticleDOI
TL;DR: Although techniques presented in this paper produce better results compared to existing GIS methods, the linear approach has some limitations which can be overcome by accounting for channel meanders, sinuosity and thalweg location.
Abstract: Two- and three-dimensional (2D/3D) hydrodynamic models require the geometric description of river bathymetry and its surrounding area as a continuous surface. These surface representations of river systems are also required in mapping flood inundation extents. Creating surface representations of river systems is a challenging task because of issues associated with interpolating river bathymetry and then integrating this bathymetry with the surrounding topography. The objectives of this paper are to highlight key issues associated with creating an integrated river terrain, and to propose GIS techniques to overcome these issues. The following techniques are presented in this paper: mapping and analyzing river channel data in a channel-fitted coordinate system; interpolation of river cross-sections to create a 3D mesh for the main channel; and integration of the interpolated 3D mesh with the surrounding topography. These techniques are applied and cross-validated using datasets from the Brazos River in Texas, the Kootenai River in Montana, and Strouds Creek in North Carolina. Creation of a 3D mesh for the main channel using a channel-fitted coordinate system and subsequent integration with the surrounding topography produces a coherent river terrain model, which can be used for 2D/3D hydrodynamic modeling and flood inundation mapping. Although the techniques presented in this paper produce better results than existing GIS methods, the linear approach has some limitations, which can be overcome by accounting for channel meanders, sinuosity and thalweg location.

273 citations


Journal ArticleDOI
TL;DR: This paper addresses the computational efficiency and accuracy of the algorithm via the formulation and evaluation of alternative techniques for determining the significance of PMI values estimated during selection and demonstrates the superior performance of this non-linear IVS technique in comparison to linear correlation-based techniques.
Abstract: Artificial neural networks (ANNs) have been widely used to model environmental processes. The ability of ANN models to accurately represent the complex, non-linear behaviour of relatively poorly understood processes makes them highly suited to this task. However, the selection of an appropriate set of input variables during ANN development is important for obtaining high-quality models. This can be a difficult task when considering that many input variable selection (IVS) techniques fail to perform adequately due to an underlying assumption of linearity, or due to redundancy within the available data. This paper focuses on a recently proposed IVS algorithm, based on estimation of partial mutual information (PMI), which can overcome both of these issues and is considered highly suited to the development of ANN models. In particular, this paper addresses the computational efficiency and accuracy of the algorithm via the formulation and evaluation of alternative techniques for determining the significance of PMI values estimated during selection. Furthermore, this paper presents a rigorous assessment of the PMI-based algorithm and clearly demonstrates the superior performance of this non-linear IVS technique in comparison to linear correlation-based techniques.
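
For intuition, the mutual information between a candidate input and the output can be approximated from a joint histogram, as in the Python sketch below. This is a cruder estimator than the kernel-based PMI computation the paper builds on, and the data are synthetic; note how the quadratic relationship yields high MI despite near-zero linear correlation, which is exactly the failure mode of linear IVS techniques.

    import numpy as np

    def mutual_information(x, y, bins=16):
        """Histogram estimate of I(X;Y) in nats."""
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy /= pxy.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    rng = np.random.default_rng(2)
    x = rng.standard_normal(5000)
    print(mutual_information(x, x ** 2))                      # non-linear: high MI
    print(mutual_information(x, rng.standard_normal(5000)))   # independent: near 0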

264 citations


Journal ArticleDOI
TL;DR: An application to the Dommel catchment on the Netherlands/Belgian border illustrates the ability of this robust and well-tested stochastic rainfall field generator to match observed statistics and extremes.
Abstract: RainSim V3 is a robust and well-tested stochastic rainfall field generator used successfully in a broad range of climates and end-user applications. Rainfall fields or multi-site time series can be sampled from a spatial-temporal Neyman-Scott rectangular pulses process: storm events occur as a temporal Poisson process; each triggers raincell generation using a stationary spatial Poisson process; raincells are clustered in time, lagging the storm event; each raincell contributes rainfall uniformly across its circular extent and throughout its lifetime; raincell lag, duration, radius and intensity are random variables; orographic effects are accounted for by non-uniform scaling of the rainfall field. Robust and efficient numerical optimization schemes for model calibration are identified following the evaluation of five schemes with optional log-transformation of the parameters. The log-parameter Shuffled Complex Evolution (lnSCE) algorithm with a convergence criterion is chosen for single-site applications and an effort-limited restarted lnSCE algorithm is selected for spatial applications. A new objective function is described and shown to improve model calibration. Linear and quadratic expressions are identified which can reduce the bias between the fitted and simulated probabilities of both dry hours and dry days as used in calibration. Exact fitting of mean rainfall statistics is also implemented and demonstrated. An application to the Dommel catchment on the Netherlands/Belgian border illustrates the ability of the improved model to match observed statistics and extremes.
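
A single-site, temporal-only Neyman-Scott rectangular pulses simulator conveys the flavour of the process described above. The Python sketch below is far simpler than RainSim's calibrated spatial-temporal version, and all parameter values are invented:

    import numpy as np

    def neyman_scott(hours, lam=0.01, mu_cells=4.0, beta=0.5, eta=0.8, xi=2.0, seed=0):
        """Storm origins follow a Poisson process (rate lam per hour); each storm
        spawns a Poisson number of raincells with exponential lag (beta), duration
        (eta) and mean intensity (xi); rainfall is the sum of overlapping pulses."""
        rng = np.random.default_rng(seed)
        rain = np.zeros(hours)
        t = rng.exponential(1 / lam)
        while t < hours:
            for _ in range(rng.poisson(mu_cells)):
                start = t + rng.exponential(1 / beta)
                dur = rng.exponential(1 / eta)
                lo, hi = int(start), min(int(start + dur) + 1, hours)
                rain[lo:hi] += rng.exponential(xi)   # mm/h over the cell's life
            t += rng.exponential(1 / lam)
        return rain

    hourly = neyman_scott(24 * 365)
    print(hourly.mean(), (hourly == 0).mean())   # mean intensity, dry-hour fraction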

228 citations


Journal ArticleDOI
TL;DR: It was found that combining the predictions from the PCR and ANN models reduced the root mean square errors of ozone concentrations; combining predictions generated by different methods can thus improve accuracy and provide a prediction superior to that of any single model.
Abstract: This work encompasses ozone modeling in the lower atmosphere. Data on seven environmental pollutant concentrations (CH4, NMHC, CO, CO2, NO, NO2, and SO2) and five meteorological variables (wind speed, wind direction, air temperature, relative humidity, and solar radiation) were used to develop models to predict the concentration of ozone in Kuwait's lower atmosphere. The models were developed by using summer air quality and meteorological data from a typical urban site when ozone concentration levels were the highest. The site was selected to represent a typical residential area with high traffic influences. The combined method, which is based on using both multiple regression combined with principal component analysis (PCR) and artificial neural network (ANN) modeling, was used to predict ozone concentration levels in the lower atmosphere. This combined approach was used to improve the prediction accuracy of ozone. The predictions of the models were found to be consistent with observed values. The R^2 values were 0.965, 0.986, and 0.995 for PCR, ANN, and the combined model prediction, respectively. It was found that combining the predictions from the PCR and ANN models reduced the root mean square errors (RMSE) of ozone concentrations. It is clear that combining predictions generated by different methods could improve the accuracy and provide a prediction that is superior to a single model prediction.
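
The PCR component of the combined approach can be sketched as PCA on the standardized predictors followed by least squares on the leading component scores. The Python below is a toy illustration with synthetic data, not the authors' model:

    import numpy as np

    def pcr_fit(X, y, n_components=3):
        """Principal component regression: PCA on standardized predictors,
        then ordinary least squares on the leading component scores."""
        mean, std = X.mean(axis=0), X.std(axis=0)
        Z = (X - mean) / std
        _, _, Vt = np.linalg.svd(Z, full_matrices=False)
        V = Vt[:n_components].T                     # leading PC loadings
        coef, *_ = np.linalg.lstsq(np.c_[np.ones(len(y)), Z @ V], y, rcond=None)
        return mean, std, V, coef

    def pcr_predict(X, mean, std, V, coef):
        return np.c_[np.ones(len(X)), ((X - mean) / std) @ V] @ coef

    rng = np.random.default_rng(3)
    X = rng.standard_normal((500, 12))   # e.g. 7 pollutant + 5 meteorological series
    y = X[:, 0] - 0.5 * X[:, 3] + 0.1 * rng.standard_normal(500)
    o3_hat = pcr_predict(X, *pcr_fit(X, y))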

201 citations


Journal ArticleDOI
TL;DR: Overall, SWAT2005 simulated the hydrology and water quality constituents at the subwatershed scale more adequately when all of the available observed data were used for model simulation, as evidenced by statistical measures, and when both the autocalibration and manually adjusted parameters were used in the simulation.
Abstract: SWAT is a physically based model that can simulate water quality and quantity at the watershed scale. Because of the many processes involved in manual calibration or autocalibration of model parameters, and the knowledge of realistic input values required, calibration can become difficult. An autocalibration-sensitivity analysis procedure was embedded in SWAT version 2005 (SWAT2005) to optimize parameter processing. This embedded procedure is applied to six small-scale watersheds (subwatersheds) in the central Texas Blackland Prairie. The objective of this study is to evaluate the effectiveness of the autocalibration-sensitivity analysis procedure at small-scale watersheds (4.0-8.4 ha). Model simulations are completed using two data scenarios: (1) 1 year used for parameter calibration; (2) 5 years used for parameter calibration. The impact of manual parameter calibration versus autocalibration with manual adjustment on model simulation results is tested. The combination of autocalibration tool parameter values and manually adjusted parameters for the 2000-2004 simulation period resulted in the highest E_NS and R^2 values for discharge; however, the same 5-year period yielded better overall E_NS, R^2 and P values for the simulation values that were manually adjusted. The disparity is most likely due to the limited number of parameters included in this version of the autocalibration tool (i.e. Nperco, Pperco, and nitrate). Overall, SWAT2005 simulated the hydrology and water quality constituents at the subwatershed scale more adequately when all of the available observed data were used for model simulation, as evidenced by statistical measures, when both the autocalibration and manually adjusted parameters were used in the simulation.
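
For reference, the E_NS statistic reported above is the Nash-Sutcliffe efficiency, which compares model error against the variance of the observations; a value of 1 is a perfect fit and 0 means the model does no better than the observed mean. A minimal Python version:

    import numpy as np

    def nash_sutcliffe(obs, sim):
        """E_NS = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    print(nash_sutcliffe([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8]))  # 0.98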

196 citations


Journal ArticleDOI
TL;DR: ROMSTOOLS, a collection of global data sets and a series of Matlab programs collected in an integrated toolbox, generates the grid, surface forcing, initial condition, open boundary conditions, and tides for climatological and inter-annual ROMS ocean simulations.
Abstract: ROMSTOOLS, a collection of global data sets and a series of Matlab programs gathered in an integrated toolbox, generates the grid, surface forcing, initial conditions, open boundary conditions, and tides for climatological and inter-annual ROMS ocean simulations. ROMSTOOLS also generates embedded models and real-time coastal modeling systems, as well as experiments including biology. Tools for visualization, animations and diagnostics are also provided.

Journal ArticleDOI
TL;DR: A powerful multi-objective optimization genetic algorithm, NSGA-II, is used to derive the Pareto optimal solutions, which illustrate the full set of trade-off relationships between objectives.
Abstract: Integrated modelling of the urban wastewater system has received increasing attention in recent years and it has been clearly demonstrated, at least at a theoretical level, that system performance can be enhanced through optimized, integrated control. However, most research to date has focused on simple, single-objective control. This paper proposes consideration of multiple objectives to more readily tackle complex real-world situations. The water quality indicators of the receiving water are considered as control objectives directly, rather than by reference to surrogate criteria in the sewer system or treatment plant. A powerful multi-objective optimization genetic algorithm, NSGA-II, is used to derive the Pareto optimal solutions, which illustrate the full set of trade-off relationships between objectives. A case study is used to demonstrate the benefits of multi-objective control, and a significant improvement in each of the objectives can be observed in comparison with a conventional base case scenario. The simulation results also show the effectiveness of NSGA-II for the integrated urban wastewater system despite its complexity.
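
The Pareto optimality underlying NSGA-II can be illustrated with a small dominance filter: a solution is retained only if no other solution is at least as good in every objective and strictly better in at least one (minimisation assumed). The Python below is a toy filter, not the NSGA-II implementation:

    import numpy as np

    def pareto_front(costs):
        """Boolean mask of non-dominated rows of an (n_solutions, n_objectives)
        cost array, with all objectives minimised."""
        keep = np.ones(len(costs), dtype=bool)
        for i in range(len(costs)):
            dominated_by = (np.all(costs <= costs[i], axis=1)
                            & np.any(costs < costs[i], axis=1))
            keep[i] = not dominated_by.any()
        return keep

    # e.g. (river quality violation, operating cost) for four control strategies
    costs = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 4.0], [4.0, 1.0]])
    print(costs[pareto_front(costs)])   # [3, 4] is dominated by [2, 3]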

Journal ArticleDOI
TL;DR: This paper describes the application of a newly proposed non-linear IVS algorithm to the development of ANN models to forecast water quality within two water distribution systems to reduce the need for arbitrary judgement and extensive trial-and-error during model development.
Abstract: Recent trends in the management of water supply have increased the need for modelling techniques that can provide reliable, efficient, and accurate representation of the complex, non-linear dynamics of water quality within water distribution systems. Statistical models based on artificial neural networks (ANNs) have been found to be highly suited to this application, and offer distinct advantages over more conventional modelling techniques. However, many practitioners utilise somewhat heuristic or ad hoc methods for input variable selection (IVS) during ANN development. This paper describes the application of a newly proposed non-linear IVS algorithm to the development of ANN models to forecast water quality within two water distribution systems. The intention is to reduce the need for arbitrary judgement and extensive trial-and-error during model development. The algorithm utilises the concept of partial mutual information (PMI) to select inputs based on the analysis of relationship strength between inputs and outputs, and between redundant inputs. In comparison with an existing approach, the ANN models developed using the IVS algorithm are found to provide optimal prediction with significantly greater parsimony. Furthermore, the results obtained from the IVS procedure are useful for developing additional insight into the important relationships that exist between water distribution system variables.

Journal ArticleDOI
TL;DR: A distributed model in operational use for forecasting flash floods in northern Austria is presented, in which Ensemble Kalman Filtering is used to update the model states (grid soil moisture) based on observed runoff.
Abstract: This paper presents a distributed model that is in operational use for forecasting flash floods in northern Austria. The main challenge in developing the model was parameter identification, which was addressed by a modelling strategy that involved a model structure defined at the model element scale and multi-source model identification. The model represents runoff generation on a grid basis and lumped routing in the river reaches. Ensemble Kalman Filtering is used to update the model states (grid soil moisture) based on observed runoff. The forecast errors as a function of forecast lead time are evaluated for a number of major events in the 622 km^2 Kamp catchment and range from 10% for 4 h lead times to 30% for 24 h lead times.
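
The state updating step can be sketched as a generic stochastic Ensemble Kalman Filter analysis: each ensemble member's soil-moisture state is nudged towards the runoff observation in proportion to the ensemble cross-covariance. The Python below is an illustration under these assumptions, not the operational forecasting code:

    import numpy as np

    def enkf_update(states, predicted_obs, obs, obs_var, rng):
        """Stochastic EnKF analysis: states is (n_ens, n_state); predicted_obs
        (n_ens,) are the modelled runoffs; obs is the measured runoff."""
        n_ens = len(states)
        A = states - states.mean(axis=0)           # state anomalies
        d = predicted_obs - predicted_obs.mean()   # predicted-observation anomalies
        gain = (A.T @ d / (n_ens - 1)) / (d @ d / (n_ens - 1) + obs_var)
        perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), n_ens)
        return states + np.outer(perturbed - predicted_obs, gain)

    rng = np.random.default_rng(4)
    soil = rng.normal(0.3, 0.05, (50, 10))   # 50 members, 10 grid cells
    q = soil.sum(axis=1) * 2.0               # toy runoff "model"
    soil = enkf_update(soil, q, obs=6.5, obs_var=0.04, rng=rng)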

Journal ArticleDOI
TL;DR: The perspective offered by coupling a simple vegetation growth model with ground-based remotely-sensed data for the monitoring of wheat production is investigated; the approach has the advantage of being quite simple and requires no data on agricultural practices, which makes it very attractive for operational application at a regional scale.
Abstract: In this study we investigated the perspective offered by coupling a simple vegetation growth model with ground-based remotely-sensed data for the monitoring of wheat production. A simple model was developed to simulate the time courses of green leaf area index (GLAI), dry above-ground phytomass (DAM) and grain yield (GY). A comprehensive sensitivity analysis allowed us to address the problem of model calibration by distinguishing three categories of parameters: (1) those that are well known, derived from the present or previous wheat experiments; (2) phenological parameters, which were identified for the wheat variety under study; and (3) those related to farmer practices, which were adjusted field by field. The approach was tested against field data collected on irrigated winter wheat in the semi-arid Marrakech plain. This data set includes estimates of GLAI with additional DAM and GY measurements. The model provides excellent simulations of both GLAI and DAM time courses. Spatial variations in GY are correctly predicted, but with a general underestimation on the validation fields. Despite this limitation, the approach offers the advantage of being quite simple, without requiring any data on agricultural practices (sowing, irrigation and fertilisation). This makes it very attractive for operational application at a regional scale. This perspective is discussed in the conclusion.

Journal ArticleDOI
TL;DR: It is recommended that, in order to obtain more accurate (local) emission predictions and to achieve correct application in particular situations, current average speed models be improved by including a congestion algorithm, or alternatively that information at least be provided on the level of congestion in the driving patterns on which these models are based.
Abstract: Road transport emission and fuel consumption models are currently used extensively to predict levels of air pollution along roadway links and networks. This paper examines how, and to what extent, models which are currently used to predict emissions and fuel consumption from road traffic include the effects of congestion. A classification framework is presented in which a key factor, driving pattern, connects emissions to congestion. Prediction of the effects of different driving patterns in emission models is generally restricted to certain aspects of modelling, i.e. hot-running emissions of regulated pollutants. As a consequence, the effects of congestion are only partially incorporated in the predictions. The majority of emission models explicitly incorporate congestion in the modelling process, but for one important family of emission models, namely average speed models, this could not be determined directly. Re-examination of the (light-duty) driving patterns on which three average speed models (COPERT, MOBILE, EMFAC) are based shows that it is likely that congestion is represented in these patterns. Since (hot-running) emission factors are based on these patterns, this implies that the emission factors used in these emission models also reflect different levels of congestion. Congestion is thus indirectly incorporated in these models. In order to obtain more accurate (local) emission predictions and to achieve correct application in particular situations, it is recommended that current average speed models be improved by including a congestion algorithm, or alternatively that, at a minimum, information be provided on the level of congestion in the driving patterns on which these models are based, together with recommendations on the applications for which the models are suitable.

Journal ArticleDOI
TL;DR: The netCDF Operator (NCO) software facilitates manipulation and analysis of gridded geoscience data stored in the self-describingNetCDF format and is optimized to efficiently analyze large multi-dimensional data sets spanning many files.
Abstract: The netCDF Operator (NCO) software facilitates manipulation and analysis of gridded geoscience data stored in the self-describing netCDF format. NCO is optimized to efficiently analyze large multi-dimensional data sets spanning many files. Researchers and data centers often use NCO to analyze and serve observed and modeled geoscience data including satellite observations and weather, air quality, and climate forecasts. NCO's functionality includes shared memory threading, a message-passing interface, network transparency, and an interpreted language parser. NCO treats data files as a high level data type whose contents may be simultaneously manipulated by a single command. Institutions and data portals often use NCO for middleware to hyperslab and aggregate data set requests, while scientific researchers use NCO to perform three general functions: arithmetic operations, data permutation and compression, and metadata editing. We describe NCO's design philosophy and primary features, illustrate techniques to solve common geoscience and environmental data analysis problems, and suggest ways to design gridded data sets that can ease their subsequent analysis.
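
As a rough Python analogue of what NCO does on the command line, the snippet below hyperslabs one variable from a netCDF file and averages it over time, using the netCDF4 library; the file and variable names are made up for the example.

    import numpy as np
    from netCDF4 import Dataset   # pip install netCDF4

    # Hyperslab the first 12 time steps of one variable, then average over
    # time -- roughly an ncks extraction followed by an ncra record average.
    with Dataset("model_output.nc") as ds:        # hypothetical file
        tas = ds.variables["tas"][:12, :, :]      # hypothetical variable name
        time_mean = np.asarray(tas).mean(axis=0)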

Journal ArticleDOI
TL;DR: This work has used new technology to create a machine accessible interface for the National Water Information System, an online repository of historical and real-time streamflow, water-quality, and ground water level observations maintained by the United States Geological Survey.
Abstract: A wealth of freely available hydrologic data are provided by governmental organizations including in situ observations, geospatial data sets, remote sensing products, and simulation model output. Despite having access to this information, much of the data remain underutilized in the hydrologic sciences due in part to the time required to access, obtain, and integrate data from different sources. Web services offer a means for sharing hydrologic data more openly by providing a standard protocol for machine-to-machine communication. We have used this new technology to create a machine accessible interface for the National Water Information System (NWIS), an online repository of historical and real-time streamflow, water-quality, and ground water level observations maintained by the United States Geological Survey (USGS). These services provide a middle-layer of abstraction between the NWIS database and hydrologic analysis systems, allowing such analysis systems to proxy the NWIS server for on-demand data access. We intentionally designed the services to be generic and applicable to other hydrologic databases, in order to provide interoperability between disparate data sources. Performance tests showed that, for time series with less than 1000 observations, the web services layer added minimal overhead in terms of data response time, and development of an example client application for time series visualization highlighted some of the benefits and costs of using web services for data access.
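
A present-day sketch of machine access to NWIS-style data is shown below using Python and the USGS water services REST endpoint; this REST interface postdates the services described in the paper, and the site number, dates and JSON traversal are given purely as an example.

    import requests

    # Daily discharge (USGS parameter code 00060) for one gauge, as WaterML JSON.
    resp = requests.get(
        "https://waterservices.usgs.gov/nwis/dv/",
        params={"format": "json", "sites": "02087500", "parameterCd": "00060",
                "startDT": "2008-01-01", "endDT": "2008-12-31"},
        timeout=30,
    )
    series = resp.json()["value"]["timeSeries"][0]["values"][0]["value"]
    flows = [float(v["value"]) for v in series]   # one value per day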

Journal ArticleDOI
TL;DR: The paper shows how the proposed methodology is able to achieve optimum WWTP design using either a steady-state or dynamic mathematical model of the plant and a set of constraints associated with the permitted operational ranges and the required water quality in the effluent.
Abstract: This paper presents the mathematical basis and some illustrative examples of a model-based decision-making method for the automatic calculation of optimum design parameters in modern Wastewater Treatment Plants (WWTP). The starting point of the proposed methodology is the mathematical modelling of the main processes inside a plant's units. The procedure for the automatic calculation of the design parameters is then based on expressing the optimum WWTP design problem as a Mathematical Programming (Optimisation) Problem that can be solved using a non-linear optimisation algorithm (GRG2). The paper shows how the proposed methodology is able to achieve optimum WWTP design using either a steady-state or dynamic mathematical model of the plant and a set of constraints associated with the permitted operational ranges and the required water quality in the effluent. As an illustrative example to show the usefulness of the proposed methodology, the optimum design of the Step-Feed process for nitrogen removal (Alpha) has been analysed by considering two different problems: the optimum plant dimensions, estimated at critical temperature for effluent requirements (Problem 1), and the optimum selection of facultative volumes, fractions of the influent flow-rate and the values of oxygen set-points for long-term plant operation (Problem 2). The proposed decision-making method is intended to facilitate the task of the engineers involved in the design of new WWTP, especially when the complexity of the plant requires a systematic procedure for the selection of the main design parameters.
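
The formulation of design as a constrained non-linear programme can be sketched with SciPy, using SLSQP as a stand-in for the GRG2 solver named above. The objective, decision variables and effluent constraint below are invented toy surrogates, not a real plant model:

    import numpy as np
    from scipy.optimize import minimize

    # Toy design problem: choose reactor volume V and recycle ratio r to
    # minimise cost subject to an effluent-quality constraint.
    def cost(x):
        V, r = x
        return V + 50.0 * r                  # hypothetical cost surface

    def effluent(x):
        V, r = x
        return 100.0 / (V * (1.0 + r))       # hypothetical effluent model, mg/L

    res = minimize(cost, x0=[5.0, 0.5], method="SLSQP",
                   bounds=[(1.0, 50.0), (0.1, 2.0)],
                   constraints=[{"type": "ineq", "fun": lambda x: 10.0 - effluent(x)}])
    print(res.x)   # optimum (V, r) that meets the 10 mg/L effluent limit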

Journal ArticleDOI
TL;DR: This paper addresses the difficulties involved in large-scale holistic modeling for integrated river basin management, and provides solution methods taking a self-reflective stance through a prototype model.
Abstract: A holistic model embeds water resources and economic components into a consistent mathematical programming model, with the objective of maximizing economic profits from water uses in various sectors. Such a model can be used to address combined environmental-economic issues. Although holistic modeling represents a simple approach for building truly integrated water resources and economic models, it faces many difficulties in terms of temporal and spatial scale issues and model formulation, calibration, solution and result interpretation, as well as extensive data requirements. This paper addresses the difficulties involved in large-scale holistic modeling for integrated river basin management and provides solution methods, taking a self-reflective stance through a prototype model. It is a methodological paper for practitioners interested in integrated water resources-economic modeling.

Journal ArticleDOI
TL;DR: The results obtained indicate that the Semi-Partial Correlation Coefficient and its rank equivalent, the Semi-Partial Rank Correlation Coefficient, can be considered adequate measures to assess the sensitivity of the DUFLOW model to the uncertainty in its input parameters.
Abstract: Sensitivity analysis methods based on multiple simulations such as Monte Carlo Simulation (MCS) and Latin Hypercube Sampling (LHS) are very efficient, especially for complex computer models. The application of these methods involves successive runs of the model under investigation with different sampled sets of the uncertain model-input variables and (or) parameters. The subsequent statistical analysis based on regression and correlation analysis among the input variables and model output allows determination of the input variables or the parameters to which the model prediction uncertainty is most sensitive. The sensitivity effect of the model-input variables or parameters on the model outputs can be quantified by various statistical measures based on regression and correlation analysis. This paper provides a thorough review of these measures and their properties and develops a concept for selecting the most robust and reliable measures for practical use. The concept is demonstrated through the application of Latin Hypercube Sampling as the sensitivity analysis technique to the DUFLOW water-quality model developed for the Dender River in Belgium. The results obtained indicate that the Semi-Partial Correlation Coefficient and its rank equivalent the Semi-Partial Rank Correlation Coefficient can be considered adequate measures to assess the sensitivity of the DUFLOW model to the uncertainty in its input parameters.
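
A bare-bones version of the sampling-plus-correlation workflow looks like the Python below: Latin Hypercube samples of the uncertain parameters are pushed through a model (here a stand-in function), and a rank correlation between each parameter and the output serves as the sensitivity measure. Plain Spearman correlation is used for brevity in place of the semi-partial measures the paper recommends:

    import numpy as np
    from scipy.stats import spearmanr

    def latin_hypercube(n_samples, n_params, rng):
        """One stratum per sample and parameter: permute strata, add jitter."""
        strata = np.tile(np.arange(n_samples), (n_params, 1))
        return (rng.permuted(strata, axis=1).T
                + rng.random((n_samples, n_params))) / n_samples

    rng = np.random.default_rng(5)
    X = latin_hypercube(200, 3, rng)   # 3 uncertain parameters scaled to [0, 1)
    y = 4 * X[:, 0] + np.sin(6 * X[:, 1]) + 0.1 * rng.standard_normal(200)

    for k in range(X.shape[1]):        # rank correlation as the sensitivity measure
        rho, _ = spearmanr(X[:, k], y)
        print(f"parameter {k}: rho = {rho:+.2f}")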

Journal ArticleDOI
TL;DR: The ten steps of model development are critiqued and applied using a process-based biogeochemical model of aquatic systems, with examples from two case studies: a model of phytoplankton succession and nutrient concentrations in the Swan-Canning Estuary (Western Australia) and a modelof sediment and nutrient transport and transformation in the Fitzroy Estuary and Keppel Bay (Queensland).
Abstract: The procedures involved in model development may be set out as a ten step process, beginning with defining the purpose of the model and ending with evaluation of the appropriateness and utility of the completed model. This process, recently outlined by Jakeman et al. [Jakeman, A.J., Letcher, R.A., Norton, J.P., 2006. Ten iterative steps in development and evaluation of environmental models. Environmental Modelling and Software 21, 602-614], is often iterative as model development is a continuous process that refines and improves the intended capacity of the model. Here, the ten steps of model development are critiqued and applied using a process-based biogeochemical model of aquatic systems, with examples from two case studies: a model of phytoplankton succession and nutrient concentrations in the Swan-Canning Estuary (Western Australia) and a model of sediment and nutrient transport and transformation in the Fitzroy Estuary and Keppel Bay (Queensland).

Journal ArticleDOI
TL;DR: A modelling system applied to a winter PM10 episode over a computational domain centered on the Milano metropolitan area showed acceptable performance for both aerosol modules and a better reproduction of PM10 levels with the more complete module (aero3).
Abstract: Fine particulate air pollution represents one of the most relevant environmental concerns in the Lombardia region (Northern Italy). PM10 concentrations exceed air quality limit values especially during wintertime, when frequently occurring thermal inversions and calm conditions tend to inhibit pollutant dispersion. To better understand the spatial distribution of PM10, a modelling system has been applied to a winter PM10 episode over a computational domain centered on the Milano metropolitan area. The modelling system software suite is based on an Eulerian photochemical model (FARM - Flexible Air quality Regional Model) and includes an emission pre-processor to apportion data from the regional emission inventory, a diagnostic meteorological model coupled with a micrometeorological module, and data visualization and post-processing tools. The FARM model has been applied with two aerosol modules: the aero3 modal aerosol module implemented in the CMAQ framework and a bulk aerosol module (aero0), based on a simplified thermodynamic scheme. Both tested modules show good agreement with observed concentrations. A performance analysis of modelling results by means of typical statistical measures has shown acceptable model performance for both modules and a better reproduction of PM10 levels using the more complete aerosol module (aero3). Furthermore, the application of the latter aerosol module provides a PM10 chemical composition that is in good agreement with data collected within the Milano urban area.

Journal ArticleDOI
TL;DR: The DeepActor approach for representing human decision processes, which makes use of a multi-actor simulation framework and has similarities to agent-based approaches, is presented and demonstrated by means of concrete simulation models of the water supply sector and of the domestic water users.
Abstract: Within coupled hydrological simulation systems, taking socio-economic processes into account is still a challenging task. In particular, systems that aim at evaluating impacts of climatic change on large spatial and temporal scales cannot be based on the assumption that infrastructure, economy, demography and other human factors remain constant while physical boundary conditions change. Therefore, any meaningful simulation of possible future scenarios needs to enable socio-economic systems to react and to adapt to climatic changes. To achieve this it is necessary to simulate decision-making processes of the relevant actors in a way which is adequate for the scale, the catchment-specific management problems to be investigated and finally the data availability. This contribution presents the DeepActor approach for representing such human decision processes, which makes use of a multi-actor simulation framework and has similarities to agent-based approaches. This DeepActor approach is embedded in Danubia, a coupled simulation system comprising 16 individual models to simulate Global Change impacts on the entire water cycle of the Upper Danube Catchment (Germany, 77,000 km^2). The applicability of Danubia and in particular the DeepActor approach for treating the socio-economic part of the water cycle in a process-based way is demonstrated by means of concrete simulation models of the water supply sector and of the domestic water users. Results from scenario simulations are used to demonstrate the capabilities and limitations of the approach.

Journal ArticleDOI
TL;DR: It is shown that relatively simple methods for modelling traffic pollution can provide reasonably good results without excessive computing time, which is important when modelling is required for a large number of street locations.
Abstract: Evaluation of the Operational Street Pollution Model (OSPM) on a large set of NO2 measurements collected in connection with an epidemiological study on traffic pollution and childhood cancer is presented. These measurements were conducted in the years 1994-1995 using passive samplers placed at 204 street locations in Copenhagen as well as in some smaller towns and the adjacent rural areas. At each location NO2 concentrations were measured during a period of 6 months with a sampling period of 1 month. The temporal and geographical variation of the model results compared with measurements was evaluated. Although the overall performance was satisfactory, the modelled concentrations were on average slightly higher than the measured concentrations. The agreement was better in the rural areas than in densely trafficked urban areas. Two different methods for estimation of the urban background are compared. It is shown that relatively simple methods for modelling traffic pollution can provide reasonably good results without excessive computing time. This is important when modelling is required for a large number of street locations.

Journal ArticleDOI
TL;DR: An overview of the development and implementation of an integrated decision support system (DSS) designed to help policy makers and other stakeholders have a clearer understanding of the key factors and processes involved in the sewage induced degradation of surface water quality in the ULB, and formulate, assess and evaluate alternative management plans is provided.
Abstract: The widespread and relentless discharge of untreated wastewater into the Upper Litani Basin (ULB) river system in Lebanon has reached staggering levels rendering its water unfit for most uses especially during the drier times of the year. Despite the call by governmental and non-governmental agencies to develop several wastewater treatment plants and sewage networks in an effort to control this problem, these efforts do not seem to be coordinated or based on comprehensive and integrated assessments of current and projected conditions in the basin. This paper provides an overview of the development and implementation of an integrated decision support system (DSS) designed to help policy makers and other stakeholders have a clearer understanding of the key factors and processes involved in the sewage induced degradation of surface water quality in the ULB, and formulate, assess and evaluate alternative management plans. The DSS is developed based on the WEAP model, which provides a GIS based and visual simulation environment and scenario management and analysis capabilities. The DSS was used to assess two main water quality management plans taking into consideration hydrological, spatial and seasonal variabilities. An incremental cost-effectiveness analysis was conducted to identify best buy plans. The results have confirmed the gravity of this problem and demonstrated the importance of taking immediate action on curbing this onslaught on this valuable and scarce fresh water resource.

Journal ArticleDOI
TL;DR: This work focuses on the prediction of hourly pollutant levels up to 8 h ahead for five pollutants and six locations in the area of Bilbao (Spain); 216 models based on neural networks (NNs) were built, which can provide Bilbao's air pollution network, originally designed for diagnosis purposes, with short-term, real-time forecasting capabilities.
Abstract: This work focuses on the prediction of hourly levels up to 8 h ahead for five pollutants (SO2, CO, NO2, NO and O3) and six locations in the area of Bilbao (Spain). To that end, 216 models based on neural networks (NNs) were built. The database used to fit the NNs comprised historical records of the traffic, meteorological and air pollution networks existing in the area, corresponding to year 2000. The models were then tested on data from the same networks corresponding to year 2001. At a first stage, for each of the 216 cases, 100 models based on different types of neural networks were built using data corresponding to year 2000. The final identification of the best model was made under the criterion of simultaneously having, at a 95% confidence level, the best values of R^2, d1, FA2 and RMSE when applied to data of year 2001. Owing to gaps in the data, the proportion of hourly cases for which predictions were possible ranges from 11% to 38%, depending on the sensor. Depending on the pollutant, the location and the number of hours ahead the prediction is made, different types of models were selected. The use of these NN-based models can provide Bilbao's air pollution network, originally designed for diagnosis purposes, with short-term, real-time forecasting capabilities. The performance of these models at the different sensors in the area ranges from a maximum value of R^2=0.88 for the prediction of NO2 1 h ahead, to a minimum value of R^2=0.15 for the prediction of ozone 8 h ahead. These bounds, and the limited number of cases for which predictions are possible, represent the maximum forecasting capability that Bilbao's network can provide in real-life operating conditions.
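
The selection metrics named above can be computed as in the Python sketch below: RMSE, FA2 as the fraction of predictions within a factor of two of the observations, and d1 as Willmott's modified index of agreement. The formulas follow common usage in the air quality literature and the data are invented:

    import numpy as np

    def rmse(o, p):
        return float(np.sqrt(np.mean((o - p) ** 2)))

    def fa2(o, p):
        """Fraction of predictions within a factor of two of the observations."""
        ratio = p / o
        return float(np.mean((ratio >= 0.5) & (ratio <= 2.0)))

    def d1(o, p):
        """Modified index of agreement, bounded in [0, 1]."""
        om = o.mean()
        return float(1 - np.sum(np.abs(o - p))
                     / np.sum(np.abs(p - om) + np.abs(o - om)))

    o = np.array([40.0, 55.0, 30.0, 80.0])   # e.g. observed NO2, ug/m3
    p = np.array([35.0, 60.0, 45.0, 70.0])
    print(rmse(o, p), fa2(o, p), d1(o, p))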

Journal ArticleDOI
TL;DR: Requirements and other options for model coupling are described, the MCT library, ROMS, SWAN and COAMPS models, methods for grid decomposition and sparse matrix interpolation are explained, and an example from each coupled system is provided.
Abstract: Continued advances in computational resources are providing the opportunity to operate more sophisticated numerical models. Additionally, there is an increasing demand for multidisciplinary studies that include interactions between different physical processes. Therefore there is a strong desire to develop coupled modeling systems that utilize existing models and allow efficient data exchange and model control. The basic system would entail model "1" running on "M" processors and model "2" running on "N" processors, with efficient exchange of model fields at predetermined synchronization intervals. Here we demonstrate two coupled systems: the coupling of the ocean circulation model Regional Ocean Modeling System (ROMS) to the surface wave model Simulating WAves Nearshore (SWAN), and the coupling of ROMS to the atmospheric model Coupled Ocean Atmosphere Prediction System (COAMPS). Both coupled systems use the Model Coupling Toolkit (MCT) as a mechanism for operation control and inter-model distributed memory transfer of model variables. In this paper we describe requirements and other options for model coupling, explain the MCT library and the ROMS, SWAN and COAMPS models, describe methods for grid decomposition and sparse matrix interpolation, and provide an example from each coupled system. The methods presented in this paper are clearly applicable to the coupling of other types of models.
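
The sparse matrix interpolation used for field exchange can be pictured as a precomputed weights matrix applied to a flattened source field, dest = W @ source, with each row of W blending a few source points. The Python below is a toy remapping, not MCT's API:

    import numpy as np
    from scipy.sparse import csr_matrix

    # Remap a 1-D source field of 6 points onto 3 destination points, each a
    # 50/50 blend of two neighbouring source points: dest = W @ source.
    rows = [0, 0, 1, 1, 2, 2]
    cols = [0, 1, 2, 3, 4, 5]
    wts = [0.5] * 6
    W = csr_matrix((wts, (rows, cols)), shape=(3, 6))

    source = np.array([1.0, 3.0, 5.0, 7.0, 9.0, 11.0])   # e.g. SST on the ocean grid
    print(W @ source)                                    # [2. 6. 10.] on the wave grid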

Journal ArticleDOI
TL;DR: A method and software package for desktop assessment of environmental flows - a hydrological regime designed to maintain a river in some agreed ecological condition - built around a flow duration curve, which ensures that elements of natural flow variability are preserved in the estimated environmental flow time series.
Abstract: The paper describes a method and software package for desktop assessment of environmental flows - a hydrological regime designed to maintain a river in some agreed ecological condition. The method uses monthly flow data and is built around a flow duration curve, which ensures that elements of natural flow variability are preserved in the estimated environmental flow time series. The curve is calculated for several categories of aquatic ecosystem protection - from 'largely natural' to 'severely modified'. The corresponding environmental flows progressively reduce with the decreasing level of ecosystem protection. A non-linear data transformation procedure subsequently converts the calculated environmental flow duration curve into a continuous time series of environmental flows. The software has facilities to zoom in on a river basin, calculate a variety of hydrological characteristics, define or select any category of ecosystem protection, calculate the associated environmental flow duration curves and time series, and display both. The analyses can be carried out either using default (simulated) global flow data, with a spatial resolution of 0.5 degree, or a user-defined file. The package is seen as a training tool for water practitioners, policymakers and students, and as a tool for rapid preliminary environmental flow assessment.
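
The flow duration curve machinery can be illustrated in a few lines of Python: sort the monthly flows, assign exceedance probabilities, and read off low-flow indices. The downward shift to a lower protection category is shown here as a crude fractional scaling, a placeholder for the method's actual category-dependent curve shifts and non-linear transformation:

    import numpy as np

    def flow_duration_curve(flows):
        """Return (exceedance probability, flow) with flows sorted descending."""
        q = np.sort(np.asarray(flows))[::-1]
        p = np.arange(1, len(q) + 1) / (len(q) + 1)   # Weibull plotting position
        return p, q

    rng = np.random.default_rng(6)
    monthly = rng.lognormal(2.0, 0.8, size=360)       # 30 years of monthly flows
    p, q = flow_duration_curve(monthly)

    q90 = np.quantile(monthly, 0.10)   # flow exceeded ~90% of the time
    env_curve = 0.6 * q                # crude stand-in for a category shift
    print(q90)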

Journal ArticleDOI
TL;DR: A risk assessment model for settling problems of microbiological origin in activated sludge systems (filamentous bulking, foaming and rising sludge) demonstrates that some control strategies, although performing better regarding operating costs and effluent quality, induce a higher risk for solids separation problems.
Abstract: This paper proposes a risk assessment model for settling problems of microbiological origin in activated sludge systems (filamentous bulking, foaming and rising sludge). The aim of the model is not to diagnose microbiology-related solids separation problems with absolute certainty but to quantify in dynamic scenarios whether simulated operational procedures and control strategies lead to favourable conditions for them to arise or not. The rationale behind the model (which integrates the mechanisms of standard activated sludge models with empirical knowledge), its implementation in a fuzzy rule-based system and the details of its operation are illustrated in the different sections of the paper. The performance of the risk assessment model is illustrated by evaluating a number of control strategies facing different short-term influent conditions as well as long-term variability using the IWA/COST simulation benchmark. The results demonstrate that some control strategies, although performing better regarding operating costs and effluent quality, induce a higher risk for solids separation problems. In view of these results, it is suggested to integrate empirical knowledge into mechanistic models to increase reliability and to allow assessment of potential side-effects when simulating complex processes.