Journal ISSN: 1364-8152

Environmental Modelling and Software 

Elsevier BV
About: Environmental Modelling and Software is an academic journal published by Elsevier BV. The journal publishes mainly in the areas of computer science and environmental science. It has the ISSN identifier 1364-8152. Over its lifetime, 4,390 publications have been published, receiving 217,943 citations. The journal is also known as: Environmental modelling and software.


Papers
Journal ArticleDOI
TL;DR: The steps that should be followed in the development of artificial neural network models are outlined, including the choice of performance criteria, the division and pre-processing of the available data, the determination of appropriate model inputs and network architecture, optimisation of the connection weights (training) and model validation.
Abstract: Artificial Neural Networks (ANNs) are being used increasingly to predict and forecast water resources variables. In this paper, the steps that should be followed in the development of such models are outlined. These include the choice of performance criteria, the division and pre-processing of the available data, the determination of appropriate model inputs and network architecture, optimisation of the connection weights (training) and model validation. The options available to modellers at each of these steps are discussed and the issues that should be considered are highlighted. A review of 43 papers dealing with the use of neural network models for the prediction and forecasting of water resources variables is undertaken in terms of the modelling process adopted. In all but two of the papers reviewed, feedforward networks are used. The vast majority of these networks are trained using the backpropagation algorithm. Issues in relation to the optimal division of the available data, data pre-processing and the choice of appropriate model inputs are seldom considered. In addition, the process of choosing appropriate stopping criteria and optimising network geometry and internal network parameters is generally described poorly or carried out inadequately. All of the above factors can result in non-optimal model performance and an inability to draw meaningful comparisons between different models. Future research efforts should be directed towards the development of guidelines which assist with the development of ANN models and the choice of when ANNs should be used in preference to alternative approaches, the assessment of methods for extracting the knowledge that is contained in the connection weights of trained ANNs and the incorporation of uncertainty into ANN models.
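The development steps the abstract outlines (data division, pre-processing, choice of architecture, backpropagation training, and validation against a performance criterion) can be sketched in a minimal feedforward network. The data, network sizes, and learning rate below are illustrative assumptions, not taken from any reviewed paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a water resources series: a noisy function of two inputs.
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.05, 200)

# Data division: training and validation sets.
X_tr, X_va = X[:150], X[150:]
y_tr, y_va = y[:150], y[150:]

# Pre-processing: standardise inputs using training statistics only.
mu, sd = X_tr.mean(axis=0), X_tr.std(axis=0)
X_tr, X_va = (X_tr - mu) / sd, (X_va - mu) / sd

# Network architecture: 2 inputs -> 8 hidden units (tanh) -> 1 output.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

lr = 0.05
for epoch in range(500):
    # Forward pass.
    h = np.tanh(X_tr @ W1 + b1)
    pred = (h @ W2 + b2).ravel()
    err = pred - y_tr
    # Backpropagation of the mean-squared-error gradient.
    g_pred = (2 * err / len(y_tr))[:, None]
    gW2 = h.T @ g_pred; gb2 = g_pred.sum(axis=0)
    g_h = g_pred @ W2.T * (1 - h ** 2)
    gW1 = X_tr.T @ g_h; gb1 = g_h.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# Model validation: RMSE as the performance criterion on held-out data.
h_va = np.tanh(X_va @ W1 + b1)
rmse = np.sqrt(np.mean(((h_va @ W2 + b2).ravel() - y_va) ** 2))
```

Note that standardisation statistics come from the training set alone, one of the data-handling points the review finds is seldom considered.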

2,181 citations

Journal ArticleDOI
TL;DR: An Internet based facility has been developed which allows database clients to interrogate the gridded surfaces at any desired location, and analyse the temporal and spatial error of the interpolated data.
Abstract: A comprehensive archive of Australian rainfall and climate data has been constructed from ground-based observational data. Continuous, daily time step records have been constructed using spatial interpolation algorithms to estimate missing data. Datasets have been constructed for daily rainfall, maximum and minimum temperatures, evaporation, solar radiation and vapour pressure. Datasets are available for approximately 4600 locations across Australia, commencing in 1890 for rainfall and 1957 for climate variables. The datasets can be accessed on the Internet at http://www.dnr.qld.gov.au/silo. Interpolated surfaces have been computed on a regular 0.05° grid extending from latitude 10°S to 44°S and longitude 112°E to 154°E. A thin plate smoothing spline was used to interpolate daily climate variables, and ordinary kriging was used to interpolate daily and monthly rainfall. Independent cross validation has been used to analyse the temporal and spatial error of the interpolated data. An Internet based facility has been developed which allows database clients to interrogate the gridded surfaces at any desired location.
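The gridding step described above, interpolating scattered station values onto a regular surface with a thin plate smoothing spline, can be sketched with SciPy's radial basis interpolator. The station coordinates, values, and grid spacing below are synthetic assumptions, not SILO data.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)

# Synthetic "stations" scattered over the Australian domain (lon, lat).
stations = rng.uniform([112, -44], [154, -10], size=(50, 2))
# A simple latitude-dependent temperature field with observation noise.
temps = 30 - 0.5 * np.abs(stations[:, 1] + 10) + rng.normal(0, 0.3, 50)

# Thin plate smoothing spline fitted to the scattered observations.
tps = RBFInterpolator(stations, temps, kernel="thin_plate_spline", smoothing=1.0)

# Evaluate on a regular grid over the same extent (coarse 2-degree
# spacing here; the archive uses a 0.05-degree grid).
lon, lat = np.meshgrid(np.arange(112, 154.1, 2.0), np.arange(-44, -9.9, 2.0))
grid = np.column_stack([lon.ravel(), lat.ravel()])
surface = tps(grid).reshape(lon.shape)
```

With `smoothing=0` the spline would honour each station exactly; a positive smoothing parameter trades fidelity at stations for a smoother surface, which is the usual choice for noisy daily observations.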

1,705 citations

Journal ArticleDOI
TL;DR: A revised version of the elementary effects method is proposed, improved in terms of both the definition of the measure and the sampling strategy, having the advantage of a lower computational cost.
Abstract: In 1991 Morris proposed an effective screening sensitivity measure to identify the few important factors in models with many factors. The method is based on computing for each input a number of incremental ratios, namely elementary effects, which are then averaged to assess the overall importance of the input. Despite its value, the method is still rarely used and instead local analyses varying one factor at a time around a baseline point are usually employed. In this piece of work we propose a revised version of the elementary effects method, improved in terms of both the definition of the measure and the sampling strategy. In the present form the method shares many of the positive qualities of the variance-based techniques, having the advantage of a lower computational cost, as demonstrated by the analytical examples. The method is employed to assess the sensitivity of a chemical reaction model for dimethylsulphide (DMS), a gas involved in climate change. Results of the sensitivity analysis open up the ground for model reconsideration: some model components may need a more thorough modelling effort while some others may need to be simplified.
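The core of the elementary effects screen can be sketched directly: each effect is an incremental ratio at a random base point, and averaging absolute effects per input (the mu* measure emphasised by the revision) ranks the inputs. The test function and sampling sizes below are illustrative, not the DMS chemistry model.

```python
import numpy as np

def f(x):
    # Illustrative 3-factor model: x[2] is nearly inactive.
    return x[0] + 2 * x[1] ** 2 + 0.01 * x[2]

def mu_star(f, k, r=20, delta=0.1, seed=0):
    """Mean absolute elementary effect of each of k inputs over r base points."""
    rng = np.random.default_rng(seed)
    effects = np.zeros((r, k))
    for j in range(r):
        x = rng.uniform(0, 1 - delta, size=k)  # base point in the unit cube
        fx = f(x)
        for i in range(k):
            xp = x.copy()
            xp[i] += delta  # perturb one factor at a time
            effects[j, i] = abs((f(xp) - fx) / delta)  # elementary effect
    return effects.mean(axis=0)  # mu*: overall importance of each input

scores = mu_star(f, k=3)
```

Ranking inputs by `scores` identifies the few important factors at far fewer model runs than variance-based methods, which is the screening use case the abstract describes.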

1,528 citations

Journal ArticleDOI
TL;DR: This study illustrates the usefulness of multivariate statistical techniques for analysis and interpretation of complex data sets, and in water quality assessment, identification of pollution sources/factors and understanding temporal/spatial variations in water quality for effective river water quality management.
Abstract: Multivariate statistical techniques, such as cluster analysis (CA), principal component analysis (PCA), factor analysis (FA) and discriminant analysis (DA), were applied for the evaluation of temporal/spatial variations and the interpretation of a large complex water quality data set of the Fuji river basin, generated during 8 years (1995–2002) monitoring of 12 parameters at 13 different sites (14 976 observations). Hierarchical cluster analysis grouped 13 sampling sites into three clusters, i.e., relatively less polluted (LP), medium polluted (MP) and highly polluted (HP) sites, based on the similarity of water quality characteristics. Factor analysis/principal component analysis, applied to the data sets of the three different groups obtained from cluster analysis, resulted in five, five and three latent factors explaining 73.18, 77.61 and 65.39% of the total variance in water quality data sets of LP, MP and HP areas, respectively. The varifactors obtained from factor analysis indicate that the parameters responsible for water quality variations are mainly related to discharge and temperature (natural), organic pollution (point source: domestic wastewater) in relatively less polluted areas; organic pollution (point source: domestic wastewater) and nutrients (non-point sources: agriculture and orchard plantations) in medium polluted areas; and organic pollution and nutrients (point sources: domestic wastewater, wastewater treatment plants and industries) in highly polluted areas in the basin. Discriminant analysis gave the best results for both spatial and temporal analysis. 
It provided an important data reduction as it uses only six parameters (discharge, temperature, dissolved oxygen, biochemical oxygen demand, electrical conductivity and nitrate nitrogen), affording more than 85% correct assignations in temporal analysis, and seven parameters (discharge, temperature, biochemical oxygen demand, pH, electrical conductivity, nitrate nitrogen and ammoniacal nitrogen), affording more than 81% correct assignations in spatial analysis, of three different sampling sites of the basin. Therefore, DA allowed a reduction in the dimensionality of the large data set, delineating a few indicator parameters responsible for large variations in water quality. Thus, this study illustrates the usefulness of multivariate statistical techniques for analysis and interpretation of complex data sets, and in water quality assessment, identification of pollution sources/factors and understanding temporal/spatial variations in water quality for effective river water quality management.
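The PCA step in this workflow, standardising a samples-by-parameters matrix and reporting how much variance the leading components explain, can be sketched as follows. The data are a synthetic stand-in for the 12 monitored parameters, built from three underlying factors to mimic the latent structure the study recovers.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 300, 12

# Synthetic water quality matrix driven by three latent "factors".
latent = rng.normal(size=(n, 3))
loadings = rng.normal(size=(3, p))
data = latent @ loadings + 0.3 * rng.normal(size=(n, p))

# Standardise each parameter, as is usual before PCA on mixed units.
z = (data - data.mean(axis=0)) / data.std(axis=0)

# Eigendecomposition of the correlation matrix gives the components.
cov = np.cov(z, rowvar=False)
eigvals = np.linalg.eigvalsh(cov)[::-1]   # eigenvalues, descending
explained = eigvals / eigvals.sum()
pct_first3 = 100 * explained[:3].sum()    # % variance from first 3 PCs
```

Because the synthetic data really do have three latent factors, the first three components capture most of the variance, analogous to the 65-78% figures the study reports for its three site groups.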

1,481 citations

Journal ArticleDOI
TL;DR: Statistical DownScaling Model (sdsm) facilitates the rapid development of multiple, low-cost, single-site scenarios of daily surface weather variables under current and future regional climate forcing.
Abstract: General Circulation Models (GCMs) suggest that rising concentrations of greenhouse gases will have significant implications for climate at global and regional scales. Less certain is the extent to which meteorological processes at individual sites will be affected. So-called ‘downscaling’ techniques are used to bridge the spatial and temporal resolution gaps between what climate modellers are currently able to provide and what impact assessors require. This paper describes a decision support tool for assessing local climate change impacts using a robust statistical downscaling technique. Statistical DownScaling Model (sdsm) facilitates the rapid development of multiple, low-cost, single-site scenarios of daily surface weather variables under current and future regional climate forcing. Additionally, the software performs ancillary tasks of predictor variable pre-screening, model calibration, basic diagnostic testing, statistical analyses and graphing of climate data. The application of sdsm is demonstrated with respect to the generation of daily temperature and precipitation scenarios for Toronto, Canada by 2040–2069.
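The calibration step at the heart of regression-based statistical downscaling, fitting a linear model that links screened large-scale predictors to a local daily variable, then validating on a held-out period, can be sketched as below. The predictors, coefficients, and site series are synthetic assumptions; this is not the sdsm implementation itself.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000

# Synthetic large-scale predictors (e.g. pre-screened GCM fields).
predictors = rng.normal(size=(n, 4))
beta_true = np.array([2.0, -1.0, 0.5, 0.0])
# Local daily variable at the site, with unexplained weather noise.
site_temp = predictors @ beta_true + rng.normal(0, 0.5, n)

# Model calibration on the first 700 days (intercept plus predictors).
A = np.column_stack([np.ones(700), predictors[:700]])
coef, *_ = np.linalg.lstsq(A, site_temp[:700], rcond=None)

# Diagnostic testing: RMSE on the remaining validation period.
A_va = np.column_stack([np.ones(n - 700), predictors[700:]])
pred = A_va @ coef
rmse = np.sqrt(np.mean((pred - site_temp[700:]) ** 2))
```

Applying the same fitted `coef` to predictors taken from a future GCM run, rather than the validation period, is what turns the calibrated model into a downscaled climate scenario.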

1,327 citations

Performance
Metrics
No. of papers from the Journal in previous years
Year    Papers
2023    122
2022    285
2021    249
2020    224
2019    242
2018    233