
Showing papers in "Stochastic Environmental Research and Risk Assessment in 2016"


Journal ArticleDOI
TL;DR: In this paper, the accuracy of Artificial Neural Network (ANN), Adaptive Neuro-Fuzzy Inference System (ANFIS), wavelet-ANN and wavelet-ANFIS models in predicting monthly water salinity levels of northwest Iran's Aji-Chay River was assessed.
Abstract: The accuracy of Artificial Neural Network (ANN), Adaptive Neuro-Fuzzy Inference System (ANFIS), wavelet-ANN and wavelet-ANFIS in predicting monthly water salinity levels of northwest Iran’s Aji-Chay River was assessed. The models were calibrated, validated and tested using different subsets of monthly records (October 1983 to September 2011) of individual solute (Ca2+, Mg2+, Na+, SO4 2− and Cl−) concentrations (input parameters, meq L−1), and electrical conductivity-based salinity levels (output parameter, µS cm−1), collected by the East Azarbaijan regional water authority. Based on the statistical criteria of coefficient of determination (R2), normalized root mean square error (NRMSE), Nash–Sutcliffe efficiency coefficient (NSC) and threshold statistics (TS), the ANFIS model was found to outperform the ANN model. To develop coupled wavelet-AI models, the original observed data series was decomposed into sub-time series using Daubechies, Symlet or Haar mother wavelets of different lengths (orders), each implemented at three levels. To predict salinity, the decomposed input parameter series were used as input variables in different wavelet order/level-AI model combinations. Hybrid wavelet-ANFIS (R2 = 0.9967, NRMSE = 2.9 × 10−5 and NSC = 0.9951) and wavelet-ANN (R2 = 0.996, NRMSE = 3.77 × 10−5 and NSC = 0.9946) models implementing the db4 mother wavelet decomposition outperformed the ANFIS (R2 = 0.9954, NRMSE = 3.77 × 10−5 and NSC = 0.9914) and ANN (R2 = 0.9936, NRMSE = 3.99 × 10−5 and NSC = 0.9903) models.
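The multilevel decomposition behind these hybrid models can be illustrated with the simplest of the three mother wavelets used, the Haar wavelet. This is a minimal pure-Python sketch (a real study would use a wavelet library such as PyWavelets with the db4 filter); the function names are my own.

```python
def haar_step(signal):
    """One level of the (unnormalized) Haar transform: pairwise averages
    give the approximation, pairwise half-differences give the detail."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def haar_decompose(signal, levels=3):
    """Decompose a series into one approximation plus `levels` detail
    sub-series, mirroring the three-level decompositions in the paper."""
    details = []
    approx = list(signal)
    for _ in range(levels):
        approx, d = haar_step(approx)
        details.append(d)
    return approx, details
```

Each resulting sub-series (approximation plus details) then serves as an input to the ANN or ANFIS model.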

131 citations


Journal ArticleDOI
TL;DR: It is advocated that the RVM model can be employed as a promising machine learning tool for the prediction of evaporative loss.
Abstract: The forecasting of evaporative loss (E) is vital for water resource management and understanding of hydrological processes for farming practices, ecosystem management and hydrologic engineering. This study developed three machine learning algorithms, namely the relevance vector machine (RVM), extreme learning machine (ELM) and multivariate adaptive regression spline (MARS), for the prediction of E using five predictor variables: incident solar radiation (S), maximum temperature (T max), minimum temperature (T min), atmospheric vapor pressure (VP) and precipitation (P). The RVM model is based on the Bayesian formulation of a linear model with an appropriate prior that results in sparse representations. The ELM model is a computationally efficient algorithm based on a single-layer feedforward neural network in which the input weights to the hidden neurons are chosen randomly, and the MARS model is built on a flexible regression algorithm that generally divides the solution space into intervals of predictor variables and fits splines (basis functions) to each interval. By utilizing a random sampling process, the predictor data were partitioned into the training phase (70 % of data) and testing phase (remaining 30 %). The equations for the prediction of monthly E were formulated. The RVM model was devised using the radial basis function, while the ELM model comprised 5 inputs and 10 hidden neurons and used the radial basis activation function, and the MARS model utilized 15 basis functions. The decomposition of variance among the predictor dataset of the MARS model yielded the largest magnitude of the Generalized Cross Validation statistic (≈0.03) when T max was used as an input, followed by relatively lower values (≈0.028, 0.019) for inputs defined by S and VP. This confirmed that the prediction of E drew the largest contributions of the predictive features from T max, verified emphatically by a sensitivity analysis test.
The model performance statistics yielded correlation coefficients of 0.979 (RVM), 0.977 (ELM) and 0.974 (MARS), root-mean-square errors of 9.306, 9.714 and 10.457, and mean absolute errors of 0.034, 0.035 and 0.038, respectively. Despite the small differences in overall prediction skill, the RVM model appeared to be more accurate in the prediction of E. It is therefore advocated that the RVM model can be employed as a promising machine learning tool for the prediction of evaporative loss.
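The three skill scores used to rank the models (correlation coefficient, RMSE, MAE) are straightforward to compute; a minimal sketch:

```python
import math

def skill_scores(obs, pred):
    """Correlation coefficient, RMSE and MAE between observed and
    predicted series, as used to compare the RVM, ELM and MARS models."""
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    r = cov / (so * sp)                                        # Pearson correlation
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / n)
    mae = sum(abs(o - p) for o, p in zip(obs, pred)) / n
    return r, rmse, mae
```

Note that r measures linear association only: a biased model can score r near 1 while RMSE and MAE remain large, which is why all three are reported together.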

121 citations


Journal ArticleDOI
TL;DR: In this paper, the precipitation concentration index (PCI) was calculated at 34 synoptic stations of Iran over a 50-year period (1961–2010), and trends in precipitation and the PCI were analyzed using the Mann–Kendall test.
Abstract: Precipitation is one of the most important meteorological factors directly affecting access to water resources, so its investigation is of paramount importance. In this study, the precipitation concentration index (PCI) was calculated using annual precipitation data from 34 synoptic stations of Iran over a 50-year period (1961–2010). Trends in precipitation and the PCI were analyzed using the Mann–Kendall test after removing the effect of autocorrelation at annual and seasonal time scales. The results of zoning the studied index at the annual time scale revealed that precipitation concentration follows a similar trend within the two 25-year sub-periods. Furthermore, the PCI in the central and southern regions of the country, including the Kerman, Bandarabbas, Yazd, Zahedan, Shahrekord, Birjand, Bushehr, Ahwaz, and Esfahan stations, indicates strong irregularity and high concentration of precipitation. At the annual time scale, none of the studied stations showed regular concentration (PCI < 10). Analysis of the PCI trend during 1961–2010 revealed an insignificant increasing (decreasing) trend at 16 (15) stations for the winter season, and a significant negative trend at the Dezful, Saghez, and Hamedan stations. Similarly, in spring, the Kerman and Ramsar stations exhibited a significant increasing trend in the PCI, implying significant development of precipitation concentration irregularities at these two stations. In summer, the Gorgan station showed a strong and significant irregularity in the PCI, and in autumn, the Tabriz and Zahedan (Babolsar) stations experienced a significant increasing (decreasing) trend in the PCI. At the annual time scale, 50 % of the stations experienced an increasing trend in the PCI. Investigating the changes in the precipitation trend also revealed that, at the annual time scale, about 58 % of the stations had a decreasing trend.
In winter, which is the rainiest season in Iran, about 64 % of the stations experienced a decreasing trend in precipitation, which caused an increasing trend in the PCI. Comparing the spatial distribution of the PCI between the two 25-year sub-periods indicated that the spring PCI increased in the second sub-period, meaning that the irregularity of the precipitation distribution has increased; no significant variations were observed in the other seasons. At the annual time scale, the PCI also increased in the second sub-period because of the increasing trend of precipitation.
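The PCI referred to throughout is Oliver's concentration index, computed from the twelve monthly totals of a year; values below 10 indicate regular (uniform) concentration, roughly 11-20 seasonal concentration, and above 20 strong irregularity. A minimal sketch:

```python
def precipitation_concentration_index(monthly):
    """Oliver's PCI from 12 monthly precipitation totals:
    PCI = 100 * sum(p_i^2) / (sum(p_i))^2.
    A perfectly uniform year gives 100/12 ~= 8.33; all rain in one
    month gives 100."""
    assert len(monthly) == 12, "PCI is defined on 12 monthly totals"
    total = sum(monthly)
    return 100.0 * sum(p * p for p in monthly) / (total * total)
```

The seasonal PCI used in the paper follows the same formula applied to the months of each season, with the scaling adjusted to the number of months.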

102 citations


Journal ArticleDOI
TL;DR: In this article, the effect of serial correlation on change point analyses performed by the Pettitt test was investigated by Monte Carlo experiments involving the first-order autoregressive [AR(1)] process, fractional Gaussian noise (fGn), and the fractionally integrated autoregressive [ARFIMA(1,d,0)] model.
Abstract: The presence of serial correlation in hydro-meteorological time series often makes the detection of deterministic gradual or abrupt changes with tests such as Mann–Kendall (MK) and Pettitt problematic. In this study we investigate the adverse impact of serial correlation on change point analyses performed by the Pettitt test. Building on methods developed for the MK test, different prewhitening procedures devised to remove the serial correlation are examined, and the effects of the sample size and strength of serial dependence on their performance are tested by Monte Carlo experiments involving the first-order autoregressive [AR(1)] process, fractional Gaussian noise (fGn), and the fractionally integrated autoregressive [ARFIMA(1,d,0)] model. Results show that (1) the serial correlation affects the Pettitt test more than tests for slowly varying monotonic trends such as the MK test, both for short-range and long-range persistence; (2) the most efficient prewhitening procedure based on AR(1) involves the simultaneous estimation of step change and lag-1 autocorrelation ρ, and bias correction of ρ estimates; (3) as expected, the effectiveness of the prewhitening procedure strongly depends upon the model selected to remove the serial correlation; (4) prewhitening procedures allow for a better control of the type I error, resulting in rejection rates reasonably close to the nominal values. As ancillary results, (5) we show the ineffectiveness of the original formulation of the so-called trend-free prewhitening (TFPW) method and provide analytical results supporting a corrected version called TFPWcu; and (6) we propose an improved two-stage bias correction of ρ estimates for AR(1) signals.
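The basic AR(1) prewhitening that these procedures build on removes the lag-1 dependence before the Pettitt or MK test is applied; the refinements examined in the paper (simultaneous step-change estimation, bias correction of ρ) go beyond this sketch:

```python
def lag1_autocorrelation(x):
    """Sample lag-1 autocorrelation coefficient (no bias correction)."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[i] - m) * (x[i + 1] - m) for i in range(n - 1))
    den = sum((v - m) ** 2 for v in x)
    return num / den

def prewhiten_ar1(x):
    """Basic AR(1) prewhitening: y_t = x_t - rho * x_{t-1}.
    The change-point or trend test is then run on the prewhitened series."""
    rho = lag1_autocorrelation(x)
    return [x[i] - rho * x[i - 1] for i in range(1, len(x))], rho
```

As the paper's result (2) indicates, estimating ρ naively on a series that also contains a step change biases ρ upward, which is why the simultaneous-estimation variant performs better.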

88 citations


Journal ArticleDOI
TL;DR: A machine learning CA model with nonlinear transition rules based on least squares support vector machines (LS-SVM) is presented to simulate urban growth; comparison shows that the MachCA model produces more hits and fewer misses and false alarms due to its capability for capturing the spatial complexity of urban dynamics.
Abstract: A critical issue in urban cellular automata (CA) modeling concerns the identification of transition rules that generate realistic urban land use patterns. Recent studies have demonstrated that linear methods cannot sufficiently delineate the extraordinarily complex boundaries between urban and non-urban areas, and as most urban CA models simulate transitions across these boundaries, there is an urgent need for good methods to facilitate such delineations. This paper presents a machine learning CA model (termed MachCA) with nonlinear transition rules based on least squares support vector machines (LS-SVM) to simulate such urban growth. By projecting the input dataset into a high-dimensional space using the LS-SVM method, an optimal hyper-plane is constructed to separate the complex boundaries between urban and non-urban land, thus enabling the retrieval of nonlinear CA transition rules. In the MachCA model, the transition rules are yes–no decisions on whether a cell changes its state or not, the rules being dynamically updated at each iteration of the model implementation. The application of MachCA to simulating urban growth in the Shanghai Qingpu–Songjiang area of China reveals that the spatial configurations of rural–urban patterns can be modeled. A comparison of the MachCA model with a conventional CA model fitted by logarithmic regression (termed LogCA) shows that the MachCA model produces more hits and fewer misses and false alarms due to its capability for capturing the spatial complexity of urban dynamics. This results in improved simulation accuracies, although with less than 1 % deviation between the overall errors produced by the MachCA and LogCA models. Nevertheless, the way the MachCA model retrieves the transition rules provides a new method for simulating the dynamic process of urban growth.
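The CA mechanics are independent of how the transition rule is obtained: MachCA learns the rule with LS-SVM, LogCA with logistic regression. A toy sketch of one CA iteration with a pluggable rule based only on the neighborhood urban fraction (real models feed in distance, slope and land-use covariates as well; all names here are illustrative):

```python
def ca_step(grid, transition_prob, threshold=0.5):
    """One CA iteration on a 0/1 grid: a non-urban cell converts to
    urban (1) if the rule's probability, given its 8-neighborhood urban
    fraction, exceeds the threshold. The state update is synchronous."""
    n, m = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(m):
            if grid[i][j] == 1:
                continue  # already urban; no reverse transition modeled
            neigh = [grid[a][b]
                     for a in range(max(0, i - 1), min(n, i + 2))
                     for b in range(max(0, j - 1), min(m, j + 2))
                     if (a, b) != (i, j)]
            frac = sum(neigh) / len(neigh)
            if transition_prob(frac) > threshold:
                new[i][j] = 1
    return new
```

In MachCA the callable `transition_prob` would be the trained LS-SVM decision function, re-fitted or re-applied at each iteration as the abstract describes.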

85 citations


Journal ArticleDOI
TL;DR: In this article, the authors used precipitation data from the Global Precipitation Climatology Centre to reconstruct historical droughts during different climatic seasons in Balochistan province, Pakistan.
Abstract: Droughts are usually destructive when they coincide with the crop growing season. Cross-seasonal drought characterization can better inform drought mitigation efforts. The present study relies on precipitation data from the Global Precipitation Climatology Centre to reconstruct historical droughts during different climatic seasons in Balochistan province, Pakistan. We identified seasonal drought events based on the standardized precipitation index for each season. The distribution of reconstructed drought events was analyzed to determine their seasonality and to calculate their return periods. Using these return periods, we constructed seasonal drought maps. The study revealed that early winter droughts are frequent in the north of Balochistan, where the return periods of moderate, severe, and extreme droughts are 7, 21, and 55 years, respectively. Severe and extreme late winter droughts are more frequent in the upper north, with return periods of 16 and 35 years, respectively. Early summer droughts occur more frequently in the east, with moderate, severe, and extreme droughts returning every 8, 20, and 60 years, respectively; late summer droughts occur in the northeast, returning every 8, 22, and 65 years, respectively. Rabi droughts occur more frequently in the central and northeastern regions of Balochistan, while more severe kharif droughts occur primarily in the eastern regions. These seasonal droughts were found to be positively correlated with variations in the seasonal rainfall throughout the study area. The findings of this study contribute to our understanding of seasonal drought characteristics and help to inform drought mitigation planning.
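SPI-based drought identification can be sketched with an empirical (plotting-position) variant: rank each seasonal precipitation total, convert the rank to a non-exceedance probability, and map it through the inverse standard normal. The paper fits a parametric distribution to the record instead; droughts are conventionally classed as moderate, severe, or extreme at SPI ≤ −1, −1.5, and −2.

```python
from scipy.stats import norm

def spi_empirical(totals):
    """Empirical SPI sketch: Weibull plotting-position probability of
    each seasonal precipitation total, transformed to a standard normal
    score. Negative values mean drier-than-median seasons."""
    n = len(totals)
    order = sorted(range(n), key=lambda i: totals[i])
    spi = [0.0] * n
    for rank, i in enumerate(order, start=1):
        spi[i] = norm.ppf(rank / (n + 1))   # inverse standard normal CDF
    return spi
```

A seasonal drought event in the study's sense is then a run of seasons whose SPI falls below the chosen class threshold.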

83 citations


Journal ArticleDOI
TL;DR: In this article, a non-stationary frequency analysis of short-duration (1-, 6-, 12-, and 24-h) precipitation extremes at 65 weather stations scattered across South Korea was performed.
Abstract: The conventional approach to the frequency analysis of extreme precipitation is complicated by non-stationarity resulting from climate variability and change. This study utilized a non-stationary frequency analysis to better understand the time-varying behavior of short-duration (1-, 6-, 12-, and 24-h) precipitation extremes at 65 weather stations scattered across South Korea. Trends in precipitation extremes were diagnosed with respect to both annual maximum precipitation (AMP) and peaks-over-threshold (POT) extremes. Non-stationary generalized extreme value (GEV) and generalized Pareto distribution (GPD) models, with model parameters expressed as linear functions of time, were applied to AMP and POT extremes, respectively. Trends detected using the Mann–Kendall test revealed that the stations showing an increasing trend in AMP extremes were concentrated in the mountainous areas (the northeast and southwest regions) of South Korea. Trend tests on POT extremes provided fairly different results, with a significantly reduced number of stations showing an increasing trend and with some stations showing a decreasing trend. For most of the stations showing a statistically significant trend, non-stationary GEV and GPD models significantly outperformed their stationary counterparts, particularly for precipitation extremes with shorter durations. Due to a significantly increasing trend in the POT frequency found at a considerable number of stations (about 10 stations for each rainfall duration), the performance of modeling POT extremes was further improved with a non-homogeneous Poisson model. The large differences in design storm estimates between stationary and non-stationary models (design storm estimates from stationary models were significantly lower than those of non-stationary models) demonstrated the challenges of relying on the stationarity assumption when planning the design and management of water facilities. This study also highlighted the need for caution when quantifying design storms from POT and AMP extremes by showing a large discrepancy between the estimates from the two approaches.
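The core of a non-stationary GEV fit of this kind is a likelihood in which the location parameter depends on time. A sketch of the negative log-likelihood with mu(t) = mu0 + mu1·t (scale and shape held constant for brevity; the actual models in the paper also allow other parameters to vary):

```python
import numpy as np
from scipy.stats import genextreme

def ns_gev_nll(params, t, x):
    """Negative log-likelihood of AMP extremes x observed at times t,
    under a GEV whose location varies linearly in time."""
    mu0, mu1, log_sigma, xi = params
    mu = mu0 + mu1 * t
    # scipy's shape c is the negative of the hydrological shape parameter xi
    return -genextreme.logpdf(x, c=-xi, loc=mu, scale=np.exp(log_sigma)).sum()
```

Minimizing this over (mu0, mu1, log_sigma, xi), e.g. with scipy.optimize.minimize, gives the non-stationary fit; fixing mu1 = 0 recovers the stationary model, and the two can be compared with a likelihood-ratio test, which is how "significantly outperformed" is typically established.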

77 citations


Journal ArticleDOI
TL;DR: Multivariate principal component analysis showed significant anthropogenic contributions of Cr, Cu, As and Cd in soil and Cr, Ni, Cu and Pb in vegetables, showing that the inhabitants who consume contaminated vegetables are exposed chronically to metal pollution with carcinogenic and non-carcinogenic risks.
Abstract: This study assessed the concentrations of seven common heavy metals, namely chromium (Cr), nickel (Ni), copper (Cu), zinc (Zn), arsenic (As), cadmium (Cd) and lead (Pb), in agricultural soils and the most commonly consumed vegetable species, and their possible human health risk in Bogra District, northern Bangladesh. The ranges of mean concentrations were 20.47–59.09, 20.09–69.13, 20.50–74.99, 47.46–128.06, 5.04–23.14, 1.67–6.90 and 31.81–67.65 mg/kg for Cr, Ni, Cu, Zn, As, Cd and Pb, respectively. Accumulation factors (AFs) of heavy metals from soil to the vegetables exhibited the highest values for Cu (0.56 ± 0.16), followed by Zn (0.39 ± 0.16). Multivariate principal component analysis showed significant anthropogenic contributions of Cr, Cu, As and Cd in soil and Cr, Ni, Cu and Pb in vegetables. Target hazard quotients (THQs) for individual metals (except As) were below 1, suggesting that people would not experience significant health hazards if they ingest a single metal from one species of vegetable. However, the total metal THQ (1.530–5.575) signifies a potential non-carcinogenic health hazard to highly exposed consumers in Bangladesh. The target carcinogenic risks (TRs) of As (0.275–1.108) and Pb (0.001–0.026) through consumption of vegetables were higher than the USEPA threshold level. From the health point of view, this study showed that inhabitants who consume contaminated vegetables are chronically exposed to metal pollution, with carcinogenic and non-carcinogenic risks.
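The THQ underlying these figures is the USEPA-style ratio of the estimated average daily metal dose to the oral reference dose; a sketch with illustrative exposure parameters (not the paper's values):

```python
def target_hazard_quotient(conc_mg_per_kg, intake_g_per_day, rfd_mg_per_kg_day,
                           body_weight_kg=60.0, exposure_freq_days=365,
                           exposure_years=70):
    """THQ = average daily dose / oral reference dose (RfD).
    Defaults (60 kg adult, 365 d/yr over 70 yr) are illustrative."""
    averaging_time_days = 365 * exposure_years
    daily_dose = (exposure_freq_days * exposure_years
                  * intake_g_per_day * 1e-3          # g -> kg of vegetable
                  * conc_mg_per_kg) / (body_weight_kg * averaging_time_days)
    return daily_dose / rfd_mg_per_kg_day
```

THQ > 1 flags a potential non-carcinogenic hazard; THQs for individual metals are summed to give the total metal THQ reported above.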

73 citations


Journal ArticleDOI
Charles Onyutha1
TL;DR: In this paper, a graphical approach for identifying trends or sub-trends using the nonparametric cumulative rank difference (CRD) is proposed; to confirm the significance of the visualized trend, the CRD is translated from a graphical to a statistical metric.
Abstract: In hydro-meteorological trend analysis, an alteration in the given variable is detected by considering the long-term series as a whole. Whereas the long-term trend may be absent, the significance of hidden (short-durational) sub-trends in the series may be important for environmental management practices. In this paper, a graphical approach for identifying trends or sub-trends using the nonparametric cumulative rank difference (CRD) was proposed. To confirm the significance of the visualized trend, the CRD was translated from a graphical to a statistical metric. To assess its capability, the performance of the CRD method was compared with that of the well-known Mann–Kendall (MK) test. The graphical and statistical CRD techniques were applied to detect trends and sub-trends in the annual rainfall of 10 River Nile riparian countries (RNRCs). The co-occurrence of the trend evolutions in the rainfall with those of the large-scale ocean–atmosphere interactions was analyzed. The power of the CRD method was shown to closely agree with that of the MK test under various circumstances of sample sizes, variations, linear trend slopes, and serial correlations. At the significance level α = 5 %, long-term trends were found present in 30 % of the RNRCs. However, at α = 5 %, the main downward (upward) sub-trends were found significant in 30 % (60 %) of the RNRCs. Generally, at α = 1 %, the trend evolutions in the rainfall of the RNRCs were found to be linked to influences from the Atlantic and Indian Oceans. At α = 5 %, influences from the Pacific Ocean on the rainfall trends of some countries were also evident.

71 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed a supervised committee machine artificial intelligent (SCMAI) model, which combines the results of individual AI models using a supervised artificial neural network, for better prediction of vulnerability.
Abstract: Vulnerability maps are designed to show areas of greatest potential for groundwater contamination on the basis of hydrogeological conditions and human impacts. The objectives of this research are (1) to assess groundwater vulnerability using the DRASTIC method and (2) to improve the DRASTIC method for the evaluation of groundwater contamination risk using AI methods, namely ANN, SFL, MFL, NF and SCMAI approaches. This optimization method is illustrated using a case study. For this purpose, a DRASTIC model is developed using seven parameters. For validating the contamination risk assessment, a total of 243 groundwater samples were collected from different aquifer types of the study area to analyze NO3− concentration. To develop the AI and SCMAI models, the 243 data points are divided into two sets, training and validation, based on a cross-validation approach. The vulnerability indices calculated with the DRASTIC method are corrected by the NO3− data used in the training step. The input data of the AI models comprise the seven parameters of the DRASTIC method, while the output is the corrected vulnerability index based on the NO3− concentration data from the study area, which is called the groundwater contamination risk. In other words, there is a target value (known output) which is estimated from the DRASTIC vulnerability and NO3− concentration values. After model training, the AI models are verified by the second NO3− concentration dataset. The results revealed that NF and SFL produced acceptable performance while ANN and MFL had poor prediction. A supervised committee machine artificial intelligence (SCMAI) model, which combines the results of individual AI models using a supervised artificial neural network, was developed for better prediction of vulnerability.
The performance of SCMAI was also compared to those of the simple averaging and weighted averaging committee machine intelligent (CMI) methods. As a result, the SCMAI model produced reliable estimates of groundwater contamination risk.
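The committee idea is easiest to see for the two averaging baselines that SCMAI was compared against; SCMAI replaces these fixed weights with a supervised ANN trained on the individual model outputs. A sketch of the baseline combiners (function names are my own):

```python
def committee_simple(predictions):
    """Simple-averaging committee: mean of the individual model outputs,
    element-wise across samples."""
    return [sum(p) / len(p) for p in zip(*predictions)]

def committee_weighted(predictions, weights):
    """Weighted-averaging committee: fixed weights per model, e.g.
    proportional to each model's validation skill."""
    total = sum(weights)
    return [sum(w * v for w, v in zip(weights, p)) / total
            for p in zip(*predictions)]
```

The supervised variant generalizes this: instead of a fixed linear combination, a small neural network learns a possibly nonlinear mapping from the tuple of model outputs to the target risk value.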

63 citations


Journal ArticleDOI
TL;DR: In this article, a spatial framework integrating naive Bayes and geographic information system (GIS) was developed to assess flooding hazard at regional scale in Bowen Basin in Australia as a case study.
Abstract: Flooding hazard evaluation is the basis of flooding risk assessment, which has significance for the natural environment, human life and the social economy. This study develops a spatial framework integrating naive Bayes (NB) and a geographic information system (GIS) to assess flooding hazard at the regional scale. The methodology was demonstrated in the Bowen Basin in Australia as a case study. The inputs into the framework are five indices: elevation, slope, soil water retention, drainage proximity and density. They were derived from spatial data processed in ArcGIS. NB, a simplified and efficient type of Bayesian method, was used, with the assistance of remotely sensed flood inundation extent in the sampling process, to infer flooding probability on a cell-by-cell basis over the study area. A likelihood-based flooding hazard map was output from the GIS-based framework. The results reveal that elevation and slope have more significant impacts on the evaluation than the other input indices. The area of high likelihood of flooding hazard is mainly located in the west and the southwest, where there is a high water channel density, and along the water channels in the east of the study area. High likelihood of flooding hazard covers 45 % of the total area, medium likelihood accounts for about 12 %, and low and very low likelihood represent 19 and 24 %, respectively. The results provide baseline information for identifying and assessing flooding hazard when developing adaptation strategies and implementing mitigation measures in the future. The framework and methodology developed in the study offer an integrated approach to the evaluation of flooding hazard with spatial distributions and indicative uncertainties. It can also be applied to other hazard assessments.
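The per-cell inference can be sketched as a Gaussian naive Bayes classifier over the indices, trained on cells labeled flooded/dry from the remotely sensed inundation extent (one generic feature vector shown; function names are my own):

```python
import math

def gaussian_pdf(x, mean, var):
    """Univariate normal density."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def nb_flood_probability(cell, flooded, dry):
    """Naive Bayes P(flood | indices) for one cell: each index (e.g.
    elevation, slope) is modeled as Gaussian within each class, and the
    per-index likelihoods are multiplied (the 'naive' independence step)."""
    def class_likelihood(samples):
        like = len(samples) / (len(flooded) + len(dry))   # class prior
        for k in range(len(cell)):
            vals = [s[k] for s in samples]
            mean = sum(vals) / len(vals)
            var = sum((v - mean) ** 2 for v in vals) / len(vals) or 1e-9  # guard zero variance
            like *= gaussian_pdf(cell[k], mean, var)
        return like
    lf, ld = class_likelihood(flooded), class_likelihood(dry)
    return lf / (lf + ld)
```

In the study the resulting cell probabilities are then binned into the very low/low/medium/high likelihood classes mapped in GIS.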

Journal ArticleDOI
TL;DR: In this article, the authors focus on the assessment of the degree of rarity of the 2014 California drought and highlight some critical aspects of multivariate frequency analysis that are often overlooked and should be carefully taken into account for a correct interpretation of the results.
Abstract: The joint occurrence of extreme hydroclimatic events, such as simultaneous precipitation deficit and high temperature, results in the so-called compound events, and has a serious impact on risk assessment and mitigation strategies. Multivariate frequency analysis (MFA) allows a probabilistic quantitative assessment of this risk under uncertainty. Analyzing precipitation and temperature records in the contiguous United States (CONUS), and focusing on the assessment of the degree of rarity of the 2014 California drought, we highlight some critical aspects of MFA that are often overlooked and should be carefully taken into account for a correct interpretation of the results. In particular, we show that an informative exploratory data analysis (EDA) devised to check the basic hypotheses of MFA, a suitable assessment of the sampling uncertainty, and a better understanding of probabilistic concepts can help to avoid misinterpretation of univariate and multivariate return periods, and incoherent conclusions concerning the risk of compound extreme hydroclimatic events. Empirical results show that the dependence between precipitation deficit and temperature across the CONUS can be positive, negative or not significant and does not exhibit significant changes in the last three decades. Focusing on the 2014 California drought as a compound event and based on the data used, the probability of occurrence strongly depends on the selected variables and how they are combined, and is affected by large uncertainty, thus preventing definite conclusions about the actual degree of rarity of this event.
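One of the concepts the authors warn about is the difference between univariate and multivariate return periods. For annual data, the empirical "AND" return period of a compound event is simply the reciprocal of the joint exceedance frequency; a sketch (the paper works with fitted joint distributions and their sampling uncertainty, not raw counts):

```python
def and_return_period(x_series, y_series, x_thr, y_thr):
    """Empirical 'AND' return period of the compound event
    {x >= x_thr and y >= y_thr} (e.g. precipitation deficit and
    temperature), in units of the sampling interval (years for
    annual series)."""
    n = len(x_series)
    joint = sum(1 for x, y in zip(x_series, y_series)
                if x >= x_thr and y >= y_thr)
    if joint == 0:
        return float("inf")   # event never observed in the record
    return n / joint
```

The "OR" return period (either threshold exceeded) is always shorter than the "AND" one for the same thresholds, which is precisely the kind of distinction whose neglect the authors argue leads to misinterpreted rarity estimates.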

Journal ArticleDOI
TL;DR: A notable insight is that one can simulate any vector random field whose direct and cross-covariance functions are continuous and absolutely integrable, provided that one knows the analytical expression of their spectral density, without the need for these spectral densities to have a bounded support.
Abstract: We propose a spectral turning-bands approach for the simulation of second-order stationary vector Gaussian random fields. The approach improves existing spectral methods through coupling with importance sampling techniques. A notable insight is that one can simulate any vector random field whose direct and cross-covariance functions are continuous and absolutely integrable, provided that one knows the analytical expression of their spectral densities, without the need for these spectral densities to have a bounded support. The simulation algorithm is computationally faster than circulant-embedding techniques, lends itself to parallel computing and has a low memory storage requirement. Numerical examples with varied spatial correlation structures are presented to demonstrate the accuracy and versatility of the proposal.
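The essence of spectral simulation is that a stationary Gaussian field can be synthesized as a superposition of random cosines whose frequencies follow the normalized spectral density; importance sampling changes how those frequencies are drawn. A scalar, one-dimensional sketch of the randomization idea (the paper handles the vector, multidimensional case via turning bands; all names here are my own):

```python
import numpy as np

def spectral_simulate(x, n_harmonics=500, corr_length=1.0, rng=None):
    """Randomization spectral method, scalar 1-D sketch: sum of random
    cosines with frequencies drawn from the spectral density. Drawing
    omega ~ N(0, 1/a^2) yields, in the many-harmonics limit, a unit-
    variance Gaussian covariance exp(-h^2 / (2 a^2))."""
    if rng is None:
        rng = np.random.default_rng()
    omega = rng.normal(0.0, 1.0 / corr_length, n_harmonics)  # spectral sampling
    phi = rng.uniform(0.0, 2.0 * np.pi, n_harmonics)         # random phases
    return np.sqrt(2.0 / n_harmonics) * np.cos(np.outer(x, omega) + phi).sum(axis=1)
```

The covariance of the simulated field equals the characteristic function of the frequency distribution, which is what lets one target any continuous, absolutely integrable covariance by choosing (or importance-sampling) the frequency law accordingly.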

Journal ArticleDOI
TL;DR: In this article, the authors present a rigorous computational framework for visualizing uncertainty of tsunami hazard and risk assessment, which consists of three modules: (i) earthquake source characterization and stochastic simulation of slip distribution, (ii) tsunami propagation and inundation, and (iii) tsunami damage assessment and loss estimation.
Abstract: This study presents a rigorous computational framework for visualizing uncertainty in tsunami hazard and risk assessment. The methodology consists of three modules: (i) earthquake source characterization and stochastic simulation of slip distribution, (ii) tsunami propagation and inundation, and (iii) tsunami damage assessment and loss estimation. It takes into account numerous stochastic tsunami scenarios to evaluate the uncertainty propagation of earthquake source characteristics in probabilistic tsunami risk analysis. An extensive Monte Carlo tsunami inundation simulation is implemented for the 2011 Tohoku tsunami (focusing on Rikuzentakata along the Tohoku coast of Japan) using 726 stochastic slip models derived from eleven inverted source models. By integrating the tsunami hazard results with empirical tsunami fragility functions, probabilistic tsunami risk analysis and loss estimation are carried out; outputs from the analyses are displayed using various visualization methods. The developed framework is comprehensive, and can provide valuable insights in promoting proactive tsunami risk management and in improving emergency response capability.
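Module (iii) hinges on the fragility functions; empirical tsunami fragility curves are typically lognormal in inundation depth. A sketch with illustrative median and dispersion parameters (not the paper's fitted values):

```python
import math

def fragility(depth_m, median_m=2.0, beta=0.6):
    """Lognormal tsunami fragility curve: probability of structural
    damage given inundation depth. Median and beta are illustrative."""
    if depth_m <= 0:
        return 0.0
    z = (math.log(depth_m) - math.log(median_m)) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # standard normal CDF

def expected_loss(depth_scenarios, value_per_building, n_buildings):
    """Loss averaged over stochastic inundation scenarios (one
    representative depth per Monte Carlo run, for simplicity)."""
    per_run = [fragility(d) * value_per_building * n_buildings
               for d in depth_scenarios]
    return sum(per_run) / len(per_run)
```

In the actual framework each of the 726 slip models yields a full inundation depth map, so the loss is aggregated cell by cell before averaging; the spread of per-run losses is what the visualizations of uncertainty convey.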

Journal ArticleDOI
TL;DR: In this article, the authors investigated the seasonal hydrochemical evolution of coastal groundwater resources in Urmia plain, NW Iran, taking into account that saltwater intrusion is a dynamic process, and that seasonal variations in the balance of the aquifer cause changes in groundwater chemistry.
Abstract: Coastal aquifers are under threat of salinization in most parts of the world. This work investigated the seasonal hydrochemical evolution of coastal groundwater resources in the Urmia plain, NW Iran. Two recently proposed methods have been used to compare, recognize and understand the temporal and spatial evolution of saltwater intrusion in a coastal alluvial aquifer. The study takes into account that saltwater intrusion is a dynamic process, and that seasonal variations in the balance of the aquifer cause changes in groundwater chemistry. Pattern diagrams, which constitute the outcome of several hydrochemical processes, have traditionally been used to characterize vulnerability to sea/saltwater intrusion. However, the formats of such diagrams do not facilitate the geospatial analysis of groundwater quality, thus limiting the ability of spatio-temporal mapping and monitoring. This deficiency calls for methodologies which can translate information from diagrams such as the Piper diagram into a format that can be mapped spatially. The distribution of groundwater chemistry types in the Urmia plain, based on the modified Piper diagram using the GQIPiper(mix) and GQIPiper(dom) indices, indicates that mixed Ca–Mg–Cl and Ca–HCO3 are the dominant water types in the wet and dry seasons, respectively. In this study, a groundwater quality index specific to seawater intrusion (GQISWI) was used to check its efficiency for the groundwater samples affected by the hypersaline Lake Urmia, Iran. Analysis of the main processes by means of the Hydrochemical Facies Evolution Diagram (HFE-Diagram) provides essential knowledge about the main hydrochemical processes. Subsequently, analysis of the spatial distribution of hydrochemical facies using heatmaps helps to identify the general state of the aquifer with respect to saltwater intrusion during different sampling periods.
The HFE-D results appear to be very successful for differentiating variations through time in the salinization processes caused by saltwater intrusion into the aquifer, distinguishing the phase of saltwater intrusion from the phase of recovery, and their respective evolutions. Both the GQI and HFE-D methods show that hydrochemical variations can be read in terms of the pattern of saltwater intrusion and groundwater quality status. Generally, however, in this case (i.e. saltwater rather than seawater intrusion) the HFE-D method presented better efficiency than the GQI methods (GQIPiper and GQISWI).

Journal ArticleDOI
TL;DR: In this paper, a novel use of artificial neural networks to map the general input/output relationships in actual operating rules of real world dams is presented, which may be added to daily hydrologic routing models for simulating the releases from dams, in regional and global-scale studies.
Abstract: Construction of dams and the resulting water impoundments are among the most common engineering procedures implemented on river systems globally; yet simulating reservoir operation at the regional and global scales remains a challenge in human–earth system interaction studies. Developing a general reservoir operating scheme suitable for use in large-scale hydrological models can improve our understanding of the broad impacts of dam operation. Here we present a novel use of artificial neural networks to map the general input/output relationships in actual operating rules of real-world dams. We developed a new general reservoir operation scheme (GROS) which may be added to daily hydrologic routing models for simulating the releases from dams in regional and global-scale studies. We show the advantage of our model in distinguishing between dams with various storage capacities by demonstrating how it modifies the reservoir operation in response to changes in dam capacity. Embedding GROS in a water balance model, we analyze the hydrological impact of dam size as well as the distribution pattern of dams within a drainage basin, and conclude that for large-scale studies it is generally acceptable to aggregate the capacity of smaller dams and instead model a hypothetical larger dam with the same total storage capacity; however, we suggest limiting the aggregation area to HUC 8 sub-basins (approximately equal to the area of a 60 km or a 30 arc-minute grid cell) to avoid exaggerated results.
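The role a scheme like GROS plays in a routing model can be sketched as a mass-balance loop with a pluggable operating rule: the trained ANN would replace the hand-written callable below, mapping reservoir state and inflow to a release. (Spill handling here is deliberately simplified, and all names are my own.)

```python
def simulate_reservoir(inflows, capacity, release_rule, storage0=0.0):
    """Daily reservoir routing with a pluggable operating rule. The rule
    maps (storage fraction, inflow) to a desired release; the release is
    capped by available water, and storage is capped at capacity (any
    excess is treated as spill and dropped in this sketch)."""
    storage, releases = storage0, []
    for q in inflows:
        r = min(release_rule(storage / capacity, q), storage + q)
        storage = min(storage + q - r, capacity)
        releases.append(r)
    return releases, storage
```

The aggregation experiment described above amounts to replacing several such loops (one per small dam) by a single loop whose `capacity` is the sum of the individual capacities.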

Journal ArticleDOI
TL;DR: In this paper, an existing hydrologic model based on the principles of system dynamics was modified by using the effective cumulative temperature (>0 °C) to calculate snowmelt rate, and the soil temperature to adjust the influence of the soil's physical state on water infiltration.
Abstract: Snowmelt and water infiltration are two important processes of the hydrological cycle in alpine basins where snowmelt water is a main contributor of streamflow. In insufficiently gauged basins, hydrologic modeling is a useful approach to understand the runoff formation process and to simulate streamflow. In this study, an existing hydrologic model based on the principles of system dynamics was modified by using the effective cumulative temperature (>0 °C) to calculate snowmelt rate, and the soil temperature to adjust the influence of the soil’s physical state on water infiltration. This modified model was used to simulate streamflows in the Kaidu River basin from 1982 to 2002, including normal, high, and low flows categorized by the Z index. Sensitivity analyses, visual inspection, and statistical measures were employed to evaluate the capability of the model to simulate various components of the streamflow. Results showed that the modified model was robust, and able to simulate the three categories of flows well. The model’s ability to reproduce streamflow in low-flow and normal-flow years was better than that in high-flow years. The model was also able to simulate the baseflow. Further, its ability to simulate spring-peak flow was much better than its ability to simulate the summer-peak flow. This study could provide useful information for water managers in determining water allocations as well as in managing water resources.
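Temperature-index snowmelt of the kind the modified model builds on can be sketched in a few lines: melt is proportional to air temperature above 0 °C (the effective temperature) and limited by the snow on hand. The degree-day factor value and function signature below are illustrative assumptions, not the paper's calibrated scheme.

```python
import numpy as np

def degree_day_melt(temps_c, swe0, ddf=3.0):
    """Temperature-index snowmelt: melt is proportional to air temperature
    above 0 deg C and limited by the snow water equivalent (SWE) on hand.
    ddf is an assumed degree-day factor in mm per deg C per day."""
    swe, melt = swe0, []
    for t in temps_c:
        potential = ddf * max(t, 0.0)   # only effective (>0 deg C) temperature melts snow
        m = min(potential, swe)         # cannot melt more snow than exists
        swe -= m
        melt.append(m)
    return np.array(melt), swe

melt, swe_left = degree_day_melt([-2.0, 1.0, 4.0, 6.0], swe0=20.0)
# melt -> [0.0, 3.0, 12.0, 5.0]; remaining SWE -> 0.0
```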

Journal ArticleDOI
Yi Zheng1, Feng Han1
TL;DR: The study results indicate that the MCMC UA has to be management-oriented, that is, management objectives should be factored into the designs of the UA, rather than be considered after the UA.
Abstract: Watershed-scale water quality (WWQ) models are now widely used to support management decision-making. However, significant uncertainty in the model outputs remains a largely unaddressed issue. In recent years, Markov Chain Monte Carlo (MCMC), a category of formal Bayesian approaches for uncertainty analysis (UA), has become popular in the field of hydrological modeling, but its applications to WWQ modeling have been rare. This study systematically evaluated the applicability of MCMC to assessing the uncertainty of WWQ modeling, using Differential Evolution Adaptive Metropolis (DREAM(ZS)) and SWAT as the representative MCMC algorithm and WWQ model, respectively. Nitrate pollution in the Newport Bay watershed was the case study for numerical experiments. It was concluded that the efficiency and effectiveness of an MCMC algorithm depend on some critical designs of the UA, including: (i) how many, and which, model parameters to treat as random in the MCMC analysis; (ii) where to fix the non-random model parameters; and (iii) which criteria to use for stopping the Markov chain. The study results also indicate that the MCMC UA has to be management-oriented, that is, management objectives should be factored into the designs of the UA, rather than be considered after the UA.
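The paper uses DREAM(ZS); as a minimal illustration of the underlying idea, the sketch below runs a plain random-walk Metropolis sampler on a one-parameter toy model. The synthetic data, proposal width, and burn-in length are arbitrary assumptions, and DREAM additionally uses multiple chains with adaptive differential-evolution proposals.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic observations from y = a * x with noise; the true slope is 2
x = np.linspace(0.0, 1.0, 50)
y_obs = 2.0 * x + rng.normal(0.0, 0.1, size=x.size)

def log_posterior(a, sigma=0.1):
    # Gaussian likelihood with known noise level, flat prior on a
    resid = y_obs - a * x
    return -0.5 * np.sum((resid / sigma) ** 2)

# Plain random-walk Metropolis
a_cur, lp_cur = 0.0, log_posterior(0.0)
chain = []
for _ in range(5000):
    a_prop = a_cur + rng.normal(0.0, 0.2)     # symmetric proposal
    lp_prop = log_posterior(a_prop)
    if np.log(rng.random()) < lp_prop - lp_cur:
        a_cur, lp_cur = a_prop, lp_prop       # accept
    chain.append(a_cur)

posterior = np.array(chain[1000:])            # discard burn-in
a_mean = posterior.mean()                     # close to the true slope of 2
```

The three design questions the paper raises map directly onto this sketch: which parameters join `chain`, where the fixed ones sit, and when the loop stops.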

Journal ArticleDOI
TL;DR: In this article, both daily and hourly rainfall observations at 28 rainfall stations were used as inputs to SWAT for daily streamflow simulation in the Upper Huai River Basin, and the results indicated that temporal rainfall resolution could have much impact on the simulation of hydrological process, streamflow, and consequently pollutant transport by SWAT models.
Abstract: Despite the significant role of precipitation in the hydrological cycle, few studies have been conducted to evaluate the impacts of the temporal resolution of rainfall inputs on the performance of SWAT (soil and water assessment tool) models in large river basins. In this study, both daily and hourly rainfall observations at 28 rainfall stations were used as inputs to SWAT for daily streamflow simulation in the Upper Huai River Basin. Study results demonstrated that the SWAT model with hourly rainfall inputs performed better than the model with daily rainfall inputs in daily streamflow simulation, primarily due to its better capability of simulating peak flows during the flood season. The sub-daily SWAT model estimated that 58 % of streamflow was contributed by baseflow, compared to 34 % estimated by the daily model. Using the future daily and 3-h precipitation projections under the RCP (Representative Concentration Pathways) 4.5 scenario as inputs, the sub-daily SWAT model predicted a larger monthly maximum daily flow during wet years than the daily model. The differences between the daily and sub-daily SWAT model simulation results indicated that temporal rainfall resolution can have a considerable impact on the simulation of hydrological processes, streamflow, and consequently pollutant transport by SWAT models. There is an imperative need for more studies to examine the effects of temporal rainfall resolution on the simulation of hydrological and water pollutant transport processes by SWAT in river basins with different environmental conditions.

Journal ArticleDOI
TL;DR: In this paper, the energy inputs and GHG emissions of orange production in the north of Iran were modeled and optimized by artificial neural networks (ANN) and a multi-objective genetic algorithm (MOGA), and the results were compared with those of the data envelopment analysis (DEA) approach.
Abstract: Management of energy use and reduction of greenhouse gas (GHG) emissions in agricultural systems are important topics, and many methods have been proposed in recent years to address them; the selection of an appropriate method has therefore become a concern for researchers. Accordingly, in this study the energy inputs and GHG emissions of orange production in the north of Iran were modeled and optimized by artificial neural networks (ANN) and a multi-objective genetic algorithm (MOGA), and the results were compared with those of the data envelopment analysis (DEA) approach. Results showed that, on average, 25,582.50 MJ ha−1 was consumed in orange orchards in the region, and nitrogen fertilizer accounted for 36.84 % of the total input energy. The outcomes of this study demonstrated that, on average, 803 kg of carbon dioxide equivalent (kgCO2eq.) is emitted per ha, with diesel fuel responsible for 35.7 % of all emissions. The ANN results showed that the networks were capable of modeling crop output and total GHG emissions, with the model of 13-4-2 topology achieving the highest accuracy in both training and testing steps. The optimization of energy consumption using MOGA revealed that the total energy consumption and GHG emissions of orange production can be reduced to 13,519 MJ ha−1 and 261 kgCO2eq. ha−1, respectively. A comparison between MOGA and DEA clearly showed the better performance of MOGA, due to the simultaneous application of different objectives and the global optimum solutions produced by the last generation.

Journal ArticleDOI
TL;DR: In this paper, the results of 12 GCMs for three emission scenarios B1, A1B, and A2 were analyzed for mid- (2046-2065) and end-century (2081-2100) intervals, for six locations of a hydroclimatic transect of Michigan.
Abstract: Predictions of a warmer climate over the Great Lakes region due to global change generally agree on the magnitude of temperature changes, but precipitation projections exhibit dependence on which General Circulation Models and emission scenarios are chosen. To minimize model- and scenario-specific biases, we combined information provided by the 3rd phase of the Coupled Model Intercomparison Project database. Specifically, the results of 12 GCMs for three emission scenarios B1, A1B, and A2 were analyzed for mid- (2046–2065) and end-century (2081–2100) intervals, for six locations of a hydroclimatic transect of Michigan. As a result of Bayesian Weighted Averaging, total annual precipitation averaged over all locations and the three emission scenarios increases by 7 % (mid-)–10 % (end-century), as compared to the control period (1961–1990). The projected changes across seasons are non-uniform, and precipitation decreases of 3 % (mid-) to 5 % (end-century) for the months of August and September are likely. Further, average temperature is very likely to increase by 2.02–2.85 °C by the mid-century and 2.58–4.73 °C by the end-century. Three types of non-additive uncertainty sources due to climate models, anthropogenic forcings, and climate internal variability are addressed. When compared to the emission uncertainty, the relative magnitudes of the uncertainty types for the climate model ensemble and internal variability are 149 and 225 % for mean monthly precipitation, and 127 and 123 %, respectively, for mean monthly temperature. A decreasing trend in frost days and an increasing trend in growing season length are identified. Also, a significant increase in the magnitude and frequency of heavy rainfall events is projected, with relatively more pronounced changes for heavy hourly rainfall as compared to daily events.
Quantifying the inherent natural uncertainty and projecting hourly-based extremes, the study results deliver useful information for water resource stakeholders interested in impacts of climate change on hydro-morphological processes.

Journal ArticleDOI
TL;DR: In this paper, the authors evaluated and compared five different bias correction techniques (BCT) to correct the Global Precipitation Climatology Center (GPCC) data with respect to rain gauges in Iraq, which is located in a semi-arid climatic zone.
Abstract: Long-term historical precipitation data are important in developing metrics for studying the impacts of past hydrologic events (e.g., droughts) on water resources management. Many geographical regions around the world lack long-term historical observations, and to overcome this challenge, Global Precipitation Climatology Center (GPCC) datasets are found to be useful. However, the GPCC data are available at a coarser scale (0.5° resolution); therefore, bias correction techniques are often applied to generate local-scale information before it can be used for decision-making activities. The objective of this study is to evaluate and compare five bias correction techniques (BCTs) for correcting the GPCC data with respect to rain gauges in Iraq, which is located in a semi-arid climatic zone. The BCTs included in this study are: Mean Bias-remove (B), Multiplicative Shift (M), Standardized-Reconstruction (S), Linear Regression (R), and Quantile Mapping (Q). It was observed that the Performance Index (PI) of the BCTs differs in space (i.e., precipitation pattern) and temporal scale (i.e., seasonal and monthly). In general, the PI values for Q and B were better than those of the other three techniques (M, S and R). Comparatively, Q performs better than B during the wet season; however, both techniques performed equally well during average rainy seasons. This study suggests that instead of using a single bias correction technique across different climatic regimes, multiple BCTs need to be evaluated to identify the methodology that best suits the local climatology.
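Of the five techniques, quantile mapping (Q) is the most involved; a minimal empirical version can be sketched as follows. This is a generic illustration, not the paper's implementation, and the toy calibration data assume a simple multiplicative bias.

```python
import numpy as np

def quantile_map(coarse_hist, gauge_hist, values):
    """Empirical quantile mapping: find where each new coarse value sits in
    the historical coarse distribution, then read off the gauge value at
    the same quantile."""
    ranks = np.searchsorted(np.sort(coarse_hist), values) / len(coarse_hist)
    ranks = np.clip(ranks, 0.0, 1.0)
    return np.quantile(gauge_hist, ranks)

# Toy calibration period: the coarse product is uniformly half the gauge value
gauge = np.arange(1.0, 101.0)
coarse = gauge / 2.0
corrected = quantile_map(coarse, gauge, np.array([10.0, 25.0]))
# The mapping recovers the doubling: 10 -> about 20, 25 -> about 50
```

Because the correction is defined quantile by quantile, it adjusts the whole distribution (including extremes) rather than just the mean, which is why Q tends to do well in wet seasons.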

Journal ArticleDOI
Zhiyong Wu1, Yun Mao1, Xiaoyan Li1, Guihua Lu1, Qingxia Lin1, Huating Xu1 
TL;DR: In this paper, the relationship among meteorological, agricultural, and hydrological droughts was analyzed at different time scales in Southwest China using the standardized precipitation evapotranspiration index (SPEI), soil moisture anomaly percentage index (SMAPI), and standardized runoff index (SRI), respectively.
Abstract: Each type of drought has different characteristics in different regions, so it is important to distinguish the different types of droughts and their correlations. Based on gauged precipitation, temperature, simulated soil moisture, and runoff data for the period 1951–2012, the relationships among meteorological, agricultural, and hydrological droughts were analyzed at different time scales in Southwest China. The standardized precipitation evapotranspiration index (SPEI), soil moisture anomaly percentage index (SMAPI), and standardized runoff index (SRI) were used to describe meteorological, agricultural, and hydrological droughts, respectively. The results show a good correlation among the three indices. SMAPI correlated best with the 3-month SPEI and SRI values, indicating that agricultural drought is characterized by a 3-month time scale. The three drought indices displayed similar spatial features, such as drought scope, drought level, and drought center, during the extreme drought of 2009–2010. However, the scope and level indicated by SPEI were larger than those of SMAPI and SRI. The propagation characteristics of the three types of droughts were significantly different. The temporal drought process in typical grid cells shows that meteorological drought occurred ahead of agricultural and hydrological droughts by about 1 and 3 months, respectively. Agricultural drought showed a stable drought process and reasonable timing of drought onset and termination. These results quantify the relationships among the three types of drought and thus provide important supporting evidence for regional drought monitoring and strategic decisions.
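Standardized indices such as SRI aggregate a variable over a time scale (here 3 months) and transform the result to z-scores. The sketch below standardizes the moving sum directly under a normal assumption, whereas the real SPEI/SRI first fit a parametric distribution (e.g. log-logistic); it is only meant to show the time-scale mechanics.

```python
import numpy as np

def standardized_index(monthly, scale=3):
    """k-month standardized index: aggregate over `scale` months, then
    convert to z-scores. Real SPEI/SPI/SRI fit a parametric distribution
    before standardizing; z-scoring the moving sum is a simplification."""
    sums = np.convolve(monthly, np.ones(scale), mode="valid")
    return (sums - sums.mean()) / sums.std()

rng = np.random.default_rng(0)
monthly_runoff = rng.gamma(shape=2.0, scale=50.0, size=240)  # 20 years, monthly
sri3 = standardized_index(monthly_runoff, scale=3)           # 3-month index
# sri3 has zero mean and unit variance; values below about -1 flag drought
```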

Journal ArticleDOI
TL;DR: The results of a case study demonstrated that the methods in this paper provide results consistent with the actual lake health condition, showing that the proposed method is efficient and worth generalizing.
Abstract: As lake ecosystem assessment is the foundation of lake monitoring, environmental management and ecological restoration, a new concept of lake ecosystem health and a driving force–pressure–state–impact–response–management framework were proposed to identify the causal relationships within the system, and a health distance model was adopted to represent the health level of the ecosystem. An assessment indicator system comprising water quality, ecological and socio-economic criteria was established. The evaluation models were applied to assess the ecosystem health level of a typical lake, Nansi Lake, China. Depending on the value of health distance, the health level was classified into five grades: very healthy, healthy, generally healthy, sub-healthy and diseased. Using field investigation data and statistical data within the theory and applied models, the results of the comprehensive assessment show that: (1) the health distances of the water quality indicators, ecological indicators and socio-economic indicators, and the comprehensive health distance, were 0.3989, 0.2495, 0.4983 and 0.4362, respectively. The overall health level was generally healthy. The ecological indicators were in a healthy condition, indicating high stability. The distance for water quality showed a tendency to approach the generally healthy level. As the health distance of the socio-economic indicators reflects adverse impacts from human activities, more effective measures need to be developed. (2) The results of the case study demonstrated that the methods in this paper provide results consistent with the actual lake health condition. Therefore, this paper shows that the proposed method is efficient and worth generalizing.
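The paper's exact health distance formulation is not reproduced here; a plausible sketch is a weighted distance between normalized indicator values and their ideal (healthy) states. All indicator names, bounds, and weights below are hypothetical.

```python
import numpy as np

def health_distance(values, ideal, worst, weights):
    """Hypothetical health-distance score in [0, 1]: each indicator is
    normalized between its ideal (0, fully healthy) and worst (1, diseased)
    value, then combined as a weighted Euclidean distance from the ideal."""
    values, ideal, worst = map(np.asarray, (values, ideal, worst))
    norm = np.clip((values - ideal) / (worst - ideal), 0.0, 1.0)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return float(np.sqrt(np.sum(w * norm ** 2)))

# Hypothetical indicators: total nitrogen (mg/L), chlorophyll-a (ug/L),
# and a socio-economic pressure index, with assumed bounds and weights
hd = health_distance(values=[1.2, 30.0, 0.5],
                     ideal=[0.2, 5.0, 0.0],
                     worst=[2.0, 60.0, 1.0],
                     weights=[0.4, 0.3, 0.3])
# hd is about 0.51, i.e. roughly midway between healthy and diseased
```

Grade thresholds on this [0, 1] scale would then delimit the five health levels, with smaller distances meaning healthier states.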

Journal ArticleDOI
TL;DR: In this article, the authors present an analysis of observed changes and future projections for meteorological drought for four different time scales (1, 3, 6 and 12 months) in the Beijiang River basin, South China, on the basis of the standardized precipitation evapotranspiration index (SPEI).
Abstract: It is expected that climate warming will be experienced through increases in the magnitude and frequency of extreme events, including droughts. This paper presents an analysis of observed changes and future projections of meteorological drought for four different time scales (1 month, and 3, 6 and 12 months) in the Beijiang River basin, South China, on the basis of the standardized precipitation evapotranspiration index (SPEI). Observed changes in meteorological drought were analysed at 24 meteorological stations from 1969 to 2011. Future meteorological drought was projected based on the representative concentration pathway (RCP) scenarios RCP4.5 and RCP8.5, as projected by the regional climate model RegCM4.0. The statistical significance of the meteorological drought trends was checked with the Mann–Kendall method. The results show that drought has become more intense and more frequent in most parts of the study region during the past 43 years, mainly owing to a decrease in precipitation. Furthermore, long-term dryness is expected to be more pronounced than short-term dryness. Validation of the model simulation indicates that RegCM4.0 provides a good simulation of the characteristic values of SPEIs. During the twenty-first century, significant drying trends are projected for most parts of the study region, especially in the southern part of the basin. Furthermore, the drying trends for RCP8.5 (or for long time scales) are more pronounced than for RCP4.5 (or for short time scales). Compared to the baseline period 1971–2000, the frequency of drought for RCP4.5 (RCP8.5) tends to increase (decrease) in 2021–2050 and decrease (increase) in 2051–2080. The results of this paper will be helpful for efficient water resources management in the Beijiang River basin under climate warming.
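The Mann–Kendall test used to check trend significance can be sketched as follows; this minimal version omits the tie correction that a full implementation would include for series with repeated values.

```python
import math
import numpy as np

def mann_kendall(x):
    """Mann-Kendall trend test (no tie correction). Returns the S statistic
    and the standard normal score Z; |Z| > 1.96 indicates a significant
    trend at the 5 % level."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = 0.0
    for i in range(n - 1):
        s += np.sign(x[i + 1:] - x[i]).sum()   # signs of later-minus-earlier pairs
    var_s = n * (n - 1) * (2 * n + 5) / 18.0   # variance of S under no trend
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

s_up, z_up = mann_kendall(np.arange(20.0))     # strictly increasing series
# s_up = 190 (every pair increases); z_up is well above 1.96
```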

Journal ArticleDOI
TL;DR: In this article, the authors proposed a method of regionalization based on the probability distribution function of model parameters, which accounts for the variability in the catchment characteristics, and the results indicated that the ensemble simulations in the ungauged basins closely matched the observed streamflow data.
Abstract: Regionalization of model parameters, by developing an appropriate functional relationship between the parameters and basin characteristics, is one of the potential approaches to employing hydrological models in ungauged basins. While this is a widely accepted procedure, the uniqueness of watersheds and the equifinality of parameters introduce considerable uncertainty into simulations in ungauged basins. This study proposes a method of regionalization based on the probability distribution function of model parameters, which accounts for the variability in the catchment characteristics. It is envisaged that the probability distribution function represents the characteristics of the model parameter, and when regionalized, the earlier concerns can be addressed appropriately. The method employs probability distributions of parameters, derived from gauged basins, to regionalize by regressing them against the catchment attributes. These regional functions are used to develop the parameter characteristics in ungauged basins based on the catchment attributes. The proposed method is illustrated using the soil and water assessment tool (SWAT) model for ungauged basin prediction. For this numerical exercise, eight watersheds spanning different climatic settings in the USA are considered. While all the basins considered in this study were gauged, one of them was assumed to be ungauged (pseudo-ungauged) in order to evaluate the effectiveness of the proposed methodology in ungauged basin simulation. The process was repeated by considering representative basins from different climatic and landuse scenarios as pseudo-ungauged. The results of the study indicated that the ensemble simulations in the ungauged basins closely matched the observed streamflow. The simulation efficiency varied between 57 and 61 % in ungauged basins.
The regional function was able to generate parameter characteristics that closely matched the original probability distributions derived from observed streamflow data.

Journal ArticleDOI
TL;DR: In this paper, the authors developed a modeling procedure to quantify the added value of including climate information in a dengue model for the 76 provinces of Thailand, from 1982-2013.
Abstract: Dengue is the world’s most important vector-borne viral disease. The dengue mosquito and virus are sensitive to climate variability and change. Temperature, humidity and precipitation influence mosquito biology, abundance and habitat, and the virus replication speed. In this study, we develop a modelling procedure to quantify the added value of including climate information in a dengue model for the 76 provinces of Thailand, from 1982–2013. We first developed a seasonal-spatial model, to account for dependency structures from 1 month to the next and between provinces. We then tested precipitation and temperature variables at varying time lags, using linear and nonlinear functional forms, to determine an optimum combination of time lags to describe dengue relative risk. Model parameters were estimated using integrated nested Laplace approximation. This approach provides a novel opportunity to perform model selection in a Bayesian framework, while accounting for underlying spatial and temporal dependency structures and linear or nonlinear functional forms. We quantified the additional variation explained by interannual climate variations, above that provided by the seasonal-spatial model. Overall, an additional 8 % of the variance in dengue relative risk can be explained by accounting for interannual variations in precipitation and temperature in the previous month. The inclusion of nonlinear functions of climate in the model framework improved the model for 79 % of the provinces. Therefore, climate forecast information could significantly contribute to a national dengue early warning system in Thailand.

Journal ArticleDOI
TL;DR: Wang et al., as mentioned in this paper, applied the joint approach of the Hakanson risk index (RI) and Monte Carlo simulation, in which ecological risk is expressed as a probability distribution of RI values instead of single-point calculations.
Abstract: An overall and comparative ecological risk assessment of heavy metals (including Cd, Cr, Cu, Pb, Zn, Hg and As) in surface sediments from China’s eight major aquatic bodies was conducted to better understand their potential risks on a national scale. By applying the joint approach of the Hakanson risk index (RI) and Monte Carlo simulation, ecological risk in this work is expressed as a probability distribution of RI values instead of single-point calculations, to reflect the uncertainties in the risk assessment process. The results show that the highest ecological risks posed by heavy metals existed in the Xiangjiang River and Dianchi Lake. Although only a slim margin above the high-risk threshold (651.88/600 = 1.08 and 700.61/600 = 1.17) was identified based on average RI values, the probabilities of a high risk level derived from Monte Carlo simulation reached as high as 56.7 and 52.9 % in these two aquatic bodies, respectively, while the probability of a low risk level was less than 1.6 %. Furthermore, the risk was mainly contributed by Hg and Cd, discharged through local intensive mining and industrial activities. The findings indicate that rigid control and effective management measures to prevent heavy metal pollution are urgently needed in China, especially for the high-risk aquatic bodies. This study shows that the joint approach can be used to identify the high-risk water bodies and the major metal pollutants. It may avoid overestimating or underestimating the ecological risk and provide more decision-making support for risk alleviation in polluted aquatic bodies.
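The joint RI/Monte Carlo approach can be sketched as repeated sampling of metal concentrations followed by the Hakanson aggregation RI = Σ TR_i × C_i / B_i (toxic-response factor times contamination factor, summed over metals). The toxic-response factors below are the standard Hakanson values, but the background levels and concentration distributions are illustrative assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Standard Hakanson toxic-response factors; background levels and the
# lognormal concentration distributions (median, geometric sd) are
# illustrative assumptions (concentrations in mg/kg)
TR = {"Hg": 40, "Cd": 30, "As": 10, "Pb": 5, "Cu": 5, "Cr": 2, "Zn": 1}
BACKGROUND = {"Hg": 0.05, "Cd": 0.2, "As": 10, "Pb": 25, "Cu": 30, "Cr": 60, "Zn": 80}
MEASURED = {"Hg": (0.4, 1.6), "Cd": (1.5, 1.5), "As": (15, 1.4),
            "Pb": (60, 1.5), "Cu": (50, 1.4), "Cr": (80, 1.3), "Zn": (200, 1.4)}

def simulate_ri(n=10000):
    """Monte Carlo RI: sample each metal's concentration, form the
    contamination factor C/B, weight by TR and sum over metals."""
    ri = np.zeros(n)
    for metal, (median, gsd) in MEASURED.items():
        conc = rng.lognormal(np.log(median), np.log(gsd), size=n)
        ri += TR[metal] * conc / BACKGROUND[metal]
    return ri

ri = simulate_ri()
p_high = (ri >= 600).mean()   # probability of RI reaching the high-risk class
```

The point of the Monte Carlo layer is visible here: the mean RI alone may sit just above 600, while `p_high` shows how much of the distribution actually falls in the high-risk class.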

Journal ArticleDOI
TL;DR: An overview of techniques used to reproduce non-linear relationships between two sets of variables is presented, based on NL-CCA using neural networks (CCA-NN) coupled to a log-linear regression model for flood quantile estimation; results show that CCA-NN is more robust and can better reproduce the non-linear relationship structures between physiographical and hydrological variables.
Abstract: Hydrological processes are complex non-linear phenomena. Canonical correlation analysis (CCA) is frequently used in regional frequency analysis (RFA) to delineate hydrological neighborhoods. Although non-linear CCA (NL-CCA) is widely used in several fields, it has not been used in hydrology, particularly in RFA. This paper presents an overview of techniques used to reproduce non-linear relationships between two sets of variables. The approaches considered in this work are based on NL-CCA using neural networks (CCA-NN), coupled to a log-linear regression model for flood quantile estimation. In order to demonstrate the usefulness of these approaches in RFA, a comparative study between the latter and linear CCA is performed using three different databases from North America. Results show that CCA-NN is more robust and can better reproduce the non-linear relationship structures between physiographical and hydrological variables. This reflects the high flexibility of this approach. Results indicate that for all three databases, it is more advantageous to proceed with the non-linear CCA approach.
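Linear CCA, the baseline the paper compares against, reduces to a singular value decomposition after whitening each variable set; a compact sketch on synthetic data with one shared latent signal (the data and noise levels are arbitrary assumptions):

```python
import numpy as np

def linear_cca(X, Y):
    """Canonical correlations between variable sets X and Y: center and
    whiten each block via a (reduced) QR decomposition, then take the
    singular values of the cross-product of the orthonormal bases."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    qx, _ = np.linalg.qr(Xc)
    qy, _ = np.linalg.qr(Yc)
    return np.linalg.svd(qx.T @ qy, compute_uv=False)  # largest first

rng = np.random.default_rng(3)
latent = rng.normal(size=(200, 1))   # signal shared by both variable sets
X = np.hstack([latent + 0.1 * rng.normal(size=(200, 1)),
               rng.normal(size=(200, 1))])
Y = np.hstack([latent + 0.1 * rng.normal(size=(200, 1)),
               rng.normal(size=(200, 1))])
cc = linear_cca(X, Y)
# cc[0] is close to 1 (shared signal); cc[1] is close to 0 (pure noise)
```

CCA-NN replaces the linear projections with neural networks, which is what lets it capture curved dependence between physiographical and hydrological variables.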

Journal ArticleDOI
TL;DR: This work proposes a methodology based on extreme value Birnbaum–Saunders regression models, which includes model formulation, estimation, inference and checking, and conducts a simulation study to evaluate its performance.
Abstract: Extreme value models are widely used in different areas. The Birnbaum–Saunders distribution is receiving considerable attention due to its physical arguments and its good properties. We propose a methodology based on extreme value Birnbaum–Saunders regression models, which includes model formulation, estimation, inference and checking. We further conduct a simulation study for evaluating its performance. A statistical analysis with real-world extreme value environmental data using the methodology is provided as an illustration.
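The Birnbaum–Saunders distribution has a convenient normal representation that makes sampling straightforward: if Z ~ N(0, 1), then T = (β/4)(αZ + √(α²Z² + 4))² follows BS(α, β), with mean β(1 + α²/2). A quick numerical check of this property (a generic illustration, not the paper's regression methodology):

```python
import numpy as np

rng = np.random.default_rng(7)

def rbirnbaum_saunders(alpha, beta, size):
    """Sample BS(alpha, beta) via its normal representation:
    T = (beta/4) * (alpha*Z + sqrt(alpha^2 Z^2 + 4))^2, Z ~ N(0,1)."""
    z = rng.normal(size=size)
    az = alpha * z
    return beta / 4.0 * (az + np.sqrt(az ** 2 + 4.0)) ** 2

t = rbirnbaum_saunders(alpha=0.5, beta=2.0, size=200_000)
sample_mean = t.mean()   # theory: beta * (1 + alpha**2 / 2) = 2.25
```

The samples are strictly positive, as a lifetime/level distribution should be, and the sample mean lands on the theoretical value, confirming the representation.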