
Showing papers in "Stochastic Environmental Research and Risk Assessment in 2006"


Journal ArticleDOI
TL;DR: The traditional AHP is modified to a fuzzy AHP using fuzzy arithmetic operations, and the concepts of risk attitude and the associated confidence of a decision maker in the estimates of pairwise comparisons are discussed.
Abstract: Environmental risk management is an integral part of risk analyses. The selection of different mitigating or preventive alternatives often involves competing and conflicting criteria, which requires sophisticated multi-criteria decision-making (MCDM) methods. Analytic hierarchy process (AHP) is one of the most commonly used MCDM methods, which integrates subjective and personal preferences in performing analyses. AHP works on the premise that decision-making for complex problems can be handled by structuring the complex problem into a simple and comprehensible hierarchical structure. However, AHP involves human subjectivity, which introduces vagueness-type uncertainty and necessitates the use of decision-making under uncertainty. In this paper, vagueness-type uncertainty is considered using fuzzy-based techniques. The traditional AHP is modified to a fuzzy AHP using fuzzy arithmetic operations. The concept of risk attitude and the associated confidence of a decision maker in the estimates of pairwise comparisons are also discussed. The methodology of the proposed technique is built on a hypothetical example and its efficacy is demonstrated through an application dealing with the selection of drilling fluid/mud for offshore oil and gas operations.
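To make the fuzzy arithmetic concrete, the sketch below derives crisp criterion weights from a matrix of triangular fuzzy pairwise comparisons via the geometric-mean (Buckley-style) method, with an optimism index lam standing in for the decision maker's risk attitude. This is a minimal sketch of one common fuzzy-AHP variant, not necessarily the authors' exact formulation; the function name, the (l, m, u) encoding and the defuzzification rule are assumptions.

import numpy as np

def fuzzy_ahp_weights(M, lam=0.5):
    # M: (n, n, 3) triangular fuzzy pairwise comparisons, each entry (l, m, u)
    n = M.shape[0]
    g = np.prod(M, axis=1) ** (1.0 / n)      # fuzzy geometric mean of each row
    total = g.sum(axis=0)                    # componentwise sums (sum_l, sum_m, sum_u)
    w = np.empty((n, 3))
    w[:, 0] = g[:, 0] / total[2]             # lower bound divides by the upper sum
    w[:, 1] = g[:, 1] / total[1]
    w[:, 2] = g[:, 2] / total[0]             # upper bound divides by the lower sum
    # defuzzify with an optimism index lam encoding risk attitude (an assumption)
    crisp = lam * w[:, 2] + (1.0 - lam) * w[:, 0]
    return crisp / crisp.sum()

A risk-averse analyst would choose lam closer to 0, weighting the pessimistic bound of each fuzzy weight more heavily.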

189 citations


Journal ArticleDOI
TL;DR: In this article, a geophysical survey from Elk County, Kansas, USA is used to demonstrate the applicability of the approach to geostatistical prediction and simulation of attributes that fluctuate over large areas or volumes.
Abstract: Geostatistical prediction and simulation are being increasingly used in the earth sciences and engineering to address the imperfect knowledge of attributes that fluctuate over large areas or volumes—pollutant concentration, electromagnetic fields, porosity, thickness of a geological formation. Central to the application of such techniques is the need to know the spatial continuity, knowledge that is commonly condensed in the form of covariance or semivariogram models. Their preparation is subdivided here into the following steps: (1) Data editing, (2) Exploratory data analysis, (3) Semivariogram estimation, (4) Directional investigation, (5) Simple modeling, (6) Nested modeling. I illustrate these stages practically with a real data set from a geophysical survey from Elk County, Kansas, USA. The applicability of the approach is not limited by the physical nature of the attribute of interest.
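Of the six stages, step (3) is the most mechanical and lends itself to a short sketch. The function below is a minimal implementation of the classical Matheron estimator on binned lags; the signature and the fixed lag tolerance are illustrative assumptions, not the article's code.

import numpy as np

def empirical_semivariogram(coords, values, lags, tol):
    # gamma(h) = (1 / 2N(h)) * sum of (z_i - z_j)^2 over pairs separated by ~h
    d = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1))
    sq = (values[:, None] - values[None, :]) ** 2
    upper = np.triu(np.ones_like(d, dtype=bool), k=1)   # count each pair once
    gam = []
    for h in lags:
        m = upper & (np.abs(d - h) < tol)
        gam.append(sq[m].mean() / 2.0 if m.any() else np.nan)
    return np.array(gam)

The directional investigation of step (4) would additionally mask pairs by the azimuth of the separation vector before averaging.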

172 citations


Journal ArticleDOI
TL;DR: In this paper, a Bayesian framework for the analysis of extreme values is described, considering several probabilistic models (stationary, step-change and linear trend models) and four extreme value distributions (exponential, generalized Pareto, Gumbel and GEV); prior distributions are specified using regional prior knowledge about quantiles.
Abstract: Statistical analysis of extremes currently assumes that data arise from a stationary process, although such a hypothesis is not easily assessed and should therefore be considered as an uncertainty. The aim of this paper is to describe a Bayesian framework for this purpose, considering several probabilistic models (stationary, step-change and linear trend models) and four extreme value distributions (exponential, generalized Pareto, Gumbel and GEV). Prior distributions are specified by using regional prior knowledge about quantiles. Posterior distributions are used to estimate parameters, quantify the probability of models and derive a realistic frequency analysis, which takes into account estimation, distribution and stationarity uncertainties. MCMC methods are needed for this purpose, and are described in the article. Finally, an application to a POT discharge series is presented, with an analysis of both the occurrence process and the peak distribution.
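The posterior sampling the article relies on can be sketched with a random-walk Metropolis algorithm for the simplest of the four candidates, the stationary Gumbel model. This is a bare-bones illustration under a flat prior; in the paper the priors come from regional knowledge about quantiles, which would replace the zero prior term below. The step size and parameterization are assumptions.

import numpy as np

def log_post(theta, x):
    # Gumbel log-likelihood with a flat prior (regional priors would be added here)
    mu, logsig = theta
    sig = np.exp(logsig)
    z = (x - mu) / sig
    return -len(x) * np.log(sig) - z.sum() - np.exp(-z).sum()

def metropolis(x, n_iter=20000, step=0.1):
    theta = np.array([x.mean(), np.log(x.std())])
    lp, chain = log_post(theta, x), []
    for _ in range(n_iter):
        prop = theta + step * np.random.randn(2)
        lp_prop = log_post(prop, x)
        if np.log(np.random.rand()) < lp_prop - lp:   # accept/reject step
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)

chain = metropolis(np.random.gumbel(10.0, 2.0, 50))   # synthetic annual maxima
print(chain[5000:].mean(axis=0))                      # posterior means after burn-in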

110 citations


Journal ArticleDOI
TL;DR: In this paper, a generalization of Gneiting's approach is proposed to obtain new classes of stationary nonseparable spatio-temporal covariance functions which are spatially anisotropic.
Abstract: Obtaining new and flexible classes of nonseparable spatio-temporal covariances and variograms has become a key point of research in recent years. The goal of this paper is to introduce and develop new spatio-temporal covariance models taking into account the problem of spatial anisotropy. Recent literature has focused on the problem of full symmetry, while the problem of anisotropy has been largely overlooked. Here we propose a generalization of Gneiting’s (J Am Stat Assoc 97:590–600, 2002a) approach and obtain new classes of stationary nonseparable spatio-temporal covariance functions which are spatially anisotropic. The resulting structures are proved to have interesting mathematical properties, together with considerable applicability.
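One member of Gneiting's class, with geometric anisotropy grafted on through a linear transform of the spatial lag, can be evaluated as below. This is a sketch of the general idea rather than the specific classes derived in the paper; the parameter names and the d=2 normalization are assumptions.

import numpy as np

def aniso_gneiting(h, u, sigma2=1.0, a=1.0, c=1.0,
                   alpha=1.0, gamma=0.5, beta=1.0, A=np.eye(2)):
    # geometric anisotropy: rotate/stretch the spatial lag h before taking its norm
    r = np.linalg.norm(np.atleast_2d(h) @ A.T, axis=-1)
    psi = a * np.abs(u) ** (2.0 * alpha) + 1.0          # temporal damping term
    return sigma2 / psi * np.exp(-c * r ** (2.0 * gamma) / psi ** (beta * gamma))

# stretching the x-axis by 3 makes correlation decay three times faster along x
A = np.diag([3.0, 1.0])
print(aniso_gneiting(np.array([1.0, 0.0]), 0.5, A=A))

Setting beta = 0 makes the expression factor into a purely spatial and a purely temporal term, i.e. the separable special case.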

90 citations


Journal ArticleDOI
TL;DR: In this paper, a combination of classic trend tests and spatially interpolated precipitation data sets allows the spatiotemporal visualization of detected trends in the Yangtze River catchment.
Abstract: Precipitation trends in the Yangtze River catchment (PR China) have been analyzed for the past 50 years by applying the Mann-Kendall trend test and geospatial analyses. Monthly precipitation trends of 36 stations have been calculated. Significant positive trends at many stations can be observed for the summer months, which naturally show precipitation maxima. They were preceded and/or followed by negative trends. This observation points towards a concentration of summer precipitation within a shorter period of time. The analysis of a second data set on a gridded basis with 0.5° resolution reveals trends with distinct spatial patterns. The combination of classic trend tests and spatially interpolated precipitation data sets allows the spatiotemporal visualization of detected trends. Months with positive trends emphasize the aggravation of an already severe situation in a region that is particularly prone to flood disasters during summer. Reasons for the observed trends were found in variations in the meridional wind pattern at the 850 hPa level, which account for an increased transport of warm moist air to the Yangtze River catchment during the summer months.
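The Mann-Kendall statistic used for the station-wise tests is short enough to state in full. The sketch below is the standard no-ties version; a production implementation would add the tie correction to the variance.

import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0     # variance without tie correction
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2.0 * (1.0 - norm.cdf(abs(z)))           # two-sided p-value
    return s, z, p

Applying the function to each station's monthly series (36 stations x 12 months) yields the trend fields that are then interpolated and visualized.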

84 citations


Journal ArticleDOI
TL;DR: In this article, a new method of reliability analysis for crop water production function is presented considering crop water demand uncertainty, which uses an advanced first-order second moment (AFOSM) method in evaluating the crop yield failure probability.
Abstract: A new method of reliability analysis for the crop water production function is presented, considering crop water demand uncertainty. The procedure uses an advanced first-order second moment (AFOSM) method in evaluating the crop yield failure probability. To determine the variance and the mean of actual evapotranspiration as the component of interest for the AFOSM analysis, an explicit stochastic optimization model for optimal irrigation scheduling is developed based on first- and second-order moment analysis of the soil moisture state variables. As a result of the study, the violation probabilities of crop yield at different levels were computed using the AFOSM method. In addition, using the optimization results and the double-bounded density function estimation methodology, the weekly soil moisture density function is derived, which can be used as a short-term reliability index. The proposed approach does not involve any discretization of system variables. The results of the reliability analysis and optimization model compare favorably with those obtained from simulation.
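The AFOSM step can be illustrated with the textbook Hasofer-Lind (HL-RF) iteration for independent normal variables. The limit-state function below, supply minus demand of evapotranspiration with hypothetical moments, is purely illustrative; the paper's actual performance function comes from its stochastic irrigation-scheduling model.

import numpy as np
from scipy.stats import norm

def afosm(g, grad_g, mu, sigma, tol=1e-8, max_iter=50):
    # HL-RF iteration in standardized space for independent normal variables
    u = np.zeros_like(mu, dtype=float)
    for _ in range(max_iter):
        x = mu + sigma * u
        gv, gr = g(x), grad_g(x) * sigma          # chain rule into u-space
        u_new = (gr @ u - gv) * gr / (gr @ gr)
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    beta = np.linalg.norm(u)                      # reliability index
    return beta, norm.cdf(-beta)                  # index and failure probability

beta, pf = afosm(lambda x: x[0] - x[1],                      # hypothetical margin
                 lambda x: np.array([1.0, -1.0]),
                 mu=np.array([5.0, 4.0]), sigma=np.array([0.8, 0.5]))
print(beta, pf)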

70 citations


Journal ArticleDOI
TL;DR: Nonparametric probability density estimation methods for selected variables are used and the use of Latin Hypercube sampling to improve the efficiency of MCS driven by the multiple random variables and Bootstrap resampling to determine initial water surface level are introduced.
Abstract: Hydrologic risk analysis for dam safety relies on a series of probabilistic analyses of rainfall-runoff and flow routing models, and their associated inputs. This is a complex problem in that the probability distributions of multiple independent and derived random variables need to be estimated in order to evaluate the probability of dam overtopping. Typically, parametric density estimation methods have been applied in this setting, and the exhaustive Monte Carlo simulation (MCS) of models is used to derive some of the distributions. Often, the distributions used to model some of the random variables are inappropriate relative to the expected behaviour of these variables, and as a result, simulations of the system can lead to unrealistic values of extreme rainfall or water surface levels and hence of the probability of dam overtopping. In this paper, three major innovations are introduced to address this situation. The first is the use of nonparametric probability density estimation methods for selected variables, the second is the use of Latin Hypercube sampling to improve the efficiency of MCS driven by the multiple random variables, and the third is the use of Bootstrap resampling to determine initial water surface level. An application to the Soyang Dam in South Korea illustrates how the traditional parametric approach can lead to potentially unrealistic estimates of dam safety, while the proposed approach provides rather reasonable estimates and an assessment of their sensitivity to key parameters.
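The second innovation, Latin Hypercube sampling, is compact enough to sketch. Each variable is sampled once from each of n equal-probability strata and the columns are then shuffled independently, which preserves the marginals while spreading the sample evenly. The two marginal distributions in the demo are placeholders, not the study's fitted densities.

import numpy as np
from scipy.stats import gumbel_r, lognorm

def lhs(n, dists):
    # dists: list of frozen scipy.stats distributions, one per random variable
    d = len(dists)
    u = (np.arange(n)[:, None] + np.random.rand(n, d)) / n   # one draw per stratum
    for j in range(d):
        u[:, j] = u[np.random.permutation(n), j]             # decouple the columns
    return np.column_stack([dist.ppf(u[:, j]) for j, dist in enumerate(dists)])

sample = lhs(1000, [gumbel_r(loc=100, scale=25), lognorm(0.4, scale=50)])

The Bootstrap component is a one-liner by comparison: np.random.choice(observed_levels, size=n, replace=True) resamples historical initial water surface levels.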

63 citations


Journal ArticleDOI
TL;DR: The segmentation procedure is based on the minimization of Hubert’s segmentation cost or various generalizations of this cost through a dynamic programming algorithm, which is guaranteed to find the globally optimal segmentations with K=1, 2, ..., Kmax segments.
Abstract: We present a procedure for the segmentation of hydrological and environmental time series. The procedure is based on the minimization of Hubert’s segmentation cost or various generalizations of this cost. This is achieved through a dynamic programming algorithm, which is guaranteed to find the globally optimal segmentations with K=1, 2, ..., Kmax segments. Various enhancements can be used to speed up the basic dynamic programming algorithm, for example recursive computation of segment errors and “block segmentation”. The “true” value of K is selected through the use of the Bayesian information criterion. We evaluate the segmentation procedure with experiments which involve artificial as well as temperature and river discharge time series.
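A minimal version of such a dynamic program, using segment sum-of-squared-errors as the cost (one common choice; Hubert's cost and its generalizations substitute here), fits in a few lines. The recursive computation of segment errors mentioned above corresponds to the prefix sums used in err().

import numpy as np

def segment(x, kmax):
    n = len(x)
    c1 = np.concatenate(([0.0], np.cumsum(x)))
    c2 = np.concatenate(([0.0], np.cumsum(x ** 2)))
    def err(i, j):                           # SSE of segment x[i:j] via prefix sums
        s, m = c1[j] - c1[i], j - i
        return (c2[j] - c2[i]) - s * s / m
    dp = np.full((kmax + 1, n + 1), np.inf)  # dp[k][j]: best cost of x[:j], k segments
    back = np.zeros((kmax + 1, n + 1), dtype=int)
    dp[0][0] = 0.0
    for k in range(1, kmax + 1):
        for j in range(k, n + 1):
            for i in range(k - 1, j):
                cand = dp[k - 1][i] + err(i, j)
                if cand < dp[k][j]:
                    dp[k][j], back[k][j] = cand, i
    return dp, back   # backtrack via back[]; pick K by BIC on dp[K][n]

This is the O(Kmax n^2) baseline that the paper's enhancements accelerate.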

54 citations


Journal ArticleDOI
TL;DR: In this article, the authors present an examination of predictive methodologies for the assessment of long-term risks of hydrological hazards, with particular focus on applications to rainfall and flooding, motivated by three data sets from the Caribbean region.
Abstract: There is an urgent need for the development and implementation of modern statistical methodology for long-term risk assessment of extreme hydrological hazards in the Caribbean. Notwithstanding the inevitable scarcity of data relating to extreme events, recent results and approaches call into question standard methods of estimation of the risks of environmental catastrophes that are currently adopted. Estimation of extreme hazards is often based on the Gumbel model and on crude methods for estimating predictive probabilities. In both cases the result is often a remarkable underestimation of the predicted probabilities for disasters of large magnitude. Simplifications do not stop here: assumptions of data homogeneity and temporal independence are usually made regardless of potential inconsistencies with genuine process behaviour and the fact that results may be sensitive to such mis-specifications. These issues are of particular relevance for the Caribbean, given its exposure to diverse meteorological climate conditions. In this article we present an examination of predictive methodologies for the assessment of long-term risks of hydrological hazards, with particular focus on applications to rainfall and flooding, motivated by three data sets from the Caribbean region. Consideration is given to classical and Bayesian methods of inference for annual maxima and daily peaks-over-threshold models. We also examine situations where data homogeneity is compromised by an unknown seasonal structure, and the situation in which the process under examination has a physical upper limit. We highlight the fact that standard Gumbel analyses routinely assign near-zero probability to subsequently observed disasters, and that for San Juan, Puerto Rico, standard 100-year predicted rainfall estimates may be routinely underestimated by a factor of two.
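The headline comparison, Gumbel versus GEV predictions, is easy to reproduce for any annual-maximum series. The sketch below fits both models by maximum likelihood and contrasts their 100-year return levels; for heavy-tailed data the Gumbel level is typically the lower of the two, which is the underestimation the authors warn about.

import numpy as np
from scipy.stats import gumbel_r, genextreme

def return_levels(annual_max, T=100):
    p = 1.0 - 1.0 / T
    loc, scale = gumbel_r.fit(annual_max)
    c, locg, scaleg = genextreme.fit(annual_max)
    return gumbel_r.ppf(p, loc, scale), genextreme.ppf(p, c, locg, scaleg)

x = genextreme.rvs(-0.2, loc=100, scale=30, size=60)   # synthetic heavy-tailed maxima
print(return_levels(x))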

51 citations


Journal ArticleDOI
TL;DR: In this paper, the use of statistical techniques to standardize ground-based measurements of particulate matter (PM10) is presented, with concentrations interpolated over Western Europe using uncertain secondary information from a chemical transport model and from MODIS satellite observations of aerosol optical thickness.
Abstract: This paper illustrates the use of statistical techniques to standardize ground-based measurements of particulate matter (PM10). Concentrations are interpolated over Western Europe using uncertain secondary information from a chemical transport model and of aerosol optical thickness from MODIS satellite observations. A consistent overview of PM10 concentrations over Europe based solely on ground-based measurements is complicated by differences between countries: different monitoring methods are used and different calibrations are applied. There is also an inherent limitation to the spatial representativeness of ground-based measurements. Validation showed that adding secondary information from either the chemical transport model or the satellite observations improved the PM10 mapping: the URMSE decreased from 5.14 to 4.26 and 4.58, respectively. A combination of both sources of secondary information gave the most accurate and precise predictions, with an URMSE of 3.62. This means that both external sources contain additional information on the spatial distribution of PM10 concentrations and should therefore be preferred.

46 citations


Journal ArticleDOI
TL;DR: In this article, a Monte-Carlo based stochastic hourly rainfall generation model considering correlated non-normal random rainstorm characteristics, as well as dependence of various rainstorm patterns on rainfall depth, duration, and season was proposed.
Abstract: Occurrence of rainstorm events can be characterized by the number of events, storm duration, rainfall depth, inter-event time and temporal variation of rainfall within a rainstorm event. This paper presents a Monte-Carlo based stochastic hourly rainfall generation model considering correlated non-normal random rainstorm characteristics, as well as dependence of various rainstorm patterns on rainfall depth, duration, and season. The proposed model was verified by comparing the derived rainfall depth–duration–frequency relations from the simulated rainfall sequences with those from observed annual maximum rainfalls based on the hourly rainfall data at the Hong Kong Observatory over the period of 1884–1990. Through numerical experiments, the proposed model was found to be capable of capturing the essential statistical features of rainstorm characteristics and those of annual extreme rainstorm events according to the available data.
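A stripped-down version of such a generator draws correlated non-normal storm characteristics through a Gaussian copula. The exponential and gamma marginals and their parameters below are placeholders; the paper fits the marginals and the dependence structure to the Hong Kong Observatory record.

import numpy as np
from scipy.stats import norm, expon, gamma

def simulate_storms(lam, rho, n_events):
    # Gaussian copula linking storm duration (h) and depth (mm)
    z = np.random.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], n_events)
    u = norm.cdf(z)
    dur = expon.ppf(u[:, 0], scale=6.0)                 # assumed duration marginal
    depth = gamma.ppf(u[:, 1], a=2.0, scale=10.0)       # assumed depth marginal
    inter = np.random.exponential(1.0 / lam, n_events)  # inter-event times
    return inter, dur, depth

Disaggregating each (duration, depth) pair into an hourly hyetograph with a season- and depth-dependent storm pattern is the remaining step the full model adds.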

Journal ArticleDOI
TL;DR: In this article, a new method based on three dimensional bursting process is introduced to define the turbulent flow structure, which is used to recognize the susceptible regions for sediment entrainment and deposition at the bed of a vortex chamber.
Abstract: In this study, three dimensional quadrant analysis of the bursting process was used to recognize the regions susceptible to sediment entrainment and deposition at the bed of a vortex chamber. From the analysis, it was found that two dimensional quadrant analysis is unable to capture the turbulent coherent structure of the flow near the bed of the vortex chamber. Therefore, a new method based on the three dimensional bursting process is introduced in this study to define the turbulent flow structure. Based on this new methodology, the bursting event is divided into eight different cube zones according to the three dimensional velocity fluctuations. Four cube-zone interactions are directed toward the central orifice of the vortex chamber and four toward the wall of the chamber; they are categorized as classes A and B, respectively. The results from the experiments showed that in class A the internal sweep events (class IV-A) move the settled sediment particles toward the central orifice of the chamber, whereas in class B the external sweep events (class IV-B) move the settled sediment particles toward the external region of the chamber. The transition probabilities of the bursting events in the 64 possible movements were also determined. The results showed that stable organizations of each class of events had the highest transition probabilities, whereas cross organizations had the lowest. Additionally, an effort was made to find the average inclination angle of the three dimensional bursting events in each cube zone. The results showed that, near the bed of the vortex chamber, the average inclination angle of the events in the cube zones decreased as the tangential velocity increased toward the center of the chamber. At the region where the sediment particles were deposited, the inclination angles had higher values.
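The eight cube zones and the 64 zone-to-zone movements reduce to a few lines of bookkeeping once the velocity fluctuations are available. This sketch labels octants by the signs of the fluctuation components and tabulates empirical transition probabilities; the mapping of labels to the paper's class A/B naming is an assumption.

import numpy as np

def octant_labels(u, v, w):
    # one of 8 cube zones from the signs of the (u', v', w') fluctuations
    return (u > 0).astype(int) * 4 + (v > 0).astype(int) * 2 + (w > 0).astype(int)

def transition_matrix(labels, k=8):
    # empirical probabilities of the k*k = 64 zone-to-zone movements
    P = np.zeros((k, k))
    for a, b in zip(labels[:-1], labels[1:]):
        P[a, b] += 1.0
    return P / np.maximum(P.sum(axis=1, keepdims=True), 1.0)

The "stable organizations" in the text correspond to the diagonal of this matrix, which the experiments found to dominate.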

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the existence of nonlinear deterministic and chaotic dynamic behavior in the spatial pattern of arsenic contamination in the shallow wells (depth <150 m) in Bangladesh.
Abstract: Since the discovery of large-scale arsenic contamination of groundwater in Bangladesh more than a decade ago, studies related to its spatial characterization have relied on geostatistical approaches and the classical notion of linear stochastic dynamics. This study explores an alternative nonlinear approach, with a motivation to possibly achieve more cost-effective solutions for Bangladesh. It investigates the existence of nonlinear deterministic and chaotic dynamic behavior in the spatial pattern of arsenic contamination in the shallow wells (depth<150 m). The database comprises the nationwide arsenic survey completed in 1999 by the British Geological Survey (BGS) in collaboration with the Department of Public Health Engineering (DPHE) of Bangladesh. Distinction is made in terms of regional geology (Pleistocene vs. Holocene deposits/Northwest vs. Southwest) to understand the geologic dependency. Identification of possible presence of nonlinear deterministic and chaotic patterns is made via the Grassberger-Procaccia correlation dimension algorithm. The analysis yields correlation dimension values ranging anywhere from 8 to 11 depending on the region, suggesting that the arsenic contamination in space, from a chaotic dynamic perspective, is a medium- to high-dimensional problem. The dimension results also indicate that the spatial dynamics of arsenic may be moderately sensitive to geology, with Pleistocene aquifers appearing to require a minimum of about two less dominant processes/variables for its description when compared to that required by the Holocene aquifers. Based on these results, a qualitative discussion is also cast on the potential opportunities offered by a nonlinear deterministic and chaotic dynamic approach towards improving cost-effectiveness in siting new safe wells.
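The Grassberger-Procaccia algorithm itself is brief: count pairs of points closer than r, and read the correlation dimension off the slope of log C(r) against log r inside the scaling region. The radii must be chosen so that C(r) > 0; the single slope fit below is a crude stand-in for the careful scaling-region selection such studies require.

import numpy as np

def correlation_dimension(X, radii):
    # X: (n_points, n_dims) embedded vectors; radii: increasing scales with C(r) > 0
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    pd = d[np.triu_indices(len(X), k=1)]            # pairwise distances, each once
    C = np.array([(pd < r).mean() for r in radii])  # correlation integral C(r)
    slope = np.polyfit(np.log(radii), np.log(C), 1)[0]
    return C, slope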

Journal ArticleDOI
TL;DR: The DS rule of combination and its modifications including Yager’s modified rule, Dubois–Prade disjunctive rule and Dezert–Smarandache rule are described using an example of microbial water quality in a distribution network.
Abstract: Total coliforms are used as indicators for evaluating microbial water quality in distribution networks. However, total coliform provides only weak “evidence” of possible fecal contamination because pathogens are a subset of total coliforms and their presence in drinking water is therefore not necessarily associated with fecal contamination. Heterotrophic plate counts are also commonly used to evaluate microbial water quality in distribution networks, but they cover an even wider range of organisms. As a result, both of these indicators can provide incomplete and highly uncertain bodies of evidence when used individually. In this paper, it is shown that combining these two sources of information by an appropriate data fusion technique can provide improved insight into microbial water quality within distribution networks. Approximate reasoning methods like fuzzy logic and probabilistic reasoning are commonly used for data fusion where knowledge is uncertain (i.e., ambiguous, incomplete, and/or vague). Traditional probabilistic frameworks like Bayesian analysis reason through conditioning based on prior probabilities (which are hardly ever available). The Dempster–Shafer (DS) theory generalizes Bayesian analysis without requiring prior probabilities. The DS theory can efficiently deal with the difficulties related to the interpretation of overall water quality, where redundancy of information is routinely observed and the credibility of available data continuously changes. In this paper, the DS rule of combination and its modifications, including Yager’s modified rule, the Dubois–Prade disjunctive rule and the Dezert–Smarandache rule, are described using an example of microbial water quality in a distribution network.
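Dempster's rule itself, the baseline the modified rules depart from, combines two bodies of evidence by intersecting their focal elements and renormalizing by the conflict. A toy two-hypothesis water-quality frame illustrates it; the mass numbers are invented for the example.

def dempster_combine(m1, m2):
    # m1, m2: dicts mapping frozensets (focal elements) to masses summing to 1
    combined, K = {}, 0.0
    for A, a in m1.items():
        for B, b in m2.items():
            C = A & B
            if C:
                combined[C] = combined.get(C, 0.0) + a * b
            else:
                K += a * b                    # conflicting mass
    return {A: v / (1.0 - K) for A, v in combined.items()}

frame = frozenset({"fecal", "nonfecal"})
m_tc = {frozenset({"fecal"}): 0.3, frame: 0.7}   # total coliform evidence (invented)
m_hpc = {frozenset({"fecal"}): 0.2, frozenset({"nonfecal"}): 0.3, frame: 0.5}
print(dempster_combine(m_tc, m_hpc))

Yager's rule differs only in assigning the conflicting mass K to the whole frame instead of renormalizing.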

Journal ArticleDOI
TL;DR: In this article, a variable saturated groundwater flow model, FEMWATER, was used to evaluate the arrival times of recharged water that infiltrates from an artificial recharge pond to the groundwater table under various hydrogeological conditions.
Abstract: Artificial recharge is a practical tool available for increasing the groundwater storage capacity. The efficiency of artificial recharge is related to various hydrogeological factors of the target area. In this study, a variable saturated groundwater flow model, FEMWATER, was used to evaluate the arrival times of recharged water that infiltrates from an artificial recharge pond to the groundwater table under various hydrogeological conditions. Forty-five arrival times were generated by FEMWATER. The relationships between the arrival times and the hydrogeological factors used in the FEMWATER simulations were analyzed by the grey correlation method. The results show the order of importance of the factors as they influence the arrival time. In order from high to low importance, they are α, Dg, θe, Dp, KS and β. Dg and Dp are interpreted as the potential for movement of the recharge water; θe is the water storage capacity of the soil, and KS represents the ability of the soil to transport water. α and β describe the characteristic curve of the unsaturated soil. The method was applied to evaluate a suitable site for artificial recharge in the Yun-Lin area. Grey correlation analysis was performed to obtain the grey correlation grade using the minimum arrival time as a reference sequence. An index is proposed herein to determine the recharge efficiency of 20 sampling sites. A contour mapping of index values at the 20 sampling sites identified three areas for artificial aquifer recharge in Yun-Lin. Area A in the upper plain is considered more appropriate for groundwater recharge than areas B and C on the coast.
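Deng's grey relational grade, the quantity behind the factor ranking above, can be computed as follows. In full grey relational analysis the min/max differences are taken over all comparison sequences at once, and the sequences are normalized to comparable scales first; this per-sequence sketch with the customary distinguishing coefficient rho = 0.5 is a simplification.

import numpy as np

def grey_relational_grade(ref, seq, rho=0.5):
    # ref: reference sequence (e.g. minimum arrival times); seq: factor sequence
    # both assumed pre-normalized to dimensionless, comparable scales
    delta = np.abs(np.asarray(ref, float) - np.asarray(seq, float))
    coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return coeff.mean()   # grade in (0, 1]; larger = more influential factor

Ranking the factors by their grades against the minimum-arrival-time reference reproduces the ordering step described in the abstract.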

Journal ArticleDOI
TL;DR: In this article, a new model has been developed for assessing the risk of water erosion in Southern Iran, which identifies areas with potential risk (risky zones) and areas of actual risk (real risk) as well as projects the probability of worse degradation in future.
Abstract: South of the Zagros belt, the entire land of Southern Iran faces problems arising out of various types of land degradation of which water erosion forms a major type. A new model has been developed for assessing the risk of water erosion. Taking into consideration nine indicators of water erosion the model identifies areas with ‘Potential Risk’ (risky zones) and areas of ‘Actual Risk’ as well as projects the probability of the worse degradation in future. The Qareh Aghaj subbasin (1,265,000 ha), which covers the upper reaches of Mond River, has been chosen for a test risk assessment of this kind. The preparation of risk maps based on the GIS analysis of these indicators will be helpful for prioritizing the areas to initiate remedial measures. The different kinds of data for indicators of water erosion were gathered from the records and published reports of the governmental offices of Iran. By fixing the thresholds of severity classes of the nine indicators a hazard map for each indicator was first prepared in GIS. The risk classes were defined on the basis of risk scores arrived at by assigning the appropriate attributes to the indicators and the risk map was prepared by overlaying nine hazard maps in the GIS. Areas under potential risk have been found to be widespread (63%) in the basin and when classified into subclasses with different probability levels the model projects a statistical picture of the risk of land degradation.

Journal ArticleDOI
TL;DR: In this paper, a method is presented to design monitoring networks for detecting groundwater pollution at industrial sites, where the goal is to detect the pollution at some distance from the site's boundary so that it can be cleaned up or hydrologically contained before contaminating groundwater outside the site.
Abstract: A method is presented to design monitoring networks for detecting groundwater pollution at industrial sites. The goal is to detect the pollution at some distance from the site’s boundary so that it can be cleaned up or hydrologically contained before contaminating groundwater outside the site. It is assumed that pollution may occur anywhere on the site, that transport is by advection only and that no retardation and chemical reactions take place. However, the approach can be easily extended to include designated (and uncertain) source areas, dispersion and reactive transport. The method starts from the premise that it is impossible to detect 100% of all the contaminant plumes with reasonable costs and therefore seeks a balance between the risk of pollution and network density. The design approach takes account of uncertainty in the flow field by simulating realisations of conductivity, groundwater head and associated flow fields, using geostatistical simulation and a groundwater flow model. The realisations are conditioned to conductivity and head observations that may already be present on the site. The result is an ensemble of flow fields that is further analysed using a particle track program. From this the probability of missing a contaminant plume originating anywhere on the terrain can be estimated for a given network. From this probability follows the risk, i.e. the expected costs of an undetected pollution. The total costs of the monitoring strategy are calculated by adding the risk of pollution to the costs of installing and maintaining the monitoring wells and the routinely performed chemical analyses. By repeating this procedure for networks of varying well numbers, the best network is chosen as the one that minimises total cost. The method is illustrated with a simulated example showing the added worth of exploratory wells for characterising hydraulic conductivity of a site.
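The core cost trade-off can be sketched once the ensemble of particle tracks is in hand. For a candidate network, detection is approximated here by a plume path passing within some radius of any well; all names, the detection radius, and the cost figures are hypothetical placeholders.

import numpy as np

def network_cost(tracks, wells, r_detect, cost_well, cost_miss):
    # tracks: (n_realisations, n_steps, 2) plume paths from the flow ensemble
    # wells:  (n_wells, 2) candidate monitoring locations
    d = np.linalg.norm(tracks[:, :, None, :] - wells[None, None, :, :], axis=-1)
    detected = (d < r_detect).any(axis=(1, 2))   # within reach of any well, any time
    p_miss = 1.0 - detected.mean()               # probability of an undetected plume
    return len(wells) * cost_well + p_miss * cost_miss   # wells + expected damage

Evaluating this for networks of increasing density and taking the minimum reproduces the cost-minimization loop described above (maintenance and analysis costs would be added to the first term).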

Journal ArticleDOI
TL;DR: In this paper, a statistical cluster analysis of dimensionless rainfall patterns is applied to identify representative temporal rainfall patterns that typically occur in the Hong Kong Territory, and a practical procedure to probabilistically generate plausible rainfall patterns is described.
Abstract: In hydrosystem engineering design and analysis, the temporal pattern of rainfall events of interest is often required. In this paper, statistical cluster analysis of dimensionless rainfall patterns is applied to identify representative temporal rainfall patterns that typically occur in the Hong Kong Territory. For the purpose of selecting an appropriate rainfall pattern in engineering applications, factors affecting the occurrence of different rainfall patterns are examined by statistical contingency table analysis, through which the inter-dependence of the occurrence frequency of rainfall patterns with respect to geographical location, rainfall duration and depth, and seasonality is investigated. Furthermore, due to the inherent variability of rainfall mass curves or hyetographs within each classified rainfall pattern, a practical procedure to probabilistically generate plausible rainfall patterns is described. The procedure preserves the inherent stochastic features of random dimensionless rainfall hyetograph ordinates, which in general are correlated non-normal multivariate compositional variables.
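The clustering step can be sketched with a plain k-means on dimensionless cumulative mass curves, each storm rescaled to unit duration and unit depth. The paper does not specify k-means as its algorithm, so treat this as a generic stand-in for the statistical cluster analysis it describes.

import numpy as np

def kmeans_patterns(curves, k, n_iter=100):
    # curves: (n_storms, n_steps) dimensionless cumulative rainfall mass curves
    centers = curves[np.random.choice(len(curves), k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((curves[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([curves[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels, centers   # centers are the representative temporal patterns

Cross-tabulating the resulting labels against location, duration/depth class and season gives the contingency tables analyzed in the paper.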

Journal ArticleDOI
TL;DR: This paper tries to fit a GARMA model that takes into account both seasonality and long memory in meteorological time series, and shows that these models could also be used to model Dutch hourly data.
Abstract: Since Haslett and Raftery’s paper Space-Time Modelling with Long-Memory Dependence: Assessing Ireland’s Wind Power Resource (1989), modelling meteorological time series with long memory processes, in particular the ARFIMA model, has become very common. Haslett and Raftery fitted an ARFIMA model to Irish daily wind speeds. In this paper, we try to reproduce Haslett and Raftery’s results (focusing on the dynamics of the wind process, and not on cross-correlation and space dependencies), and show that an ARFIMA model does not properly capture the behaviour of the series (in the Modelling daily windspeed in Ireland section). Indeed, the series show a periodic behaviour that is not taken into account by the ARFIMA model. Removing this periodic behaviour does not solve the problem either, so we fit a GARMA model that takes into account both seasonality and long memory (in the Seasonality and long memory using GARMA models section). Having fitted a GARMA process to the Irish daily data, we show that these models can also be used to model Dutch hourly data.
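The long-memory building block of both models is the fractional differencing filter. A direct implementation of (1 - B)^d with its binomial weights is shown below; GARMA generalizes this by moving the long-memory pole away from frequency zero (a Gegenbauer filter), which the sketch does not attempt.

import numpy as np

def frac_diff(x, d):
    # weights of (1 - B)^d: pi_0 = 1, pi_k = pi_{k-1} * (k - 1 - d) / k
    w = np.empty(len(x))
    w[0] = 1.0
    for k in range(1, len(x)):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return np.array([w[:t + 1][::-1] @ x[:t + 1] for t in range(len(x))])

For 0 < d < 0.5 the filtered series should look short-memory; applying this to the wind speeds while ignoring the periodic component is essentially the ARFIMA route the paper finds inadequate.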

Journal ArticleDOI
TL;DR: In this paper, the authors consider financial markets with agents exposed to external sources of risk caused by short-term climate events such as the South Pacific sea surface temperature anomalies widely known by the name El Nino, and they use a financial market model in which an additional insurance asset provides another possibility of investment besides the usual capital market.
Abstract: We consider financial markets with agents exposed to external sources of risk caused, for example, by short-term climate events such as the South Pacific sea surface temperature anomalies widely known by the name El Nino. Since such risks cannot be hedged through investments on the capital market alone, we face a typical example of an incomplete financial market. In order to make this risk tradable, we use a financial market model in which an additional insurance asset provides another possibility of investment besides the usual capital market. Given one of the many possible market prices of risk, each agent can maximize his individual exponential utility from his income obtained from trading in the capital market, the additional security, and his risk-exposure function. Under the equilibrium market-clearing condition for the insurance security, the market price of risk is uniquely determined by a backward stochastic differential equation. We translate these stochastic equations via the Feynman–Kac formalism into semi-linear parabolic partial differential equations. Numerical schemes are available by which these semi-linear PDEs can be simulated. We choose two simple, qualitatively interesting models to describe sea surface temperature, and three model agents with simple risk exposure functions: an ENSO-risk-exposed fisher and farmer, and a climate-risk-neutral bank. By simulating the expected appreciation price of risk trading, the optimal utility of the agents as a function of temperature, and their optimal investment into the risk trading security, we obtain first insights into the dynamics of such a market in simple situations.

Journal ArticleDOI
Ram Ranjan
TL;DR: In this paper, the authors estimate the expected annual impacts of the Pink Hibiscus Mealybug infestation on the economies of Florida and the rest of the United States using a Markov chain analysis, where both short run and long run expected damages from infestation are calculated.
Abstract: This paper estimates the expected annual impacts of the Pink Hibiscus Mealybug infestation on the economies of Florida and the rest of the United States. The approach involves a Markov chain analysis wherein both short run and long run expected damages from infestation are calculated. Use is made of the CLIMEX model that predicts the potential pest-establishment regions in the US. While predictions based upon the CLIMEX model extend the scope of damages beyond Florida, the damages are significantly dependent upon the rate of arrival and detection of species in those regions. Damages are significantly higher when a longer time horizon is considered. When nursery owners bear the full cost of quarantines in the form of loss of sales and treatment costs of infected plants, the cost-effectiveness of quarantines as a regulatory tool is diminished. The long run propensity of the system, in terms of the fraction of time spent in the possible ‘states’ of infestation and control, determines the extent of damages, and not the annual value of crops that could be potential hosts to the pest.
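The long-run propensity referred to above is the stationary distribution of the infestation-control chain. A sketch with a hypothetical three-state chain (the states, transition rates and damage figures are invented for illustration):

import numpy as np

def stationary(P):
    # left eigenvector of P for eigenvalue 1, normalized to a probability vector
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return pi / pi.sum()

P = np.array([[0.90, 0.10, 0.00],    # uninfested -> (stay, infested, control)
              [0.00, 0.70, 0.30],    # infested
              [0.50, 0.10, 0.40]])   # under control
pi = stationary(P)
damage = np.array([0.0, 120.0, 40.0])   # hypothetical $M damage per state-year
print(pi, pi @ damage)                   # long-run fractions, expected annual damage

The expected annual damage depends on pi, the fraction of time spent in each state, rather than on the total value of host crops, which is the abstract's closing point.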

Journal ArticleDOI
TL;DR: This approach relies first on the definition of a mixed random field that can account for a stochastic link between categorical and continuous random fields through the use of a cross-covariance function, and shows that, under mild hypotheses, their joint distribution can be expressed as a mixture of conditional Gaussian prior distributions.
Abstract: Due to the fast-paced increase in the availability and diversity of information sources in environmental sciences, there is a real need for sound statistical mapping techniques that can use them jointly inside a unique theoretical framework. As these information sources may vary with respect to their nature (continuous vs. categorical or qualitative), their spatial density and their intrinsic quality (soft vs. hard data), the design of such techniques is a challenging issue. In this paper, an efficient method for combining spatially non-exhaustive categorical and continuous data in a mapping context is proposed, based on the Bayesian maximum entropy paradigm. This approach relies first on the definition of a mixed random field that can account for a stochastic link between categorical and continuous random fields through the use of a cross-covariance function. When incorporating general knowledge about the first- and second-order moments of these fields, it is shown that, under mild hypotheses, their joint distribution can be expressed as a mixture of conditional Gaussian prior distributions, with parameter estimates that can be obtained from entropy maximization. A posterior distribution that incorporates the various (soft or hard) continuous and categorical data at hand can then be obtained by a straightforward conditionalization step. The use and potential of the method is illustrated by way of a simulated case study. A comparison with a few common geostatistical methods in some limit cases also emphasizes their similarities and differences, both from the theoretical and practical viewpoints. As expected, adding categorical information may significantly improve the spatial prediction of a continuous variable, making this approach powerful and very promising.

Journal ArticleDOI
TL;DR: In this paper, the authors conducted risk assessment for an array of health effects that may result from exposure to disinfection byproducts (DBPs) and conducted an analysis of the relationship between exposure and health-related outcomes.
Abstract: This study conducts risk assessment for an array of health effects that may result from exposure to disinfection by-products (DBPs). An analysis of the relationship between exposure and health-related outcomes is conducted. The trihalomethane (THM) species have been verified as the principal DBPs in the drinking water disinfection process. The data used in this study were collected from the Taiwan Water Corporation (TWC) from 1998 to 2002. Statistical analysis, a multistage Benchmark model, Monte Carlo simulation (MCS) and sensitivity analysis were used for the cancer risk analysis and assessment. This study included statistical data analysis, an epidemiological investigation and a cancer risk assessment of THM species in drinking water in Taiwan. Above all, it establishes an assessment procedure to support decision-making in drinking water safety policy.

Journal ArticleDOI
TL;DR: In this paper, the authors show that the use of a discrimination procedure for selecting a flood frequency model without the knowledge of its performance for the considered underlying distributions may lead to erroneous conclusions.
Abstract: The objective of the paper is to show that the use of a discrimination procedure for selecting a flood frequency model without knowledge of its performance for the considered underlying distributions may lead to erroneous conclusions. The problem considered is one of choosing between the lognormal (LN) and convective diffusion (CD) distributions for a given random sample of flood observations. The probability density functions of these distributions are similarly shaped in the range of the main probability mass, and the discrepancies grow with increasing values of the coefficient of variation (CV). This problem was addressed using the likelihood ratio (LR) procedure. Simulation experiments were performed to determine the probability of correct selection (PCS) for the LR method. Pseudo-random samples were generated for several combinations of sample sizes and coefficient of variation values from each of the two distributions. Surprisingly, the PCS of the LN model was only half that of the CD model, rarely exceeding 50%. The results obtained from simulation were analyzed and compared both with those obtained using real data and with the results obtained from another selection procedure known as the QK method. The results from the QK method are just the opposite of those from the LR procedure.
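The probability of correct selection is straightforward to estimate by simulation for any pair of candidate models. The sketch below uses the lognormal versus a gamma stand-in (the convective diffusion density is not available in scipy); the structure of the experiment, simulate, fit both by maximum likelihood, select by likelihood, tally, is the same.

import numpy as np
from scipy.stats import lognorm, gamma

def pcs_lognormal_true(n, cv, n_rep=1000):
    # PCS of the likelihood-selection rule when the lognormal is the true parent
    sigma = np.sqrt(np.log(1.0 + cv ** 2))
    wins = 0
    for _ in range(n_rep):
        x = lognorm.rvs(sigma, scale=1.0, size=n)
        ll_ln = lognorm.logpdf(x, *lognorm.fit(x, floc=0)).sum()
        ll_alt = gamma.logpdf(x, *gamma.fit(x, floc=0)).sum()
        wins += ll_ln > ll_alt
    return wins / n_rep

print(pcs_lognormal_true(n=50, cv=0.5))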

Journal ArticleDOI
TL;DR: The results demonstrated that GLUE application could offer an acceptable lead time for peak discharge forecast at the expense of high computational demand.
Abstract: Real time updating of rainfall-runoff (RR) models is traditionally performed by state-space formulation in the context of flood forecasting systems. In this paper, however, we examine the applicability of the generalized likelihood uncertainty estimation (GLUE) approach to real time modification of forecasts. Real time updating and parameter uncertainty analysis were conducted for the Abmark catchment, a part of the great Karkheh basin in south west Iran. A conceptual-distributed RR model, namely ModClark, was used for basin simulation, such that the basin’s hydrograph was determined by the superposition of runoff generated by individual cells in a raster-based discretization. In real time updating of the RR model by the GLUE method, prior and posterior likelihoods were computed using forecast errors that were obtained from the results of behavioural models and real time recorded discharges. Prior and posterior likelihoods were then applied to modify forecast confidence limits in each time step. Calibration of parameters was performed using historical data, while the distribution of parameters was modified in real time based on new data records. Two rainfall forecast scenarios, perfect-rainfall-forecast and no-rainfall-forecast, were assumed in the absence of a robust rainfall forecast model for the study catchment. The results demonstrated that the GLUE application could offer an acceptable lead time for peak discharge forecast at the expense of high computational demand.
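The GLUE mechanics, informal likelihoods over behavioural parameter sets turned into forecast limits, can be sketched as below. The exponential likelihood and the behavioural cut-off are assumptions; GLUE deliberately leaves both choices to the modeller.

import numpy as np

def weighted_quantile(vals, w, q):
    order = np.argsort(vals)
    cw = np.cumsum(w[order])
    return np.interp(q, cw / cw[-1], vals[order])

def glue_limits(sim_ens, obs, keep=0.1, q=(0.05, 0.95)):
    # sim_ens: (n_param_sets, n_times) simulated discharge; obs: (n_times,)
    se = ((sim_ens - obs) ** 2).mean(axis=1)
    L = np.exp(-se / se.min())                     # informal likelihood (assumption)
    behavioural = L >= np.quantile(L, 1.0 - keep)  # keep the best-performing fraction
    w = L[behavioural] / L[behavioural].sum()
    sims = sim_ens[behavioural]
    return [np.array([weighted_quantile(sims[:, t], w, qi)
                      for t in range(sims.shape[1])]) for qi in q]

Real time updating amounts to recomputing L, and hence the limits, each time a new discharge observation arrives, which is where the high computational demand noted above comes from.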

Journal ArticleDOI
Xavier Emery
TL;DR: In this paper, the multigaussian kriging estimators are compared for estimating a transfer function of the variable under study, and a disjunctive-type estimator that minimizes the variance of the error when the mean is unknown is proposed.
Abstract: In the geostatistical analysis of regionalized data, the practitioner may not be interested in mapping the unsampled values of the variable that has been monitored, but in assessing the risk that these values exceed or fall short of a regulatory threshold. This kind of concern is part of the more general problem of estimating a transfer function of the variable under study. In this paper, we focus on the multigaussian model, for which the regionalized variable can be represented (up to a nonlinear transformation) by a Gaussian random field. Two cases are analyzed, depending on whether the mean of this Gaussian field is considered known or not, which lead to the simple and ordinary multigaussian kriging estimators respectively. Although both of these estimators are theoretically unbiased, the latter may be preferred to the former for practical applications since it is robust to a misspecification of the mean value over the domain of interest and also to local fluctuations around this mean value. An advantage of multigaussian kriging over other nonlinear geostatistical methods such as indicator and disjunctive kriging is that it makes use of the multivariate distribution of the available data and does not produce order relation violations. The use of expansions into Hermite polynomials provides three additional results: first, an expression of the multigaussian kriging estimators in terms of series that can be calculated without numerical integration; second, an expression of the associated estimation variances; third, the derivation of a disjunctive-type estimator that minimizes the variance of the error when the mean is unknown.

Journal ArticleDOI
TL;DR: In this paper, the exact distributions of R = X+Y, P = X + Y and W = X/(X+Y) and corresponding moment properties when X and Y follow Downton's bivariate exponential distribution were derived.
Abstract: Motivated by environmental applications, we derive the exact distributions of R = X+Y, P = X Y and W = X/(X+Y) and the corresponding moment properties when X and Y follow Downton’s bivariate exponential distribution. The expressions turn out to involve several special functions. For practical purposes, we also provide extensive tabulations of the percentage points associated with the distributions.
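Because the three derived distributions involve special functions, a Monte Carlo check is a useful companion to the exact tables. Downton's bivariate exponential admits a geometric-sum ("successive damage") construction, used below under that assumption to approximate the percentage points of R, P and W; the construction and parameter values are assumptions for illustration.

import numpy as np

def downton_sample(lam1, lam2, rho, size):
    # shared geometric shock count induces correlation rho between the margins
    k = np.random.geometric(1.0 - rho, size)
    x = np.array([np.random.exponential((1.0 - rho) / lam1, n).sum() for n in k])
    y = np.array([np.random.exponential((1.0 - rho) / lam2, n).sum() for n in k])
    return x, y

x, y = downton_sample(1.0, 1.0, 0.5, 100000)
r, p, w = x + y, x * y, x / (x + y)
print(np.percentile(r, [90, 95, 99]))   # Monte Carlo percentage points for R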

Journal ArticleDOI
TL;DR: In this paper, the authors used principal components analysis (PCA) to classify a decision alternative as a really unique option or as just one option out of a greater subset of very similar alternatives.
Abstract: Multi-criteria analysis techniques are well-known decision support methods and are widely applied in various disciplines. However, defining the input criteria values for the basic decision matrix, which contains all criteria values for every alternative considered, is normally not an easy task. Qualitative criteria variables in particular, which are frequently represented as linguistic terms, may be hard to quantify. Moreover, some criteria cannot be represented by just one crisp value but may offer a range of possible values. Stochastic multi-criteria approaches, which call for distribution models instead of single numerical values, can be used in these cases. Applications of outranking multi-criteria methods have shown that simulation-based stochastic techniques are well suited to giving better insight into the preference structure of a variety of decision alternatives. However, besides knowledge of the preference structure, it is also important to find out about the similarity of decision alternatives, which allows a modeller to categorize a decision alternative as a really unique option or as just one option out of a greater subset of very similar alternatives. To perform this categorization, principal components analysis (PCA) was used. The results of the PCA are compared to the results of a stochastic outranking analysis.
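The PCA step operates directly on the (alternatives x criteria) decision matrix: after standardizing the criteria, alternatives that plot close together in the space of the first few components are the "very similar" options the authors want to flag. A minimal sketch:

import numpy as np

def pca_scores(D, n_comp=2):
    # D: (n_alternatives, n_criteria) decision matrix
    Z = (D - D.mean(axis=0)) / D.std(axis=0)     # standardize the criteria
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return Z @ Vt[:n_comp].T                     # component scores per alternative

Small pairwise distances between score rows mark near-duplicate alternatives; an isolated row marks a genuinely unique option.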

Journal ArticleDOI
TL;DR: In this article, a zero-dimensional soil moisture dynamics model with the rainfall forcing by the rectangular pulses Poisson process model is used to simulate the soil moisture time series for three sites in Korea: Seoul, Daegu, and Jeonju.
Abstract: This paper studies the statistics of the soil moisture condition and its monthly variation for the purpose of evaluating drought vulnerability. A zero-dimensional soil moisture dynamics model with rainfall forcing by the rectangular pulses Poisson process model is used to simulate soil moisture time series for three sites in Korea: Seoul, Daegu, and Jeonju. These sites are located in the central, south-eastern, and south-western parts of the Korean Peninsula, respectively. The model parameters are estimated on a monthly basis using hourly rainfall data and monthly potential evaporation rates obtained by the Penman method. The resulting soil moisture simulations are summarized on a monthly basis. In brief, the conclusions of our study are as follows. (1) Strong seasonality is observed in the simulations of soil moisture. The soil moisture mean is less than 0.5 during the dry spring season (March, April, and June), but exceeds 0.5 in the other months. (2) The spring season is characterized by a low mean value, a high standard deviation and a positive skewness of the soil moisture content. On the other hand, the wet season is characterized by a high mean value, low standard deviation, and negative skewness of the soil moisture content. Thus, in the spring season, much drier soil moisture conditions are apparent due to the higher variability and positive skewness of the soil moisture probability density function (PDF), which also indicates more vulnerability to severe drought occurrence. (3) Seoul, Daegu, and Jeonju show very similar overall trends of soil moisture variation; however, Daegu shows the lowest soil moisture content throughout the year, which implies that the south-eastern part of the Korean Peninsula is most vulnerable to drought. On the other hand, the central and south-western parts of the Korean Peninsula are found to be less vulnerable to the risk of drought. The conclusions of the study are in agreement with the climatology of the Korean Peninsula.
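The simulation engine is a bucket-type soil moisture balance forced by Poisson rainfall arrivals. The sketch below is a daily caricature of that engine (the paper works with hourly rectangular pulses and monthly parameters); the linear loss term and all parameter values are assumptions.

import numpy as np

def simulate_soil_moisture(lam, mean_depth, et_max, zr_mm=300.0, n_days=365):
    # lam: storm arrival probability per day; mean_depth: mean storm depth (mm)
    s = np.empty(n_days)
    s[0] = 0.5                                   # relative saturation in [0, 1]
    for t in range(1, n_days):
        rain = np.random.exponential(mean_depth) if np.random.rand() < lam else 0.0
        ds = (rain - et_max * s[t - 1]) / zr_mm  # linear ET loss (an assumption)
        s[t] = np.clip(s[t - 1] + ds, 0.0, 1.0)
    return s

s = simulate_soil_moisture(lam=0.3, mean_depth=10.0, et_max=4.0)
print(s.mean(), s.std())   # the moments summarized month by month in the study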

Journal ArticleDOI
TL;DR: In this article, the authors provided a set of sufficient requirements for the data to ensure the existence of finite least squares parameter estimates for a power-law regression with an unknown location parameter.
Abstract: Practical application of the power-law regression model with an unknown location parameter can be plagued by non-finite least squares parameter estimates. This presents a serious problem in hydrology, since stream flow data are mainly obtained using an estimated stage–discharge power-law rating curve. This study provides a set of sufficient requirements on the data to ensure the existence of finite least squares parameter estimates for a power-law regression with an unknown location parameter. It is shown that, in most practical cases, these requirements are also necessary for a finite least squares solution to exist. Furthermore, it is proved that there is a finite probability for the model to produce data having non-finite least squares parameter estimates. The implications of this result are discussed in the context of asymptotic predictions, inference and experimental design. A Bayesian approach to the actual regression problem is recommended.
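The non-finiteness issue is easiest to see in the profile-likelihood form of the fit: for each trial location parameter h0 the model log q = log a + b log(h - h0) is linear, and with pathological data the fit may keep improving toward the boundary of the h0 search range. A sketch of that profile search (the signature is assumed):

import numpy as np

def fit_rating_curve(h, q, h0_grid):
    # profile least squares over h0 for the rating curve q = a * (h - h0)^b
    best = (np.inf, None)
    for h0 in h0_grid:
        if h0 >= h.min():
            continue                              # need h - h0 > 0 for every point
        X = np.column_stack([np.ones_like(h), np.log(h - h0)])
        coef, *_ = np.linalg.lstsq(X, np.log(q), rcond=None)
        sse = ((np.log(q) - X @ coef) ** 2).sum()
        if sse < best[0]:
            best = (sse, (np.exp(coef[0]), coef[1], h0))
    return best[1]   # (a, b, h0); non-finite estimates show up as an optimum
                     # drifting to the edge of h0_grid rather than an interior minimum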