Showing papers in "Environmental and Ecological Statistics in 2009"


Journal ArticleDOI
TL;DR: In this paper, a hierarchical modeling approach is proposed for explaining a collection of spatially referenced time series of extreme values: the observations follow generalized extreme value (GEV) distributions whose location and scale parameters are jointly spatially dependent, with the dependence captured by multivariate Markov random field models specified through coregionalization.
Abstract: We propose a hierarchical modeling approach for explaining a collection of spatially referenced time series of extreme values. We assume that the observations follow generalized extreme value (GEV) distributions whose locations and scales are jointly spatially dependent where the dependence is captured using multivariate Markov random field models specified through coregionalization. In addition, there is temporal dependence in the locations. There are various ways to provide appropriate specifications; we consider four choices. The models can be fitted using a Markov Chain Monte Carlo (MCMC) algorithm to enable inference for parameters and to provide spatio–temporal predictions. We fit the models to a set of gridded interpolated precipitation data collected over a 50-year period for the Cape Floristic Region in South Africa, summarizing results for what appears to be the best choice of model.
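The paper's hierarchical model requires MCMC over the MRF/coregionalization structure, but its building block is the site-wise GEV likelihood. A minimal non-spatial sketch of that baseline using scipy, on hypothetical annual maxima (the spatial and temporal dependence in the paper is not reproduced here):

```python
# Sketch: independent site-wise GEV fits as a baseline for the hierarchical
# model described above. `annual_maxima` is hypothetical illustrative data.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
# hypothetical: 50 years of annual precipitation maxima at 10 grid cells
annual_maxima = genextreme.rvs(c=-0.1, loc=40.0, scale=8.0,
                               size=(10, 50), random_state=rng)

for site, series in enumerate(annual_maxima):
    # scipy's shape parameter c corresponds to -xi in the usual GEV notation
    c, loc, scale = genextreme.fit(series)
    print(f"site {site}: xi={-c:.2f}, loc={loc:.1f}, scale={scale:.1f}")
```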

220 citations


Journal ArticleDOI
TL;DR: In this article, a review of the CFFDRS is presented with the main focus on understanding and interpreting Canadian Fire Weather Index (FWI) System outputs, and examples are shown of how the relationship between actual fuel moisture and the FWI System's moisture codes varies from region to region.
Abstract: Understanding and being able to predict forest fire occurrence, fire growth and fire intensity are important aspects of forest fire management. In Canada fire management agencies use the Canadian Forest Fire Danger Rating System (CFFDRS) to help predict these elements of forest fire activity. In this paper a review of the CFFDRS is presented with the main focus on understanding and interpreting Canadian Fire Weather Index (FWI) System outputs. The need to interpret the outputs of the FWI System with consideration to regional differences is emphasized and examples are shown of how the relationship between actual fuel moisture and the FWI System’s moisture codes vary from region to region. Examples are then shown of the relationship between fuel moisture and fire occurrence for both human- and lightning-caused fire for regions with different forest composition. The relationship between rate of spread, fuel consumption and the relative fire behaviour indices of the FWI System for different forest types is also discussed. The outputs of the CFFDRS are used every day across Canada by fire managers in every district, regional and provincial fire management office. The purpose of this review is to provide modellers with an understanding of this system and how its outputs can be interpreted. It is hoped that this review will expose statistical modellers and other researchers to some of the models used currently in forest fire management and encourage further research and development of models useful for understanding and managing forest fire activity.

180 citations


Journal ArticleDOI
TL;DR: In this paper, a stochastic fire growth model is proposed with the aim of predicting the behaviour of large forest fires, describing not only average growth but also the variability of the growth.
Abstract: We consider a stochastic fire growth model, with the aim of predicting the behaviour of large forest fires. Such a model can describe not only average growth, but also the variability of the growth. Implementing such a model in a computing environment allows one to obtain probability contour plots, burn size distributions, and distributions of time to specified events. Such a model also allows the incorporation of a stochastic spotting mechanism.
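A toy illustration of the simulation idea: a lattice fire-spread model run many times to build a burn-size distribution. The grid, spread probability, and boundary handling below are illustrative assumptions, not the paper's model:

```python
# Sketch: stochastic lattice fire growth; each burning cell gets one step to
# ignite each neighbour with probability p. Repeated runs give a burn-size
# distribution. Toroidal edges via np.roll are a simplification.
import numpy as np

def simulate_burn(n=101, p=0.25, steps=200, seed=0):
    rng = np.random.default_rng(seed)
    burning = np.zeros((n, n), bool)
    burnt = np.zeros((n, n), bool)
    burning[n // 2, n // 2] = True       # single ignition point
    for _ in range(steps):
        if not burning.any():
            break
        front = np.zeros_like(burning)
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            front |= np.roll(burning, (dx, dy), axis=(0, 1))
        ignites = front & ~burnt & ~burning & (rng.random((n, n)) < p)
        burnt |= burning
        burning = ignites
    return (burnt | burning).sum()       # final burn size in cells

sizes = [simulate_burn(seed=s) for s in range(100)]
print(np.mean(sizes), np.percentile(sizes, [5, 50, 95]))
```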

78 citations


Journal ArticleDOI
TL;DR: In this article, a complex multivariate spatial point pattern of a plant community with high biodiversity is modelled using a hierarchical multivariate point process model, and a Bayesian approach provides a flexible framework for incorporating prior information concerning the interaction radii.
Abstract: A complex multivariate spatial point pattern of a plant community with high biodiversity is modelled using a hierarchical multivariate point process model. In the model, interactions between plants with different post-fire regeneration strategies are of key interest. We consider initially a maximum likelihood approach to inference where problems arise due to unknown interaction radii for the plants. We next demonstrate that a Bayesian approach provides a flexible framework for incorporating prior information concerning the interaction radii. From an ecological perspective, we are able both to confirm existing knowledge on species’ interactions and to generate new biological questions and hypotheses on species’ interactions.

77 citations


Journal ArticleDOI
TL;DR: In this article, the authors analyze and model the structure of spatio-temporal wildfire ignitions in the St. Johns River Water Management District in northeastern Florida and find that prescribed burns seem not to reduce significantly the occurrence of wildfires in the current or subsequent year over this large geographical region.
Abstract: We analyze and model the structure of spatio-temporal wildfire ignitions in the St. Johns River Water Management District in northeastern Florida. Previous studies, based on the K-function and an assumption of homogeneity, have shown that wildfire events occur in clusters. We revisit this analysis based on an inhomogeneous K-function and argue that clustering is less important than initially thought. We also use K-cross functions to study multitype point patterns, both under homogeneity and inhomogeneity assumptions, and reach similar conclusions as above regarding the amount of clustering. Of particular interest is our finding that prescribed burns seem not to reduce significantly the occurrence of wildfires in the current or subsequent year over this large geographical region. Finally, we describe various point pattern models for the location of wildfires and investigate their adequacy by means of recent residual diagnostics.
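For intuition, here is a bare-bones inhomogeneous K-function estimator of the kind the reanalysis relies on. It omits edge correction, and the point pattern and fitted intensity are hypothetical (spatstat in R is the standard tool for the real thing):

```python
# Sketch: inhomogeneous K-function, K(r) ~ (1/|W|) * sum over pairs i != j
# with d_ij <= r of 1 / (lambda(x_i) * lambda(x_j)); no edge correction.
import numpy as np
from scipy.spatial.distance import pdist, squareform

def k_inhom(points, lam, r_values, area):
    """points: (n, 2) coords; lam: fitted intensity at each point."""
    d = squareform(pdist(points))
    np.fill_diagonal(d, np.inf)          # exclude i == j pairs
    w = 1.0 / np.outer(lam, lam)         # pair weights 1/(lambda_i * lambda_j)
    return np.array([(w * (d <= r)).sum() / area for r in r_values])

rng = np.random.default_rng(1)
pts = rng.random((200, 2))               # hypothetical pattern on unit square
lam = np.full(200, 200.0)                # constant intensity -> ordinary K
r = np.linspace(0.01, 0.2, 10)
print(k_inhom(pts, lam, r, area=1.0))    # compare with pi * r**2 under CSR
```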

66 citations


Journal ArticleDOI
TL;DR: In this article, the authors examined sensitivities of parameter estimates to the spatial resolution of sampling, point- and area-based methods for assigning sample values, current age surfaces versus historical intervals in generating distributions, and the inclusion of censored (i.e., incomplete) observations.
Abstract: Statistical characterization of past fire regimes is important for both the ecology and management of fire-prone ecosystems. Survival analysis—or fire frequency analysis as it is often called in the fire literature—has increasingly been used over the last few decades to examine fire interval distributions. These distributions can be generated from a variety of sources (e.g., tree rings and stand age patterns), and analysis typically involves fitting the Weibull model. Given the widespread use of fire frequency analysis and the increasing availability of mapped fire history data, our goal has been to review and to examine some of the issues faced in applying these methods in a spatially explicit context. In particular, through a case study on the massive Cedar Fire in 2003 in southern California, we examine sensitivities of parameter estimates to the spatial resolution of sampling, point- and area-based methods for assigning sample values, current age surfaces versus historical intervals in generating distributions, and the inclusion of censored (i.e., incomplete) observations. Weibull parameter estimates were found to be roughly consistent with previous fire frequency analyses for shrublands (i.e., median age at burning of ~30–50 years and relatively low age dependency). Results indicate, however, that the inclusion or omission of censored observations can have a substantial effect on parameter estimates, far more than other decisions about specifics of sampling.
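The paper's headline sensitivity is the handling of censored intervals. A minimal sketch of a Weibull fire-interval fit that includes right-censored observations via the survival function, on hypothetical interval data:

```python
# Sketch: ML Weibull fit to fire intervals with right-censored observations
# (stands that have not burned since last fire). Data values are hypothetical.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

intervals = np.array([12., 25., 31., 40., 55., 18., 62., 35.])  # observed fires
censored = np.array([20., 45., 70.])     # ages with no fire yet (open intervals)

def neg_loglik(params):
    shape, scale = np.exp(params)        # optimize on log scale for positivity
    ll = weibull_min.logpdf(intervals, shape, scale=scale).sum()
    ll += weibull_min.logsf(censored, shape, scale=scale).sum()  # survival term
    return -ll

res = minimize(neg_loglik, x0=np.log([1.5, 40.0]))
shape, scale = np.exp(res.x)
print(f"shape c={shape:.2f}, scale b={scale:.1f}, "
      f"median={scale * np.log(2) ** (1 / shape):.1f} yr")
```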

62 citations


Journal ArticleDOI
TL;DR: This work shows how many of the standard mark-recapture models for open populations can be readily fitted using the software WinBUGS, and illustrates fitting the Cormack–Jolly–Seber model, multi-state and multi-event models, models including auxiliary data, and models including density dependence.
Abstract: Hierarchical mark-recapture models offer three advantages over classical mark-recapture models: (i) they allow expression of complicated models in terms of simple components; (ii) they provide a convenient way of modeling missing data and latent variables in a way that allows expression of relationships involving latent variables in the model; (iii) they provide a convenient way of introducing parsimony into models involving many nuisance parameters. Expressing models using the complete data likelihood, we show how many of the standard mark-recapture models for open populations can be readily fitted using the software WinBUGS. We include examples that illustrate fitting the Cormack–Jolly–Seber model, multi-state and multi-event models, models including auxiliary data, and models including density dependence.
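WinBUGS handles the latent alive/dead states by sampling them; for intuition, here is a sketch of the Cormack–Jolly–Seber observed-data likelihood for a single capture history, marginalizing the latent states directly with a two-state forward recursion. Constant phi and p are illustrative assumptions:

```python
# Sketch: CJS log-likelihood for one capture history via forward filtering
# over latent alive/dead states, conditioning on first capture.
import numpy as np

def cjs_loglik(history, phi, p):
    first = int(np.argmax(history))      # index of first capture
    a, d = 1.0, 0.0                      # P(alive), P(dead) at first capture
    ll = 0.0
    for y in history[first + 1:]:
        a, d = a * phi, d + a * (1 - phi)        # survival transition
        like = a * p if y else a * (1 - p) + d   # only live animals are seen
        ll += np.log(like)
        if y:
            a, d = 1.0, 0.0                      # seen => certainly alive
        else:
            a, d = a * (1 - p) / like, d / like  # filtered state update
    return ll

print(cjs_loglik(np.array([1, 0, 1, 1, 0]), phi=0.8, p=0.6))
```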

62 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigate the use of model averaging for estimating excess risk using a Monte Carlo simulation and find that the MA method has a small bias when estimating the BMD that is similar to the bias of BMD estimates derived from the assumed model.
Abstract: Model averaging (MA) has been proposed as a method of accommodating model uncertainty when estimating risk. Although the use of MA is inherently appealing, little is known about its performance using general modeling conditions. We investigate the use of MA for estimating excess risk using a Monte Carlo simulation. Dichotomous response data are simulated under various assumed underlying dose–response curves, and nine dose–response models (from the USEPA Benchmark dose model suite) are fit to obtain both model specific and MA risk estimates. The benchmark dose estimates (BMDs) from the MA method, as well as estimates from other commonly selected models, e.g., the best-fitting model or the model resulting in the smallest BMD, are compared to the true benchmark dose value to better understand both bias and coverage behavior in the estimation procedure. The MA method has a small bias when estimating the BMD that is similar to the bias of BMD estimates derived from the assumed model. Further, when a broader range of models is included in the family of models considered in the MA process, the lower bound estimate provided coverage close to the nominal level, which is superior to the other strategies considered. This approach provides an alternative method for risk managers to estimate risk while incorporating model uncertainty.
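One common way to implement the MA idea is Akaike-weight averaging of the fitted models' risk estimates; the paper's exact weighting scheme may differ. A sketch with hypothetical fitted log-likelihoods and risk values:

```python
# Sketch: Akaike-weight model averaging of extra-risk estimates across a
# suite of fitted dose-response models. All numbers are hypothetical.
import numpy as np

# (log-likelihood at MLE, number of parameters) for each fitted model
fits = {"logistic": (-103.2, 2), "probit": (-103.9, 2), "weibull": (-102.8, 3)}
aic = {m: 2 * k - 2 * ll for m, (ll, k) in fits.items()}
amin = min(aic.values())
raw = {m: np.exp(-(a - amin) / 2) for m, a in aic.items()}
total = sum(raw.values())
weights = {m: v / total for m, v in raw.items()}      # Akaike weights

# hypothetical fitted extra-risk estimates at one candidate dose
risk = {"logistic": 0.082, "probit": 0.095, "weibull": 0.071}
ma_risk = sum(weights[m] * risk[m] for m in fits)
print(weights, f"model-averaged extra risk = {ma_risk:.3f}")
```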

53 citations


Journal ArticleDOI
TL;DR: Some of the techniques for the analysis of spatial point patterns that have become available due to recent developments in point process modelling software are demonstrated and a discussion on the nature of statistical models for point patterns is included.
Abstract: In this paper I demonstrate some of the techniques for the analysis of spatial point patterns that have become available due to recent developments in point process modelling software. These developments permit convenient exploratory data analysis, model fitting, and model assessment. Efficient model fitting, in particular, makes possible the testing of statistical hypotheses of genuine interest, even when interaction between points is present, via Monte Carlo methods. The discussion of these techniques is conducted jointly with and in the context of some preliminary analyses of a collection of data sets which are of considerable interest in their own right. These data sets (which were kindly provided to me by the New Brunswick Department of Natural Resources) consist of the complete records of wildfires which occurred in New Brunswick during the years 1987 through 2003. In treating these data sets I deal with data-cleaning problems, methods of exploratory data analysis, means of detecting interaction, fitting of statistical models, and residual analysis and diagnostics. In addition to demonstrating modelling techniques, I include a discussion on the nature of statistical models for point patterns. This is given with a view to providing an understanding of why, in particular, the Strauss model fails as a model for interpoint attraction and how it has been modified to overcome this difficulty. All actual modelling of the New Brunswick fire data is done only with the intent of illustrating techniques. No substantive conclusions are or can be drawn at this stage. Realistic modelling of these data sets would require incorporation of covariate information which I do not so far have available.
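The Strauss model mentioned at the end has a simple closed-form (unnormalized) density, which also makes its failure for attraction easy to state. A sketch, assuming hypothetical parameter values:

```python
# Sketch: unnormalized Strauss log-density, f(x) proportional to
# beta**n * gamma**s(x), where s(x) counts r-close pairs. For gamma <= 1 the
# model is a valid inhibition model; for gamma > 1 the normalizing constant
# diverges, which is why Strauss fails as a model of interpoint attraction.
import numpy as np
from scipy.spatial.distance import pdist

def strauss_unnorm_logdensity(points, beta, gamma, r):
    n = len(points)
    s = int((pdist(points) < r).sum())   # number of pairs closer than r
    return n * np.log(beta) + s * np.log(gamma)

rng = np.random.default_rng(2)
pts = rng.random((50, 2))                # hypothetical pattern on unit square
print(strauss_unnorm_logdensity(pts, beta=50.0, gamma=0.5, r=0.05))
```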

49 citations


Journal ArticleDOI
TL;DR: This paper includes brief summaries of the most relevant point process properties, starting from the description and estimation of first and second order moment properties, proceeding to a description of conditional intensity or dynamic models, and ending with an introduction to some of the models and estimation procedures which are currently being used in seismology.
Abstract: A common element in modelling forest fires and earthquakes is the need to develop space-time point process models that can be used to quantify the evolving risk from forest fires (or earthquakes) as a function of time, location, and background factors. This paper is intended as an introduction to space-time point process modelling. It includes brief summaries of the most relevant point process properties, starting from the description and estimation of first and second order moment properties, proceeding to a description of conditional intensity or dynamic models, and ending with an introduction to some of the models and estimation procedures which are currently being used in seismology. A short final section contrasts the modelling problems for seismology and for forest fires.
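The conditional-intensity models described include, as their simplest seismological example, Hawkes/ETAS-type self-exciting processes. A sketch of the purely temporal special case, with illustrative parameters and event times:

```python
# Sketch: temporal Hawkes conditional intensity,
# lambda(t) = mu + sum over t_i < t of alpha * exp(-beta * (t - t_i)).
# The full ETAS model adds magnitudes and spatial kernels, omitted here.
import numpy as np

def cond_intensity(t, history, mu=0.5, alpha=0.8, beta=1.2):
    past = history[history < t]
    return mu + alpha * np.exp(-beta * (t - past)).sum()

events = np.array([1.0, 1.3, 2.7, 5.2])  # hypothetical event times
print([round(cond_intensity(t, events), 3) for t in (2.0, 3.0, 6.0)])
```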

43 citations


Journal ArticleDOI
TL;DR: This article used field studies and computer simulations to determine how robust rarefaction is to non-random spatial dispersion and whether simple measures of spatial autocorrelation can predict the bias in rarefaction estimates.
Abstract: Rarefaction estimates how many species are expected in a random sample of individuals from a larger collection and allows meaningful comparisons among collections of different sizes. It assumes random spatial dispersion. However, two common dispersion patterns, within-species clumping and segregation among species, can cause rarefaction to overestimate the species richness of a smaller continuous area. We use field studies and computer simulations to determine (1) how robust rarefaction is to nonrandom spatial dispersion and (2) whether simple measures of spatial autocorrelation can predict the bias in rarefaction estimates. Rarefaction does not estimate species richness accurately for many communities, especially at small sample sizes. Measures of spatial autocorrelation of the more abundant species do not reliably predict amount of bias. Survey sites should be standardized to equal-sized areas before sampling. When sites are of equal area but differ in number of individuals sampled, rarefaction can standardize collections. When communities are sampled from different-sized areas, the mean and confidence intervals of species accumulation curves allow more meaningful comparisons among sites.
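The estimator whose bias is being quantified is classical individual-based (hypergeometric) rarefaction. A sketch with hypothetical abundance data:

```python
# Sketch: individual-based rarefaction, E[S_n] = sum over species i of
# 1 - C(N - N_i, n) / C(N, n), with N total individuals and N_i per species.
from math import comb

def rarefy(counts, n):
    """Expected species count in a random sample of n individuals."""
    N = sum(counts)
    return sum(1 - comb(N - Ni, n) / comb(N, n) for Ni in counts)

abundances = [50, 30, 10, 5, 3, 1, 1]    # hypothetical individuals per species
print(rarefy(abundances, 20))            # expected richness at n = 20
```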

Journal ArticleDOI
TL;DR: In this article, the authors compared Bayesian estimation with restricted maximum likelihood estimation and KED with universal kriging, and concluded that a traditional simple statistical model is of an almost equal quality.
Abstract: In the mid nineteen eighties the Dutch NOx air quality monitoring network was reduced from 73 to 32 rural and city background stations, leading to higher spatial uncertainties. In this study, several other sources of information are being used to help reduce uncertainties in parameter estimation and spatial mapping. For parameter estimation, we used Bayesian inference. For mapping, we used kriging with external drift (KED) including secondary information from a dispersion model. The methods were applied to atmospheric NOx concentrations on rural and urban scales. We compared Bayesian estimation with restricted maximum likelihood estimation and KED with universal kriging. As a reference we also included ordinary least squares (OLS). Comparison of several parameter estimation and spatial interpolation methods was done by cross-validation. Bayesian analysis resulted in an error reduction of 10 to 20% as compared to restricted maximum likelihood, whereas KED resulted in an error reduction of 50% as compared to universal kriging. Where observations were sparse, the predictions were substantially improved by inclusion of the dispersion model output and by using available prior information. No major improvement was observed as compared to OLS, the cause presumably being that much good information is contained in the dispersion model output, so that no additional spatial residual random field is required to explain the data. In all, we conclude that reduction in the monitoring network could be compensated by modern geostatistical methods, and that a traditional simple statistical model is of an almost equal quality.
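As a two-step stand-in for the KED used in the paper, here is regression kriging with a dispersion-model output as the drift variable (KED solves one joint system; separating the drift regression and residual kriging keeps the sketch short). Coordinates, covariance parameters, and data are hypothetical:

```python
# Sketch: regression kriging with external-drift covariate (dispersion-model
# NOx). Drift by OLS, then simple kriging of the residuals.
import numpy as np
from scipy.spatial.distance import cdist

def exp_cov(d, sill=1.0, range_=30.0):
    return sill * np.exp(-d / range_)    # exponential covariance, assumed form

rng = np.random.default_rng(9)
s = rng.uniform(0, 100, (40, 2))         # station coordinates, km
drift = rng.uniform(10, 60, 40)          # dispersion-model NOx at stations
z = 5.0 + 0.8 * drift + rng.normal(0, 1.0, 40)   # observed NOx

X = np.column_stack([np.ones(40), drift])
beta = np.linalg.lstsq(X, z, rcond=None)[0]      # drift coefficients
resid = z - X @ beta

s0 = np.array([[50.0, 50.0]])                    # prediction location
drift0 = 35.0                                    # model output at target
K = exp_cov(cdist(s, s)) + 1e-8 * np.eye(40)
k0 = exp_cov(cdist(s, s0)).ravel()
w = np.linalg.solve(K, k0)                       # simple-kriging weights
print(f"predicted NOx: {beta[0] + beta[1] * drift0 + w @ resid:.1f}")
```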

Journal ArticleDOI
TL;DR: In this article, the authors proposed the use of a model in which non-separability arises from temporal non-stationarity and used it to analyze tropospheric ozone data from the Emilia-Romagna Region of Italy.
Abstract: The past two decades have witnessed an increasing interest in the use of space-time models for a wide range of environmental problems. The fundamental tool used to embody both the temporal and spatial components of the phenomenon in question is the covariance model. The empirical estimation of space-time covariance models can prove highly complex if simplifying assumptions are not employed. For this reason, many studies assume both spatiotemporal stationarity, and the separability of spatial and temporal components. This second assumption is often unrealistic from the empirical point of view. This paper proposes the use of a model in which non-separability arises from temporal non-stationarity. The model is used to analyze tropospheric ozone data from the Emilia-Romagna Region of Italy.
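A small numerical illustration of the paper's central idea, under an assumed covariance form (not the authors' model): letting the temporal standard deviation vary with time produces a covariance that cannot be factored into spatial and temporal parts in the lags alone.

```python
# Sketch: C(h; t1, t2) = sigma(t1) * sigma(t2) * rho_s(h) * rho_t(t1 - t2).
# With time-varying sigma, the same (h, u) lags give different covariances
# at different time origins, i.e. non-separability from non-stationarity.
import numpy as np

def rho_s(h):
    return np.exp(-h / 10.0)

def rho_t(u):
    return np.exp(-abs(u) / 3.0)

def sigma(t):
    return 1.0 + 0.1 * t                 # temporal non-stationarity (assumed)

def cov(h, t1, t2):
    return sigma(t1) * sigma(t2) * rho_s(h) * rho_t(t1 - t2)

# same spatial lag h=5 and temporal lag u=1, different time origins:
print(cov(5.0, 0.0, 1.0), cov(5.0, 10.0, 11.0))
```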

Journal ArticleDOI
TL;DR: In this article, the authors examined the applicability of propensity score matching (PSM) techniques in modeling wildfire and quantified the returns to suppression and fuels management on wildfire behavior.
Abstract: This paper examines the effect wildfire mitigation has on broad-scale wildfire behavior. Each year, hundreds of millions of dollars are spent on fire suppression and fuels management applications, yet little is known, quantitatively, of the returns to these programs in terms of their impact on wildfire extent and intensity. This is especially true when considering that wildfire management influences and reacts to several, oftentimes confounding, factors, including socioeconomic characteristics, values at risk, heterogeneous landscapes, and climate. Due to the endogenous nature of suppression effort and fuels management intensity and placement with wildfire behavior, traditional regression models may prove inadequate. Instead, I examine the applicability of propensity score matching (PSM) techniques in modeling wildfire. This research makes several significant contributions, including: (1) applying techniques developed in labor economics and in epidemiology to evaluate the effects of natural resource policies on landscapes, rather than on individuals; (2) providing a better understanding of the relationship between wildfire mitigation strategies and their influence on broad-scale wildfire patterns; (3) quantifying the returns to suppression and fuels management on wildfire behavior.
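The basic PSM recipe is a propensity model plus matching on the estimated score. A sketch on simulated "landscape units" (logistic propensity model and nearest-neighbour matching are common choices, not necessarily the paper's exact implementation):

```python
# Sketch: propensity score matching -- logistic propensity model, 1-nearest-
# neighbour matching with replacement, ATT estimate. Data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 3))                            # landscape covariates
treated = rng.random(500) < 1 / (1 + np.exp(-X[:, 0]))   # confounded treatment
y = X[:, 0] + 0.5 * treated + rng.normal(size=500)       # outcome, effect 0.5

ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]
t_idx, c_idx = np.where(treated)[0], np.where(~treated)[0]
matches = c_idx[np.abs(ps[c_idx][None, :] - ps[t_idx][:, None]).argmin(axis=1)]
att = (y[t_idx] - y[matches]).mean()     # average effect on the treated
print(f"ATT estimate: {att:.2f} (true effect 0.5)")
```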

Journal ArticleDOI
TL;DR: A mechanistic, empirically-based method for buffering linear features that minimizes the underestimation of animal use introduced by GPS measurement error and derived an explicit formula for buffer radius which incorporated the error distribution, the width of the linear feature, and a predefined amount of acceptable type I error in location classification.
Abstract: Global Positioning System (GPS) collars are increasingly used to study animal movement and habitat use. Measurement error is defined as the difference between the observed and true value being measured. In GPS data measurement error is referred to as location error and leads to misclassification of observed locations into habitat types. This is particularly true when studying habitats of small spatial extent with large amounts of edge, such as linear features (e.g. roads and seismic lines). However, no consistent framework exists to address the effect of measurement error on habitat classification of observed locations and resulting biological inference. We developed a mechanistic, empirically-based method for buffering linear features that minimizes the underestimation of animal use introduced by GPS measurement error. To do this we quantified the distribution of measurement error and derived an explicit formula for buffer radius which incorporated the error distribution, the width of the linear feature, and a predefined amount of acceptable type I error in location classification. In our empirical study we found the GPS measurement error of the Lotek GPS_3300 collar followed a bivariate Laplace distribution with parameter ρ = 0.1123. When we applied our method to a simulated landscape, type I error was reduced by 57%. This study highlights the need to address the effect of GPS measurement error in animal location classification, particularly for habitats of small spatial extent.
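The generic version of the buffering idea can be sketched with any error model: choose the radius so that at most alpha of true on-feature locations fall outside the buffer. The isotropic Gaussian errors below are a placeholder, not the bivariate Laplace with ρ = 0.1123 fitted in the paper, whose parameterization is not reproduced here:

```python
# Sketch: buffer radius = half feature width + (1 - alpha) quantile of the
# perpendicular GPS error. Error draws here are a placeholder Gaussian.
import numpy as np

def buffer_radius(error_samples, feature_width, alpha=0.05):
    """Distance from feature centre-line containing 1 - alpha of errors."""
    radial = np.abs(error_samples)       # perpendicular error component
    return feature_width / 2 + np.quantile(radial, 1 - alpha)

rng = np.random.default_rng(4)
errors = rng.normal(scale=5.0, size=10_000)   # placeholder error draws, metres
print(f"buffer radius: {buffer_radius(errors, feature_width=8.0):.1f} m")
```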

Journal ArticleDOI
TL;DR: Specific BMDs to replace the NOAEL and LOAEL are recommended, and how to model continuous health effects that are not observed in a natural risk-based context like dichotomous health effects is addressed.
Abstract: Although benchmark-dose methodology has existed for more than 20 years, benchmark doses (BMDs) still have not fully supplanted the no-observed-adverse-effect level (NOAEL) and lowest-observed-adverse-effect level (LOAEL) as points of departure from the experimental dose–response range for setting acceptable exposure levels of toxic substances. Among the issues involved in replacing the NOAEL (LOAEL) with a BMD are (1) which added risk level(s) above background risk should be targeted as benchmark responses (BMRs), (2) whether to apply the BMD methodology to both carcinogenic and noncarcinogenic toxic effects, and (3) how to model continuous health effects that aren’t observed in a natural risk-based context like dichotomous health effects. This paper addresses these issues and recommends specific BMDs to replace the NOAEL and LOAEL.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a coregionalization analysis with a drift (CRAD) method to assess the multi-scale variability of and relationships between ecological variables from a multivariate spatial data set.
Abstract: In this and a second article, we propose ‘coregionalization analysis with a drift’ (CRAD), as a method to assess the multi-scale variability of and relationships between ecological variables from a multivariate spatial data set. CRAD is carried out in two phases: (I) a deterministic component representing the large-scale pattern (called ‘drift’) and a random component modeled as a second-order stationary process are estimated for each variable separately; (II) a linear model of coregionalization is fitted to the direct and cross experimental variograms of residuals (i.e., after removing the estimated drifts) to assess relationships at smaller scales, while the estimated drifts are used to study relationships at large scale. In this article, we focus on phase I of CRAD, by addressing the questions of the choice of the drift estimation procedure, which is linked to the estimation of random components, and of the presence of a bias in the direct experimental variogram of residuals. In this phase, both the estimation of the drift and the fitting of a model to the direct experimental variogram of residuals are performed iteratively by estimated generalized least squares (EGLS). We use theoretical calculations and a Monte Carlo study to demonstrate that complex large-scale patterns, such as patchy drifts, are better captured with local drift estimation procedures using low-order polynomials within a moving window, than with global procedures. Furthermore, despite the bias in direct experimental variograms of residuals, good estimates of spatial autocovariance parameters are obtained with the double iterative EGLS procedure in the conditions of application of CRAD. An example with forest soil property and tree species diversity data is presented to discuss the choice of drift estimation procedure in practice.
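The core of phase I is estimating a large-scale drift with a low-order polynomial in a moving window and working with the residuals. A sketch in one spatial dimension for brevity (the paper works in 2D with an iterative EGLS refinement, omitted here); data are hypothetical:

```python
# Sketch: moving-window local polynomial drift estimation; the residuals
# would feed the variogram modelling of CRAD phase II.
import numpy as np

def local_drift(coords, z, window, order=1):
    drift = np.empty_like(z)
    for i, c in enumerate(coords):
        sel = np.abs(coords - c) <= window        # points inside the window
        coef = np.polyfit(coords[sel], z[sel], order)
        drift[i] = np.polyval(coef, c)
    return drift

rng = np.random.default_rng(12)
xs = np.sort(rng.uniform(0, 100, 120))
z = 3 * np.sin(xs / 15.0) + rng.normal(0, 0.5, 120)   # patchy drift + noise
resid = z - local_drift(xs, z, window=10.0)
print(resid.std())
```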

Journal ArticleDOI
TL;DR: The use of confidence limits on parameters from a simple one-stage model of risk historically popular in benchmark analysis with quantal data is studied; the resulting methods extend automatically to the case where simultaneous inferences are desired at multiple doses.
Abstract: In modern environmental risk analysis, inferences are often desired on those low dose levels at which a fixed benchmark risk is achieved. In this paper, we study the use of confidence limits on parameters from a simple one-stage model of risk historically popular in benchmark analysis with quantal data. Based on these confidence bounds, we present methods for deriving upper confidence limits on extra risk and lower bounds on the benchmark dose. The methods are seen to extend automatically to the case where simultaneous inferences are desired at multiple doses. Monte Carlo evaluations explore characteristics of the parameter estimates and the confidence limits under this setting.
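Under the one-stage model, extra risk and the benchmark dose have closed forms, which is what makes the confidence-limit machinery tractable. A sketch with an illustrative slope (the paper's contribution, the confidence bounds themselves, is not shown):

```python
# Sketch: one-stage quantal model P(d) = 1 - exp(-(b0 + b1*d)). Extra risk
# reduces to R_E(d) = 1 - exp(-b1*d), so BMD(BMR) = -log(1 - BMR) / b1.
import numpy as np

def extra_risk(d, b1):
    return 1 - np.exp(-b1 * d)

def bmd(bmr, b1):
    return -np.log(1 - bmr) / b1

b1 = 0.02                                  # hypothetical fitted slope
print(extra_risk(5.0, b1), bmd(0.10, b1))  # risk at dose 5; BMD at BMR = 10%
```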

Journal ArticleDOI
TL;DR: In this paper, the authors present a coregionalization analysis with a drift (CRAD) method to assess the multi-scale variability of and relationships between ecological variables from a multivariate spatial data set.
Abstract: In two articles, we present ‘coregionalization analysis with a drift’ (CRAD), a method to assess the multi-scale variability of and relationships between ecological variables from a multivariate spatial data set. In phase I of CRAD (the first article), a deterministic drift component representing the large-scale pattern and a random component modeled as a second-order stationary process are estimated for each variable separately. In phase II (this article), a linear model of coregionalization (LMC) is fitted by estimated generalized least squares to the direct and cross experimental variograms of residuals (i.e., after the removal of estimated drifts). Structural correlations and coefficients of determination at smaller scales are then computed from the estimated coregionalization matrices, while the estimated drifts are used to calculate pseudo coefficients at large scale. The performance of five procedures in estimating correlations and coefficients of determination was compared using a Monte Carlo study. In four CRAD procedures, drift estimation was based on local polynomials of order 0, 1, 2 (L0, L1, L2) or a global polynomial with forward selection of the basis functions; the fifth procedure was coregionalization analysis (CRA), in which large-scale patterns were modeled as a supplemental component in the LMC. In bivariate and multivariate analyses, the uncertainty in the estimation of correlations and coefficients of determination could be related to the interference between spatial components within a bounded sampling domain. In the bivariate case, most procedures provided acceptable estimates of correlations. In regionalized redundancy analysis, uncertainty was highest for CRA, while L1 provided the best results overall. In a forest ecology example, the identification of scale-specific correlations between plant species diversity and soil and topographical variables illustrated the potential of CRAD to provide unique insight into the functioning of complex ecosystems.

Book ChapterDOI
TL;DR: In this article, the authors analyse 54-year-long time series data on the numbers of common redstart (Phoenicurus phoenicurus), common whitethroat (Sylvia communis), garden warbler (Sylvia borin) and lesser whitethroat (Sylvia curruca) trapped in spring and autumn at Ottenby Bird Observatory, Sweden.
Abstract: We analyse 54 year long time series data on the numbers of common redstart (Phoenicurus phoenicurus), common whitethroat (Sylvia communis), garden warbler (Sylvia borin) and lesser whitethroat (Sylvia curruca) trapped in spring and autumn at Ottenby Bird Observatory, Sweden. The Ottenby time series could potentially serve as a reference on how much information on population change is available in count data on migrating birds. To investigate this, we combine spring and autumn data in a Bayesian state-space model trying to separate demographic signals and observation noise. The spring data are assumed to be a measure of the breeding population size, whereas the autumn data measure the population size after reproduction. At the demographic level we include seasonal density dependence and model winter dynamics as a function of precipitation in the Sahel region, south of the Sahara desert, where these species are known to spend the winter. Results show that the large fluctuations in the data restrict what conclusions can be drawn about the dynamics of the species. Annual catches are highly correlated between species and we show that a likely explanation for this is that trapping numbers are strongly dependent on local weather conditions. A comparative analysis of a related data set from the Courish Spit, Russia, gives rather different dynamics which may be caused by low information in the two data sets, but also by distinct populations passing Ottenby and the Courish Spit. This highlights the difficulty of validating results of the analyses when abundance indices derived by other methods or from other populations do not agree.

Journal ArticleDOI
TL;DR: In this paper, the authors propose the use of the class of beta-normal distributions, demonstrate its application in risk assessment for quantitative responses, and derive risk estimates based on the asymptotic properties of the maximum likelihood estimates.
Abstract: To establish allowable daily intakes for humans from animal bioassay experiments, benchmark doses corresponding to low levels of risk have been proposed to replace the no-observed-adverse-effect level for non-cancer endpoints. When the experimental outcomes are quantal, each animal can be classified with or without the disease. The proportion of affected animals is observed as a function of dose and calculation of the benchmark dose is relatively simple. For quantitative responses, on the other hand, one method is to convert the continuous data to quantal data and proceed with benchmark dose estimation. Another method which has found more popularity (Crump, Risk Anal 15:79–89; 1995) is to fit an appropriate dose–response model to the continuous data, and directly estimate the risk and benchmark doses. The normal distribution has often been used in the past as a dose–response model. However, for non-symmetric data, the normal distribution can lead to erroneous results. Here, we propose the use of the class of beta-normal distribution and demonstrate its application in risk assessment for quantitative responses. The most important feature of this class of distributions is its generality, encompassing a wide range of distributional shapes including the normal distribution as a special case. The properties of the model are briefly discussed and risk estimates are derived based on the asymptotic properties of the maximum likelihood estimates. An example is used for illustration.
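The beta-normal density has a simple construction: the standard beta density applied to the normal CDF. A sketch of the pdf (parameter values are illustrative; the paper's dose-response regression structure is not reproduced):

```python
# Sketch: beta-normal density,
# f(x) = phi(z) * Phi(z)**(a-1) * (1 - Phi(z))**(b-1) / (B(a, b) * sigma),
# z = (x - mu) / sigma. Setting a = b = 1 recovers the normal distribution.
import numpy as np
from scipy.stats import norm
from scipy.special import beta as beta_fn

def beta_normal_pdf(x, a, b, mu=0.0, sigma=1.0):
    z = (x - mu) / sigma
    F = norm.cdf(z)
    return (norm.pdf(z) * F ** (a - 1) * (1 - F) ** (b - 1)
            / (beta_fn(a, b) * sigma))

x = np.linspace(-3, 3, 7)
print(beta_normal_pdf(x, a=2.0, b=0.8))  # an asymmetric member of the family
```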

Journal ArticleDOI
TL;DR: A new bootstrap technique is proposed as an alternative to the large sample methodology and this technique is evaluated via a simulation study and examples from environmental toxicology.
Abstract: A primary objective in quantitative risk assessment is the characterization of risk, which is defined to be the likelihood of an adverse effect caused by an environmental toxin or chemical agent. In modern risk-benchmark analysis, attention centers on the "benchmark dose" at which a fixed benchmark level of risk is achieved, with a lower confidence limit on this dose being of primary interest. In practice, a range of benchmark risks may be under study, so that the individual lower confidence limits on benchmark dose must be corrected for simultaneity in order to maintain a specified overall level of confidence. For the case of quantal data, simultaneous methods have been constructed that appeal to the large-sample normality of parameter estimates. The suitability of these methods for use with small sample sizes will be considered. A new bootstrap technique is proposed as an alternative to the large-sample methodology. This technique is evaluated via a simulation study and examples from environmental toxicology.
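A sketch of the general flavour of a parametric bootstrap for a BMD lower limit under the one-stage model (the paper's simultaneous correction across several benchmark risks is not shown). All data values are hypothetical:

```python
# Sketch: parametric bootstrap BMDL under P(d) = 1 - exp(-(b0 + b1*d)).
import numpy as np
from scipy.optimize import minimize

doses = np.array([0.0, 10.0, 25.0, 50.0])
n = 50                                       # animals per dose group
y = np.array([2, 6, 14, 24])                 # hypothetical responders

def nll(theta, y):
    b0, b1 = np.exp(theta)                   # keep both betas positive
    p = 1 - np.exp(-(b0 + b1 * doses))
    return -(y * np.log(p) + (n - y) * np.log(1 - p)).sum()

def fit(y):
    return minimize(nll, x0=np.log([0.05, 0.01]), args=(y,)).x

bmr = 0.10
b0_hat, b1_hat = np.exp(fit(y))
bmd_hat = -np.log(1 - bmr) / b1_hat          # closed form under this model
p_hat = 1 - np.exp(-(b0_hat + b1_hat * doses))

rng = np.random.default_rng(8)
boot = [-np.log(1 - bmr) / np.exp(fit(rng.binomial(n, p_hat))[1])
        for _ in range(500)]                 # refit to resampled counts
print(f"BMD = {bmd_hat:.1f}; bootstrap 5th percentile (BMDL) = "
      f"{np.percentile(boot, 5):.1f}")
```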

Journal ArticleDOI
TL;DR: In this paper, a permutation procedure is used to optimise plant functional response groups, which are formed on the basis of correlations among biological attributes and subjected to tests of response to environmental variables.
Abstract: Plant functional response groups (PFGs) are now widely established as a tool to investigate plant—environment relationships. Different statistical methods to form PFGs are used in the literature. One way is to derive emergent groups by classifying species based on correlation of biological attributes and subjecting these groups to tests of response to environmental variables. Another way is to search for associations of occurrence data, environmental variables and trait data simultaneously. The fourth-corner method is one way to assess the relationships between single traits and habitat factors. We extended this statistical method to a generally applicable procedure for the generation of plant functional response groups by developing new randomization procedures for presence/absence data of plant communities. Previous PFG groupings used either predefined groups or emergent groups, i.e. classifications based on correlations of biological attributes (Lavorel et al., Trends Ecol Evol 12:474–478, 1997), of the global species pool and assessed their functional response. However, since not all PFGs might form emergent groups or may be known by experts, we used a permutation procedure to optimise functional grouping. We tested the method using an artificial test data set of virtual plants occurring in different disturbance treatments. Direct trait-treatment relationships as well as more complex associations are incorporated in the test data. Trait combinations responding to environmental variables could be clearly distinguished from non-responding combinations. The results are compared with the method suggested by Pillar (J Veg Sci 10:631–640) for the identification of plant functional groups. After exploring the statistical properties using an artificial data set, the method is applied to experimental data of a greenhouse experiment on the assemblage of plant communities. Four plant functional response groups are formed with regard to differences in soil fertility on the basis of the traits canopy height and spacer length.
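The building block being extended is a fourth-corner-style permutation test linking one trait to one environmental factor through a presence/absence matrix. A sketch with hypothetical data (the paper's optimisation over groupings and its specific randomization schemes are not reproduced):

```python
# Sketch: fourth-corner-style permutation test. The statistic links the
# environment (site rows) and a trait (species columns) through the
# presence/absence matrix; rows are permuted to build the null.
import numpy as np

rng = np.random.default_rng(10)
L = rng.random((30, 12)) < 0.3        # sites x species presence/absence
env = rng.normal(size=30)             # environmental variable per site
trait = rng.normal(size=12)           # trait value per species

def fourth_corner_stat(L, env, trait):
    return env @ L.astype(float) @ trait

obs = fourth_corner_stat(L, env, trait)
null = np.array([fourth_corner_stat(L[rng.permutation(30)], env, trait)
                 for _ in range(999)])
p = (np.sum(np.abs(null) >= abs(obs)) + 1) / 1000
print(f"permutation p-value: {p:.3f}")
```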

Journal ArticleDOI
TL;DR: In this paper, a suite of long-term variance estimators for estimating the probability of quasi-extinction was developed and evaluated using simulated data with temporally autocorrelated population growth and sampling error.
Abstract: Estimates of a population’s growth rate and process variance from time-series data are often used to calculate risk metrics such as the probability of quasi-extinction, but temporal correlations in the data from sampling error, intrinsic population factors, or environmental conditions can bias process variance estimators and detrimentally affect risk predictions. It has been claimed (McNamara and Harding, Ecol Lett 7:16–20, 2004) that estimates of the long-term variance that incorporate observed temporal correlations in population growth are unaffected by sampling error; however, no estimation procedures were proposed for time-series data. We develop a suite of such long-term variance estimators, and use simulated data with temporally autocorrelated population growth and sampling error to evaluate their performance. In some cases, we get nearly unbiased long-term variance estimates despite ignoring sampling error, but the utility of these estimators is questionable because of large estimation uncertainty and difficulties in estimating correlation structure in practice. Process variance estimators that ignored temporal correlations generally gave more precise estimates of the variability in population growth and of the probability of quasi-extinction. We also found that the estimation of probability of quasi-extinction was greatly improved when quasi-extinction thresholds were set relatively close to population levels. Because of precision concerns, we recommend using simple models for risk estimates despite potential biases, and limiting inference to quantifying relative risk; e.g., changes in risk over time for a single population or comparative risk among populations.
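The risk metric that these variance estimators feed is straightforward to compute by Monte Carlo once a mean log growth rate and process variance are in hand. A sketch with hypothetical values (the paper's temporally autocorrelated growth and sampling error are not simulated here):

```python
# Sketch: Monte Carlo probability of quasi-extinction under iid Gaussian
# log growth rates; mu, sigma2 and the threshold are illustrative.
import numpy as np

def p_quasi_extinction(n0, threshold, mu, sigma2, years, reps=10_000, seed=5):
    rng = np.random.default_rng(seed)
    growth = rng.normal(mu, np.sqrt(sigma2), size=(reps, years))
    logn = np.log(n0) + np.cumsum(growth, axis=1)
    return (logn.min(axis=1) <= np.log(threshold)).mean()

print(p_quasi_extinction(n0=500, threshold=50, mu=-0.02, sigma2=0.04, years=50))
```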

Journal ArticleDOI
TL;DR: The authors explored the use of kernel smoothing and parametric estimation of the relationship between wildfire incidence and various meteorological variables, and treated such relationships as components in separable point process models for wildfire activity.
Abstract: This paper explores the use of, and problems that arise in, kernel smoothing and parametric estimation of the relationships between wildfire incidence and various meteorological variables. Such relationships may be treated as components in separable point process models for wildfire activity. The resulting models can be used for comparative purposes in order to assess the predictive performance of the Burning Index.
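A sketch of the kernel-smoothing component: a Nadaraya-Watson estimate of fire incidence against a single meteorological covariate, of the kind that enters such separable models. Bandwidth and data are illustrative:

```python
# Sketch: Nadaraya-Watson kernel regression with a Gaussian kernel.
import numpy as np

def nw_smooth(x0, x, y, h):
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(6)
temp = rng.uniform(10, 40, 300)                  # daily temperature covariate
fires = rng.poisson(np.exp((temp - 25) / 8))     # counts rising with temp
grid = np.linspace(10, 40, 7)
print(np.round(nw_smooth(grid, temp, fires, h=2.0), 2))
```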

Journal ArticleDOI
TL;DR: This work presents a DA method which does not rely on the MAR assumption and can model missing data mechanisms and covariate structure, and applies it to an ecological data set that relates fish condition to environmental variables.
Abstract: Missing covariate values in linear regression models can be an important problem facing environmental researchers. Existing missing value treatment methods such as Multiple Imputation (MI), the EM algorithm and Data Augmentation (DA) have the assumption that both observed and unobserved data come from the same distribution, most commonly a multivariate normal or a conditionally multivariate normal family. These methods do not try to incorporate the missing data mechanism and rely on the assumption of Missing At Random (MAR). We present a DA method which does not rely on the MAR assumption and can model missing data mechanisms and covariate structure. This method utilizes the Gibbs Sampler as a tool for incorporating these structures and mechanisms. We apply this method to an ecological data set that relates fish condition to environmental variables. The presented DA method detects relationships that are not detected when other missing data methods are employed.
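For orientation, here is the bare-bones data-augmentation loop for a missing covariate in a linear regression, alternating imputation and parameter updates. This is only the standard MAR machinery on simulated data; the paper's contribution, a non-MAR missingness model added into this loop, is not reproduced:

```python
# Sketch: data-augmentation loop for a missing covariate. Parameter step is
# plug-in OLS for brevity; a full Gibbs sampler would draw beta and sigma
# from their conditional posteriors as well.
import numpy as np

rng = np.random.default_rng(7)
n = 200
x = rng.normal(size=n)                       # covariate, prior N(0, 1)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)
miss = rng.random(n) < 0.3                   # 30% of x unobserved
x_cur = np.where(miss, 0.0, x)

for it in range(500):
    X = np.column_stack([np.ones(n), x_cur])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    sigma = (y - X @ beta).std()
    b0, b1 = beta
    # impute x | y under x ~ N(0,1) and y | x ~ N(b0 + b1*x, sigma^2)
    var = 1.0 / (1.0 + b1**2 / sigma**2)
    mean = var * b1 * (y[miss] - b0) / sigma**2
    x_cur[miss] = rng.normal(mean, np.sqrt(var))

print(beta)                                  # should be near (1, 2)
```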

Journal ArticleDOI
TL;DR: In this article, the authors used additive and multiplicative models to disaggregate household data on water consumption from Athens and provide individual consumption estimates, adjusting for heteroscedasticity assuming that variances relate to covariates.
Abstract: Heteroscedastic additive and multiplicative models are proposed to disaggregate household data on water consumption from Athens and provide individual consumption estimates. The models adjust for heteroscedasticity by assuming that variances relate to covariates. Household characteristics that can influence consumption are also included in the models in order to allow for a clearer measurement of individual characteristics effects. Estimation is accomplished through a penalized least squares approach. The method is applied to a sample of real data related to domestic water consumption in Athens. The results show a greater consumption of water for males, while single-female households are those that use the lowest quantities of water. The consumption curves by age and gender are constructed, presenting differences between the two sexes.
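A simple stand-in for the variance-on-covariates idea is two-step feasible weighted least squares; the authors' penalized least squares adds a smoothness penalty omitted here. Data are simulated:

```python
# Sketch: feasible WLS with variance modelled on a covariate. Step 1: OLS;
# step 2: regress log squared residuals on covariates; step 3: re-fit with
# inverse-variance weights.
import numpy as np

rng = np.random.default_rng(13)
n = 400
size = rng.integers(1, 6, n).astype(float)        # household size
X = np.column_stack([np.ones(n), size])
sd = 0.5 + 0.4 * size                             # noise grows with size
y = 20 + 15 * size + rng.normal(0, sd)

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
log_r2 = np.log((y - X @ beta_ols) ** 2 + 1e-12)
gamma = np.linalg.lstsq(X, log_r2, rcond=None)[0] # variance model
w = np.sqrt(1.0 / np.exp(X @ gamma))              # root inverse variances
beta_wls = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)[0]
print(beta_ols, beta_wls)
```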

Journal ArticleDOI
TL;DR: In this paper, the concept of the renewal property is extended to processes indexed by a multidimensional time parameter, including partial sum processes, Poisson processes and many other point processes whose jump points are not totally ordered.
Abstract: The concept of the renewal property is extended to processes indexed by a multidimensional time parameter. The definition given includes not only partial sum processes, but also Poisson processes and many other point processes whose jump points are not totally ordered. Various properties of renewal processes are discussed. Renewal processes are proposed as a basis for modelling the spread of a forest fire under a prevailing wind.

Journal ArticleDOI
TL;DR: An approach is illustrated herein using data from a study of workers exposed to styrene, in which a hybrid BMD calculation is implemented from dose response data reported only as means and standard deviations of ratios of scores on neuropsychological tests from exposed subjects to corresponding scores from matched controls.
Abstract: Benchmark calculations often are made from data extracted from publications. Such data may not be in a form most appropriate for benchmark analysis, and, as a result, suboptimal and/or non-standard benchmark analyses are often applied. This problem can be mitigated in some cases using Monte Carlo computational methods that allow the likelihood of the published data to be calculated while still using an appropriate benchmark dose (BMD) definition. Such an approach is illustrated herein using data from a study of workers exposed to styrene, in which a hybrid BMD calculation is implemented from dose response data reported only as means and standard deviations of ratios of scores on neuropsychological tests from exposed subjects to corresponding scores from matched controls. The likelihood of the data is computed using a combination of analytic and Monte Carlo integration methods.
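The general trick can be sketched directly: when only summary statistics are published, simulate those summaries under candidate parameters and score the reported values against the simulated distribution. The sketch below uses only a reported mean for brevity (the paper uses means and standard deviations of ratio scores, and a hybrid BMD definition not reproduced here); all numbers are hypothetical:

```python
# Sketch: Monte Carlo likelihood of a published group mean under candidate
# parameters, via a kernel density over simulated means.
import numpy as np
from scipy.stats import gaussian_kde

def summary_loglik(reported_mean, n, mu, sigma, sims=5000, seed=11):
    rng = np.random.default_rng(seed)
    sim_means = rng.normal(mu, sigma, size=(sims, n)).mean(axis=1)
    return np.log(gaussian_kde(sim_means)(reported_mean)[0])

# compare candidate group means against a reported mean of 0.93 (n = 25)
for mu in (0.90, 0.95, 1.00):
    print(mu, round(summary_loglik(0.93, n=25, mu=mu, sigma=0.15), 2))
```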

Journal ArticleDOI
TL;DR: In this article, the Gamma distribution and a unimodal mixture of two Gaussian distributions were applied to analyze the ecological association between radon concentration and childhood acute leukemia in France.
Abstract: Ecological studies enable investigation of geographic variations in exposure to environmental variables, across groups, in relation to health outcomes measured on a geographic scale. Such studies are subject to ecological biases, including pure specification bias, which arises when a nonlinear individual exposure-risk model is assumed to apply at the area level. Introduction of the within-area variance of exposure should induce a marked reduction in this source of ecological bias. Assuming several measurements of exposure per area and no confounding risk factors, we study the model including the within-area exposure variability when a Gaussian within-area exposure distribution is assumed. The robustness is assessed when the within-area exposure distribution is misspecified. Two underlying exposure distributions are studied: the Gamma distribution and a unimodal mixture of two Gaussian distributions. In case of strong ecological association, this model can reduce the bias and improve the precision of the individual parameter estimates when the within-area exposure means and variances are correlated. These different models are applied to analyze the ecological association between radon concentration and childhood acute leukemia in France.
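Pure specification bias and the within-area variance fix have a one-line illustration for a log-linear individual risk model with Gaussian within-area exposure (this uses the lognormal moment identity, with illustrative numbers, not the paper's fitted model):

```python
# Sketch: for individual risk exp(b0 + b1*x) and X ~ N(m, v) within an area,
# the correct area-level rate is exp(b0 + b1*m + b1**2 * v / 2); the naive
# aggregate model drops the variance term, causing specification bias.
import numpy as np

b0, b1 = -4.0, 0.8
m, v = 1.2, 0.5                      # within-area exposure mean and variance
naive = np.exp(b0 + b1 * m)
corrected = np.exp(b0 + b1 * m + b1**2 * v / 2)
print(f"naive rate {naive:.4f} vs variance-corrected rate {corrected:.4f}")
```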