
Showing papers in "Environmetrics in 1994"


Journal ArticleDOI
TL;DR: In this paper, a new variant of factor analysis, 'PMF' (positive matrix factorization), is described, in which the problem is solved in the weighted least squares sense: G and F are determined so that the Frobenius norm of E divided (element-by-element) by σ is minimized.
Abstract: A new variant ‘PMF’ of factor analysis is described. It is assumed that X is a matrix of observed data and σ is the known matrix of standard deviations of elements of X. Both X and σ are of dimensions n × m. The method solves the bilinear matrix problem X = GF + E where G is the unknown left hand factor matrix (scores) of dimensions n × p, F is the unknown right hand factor matrix (loadings) of dimensions p × m, and E is the matrix of residuals. The problem is solved in the weighted least squares sense: G and F are determined so that the Frobenius norm of E divided (element-by-element) by σ is minimized. Furthermore, the solution is constrained so that all the elements of G and F are required to be non-negative. It is shown that the solutions by PMF are usually different from any solutions produced by the customary factor analysis (FA, i.e. principal component analysis (PCA) followed by rotations). Usually PMF produces a better fit to the data than FA. Also, the result of PMF is guaranteed to be non-negative, while the result of FA often cannot be rotated so that all negative entries would be eliminated. Different possible application areas of the new method are briefly discussed. In environmental data, the error estimates of data can be widely varying and non-negativity is often an essential feature of the underlying models. Thus it is concluded that PMF is better suited than FA or PCA in many environmental applications. Examples of successful applications of PMF are shown in companion papers.
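The weighted non-negative objective described above can be illustrated with a short alternating sketch. This is not the paper's own algorithm, only a minimal illustration of minimising the scaled Frobenius norm of E = X − GF subject to G, F ≥ 0; the function name and iteration scheme are assumptions.

```python
# Minimal sketch of the weighted, non-negative bilinear fit X ≈ G F described above.
# Not the paper's algorithm; an alternating non-negative least squares illustration.
import numpy as np
from scipy.optimize import nnls

def pmf_als(X, sigma, p, n_iter=50, seed=0):
    """Approximately minimise ||(X - G @ F) / sigma||_F^2 subject to G, F >= 0."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    G = rng.random((n, p))
    F = rng.random((p, m))
    W = 1.0 / sigma                              # element-wise weights
    for _ in range(n_iter):
        for i in range(n):                       # update row i of G
            A = (F * W[i]).T                     # (m, p): columns of F scaled by weights
            G[i], _ = nnls(A, X[i] * W[i])
        for j in range(m):                       # update column j of F
            A = G * W[:, j][:, None]             # (n, p)
            F[:, j], _ = nnls(A, X[:, j] * W[:, j])
    Q = np.sum(((X - G @ F) / sigma) ** 2)       # weighted sum of squared residuals
    return G, F, Q
```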

4,797 citations


Journal ArticleDOI
TL;DR: This paper outlines the first author's data-based mechanistic (DBM) approach to model structure identification and parameter estimation for linear and non-linear dynamic systems and uses it to explore afresh the non-linear relationship between measured rainfall and flow in two typical catchments.
Abstract: Although rainfall-flow processes have received much attention in the hydrological literature, the nature of the non-linear processes involved in the relationship between rainfall and river flow still remains rather unclear. This paper outlines the first author's data-based mechanistic (DBM) approach to model structure identification and parameter estimation for linear and non-linear dynamic systems and uses it to explore afresh the non-linear relationship between measured rainfall and flow in two typical catchments. Exploiting the power of recursive estimation, state dependent non-linearities are identified objectively from the time series data and used as the basis for the estimation of non-linear transfer function models of the rainfall - flow dynamics. These objectively identified models not only explain the data in a parametrically efficient manner but also reveal the possible parallel nature of the underlying physical processes within the catchments. The DBM modelling approach provides a useful tool for the further investigation of rainfall-flow processes, as well as other linear and non-linear environmental systems. Moreover, because DBM modelling uses recursive estimation, it provides a powerful vehicle for the design of real-time, self-adaptive environmental management systems. Finally, the paper points out how DBM models can often be interpreted directly in terms of dynamic conservation equations (mass, energy or momentum) associated with environmental flow processes and stresses the importance of parallel processes in this connection.
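As a small, hedged illustration of the recursive-estimation idea (not the authors' full DBM machinery), the sketch below fits a first-order discrete transfer-function model of flow on lagged rainfall by recursive least squares with a forgetting factor, so that parameter trajectories can be inspected for state dependence; variable names are assumptions.

```python
# Sketch only: recursive least squares for flow_t ≈ -a*flow_{t-1} + b*rain_{t-1},
# with a forgetting factor so that time variation in (a, b) can be tracked.
import numpy as np

def rls_transfer_function(rain, flow, lam=0.99):
    theta = np.zeros(2)                     # parameters [a, b]
    P = np.eye(2) * 1e3                     # large initial parameter covariance
    history = []
    for t in range(1, len(flow)):
        phi = np.array([-flow[t - 1], rain[t - 1]])    # regressor vector
        k = P @ phi / (lam + phi @ P @ phi)            # estimator gain
        theta = theta + k * (flow[t] - phi @ theta)    # update on prediction error
        P = (P - np.outer(k, phi @ P)) / lam
        history.append(theta.copy())                   # recursive parameter trajectory
    return np.array(history)
```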

260 citations


Journal ArticleDOI
TL;DR: In this paper, the authors examined hourly ozone data collected in connection with a model evaluation study for ozone transport in the San Joaquin Valley of California and found a relatively simple spatial covariance structure at night-time, while the afternoon readings show a more complex spatial covariance, which is partly explained by observations from a single station with suspicious data.
Abstract: We examine hourly ozone data collected in connection with a model evaluation study for ozone transport in the San Joaquin Valley of California. A space-time analysis of a subset of the data, 17 sites concentrated around the Sacramento area, indicates a relatively simple spatial covariance structure at night-time, while the afternoon readings show a more complex spatial covariance, which is partly explained by observations from a single station with suspicious data. Simple separable space-time covariance models do not appear applicable to these data.
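A first step toward such a space-time analysis can be sketched as an hour-of-day stratified empirical spatial covariance; this is only an illustrative starting point, not the covariance models examined in the paper, and the variable names are assumptions.

```python
# Sketch: empirical spatial covariance of hourly ozone, stratified by hour of day,
# as a first look at whether night-time and afternoon structure differ.
import numpy as np

def hourly_spatial_covariance(obs, hours, hour_set):
    """obs: (n_times, n_sites) readings; hours: (n_times,) hour-of-day labels."""
    mask = np.isin(hours, list(hour_set))
    sub = obs[mask]
    sub = sub - sub.mean(axis=0)              # remove site means for that hour block
    return sub.T @ sub / (len(sub) - 1)       # (n_sites, n_sites) covariance matrix

# Usage with hypothetical arrays:
# C_night = hourly_spatial_covariance(ozone, hour_of_day, range(0, 6))
# C_afternoon = hourly_spatial_covariance(ozone, hour_of_day, range(12, 18))
```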

146 citations


Journal ArticleDOI
TL;DR: The aim of the analysis was to investigate the structure of the data matrices in order to find the apparent source profiles from which the precipitation samples are constituted, and the strongest factor found was that of sea-salt.
Abstract: A new factor analysis method called positive matrix factorization (PMF) has been applied to daily precipitation data from four Finnish EMEP stations. The aim of the analysis was to investigate the structure of the data matrices in order to find the apparent source profiles from which the precipitation samples are constituted. A brief description of PMF is given. PMF utilizes the known uncertainty of data values. The standard deviations were derived from the results of double sampling at one station during one year. A goodness-of-fit function Q was calculated for factor solutions with 1–8 factors. The shape of the residuals was useful in deciding the number of factors. The strongest factor found was that of sea-salt. The most dominant ions in the factor were sodium, chloride and magnesium. At the coastal stations the ratio Cl/Na of the mean concentrations in the factor was near the ratio found in sea water but at the inland stations the ratio was smaller. For most ions more than 90 per cent of the weighted variation was explained. The worst explained was potassium (at worst c. 60 per cent) which is possibly due to contamination problems in the laboratory. In most factors of different factorizations the anions and cations were fairly well balanced.
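The goodness-of-fit function Q used to compare factorizations is, in essence, the weighted sum of squared residuals; the short sketch below shows that calculation (X, σ, G and F are as in the PMF model, and the function name is illustrative).

```python
# Sketch: goodness-of-fit Q and scaled residuals for a candidate factorization.
# For a well-specified fit Q is of the order of the number of data values, and its
# behaviour over factorizations with 1-8 factors helps in choosing the number of factors.
import numpy as np

def q_and_scaled_residuals(X, sigma, G, F):
    E = X - G @ F          # residual matrix
    R = E / sigma          # residuals scaled by their standard deviations
    return np.sum(R ** 2), R
```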

124 citations


Journal ArticleDOI
TL;DR: This work uses rainfall runoff modelling to demonstrate the construction of identifiable models which are transferable within a ‘region’ of similarity, and argues that model regionalization is a powerful technique to help generate better understanding and prediction of environmental systems.
Abstract: Models of environmental systems, constructed for the purpose of predicting or understanding the effects of input changes, fall predominantly into two classes: those based on idealized equations of mathematical physics, and those based on compartmentalized conceptual descriptions of processes. Overparameterization is often a common feature of both these approaches. System identification offers the opportunity to begin model construction with simple structures and assumptions, and to build up the level of model detail by testing refinements for their consistency with system observations. We use rainfall runoff modelling to demonstrate this approach. We also use it to argue that model regionalization is a powerful technique to help generate better understanding and prediction of environmental systems. The construction of identifiable models which are transferable within a ‘region’ of similarity makes the exercise more tenable, and facilitates the jump to more generic models applicable in a wider (e.g. global) context.

55 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a new type of geographical information system (GIS) for scientific planning of coastal waters, based on morphometric parameters expressing various characteristics of the coast.
Abstract: This paper presents a new type of geographical information system (GIS) for scientific planning of coastal waters. One hypothesis in the work is that the morphometry of the coast plays a significant role in how the water system functions as a receiving water system, for example, as a receiver of industrial and urban pollution and in response to various forms of aquaculture. A digital technique for transferring information from standard charts into morphometric parameters expressing various characteristics of the coast has been developed. Empirical data on surface water turnover times, which are costly and demanding to determine with traditional hydrodynamic methods, have been obtained from the literature for 20 defined Swedish coastal areas. Two models for simple predictions of the median surface water turnover time, based on morphometric parameters, have been developed to exemplify the use of “morphometrical models” in expressing a key coastal ecological parameter. In these models, more than 90 per cent of the variation in empirical values of surface water turnover times can be statistically explained by the topographic openness. The topographic openness describes the exposure of the coastal area towards the open sea or adjacent coastal area. The models are valid for the temperature stratified period (May–October) in areas not affected by tides, strong coastal currents and river inflow. The areas should also be in the size range 0.15–150 km².
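A "morphometrical model" of this kind can be sketched as a simple regression of turnover time on topographic openness across the 20 areas; the log-log form and variable names below are assumptions, not the paper's fitted equations.

```python
# Sketch: regress median surface-water turnover time on topographic openness
# (log-log form assumed) and report the fraction of variation explained.
import numpy as np

def fit_turnover_model(openness, turnover_days):
    b, a = np.polyfit(np.log10(openness), np.log10(turnover_days), 1)   # slope, intercept
    predicted = 10 ** (a + b * np.log10(openness))
    resid = np.log10(turnover_days) - np.log10(predicted)
    r2 = 1 - resid.var() / np.log10(turnover_days).var()
    return a, b, r2          # intercept, slope, fraction of variation explained
```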

48 citations


Journal ArticleDOI
TL;DR: In this paper, a model is developed with the following features: periodic seasonal effects; consistency with asymptotic extreme value theory; Markov description of temporal dependence, which can be used to estimate temporal aspects of the extremal process of low temperatures which have most practical and scientific relevance.
Abstract: Time series of temperatures during periods of extreme cold display long-term seasonal variability and short-term temporal dependence. Classical approaches to extremes circumvent these issues, but in so doing cannot address questions relating to the temporal character of the process, though these issues are often the most important. In this paper a model is developed with the following features: periodic seasonal effects; consistency with asymptotic extreme value theory; Markov description of temporal dependence. Smith et al. studied the properties of such a model in the stationary case. Here, it is shown how such a model can be fitted to a non-stationary series, and consequently used to estimate temporal aspects of the extremal process of low temperatures which have most practical and scientific relevance.

46 citations


Journal ArticleDOI
TL;DR: In this paper, a method for estimating the parameters and quantiles of the generalized extreme value distribution (GEVD) was proposed, which is well-defined for all possible combinations of parameter and sample values.
Abstract: The generalized extreme-value distribution (GEVD) was introduced by Jenkinson (1955). It is now widely used to model extremes of natural and environmental data. The GEVD has three parameters: a location parameter (−∞ < μ < ∞), a scale parameter (σ > 0) and a shape parameter (−∞ < k < ∞). The traditional methods of estimation (e.g., the maximum likelihood and the moments-based methods) have problems either because the range of the distribution depends on the parameters, or because the mean and higher moments do not exist when k ≤ −1. The currently favoured estimators are those obtained by the method of probability-weighted moments (PWM). The PWM estimators are good for cases where −1/2 < k < 1/2. Outside this range of k, the PWM estimates may not exist and if they do exist they cannot be recommended because their performance worsens as k increases. In this paper, we propose a method for estimating the parameters and quantiles of the GEVD. The estimators are well-defined for all possible combinations of parameter and sample values. They are also easy to compute as they are based on equations which involve only one variable (rather than three). A simulation study is implemented to evaluate the performance of the proposed method and to compare it with the PWM. The simulation results seem to indicate that the proposed method is comparable to the PWM for −1/2 < k < 1/2 but outside this range it gives a better performance. Two real-life environmental data sets are used to illustrate the methodology.
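For reference, the GEVD quantile function in this (Jenkinson) parametrisation is x_p = μ + σ[1 − (−log p)^k]/k for k ≠ 0; the sketch below evaluates it and notes the corresponding scipy distribution, whose shape parameter plays the role of k. It illustrates the distribution only, not the estimator proposed in the paper.

```python
# Sketch: GEVD quantiles with location mu, scale sigma > 0 and shape k.
import numpy as np
from scipy.stats import genextreme

def gev_quantile(p, mu, sigma, k):
    """Return x with F(x) = p, where F(x) = exp(-[1 - k(x - mu)/sigma]^(1/k)), k != 0."""
    return mu + sigma * (1.0 - (-np.log(p)) ** k) / k

# Cross-check against scipy with hypothetical parameter values:
# genextreme.ppf(0.99, c=0.1, loc=10.0, scale=2.0) vs gev_quantile(0.99, 10.0, 2.0, 0.1)
```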

42 citations


Journal ArticleDOI
TL;DR: In this article, a wavelet analysis for time series data on the flow rate of the Nile River at Aswan and also on the stages of the Rio Negro at Manaus is presented.
Abstract: Wavelet analysis is described, and a Haar wavelet analysis is carried out, for time series data on the flow rate of the Nile River at Aswan and also on the stages of the Rio Negro at Manaus. A goal of the analysis is to present a wavelet analysis for some time series data particularly looking for jumps in mean level. The work begins with a review of techniques for estimating mean levels in the presence of additive noise and then proceeds to the particular case of wavelets and the construction of so-called “improved estimates” by shrinkage. The results of the analyses are consistent with earlier ones.
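A minimal version of such an analysis, Haar decomposition followed by soft-threshold shrinkage of the detail coefficients, can be sketched with the PyWavelets package; the threshold rule and decomposition level below are conventional choices, not necessarily those used in the paper.

```python
# Sketch: Haar wavelet shrinkage to estimate the mean level and reveal jumps.
import numpy as np
import pywt

def haar_shrinkage(series, level=4):
    coeffs = pywt.wavedec(series, 'haar', level=level)
    detail = np.concatenate(coeffs[1:])
    sigma = np.median(np.abs(detail)) / 0.6745            # robust noise scale
    thresh = sigma * np.sqrt(2 * np.log(len(series)))     # universal threshold
    shrunk = [coeffs[0]] + [pywt.threshold(c, thresh, mode='soft') for c in coeffs[1:]]
    smooth = pywt.waverec(shrunk, 'haar')
    return smooth[: len(series)]      # estimated mean level; jumps appear as steps
```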

33 citations


Journal ArticleDOI
TL;DR: In this paper, the problem of trend analysis with time series or point process data is addressed. Parametric, semi-parametric and non-parametric models and procedures are discussed with examples taken from hydrology and seismology.
Abstract: The concern is with trend analysis. The data may be time series or point process. Parametric, semi-parametric and non-parametric models and procedures are discussed. The problems and techniques are illustrated with examples taken from hydrology and seismology. There is a review as well as some new analyses and proposals.

32 citations


Journal ArticleDOI
TL;DR: The trajectories of recursive estimates for the elements of the Kalman gain matrix provide informative insights into, and better defined evidence of, the failure of an inadequate model structure.
Abstract: Most models of environmental systems are based on sets of differential equations. The paper investigates the problem of identifying the number and form of appropriately parameterized terms in such continuous-time state-space models, a problem referred to as model structure identification. Filtering theory (recursive estimation) is used as an approach to the solution of this problem. Central to this approach is the notion that the patterns of the (posterior) trajectories of the model's parameters, when contrasted with the prior assumptions about their expected variability, will yield insights into the adequacy, or otherwise, of a candidate model's structure. The particular algorithm employed herein is based upon an analysis of Ljung (1979), who proposed a significant modification of the conventional extended Kalman filter wherein the elements of the Kalman gain matrix may be estimated directly as unknown parameters of an innovations process representation of the system's behaviour. Whereas Ljung's filter was designed for an entirely discrete-time system, the present version of the filter has been derived for a system with continuous-time dynamics and discrete-time observations. Using time series data from the River Cam, the paper presents a case study in identifying a sequence of three candidate model structures for describing the assimilation and generation of easily degradable organic matter. The trajectories of recursive estimates for the elements of the gain matrix provide informative insights into, and better defined evidence of, the failure of an inadequate model structure.
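The idea of treating the gain elements as unknown parameters of an innovations representation can be caricatured, in discrete time and for a scalar output, by minimising the one-step prediction errors over the gain vector; the sketch below is an assumption-laden simplification, not the continuous-discrete filter used in the paper.

```python
# Sketch: gain elements of an innovations representation estimated directly by
# minimising one-step prediction errors (scalar output, discrete time).
import numpy as np
from scipy.optimize import minimize

def innovations_sse(K, y, A, C):
    """K: (n,) candidate gain; y: (T,) observations; x_{t+1} = A x_t + K e_t, y_t = C x_t + e_t."""
    x = np.zeros(A.shape[0])
    sse = 0.0
    for yt in y:
        e = yt - C @ x           # innovation (one-step prediction error)
        sse += e ** 2
        x = A @ x + K * e        # innovations-form state update
    return sse

# Usage with hypothetical 2-state matrices A (2x2), C (2,) and observations y:
# result = minimize(innovations_sse, x0=np.zeros(2), args=(y, A, C), method='Nelder-Mead')
```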

Journal ArticleDOI
TL;DR: In this paper, the hazard rate for the maximum of a finite sequence of autocorrelated random variables is determined by means of simulations, and the results imply that the relative sensitivity of extreme events to overall climate change is even greater than the asymptotic theory would predict.
Abstract: The relative sensitivity of an extreme event is defined as the partial derivative of its probability with respect to the location or scale parameter of the distribution of the variable involved. Of particular interest in climate applications are extreme events of the form, the maximum of a sequence of observations of the variable exceeding a threshold. In this case, the relative sensitivities are directly related to the hazard rate for the distribution of the maximum. By means of simulations, this hazard rate is determined for the maximum of a finite sequence of autocorrelated random variables. For large values, the hazard rate rises more steeply than the asymptotic theory based on the type I extreme value distribution would predict. Unless the degree of autocorrelation is quite high, the hazard rate does not differ much for large values from that for independent time series. The hazard rate for the so-called “penultimate approximation”, based on the type III extreme value distribution, is also compared to the exact hazard rate under dependence. These results imply that the relative sensitivity of extreme events to overall climate change is even greater than the asymptotic theory would predict. Time series of daily maximum temperature, data that possess substantial autocorrelation, are utilized for illustrative purposes.
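The simulation idea can be sketched directly: generate many finite AR(1) sequences, take their maxima, and estimate the hazard rate of the maximum from the empirical survival function. The parameters below (series length, autocorrelation, thresholds) are illustrative only.

```python
# Sketch: Monte Carlo hazard rate of the maximum of a finite AR(1) sequence.
import numpy as np

def hazard_of_maximum(n=365, phi=0.5, n_sim=20000, seed=1):
    rng = np.random.default_rng(seed)
    maxima = np.empty(n_sim)
    scale = np.sqrt(1 - phi ** 2)            # keep the marginal variance at 1
    for s in range(n_sim):
        x = np.empty(n)
        x[0] = rng.standard_normal()
        for t in range(1, n):
            x[t] = phi * x[t - 1] + scale * rng.standard_normal()
        maxima[s] = x.max()
    u = np.linspace(np.quantile(maxima, 0.5), np.quantile(maxima, 0.99), 25)
    surv = np.array([np.mean(maxima > ui) for ui in u])
    dens = -np.gradient(surv, u)             # f(u) = -dS/du
    return u, dens / surv                    # hazard rate h(u) = f(u) / S(u)
```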

Journal ArticleDOI
TL;DR: In this article, the authors examined the use of Bayes and empirical Bayes methods for stabilizing incidence rates observed in geographically aligned areas and found that the constrained estimators produced collections of rate estimates that dramatically improved estimation of the true dispersion of risk.
Abstract: Assessments of the potential health impacts of contaminants and other environmental risk factors are often based on comparisons of disease rates among collections of spatially aligned areas. These comparisons are valid only if the observed rates adequately reflect the true underlying area-specific risk. In areas with small populations, observed incidence values can be highly unstable and true risk differences among areas can be masked by spurious fluctuations in the observed rates. We examine the use of Bayes and empirical Bayes methods for stabilizing incidence rates observed in geographically aligned areas. While these methods improve stability, both the Bayes and empirical Bayes approaches produce a histogram of the estimates that is too narrow when compared to the true distribution of risk. Constrained empirical Bayes estimators have been developed that provide improved estimation of the variance of the true rates. We use simulations to compare the performance of Bayes, empirical Bayes, and constrained empirical Bayes approaches for estimating incidence rates in a variety of multivariate Gaussian scenarios with differing levels of spatial dependence. The mean squared error of estimation associated with the simulated observed rates was, on average, five times greater than that of the Bayes and empirical Bayes estimates. The sample variance of the standard Bayes and empirical Bayes estimates was consistently smaller than the variance of the simulated rates. The constrained estimators produced collections of rate estimates that dramatically improved estimation of the true dispersion of risk. In addition, the mean square error of the constrained empirical Bayes estimates was only slightly greater than that of the unconstrained rate estimates. We illustrate the use of empirical and constrained empirical Bayes estimators in an analysis of lung cancer mortality rates in Ohio.
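A minimal sketch of the basic shrinkage idea, a Poisson-gamma empirical Bayes estimator with a method-of-moments prior, is shown below; the paper's Gaussian spatial models and the constrained empirical Bayes estimator are not reproduced, and the function name is an assumption.

```python
# Sketch: Poisson-gamma empirical Bayes shrinkage of area-specific incidence rates.
import numpy as np

def eb_poisson_gamma(counts, person_years):
    raw = counts / person_years
    m = np.average(raw, weights=person_years)                  # pooled mean rate
    between = np.average((raw - m) ** 2, weights=person_years)
    within = m * np.average(1.0 / person_years, weights=person_years)
    tau2 = max(between - within, 1e-12)                        # prior variance of rates
    alpha, beta = m ** 2 / tau2, m / tau2                      # gamma prior (shape, rate)
    return (counts + alpha) / (person_years + beta)            # posterior mean rates
```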

Journal ArticleDOI
TL;DR: In this article, a new test for determining whether data with missing observations exhibit a trend is proposed; it is analogous to the Spearman test in the complete case.
Abstract: Consider a situation where observations are to be taken at regularly spaced time points, but in fact several of the observations are missing. It is wished to determine whether the data exhibit a trend. When the data set is complete, then a well known rank test is based on Spearman correlation. For the case of missing observations a new test of trend is proposed which is analogous to the Spearman test in the complete case. Comparisons between this new test and the naive Spearman test, which simply deletes all missing observations, indicate that the new test is more sensitive in detecting trends.
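The "naive" comparator mentioned above, a Spearman rank correlation of the non-missing values against time, can be written in a few lines; the new test proposed in the paper, which accounts for the missingness pattern, is not reproduced here.

```python
# Sketch: the naive Spearman trend test, simply deleting missing observations.
import numpy as np
from scipy.stats import spearmanr

def naive_spearman_trend(series):
    """series: 1-D array with np.nan marking missing observations."""
    t = np.arange(len(series))
    keep = ~np.isnan(series)
    rho, p_value = spearmanr(t[keep], series[keep])
    return rho, p_value        # two-sided p-value for the hypothesis of no monotone trend
```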

Journal ArticleDOI
TL;DR: In this article, multivariate quantitative structure-activity relationships (QSARs) are applied to model the atmospheric persistence of halogenated aliphatic hydrocarbons, and the objective is to forecast the rate of reaction between the haloalkanes and the hydroxyl radical in the gas phase.
Abstract: Multivariate quantitative structure–activity relationships (QSARs) are applied to model the atmospheric persistence of halogenated aliphatic hydrocarbons. The objective is to forecast the rate of reaction between the haloalkanes and the hydroxyl radical in the gas phase. The QSARs are developed in the light of a recently proposed strategy for risk assessment of environmental chemicals, based on multivariate data analysis and statistical experimental design. The QSARs are calibrated using a training set consisting of ten chemicals and different sets of descriptors, namely empirical and qualitative indicator variables and quantum-chemical descriptors. The predictabilities of the QSARs are investigated by making predictions for 13 additional compounds for which experimental observations exist. Finally, the best obtained set of descriptors is used as a basis for making predictions for 35 non-tested haloalkanes.
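As a hedged sketch of calibrating such a QSAR, the snippet below fits a latent-variable regression on a training set of descriptors and predicts rate constants for untested compounds; PLS regression is assumed here as the multivariate method, and the paper's exact modelling choices and descriptors are not reproduced.

```python
# Sketch: calibrate a QSAR on training descriptors and predict untested haloalkanes.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def fit_qsar(X_train, log_k_train, X_new, n_components=2):
    model = PLSRegression(n_components=n_components)
    model.fit(X_train, log_k_train)            # descriptors -> log OH rate constant
    return model.predict(X_new).ravel()        # predictions for untested compounds
```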

Journal ArticleDOI
TL;DR: In this article, the authors consider the problem of constructing hypothesis tests and interval estimates for the true overlap in a special case of the overlapping coefficient (the common area under two probability density curves) that has received intermittent attention in the scientific and statistical literature: the overlap of two normal distributions with equal variances.
Abstract: A special case of the overlapping coefficient, the common area under two probability density curves, that has received intermittent attention in the scientific and statistical literature concerns the overlap of two normal distributions with equal variances. Here we consider the problem of constructing tests of hypotheses and interval estimation for the true overlap in this special situation. Direct and conditional tests for the true value of the overlap are discussed. A method of constructing an exact confidence interval estimator for the true overlap is presented. Several alternative methods of obtaining confidence intervals for the true overlap are compared in a Monte Carlo investigation. In an example, we use the normal theory results discussed and an invariance property of the overlapping coefficient to estimate the overlap between two log-normal distributions from sample data.
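For two normal densities with common variance, the overlapping coefficient has the closed form OVL = 2Φ(−|μ₁ − μ₂|/(2σ)); the sketch below gives the plug-in point estimate from two samples (the paper's exact and interval procedures are not reproduced).

```python
# Sketch: plug-in estimate of the overlap of two equal-variance normal densities.
import numpy as np
from scipy.stats import norm

def overlap_equal_variance(x, y):
    n_x, n_y = len(x), len(y)
    pooled_var = ((n_x - 1) * np.var(x, ddof=1) + (n_y - 1) * np.var(y, ddof=1)) / (n_x + n_y - 2)
    delta = abs(np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)   # standardised mean difference
    return 2 * norm.cdf(-delta / 2)                              # OVL = 2*Phi(-delta/2)
```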

Journal ArticleDOI
Armand Maul
TL;DR: In this article, the hazard associated with any specific time interval is taken to be of the form of a logistic function including a number of time-dependent covariates which serve to characterize the individual under consideration.
Abstract: Consideration is given to survival data analysis by modelling the hazard as a discrete function of time. This is done for each individual who is examined independently from the other individuals of the sample observed. Assuming time has been divided into intervals of the same length, the hazard associated with any specific time interval is taken to be of the form of a logistic function including a number of time-dependent covariates which serve to characterize the individual under consideration. Asymptotic maximum likelihood results are given for the estimation of both the regression coefficients in the hazard function and the survivor function corresponding to a given profile, i.e. the successive values of the different covariates. The likelihood ratio statistic for testing the effects of the various covariates in order to compare several survival curves with respect to longevity is also derived. The process of model fitting is illustrated by two examples referring to clinical trials on leukaemia and advanced lung cancer patients, respectively.
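Such a discrete-time logistic hazard can be fitted as an ordinary logistic regression on a "person-period" expansion of the data (one record per individual per interval at risk, with interval dummies plus covariates); the sketch below builds that expansion, with variable names as assumptions rather than the paper's notation.

```python
# Sketch: person-period expansion for a discrete-time logistic hazard model.
import numpy as np

def person_period(times, events, covariates, n_intervals):
    """times: last interval observed (1-based); events: 1 if death, 0 if censored."""
    rows, labels = [], []
    for t_i, d_i, x_i in zip(times, events, covariates):
        for j in range(min(t_i, n_intervals)):
            interval = np.zeros(n_intervals)
            interval[j] = 1.0                                   # dummy for interval j
            rows.append(np.concatenate([interval, x_i]))
            labels.append(1 if (d_i and j == t_i - 1) else 0)   # event in this interval?
    return np.array(rows), np.array(labels)

# Usage with a hypothetical logistic fit (e.g. statsmodels):
# import statsmodels.api as sm
# X_pp, y_pp = person_period(times, events, covariates, n_intervals=10)
# hazards = sm.Logit(y_pp, X_pp).fit().predict(profile_rows)
# survivor = np.cumprod(1 - hazards)          # survivor function for a given profile
```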

Journal ArticleDOI
TL;DR: In this paper, an inverse linear model for the log-odds transformation of the frequency of aberration was proposed for the dose-response experiment in cytogenetic dosimetry.
Abstract: This paper presents a Bayesian analysis of a dose-response experiment in cytogenetic dosimetry. We suggest an inverse linear model for the log-odds transformation of the frequency of aberration. The regression considered is between dose and Bayes estimates. The adjustment obtained seems to produce a very small error thus suggesting that the simple linear and quadratic linear functions usually considered in the literature are not the ideal models.

Journal ArticleDOI
TL;DR: In this article, an estimator of population abundance for subsampling using mark and recapture is developed for wild populations, and important consequences for design of such surveys are discussed.
Abstract: Methods for mark and recapture can be applied for subsampling to give flexible sampling designs for wild populations where separate segments can be sampled independently. In area sampling, the area inhabited by a population is separated into quadrates and a sample of quadrates is selected with known probabilities; this ensures a random sample of quadrates, but good estimates of abundances in the quadrates would be difficult to obtain for mobile animal populations without application of a method designed for such populations. If mark and recapture is applied in quadrates selected using area sampling, the result is a subsample with variation between quadrates and variation within quadrates. An estimator of population abundance is developed for subsampling using mark and recapture, and important consequences for design of such surveys are discussed.
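A minimal sketch of the subsampling idea combines a within-quadrate mark-recapture estimate (Chapman's form is used here to avoid division by zero) with the quadrate inclusion probabilities in a Horvitz-Thompson-style total; this illustrates the idea rather than the paper's estimator or its variance.

```python
# Sketch: mark-recapture abundance within sampled quadrates, scaled to a total.
import numpy as np

def abundance_estimate(marked, captured, recaptured, inclusion_prob):
    """Arrays over sampled quadrates: n1 marked, n2 captured, m recaptured, pi inclusion prob."""
    marked = np.asarray(marked, float)
    captured = np.asarray(captured, float)
    recaptured = np.asarray(recaptured, float)
    # Chapman's modification of the Lincoln-Petersen estimate within each quadrate
    n_hat = (marked + 1) * (captured + 1) / (recaptured + 1) - 1
    return np.sum(n_hat / np.asarray(inclusion_prob, float))    # Horvitz-Thompson-style total
```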

Journal ArticleDOI
TL;DR: In this paper, the dynamic behaviour of controlled coupled continuous stirred tank reactors is investigated in terms of multiplicity of steady states, stability of limit cycles and transition to chaos in order to identify, at different gains of the control parameters, the zones in which it is possible to perform the chemical treatment of toxic organics in wastewater.
Abstract: The dynamic behaviour of controlled coupled continuous stirred tank reactors is investigated here in terms of multiplicity of steady states, stability of limit cycles and transition to chaos. The study is developed in order to identify, at different gains of the control parameters, the zones in which it is possible to perform the chemical treatment of toxic organics in wastewater. In this way we can avoid undesirable complex dynamic behaviour that can compromise the quality of the effluent.

Journal ArticleDOI
TL;DR: In this article, the authors used multitaper spectral analysis to detect the presence or absence of 60 Hz power line pick-up for dynamite data recorded in Canada, and more accurately characterize seismic exploration data in terms of coherence and signal-to-noise ratios.
Abstract: Multiple time series usually arise in measurements on physical systems in one of two ways. The first way is when a set of time series arise on an “equal footing.” A distant explosion might be recorded at several contiguous recording sites. From such series can be determined, for example, characteristics of the signal and noise content of the data. The second way is when the series are causally related. Northward and eastward wind velocity series at a coastal location might be inputs to a linear system, which with additive noise produces on output the northward wind velocity at a buoy location. Here we might wish to estimate some properties of the linear system. Many physical systems exhibit a large dynamic range. We show how the technique of multitaper spectral analysis can be used to much improve the analysis of series which arise on an ‘equal footing.’ This is illustrated using multiple time series recordings of seismic explosions. We show how multitaper spectral analysis (a) can better detect the presence or absence of 60 Hz “power-line pick-up” for dynamite data recorded in Canada, and (b) more accurately characterize seismic exploration data in terms of coherence and signal-to-noise ratios. For the case of two causally related time series with additive non-white noise we show how it is possible to calculate a confidence interval for the mean square error of the signal component. This is an advance on traditional methods which merely detect the presence or absence of the signal. The approach is demonstrated using some seismic data recorded at sea.
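A basic multitaper estimate, averaging eigenspectra computed with discrete prolate spheroidal (Slepian) tapers, can be sketched with scipy; the taper parameters are illustrative, and the coherence and signal-to-noise analyses in the paper are not reproduced.

```python
# Sketch: multitaper (DPSS) power spectrum estimate, averaging eigenspectra.
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, fs=1.0, nw=4, k=7):
    n = len(x)
    tapers = dpss(n, NW=nw, Kmax=k)                    # (k, n) Slepian tapers
    spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    psd = spectra.mean(axis=0) / fs                    # average over eigenspectra
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, psd                                  # e.g. inspect power near 60 Hz
```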

Journal ArticleDOI
TL;DR: In this article, an observation network design method, which simultaneously minimizes parameter uncertainty and exploration cost, is developed for locating water supply wells, based on two often conflicting objectives: (1) minimization of the uncertainty in characterizing the spatial distribution of the expected yield and travel time; and (2) minimizing the observation cost.
Abstract: An observation network design method, which simultaneously minimizes parameter uncertainty and exploration cost, is developed for locating water supply wells. Observation networks combining wells and geophysical measurements are considered for delineating the ‘best’ areas for wells. An area is considered the ‘best’ where the sustainable pumping rate and the degree of protection for the well are the possible maximum. The level of protection is measured by the travel time for a fixed radius protection zone around a well. The ‘most appropriate’ measurement network is selected based on two often conflicting objectives: (1) minimization of the uncertainty in characterizing the spatial distribution of the expected yield and travel time; and (2) minimization of the observation cost. The accuracy of estimation is characterized by the average and maximum kriging variance for yields and travel times calculated from measured hydrogeologic parameters. The parameters to be measured by an observation network for estimating yields and travel times are layer thicknesses, porosity, hydraulic conductivity and volumetric water content. They may be obtained from direct measurements (e.g. well logs, specific capacity test, and pumping tests) or estimated indirectly from geoelectric measurements. The approach combines two common techniques, geostatistics and multi-criterion decision making (MCDM), to determine the ‘best’ measurement network. Geostatistics are used to determine the spatial distribution of the estimation variance for yield and travel time calculated from different sources of data. MCDM is utilized to evaluate observation network alternatives based on the estimation variances and the costs associated with them. The methodology is illustrated for a study area located near Ashland, Nebraska.

Journal ArticleDOI
TL;DR: A simple and efficient way of evaluating the fractional parameter is suggested and illustrated on a series of mud layer measurements, and the model derived is both simple and compelling.
Abstract: Fractional difference models are a useful extension to ARIMA models as they model longer range dependence. We suggest a simple and efficient way of evaluating the fractional parameter and illustrate this on a series of mud layer measurements. As can be seen, the model we derive is both simple and compelling. The method can also be used to estimate the differencing parameter for conventional ARIMA models and, in addition, to find ARMA parameters. For a fractional model, once the fractional parameter is known, a simple filtering operation allows us to proceed as for an ARIMA model. While this is not quite as elegant as a full likelihood approach, it is straightforward and uses common software tools. The pile-up effect can also be minimized since we do not make a normal approximation.
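One common, simple way to estimate the fractional-differencing parameter is the log-periodogram (GPH) regression sketched below; it is offered as an illustration of this style of estimator, not necessarily the evaluation method the authors propose, and the bandwidth choice is an assumption.

```python
# Sketch: GPH log-periodogram regression estimate of the fractional parameter d.
import numpy as np

def gph_estimate(x, frac=0.5):
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    periodogram = np.abs(np.fft.rfft(x)) ** 2 / (2 * np.pi * n)
    freqs = 2 * np.pi * np.arange(len(periodogram)) / n
    m = int(n ** frac)                                   # number of low frequencies used
    lam = freqs[1:m + 1]
    regressor = np.log(4 * np.sin(lam / 2) ** 2)
    slope, _ = np.polyfit(regressor, np.log(periodogram[1:m + 1]), 1)
    return -slope                                        # estimate of d
```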

Journal ArticleDOI
TL;DR: In this article, a method for selecting a transformation to symmetry, based on the sample skewness coefficient, is described, assuming that the goal in transformation methodology is to increase precision in the estimation of the mean, quantiles, and the distribution function of the variable in its original scale.
Abstract: Transformations to symmetry in the Box–Cox family for left-censored observations having a common lower detection limit are considered. Recently, transformations to normality in this setting have been discussed. Symmetry, rather than normality, may be a more realistic assumption, for instance, if an estimated transformation to normality displays heavier tails than those presumed under a normal model. A method for selecting a transformation to symmetry, based on the sample skewness coefficient, is described. The advantages of transformation using this method are illustrated in a simulation study, assuming that the goal in transformation methodology is to increase precision in the estimation of the mean, quantiles, and the distribution function of the variable in its original scale. An example regarding censored measurements in polluted water samples is provided.
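The "transform to symmetry" idea can be sketched by scanning a grid of Box-Cox exponents and choosing the one that minimises the absolute sample skewness; for simplicity the sketch ignores the left-censoring adjustment that the paper develops.

```python
# Sketch: choose a Box-Cox lambda that minimises the absolute sample skewness.
import numpy as np
from scipy.stats import skew

def boxcox(x, lam):
    return np.log(x) if abs(lam) < 1e-8 else (x ** lam - 1) / lam

def lambda_to_symmetry(x, grid=np.linspace(-2, 2, 81)):
    skews = [abs(skew(boxcox(x, lam))) for lam in grid]
    return grid[int(np.argmin(skews))]      # transformation exponent for symmetry
```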

Journal ArticleDOI
TL;DR: In this article, a method based on sample quantiles is suggested for estimating the theoretical number f of completely independent, notional determinands whose information content is equivalent to the k actual determinands being regulated.
Abstract: The testing of k > 1 water quality determinands for compliance with environmental regulations leads to difficulties in finding the probabilities of wrongly concluding (a) that one or more determinands failed when in fact all passed (joint type I error), and (b) that one or more passed when all failed (joint type II error). Also, owing to dependence among the determinands, some may contribute little to the compliance assessment and, if they serve no other purpose, are a waste of analytical resources. A method based on sample quantiles is suggested for estimating the theoretical number f of completely independent, notional determinands whose information content is equivalent to the k actual determinands being regulated. Next, the alternative strategies of individual and collective testing of the k determinands against regulations set as quantiles are compared in terms of joint type I and II error probabilities. The extreme cases of negative and positive dependence among the determinands set ranges for these probabilities, but even with moderate k they tend to be impractically broad for reliable decision making with either test strategy. Assuming independence as a worst case instead of negative dependence helps somewhat. It is explained how, alternatively, point estimates for the error probabilities are possible using f, given certain assumptions. Two schemes for regulating multiple determinands thus emerge, one based on an agreed worst-case situation which places minimal reliance on assumed conditions, the other based on an estimated error situation utilizing f. The choice between them would be influenced by the rigour required of the tests. The relative performance of individual and collective testing is shown to depend on the parameters chosen for the tests, and the degree and sign of dependence among the determinands. The design of compliance testing schemes is crucial in determining whether most risk lies with waste producers or the receiving environment, and what effect the prescribed regulations will actually have in practice.
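Given an estimate of f, the joint error calculations under effective independence are elementary; the sketch below shows the joint type I error when each of f notionally independent determinands is tested at level α, and the per-test level needed to achieve a joint target.

```python
# Sketch: joint type I error under f effectively independent determinands.
def joint_type1(alpha, f):
    return 1 - (1 - alpha) ** f              # P(at least one false failure)

def per_test_alpha(joint_target, f):
    return 1 - (1 - joint_target) ** (1 / f)  # per-determinand level for a joint target

# e.g. with f = 4 notionally independent determinands each tested at alpha = 0.05,
# joint_type1(0.05, 4) is about 0.185
```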

Journal ArticleDOI
TL;DR: This article presented an approach to estimate temperature sensitivity based on historical temperature variability, rather than trend, which circumvents the problem of fitting the response of a climate model forced by historical changes in atmospheric composition to the historical trend in global temperature.
Abstract: The magnitude of the response of global temperature to changes in atmospheric composition depends on a parameter called temperature sensitivity. The value of this parameter is unknown. When temperature sensitivity is estimated by fitting the response of a climate model forced by historical changes in atmospheric composition to the historical trend in global temperature, the estimate is rather low. It has been suggested that this may be due to the suppression of warming by sulphate aerosols, an effect that is difficult to incorporate into model experiments. This paper presents an approach to estimating temperature sensitivity based on historical temperature variability, rather than trend, which circumvents this problem. The results are in close agreement with those based on fitting the trend.

Journal ArticleDOI
TL;DR: In this article, an advanced analytical model which computes air pollution concentrations in complex terrain is presented, where wind tunnel measurements of pollutant concentrations from an elevated source in the presence of a rough hill and a neutrally stable flow are compared with that of COMPLEX I, the Gaussian model proposed by the US Environmental Protection Agency.
Abstract: An advanced analytical model which computes air pollution concentrations in complex terrain is presented. Model performances are evaluated using wind tunnel measurements of pollutant concentrations from an elevated source in the presence of a rough hill and a neutrally stable flow and are compared with that of COMPLEX I, the Gaussian model proposed by the US Environmental Protection Agency.
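For orientation, the flat-terrain Gaussian plume concentration that regulatory baseline models such as COMPLEX I build on is sketched below; the terrain adjustments of the advanced analytical model in the paper are not reproduced, and the dispersion parameters σy, σz would come from stability-class curves.

```python
# Sketch: flat-terrain Gaussian plume concentration with ground reflection.
import numpy as np

def gaussian_plume(q, u, y, z, stack_height, sigma_y, sigma_z):
    """Emission rate q (g/s), wind speed u (m/s), receptor crosswind offset y and height z (m)."""
    lateral = np.exp(-y ** 2 / (2 * sigma_y ** 2))
    vertical = (np.exp(-(z - stack_height) ** 2 / (2 * sigma_z ** 2))
                + np.exp(-(z + stack_height) ** 2 / (2 * sigma_z ** 2)))
    return q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical
```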

Journal ArticleDOI
TL;DR: The procedure presented in the paper distinguishes between time-varying hazards and may be useful for other survival-related environmental problems, whether concerning animal or human survival, which could contribute to its more widespread use.
Abstract: The United States Air Force has been interested in studying the effect of different types of radiation encountered by its personnel in space. A study was, therefore, conducted at the USAF School of Aerospace Medicine, Brooks Air Force Base, Texas, with rhesus monkeys as experimental subjects. These subjects were exposed to different types of radiation, such as: (1) electromagnetic radiation; (2) electrons; (3) protons; and (4) nuclei of elements of higher atomic numbers, with different amounts of radiation. These subjects were followed over a period of 338 months. In this paper an interesting problem related to health and radiation is addressed. The effects of radiation, taking into account the cause of death (cancer or heart disease) along with covariates such as sex, age, type of exposure and dose, are examined. A general log-linear hazard model approach is studied. The model estimates the cause-specific hazard rates, assuming a piecewise exponential distribution, and exhibits the survival function for each of the covariate groups and the probability of death due to each cause. A data set of irradiated animals, called the 'Delayed Bio-Effects Colony', is analysed and some conclusions are drawn. Overall the study has brought out the effect of high and low doses of radiation on both the male and female groups. The procedure presented in the paper distinguishes between time-varying hazards, so the methodology may be useful for other survival-related environmental problems, whether concerning animal or human survival, and the paper could contribute to its more widespread use.
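A stripped-down sketch of the piecewise exponential ingredient, cause-specific hazards estimated as events per unit person-time in each interval and a survivor function from the total cumulative hazard, is given below for a single covariate group; the paper's log-linear regression on covariates is not reproduced, and the function name is an assumption.

```python
# Sketch: piecewise-exponential cause-specific hazards for one covariate group.
import numpy as np

def piecewise_exponential(exposure_time, events_by_cause, interval_lengths):
    """exposure_time: (n_intervals,) person-months at risk;
    events_by_cause: (n_intervals, n_causes) death counts."""
    hazards = events_by_cause / exposure_time[:, None]          # cause-specific rates
    total_cumhaz = np.cumsum(hazards.sum(axis=1) * interval_lengths)
    survivor = np.exp(-total_cumhaz)                            # S(t) at interval ends
    return hazards, survivor
```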

Journal ArticleDOI
TL;DR: In this article, the sampling distribution of radon levels is characterized by a non-normal distribution and the mean radon level is estimated based on the assumed sampling distribution, which is then compared among groups through a weighted analysis of variance.
Abstract: Observed radon gas levels typically demonstrate a skewed sampling distribution. Traditional analyses to compare mean or median radon levels among different demographic groups rely on transformations of these data. The logarithmic transformation is one that is traditionally employed. Unfortunately, transformations such as the logarithmic tend to de-emphasize extreme values. In radon gas research, these extreme values are of primary interest. The proposed methodology is based on characterizing the sampling distribution of radon levels by a non-normal distribution. The mean radon level is estimated based on the assumed sampling distribution. Comparisons among groups are considered through a weighted analysis of variance. Results of a survey conducted by the state of Kansas are used as an example.

Journal ArticleDOI
TL;DR: The authors extend previously published results on selecting sample sizes and discrete observation times for longitudinal mortality studies to consider the effect of skewness in the distribution of mortality times, showing that such skewness can affect the required sample sizes.
Abstract: This note extends previously published results on selecting sample sizes and discrete observation times for longitudinal mortality studies to consider the effect of skewness in the distribution of mortality times.