
Showing papers in "Biometrical Journal in 1995"


Journal ArticleDOI
TL;DR: In this paper, a class of distribution-free tests for the treatments versus control setting using the partially sequential sampling technique is proposed, and criteria for adapting a particular test to have asymptotic power restrictions against alternatives of interest are discussed.
Abstract: In this paper we propose a class of distribution-free tests for the treatments versus control setting using the partially sequential sampling technique. Expressions for the asymptotic distributions and power for the tests are provided and criteria for adapting a particular test to have asymptotic power restrictions against alternatives of interest are discussed. Comparative results of a Monte Carlo power study are also presented.

60 citations


Journal ArticleDOI
TL;DR: This note presents the analytical solutions for the upper and lower confidence limits in closed form and gives examples for numerical illustration; the non-iterative method is generally more desirable than the iterative one.
Abstract: The interval estimation of the ratio of two binomial proportions based on the score statistic is superior to other methods. Iterative algorithms for calculating the approximate confidence interval have been provided by, e.g., KOOPMAN (1984, Biometrics 40:513–517) and GART and NAM (1988a, Biometrics 44:323–338). This note presents the analytical solutions for the upper and lower confidence limits in closed form and gives examples for numerical illustration. The non-iterative method is generally more desirable than the iterative method.

45 citations
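The paper's closed-form score limits require its specific algebra, but the flavor of a non-iterative interval for a ratio of proportions can be illustrated with the simpler log-transform (delta method) interval. This is a sketch for comparison, not the score interval from the note; the function name and example counts are invented.

```python
import math

def log_ratio_ci(x1, n1, x2, n2, z=1.959964):
    """Delta-method CI for the ratio of two binomial proportions,
    built on the log scale (requires x1 > 0 and x2 > 0)."""
    rr = (x1 / n1) / (x2 / n2)
    se = math.sqrt(1 / x1 - 1 / n1 + 1 / x2 - 1 / n2)
    return rr * math.exp(-z * se), rr * math.exp(z * se)

lo, hi = log_ratio_ci(10, 50, 5, 50)   # observed ratio = 2.0
```

The log transform keeps the limits positive and roughly symmetrizes the sampling distribution, which is why it performs reasonably despite its simplicity.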


Journal ArticleDOI
TL;DR: In this paper, ranked set sampling (RSS) was used to estimate the parameters of the simple regression line and the objective was to increase the efficiency of the estimators relative to the simple random sampling (SRS) method.
Abstract: Ranked set sampling (RSS) as suggested by MCINTYRE (1952) and TAKAHASI and WAKIMOTO (1968) may be used to estimate the parameters of the simple regression line. The objective is to use the RSS method to increase the efficiency of the estimators relative to the simple random sampling (SRS) method. Estimators of the slope and intercept are considered. Computer-simulated results are given, and an example using real data is presented to illustrate the computations.

37 citations
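A minimal simulation of the RSS idea for regression can be sketched as follows; the set size, cycle count, and the use of x as the (cheap) ranking variable are illustrative assumptions, not the authors' exact design.

```python
import numpy as np

rng = np.random.default_rng(0)

def rss_regression_sample(k, m, slope=2.0, intercept=1.0, noise=0.1):
    """Draw a ranked set sample of k*m (x, y) pairs: in each set of k
    units, rank on x and quantify only the unit with the target rank."""
    xs, ys = [], []
    for _ in range(m):            # m cycles
        for i in range(k):        # keep the i-th ranked unit
            x = rng.uniform(0.0, 10.0, k)
            y = intercept + slope * x + rng.normal(0.0, noise, k)
            j = np.argsort(x)[i]  # index of the i-th smallest x
            xs.append(x[j])
            ys.append(y[j])
    return np.array(xs), np.array(ys)

x, y = rss_regression_sample(k=3, m=30)
slope_hat, intercept_hat = np.polyfit(x, y, 1)
```

Because each rank position is represented once per cycle, the sampled x values spread more evenly over their range than an SRS of the same size, which is the source of the efficiency gain for the slope.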


Journal ArticleDOI
TL;DR: This paper introduces a smoothed version of kappa computed after raking the table to achieve pre-specified marginal distributions, and compares it with raked kappa for various margins to indicate the extent of the dependence on the margins.
Abstract: Several authors have noted the dependence of kappa measures of inter-rater agreement on the marginal distributions of contingency tables displaying the joint ratings. This paper introduces a smoothed version of kappa computed after raking the table to achieve pre-specified marginal distributions. A comparison of kappa with raked kappa for various margins can indicate the extent of the dependence on the margins, and can indicate how much of the lack of agreement is due to marginal heterogeneity.

35 citations
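The mechanics can be sketched with iterative proportional fitting (raking) to impose pre-specified margins on a 2×2 agreement table before recomputing Cohen's kappa. The table and the uniform target margins below are made up for illustration.

```python
import numpy as np

def kappa(t):
    """Cohen's kappa for a square agreement table (counts or proportions)."""
    t = t / t.sum()
    po = np.trace(t)                      # observed agreement
    pe = t.sum(axis=1) @ t.sum(axis=0)    # chance agreement
    return (po - pe) / (1.0 - pe)

def rake(t, row_targets, col_targets, iters=500):
    """Iterative proportional fitting of the table to the given margins."""
    t = t.astype(float) / t.sum()
    for _ in range(iters):
        t *= (row_targets / t.sum(axis=1))[:, None]
        t *= (col_targets / t.sum(axis=0))[None, :]
    return t

table = np.array([[40.0, 10.0], [5.0, 45.0]])
k_raw = kappa(table)
raked = rake(table, np.array([0.5, 0.5]), np.array([0.5, 0.5]))
k_raked = kappa(raked)
```

Raking preserves the table's odds ratio while changing its margins, so comparing k_raw with k_raked isolates how much of the (dis)agreement is attributable to marginal heterogeneity.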


Journal ArticleDOI
TL;DR: This paper fits multivariate and univariate hidden Markov models allowing for time-trend to data from an experiment investigating the effects of feeding on the locomotory behaviour of locusts (Locusta migratoria).
Abstract: This paper proposes the use of hidden Markov time series models for the analysis of the behaviour sequences of one or more animals under observation. These models have advantages over the Markov chain models commonly used for behaviour sequences, as they can allow for time-trend or expansion to several subjects without sacrificing parsimony. Furthermore, they provide an alternative to higher-order Markov chain models if a first-order Markov chain is unsatisfactory as a model. To illustrate the use of such models, we fit multivariate and univariate hidden Markov models allowing for time-trend to data from an experiment investigating the effects of feeding on the locomotory behaviour of locusts (Locusta migratoria).

31 citations
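The core likelihood computation behind such models is the forward algorithm; here is a minimal sketch for a two-state model with a binary observation alphabet. All parameter values are invented for illustration and have nothing to do with the locust data.

```python
import numpy as np
from itertools import product

A = np.array([[0.9, 0.1],     # state transition matrix
              [0.2, 0.8]])
B = np.array([[0.7, 0.3],     # B[state, symbol]: emission probabilities
              [0.1, 0.9]])
pi = np.array([0.5, 0.5])     # initial state distribution

def forward_likelihood(obs):
    """Likelihood of an observation sequence, summing over all state paths."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return float(alpha.sum())

lik = forward_likelihood([0, 1, 1])
# sanity check: likelihoods over all length-3 sequences must sum to one
total = sum(forward_likelihood(seq) for seq in product([0, 1], repeat=3))
```

Time-trend, as in the paper, would make A depend on t; the recursion is unchanged except that a different transition matrix is applied at each step.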


Journal ArticleDOI
TL;DR: In this article, an identifiability theorem in the theory of dependent competing risks is presented and the effect of removing cancer from the United States population when cancer is correlated with the other causes of death is examined.
Abstract: This paper presents an identifiability theorem in the theory of dependent competing risks and it applies the result by examining the effect of removing cancer from the United States population when cancer is correlated with the other causes of death. The paper shows how dependence can be modeled with copula functions and it shows that calculating the survival probabilities after cancer is removed is equivalent to solving a system of nonlinear differential equations.

31 citations


Journal ArticleDOI
TL;DR: The relationships between the locations of clumps of sprouts and the local soil environment in an old sweet chestnut coppice are studied in this article, where the theory of marked point processes, which has not yet been used extensively in forestry studies, is shown to be adequate for the analysis of this type of spatial data.
Abstract: The relationships between the locations of the clumps of sprouts, some morphological characteristics of the clumps, and the local soil environment in an old sweet chestnut coppice are studied. The theory of marked point processes, which has not yet been used extensively in forestry studies, is shown to be adequate for the analysis of this type of spatial data. The marks correspond to morphological characteristics of the clumps: “diameter”, “number of sprouts”, “height at one year”, and “height at three years”. Several covariance functions are described which give a method for exploring the spatial relationships within the stand. Some of these functions are introduced for the first time in an actual statistical analysis. By using these functions, it is shown that the clumps are regularly distributed. The variables “diameter” and “number of sprouts” are strongly negatively correlated in space, whereas the heights are only slightly correlated or not at all. By categorising the individuals according to the mark values, it is shown that the small clumps tend to be aggregated in the gaps between medium and large clumps. Height values in the tails of the distribution, as well as their spatial correlation, are related to the local soil environment.

30 citations



Journal ArticleDOI
TL;DR: The authors use Structural Equation Models (SEM) to estimate error variance and produce highly accurate coefficients for formulation of selection gradients, which is particularly appropriate when the selection is viewed as happening at the level of the latent variables.
Abstract: Selection studies involving multiple intercorrelated independent variables have employed multiple regression analysis as a means to estimate and partition natural and sexual selection's direct and indirect effects. These statistical models assume that independent variables are measured without error. Most would conclude that such is not the case in the field studies for which these methods are employed. We demonstrate that the distortion of estimates resulting from error variance is not trivial. When independent variables are intercorrelated, extreme distortions may occur. We propose to use Structural Equation Models (SEM) to estimate error variance and produce highly accurate coefficients for the formulation of selection gradients. This method is particularly appropriate when selection is viewed as happening at the level of the latent variables.

22 citations


Journal ArticleDOI
TL;DR: This paper describes three multiple comparison procedures based on the trimmed mean, a measure of location whose standard error is relatively unaffected by heavy tails and outliers, and compares them to two methods for comparing means.
Abstract: Two common goals when choosing a method for performing all pairwise comparisons of J independent groups are controlling experimentwise Type I error and maximizing power. Typically groups are compared in terms of their means, but it has been known for over 30 years that the power of these methods becomes highly unsatisfactory under slight departures from normality toward heavy-tailed distributions. An approach to this problem, well known in the statistical literature, is to replace the sample mean with a measure of location having a standard error that is relatively unaffected by heavy tails and outliers. One possibility is to use the trimmed mean. This paper describes three such multiple comparison procedures and compares them to two methods for comparing means.

21 citations
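The location measure at the heart of these procedures is easy to state; a minimal sketch of a 20% trimmed mean follows (the robust standard error and the multiple-comparison machinery of the paper are omitted, and the example data are invented).

```python
def trimmed_mean(data, prop=0.2):
    """Mean of the data after dropping the lowest and highest
    prop-fraction of the ordered observations."""
    x = sorted(data)
    g = int(prop * len(x))      # number trimmed from each tail
    core = x[g:len(x) - g]
    return sum(core) / len(core)

clean = trimmed_mean(range(10))                       # no outliers
contaminated = trimmed_mean(list(range(9)) + [1000])  # one wild value
```

Here the single outlier shifts the ordinary mean from 4.0 to 103.6 but leaves the trimmed mean untouched, which is exactly the stability property the paper exploits.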


Journal ArticleDOI
TL;DR: In this article, a new failure model is introduced in the form of a four-parameter nonlinear differential equation, with failure probability as the dependent variable and failure time as the independent variable.
Abstract: A new failure model is introduced in the form of a four-parameter nonlinear differential equation, with failure probability as the dependent variable and failure time as the independent variable. The first parameter characterizes the location, the second the scale, and the other two the shape of the model. The type of the accompanying hazard function is immediately read off the shape parameters. The new model approximates the classical failure models with rather high precision, but also models cases where the failure density is skewed to the left. It can be used to analyze survival data objectively, based on the shape of the failure distribution. The computation of quantiles and moments is easy and fast. Nonlinear regression methods are used to estimate parameter values.

Journal ArticleDOI
TL;DR: In this article, three simple interval estimates for the risk ratio in inverse sampling are considered, and the first two interval estimates are derived on the basis of Fieller's Theorem and the delta method with the logarithmic transformation, respectively.
Abstract: Three simple interval estimates for the risk ratio in inverse sampling are considered. The first two interval estimates are derived on the basis of Fieller's Theorem and the delta method with the logarithmic transformation, respectively. The third interval estimate is derived on the basis of an F-test statistic proposed by BENNETT (1981) for testing equal probabilities of a disease between two comparison groups when the disease is rare. To evaluate the performance of these three methods, a Monte Carlo simulation is used to compare the actual coverage probability with the nominal confidence level for each method and to estimate the expected length of the corresponding confidence interval in a variety of situations. On the basis of the results found in the simulation, we have concluded that the method with the logarithmic transformation is either equivalent to or better than the other two methods for all situations considered here.

Journal ArticleDOI
TL;DR: In this article, a simple method was proposed for the statistical comparison of two Receiver Operating Characteristic (ROC) curves derived from the same set of patients and the same set of healthy subjects.
Abstract: Receiver operating characteristic (ROC) curves are used to describe the performance of diagnostic procedures. This paper proposes a simple method for the statistical comparison of two ROC curves derived from the same set of patients and the same set of healthy subjects. Generalization to studies involving more than two screening factors is straightforward. This method does not require the calculation of variances of the areas or difference of areas under the curves.
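The ROC areas being compared reduce to a Mann-Whitney-type probability; a minimal sketch of the empirical area (the scores below are invented, and the paper's comparison procedure itself is not reproduced):

```python
def auc(diseased, healthy):
    """Empirical ROC area: P(diseased score > healthy score)
    plus half the probability of a tie."""
    wins = ties = 0
    for d in diseased:
        for h in healthy:
            if d > h:
                wins += 1
            elif d == h:
                ties += 1
    return (wins + 0.5 * ties) / (len(diseased) * len(healthy))

perfect = auc([3, 4, 5], [1, 2])   # complete separation
chance = auc([1, 2], [1, 2])       # identical score distributions
```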

Journal ArticleDOI
TL;DR: In this article, an experimental design problem is considered for the analysis of long-term selection experiments with nonlinear regression models; for a 3-parameter exponential regression function, approximate formulas are constructed for the number of observations needed at each generation so that the half expected length of a (1 − α)-confidence interval for a chosen parameter is not greater than a given value.
Abstract: An experimental design problem is considered for the analysis of long-term selection experiments with nonlinear regression models. For a 3-parametric exponential regression function, whose parameters also have a reasonable biological interpretation, approximate formulas for the determination of the necessary number of observations at each generation are constructed in such a way that the half expected length of a (1 − α)-confidence interval for a chosen parameter is not greater than a given value. In this sense the accuracy of the parameter estimators can be described.

Journal ArticleDOI
TL;DR: In this article, an approximate nonparametric maximum likelihood estimation of the tumor incidence rate and comparison of tumor incidence rates between treatment groups are examined in the context of animal carcinogenicity experiments that have interval sacrifice data but lack cause-of-death information.
Abstract: Approximate nonparametric maximum likelihood estimation of the tumor incidence rate and comparison of tumor incidence rates between treatment groups are examined in the context of animal carcinogenicity experiments that have interval sacrifice data but lack cause-of-death information. The estimation procedure introduced by MALANI and VAN RYZIN (1988), which can result in a negative estimate of the tumor incidence rate, is modified by employing a numerical method to maximize the likelihood function iteratively, under the constraint that the tumor incidence rate is nonnegative. With the new procedure, estimates can be obtained even if sacrifices occur anywhere within an interval. The resulting estimates have reduced standard error and give more power to the test of two heterogeneous groups. Furthermore, a linear contrast of more than two groups can be tested using our procedure. The proposed estimation and testing methods are illustrated with an experimental data set.


Journal ArticleDOI
TL;DR: In this paper, the generalized negative binomial distribution has been found useful in fitting over-dispersed as well as under-dispersed count data, and the generalized binomial regression model has been used to predict a count response variable affected by one or more explanatory variables.
Abstract: The generalized negative binomial distribution has been found useful in fitting over-dispersed as well as under-dispersed count data. We define and study the generalized binomial regression model, which is used to predict a count response variable affected by one or more explanatory variables. The methods of maximum likelihood and moments are given for estimating the model parameters. Approximate tests for the adequacy of the model are considered. The generalized binomial regression model has been applied to two observed data sets to which the binomial regression model was applied earlier.

Journal ArticleDOI
TL;DR: In this article, the authors present tables which do not contain these limitations and which are valid for α errors of 1, 5% and 10% and for sample size of n 1 ≤ n 2 ≤ 25.
Abstract: Comparing two independent binomial proportions is quite a common problem in statistical practice. The unconditional method for doing so is more powerful than the conditional method (Fisher's exact test), but the computational difficulties of the former are much greater, and beyond the reach of most researchers. The solution adopted so far has been the publication of tables for critical regions, but the only ones that exist (MCDONALD et al., 1977; SUISSA and SHUSTER, 1985; and HABER, 1986) have certain limitations which prevent their general use (they are only valid for one-tailed tests; they allow very limited values of sample size and α error, and they are not constructed using the most powerful version of the test). In this paper the authors present tables which do not contain these limitations and which are valid for α errors of 1%, 5% and 10% and for sample size of n 1 ≤ n 2 ≤ 25 (beyond this figure, approximations may be used). They also illustrate the way to apply these tables (albeit in a conservative fashion) to the case of multinomial 2 x 2 trials.
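The unconditional p-value such tables are built from can be computed by brute force for small samples: maximize, over a grid of nuisance success probabilities, the chance of a table at least as extreme as the one observed. This sketch orders tables by the pooled Z statistic; the paper's "most powerful version" of the test may order them differently.

```python
import numpy as np
from math import comb, sqrt

def pooled_z(x1, n1, x2, n2):
    """Two-sample Z statistic with pooled variance estimate."""
    pbar = (x1 + x2) / (n1 + n2)
    if pbar in (0.0, 1.0):
        return 0.0
    se = sqrt(pbar * (1.0 - pbar) * (1.0 / n1 + 1.0 / n2))
    return (x1 / n1 - x2 / n2) / se

def unconditional_p(x1, n1, x2, n2):
    """Two-sided unconditional p-value, maximized over the nuisance p."""
    z_obs = abs(pooled_z(x1, n1, x2, n2))
    extreme = [(a, b) for a in range(n1 + 1) for b in range(n2 + 1)
               if abs(pooled_z(a, n1, b, n2)) >= z_obs - 1e-12]
    best = 0.0
    for p in np.linspace(0.001, 0.999, 999):
        total = sum(comb(n1, a) * p**a * (1 - p)**(n1 - a)
                    * comb(n2, b) * p**b * (1 - p)**(n2 - b)
                    for a, b in extreme)
        best = max(best, total)
    return best

p_extreme = unconditional_p(5, 5, 0, 5)   # 5/5 successes vs 0/5
p_null = unconditional_p(3, 5, 3, 5)      # identical proportions
```

For n1 = n2 = 5 the enumeration is tiny; the computational burden the abstract mentions comes from repeating this over all sample sizes and significance levels needed for a table.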

Journal ArticleDOI
TL;DR: In this paper, the authors compared the robustness of the maximum likelihood estimators of both the discrete and continuous groups testing models, and showed that both models suffer from similar lack of robustness.
Abstract: Parallels between the discrete group-testing model and some closely-related continuous models are elucidated. It is shown that in both the discrete and continuous cases, the maximum likelihood estimators may suffer from similar lack of robustness. Isotonic regression and maximum likelihood estimation were therefore compared for a modified group testing model.

Journal ArticleDOI
TL;DR: In this paper, it is shown that x̄ − s² has an approximately normal distribution with zero mean if the xᵢ (i = 1 to n) are independent, identically distributed Poisson variables.
Abstract: In the study of spatial patterns, the statistic I = (n − 1)s²/x̄ is commonly used. In this paper, we found that x̄ − s² has an approximately normal distribution with zero mean if the xᵢ (i = 1 to n) are independent, identically distributed Poisson variables. Based on this conclusion, the hypothesis that a point pattern is completely random can be tested directly, and a method for testing spatial patterns is proposed which can be used as an alternative to the chi-square-based dispersion index test.
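The chi-square dispersion index test mentioned as the baseline can be sketched directly; the x̄ − s² normal approximation itself needs the variance expression from the paper, which the abstract does not give, so it is not attempted here. The quadrat counts and the tabulated critical value are illustrative.

```python
def dispersion_index(counts):
    """I = (n - 1) * s^2 / xbar; approximately chi-square(n - 1)
    under a Poisson (completely random) model, where variance = mean."""
    n = len(counts)
    xbar = sum(counts) / n
    s2 = sum((c - xbar) ** 2 for c in counts) / (n - 1)
    return (n - 1) * s2 / xbar

i_ok = dispersion_index([1, 2, 3])        # variance equals mean: I = 1
i_bad = dispersion_index([0, 0, 0, 30])   # strongly clumped counts

# upper 5% point of chi-square with 3 df (standard table value)
CHI2_95_DF3 = 7.815
clumped = i_bad > CHI2_95_DF3
```

Values of I far above its degrees of freedom indicate aggregation; values far below indicate regularity.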

Journal ArticleDOI
TL;DR: In this article, the authors deal with a problem arising for tests in clinical trials, where the outcomes of a standard and a new treatment to be compared are multivariate normally distributed with common but unknown covariance matrix.
Abstract: The paper deals with a problem arising for tests in clinical trials. The outcomes of a standard and a new treatment to be compared are multivariate normally distributed with a common but unknown covariance matrix. Under the null hypothesis the means of the outcomes are equal; under the alternative the new treatment is assumed to be superior, i.e. the means are larger, without further quantification. For a known covariance matrix there is a variety of tests for this problem. Some of these procedures can be extended to the case of unknown covariances if one is willing to accept a bias. There is, however, also an efficient unbiased test. The paper contains some numerical comparisons of these different procedures and takes a look at the minimax properties of the unbiased test.

Journal ArticleDOI
TL;DR: It is shown that if vaccinated persons increase the frequency of their contacts with infectious persons, then estimators ignoring this change in behavior may substantially underestimate the VE.
Abstract: A deterministic model for the transmission of an acute infectious disease in a heterogeneous, nonrandomly mixing population is developed. This model facilitates the estimation of transmission probabilities from the observed attack rates. If some of the members of the population are vaccinated, then the vaccine efficacy (VE), defined as the relative reduction in the transmission probability due to vaccination, can be estimated. We provide several estimators of VE, depending on the amount of information available on the mixing pattern and on the action of the vaccine. We show that if vaccinated persons increase the frequency of their contacts with infectious persons, then estimators ignoring this change in behavior may substantially underestimate the VE.
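The headline quantity can be illustrated with the crude attack-rate estimator; the paper's estimators refine this using the mixing pattern and the vaccine's mode of action, and the numbers below are invented.

```python
def vaccine_efficacy(cases_vac, n_vac, cases_unvac, n_unvac):
    """Crude VE: relative reduction in attack rate due to vaccination."""
    ar_vac = cases_vac / n_vac          # attack rate, vaccinated
    ar_unvac = cases_unvac / n_unvac    # attack rate, unvaccinated
    return 1.0 - ar_vac / ar_unvac

ve = vaccine_efficacy(10, 200, 50, 200)
```

If vaccinated persons contact infectious persons more often, their attack rate is inflated relative to their transmission probability, so this crude ratio underestimates the true VE, which is the paper's central point.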

Journal ArticleDOI
TL;DR: In this article, the shapes of ten haddock and nine whiting were compared using simulation tests, and the power to discriminate between fish species was shown to be increased by permitting non-affine transformations that correct for fish curvature.
Abstract: Landmarks are used to summarize the shapes of ten haddock and nine whiting. Error variance models, after generalized Procrustes analysis, are compared using simulation tests. The power to discriminate between fish species is shown to be increased by permitting non-affine transformations that correct for fish curvature.

Journal ArticleDOI
TL;DR: In this paper, a one-year birth cohort from Northern Finland has been followed up since 1966, and the progression of myopia up to the age of 20 years was analyzed.
Abstract: A one-year birth cohort from Northern Finland has been followed up since 1966. As a part of this study, we are in this paper concerned with analysing the progression of myopia (nearsightedness) up to the age of 20 years. The random coefficient regression model was chosen for the analysis because of the large individual variation in the development of myopia. Maximum likelihood estimates for the parameters in the model were obtained via the expectation-maximization (EM) algorithm. It is shown how the estimated model can be used to predict future observations for an individual using the previously recorded refractive error measurements as well as other relevant data on the patient in question.

Journal ArticleDOI
TL;DR: In this article, the authors observed the spatial pattern of bacteria colonizing a sterile 316L stainless steel coupon as bulk water containing bacteria flowed across the coupon, and calculated tolerance envelopes such that the tolerance level was simultaneous for all distances of concern.
Abstract: Using sophisticated microscopy techniques, we observed the spatial pattern of bacteria colonizing a sterile 316L stainless steel coupon as bulk water containing bacteria flowed across the coupon. The experiments used stainless steel of differing roughness and surface chemistry. The ultimate goal was to identify surface features which influence bacterial adsorption. The immediate statistical goal was to distinguish patterns consistent with complete spatial randomness from patterns showing regularity or aggregation. This goal was accomplished by using modified analyses of distance functions commonly applied in field ecology. The method protected against a potential multiple comparisons problem. For the null value of the distance function, we calculated tolerance envelopes such that the tolerance level was simultaneous for all distances of concern. Computer simulation experiments showed that the nominal level was accurate. The methodology was effective for detecting and describing patterns of colonization known not to be completely spatially random.
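A single-statistic version of the envelope idea can be sketched with Monte Carlo simulation under complete spatial randomness (CSR). The paper's envelopes are simultaneous over all distances of a distance function, which this sketch does not attempt; point counts and simulation counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_nn_distance(pts):
    """Mean nearest-neighbour distance of a 2-D point pattern."""
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)    # ignore self-distances
    return d.min(axis=1).mean()

n_points, n_sim = 50, 99
sims = np.array([mean_nn_distance(rng.uniform(size=(n_points, 2)))
                 for _ in range(n_sim)])
lo, hi = sims.min(), sims.max()    # envelope for this one statistic

observed = mean_nn_distance(rng.uniform(size=(n_points, 2)))
consistent_with_csr = lo <= observed <= hi
```

Regular patterns push the observed statistic above the envelope and aggregated (clumped) patterns push it below; making the envelope simultaneous across distances, as in the paper, guards against the multiple comparisons problem of checking many distances at once.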

Journal ArticleDOI
TL;DR: In this paper, the problem of assessing the relative calibrations and relative accuracies of a set of p instruments, each designed to measure the same characteristic on a common group of individuals is considered by using the EM algorithm.
Abstract: The problem of assessing the relative calibrations and relative accuracies of a set of p instruments, each designed to measure the same characteristic on a common group of individuals, is considered by using the EM algorithm. As shown, the EM algorithm provides a general solution for this problem. Its implementation is simple and in its most general form requires no extra iterative procedures within the M step. One important feature of the algorithm in this setup is that the error variance estimates are always positive. Thus, it can be seen as a kind of restricted maximization procedure. The expected information matrix for the maximum likelihood estimators is derived, from which the large-sample estimated covariance matrix for the maximum likelihood estimators can be computed. The problem of testing hypotheses about the calibration lines can be approached by using Wald statistics. The approach is illustrated by re-analysing two data sets from the literature.

Journal ArticleDOI
TL;DR: In this paper, a measure is developed to quantify the closeness of Satterthwaite's approximation by using certain optimization results given by THIBAUDEAU and STYAN (1985).
Abstract: Satterthwaite's approximation of the distribution of a nonnegative linear combination of independent mean squares is addressed in this article. A measure is developed to quantify the closeness of this approximation by using certain optimization results given by THIBAUDEAU and STYAN (1985). The main advantage of the proposed measure is to provide a theoretical framework for determining conditions under which Satterthwaite's approximation may be inadequate. This is demonstrated in three examples portraying frequently encountered problems in analysis of variance.
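The approximation being assessed assigns effective degrees of freedom to a nonnegative linear combination of independent mean squares; a minimal sketch of the standard formula (coefficients, mean squares, and degrees of freedom below are invented):

```python
def satterthwaite_df(coefs, mean_squares, dfs):
    """Satterthwaite effective degrees of freedom for sum_i a_i * MS_i."""
    num = sum(a * ms for a, ms in zip(coefs, mean_squares)) ** 2
    den = sum((a * ms) ** 2 / d
              for a, ms, d in zip(coefs, mean_squares, dfs))
    return num / den

single = satterthwaite_df([1.0], [2.0], [5.0])    # one term: recovers its df
combined = satterthwaite_df([0.5, 0.5], [4.0, 4.0], [10.0, 10.0])
```

The approximation matches the first two moments of a scaled chi-square; the paper's measure quantifies when that two-moment match is inadequate.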

Journal ArticleDOI
TL;DR: The authors presented a complete analysis of variability for breast cancer absolute risk estimates, taking into account all components of variability, namely the variances of relative risk estimates and of baseline hazard estimates as well as the covariance between the two, the latter terms being based on implicit delta-method arguments.
Abstract: The absolute risk is the probability of developing a given disease over a specified time interval given age and risk factors. GAIL et al. (1989) obtained point estimates for the absolute risk of breast cancer from a population-based case-control study, the Breast Cancer Detection and Demonstration Project (BCDDP). They combined relative risk estimates obtained from the case-control data and based on four risk factors and age, and composite hazard estimates from the cohort data. In this paper, we present a complete analysis of variability for breast cancer absolute risk estimates. Our proposed variance estimator takes into account all components of variability, namely the variances of relative risk estimates and of baseline hazard estimates as well as the covariance between the two, the latter terms being based on implicit delta-method arguments (BENICHOU and GAIL, 1989). Our variance estimator also takes into account the specifics of the BCDDP study, namely the subsampling of cases and controls. We provide full details of the variance calculations because we anticipate that, in future studies, subsampling will also occur. We also present numerical illustration based on the BCDDP data. These calculations have been used in a recently developed program that computes point estimates and confidence intervals for the absolute risk of breast cancer based on the BCDDP data (BENICHOU, 1993).


Journal ArticleDOI
TL;DR: In this article, a one-sided test for the equicorrelation coefficient in the presence of random coefficients is constructed using an optimal testing procedure; the test statistic is a ratio of quadratic forms in normal variables which is most powerful and point optimal invariant.
Abstract: Recently BHATTI (1993) considered an efficient estimation of a random coefficient model based on survey data. The main objective of this paper is to construct a one-sided test for the equicorrelation coefficient in the presence of random coefficients using an optimal testing procedure. The test statistic is a ratio of quadratic forms in normal variables which is most powerful and point optimal invariant.