
Showing papers in "Communications in Statistics-theory and Methods in 1994"


Journal ArticleDOI
TL;DR: In this paper, the authors consider the identification and estimation of treatment differences based on a new class of structural models, the multivariate structural nested mean models, when reliable estimates of each subject's actual treatment are available.
Abstract: In a randomized trial designed to study the effect of a treatment of interest on the evolution of the mean of a time-dependent outcome variable, subjects are assigned to a treatment regime, or, equivalently, a treatment protocol. Unfortunately, subjects often fail to comply with their assigned regime. From a public health point of view, the causal parameter of interest will often be a function of the treatment differences that would have been observed had, contrary to fact, all subjects remained on protocol. This paper considers the identification and estimation of these treatment differences based on a new class of structural models, the multivariate structural nested mean models, when reliable estimates of each subject's actual treatment are available. Estimates of “actual treatment” might, for example, be obtained by measuring the amount of “active drug” in the subject's blood or urine at each follow-up visit or by pill counting techniques. In addition, we discuss a natural extension of our methods to ob...

562 citations


Journal ArticleDOI
TL;DR: A new, general method of modelling covariance structure based on the Kronecker product of underlying factor specific covariance profiles is presented, which has an attractive interpretation in terms of independent factor specific contribution to overall within subject covariance structure and can be easily adapted to standard software.
Abstract: The main difficulty in parametric analysis of longitudinal data lies in specifying covariance structure. Several covariance structures, which usually reflect one series of measurements collected over time, have been presented in the literature. However there is a lack of literature on covariance structures designed for repeated measures specified by more than one repeated factor. In this paper a new, general method of modelling covariance structure based on the Kronecker product of underlying factor specific covariance profiles is presented. The method has an attractive interpretation in terms of independent factor specific contribution to overall within subject covariance structure and can be easily adapted to standard software.

208 citations
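
As a rough illustration of the Kronecker-product idea (not the authors' code; the factor-specific matrices below are invented for illustration), a within-subject covariance for repeated measures indexed by two factors can be assembled as the Kronecker product of one covariance profile per factor:

import numpy as np

# Hypothetical factor-specific covariance profiles:
# an AR(1)-type profile over 3 time points and an unstructured
# 2x2 profile over two measurement conditions.
rho = 0.6
time_cov = np.array([[1.0, rho, rho**2],
                     [rho, 1.0, rho],
                     [rho**2, rho, 1.0]])
cond_cov = np.array([[1.0, 0.3],
                     [0.3, 2.0]])

# Overall within-subject covariance for the 6 = 3 x 2 repeated measures;
# each factor contributes to the overall structure through the Kronecker
# product (the ordering of the two factors in np.kron is a modelling choice).
overall_cov = np.kron(time_cov, cond_cov)
print(overall_cov.shape)   # (6, 6)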


Journal ArticleDOI
TL;DR: A generalized Kaplan-Meier estimator has been considered in the literature on conditional survival analysis (Beran (1981), Gonzalez-Manteiga and Cadarso-Suarez (1991) and Gentleman and Crowley (1991)).
Abstract: A generalized Kaplan-Meier estimator has been considered in the literature on conditional survival analysis (Beran (1981), Gonzalez-Manteiga and Cadarso-Suarez (1991) and Gentleman and Crowley (1991)). An almost sure representation as a sum of independent variables is given here for this estimator. Some applications are obtained as consequences of these results.

126 citations
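
The generalized (conditional) Kaplan-Meier estimator referred to here is usually written as a product-limit estimator with kernel weights in the covariate; a minimal sketch, assuming a Beran-type form with a Gaussian kernel (the data, bandwidth, and function names below are illustrative, not taken from the paper):

import numpy as np

def beran_survival(t, x0, times, delta, x, h):
    """Conditional survival S(t | x0) using kernel weights (Beran-type form).
    times: observed times, delta: 1 = event, 0 = censored,
    x: covariate values, h: bandwidth."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)       # Gaussian kernel weights
    w = w / w.sum()
    order = np.argsort(times)
    times, delta, w = times[order], delta[order], w[order]
    surv = 1.0
    for i in range(len(times)):
        if times[i] > t:
            break
        at_risk = w[i:].sum()                    # kernel mass still at risk
        if delta[i] == 1 and at_risk > 0:
            surv *= 1.0 - w[i] / at_risk
    return surv

# Illustrative data: survival depends on the covariate x
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
lifetimes = rng.exponential(1 + x)
cens = rng.exponential(2.0, 200)
delta = (lifetimes <= cens).astype(int)
obs = np.minimum(lifetimes, cens)
print(beran_survival(1.0, 0.5, obs, delta, x, h=0.2))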


Journal ArticleDOI
TL;DR: In this paper, a two shape parameter generalization of the well known family of the Weibull distributions is presented and its properties are studied, including skewness and kurtosis, density shapes and tail character, and relation of the members of the family to those of the Pearsonian system.
Abstract: A two shape parameter generalization of the well known family of the Weibull distributions is presented and its properties are studied. The properties examined include the skewness and kurtosis, density shapes and tail character, and relation of the members of the family to those of the Pearsonian system. The members of the family are grouped in four classes in terms of these properties. Also studied are the extreme value distributions and the limiting distributions of the extreme spacings for the members of the family. It is seen that the generalized Weibull family contains distributions with a variety of density and tail shapes, and distributions which in terms of skewness and kurtosis approximate the main types of curves of the Pearson system. Furthermore, as shown by the extreme value and extreme spacings distributions, the family contains short, medium and long tailed distributions. The quantile and density quantile functions are the principal tools used for the structural analysis of the family.

93 citations


Journal ArticleDOI
TL;DR: In this paper, the authors deal with the problem of predicting, on the basis of censored sampling, the ordered lifetimes in a future sample when samples are assumed to follow the inverse Weibull distribution.
Abstract: This paper deals with the problem of predicting, on the basis of censored sampling, the ordered lifetimes in a future sample when samples are assumed to follow the inverse Weibull distribution. Bayes prediction intervals are derived, both when no prior information is available and when prior information on the unreliability level at a fixed time is introduced in the predictive procedure. A Monte Carlo simulation study has shown that the use of the prior information leads to a more accurate prediction, even when the choice of the informative prior density is quite wrong.

91 citations


Journal ArticleDOI
TL;DR: In this article, the authors established some recurrence relations satisfied by single and product moments of upper record values from the generalized Pareto distribution, and showed that these relations may be used to obtain all the single/product moments of all record values in a simple recursive manner.
Abstract: In this paper we establish some recurrence relations satisfied by single and product moments of upper record values from the generalized Pareto distribution. It is shown that these relations may be used to obtain all the single and product moments of all record values in a simple recursive manner. We also show that similar results established recently by Balakrishnan and Ahsanullah (1993) for the upper record values from the exponential distribution may be deduced by letting the shape parameter p tend to 0.

82 citations


Journal ArticleDOI
TL;DR: In a number of situations, a simple randomized trial in unselected patients may not be possible or efficient, so designs such as randomized withdrawal of therapy, early escape/early advance of patients doing poorly, and enrichment of study populations with potential responders may increase the feasibility of a trial and its chances of success.
Abstract: In a number of situations, a simple randomized trial in unselected patients may not be possible or efficient. Randomized withdrawal of therapy in patients previously treated, studies that allow early escape/early advance of patients doing poorly, and enrichment of study populations with potential responders may increase the feasibility of a trial and its chances for success.

76 citations


Journal ArticleDOI
TL;DR: In this paper, it is shown that Mardia's measure of multivariate kurtosis b2,d is asymptotically normal with limiting variance σ2 depending on the distribution of X 1. As a consequence, an approximation to the power function of a commonly proposed test for multivariate normality based on b2,d is obtained.
Abstract: Let X 1, X 2, … be independent identically distributed random d-vectors with mean μ and nonsingular covariance matrix ∑, satisfying appropriate moment conditions. We show that Mardia’s measure of multivariate kurtosis b2,d is asymptotically normal with limiting variance σ2 depending on the distribution of X 1. As a consequence we obtain an approximation to the power function of a commonly proposed test for multivariate normality based on b2,d. Moreover, this test is consistent if, and only if, the kurtosis of X 1 differs from that of the normal distribution. Examples include normal mixtures and elliptically symmetric distributions.

65 citations
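
For reference, Mardia's multivariate kurtosis b2,d is the average of the squared Mahalanobis distances of the observations from their sample mean; a minimal computation of that standard statistic (the paper's power approximation is not reproduced here):

import numpy as np

def mardia_kurtosis(X):
    """Mardia's b_{2,d}: mean of squared Mahalanobis distances."""
    n, d = X.shape
    Xc = X - X.mean(axis=0)
    S = (Xc.T @ Xc) / n                            # ML covariance estimate
    Sinv = np.linalg.inv(S)
    m = np.einsum('ij,jk,ik->i', Xc, Sinv, Xc)     # squared Mahalanobis distances
    return np.mean(m ** 2)

rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 3))
# Under multivariate normality b_{2,d} is close to d(d+2) = 15 here.
print(mardia_kurtosis(X))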


Journal ArticleDOI
TL;DR: In this paper, the bias of the estimate of the variance of the overall effect synthesized from individual studies by using the variance weighted method is discussed, and the conditions, the likelihood of underestimation and the bias from this conventional estimate are studied based on the assumption that the estimates of the effect are subject to normal distribution with common mean.
Abstract: The authors discuss the bias of the estimate of the variance of the overall effect synthesized from individual studies by using the variance weighted method. This bias is proven to be negative. Furthermore, the conditions, the likelihood of underestimation and the bias of this conventional estimate are studied under the assumption that the estimates of the effect follow a normal distribution with a common mean. The likelihood of underestimation is very high (e.g. it is greater than 85% when the sample sizes in the two combined studies are less than 120). Alternative, less biased estimates for the cases with and without homogeneity of the variances are given in order to adjust for the sample size and the variation of the population variance. In addition, the sample size weight method is suggested if the consistency of the sample variances is violated. Finally, a real example is presented to show the differences among the above three estimation methods.

54 citations
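
The conventional variance weighted (inverse-variance) synthesis discussed here combines the study-level estimates with weights 1/v_i and reports 1/sum(1/v_i) as the variance of the pooled effect; a minimal sketch of that conventional estimate, with illustrative numbers (the paper's point is that this variance estimate is biased downward when the v_i are themselves estimated):

import numpy as np

# Illustrative study-level effect estimates and their estimated variances
effects = np.array([0.42, 0.10, 0.35, 0.27])
variances = np.array([0.04, 0.09, 0.02, 0.06])

weights = 1.0 / variances
pooled_effect = np.sum(weights * effects) / np.sum(weights)
pooled_variance = 1.0 / np.sum(weights)   # conventional estimate; tends to be
                                          # too small when variances are estimated
print(pooled_effect, pooled_variance)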


Journal ArticleDOI
TL;DR: The count data model studied in this paper extends the Poisson model by allowing for overdispersion and serial correlation, and alternative approaches to estimate nuisance parameters, required for the c...
Abstract: The count data model studied in the paper extends the Poisson model by allowing for overdispersion and serial correlation. Alternative approaches to estimate nuisance parameters, required for the c ...

49 citations


Journal ArticleDOI
TL;DR: In this article, the use of information in sequential monitoring of clinical trials is described; defined technically as the inverse of the variance of some estimate, the information in a trial depends on the type of data collected on the patients.
Abstract: The use of information in sequential monitoring of clinical trials is described. Technically defined as the inverse of the variance of some estimate, the information in a trial depends on the type of data collected on the patients. We examine three common situations: comparison of two means, comparison of two survival curves, and comparison of two population slopes from repeated measures data. In each case we discuss how to proceed when the information available at the planned end of the trial, the total information, is unknown. The amount of information at a given interim analysis divided by the total information is the information fraction. Some natural estimates of the information fraction and their relationships to calendar time are presented. The concept of total information can also be useful for the design of trials collecting repeated measures data.
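
For the two-means comparison, the information in this sense is simply the reciprocal of the variance of the estimated treatment difference, and the information fraction is interim information divided by total information; a small worked sketch with assumed, purely illustrative values:

# Information = 1 / Var(estimated difference of two means).
sigma2 = 4.0                               # assumed common outcome variance

def information(n1, n2, sigma2):
    return 1.0 / (sigma2 / n1 + sigma2 / n2)

interim = information(60, 55, sigma2)      # patients observed at the interim look
total = information(150, 150, sigma2)      # planned total per arm
print(interim / total)                     # information fraction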

Journal ArticleDOI
TL;DR: In this paper, the authors consider estimation of the mean time to failure using loss functions that reflect both goodness of fit and precision of estimation, and show how this can be done using balanced loss functions (BLF) of the type introduced in Zellner (1994) and the weighted balanced loss function (WBLF) introduced in this paper.
Abstract: The purpose of this paper is to consider estimation of the mean time to failure using loss functions that reflect both goodness of fit and precision of estimation. We show how this can be done using balanced loss functions (BLF) of the type introduced in Zellner (1994) and the weighted balanced loss function (WBLF) introduced in this paper. Optimal point estimates relative to BLF and WBLF are shown to be a compromise between the usual Bayesian and non-Bayesian estimates. Using diffuse and informative priors, posterior expected losses associated with alternative estimates are evaluated and compared.
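
A Zellner-type balanced loss combines a goodness-of-fit term with a precision-of-estimation term, and the optimal point estimate is a weighted compromise between the sample mean and the posterior mean. A hedged sketch for a normal mean with a conjugate prior (a simpler setting than the paper's mean-time-to-failure problem; the weight and data are invented for illustration):

import numpy as np

# Balanced loss: L(theta, delta) = w * (1/n) * sum((x_i - delta)^2)
#                                 + (1 - w) * (delta - theta)^2
# Minimizing the posterior expected loss in delta gives
#   delta = w * xbar + (1 - w) * posterior_mean.
rng = np.random.default_rng(2)
x = rng.normal(5.0, 2.0, size=30)
xbar = x.mean()

# Conjugate normal prior on theta (illustrative values)
prior_mean, prior_var, sigma2 = 4.0, 1.0, 4.0
n = len(x)
post_var = 1.0 / (1.0 / prior_var + n / sigma2)
post_mean = post_var * (prior_mean / prior_var + n * xbar / sigma2)

w = 0.5                                    # weight on goodness of fit
balanced_estimate = w * xbar + (1 - w) * post_mean
print(xbar, post_mean, balanced_estimate)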

Journal ArticleDOI
TL;DR: In this article, a confidence interval for the 100pth percentile of the Birnbaum-Saunders distribution is constructed, and conservative two-sided tolerance limits are then obtained from the confidence limits.
Abstract: In this paper, a confidence interval for the 100pth percentile of the Birnbaum-Saunders distribution is constructed. Conservative two-sided tolerance limits are then obtained from the confidence limits. These results are useful for reliability evaluation when using the Birnbaum-Saunders model. A simple scheme for generating Birnbaum-Saunders random variates is derived. This is used for a simulation study investigating the effectiveness of the proposed confidence interval in terms of its coverage probability.
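
One simple way to generate Birnbaum-Saunders variates (not necessarily the scheme derived in the paper) uses the defining relation with a standard normal Z, namely Z = (1/alpha)(sqrt(T/beta) - sqrt(beta/T)); solving for T gives the transformation below. A minimal sketch:

import numpy as np

def rbirnbaum_saunders(n, alpha, beta, rng=None):
    """Generate Birnbaum-Saunders(alpha, beta) variates from standard normals,
    via T = beta * (alpha*Z/2 + sqrt((alpha*Z/2)^2 + 1))^2."""
    rng = rng or np.random.default_rng()
    z = rng.standard_normal(n)
    w = alpha * z / 2.0
    return beta * (w + np.sqrt(w ** 2 + 1.0)) ** 2

t = rbirnbaum_saunders(100000, alpha=0.5, beta=2.0, rng=np.random.default_rng(3))
print(np.median(t))   # the median of BS(alpha, beta) equals beta (here 2.0)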

Journal ArticleDOI
TL;DR: In this article, for a single record-breaking sample, consistent estimation is not possible, and replication is required for global results, and the proposed distribution function and density estimators are shown to be strongly consistent and asymptotically normal as m → ∞.
Abstract: In some experiments, such as destructive stress testing and industrial quality control experiments, only values smaller than all previous ones are observed. Here, for such record-breaking data, kernel estimation of the cumulative distribution function and smooth estimation of the density are considered. For a single record-breaking sample, consistent estimation is not possible, and replication is required for global results. For m independent record-breaking samples, the proposed distribution function and density estimators are shown to be strongly consistent and asymptotically normal as m → ∞. Also, for small m, the mean squared errors and biases of the estimators and their smoothing parameters are investigated through computer simulations.

Journal ArticleDOI
TL;DR: This paper demonstrates ways issues can be resolved, and presents some modifications to the current literature of a randomized adaptive allocation scheme, a design in which the probability a treatment is administered to each patient depends upon the results of the previous patients.
Abstract: A randomized adaptive allocation scheme is a design in which the probability a treatment is administered to each patient depends upon the results of the previous patients in the study. Typically, an arm that is doing well is more likely to be allocated to future patients than an arm that is doing poorly. Occasionally, ethical and/or practical considerations suggest that such designs may be appropriate. However, many issues need to be addressed in order to run the trial properly. Among these are studies with more than two arms, the logistics behind the trial, delayed patient response, and inferences drawn from data collected in this manner. This paper demonstrates ways these issues can be resolved, and presents some modifications to the current literature. A simulation study demonstrates the operating characteristics of the design.
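
A classical example of the kind of randomized adaptive allocation rule described here is the randomized play-the-winner urn; the paper's own scheme and modifications are more elaborate, but a minimal two-arm sketch of the basic idea (binary, immediately observed responses and invented success probabilities) looks like this:

import numpy as np

rng = np.random.default_rng(4)
p_success = [0.7, 0.4]         # illustrative true success probabilities
urn = [1, 1]                   # one ball per arm to start (RPW(1,1)-style)
assignments = []

for patient in range(100):
    arm = rng.choice(2, p=np.array(urn) / sum(urn))   # draw an arm from the urn
    assignments.append(arm)
    if rng.random() < p_success[arm]:
        urn[arm] += 1           # success: reinforce the same arm
    else:
        urn[1 - arm] += 1       # failure: reinforce the other arm

print("allocations per arm:", np.bincount(assignments, minlength=2))
print("final urn composition:", urn)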

Journal ArticleDOI
TL;DR: The maximin distance criterion is used for the selection of an OA-based Latin hypercube that reaches the same distance as its parent array.
Abstract: The maximin distance criterion is used for the selection of an OA-based Latin hypercube. For the case in which the underlying orthogonal array is a full factorial design without replication, we construct an OA-based Latin hypercube that reaches the same distance as its parent array.
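
The maximin distance criterion scores a design by its smallest pairwise inter-point distance. A minimal sketch that evaluates the criterion over candidate Latin hypercubes (plain random ones here rather than OA-based ones, purely to show the criterion):

import numpy as np
from itertools import combinations

def min_pairwise_distance(design):
    """Maximin criterion: smallest Euclidean distance between design points."""
    return min(np.linalg.norm(a - b) for a, b in combinations(design, 2))

def random_latin_hypercube(n, d, rng):
    # One point in each of n equally spaced cells per dimension.
    return np.column_stack([(rng.permutation(n) + 0.5) / n for _ in range(d)])

rng = np.random.default_rng(5)
candidates = [random_latin_hypercube(9, 2, rng) for _ in range(200)]
best = max(candidates, key=min_pairwise_distance)
print(min_pairwise_distance(best))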

Journal ArticleDOI
TL;DR: In this article, an estimator of the number of jumps of a jump regression function is proposed, based on the difference between right and left one-sided kernel smoothers, and is proved to be a.s. consistent.
Abstract: This paper suggests an estimator of the number of jumps of a jump regression function. The estimator is based on the difference between right and left one-sided kernel smoothers. It is proved to be a.s. consistent. Some results about its rate of convergence are also provided.
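
The core quantity is the difference between a right-sided and a left-sided kernel smoother at each point; near a jump this difference spikes. A rough sketch with a uniform one-sided kernel (bandwidth and data invented for illustration; the paper's estimator of the number of jumps builds on this kind of difference):

import numpy as np

rng = np.random.default_rng(6)
x = np.sort(rng.uniform(0, 1, 500))
m = np.where(x < 0.5, 1.0, 3.0)            # regression function with one jump
y = m + rng.normal(0, 0.3, x.size)

def one_sided_difference(x0, x, y, h):
    """Right-sided minus left-sided local average around x0."""
    right = y[(x > x0) & (x <= x0 + h)]
    left = y[(x < x0) & (x >= x0 - h)]
    if right.size == 0 or left.size == 0:
        return 0.0
    return right.mean() - left.mean()

grid = np.linspace(0.05, 0.95, 200)
diff = np.array([one_sided_difference(g, x, y, h=0.05) for g in grid])
print(grid[np.argmax(np.abs(diff))])       # should be near the jump at 0.5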

Journal ArticleDOI
TL;DR: In this article, the concepts of design and scheme are introduced for characterizing RR surveys and some consequences of comparing RR designs based on statistical measures of efficiency and respondent protection are discussed.
Abstract: In studies about sensitive characteristics, randomized response (RR) methods are useful for generating reliable data while protecting respondents’ privacy. It is shown that all RR surveys for estimating a proportion can be encompassed in a common model, and some general results for statistical inference can be used for any given survey. The concepts of design and scheme are introduced for characterizing RR surveys. Some consequences of comparing RR designs based on statistical measures of efficiency and respondent protection are discussed. In particular, such comparisons may lead to designs that are not suitable in practice. It is suggested that one should consider other criteria and the scheme parameters when planning an RR survey.
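
The best-known instance of the class of RR designs covered by such a common model is Warner's procedure; a minimal sketch of its moment estimator of the sensitive proportion (Warner's classical formulas, used here only as one concrete RR design, not the paper's general model):

import numpy as np

def warner_estimate(yes_count, n, p_design):
    """Warner's RR estimator: each respondent answers the sensitive question
    with probability p_design and its complement otherwise."""
    lam_hat = yes_count / n                               # observed 'yes' proportion
    pi_hat = (lam_hat - (1 - p_design)) / (2 * p_design - 1)
    var_hat = lam_hat * (1 - lam_hat) / (n * (2 * p_design - 1) ** 2)
    return pi_hat, var_hat

# Simulated survey: true sensitive proportion 0.2, design probability 0.7
rng = np.random.default_rng(7)
pi_true, p_design, n = 0.2, 0.7, 2000
sensitive = rng.random(n) < pi_true
asks_sensitive = rng.random(n) < p_design
yes = np.where(asks_sensitive, sensitive, ~sensitive).sum()
print(warner_estimate(yes, n, p_design))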

Journal ArticleDOI
TL;DR: In this article, a class of estimators for estimating ratio and product of two means of a finite population using information on two auxiliary characters is proposed and an empirical study is carried out to compare the performance of various estimators of ratio with the conventional estimators.
Abstract: This paper proposes a class of estimators for estimating the ratio and product of two means of a finite population using information on two auxiliary characters. Asymptotic expressions, to terms of order O(n^-1), for the bias and mean square error (MSE) of the proposed class of estimators are derived. Optimum conditions are obtained under which the proposed class of estimators has the minimum MSE. An empirical study is carried out to compare the performance of various estimators of the ratio with the conventional estimators.

Journal ArticleDOI
TL;DR: Assuming the copula of the death and censoring times is known, the notion of self consistency is used to construct an estimator of the marginal survival functions from dependent competing risk data, and a small simulation study compares this estimator to other estimators based on an assumed copula.
Abstract: When the time to death, X, and the time to censoring, Y, are associated, some additional information is needed to identify the marginal survival functions. A natural function which provides this additional information is the copula of X and Y. Assuming that the copula is known, we use the notion of self consistency to construct an estimator of the marginal survival functions based on dependent competing risk data. Results of a small simulation study are shown to compare this estimator to other estimators of the marginal survival function based on an assumed copula.

Journal ArticleDOI
M. C. Jones
TL;DR: In this article, estimators of derivatives of a density function based on polynomial multiples of kernels are compared with those based on differentiated kernels, and the latter are found to be more accurate than the former.
Abstract: Estimators of derivatives of a density function based on polynomial multiples of kernels are compared with those based on differentiated kernels.
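
The differentiated-kernel approach estimates f'(x) by differentiating the usual kernel density estimate, which has an explicit form for a Gaussian kernel; a minimal sketch (the paper's comparison with polynomial-multiple kernels is not reproduced):

import numpy as np

def density_derivative(x0, data, h):
    """Estimate f'(x0) by differentiating the Gaussian-kernel density estimate:
    d/dx (1/(n h)) sum phi((x - X_i)/h) = (1/(n h^2)) sum phi'((x - X_i)/h),
    with phi'(u) = -u * phi(u)."""
    u = (x0 - data) / h
    phi_prime = -u * np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    return phi_prime.sum() / (data.size * h ** 2)

rng = np.random.default_rng(8)
data = rng.standard_normal(5000)
# True derivative of the standard normal density at 1.0 is -phi(1), about -0.242
print(density_derivative(1.0, data, h=0.3))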

Journal ArticleDOI
TL;DR: In this paper, a hierarchical model approach to the estimation of state space models with diffuse initial conditions is presented. But this approach is not suitable for state transition models with fixed effects and nonstationarity in state transition equations.
Abstract: This article takes a hierarchical model approach to the estimation of state space models with diffuse initial conditions. An initial state is said to be diffuse when it cannot be assigned a proper prior distribution. In state space models this occurs either when fixed effects are present or when modelling nonstationarity in the state transition equation. Whereas much of the literature views diffuse states as an initialization problem, we follow the approach of Sallas and Harville (1981,1988) and incorporate diffuse initial conditions via noninformative prior distributions into hierarchical linear models. We apply existing results to derive the restricted loglike-lihood and appropriate modifications to the standard Kalman filter and smoother. Our approach results in a better understanding of De Jong's (1991) contributions. This article also shows how to adjust the standard Kalman filter, the fixed inter- val smoother and the state space model forecasting recursions, together with their mean square errors, ...
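
A common practical stand-in for a genuinely diffuse initial state (as opposed to the exact treatment via noninformative priors discussed in the article) is to start the standard Kalman filter with a very large initial state variance; a minimal local-level sketch, with all values invented for illustration:

import numpy as np

# Local level model: state a_t = a_{t-1} + eta_t, observation y_t = a_t + eps_t.
rng = np.random.default_rng(9)
q, r = 0.5, 1.0                              # state and observation noise variances
level = np.cumsum(rng.normal(0, np.sqrt(q), 100))
y = level + rng.normal(0, np.sqrt(r), 100)

a, p = 0.0, 1e7                              # "diffuse" start: huge prior variance
filtered = []
for obs in y:
    p = p + q                                # prediction step
    k = p / (p + r)                          # Kalman gain
    a = a + k * (obs - a)                    # update step
    p = (1 - k) * p
    filtered.append(a)

print(filtered[-1], level[-1])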

Journal ArticleDOI
TL;DR: In this article, the authors extend the results of AL-Hussaini and Jaheen (1992) and develop approximate Bayes estimators of the two unknown parameters, reliability and failure rate functions of the Burr type XII failure model by using the method of Tierney and Kadane (1986) based on type-2 censored samples.
Abstract: This paper extends the results of AL-Hussaini and Jaheen (1992) and develops approximate Bayes estimators of the two (unknown) parameters, reliability and failure rate functions of the Burr type XII failure model by using the method of Tierney and Kadane (1986) based on type-2 censored samples. Comparisons are made between those estimators and their corresponding Bayes estimators obtained by using the method of Lindley (1980), together with the maximum likelihood estimators, based on a Monte Carlo simulation study.

Journal ArticleDOI
TL;DR: In this article, conditional regression models which generalize Gabriel's constant-order ante-dependence model are presented, together with simple expressions for likelihood ratio test statistics in terms of sums of squares from appropriate analyses of covariance.
Abstract: Ante-dependence models can be used to model the covariance structure in problems involving repeated measures through time. They are conditional regression models which generalize Gabriel’s constant-order ante-dependence model. Likelihood-based procedures are presented, together with simple expressions for likelihood ratio test statistics in terms of sums of squares from appropriate analyses of covariance. The estimation of the orders is approached as a model selection problem, and penalized likelihood criteria are suggested. Extensions of all procedures discussed here to situations with a monotone pattern of missing data are presented.

Journal ArticleDOI
TL;DR: In this paper, the authors derived recurrence relations for the single and product moments of order statistics arising from n independent non-identically distributed right-truncated exponential random variables.
Abstract: By considering order statistics arising from n independent non-identically distributed right-truncated exponential random variables, we derive in this paper several recurrence relations for the single and the product moments of order statistics. These recurrence relations are simple in nature and could be used systematically in order to compute all the single and the product moments of order statistics for all sample sizes in a simple recursive manner. The results for order statistics from a multiple-outlier model (with a slippage of p observations) from a right-truncated exponential population are deduced as special cases. These results will be useful in assessing robustness properties of any linear estimator of the unknown parameter of the right-truncated exponential distribution, in the presence of one or more outliers in the sample. These results generalize those for the order statistics arising from an iid sample from a right-truncated exponential population established by Joshi (1978, 1982).

Journal ArticleDOI
TL;DR: In this article, a fast initial response (FIR) feature for the zone control chart is proposed and the average run lengths of the ZCC with this feature are calculated, and it is shown that the FIR feature improves the performance of ZCC by providing significantly earlier signals when the process is out of control.
Abstract: A general model for the zone control chart is presented. Using this model, it is shown that there are score vectors for zone control charts which result in superior average run length performance in comparison to Shewhart charts with common runs rules. A fast initial response (FIR) feature for the zone control chart is also proposed. Average run lengths of the zone control chart with this feature are calculated. It is shown that the FIR feature improves zone control chart performance by providing significantly earlier signals when the process is out of control.

Journal ArticleDOI
TL;DR: In this article, the accuracy of various normal approximations for the confidence limits for the Poisson parameter λ is examined; exact limits can be obtained from the Chi-square distribution, but for large numbers of total counts Chi-square values for large degrees of freedom are required and may not be readily available.
Abstract: This paper examines the accuracy of various normal approximations for the confidence limits for the Poisson parameter λ. While exact limits can be obtained from the Chi-square distribution, for large numbers of total counts Chi-square values for large degrees of freedom are required and may not be readily available. Approximations provided by Pratt (1968) and Molenaar (1973) provide very accurate confidence limits, while the two usual approximations, both with and without the continuity correction, are relatively inaccurate.
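
For context, the exact Chi-square based limits and the crude normal approximation can be compared directly; a small sketch using the standard Garwood-type formulas (the Pratt and Molenaar approximations themselves are not reproduced here):

from scipy.stats import chi2, norm

def exact_poisson_ci(x, conf=0.95):
    """Exact confidence limits for a Poisson mean given an observed count x,
    via quantiles of the Chi-square distribution."""
    alpha = 1 - conf
    lower = 0.5 * chi2.ppf(alpha / 2, 2 * x) if x > 0 else 0.0
    upper = 0.5 * chi2.ppf(1 - alpha / 2, 2 * (x + 1))
    return lower, upper

def crude_normal_ci(x, conf=0.95):
    """Simple normal approximation x +/- z * sqrt(x), no continuity correction."""
    z = norm.ppf(1 - (1 - conf) / 2)
    return x - z * x ** 0.5, x + z * x ** 0.5

print(exact_poisson_ci(10))    # roughly (4.80, 18.39)
print(crude_normal_ci(10))     # roughly (3.80, 16.20)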

Journal ArticleDOI
Udo Kamps
TL;DR: In this article, some results on the transmission of distributional properties, such as increasing failure rate, are shown for such records, which contain the results for order statistics and ordinary record values as particular cases.
Abstract: In reliability theory, order statistics and record values are used for statistical modelling. The r-th order statistic in a sample of size n represents the life-length of an (n−r+1)-out-of-n system, and record values are used in shock models. In recent years, reliability properties of order statistics and record values have been investigated. The two models are included in Pfeifer's concept of record values from non-identically distributed random variables. Here, some results on the transmission of distributional properties, such as increasing failure rate, are shown for such records, which contain the results for order statistics and ordinary record values as particular cases.

Journal ArticleDOI
TL;DR: The probability distribution of the sample mutual information is studied for p-variables under the assumption of multivariate normality and exact and asymptotic approximations are obtained.
Abstract: The probability distribution of the sample mutual information is studied for p-variables under the assumption of multivariate normality. Exact and asymptotic approximations are obtained for the distribution of the sample mutual information for different structures of the correlation matrix P. As an application, the concept of mutual information is used to develop a quality control chart to determine the correlation structure of a process. When a correlation structure, given by the mutual information, is "out-of-control" we use a modified version of the Bonferroni inequality.
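
Under joint multivariate normality the mutual information among the p variables has a closed form in terms of the determinant of the correlation matrix, -(1/2) log|P|, and the sample version plugs in the sample correlation matrix. A minimal sketch of that plug-in statistic (the paper's exact and asymptotic approximations and the control-chart construction are not reproduced, and its definition of the statistic may differ in detail):

import numpy as np

def sample_mutual_information(X):
    """Plug-in mutual information for jointly normal variables:
    -(1/2) * log(det(R)), with R the sample correlation matrix."""
    R = np.corrcoef(X, rowvar=False)
    return -0.5 * np.log(np.linalg.det(R))

rng = np.random.default_rng(10)
# Correlated trivariate normal sample (illustrative correlation structure P)
P = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.0, 0.3],
              [0.2, 0.3, 1.0]])
X = rng.multivariate_normal(np.zeros(3), P, size=2000)
print(sample_mutual_information(X))
print(-0.5 * np.log(np.linalg.det(P)))   # population value for comparison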

Journal ArticleDOI
TL;DR: In this paper, it is shown that, under certain moment restrictions and the weak assumption that the support of the underlying distribution has positive Lebesgue measure, the asymptotic distribution of the kurtosis variant, suitably normalized, is normal.
Abstract: Let X 1, …, X n be independent identically distributed random d-dimensional column vectors with arithmetic mean X̄ n and empirical covariance matrix S n. Apart from the celebrated kurtosis measure b2,d of Mardia, there has been recent interest in a variant which formally constitutes a closer analogue to the multivariate skewness measure b1,d than does b2,d. We show that, under certain moment restrictions and the weak assumption that the support of the underlying distribution has positive Lebesgue measure, the asymptotic distribution of this variant, suitably normalized, is normal. Moreover, the joint limiting distribution of b2,d and the variant is bivariate normal. Within the class of elliptically symmetric distributions the asymptotic correlation between b2,d and the variant is 1. The consistency class of a test for multivariate normality based on the variant is determined.