
Showing papers in "Lifetime Data Analysis in 2009"


Journal ArticleDOI
TL;DR: The pseudo-values suggested for competing risks models are analyzed and some conjectures regarding their asymptotics are proved; a second-order von Mises expansion of the Aalen-Johansen estimator yields an appropriate representation of the pseudo-values.
Abstract: For regression on state and transition probabilities in multi-state models Andersen et al. (Biometrika 90:15–27, 2003) propose a technique based on jackknife pseudo-values. In this article we analyze the pseudo-values suggested for competing risks models and prove some conjectures regarding their asymptotics (Klein and Andersen, Biometrics 61:223–229, 2005). The key is a second order von Mises expansion of the Aalen-Johansen estimator which yields an appropriate representation of the pseudo-values. The method is illustrated with data from a clinical study on total joint replacement. In the application we consider for comparison the estimates obtained with the Fine and Gray approach (J Am Stat Assoc 94:496–509, 1999) and also time-dependent solutions of pseudo-value regression equations.
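
As a rough illustration of the jackknife construction analyzed in the paper (not the authors' code), the sketch below computes pseudo-values of the Aalen-Johansen cumulative incidence at a fixed time point; the function and variable names are chosen here. The resulting pseudo-values would typically then be used as responses in a regression, as in the cited Klein and Andersen approach.

```python
import numpy as np

def cuminc(time, status, t0, cause=1):
    """Aalen-Johansen estimate of the cumulative incidence of `cause` at t0.
    status: 0 = censored, 1, 2, ... = cause of failure."""
    time, status = np.asarray(time, float), np.asarray(status)
    ci, surv = 0.0, 1.0                      # cumulative incidence, overall KM S(t-)
    for t in np.unique(time[(time <= t0) & (status > 0)]):
        at_risk = np.sum(time >= t)
        ci += surv * np.sum((time == t) & (status == cause)) / at_risk
        surv *= 1.0 - np.sum((time == t) & (status > 0)) / at_risk
    return ci

def pseudo_values(time, status, t0, cause=1):
    """Jackknife pseudo-values: n * theta_hat - (n - 1) * theta_hat_(-i)."""
    time, status = np.asarray(time, float), np.asarray(status)
    n, full = len(time), cuminc(time, status, t0, cause)
    keep = np.ones(n, dtype=bool)
    pv = np.empty(n)
    for i in range(n):
        keep[i] = False
        pv[i] = n * full - (n - 1) * cuminc(time[keep], status[keep], t0, cause)
        keep[i] = True
    return pv
```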

109 citations


Journal ArticleDOI
TL;DR: This paper considers the effect of external, possibly multiple and sequential, interventions in a system of multivariate time series, the Granger causal structure of which is taken to be known.
Abstract: We combine two approaches to causal reasoning. Granger causality, on the one hand, is popular in fields like econometrics, where randomised experiments are not very common. Instead information about the dynamic development of a system is explicitly modelled and used to define potentially causal relations. On the other hand, the notion of causality as effect of interventions is predominant in fields like medical statistics or computer science. In this paper, we consider the effect of external, possibly multiple and sequential, interventions in a system of multivariate time series, the Granger causal structure of which is taken to be known. We address the following questions: under what assumptions about the system and the interventions does Granger causality inform us about the effectiveness of interventions, and when does the possibly smaller system of observable time series allow us to estimate this effect? For the latter we derive criteria that can be checked graphically and are in the same spirit as Pearl's back-door and front-door criteria (Pearl 1995).

86 citations


Journal ArticleDOI
TL;DR: The generalized log-gamma regression model is modified to allow for the possibility that long-term survivors may be present in the data, yielding a cure-rate model that encompasses the log-exponential, log-Weibull and log-normal regression models with a cure rate typically used to model such data.
Abstract: In this paper, the generalized log-gamma regression model is modified to allow the possibility that long-term survivors may be present in the data. This modification leads to a generalized log-gamma regression model with a cure rate, encompassing, as special cases, the log-exponential, log-Weibull and log-normal regression models with a cure rate typically used to model such data. The models attempt to simultaneously estimate the effects of explanatory variables on the timing acceleration/deceleration of a given event and the surviving fraction, that is, the proportion of the population for which the event never occurs. The normal curvatures of local influence are derived under some usual perturbation schemes and two martingale-type residuals are proposed to assess departures from the generalized log-gamma error assumption as well as to detect outlying observations. Finally, a data set from the medical area is analyzed.

75 citations


Journal ArticleDOI
TL;DR: In this article, the authors discuss regression analysis of panel count data that often arise in longitudinal studies concerning occurrence rates of certain recurrent events, propose some shared frailty models, and develop estimating equations for estimation of the regression parameters.
Abstract: This paper discusses regression analysis of panel count data that often arise in longitudinal studies concerning occurrence rates of certain recurrent events. Panel count data mean that each study subject is observed only at discrete time points rather than under continuous observation. Furthermore, both observation and follow-up times can vary from subject to subject and may be correlated with the recurrent events. For inference, we propose some shared frailty models and develop estimating equations for estimation of the regression parameters. The proposed estimates are consistent and asymptotically normally distributed. The finite sample properties of the proposed estimates are investigated through simulation and an illustrative example from a cancer study is provided.

63 citations


Journal ArticleDOI
TL;DR: This paper studies the Wiener process with negative drift as a possible cure rate model; the resulting defective inverse Gaussian model is found to provide a poor fit in some cases, and several modifications that improve on it are suggested.
Abstract: The development of models and methods for cure rate estimation has recently burgeoned into an important subfield of survival analysis. Much of the literature focuses on the standard mixture model. Recently, process-based models have been suggested. We focus on several models based on first passage times for Wiener processes. Whitmore and others have studied these models in a variety of contexts. Lee and Whitmore (Stat Sci 21(4):501–513, 2006) give a comprehensive review of a variety of first hitting time models and briefly discuss their potential as cure rate models. In this paper, we study the Wiener process with negative drift as a possible cure rate model but the resulting defective inverse Gaussian model is found to provide a poor fit in some cases. Several possible modifications are then suggested, which improve the defective inverse Gaussian. These modifications include: the inverse Gaussian cure rate mixture model; a mixture of two inverse Gaussian models; incorporation of heterogeneity in the drift parameter; and the addition of a second absorbing barrier to the Wiener process, representing an immunity threshold. This class of process-based models is a useful alternative to the standard model and provides an improved fit compared to the standard model when applied to many of the datasets that we have studied. Implementation of this class of models is facilitated using expectation-maximization (EM) algorithms and variants thereof, including the gradient EM algorithm. Parameter estimates for each of these EM algorithms are given and the proposed models are applied to both real and simulated data, where they perform well.
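
For concreteness, the defective inverse Gaussian arises as the first-passage-time distribution of a Wiener process whose drift points away from the failure barrier; the sketch below uses a generic parameterization chosen here (not taken from the paper) to evaluate that defective CDF and the implied cure fraction.

```python
import numpy as np
from scipy.stats import norm

def defective_ig_cdf(t, x0, mu, sigma=1.0):
    """First-passage CDF of W(t) = x0 + mu*t + sigma*B(t) to the barrier at 0,
    for t > 0, x0 > 0 and drift mu > 0 pointing away from the barrier.
    The distribution is defective: its total mass is exp(-2*mu*x0/sigma**2)."""
    t = np.asarray(t, dtype=float)
    s = sigma * np.sqrt(t)
    return (norm.cdf(-(x0 + mu * t) / s)
            + np.exp(-2.0 * mu * x0 / sigma**2) * norm.cdf((mu * t - x0) / s))

def cure_fraction(x0, mu, sigma=1.0):
    """Probability that the barrier is never reached, i.e. the cured proportion."""
    return 1.0 - np.exp(-2.0 * mu * x0 / sigma**2)
```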

61 citations


Journal ArticleDOI
TL;DR: A rank-based semiparametric estimation method is developed to obtain the maximum likelihood estimates of the parameters in the model, and the new model is shown to provide a useful addition to the cure model literature.
Abstract: We propose a new cure model for survival data with a surviving or cure fraction. The new model is a mixture cure model where the covariate effects on the proportion of cure and the distribution of the failure time of uncured patients are separately modeled. Unlike the existing mixture cure models, the new model allows covariate effects on the failure time distribution of uncured patients to be negligible at time zero and to increase as time goes by. Such a model is particularly useful in some cancer treatments when the treatment effect increases gradually from zero, and the existing models usually cannot handle this situation properly. We develop a rank-based semiparametric estimation method to obtain the maximum likelihood estimates of the parameters in the model. We compare it with existing models and methods via a simulation study, and apply the model to a breast cancer data set. The numerical studies show that the new model provides a useful addition to the cure model literature.
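
In generic notation (chosen here, not the paper's), a mixture cure model with separate covariate effects on the cure probability and on the latency distribution takes the form

$$
S(t \mid \mathbf{z}, \mathbf{x}) \;=\; \pi(\mathbf{z}) + \{1 - \pi(\mathbf{z})\}\, S_u(t \mid \mathbf{x}),
\qquad
\pi(\mathbf{z}) \;=\; \frac{\exp(\gamma^{\top}\mathbf{z})}{1 + \exp(\gamma^{\top}\mathbf{z})},
$$

where π(z) is the cure probability (a logistic form is the common choice) and S_u is the survival function of the uncured; the paper's novelty lies in how the covariates x are allowed to act on S_u, with an effect that is negligible at time zero and grows over time.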

50 citations


Journal ArticleDOI
Robert Gray
TL;DR: Weighted analysis methods are considered for cohort sampling designs that allow subsampling of both cases and non-cases, but with cases generally sampled more intensively, and methods for evaluating the representativeness of the sample and for estimating event-free probabilities are given.
Abstract: Weighted analysis methods are considered for cohort sampling designs that allow subsampling of both cases and non-cases, but with cases generally sampled more intensively. The methods fit into the general framework for the analysis of survey sampling designs considered by Lin (Biometrika 87:37–47, 2000). Details are given for applying the general methodology in this setting. In addition to considering proportional hazards regression, methods for evaluating the representativeness of the sample and for estimating event-free probabilities are given. In a small simulation study, the one-sample cumulative hazard estimator and its variance estimator were found to be nearly unbiased, but the true coverage probabilities of confidence intervals computed from these sometimes deviated significantly from the nominal levels. Methods for cross-validation and for bootstrap resampling, which take into account the dependencies in the sample, are also considered.

43 citations


Journal ArticleDOI
TL;DR: Results from a simulation study suggest that the Gini index is useful in some situations, and that it should be considered together with existing tests (in particular, the Log-rank, Wilcoxon, and Gray–Tsiatis tests).
Abstract: We apply the well known Gini index to the measurement of concentration in survival times within groups of patients, and as a way to compare the distribution of survival times across groups of patients in clinical studies. In particular, we propose an estimator of a restricted version of the index from right censored data. We derive the asymptotic distribution of the resulting Gini statistic, and construct an estimator for its asymptotic variance. We use these results to propose a novel test for differences in the heterogeneity of survival distributions, which may suggest the presence of a differential treatment effect for some groups of patients. We focus in particular on traditional and generalized cure rate models, i.e., mixture models with a distribution of the lifetimes of the cured patients that is either degenerate at infinity or has a density. Results from a simulation study suggest that the Gini index is useful in some situations, and that it should be considered together with existing tests (in particular, the Log-rank, Wilcoxon, and Gray–Tsiatis tests). Use of the test is illustrated on the classic data arising from the Eastern Cooperative Oncology Group melanoma clinical trial E1690.
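
As a simple illustration of the quantity being estimated (not the authors' estimator or its variance), a plug-in restricted Gini index can be computed from a Kaplan-Meier curve using the identity G_τ = 1 − ∫_0^τ Ŝ(t)² dt / ∫_0^τ Ŝ(t) dt; the names below are chosen here.

```python
import numpy as np

def km_curve(time, status):
    """Kaplan-Meier estimate; returns event times and S(t) just after each."""
    time, status = np.asarray(time, float), np.asarray(status)
    ts = np.unique(time[status == 1])
    surv, s = [], 1.0
    for t in ts:
        s *= 1.0 - np.sum((time == t) & (status == 1)) / np.sum(time >= t)
        surv.append(s)
    return ts, np.array(surv)

def restricted_gini(time, status, tau):
    """Plug-in restricted Gini index: 1 - int_0^tau S^2 dt / int_0^tau S dt."""
    ts, surv = km_curve(time, status)
    grid = np.concatenate(([0.0], ts[ts < tau], [tau]))
    widths = np.diff(grid)
    s_vals = np.concatenate(([1.0], surv[ts < tau]))   # S on [grid[j], grid[j+1])
    return 1.0 - np.sum(widths * s_vals**2) / np.sum(widths * s_vals)
```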

35 citations


Journal ArticleDOI
TL;DR: This paper considers estimation and prediction in the Aalen additive hazards model in the case where the covariate vector is high-dimensional such as gene expression measurements, and studies the partial least squares regression method.
Abstract: This paper considers estimation and prediction in the Aalen additive hazards model in the case where the covariate vector is high-dimensional such as gene expression measurements. Some form of dimension reduction of the covariate space is needed to obtain useful statistical analyses. We study the partial least squares regression method. It turns out that it is naturally adapted to this setting via the so-called Krylov sequence. The resulting PLS estimator is shown to be consistent provided that the number of terms included is taken to be equal to the number of relevant components in the regression model. A standard PLS algorithm can also be constructed, but it turns out that the resulting predictor can only be related to the original covariates via time-dependent coefficients. The methods are applied to a breast cancer data set with gene expression recordings and to the well known primary biliary cirrhosis clinical data.

27 citations


Journal ArticleDOI
TL;DR: A randomized two-stage adaptive Bayesian design is proposed and studied for allocation and comparison in a phase III clinical trial with survival time as treatment response and the applicability of the proposed methodology is illustrated.
Abstract: A randomized two-stage adaptive Bayesian design is proposed and studied for allocation and comparison in a phase III clinical trial with survival time as treatment response. Several exact and limiting properties of the design and the follow-up inference are studied, both numerically and theoretically, and are compared with a single-stage randomized procedure. The applicability of the proposed methodology is illustrated by using some real data.

21 citations


Journal ArticleDOI
TL;DR: An improved approximation to the asymptotic null distribution of the goodness-of-fit tests for panel observed multi-state Markov models and hidden Markov models performs well and is a substantial improvement over the simple χ2 approximation.
Abstract: We develop an improved approximation to the asymptotic null distribution of the goodness-of-fit tests for panel observed multi-state Markov models (Aguirre-Hernandez and Farewell, Stat Med 21:1899-1911, 2002) and hidden Markov models (Titman and Sharples, Stat Med 27:2177-2195, 2008). By considering the joint distribution of the grouped observed transition counts and the maximum likelihood estimate of the parameter vector it is shown that the distribution can be expressed as a weighted sum of independent χ2 random variables with one degree of freedom, where the weights are dependent on the true parameters. The performance of this approximation for finite sample sizes and where the weights are calculated using the maximum likelihood estimates of the parameters is considered through simulation. In the scenarios considered, the approximation performs well and is a substantial improvement over the simple χ2 approximation.
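
Once the weights (which depend on the true parameters and in practice are evaluated at the maximum likelihood estimates) are in hand, tail probabilities of a weighted sum of independent one-degree-of-freedom χ2 variables can be obtained by simulation; the short sketch below is one generic way to do this, not the authors' implementation.

```python
import numpy as np

def weighted_chi2_pvalue(stat, weights, n_sims=200_000, seed=1):
    """Monte Carlo p-value for a statistic whose null distribution is a weighted
    sum of independent chi-square(1) variables with the given weights."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n_sims, len(weights)))
    null_draws = (z**2) @ np.asarray(weights, dtype=float)
    return float(np.mean(null_draws >= stat))
```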

Journal ArticleDOI
TL;DR: A likelihood approach based on joint models for the multi-type recurrent events where parameter estimation is obtained from a Monte-Carlo EM algorithm is described.
Abstract: In many clinical studies, subjects are at risk of experiencing more than one type of potentially recurrent event. In some situations, however, the occurrence of an event is observed, but the specific type is not determined. We consider the analysis of this type of incomplete data when the objectives are to summarize features of conditional intensity functions and associated treatment effects, and to study the association between different types of event. Here we describe a likelihood approach based on joint models for the multi-type recurrent events where parameter estimation is obtained from a Monte-Carlo EM algorithm. Simulation studies show that the proposed method gives unbiased estimators for regression coefficients and variance–covariance parameters, and the coverage probabilities of confidence intervals for regression coefficients are close to the nominal level. When the distribution of the frailty variable is misspecified, the method still provides estimators of the regression coefficients with good properties. The proposed method is applied to a motivating data set from an asthma study in which exacerbations were to be sub-typed by cellular analysis of sputum samples as eosinophilic or non-eosinophilic.

Journal ArticleDOI
TL;DR: The proposed approach selects variables and estimates regression coefficients simultaneously, an algorithm is presented for this process, and it is shown that the approach performs as well as the oracle procedure in that it yields the estimates as if the correct submodel were known.
Abstract: Variable selection is an important issue in all regression analysis and in this paper, we discuss this in the context of regression analysis of recurrent event data. Recurrent event data often occur in long-term studies in which individuals may experience the events of interest more than once and their analysis has recently attracted a great deal of attention (Andersen et al., Statistical models based on counting processes, 1993; Cook and Lawless, Biometrics 52:1311-1323, 1996, The analysis of recurrent event data, 2007; Cook et al., Biometrics 52:557-571, 1996; Lawless and Nadeau, Technometrics 37:158-168, 1995; Lin et al., J R Stat Soc B 69:711-730, 2000). However, it seems that there are no established approaches to the variable selection with respect to recurrent event data. For the problem, we adopt the idea behind the nonconcave penalized likelihood approach proposed in Fan and Li (J Am Stat Assoc 96:1348-1360, 2001) and develop a nonconcave penalized estimating function approach. The proposed approach selects variables and estimates regression coefficients simultaneously and an algorithm is presented for this process. We show that the proposed approach performs as well as the oracle procedure in that it yields the estimates as if the correct submodel was known. Simulation studies are conducted for assessing the performance of the proposed approach and suggest that it works well for practical situations. The proposed methodology is illustrated by using the data from a chronic granulomatous disease study.
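
For reference, the nonconcave penalty adopted from Fan and Li (2001) is the SCAD penalty; the sketch below writes out its standard form and first derivative (with their suggested default a = 3.7), which is the ingredient plugged into the penalized estimating function.

```python
import numpy as np

def scad_penalty(theta, lam, a=3.7):
    """SCAD penalty p_lambda(theta) of Fan and Li (2001), for theta >= 0."""
    theta = np.asarray(theta, dtype=float)
    linear = lam * theta
    quad = -(theta**2 - 2.0 * a * lam * theta + lam**2) / (2.0 * (a - 1.0))
    const = (a + 1.0) * lam**2 / 2.0
    return np.where(theta <= lam, linear, np.where(theta <= a * lam, quad, const))

def scad_derivative(theta, lam, a=3.7):
    """p'_lambda(theta) = lam * {I(theta<=lam) + (a*lam-theta)_+ / ((a-1)*lam) * I(theta>lam)}."""
    theta = np.asarray(theta, dtype=float)
    return lam * ((theta <= lam)
                  + np.maximum(a * lam - theta, 0.0) / ((a - 1.0) * lam) * (theta > lam))
```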

Journal ArticleDOI
TL;DR: This work describes how simple computations provide sensitivity analyses for unmeasured confounding in a Cox proportional hazards MSM with point exposure, by translating the general framework for sensitivity analysis for MSMs by Robins and colleagues to survival time data.
Abstract: Sensitivity analysis for unmeasured confounding should be reported more often, especially in observational studies. In the standard Cox proportional hazards model, this requires substantial assumptions and can be computationally difficult. The marginal structural Cox proportional hazards model (Cox proportional hazards MSM) with inverse probability weighting has several advantages compared to the standard Cox model, including situations with only one assessment of exposure (point exposure) and time-independent confounders. We describe how simple computations provide sensitivity analyses for unmeasured confounding in a Cox proportional hazards MSM with point exposure. This is achieved by translating the general framework for sensitivity analysis for MSMs by Robins and colleagues to survival time data. Instead of bias-corrected observations, we correct the hazard rate to adjust for a specified amount of unmeasured confounding. As an additional bonus, the Cox proportional hazards MSM is robust against bias from differential loss to follow-up. As an illustration, the Cox proportional hazards MSM was applied in a reanalysis of the association between smoking and depression in a population-based cohort of Norwegian adults. The association was moderately sensitive to unmeasured confounding.
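
To fix ideas about the Cox proportional hazards MSM itself (the sensitivity computations in the paper come on top of this), a minimal point-exposure fit with stabilized inverse probability weights might look as follows; the column names and the use of scikit-learn for the propensity model and lifelines for the weighted Cox fit are choices made here, not the authors'.

```python
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

def fit_ipw_cox(df, exposure, duration, event, confounders):
    """Point-exposure marginal structural Cox model via stabilized IPT weights."""
    # Propensity score: P(exposure = 1 | measured confounders)
    ps_model = LogisticRegression(max_iter=1000).fit(df[confounders], df[exposure])
    ps = ps_model.predict_proba(df[confounders])[:, 1]
    p_exposed = df[exposure].mean()
    # Stabilized weights: marginal exposure probability over the conditional one
    sw = df[exposure] * p_exposed / ps + (1 - df[exposure]) * (1 - p_exposed) / (1 - ps)
    data = df[[duration, event, exposure]].assign(ipw=sw)
    cph = CoxPHFitter()
    # robust=True requests a sandwich variance that accounts for the weighting
    cph.fit(data, duration_col=duration, event_col=event, weights_col="ipw", robust=True)
    return cph
```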

Journal ArticleDOI
TL;DR: Methods to quantify the degree of bias corrected by the weighting procedure in the partial likelihood and Breslow-Aalen estimators are proposed and applied to data from a national organ failure registry to evaluate the bias in a post-kidney transplant survival model.
Abstract: Often in observational studies of time to an event, the study population is a biased (i.e., unrepresentative) sample of the target population. In the presence of biased samples, it is common to weight subjects by the inverse of their respective selection probabilities. Pan and Schaubel (Can J Stat 36:111–127, 2008) recently proposed inference procedures for an inverse selection probability weighted (ISPW) Cox model, applicable when selection probabilities are not treated as fixed but estimated empirically. The proposed weighting procedure requires auxiliary data to estimate the weights and is computationally more intense than unweighted estimation. The ignorability of sample selection process in terms of parameter estimators and predictions is often of interest, from several perspectives: e.g., to determine if weighting makes a significant difference to the analysis at hand, which would in turn address whether the collection of auxiliary data is required in future studies; to evaluate previous studies which did not correct for selection bias. In this article, we propose methods to quantify the degree of bias corrected by the weighting procedure in the partial likelihood and Breslow-Aalen estimators. Asymptotic properties of the proposed test statistics are derived. The finite-sample significance level and power are evaluated through simulation. The proposed methods are then applied to data from a national organ failure registry to evaluate the bias in a post-kidney transplant survival model.

Journal ArticleDOI
TL;DR: A nonparametric procedure using a kernel smoothing estimate of the hazard ratio is proposed; both it and a procedure based on Cox proportional hazard modeling with Box-Cox transformation of the time to event are evaluated and applied to two clinical trial datasets.
Abstract: The phenomenon of crossing hazard rates is common in clinical trials with time to event endpoints. Many methods have been proposed for testing equality of hazard functions against a crossing hazards alternative. However, there have been relatively few approaches available in the literature for point or interval estimation of the crossing time point. The problem of constructing confidence intervals for the first crossing time point of two hazard functions is considered in this paper. After reviewing a recent procedure based on Cox proportional hazard modeling with Box-Cox transformation of the time to event, a nonparametric procedure using the kernel smoothing estimate of the hazard ratio is proposed. The proposed procedure and the one based on Cox proportional hazard modeling with Box-Cox transformation of the time to event are both evaluated by Monte Carlo simulations and applied to two clinical trial datasets.

Journal ArticleDOI
Pang Du
TL;DR: A penalized likelihood model is proposed to estimate the hazard as a function of both gap time and covariate, and a method for smoothing parameter selection is developed from subject-wise cross-validation.
Abstract: Recurrent event data arise in many biomedical and engineering studies when failure events can occur repeatedly over time for each study subject. In this article, we are interested in nonparametric estimation of the hazard function for gap time. A penalized likelihood model is proposed to estimate the hazard as a function of both gap time and covariate. A method for smoothing parameter selection is developed from subject-wise cross-validation. Confidence intervals for the hazard function are derived using the Bayes model of the penalized likelihood. An eigenvalue analysis establishes the asymptotic convergence rates of the relevant estimates. Empirical studies are performed to evaluate various aspects of the method. The proposed technique is demonstrated through an application to the well-known bladder tumor cancer data.

Journal ArticleDOI
TL;DR: In this article, statistical inference is described for the failure time distribution of a product from “field return data”, which record the time between the product being shipped and its return for repair or replacement.
Abstract: In this article statistical inference for the failure time distribution of a product from “field return data”, that records the time between the product being shipped and returned for repair or replacement, is described. The problem that is addressed is that the data are not failure times because they also include the time that it took to ship and install the product and then to return it to the manufacturer for repair or replacement. The inference attempts to infer the distribution of time to failure (that is, from installation to failure) from the data when in addition there are separate data on the times from shipping to installation, and from failure to return. The method is illustrated with data from units installed in a telecommunications network.

Journal ArticleDOI
TL;DR: This work introduces a new, important aspect of the generalisability of a prognostic index: the heterogeneity of the prognostic index risk group hazard ratios over different centers, and investigates different ways to summarize the information available from the marginal posterior distribution of the variances of the random effects.
Abstract: A major issue when proposing a new prognostic index is its generalisability to daily clinical practice. Validation is therefore required. Most validation techniques assess whether "on average" the results obtained by the prognostic index in classifying patients in a new sample of patients are similar to the results obtained in the construction set. We introduce a new, important aspect of the generalisability of a prognostic index: the heterogeneity of the prognostic index risk group hazard ratios over different centers. If substantial variability between centers exists, the prognostic index may have no discriminatory capability in some of the centers. To model such heterogeneity, we use a frailty model including a random center effect and a random prognostic index by center interaction. Statistical inference is based on a Bayesian approach using a Laplacian approximation for the marginal posterior distribution of the variances of the random effects. We investigate different ways to summarize the information available from this marginal posterior distribution. Our approach is applied to a real bladder cancer database for which we demonstrate how to investigate and interpret heterogeneity in prognostic index effect over centers.

Journal ArticleDOI
TL;DR: This paper compares the approximate MLE against alternative estimators using limited simulation and demonstrates the utility of Laplace’s approximation approach by analyzing U.S. patient waiting time to deceased kidney transplant data.
Abstract: Relative risk frailty models are used extensively in analyzing clustered and/or recurrent time-to-event data. In this paper, Laplace’s approximation for integrals is applied to marginal distributions of data arising from parametric relative risk frailty models. Under regularity conditions, the approximate maximum likelihood estimators (MLE) are consistent with a rate of convergence that depends on both the number of subjects and number of members per subject. We compare the approximate MLE against alternative estimators using limited simulation and demonstrate the utility of Laplace’s approximation approach by analyzing U.S. patient waiting time to deceased kidney transplant data.
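
The generic form of Laplace's approximation underlying this approach (notation chosen here) replaces the integral over the frailty by an expansion around its mode: for a smooth function h with minimizer b̂ in d dimensions,

$$
\int_{\mathbb{R}^d} e^{-m\,h(b)}\,db \;\approx\; e^{-m\,h(\hat b)}\left(\frac{2\pi}{m}\right)^{d/2}\bigl|\,h''(\hat b)\,\bigr|^{-1/2},
$$

applied cluster by cluster with m playing the role of the number of members per subject, which is why the rate of convergence of the approximate MLE depends on both the number of subjects and the cluster size.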

Journal ArticleDOI
TL;DR: The minimum averaged mean squared error nonparametric adaptive weights use data from m possibly different populations to infer about one population of interest; the proposed estimate is smoother than the usual Kaplan-Meier estimate and converges uniformly in probability to the target distribution.
Abstract: The minimum averaged mean squared error nonparametric adaptive weights use data from m possibly different populations to infer about one population of interest. The definition of these weights is based on the properties of the empirical distribution function. We use the Kaplan-Meier estimate to let the weights accommodate right-censored data and use them to define the weighted Kaplan-Meier estimate. The proposed estimate is smoother than the usual Kaplan-Meier estimate and converges uniformly in probability to the target distribution. Simulations show that the finite-sample performance of the weighted Kaplan-Meier estimate exceeds that of the usual Kaplan-Meier estimate. A case study is also presented.

Journal ArticleDOI
TL;DR: Estimation methods are developed using a constant cure-death hazard ratio, in which the cure-death hazard ratio is a parameter of interest, and a profile likelihood-based technique is proposed for estimating the case fatality rate.
Abstract: The case fatality rate is an important indicator of the severity of a disease, and unbiased and accurate estimates of it during an outbreak are important in the study of epidemic diseases, including severe acute respiratory syndrome (SARS). In this paper, estimation methods are developed using a constant cure-death hazard ratio. A semiparametric model is presented, in which the cure-death hazard ratio is a parameter of interest, and a profile likelihood-based technique is proposed for estimating the case fatality rate. An extensive simulation was carried out to investigate the performance of this technique for small and medium sample sizes, using both summary and individual data. The results show that the performance depends on the model validity but is not heavily dependent on the sample size. The method was applied to summary SARS data obtained from Hong Kong and Singapore.
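
A short calculation (not the paper's full semiparametric development) shows why a constant cure-death hazard ratio pins down the case fatality rate: with cause-specific hazards h_d(t) for death and h_c(t) for cure, and h_d(t) = θ h_c(t),

$$
\Pr(\text{death}) \;=\; \int_0^\infty h_d(t)\,\exp\!\Bigl(-\!\int_0^t \{h_d(u)+h_c(u)\}\,du\Bigr)\,dt \;=\; \frac{\theta}{1+\theta},
$$

provided every case eventually resolves by cure or death, so estimating θ amounts to estimating the case fatality rate.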

Journal ArticleDOI
TL;DR: In this article, the authors introduce directed goodness-of-fit tests for Cox-type regression models in survival analysis, which are based on sums of weighted martingale residuals and their asymptotic distributions.
Abstract: We introduce directed goodness-of-fit tests for Cox-type regression models in survival analysis. “Directed” means that one may choose against which alternatives the tests are particularly powerful. The tests are based on sums of weighted martingale residuals and their asymptotic distributions. We derive optimal tests against certain competing models which include Cox-type regression models with different covariates and/or a different link function. We report results from several simulation studies and apply our test to a real dataset.

Journal ArticleDOI
TL;DR: These methods derive estimators of the mean of a function of a quality-of-life adjusted failure time, in the presence of competing right censoring mechanisms, and generalize from a single to many censoring processes and from ignorable to non-ignorable censoring processes.
Abstract: We derive estimators of the mean of a function of a quality-of-life adjusted failure time, in the presence of competing right censoring mechanisms. Our approach allows for the possibility that some or all of the competing censoring mechanisms are associated with the endpoint, even after adjustment for recorded prognostic factors, with the degree of residual association possibly different for distinct censoring processes. Our methods generalize from a single to many censoring processes and from ignorable to non-ignorable censoring processes.

Journal ArticleDOI
TL;DR: A simulation experiment and unemployment example justify the value of the partially linear approach over methods based on the Cox proportional hazards model and on methods not permitting nonlinearity.
Abstract: Censored regression quantile (CRQ) methods provide a powerful and flexible approach to the analysis of censored survival data when standard linear models are felt to be appropriate. In many cases however, greater flexibility is desired to go beyond the usual multiple regression paradigm. One area of common interest is that of partially linear models: one (or more) of the explanatory covariates are assumed to act on the response through a non-linear function. Here the CRQ approach of Portnoy (J Am Stat Assoc 98:1001–1012, 2003) is extended to this partially linear setting. Basic consistency results are presented. A simulation experiment and unemployment example justify the value of the partially linear approach over methods based on the Cox proportional hazards model and on methods not permitting nonlinearity.

Journal ArticleDOI
TL;DR: Several models for studies of the tensile strength of materials have been proposed in the literature, where the size or length of the specimen is taken to be an important factor in its failure behaviour; the paper compares two such cumulative damage models and recommends the one that appears most appropriate.
Abstract: Several models for studies related to the tensile strength of materials have been proposed in the literature, where the size or length component has been taken to be an important factor in studying the specimens' failure behaviour. An important model, developed on the basis of the cumulative damage approach, is the three-parameter extension of the Birnbaum-Saunders fatigue model that incorporates the size of the specimen as an additional variable. This model is a strong competitor of the commonly used Weibull model and performs better than the traditional models, which do not incorporate the size effect. The paper considers two such cumulative damage models, checks their compatibility with a real dataset, compares them with some of the recent toolkits, and finally recommends the model that appears most appropriate. Throughout, the study is Bayesian, based on Markov chain Monte Carlo simulation.

Journal ArticleDOI
TL;DR: The estimation of the expected value of the quality-adjusted survival, based on multistate models, is discussed, allowing the sojourn times in the health states to be non-identically distributed for a given vector of covariates.
Abstract: We discuss the estimation of the expected value of the quality-adjusted survival, based on multistate models. We generalize an earlier work by allowing the sojourn times in the health states to be non-identically distributed, for a given vector of covariates. Approaches based on semiparametric and parametric (exponential and Weibull distributions) methodologies are considered. A simulation study is conducted to evaluate the performance of the proposed estimator, and the jackknife resampling method is used to estimate the variance of this estimator. An application to a real data set is also included.
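
In schematic notation (chosen here), the quality-adjusted survival targeted by such multistate models is a utility-weighted sum of sojourn times,

$$
\mathrm{QAS} \;=\; \sum_{h=1}^{K} q_h\, T_h,
\qquad
E(\mathrm{QAS} \mid \mathbf{x}) \;=\; \sum_{h=1}^{K} q_h\, E(T_h \mid \mathbf{x}),
$$

where T_h is the sojourn time in health state h and q_h ∈ [0, 1] is its quality weight; the generalization in the paper allows the T_h to have different distributions given the covariates x.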

Journal ArticleDOI
TL;DR: This paper proposes simple independence score tests for the validity of this assumption when the individual risks are modeled using semiparametric proportional hazards regressions, and assumes that covariates are available, making the model identifiable.
Abstract: A popular model for competing risks postulates the existence of a latent unobserved failure time for each risk. Assuming that these underlying failure times are independent is attractive since it allows standard statistical tools for right-censored lifetime data to be used in the analysis. This paper proposes simple independence score tests for the validity of this assumption when the individual risks are modeled using semiparametric proportional hazards regressions. It assumes that covariates are available, making the model identifiable. The score tests are derived for alternatives that specify that copulas are responsible for a possible dependency between the competing risks. The test statistics are constructed by adding to the partial likelihoods for the individual risks an explanatory variable for the dependency between the risks. A variance estimator is derived by writing the score function and the Fisher information matrix for the marginal models as stochastic integrals. Pitman efficiencies are used to compare test statistics. A simulation study and a numerical example illustrate the methodology proposed in this paper.

Journal ArticleDOI
TL;DR: Nonparametric estimators of the mean residual life function are proposed for the case where both upper and lower bounds are given, and a simulation study shows that the proposed estimators have uniformly smaller mean squared error than the unrestricted empirical mrl functions.
Abstract: Situations frequently arise in practice in which mean residual life (mrl) functions must be ordered. For example, in a clinical trial of three experiments, let e1, e2 and e3 be the mrl functions, respectively, for the disease groups under the standard and experimental treatments, and for the disease-free group. The well-documented mrl functions e1 and e3 can be used to generate a better estimate for e2 under the mrl restriction e1 ≤ e2 ≤ e3. In this paper we propose nonparametric estimators of the mean residual life function where both upper and lower bounds are given. Small and large sample properties of the estimators are explored. A simulation study shows that the proposed estimators have uniformly smaller mean squared error compared to the unrestricted empirical mrl functions. The proposed estimators are illustrated using a real data set from a cancer clinical trial study.
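
For orientation (notation chosen here, not the paper's actual construction), the mean residual life function and the order restriction at issue are

$$
e(t) \;=\; E(X - t \mid X > t) \;=\; \frac{\int_t^{\infty} S(u)\,du}{S(t)},
\qquad
e_1(t) \le e_2(t) \le e_3(t) \ \text{for all } t,
$$

and the crudest way to enforce the restriction is to clip an unrestricted estimate between the bounds, ẽ₂(t) = min{max{ê₂(t), ê₁(t)}, ê₃(t)}; the estimators proposed in the paper are constructed more carefully than this simple clipping, which is shown only to fix ideas.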