
Showing papers in "The International Journal of Biostatistics in 2016"


Journal ArticleDOI
TL;DR: In this article, the authors propose a new method of sample size estimation for Bland-Altman agreement assessment, based on comparing the width of the confidence interval for the limits of agreement (LOAs) with a predefined clinical agreement limit.
Abstract: The Bland-Altman method has been widely used for assessing agreement between two methods of measurement. However, sample size estimation for this purpose has remained an unsolved problem. We propose a new method of sample size estimation for Bland-Altman agreement assessment. According to the Bland-Altman method, the conclusion on agreement is based on the width of the confidence interval for the LOAs (limits of agreement) in comparison to a predefined clinical agreement limit. Under the theory of statistical inference, formulae for sample size estimation are derived; they depend on the pre-specified levels of α and β, the mean and standard deviation of the differences between the two measurements, and the predefined limits. With this new method, sample sizes are calculated under parameter settings that occur frequently in method comparison studies, and Monte-Carlo simulation is used to obtain the corresponding powers. The simulation results show that the achieved powers coincide with the pre-specified levels of power, validating the correctness of the method. The proposed sample size estimation can be applied in the Bland-Altman framework to assess agreement between two methods of measurement.

131 citations
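
A minimal Monte-Carlo sketch of the agreement criterion summarized above: agreement is concluded when the confidence intervals of both limits of agreement fall inside predefined clinical limits, and power is estimated by simulation for a few candidate sample sizes. The paper's closed-form sample size formulae are not reproduced here, and the planning values (mean difference, SD, clinical limit) are hypothetical.

```python
import numpy as np
from scipy import stats

def agreement_power(n, mu_d, sd_d, delta, alpha=0.05, n_sim=2000, seed=None):
    """Simulated power to conclude agreement: the 95% LOA confidence limits
    must both lie within the predefined clinical limits (-delta, delta)."""
    rng = np.random.default_rng(seed)
    z = stats.norm.ppf(1 - 0.05 / 2)          # 1.96 for the limits of agreement
    t = stats.t.ppf(1 - alpha / 2, df=n - 1)  # CI for the LOAs themselves
    hits = 0
    for _ in range(n_sim):
        d = rng.normal(mu_d, sd_d, size=n)
        m, s = d.mean(), d.std(ddof=1)
        # approximate SE of a limit of agreement (Bland & Altman style)
        se_loa = s * np.sqrt(1.0 / n + z**2 / (2 * (n - 1)))
        upper_ci = (m + z * s) + t * se_loa
        lower_ci = (m - z * s) - t * se_loa
        hits += (upper_ci < delta) and (lower_ci > -delta)
    return hits / n_sim

# Hypothetical planning values: mean difference 0, SD 1, clinical limit 2.5
for n in (50, 80, 100, 130):
    print(n, agreement_power(n, mu_d=0.0, sd_d=1.0, delta=2.5, seed=1))
```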


Journal ArticleDOI
TL;DR: In this paper, model-based recursive partitioning is proposed for the automated detection of patient subgroups that are identifiable by predictive factors, and is applied to the search for subgroups of patients suffering from amyotrophic lateral sclerosis that differ with respect to their Riluzole treatment effect.
Abstract: The identification of patient subgroups with differential treatment effects is the first step towards individualised treatments. A current draft guideline by the EMA discusses the potential and problems of subgroup analyses and formulates challenges for the development of appropriate statistical procedures for the data-driven identification of patient subgroups. We introduce model-based recursive partitioning as a procedure for the automated detection of patient subgroups that are identifiable by predictive factors. The method starts with a model for the overall treatment effect, as defined for the primary analysis in the study protocol, and uses measures for detecting parameter instabilities in this treatment effect. The procedure produces a segmented model with differential treatment parameters corresponding to each patient subgroup. The subgroups are linked to predictive factors by means of a decision tree. The method is applied to the search for subgroups of patients suffering from amyotrophic lateral sclerosis that differ with respect to their treatment effect of Riluzole, the only drug currently approved for this disease.

116 citations
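
A simplified toy illustration of the subgroup-detection idea described above. The actual procedure uses formal parameter-instability tests (available, for example, via the mob() function in the R package partykit); this sketch merely screens median splits of candidate covariates for the strongest treatment-by-subgroup interaction. All variable names and data are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def best_split(df, outcome="y", treatment="a", candidates=("age", "severity")):
    """Pick the candidate partitioning variable whose median split shows the
    strongest instability of the treatment effect (smallest interaction p-value)."""
    best = None
    for z in candidates:
        d = df.assign(grp=(df[z] > df[z].median()).astype(int))
        fit = smf.ols(f"{outcome} ~ {treatment} * grp", data=d).fit()
        p = fit.pvalues[f"{treatment}:grp"]
        if best is None or p < best[1]:
            best = (z, p)
    return best

# Hypothetical data: the treatment helps only patients with low 'severity'
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({"age": rng.normal(60, 10, n),
                   "severity": rng.normal(0, 1, n),
                   "a": rng.integers(0, 2, n)})
df["y"] = 1.0 * df["a"] * (df["severity"] < 0) + rng.normal(0, 1, n)

var, pval = best_split(df)
print("split on:", var, "interaction p-value:", round(pval, 4))
for high, sub in df.groupby(df[var] > df[var].median()):
    print("subgroup high =", high, "treatment effect:",
          round(smf.ols("y ~ a", data=sub).fit().params["a"], 3))
```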


Journal ArticleDOI
TL;DR: This work proposes data adaptive estimators of this optimal dynamic two time-point treatment rule defined as the rule that maximizes the mean outcome under the dynamic treatment, where the candidate rules are restricted to depend only on a user-supplied subset of the baseline and intermediate covariates.
Abstract: We consider the estimation of an optimal dynamic two time-point treatment rule defined as the rule that maximizes the mean outcome under the dynamic treatment, where the candidate rules are restricted to depend only on a user-supplied subset of the baseline and intermediate covariates. This estimation problem is addressed in a statistical model for the data distribution that is nonparametric, beyond possible knowledge about the treatment and censoring mechanisms. We propose data adaptive estimators of this optimal dynamic regime which are defined by sequential loss-based learning under both the blip function and weighted classification frameworks. Rather than a priori selecting an estimation framework and algorithm, we propose combining estimators from both frameworks using a super-learning based cross-validation selector that seeks to minimize an appropriate cross-validated risk. The resulting selector is guaranteed to asymptotically perform as well as the best convex combination of candidate algorithms in terms of loss-based dissimilarity under conditions. We offer simulation results to support our theoretical findings.

112 citations


Journal ArticleDOI
TL;DR: The theory developed within this paper provides a new impetus for a greater involvement of statistical inference into problems that are being increasingly addressed by clever, yet ad hoc pattern finding methods.
Abstract: Suppose one observes n i.i.d. copies of a random variable with a probability distribution that is known to be an element of a particular statistical model. In order to define our statistical target we partition the sample into V equal-size subsamples, and use this partitioning to define V splits into an estimation sample (one of the V subsamples) and a complementary parameter-generating sample. For each of the V parameter-generating samples, we apply an algorithm that maps the sample to a statistical target parameter. We define our sample-split data-adaptive statistical target parameter as the average of these V sample-specific target parameters. We present an estimator (and corresponding central limit theorem) for this type of data-adaptive target parameter. This general methodology for generating data-adaptive target parameters is demonstrated with a number of practical examples that highlight new opportunities for statistical learning from data. The framework provides a rigorous statistical methodology for both exploratory and confirmatory analysis within the same data. Given that more research is becoming "data-driven", the theory developed within this paper provides a new impetus for a greater involvement of statistical inference in problems that are increasingly addressed by clever, yet ad hoc, pattern-finding methods. To suggest this potential, and to verify the predictions of the theory, extensive simulation studies and a data analysis based on adaptively determined intervention rules are presented; these give insight into how to structure such an approach. The results show that the data-adaptive target parameter approach provides a general framework and resulting methodology for data-driven science.

58 citations


Journal ArticleDOI
TL;DR: This work presents an estimator of the mean outcome under the optimal stochastic ITR in a large semiparametric model that at most places restrictions on the probability of treatment assignment given covariates and gives conditions under which this estimator is efficient among all regular and asymptotically linear estimators.
Abstract: An individualized treatment rule (ITR) is a treatment rule which assigns treatments to individuals based on (a subset of) their measured covariates. An optimal ITR is the ITR which maximizes the population mean outcome. Previous works in this area have assumed that treatment is an unlimited resource so that the entire population can be treated if this strategy maximizes the population mean outcome. We consider optimal ITRs in settings where the treatment resource is limited so that there is a maximum proportion of the population which can be treated. We give a general closed-form expression for an optimal stochastic ITR in this resource-limited setting, and a closed-form expression for the optimal deterministic ITR under an additional assumption. We also present an estimator of the mean outcome under the optimal stochastic ITR in a large semiparametric model that at most places restrictions on the probability of treatment assignment given covariates. We give conditions under which our estimator is efficient among all regular and asymptotically linear estimators. All of our results are supported by simulations.

58 citations
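
A minimal sketch of the intuition behind a resource-limited rule: with at most a proportion kappa of the population treatable, treat the units with the largest positive estimated benefit and give the unit at the margin a fractional probability so the budget is met exactly in expectation. The paper's closed-form optimal stochastic rule and its efficient estimator are not reproduced; cate_hat and kappa are hypothetical inputs.

```python
import numpy as np

def resource_limited_rule(cate_hat, kappa):
    """Treatment probabilities under a budget: treat the units with the largest
    positive estimated benefit, using at most a proportion kappa of the sample."""
    n = len(cate_hat)
    budget = kappa * n
    order = np.argsort(-cate_hat)              # most beneficial first
    eligible = order[cate_hat[order] > 0]      # never treat without estimated benefit
    probs = np.zeros(n)
    k = int(min(np.floor(budget), len(eligible)))
    probs[eligible[:k]] = 1.0
    if k < len(eligible) and budget - k > 0:
        probs[eligible[k]] = budget - k        # fractional probability at the margin
    return probs

rng = np.random.default_rng(0)
cate_hat = rng.normal(0.1, 0.5, size=1000)     # hypothetical estimated treatment benefits
pi = resource_limited_rule(cate_hat, kappa=0.2)
print("expected proportion treated:", pi.mean())
```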


Journal ArticleDOI
TL;DR: Mendelian randomization based on public data sources is useful and easy to perform, but care must be taken to avoid false precision or bias.
Abstract: Mendelian randomization (MR) is a technique that seeks to establish causation between an exposure and an outcome using observational data. It is an instrumental variable analysis in which genetic variants are used as the instruments. Many consortia have meta-analysed genome-wide associations between variants and specific traits and made their results publicly available. Using such data, it is possible to derive genetic risk scores for one trait and to deduce the association of that same risk score with a second trait. The properties of this approach are investigated by simulation and by evaluating the potentially causal effect of birth weight on adult glucose level. In such analyses, it is important to decide whether one is interested in the risk score based on a set of estimated regression coefficients or the score based on the true underlying coefficients. MR is primarily concerned with the latter. Methods designed for the former question will under-estimate the variance if used for MR. This variance can be corrected but it needs to be done with care to avoid introducing bias. MR based on public data sources is useful and easy to perform, but care must be taken to avoid false precision or bias.

57 citations
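
A sketch of the kind of two-sample summary-data Mendelian randomization calculation described above, using the standard inverse-variance-weighted (IVW) combination of per-variant ratio estimates. The SNP-level numbers are invented, and the first-order weights illustrate the source of false precision the abstract warns about: uncertainty in the SNP-exposure coefficients is ignored.

```python
import numpy as np

# Hypothetical per-SNP summary statistics from two public GWAS consortia:
# beta_x, se_x: SNP-exposure (e.g. birth weight) associations
# beta_y, se_y: SNP-outcome  (e.g. adult glucose) associations
beta_x = np.array([0.12, 0.08, 0.15, 0.05, 0.10])
se_x   = np.array([0.02, 0.02, 0.03, 0.01, 0.02])
beta_y = np.array([-0.030, -0.015, -0.040, -0.010, -0.020])
se_y   = np.array([0.010, 0.009, 0.012, 0.007, 0.010])

# Per-variant ratio (Wald) estimates of the causal effect
ratio = beta_y / beta_x

# Standard IVW estimate: weight each ratio by the inverse of its first-order
# variance, se_y^2 / beta_x^2; this treats beta_x as known, so the reported
# precision ignores uncertainty in the SNP-exposure coefficients.
w = beta_x**2 / se_y**2
ivw = np.sum(w * ratio) / np.sum(w)
se_ivw = np.sqrt(1.0 / np.sum(w))
print(f"IVW causal estimate {ivw:.3f} (SE {se_ivw:.3f})")
```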


Journal ArticleDOI
TL;DR: A one-dimensional universal least favorable submodel for which the TMLE only takes one step is constructed, and thereby requires minimal extra data fitting to achieve its goal of solving the efficient influence curve equation.
Abstract: Consider a study in which one observes n independent and identically distributed random variables whose probability distribution is known to be an element of a particular statistical model, and one is concerned with estimation of a particular real-valued pathwise differentiable target parameter of this data probability distribution. The targeted maximum likelihood estimator (TMLE) is an asymptotically efficient substitution estimator obtained by constructing a so-called least favorable parametric submodel through an initial estimator with score, at zero fluctuation of the initial estimator, that spans the efficient influence curve, and iteratively maximizing the corresponding parametric likelihood until no more updates occur, at which point the updated initial estimator solves the so-called efficient influence curve equation. In this article we construct a one-dimensional universal least favorable submodel for which the TMLE only takes one step, and thereby requires minimal extra data fitting to achieve its goal of solving the efficient influence curve equation. We generalize this to universal least favorable submodels through the relevant part of the data distribution as required for targeted minimum loss-based estimation. Finally, remarkably, given a multidimensional target parameter, we develop a universal canonical one-dimensional submodel such that the one-step TMLE, only maximizing the log-likelihood over a univariate parameter, solves the multivariate efficient influence curve equation. This allows us to construct a one-step TMLE based on a one-dimensional parametric submodel through the initial estimator that solves any desired multivariate set of estimating equations.

55 citations
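
The universal submodel construction is not reproduced here; instead, a minimal sketch of the classical special case it generalizes: for the mean outcome under missingness at random, a single logistic targeting step with clever covariate 1/g(W) already solves the efficient influence curve equation. The simulation and variable names are illustrative only.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
W = rng.normal(size=(n, 2))
g0 = 1 / (1 + np.exp(-(0.5 + W[:, 0])))                      # true P(observed | W)
delta = rng.binomial(1, g0)                                  # observation indicator
Y = rng.binomial(1, 1 / (1 + np.exp(-(W[:, 0] - W[:, 1]))))  # used only where delta == 1

# Initial (possibly misspecified) fits of E[Y | W, delta = 1] and P(delta = 1 | W)
obs = delta == 1
Q = LogisticRegression().fit(W[obs], Y[obs]).predict_proba(W)[:, 1]
g = LogisticRegression().fit(W, delta).predict_proba(W)[:, 1]

# One targeting step: logistic fluctuation with clever covariate H(W) = 1/g(W)
# and offset logit(Q); the MLE of epsilon makes sum_i delta_i/g_i (Y_i - Q*_i) = 0,
# so the efficient influence curve equation is solved after a single step.
H = 1.0 / g
flu = sm.GLM(Y[obs], H[obs].reshape(-1, 1), family=sm.families.Binomial(),
             offset=np.log(Q[obs] / (1 - Q[obs]))).fit()
Q_star = 1 / (1 + np.exp(-(np.log(Q / (1 - Q)) + flu.params[0] * H)))

print("one-step TMLE of E[Y]:", Q_star.mean())
print("EIC equation (should be ~0):", np.mean(delta / g * (Y - Q_star)))
```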


Journal ArticleDOI
TL;DR: It is demonstrated that as the level of imbalance in the training data increases, the Super Learner ensemble outperforms the top base algorithm by a larger degree, and the results provide evidence that AUC-maximizing metalearners can, and often do, outperform non-AUC-maximizing metalearning methods with respect to ensemble AUC.
Abstract: Area Under the ROC Curve (AUC) is often used to measure the performance of an estimator in binary classification problems. An AUC-maximizing classifier can have significant advantages in cases where ranking correctness is valued or if the outcome is rare. In a Super Learner ensemble, maximization of the AUC can be achieved by the use of an AUC-maximizing metalearning algorithm. We discuss an implementation of an AUC-maximization technique that is formulated as a nonlinear optimization problem. We also evaluate the effectiveness of a large number of different nonlinear optimization algorithms for maximizing the cross-validated AUC of the ensemble fit. The results provide evidence that AUC-maximizing metalearners can, and often do, outperform non-AUC-maximizing metalearning methods with respect to ensemble AUC. The results also demonstrate that as the level of imbalance in the training data increases, the Super Learner ensemble outperforms the top base algorithm by a larger degree.

46 citations
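
A minimal sketch of an AUC-maximizing metalearning step: cross-validated predictions from a few base learners form the level-one data, and convex combination weights are chosen by numerically maximizing the cross-validated AUC. Because AUC is a non-smooth, rank-based objective, a derivative-free optimizer (Nelder-Mead on softmax-parameterized weights) is used here; this is one of many possible optimizers and not necessarily the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

# Imbalanced binary outcome (rare positive class)
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)

# Level-one data: cross-validated predicted probabilities from each base learner
learners = [LogisticRegression(max_iter=1000),
            RandomForestClassifier(n_estimators=200, random_state=0)]
Z = np.column_stack([cross_val_predict(m, X, y, cv=5, method="predict_proba")[:, 1]
                     for m in learners])

# Metalearning step: convex weights maximizing the cross-validated AUC
def neg_cv_auc(theta):
    w = np.exp(theta) / np.exp(theta).sum()    # softmax keeps weights convex
    return -roc_auc_score(y, Z @ w)

res = minimize(neg_cv_auc, x0=np.zeros(Z.shape[1]), method="Nelder-Mead")
w_hat = np.exp(res.x) / np.exp(res.x).sum()
print("metalearner weights:", np.round(w_hat, 3), "ensemble CV AUC:", -res.fun)
print("base learner CV AUCs:", [round(roc_auc_score(y, Z[:, j]), 4)
                                for j in range(Z.shape[1])])
```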


Journal ArticleDOI
TL;DR: After reviewing current practice to address confounding in neuroimaging studies, an alternative approach based on inverse probability weighting is proposed, which is broadly applicable to many problems in machine learning and predictive modeling.
Abstract: Understanding structural changes in the brain that are caused by a particular disease is a major goal of neuroimaging research. Multivariate pattern analysis (MVPA) comprises a collection of tools that can be used to understand complex disease effects across the brain. We discuss several important issues that must be considered when analyzing data from neuroimaging studies using MVPA. In particular, we focus on the consequences of confounding by non-imaging variables such as age and sex on the results of MVPA. After reviewing current practice to address confounding in neuroimaging studies, we propose an alternative approach based on inverse probability weighting. Although the proposed method is motivated by neuroimaging applications, it is broadly applicable to many problems in machine learning and predictive modeling. We demonstrate the advantages of our approach on simulated and real data examples.

44 citations
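
One simple way to implement the inverse-probability-weighting idea for confounder control in predictive modeling: weight each training subject by the inverse of the probability of their label given the confounders, so that in the weighted sample the label is approximately independent of age and sex and the classifier cannot profit from the confound. The simulated "imaging" features and confounders are invented, and the paper's exact weighting scheme may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n = 1000
age = rng.normal(60, 10, n)
sex = rng.integers(0, 2, n)
# Disease status is confounded with age and sex
disease = rng.binomial(1, 1 / (1 + np.exp(-(0.05 * (age - 60) + 0.5 * sex - 0.3))))
# Imaging features: true disease signal plus a strong age effect (the confound)
img = np.column_stack([disease + rng.normal(0, 1, n),
                       0.1 * age + rng.normal(0, 1, n)])

# Step 1: model P(disease | confounders) and form inverse-probability weights
conf = np.column_stack([age, sex])
p = LogisticRegression().fit(conf, disease).predict_proba(conf)
w = 1.0 / p[np.arange(n), disease]   # weight by 1 / P(observed label | age, sex)

# Step 2: train the MVPA-style classifier on imaging features with those weights
clf_weighted = LinearSVC(dual=False).fit(img, disease, sample_weight=w)
clf_naive = LinearSVC(dual=False).fit(img, disease)
print("naive coefs   :", np.round(clf_naive.coef_, 2))
print("weighted coefs:", np.round(clf_weighted.coef_, 2))
```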


Journal ArticleDOI
TL;DR: It is concluded that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while Targeted minimum loss-based estimation and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment.
Abstract: This paper investigates the appropriateness of the integration of flexible propensity score modeling (nonparametric or machine learning approaches) in semiparametric models for the estimation of a causal quantity, such as the mean outcome under treatment. We begin with an overview of some of the issues involved in knowledge-based and statistical variable selection in causal inference and the potential pitfalls of automated selection based on the fit of the propensity score. Using a simple example, we directly show the consequences of adjusting for pure causes of the exposure when using inverse probability of treatment weighting (IPTW). Such variables are likely to be selected when using a naive approach to model selection for the propensity score. We describe how the method of Collaborative Targeted minimum loss-based estimation (C-TMLE; van der Laan and Gruber, 2010 [27]) capitalizes on the collaborative double robustness property of semiparametric efficient estimators to select covariates for the propensity score based on the error in the conditional outcome model. Finally, we compare several approaches to automated variable selection in low- and high-dimensional settings through a simulation study. From this simulation study, we conclude that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while Targeted minimum loss-based estimation and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment. However, in our study, standard influence function-based methods for the variance underestimated the standard errors, resulting in poor coverage under certain data-generating scenarios.

35 citations


Journal ArticleDOI
TL;DR: Results of a simulation study confirm that Super Learner works well in practice under a variety of sample sizes, sampling designs, and data-generating functions; the approach, previously applied to non-spatial data, is extended to spatial prediction and applied to a real-world dataset.
Abstract: Spatial prediction is an important problem in many scientific disciplines. Super Learner is an ensemble prediction approach related to stacked generalization that uses cross-validation to search for the optimal predictor amongst all convex combinations of a heterogeneous candidate set. It has been applied to non-spatial data, where theoretical results demonstrate it will perform asymptotically at least as well as the best candidate under consideration. We review these optimality properties and discuss the assumptions required in order for them to hold for spatial prediction problems. We present results of a simulation study confirming Super Learner works well in practice under a variety of sample sizes, sampling designs, and data-generating functions. We also apply Super Learner to a real world dataset.
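
A minimal non-spatial sketch of the Super Learner recipe summarized above: cross-validated "level-one" predictions from heterogeneous candidates, then a convex-combination metalearner fitted by non-negative least squares. For spatial data the paper's discussion of cross-validation assumptions matters, and blocked CV would be more appropriate than the random folds used in this toy example.

```python
import numpy as np
from scipy.optimize import nnls
from sklearn.datasets import make_friedman1
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import KNeighborsRegressor

X, y = make_friedman1(n_samples=500, noise=1.0, random_state=0)

candidates = [LinearRegression(),
              KNeighborsRegressor(n_neighbors=10),   # crude stand-in for a local smoother
              RandomForestRegressor(n_estimators=200, random_state=0)]

# Level-one data: cross-validated predictions from each candidate
Z = np.column_stack([cross_val_predict(m, X, y, cv=5) for m in candidates])

# Convex-combination metalearner: non-negative least squares, then normalize
w, _ = nnls(Z, y)
w = w / w.sum()
print("Super Learner weights:", np.round(w, 3))

# Cross-validated risk of the ensemble versus each candidate
mse = lambda pred: np.mean((y - pred) ** 2)
print("candidate CV MSEs:", [round(mse(Z[:, j]), 3) for j in range(Z.shape[1])])
print("ensemble  CV MSE :", round(mse(Z @ w), 3))
```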

Journal ArticleDOI
TL;DR: Results suggest both potential reduction in bias and increase in efficiency at the cost of an increase in computing time when using Super Learning to implement Inverse Probability Weighting estimators to draw causal inferences.
Abstract: OBJECTIVE: Consistent estimation of causal effects with inverse probability weighting estimators is known to rely on consistent estimation of propensity scores. To alleviate the bias expected from incorrect model specification for these nuisance parameters in observational studies, data-adaptive estimation and in particular an ensemble learning approach known as Super Learning has been proposed as an alternative to the common practice of estimation based on arbitrary model specification. While the theoretical arguments against the use of the latter haphazard estimation strategy are evident, the extent to which data-adaptive estimation can improve inferences in practice is not. Some practitioners may view bias concerns over arbitrary parametric assumptions as academic considerations that are inconsequential in practice. They may also be wary of data-adaptive estimation of the propensity scores for fear of greatly increasing estimation variability due to extreme weight values. With this report, we aim to contribute to the understanding of the potential practical consequences of the choice of estimation strategy for the propensity scores in real-world comparative effectiveness research. METHOD: We implement secondary analyses of Electronic Health Record data from a large cohort of type 2 diabetes patients to evaluate the effects of four adaptive treatment intensification strategies for glucose control (dynamic treatment regimens) on subsequent development or progression of urinary albumin excretion. Three Inverse Probability Weighting estimators are implemented using both model-based and data-adaptive estimation strategies for the propensity scores. Their practical performances for proper confounding and selection bias adjustment are compared and evaluated against results from previous randomized experiments. CONCLUSION: Results suggest both potential reduction in bias and increase in efficiency at the cost of an increase in computing time when using Super Learning to implement Inverse Probability Weighting estimators to draw causal inferences.

Journal ArticleDOI
TL;DR: Evidence is provided that IV methods may result in biased treatment effects if applied to a data set in which subjects are preselected based on their received treatments, and a procedure is proposed that identifies the treatment effect of interest as a function of a vector of sensitivity parameters.
Abstract: Instrumental variable (IV) methods are widely used to adjust for the bias in estimating treatment effects caused by unmeasured confounders in observational studies. It is common for an analysis to focus on a comparison between two treatments and to include only subjects receiving one of these two treatments, even though more than two treatments are available. In this paper, we provide empirical and theoretical evidence that IV methods may result in biased treatment effects if applied to a data set in which subjects are preselected based on their received treatments. We frame this as a selection bias problem and propose a procedure that identifies the treatment effect of interest as a function of a vector of sensitivity parameters. We also list assumptions under which analyzing the preselected data does not lead to a biased treatment effect estimate. The performance of the proposed method is examined using simulation studies. We apply our method to The Health Improvement Network (THIN) database to estimate the comparative effect of metformin and sulfonylureas on weight gain among diabetic patients.

Journal ArticleDOI
TL;DR: An integer-valued ARCH model which can be used for modeling time series of counts with under-, equi-, or overdispersion is presented, and a generalization of the introduced model is considered by introducing an integer-valued GARCH model.
Abstract: We present an integer-valued ARCH model which can be used for modeling time series of counts with under-, equi-, or overdispersion. The introduced model has a conditional binomial distribution, and it is shown to be strictly stationary and ergodic. The unknown parameters are estimated by three methods: conditional maximum likelihood, conditional least squares and maximum likelihood type penalty function estimation. The asymptotic distributions of the estimators are derived. A real application of the novel model to epidemic surveillance is briefly discussed. Finally, a generalization of the introduced model is considered by introducing an integer-valued GARCH model.
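
A sketch of a binomial INARCH(1)-type count model and its conditional maximum likelihood fit. The parameterization used here (success probability linear in the previous proportion) is an illustrative assumption and may not match the paper's exact specification; the data are simulated.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

rng = np.random.default_rng(1)
N = 20             # fixed upper bound of the counts (binomial size)
T = 500
a0, a1 = 0.2, 0.5  # true parameters, with p_t = a0 + a1 * X_{t-1} / N in (0, 1)

# Simulate X_t | past ~ Binomial(N, p_t)
X = np.empty(T, dtype=int)
X[0] = rng.binomial(N, a0)
for t in range(1, T):
    X[t] = rng.binomial(N, a0 + a1 * X[t - 1] / N)

def neg_cond_loglik(theta):
    """Negative conditional log-likelihood given X_0 (binomial INARCH(1))."""
    b0, b1 = theta
    p = b0 + b1 * X[:-1] / N
    if np.any(p <= 0) or np.any(p >= 1):
        return np.inf
    return -binom.logpmf(X[1:], N, p).sum()

fit = minimize(neg_cond_loglik, x0=[0.3, 0.3], method="Nelder-Mead")
print("conditional ML estimates (a0, a1):", np.round(fit.x, 3))
```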

Journal ArticleDOI
TL;DR: An asymptotic linearity theorem is provided which gives the influence function of the proposed doubly robust estimator under correct specification of a parametric nuisance working model for the missingness mechanism/propensity score but a possibly misspecified (finite- or infinite-dimensional) outcome working model.
Abstract: Doubly robust estimators have now been proposed for a variety of target parameters in the causal inference and missing data literature. These consistently estimate the parameter of interest under a semiparametric model when one of two nuisance working models is correctly specified, regardless of which. The recently proposed bias-reduced doubly robust estimation procedure aims to partially retain this robustness in more realistic settings where both working models are misspecified. These so-called bias-reduced doubly robust estimators make use of special (finite-dimensional) nuisance parameter estimators that are designed to locally minimize the squared asymptotic bias of the doubly robust estimator in certain directions of these finite-dimensional nuisance parameters under misspecification of both parametric working models. In this article, we extend this idea to incorporate the use of data-adaptive estimators (infinite-dimensional nuisance parameters), by exploiting the bias reduction estimation principle in the direction of only one nuisance parameter. We additionally provide an asymptotic linearity theorem which gives the influence function of the proposed doubly robust estimator under correct specification of a parametric nuisance working model for the missingness mechanism/propensity score but a possibly misspecified (finite- or infinite-dimensional) outcome working model. Simulation studies confirm the desirable finite-sample performance of the proposed estimators relative to a variety of other doubly robust estimators.
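
For orientation, a sketch of the standard doubly robust (augmented IPW) estimator of a mean outcome under missingness, i.e. the kind of estimator whose nuisance fits the bias-reduced procedure modifies; the bias-reduction step itself is not implemented here, and the simulation is illustrative.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 3000
W = rng.normal(size=(n, 3))
g0 = 1 / (1 + np.exp(-(0.4 + W[:, 0] - 0.5 * W[:, 1])))
R = rng.binomial(1, g0)                      # response / non-missingness indicator
Y = W[:, 0] + W[:, 1] ** 2 + rng.normal(0, 1, n)

# Nuisance working models: a data-adaptive outcome regression and a parametric
# missingness propensity, in the spirit of the mixed setting considered above
Q = GradientBoostingRegressor().fit(W[R == 1], Y[R == 1]).predict(W)
g = LogisticRegression().fit(W, R).predict_proba(W)[:, 1]

# Doubly robust (AIPW) estimator of E[Y]: consistent if either Q or g is correct
infl = R / g * (Y - Q) + Q
psi = infl.mean()
se = infl.std(ddof=1) / np.sqrt(n)
print(f"DR estimate of E[Y]: {psi:.3f} (SE {se:.3f})")
```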

Journal ArticleDOI
TL;DR: For estimation of the marginal expectation of the outcome under a fixed treatment, TMLE and IPW estimators employing the same treatment model specification may perform differently due to differential sensitivity to practical positivity violations; however, TMLE, being doubly robust, shows improved performance with richer specifications of the outcome model.
Abstract: Inverse probability of treatment weighting (IPW) and targeted maximum likelihood estimation (TMLE) are relatively new methods proposed for estimating marginal causal effects. TMLE is doubly robust, yielding consistent estimators even under misspecification of either the treatment or the outcome model. While IPW methods are known to be sensitive to near violations of the practical positivity assumption (e.g., in the case of data sparsity), the consequences of this violation in the TMLE framework for binary outcomes have been less widely investigated. As near practical positivity violations are particularly likely in high-dimensional covariate settings, a better understanding of the performance of TMLE is of particular interest for pharmacoepidemiological studies using large databases. Using plasmode and Monte-Carlo simulation studies, we evaluated the performance of TMLE compared to that of IPW estimators based on a point-exposure cohort study of the marginal causal effect of post-myocardial infarction statin use on the 1-year risk of all-cause mortality from the Clinical Practice Research Datalink. A variety of treatment model specifications were considered, inducing different degrees of near practical non-positivity. Our simulation study showed that the performance of the TMLE and IPW estimators was comparable when the dimension of the fitted treatment model was small to moderate; however, they differed when a large number of covariates was considered. When a rich outcome model was included in the TMLE, estimators were unbiased. In some cases, we found irregular bias and large standard errors with both methods even with a correctly specified high-dimensional treatment model. The IPW estimator showed a slightly better root MSE with high-dimensional treatment model specifications in our simulation setting. In conclusion, for estimation of the marginal expectation of the outcome under a fixed treatment, TMLE and IPW estimators employing the same treatment model specification may perform differently due to differential sensitivity to practical positivity violations; however, TMLE, being doubly robust, shows improved performance with richer specifications of the outcome model. Although TMLE is appealing for its double robustness property, such violations in a high-dimensional covariate setting are problematic for both methods.

Journal ArticleDOI
TL;DR: This paper proposes two approaches based on the profile likelihood and Wilson score, and compares them with methods recommended for complex survey data as well as simple extensions of well-known methods such as the likelihood, the generalized estimating equation of Zeger and Liang, and the ratio estimator approach of Rao and Scott.
Abstract: Interval estimation of the proportion parameter in the analysis of binary outcome data arising in cluster studies is often an important problem in many biomedical applications. In this paper, we propose two approaches based on the profile likelihood and Wilson score. We compare them with two existing methods recommended for complex survey data and some other methods that are simple extensions of well-known methods such as the likelihood, the generalized estimating equation of Zeger and Liang and the ratio estimator approach of Rao and Scott. An extensive simulation study is conducted for a variety of parameter combinations for the purposes of evaluating and comparing the performance of these methods in terms of coverage and expected lengths. Applications to biomedical data are used to illustrate the proposed methods.
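
A sketch of the plain Wilson score interval together with a simple design-effect (effective sample size) adjustment for clustering, in the spirit of corrections used for complex survey data; this is a comparator-style approach, not the profile-likelihood construction proposed in the paper, and the clustered data are simulated.

```python
import numpy as np
from scipy.stats import norm

def wilson_ci(successes, n, alpha=0.05):
    """Wilson score confidence interval for a binomial proportion."""
    z = norm.ppf(1 - alpha / 2)
    p = successes / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return centre - half, centre + half

# Hypothetical clustered binary data: 30 clusters of size 10
rng = np.random.default_rng(0)
m, k = 10, 30
cluster_p = rng.beta(4, 6, size=k)              # induces intra-cluster correlation
data = rng.binomial(1, np.repeat(cluster_p, m)).reshape(k, m)

p_hat = data.mean()
# crude ANOVA-type ICC estimate and design effect DEFF = 1 + (m - 1) * ICC
cluster_means = data.mean(axis=1)
sigma_b2 = (np.var(cluster_means, ddof=1) - p_hat * (1 - p_hat) / m) / (1 - 1 / m)
icc = max(0.0, sigma_b2 / (p_hat * (1 - p_hat)))
deff = 1 + (m - 1) * icc
n_eff = data.size / deff                        # effective sample size

print("naive Wilson    :", wilson_ci(data.sum(), data.size))
print("cluster-adjusted:", wilson_ci(p_hat * n_eff, n_eff))
```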

Journal ArticleDOI
TL;DR: A second-order estimator of the mean of a variable subject to missingness, under the missing at random assumption, is presented and an illustration of the methods using a publicly available dataset to determine the effect of an anticoagulant on health outcomes of patients undergoing percutaneous coronary intervention is provided.
Abstract: We present a second-order estimator of the mean of a variable subject to missingness, under the missing at random assumption. The estimator improves upon existing methods by using an approximate second-order expansion of the parameter functional, in addition to the first-order expansion employed by standard doubly robust methods. This results in weaker assumptions about the convergence rates necessary to establish consistency, local efficiency, and asymptotic linearity. The general estimation strategy is developed under the targeted minimum loss-based estimation (TMLE) framework. We present a simulation comparing the sensitivity of the first- and second-order estimators to the convergence rate of the initial estimators of the outcome regression and missingness score. In our simulation, the second-order TMLE always had a coverage probability equal to or closer to the nominal value of 0.95 than its first-order counterpart. In the best-case scenario, the proposed second-order TMLE had a coverage probability of 0.86 when the first-order TMLE had a coverage probability of zero. We also present a novel first-order estimator inspired by a second-order expansion of the parameter functional. This estimator only requires one-dimensional smoothing, whereas implementation of the second-order TMLE generally requires kernel smoothing on the covariate space. The first-order estimator proposed is expected to have improved finite-sample performance compared to existing first-order estimators. In the best-case scenario of our simulation study, the novel first-order TMLE improved the coverage probability from 0 to 0.90. We provide an illustration of our methods using a publicly available dataset to determine the effect of an anticoagulant on health outcomes of patients undergoing percutaneous coronary intervention. We provide R code implementing the proposed estimator.

Journal ArticleDOI
TL;DR: In this article, a general, modular method for significance testing of groups (or clusters) of variables in a high-dimensional linear model is proposed, which relies on repeated sample splitting and sequential rejection and asymptotically controls the familywise error rate.
Abstract: We propose a general, modular method for significance testing of groups (or clusters) of variables in a high-dimensional linear model. In the presence of high correlations among the covariates, serious identifiability problems make it indispensable to focus on detecting groups of variables rather than singletons. We propose an inference method that allows one to build in hierarchical structures. It relies on repeated sample splitting and sequential rejection, and we prove that it asymptotically controls the familywise error rate. It can be implemented on any collection of clusters and leads to improved power in comparison to more standard non-sequential rejection methods. We complement the theoretical analysis with empirical results for simulated and real data.

Journal ArticleDOI
TL;DR: This article proposes a pooled targeted maximum likelihood estimator (TMLE) for estimating the hazard function under longitudinal dynamic treatment regimes, which is shown to be semiparametric efficient and doubly robust.
Abstract: In social and health sciences, many research questions involve understanding the causal effect of a longitudinal treatment on mortality (or time-to-event outcomes in general). Often, treatment status may change in response to past covariates that are risk factors for mortality, and in turn, treatment status may also affect such subsequent covariates. In these situations, Marginal Structural Models (MSMs), introduced by Robins (1997), are well-established and widely used tools to account for time-varying confounding. In particular, an MSM can be used to specify the intervention-specific counterfactual hazard function, i.e. the hazard for the outcome of a subject in an ideal experiment where he/she was assigned to follow a given intervention on their treatment variables. The parameters of this hazard MSM are traditionally estimated using inverse probability of treatment weighted (IPTW) estimation (Robins, 1999; Robins et al., 2000; van der Laan and Petersen, 2007; Robins et al., 2008). This estimator is easy to implement and admits Wald-type confidence intervals. However, its consistency hinges on the correct specification of the treatment allocation probabilities, and the estimates are generally sensitive to large treatment weights (especially in the presence of strong confounding), which are difficult to stabilize for dynamic treatment regimes. In this paper, we present a pooled targeted maximum likelihood estimator (TMLE; van der Laan and Rubin, 2006) for the MSM for the hazard function under longitudinal dynamic treatment regimes. The proposed estimator is semiparametric efficient and doubly robust, offering bias reduction over the incumbent IPTW estimator when treatment probabilities may be misspecified. Moreover, the substitution principle rooted in the TMLE potentially mitigates the sensitivity to large treatment weights in IPTW. We compare the performance of the proposed estimator with the IPTW estimator and a non-targeted substitution estimator in a simulation study.

Journal ArticleDOI
TL;DR: This work investigates the effect of smoothing using semiparametric mixed models on the correlation and variance parameter estimates for serially correlated longitudinal normal, Poisson and binary data and compares the performance of SPMMs to other simpler methods for estimating the nonlinear association such as fractional polynomials, and using a parametric nonlinear function.
Abstract: Besides being mainly used for analyzing clustered or longitudinal data, generalized linear mixed models can also be used for smoothing by restricting changes in the fit at the knots of regression splines. The resulting models are usually called semiparametric mixed models (SPMMs). We investigate the effect of smoothing using SPMMs on the correlation and variance parameter estimates for serially correlated longitudinal normal, Poisson and binary data. Through simulations, we compare the performance of SPMMs to other, simpler methods for estimating the nonlinear association, such as fractional polynomials and parametric nonlinear functions. Simulation results suggest that, in general, the SPMMs recover the true curves very well and yield reasonable estimates of the correlation and variance parameters. However, for binary outcomes, SPMMs produce biased estimates of the variance parameters for highly serially correlated data. We apply these methods to a dataset investigating the association between CD4 cell count and time since seroconversion for HIV-infected men enrolled in the Multicenter AIDS Cohort Study.

Journal ArticleDOI
TL;DR: Two statistical methods are developed to enhance data analysis of genome-wide association studies: a multiple-SNP association test, and a method that identifies a latent confounding factor, using a profile of whole-genome SNPs, and eliminates confounding effects through matching or stratified statistical analysis.
Abstract: Genome-wide association studies (GWAS) examine a large number of genetic variants, e.g., single nucleotide polymorphisms (SNPs), and associate them with a disease of interest. Traditional statistical methods for GWASs can produce spurious associations, due to limited information from individual SNPs and confounding effects. This paper develops two statistical methods to enhance data analysis of GWASs. The first is a multiple-SNP association test, which is a weighted chi-square test derived for big contingency tables. The test assesses combinatorial effects of multiple SNPs and improves conventional methods of single-SNP analysis. The second is a method that corrects for confounding effects, which may come from population stratification as well as other ambiguous (unknown) factors. The proposed method identifies a latent confounding factor, using a profile of whole-genome SNPs, and eliminates confounding effects through matching or stratified statistical analysis. Simulations and a GWAS of rheumatoid arthritis demonstrate that the proposed methods dramatically reduce the number of spurious significant tests, or false positives, and outperform other available methods.

Journal ArticleDOI
TL;DR: Schatzkin et al. showed that ratios of conditional statistics such as the true positive fraction are equal to ratios of unconditional statistics such as disease detection rates, and therefore these ratios can be calculated for two screening tests on the same population even if test-negative patients are not followed with a reference procedure and the true and false negative rates are unknown.
Abstract: Schatzkin et al. and other authors demonstrated that the ratios of some conditional statistics such as the true positive fraction are equal to the ratios of unconditional statistics, such as disease detection rates, and therefore we can calculate these ratios between two screening tests on the same population even if negative test patients are not followed with a reference procedure and the true and false negative rates are unknown. We demonstrate that this same property applies to an expected utility metric. We also demonstrate that while simple estimates of relative specificities and relative areas under ROC curves (AUC) do depend on the unknown negative rates, we can write these ratios in terms of disease prevalence, and the dependence of these ratios on a posited prevalence is often weak particularly if that prevalence is small or the performance of the two screening tests is similar. Therefore we can estimate relative specificity or AUC with little loss of accuracy, if we use an approximate value of disease prevalence.

Journal ArticleDOI
TL;DR: It is shown through theoretical results, numerical comparisons, and two microarray examples that when the rejection regions for the ODP test statistics are chosen such that the procedure is guaranteed to uniformly control a Type I error rate measure, the technique is generally less powerful than competing methods.
Abstract: The Optimal Discovery Procedure (ODP) is a method for simultaneous hypothesis testing that attempts to gain power relative to more standard techniques by exploiting multivariate structure [1]. Specializing to the example of testing whether components of a Gaussian mean vector are zero, we compare the power of the ODP to a Bonferroni-style method and to the Benjamini-Hochberg method when the testing procedures aim to respectively control certain Type I error rate measures, such as the expected number of false positives or the false discovery rate. We show through theoretical results, numerical comparisons, and two microarray examples that when the rejection regions for the ODP test statistics are chosen such that the procedure is guaranteed to uniformly control a Type I error rate measure, the technique is generally less powerful than competing methods. We contrast and explain these results in light of previously proven optimality theory for the ODP. We also compare the ordering given by the ODP test statistics to the standard rankings based on sorting univariate p-values from smallest to largest. In the cases we considered the standard ordering was superior, and ODP rankings were adversely impacted by correlation.
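
For reference, the two standard comparator procedures named above, Bonferroni-type FWER control and the Benjamini-Hochberg FDR step-up, applied to univariate p-values from a simulated Gaussian mean-vector problem; the ODP statistic itself is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
m, m1 = 1000, 100
z = np.concatenate([rng.normal(2.5, 1, m1),    # non-null components of the mean vector
                    rng.normal(0.0, 1, m - m1)])
p = 2 * norm.sf(np.abs(z))                     # two-sided p-values

alpha = 0.05
bonf_reject = p <= alpha / m                   # Bonferroni: controls the FWER

# Benjamini-Hochberg step-up procedure: controls the FDR at level alpha
order = np.argsort(p)
thresh = alpha * np.arange(1, m + 1) / m
passed = np.nonzero(p[order] <= thresh)[0]
bh_reject = np.zeros(m, dtype=bool)
if passed.size:
    bh_reject[order[: passed.max() + 1]] = True

print("Bonferroni rejections:", bonf_reject.sum())
print("BH rejections        :", bh_reject.sum())
print("BH false positives   :", bh_reject[m1:].sum())
```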

Journal ArticleDOI
TL;DR: This article considers semiparametric regression methods for the occurrence rate function of recurrent events when the covariates may be measured with errors, and proposes two corrected approaches based on different ideas that are numerically identical when estimating the regression parameters.
Abstract: Recurrent event data arise frequently in many longitudinal follow-up studies. Hence, evaluating covariate effects on the rates of occurrence of such events is commonly of interest. Examples include repeated hospitalizations, recurrent infections of HIV, and tumor recurrences. In this article, we consider semiparametric regression methods for the occurrence rate function of recurrent events when the covariates may be measured with errors. In contrast to existing work, in our case the conventional assumption of independent censoring is violated, since the recurrent event process is interrupted by correlated events, a situation called informative drop-out. Further, some covariates may be measured with errors. To accommodate both informative censoring and measurement error, the occurrence of recurrent events is modelled through an unspecified frailty distribution and accompanied by a classical measurement error model. We propose two corrected approaches based on different ideas, and we show that they are numerically identical when estimating the regression parameters. The asymptotic properties of the proposed estimators are established, and the finite sample performance is examined via simulations. The proposed methods are applied to the Nutritional Prevention of Cancer trial for assessing the effect of the plasma selenium treatment on the recurrence of squamous cell carcinoma.

Journal ArticleDOI
TL;DR: A model-free approach using the generalized odds ratio (GOR) to measure the relative treatment effect is proposed, and procedures for testing equality of treatment effects and interval estimators for the GOR are developed.
Abstract: In randomized clinical trials, we often encounter ordinal categorical responses with repeated measurements. We propose a model-free approach using the generalized odds ratio (GOR) to measure the relative treatment effect. We develop procedures for testing equality of treatment effects and derive interval estimators for the GOR. We further develop a simple procedure for testing the treatment-by-period interaction. To illustrate the use of the test procedures and interval estimators developed here, we consider two real-life data sets, one studying the gender effect on ordinal pain scores after hip joint resurfacing surgeries, and the other investigating the effect of an active hypnotic drug on ordinal categories of time to falling asleep in insomnia patients.
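
A minimal computation of the generalized odds ratio for ordinal responses from two independent groups, with a simple bootstrap interval. The repeated-measures test procedures and interval estimators of the paper are not reproduced, and the score data are invented.

```python
import numpy as np

def gen_odds_ratio(x, y):
    """GOR = P(X > Y) / P(X < Y) over all pairs of one observation per group
    (ties contribute to neither probability)."""
    diff = x[:, None] - y[None, :]
    return (diff > 0).mean() / (diff < 0).mean()

rng = np.random.default_rng(0)
# Hypothetical ordinal scores (1-5) under two treatments
x = rng.choice([1, 2, 3, 4, 5], size=120, p=[0.10, 0.20, 0.30, 0.25, 0.15])
y = rng.choice([1, 2, 3, 4, 5], size=110, p=[0.20, 0.30, 0.25, 0.15, 0.10])

gor = gen_odds_ratio(x, y)

# Simple nonparametric bootstrap interval for the GOR
boots = [gen_odds_ratio(rng.choice(x, x.size), rng.choice(y, y.size))
         for _ in range(2000)]
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"GOR = {gor:.2f}, 95% bootstrap CI ({lo:.2f}, {hi:.2f})")
```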

Journal ArticleDOI
TL;DR: A class of adaptive designs for staggered-start clinical trials is proposed, and it is shown that as long as the initial sample sizes at the beginning of the successive trials are not too large relative to the total sample size, the proposed design can still achieve the allocation-optimality criterion asymptotically, as in ordinary trials.
Abstract: In a phase II and/or III clinical trial with several competing treatments, the goal is to assess the performance of the treatments at the end of the study, while the trial design aims to minimize risks to the patients in the trial according to some given allocation optimality criterion. Recently, a new type of clinical trial, the staggered-start trial, has been proposed, in which different treatments enter the same trial at different times. Basic questions for this type of trial are whether optimality can still be achieved, under what conditions, and, if so, how to allocate the incoming patients to treatments to achieve such optimality. Here we propose and study a class of adaptive designs for staggered-start clinical trials and show that, for a given optimality criterion, as long as the initial sample sizes at the beginning of the successive trials are not too large relative to the total sample size, the proposed design can still achieve the optimality criterion asymptotically for the allocation proportions, as in ordinary trials; if these initial sample sizes are of about the same magnitude as the total sample size, full optimality cannot be achieved. The proposed method is simple to use and is illustrated with several examples and a simulation study.

Journal ArticleDOI
TL;DR: A Wald-type test statistic is derived and it is shown that this test maintains proper Type I Error under the null fit, and can be used as a general test of relative fit for any semi-parametric model alternative.
Abstract: Comparing the relative fit of competing models can be used to address many different scientific questions. In classical statistics one can, if appropriate, use likelihood ratio tests and information-based criteria, whereas clinical medicine has tended to rely on comparisons of fit metrics like C-statistics. However, for many data-adaptive modelling procedures such approaches are not suitable. In these cases, statisticians have used cross-validation, which can make inference challenging. In this paper we propose a general approach that focuses on the "conditional" risk difference (conditional on the model fits being fixed) for the improvement in prediction risk. Specifically, we derive a Wald-type test statistic and associated confidence intervals for cross-validated test sets, utilizing the independent validation within cross-validation in conjunction with a test for multiple comparisons. We show that this test maintains proper Type I error under the null fit, and can be used as a general test of relative fit for any semi-parametric model alternative. We apply the test to a candidate gene study to test for the association of a set of genes in a genetic pathway.
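
A simplified sketch of the core idea: per-observation loss differences on held-out folds feed a Wald-type statistic for the cross-validated risk difference between two fits. The multiple-comparison adjustment and the exact conditional formulation of the paper are omitted, and the models and data are illustrative.

```python
import numpy as np
from scipy.stats import norm
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

X, y = make_regression(n_samples=600, n_features=10, noise=10.0, random_state=0)

model_a = LinearRegression()
model_b = RandomForestRegressor(n_estimators=200, random_state=0)

loss_diff = np.empty(len(y))
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    pa = model_a.fit(X[train], y[train]).predict(X[test])
    pb = model_b.fit(X[train], y[train]).predict(X[test])
    # per-observation squared-error loss difference on the held-out fold
    loss_diff[test] = (y[test] - pa) ** 2 - (y[test] - pb) ** 2

# Wald-type statistic for the cross-validated risk difference
est = loss_diff.mean()
se = loss_diff.std(ddof=1) / np.sqrt(len(loss_diff))
z = est / se
print(f"risk difference {est:.2f} (SE {se:.2f}), z = {z:.2f}, "
      f"p = {2 * norm.sf(abs(z)):.3g}")
```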

Journal ArticleDOI
TL;DR: This paper proposes data-adaptive weighting schemes that serve to decrease the impact of influential points and thus stabilize the estimator, providing a doubly robust g-estimator that is also robust in the sense of Hampel (15).
Abstract: Individualized medicine is an area that is growing, both in clinical and statistical settings, where in the latter, personalized treatment strategies are often referred to as dynamic treatment regimens. Estimation of the optimal dynamic treatment regime has focused primarily on semi-parametric approaches, some of which are said to be doubly robust in that they give rise to consistent estimators provided at least one of two models is correctly specified. In particular, the locally efficient doubly robust g-estimation is robust to misspecification of the treatment-free outcome model so long as the propensity model is specified correctly, at the cost of an increase in variability. In this paper, we propose data-adaptive weighting schemes that serve to decrease the impact of influential points and thus stabilize the estimator. In doing so, we provide a doubly robust g-estimator that is also robust in the sense of Hampel (15).

Journal ArticleDOI
TL;DR: In this article, the problem of multiple hypothesis testing for correlated clustered data is studied; because existing multiple comparison procedures based on maximum likelihood estimation can be computationally intensive, the authors propose to construct multiple comparison procedures based on the composite likelihood method.
Abstract: We study the problem of multiple hypothesis testing for correlated clustered data. As the existing multiple comparison procedures based on maximum likelihood estimation can be computationally intensive, we propose to construct multiple comparison procedures based on the composite likelihood method. The new test statistics account for the correlation structure within the clusters and are computationally convenient. Simulation studies show that the composite likelihood based procedures maintain good control of the familywise type I error rate in the presence of intra-cluster correlation, whereas ignoring the correlation leads to erratic performance.