
Showing papers in "Journal of Biopharmaceutical Statistics in 2006"


Journal ArticleDOI
TL;DR: Three particular areas where it is felt that adaptive designs can be utilized beneficially are discussed: dose finding, seamless Phase II/III trials designs, and sample size reestimation.
Abstract: A PhRMA Working Group on adaptive clinical trial designs has been formed to investigate and facilitate opportunities for wider acceptance and usage of adaptive designs and related methodologies. A White Paper summarizing the findings of the group is in preparation; this article is an Executive Summary for that full White Paper, and summarizes the findings and recommendations of the group. Logistic, operational, procedural, and statistical challenges associated with adaptive designs are addressed. Three particular areas where it is felt that adaptive designs can be utilized beneficially are discussed: dose finding, seamless Phase II/III trials designs, and sample size reestimation.

294 citations


Journal ArticleDOI
TL;DR: Several modeling strategies for vaccine adverse event count data in which the data are characterized by excess zeroes and heteroskedasticity are compared, illustrating that the ZINB and NBH models are preferred but these models are indistinguishable with respect to fit.
Abstract: We compared several modeling strategies for vaccine adverse event count data in which the data are characterized by excess zeroes and heteroskedasticity. Count data are routinely modeled using Poisson and Negative Binomial (NB) regression, but zero-inflated and hurdle models may be advantageous in this setting. Here we compared the fit of the Poisson, NB, zero-inflated Poisson (ZIP), zero-inflated Negative Binomial (ZINB), Poisson Hurdle (PH), and Negative Binomial Hurdle (NBH) models. In general, for public health studies, we may conceptualize zero-inflated models as allowing zeroes to arise from both at-risk and not-at-risk populations. In contrast, hurdle models may be conceptualized as having zeroes only from an at-risk population. Our results illustrate, for our data, that the ZINB and NBH models are preferred, but these models are indistinguishable with respect to fit. Choosing between the zero-inflated and hurdle modeling frameworks, assuming the Poisson and NB models are inadequate because of excess zeroes, should generally be based on the study design and purpose. If the study's purpose is inference, then the choice of modeling framework should reflect the design. For example, if the study design leads to count endpoints with both structural and sample zeroes, then the zero-inflated modeling framework is generally more appropriate; in contrast, if the endpoint of interest, by design, exhibits only sample zeroes (e.g., at-risk participants), then the hurdle model framework is generally preferred. Conversely, if the study's primary purpose is to develop a prediction model, then both the zero-inflated and hurdle modeling frameworks should be adequate.

212 citations
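To make the at-risk/not-at-risk distinction concrete, here is a minimal sketch, not from the paper, that simulates counts with a structural-zero subpopulation and compares Poisson and zero-inflated Poisson fits by AIC. It assumes statsmodels is available; all data and settings are illustrative.

```python
# Hedged sketch: zero-inflated counts favor ZIP over plain Poisson by AIC.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=n)
X = sm.add_constant(x)
at_risk = rng.random(n) > 0.4              # ~40% structural (not-at-risk) zeroes
mu = np.exp(0.5 + 0.3 * x)
y = np.where(at_risk, rng.poisson(mu), 0)  # not-at-risk subjects contribute only zeroes

pois = sm.Poisson(y, X).fit(disp=False)
zip_res = ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1))).fit(disp=False, maxiter=200)
print(f"Poisson AIC: {pois.aic:.1f}  ZIP AIC: {zip_res.aic:.1f}")  # ZIP should be lower
```

A hurdle fit would be compared the same way; as the abstract notes, the choice between the two frameworks should rest on whether structural zeroes are plausible by design.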


Journal ArticleDOI
TL;DR: This article considers a unified strategy for designing and analyzing dose-finding studies, including the testing of proof-of-concept and the selection of one or more doses to take into further development, consisting of a multi-stage procedure.
Abstract: The search for an adequate dose involves some of the most complex series of decisions to be made in developing a clinically viable product. Typically decisions based on such dose-finding studies reside in two domains: (i) "proof" of evidence that the treatment is effective and (ii) the need to choose dose(s) for further development. We consider a unified strategy for designing and analyzing dose-finding studies, including the testing of proof-of-concept and the selection of one or more doses to take into further development. The methodology combines the advantages of multiple comparisons and modeling approaches, consisting of a multi-stage procedure. Proof-of-concept is tested in the first stage, using multiple comparison methods to identify statistically significant contrasts corresponding to a set of candidate models. If proof-of-concept is established in the first stage, the best model is then used for dose selection in subsequent stages. This article describes and illustrates practical considerations related to the implementation of this methodology. We discuss how to determine sample sizes and perform power calculations based on the proof-of-concept step. A relevant topic in this context is how to obtain good prior values for the model parameters: different methods to translate prior clinical knowledge into parameter values are presented and discussed. In addition, different possibilities of performing sensitivity analyses to assess the consequences of misspecifying the true parameter values are introduced. All methods are illustrated by a real dose-response phase II study for an anti-anxiety compound.

75 citations
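The proof-of-concept stage can be illustrated with a single optimal contrast. This hedged sketch assumes equal group sizes, one candidate shape (an Emax curve with a guessed ED50), and simulated data; the full methodology tests several candidate-model contrasts with a multiplicity adjustment, omitted here.

```python
# Hedged sketch of one MCP-Mod-style proof-of-concept contrast test.
import numpy as np
from scipy import stats

doses = np.array([0.0, 0.05, 0.2, 0.6, 1.0])
mu0 = doses / (0.2 + doses)                  # candidate Emax shape, guessed ED50 = 0.2
c = mu0 - mu0.mean()                         # optimal contrast for equal group sizes
c /= np.sqrt((c ** 2).sum())                 # scale is arbitrary; normalize

rng = np.random.default_rng(1)
n = 20                                       # patients per dose group
y = rng.normal(loc=1.5 * mu0, scale=1.0, size=(n, len(doses)))
ybar = y.mean(axis=0)
s2 = y.var(axis=0, ddof=1).mean()            # pooled variance (equal n)

t = (c @ ybar) / np.sqrt(s2 * (c ** 2).sum() / n)
df = len(doses) * (n - 1)
print(f"contrast t = {t:.2f}, one-sided p = {stats.t.sf(t, df):.4f}")
```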


Journal ArticleDOI
TL;DR: An outcome-adaptive Bayesian procedure, proposed by Thall and Cook (2004), for assigning doses of an experimental treatment to successive cohorts of patients, using elicited probability pairs to construct a family of trade-off contours that provide a basis for determining a best dose for each cohort.
Abstract: The purpose of this paper is to describe and illustrate an outcome-adaptive Bayesian procedure, proposed by Thall and Cook (2004), for assigning doses of an experimental treatment to successive cohorts of patients. The method uses elicited (efficacy, toxicity) probability pairs to construct a family of trade-off contours that are used to quantify the desirability of each dose. This provides a basis for determining a best dose for each cohort. The method combines the goals of conventional Phase I and Phase II trials, and thus may be called a "Phase I-II" design. We first give a general review of the probability model and dose-finding algorithm. We next describe an application to a trial of a biologic agent for treatment of acute myelogenous leukemia, including a computer simulation study to assess the design's average behavior. To illustrate how the method may work in practice, we present a cohort-by-cohort example of a particular trial. We close with a discussion of some practical issues that may arise during implementation.

52 citations


Journal ArticleDOI
TL;DR: I review the designs available for Phase I dose-finding studies of chemotherapeutic agents in cancer patients; the designs assume that both efficacy and toxicity increase with dose and thus attempt to minimize the number of patients treated at low doses.
Abstract: I review the designs available for Phase I dose-finding studies of chemotherapeutic agents in cancer patients. The designs are based on the assumption that both efficacy and toxicity increase with dose, and thus attempt to minimize the number of patients treated at low doses, and also to minimize the chance that patients will be treated at excessively toxic or lethal doses. The designs fall into two classes: rule-based and model-guided. Rule-based designs can always determine a reasonable maximum tolerated dose based on observed toxicity, but when model assumptions are not satisfied, many model-guided designs will not.

41 citations
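As a concrete example of the rule-based class, here is a hedged sketch of one common variant of the 3+3 escalation rule; the toxicity rates and the exact rule variant are illustrative assumptions, not taken from the review.

```python
# Hedged sketch: simulate a 3+3 rule-based dose-escalation design.
import numpy as np

def three_plus_three(tox_prob, rng):
    """Return the index of the selected MTD (-1 if even the lowest dose is too toxic)."""
    level = 0
    while level < len(tox_prob):
        dlt = rng.binomial(3, tox_prob[level])       # dose-limiting toxicities in 3 patients
        if dlt == 1:
            dlt += rng.binomial(3, tox_prob[level])  # expand cohort to 6
        if dlt >= 2:                                 # too toxic: MTD is one level below
            return level - 1
        level += 1                                   # escalate
    return len(tox_prob) - 1

rng = np.random.default_rng(2)
tox = [0.05, 0.10, 0.25, 0.45]                       # illustrative true DLT rates
picks = [three_plus_three(tox, rng) for _ in range(10_000)]
print(np.bincount(np.array(picks) + 1, minlength=len(tox) + 1) / 10_000)
```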


Journal ArticleDOI
TL;DR: The issues and opportunities of adaptive designs are discussed, and recommendations are made in the following aspects: study planning, trial monitoring, analysis and reporting, trial simulation, and regulatory perspectives.
Abstract: The issues and opportunities of adaptive designs are discussed. Starting with the definitions of an adaptive design, its validity and integrity are discussed. The three key components of an adaptive design, i.e., Type I error control, p-value adjustment, and unbiased estimation with confidence intervals, are addressed. Various seamless designs are investigated. Recommendations are made in the following aspects: study planning, trial monitoring, analysis and reporting, trial simulation, and regulatory perspectives.

41 citations


Journal ArticleDOI
Neal Thomas
TL;DR: The sigmoid Emax model is used to create several contrasts that have high power to detect an increasing trend from placebo, and Bayesian estimation addresses deficiencies in confidence intervals and tests derived from asymptotic maximum likelihood estimation when some parameters are poorly determined.
Abstract: Application of a sigmoid Emax model is described for the assessment of dose-response with designs containing a small number of doses (typically, three to six). The expanded model is a common Emax model with a power (Hill) parameter applied to dose and the ED50 parameter. The model will be evaluated following a strategy proposed by Bretz et al. (2005). The sigmoid Emax model is used to create several contrasts that have high power to detect an increasing trend from placebo. The alpha level for the hypothesis of no dose-response is controlled using multiple comparison methods applied to the p-values obtained from the contrasts. Subsequent to establishing drug activity, Bayesian methods are used to estimate the dose-response curve from the sparse dosing design. Bayesian estimation applied to the sigmoid model represents uncertainty in model selection that is missed when a single simpler model is selected from a collection of non-nested models. The goal is to base model selection on substantive knowledge and ...

40 citations
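For reference, the sigmoid Emax (Hill) function itself is compact; the parameter values below are illustrative, not those of the paper.

```python
# Hedged sketch: the sigmoid Emax curve on a sparse 4-dose design.
import numpy as np

def sigmoid_emax(dose, e0, emax, ed50, h):
    """E(dose) = e0 + emax * dose^h / (ed50^h + dose^h); h is the Hill parameter."""
    return e0 + emax * dose ** h / (ed50 ** h + dose ** h)

doses = np.array([0.0, 10.0, 30.0, 100.0])
print(sigmoid_emax(doses, e0=1.0, emax=2.0, ed50=25.0, h=1.5))
print("dose giving 90% of Emax:", 25.0 * (0.9 / 0.1) ** (1 / 1.5))  # ED90 from ED50 and h
```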


Journal ArticleDOI
TL;DR: An overview of the key statistical issues and recent developments for noninferiority/equivalence vaccine trials is given.
Abstract: Noninferiority/equivalence designs are often used in vaccine clinical trials. The goal of these designs is to demonstrate that a new vaccine, or a new formulation or regimen of an existing vaccine, is similar in terms of effectiveness to the existing vaccine, while offering such advantages as easier manufacturing, easier administration, lower cost, or an improved safety profile. These noninferiority/equivalence designs are particularly useful in four common types of immunogenicity trials: vaccine bridging trials, combination vaccine trials, vaccine concomitant use trials, and vaccine consistency lot trials. In this paper, we give an overview of the key statistical issues and recent developments for noninferiority/equivalence vaccine trials. Specifically, we cover the following topics: (i) selection of study endpoints; (ii) formulation of the null and alternative hypotheses; (iii) determination of the noninferiority/equivalence margin; (iv) selection of efficient statistical methods for the statistical analysis of noninferiority/equivalence vaccine trials, with particular emphasis on adjustment for stratification factors and missing pre- or post-vaccination data; and (v) the calculation of sample size and power.

37 citations
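A minimal sketch of the core decision rule for a rate-difference endpoint: declare noninferiority if the lower 95% confidence limit for (new minus existing) clears the margin. The Wald interval, counts, and margin are illustrative; the paper's methods additionally cover stratification adjustment and missing data.

```python
# Hedged sketch: noninferiority on a seroresponse-rate difference.
import math

def ni_rate_difference(x_new, n_new, x_ref, n_ref, delta=0.10, z=1.96):
    p1, p2 = x_new / n_new, x_ref / n_ref
    se = math.sqrt(p1 * (1 - p1) / n_new + p2 * (1 - p2) / n_ref)  # Wald SE
    lower = (p1 - p2) - z * se                                     # lower 95% CL
    return lower, lower > -delta

lower, ni = ni_rate_difference(432, 500, 441, 500)
print(f"lower 95% CL = {lower:.3f}; noninferior at delta = 0.10: {ni}")
```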


Journal ArticleDOI
TL;DR: The proposed method for calculating the sample size of a pharmacokinetic study analyzed using a mixed effects model within a hypothesis testing framework allows unequal allocation of subjects to the groups and accounts for situations where different blood sampling schedules are required in different groups of patients.
Abstract: We present a method for calculating the sample size of a pharmacokinetic study analyzed using a mixed effects model within a hypothesis testing framework. A sample size calculation method for repeated measurement data analyzed using generalized estimating equations has been modified for nonlinear models. The Wald test is used for hypothesis testing of pharmacokinetic parameters. A marginal model for the population pharmacokinetics is obtained by linearizing the structural model around the subject-specific random effects. The proposed method is general in that it allows unequal allocation of subjects to the groups and accounts for situations where different blood sampling schedules are required in different groups of patients. The proposed method has been assessed using Monte Carlo simulations under a range of scenarios. NONMEM was used for simulations and data analysis, and the results showed good agreement.

24 citations
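The Wald-test sample size idea reduces to a familiar formula once the linearized model supplies a per-subject variance for the parameter contrast. A hedged sketch, with the variance value standing in for what a NONMEM-style linearization would provide:

```python
# Hedged sketch: n = (z_{1-alpha/2} + z_{1-beta})^2 * v / effect^2.
import math
from scipy.stats import norm

def wald_sample_size(effect, v, alpha=0.05, power=0.90):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil((z / effect) ** 2 * v)

print(wald_sample_size(effect=0.25, v=1.8))  # total subjects needed (illustrative v)
```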


Journal ArticleDOI
Jianjun Li, Ivan S. F. Chan
TL;DR: A new statistical test is developed for detecting qualitative interaction in clinical trials; it is an extension of the well-known range test, but utilizes all observed treatment differences rather than only the maximum and the minimum values.
Abstract: To help interpret a treatment effect in clinical trials, investigators usually examine whether the observed treatment effect is the same in various subsets of patients. The qualitative interaction, which means that the treatment is beneficial in some subsets and harmful in others, is of major importance. In this paper, a new statistical test is developed for detecting such interactions. The new test is an extension of the well-known range test, but utilizes all observed treatment differences rather than only the maximum and the minimum values. Extensive simulations indicate that the proposed extended range test generally outperforms the range test and is even better than the likelihood ratio test, in the sense that the extended range test is much more powerful than the likelihood ratio test when one treatment is superior to the other in most subsets and yet does not lose much power otherwise. It is also illustrated through a real clinical trial example that the extended range test detects the qualitative interaction.

23 citations
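For orientation, here is a hedged sketch of the classical range test that the extended test builds on (the extension itself, which uses all treatment differences, is not reproduced here): qualitative interaction is declared when the largest standardized subset effect is convincingly positive and the smallest convincingly negative, with the critical value calibrated by Monte Carlo at the all-zero configuration.

```python
# Hedged sketch: range-type test for qualitative interaction across subsets.
import numpy as np

rng = np.random.default_rng(7)
k, alpha = 6, 0.05                         # number of subsets, test level

null = rng.normal(size=(200_000, k))       # subset z-scores under the all-zero null
stat0 = np.minimum(null.max(axis=1), -null.min(axis=1))
c = np.quantile(stat0, 1 - alpha)          # reject if max z > c AND min z < -c

z = np.array([2.6, 1.1, 0.4, -0.3, -1.0, -2.8])   # observed subset z-scores
print(f"c = {c:.2f}; qualitative interaction: {bool(z.max() > c and z.min() < -c)}")
```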


Journal ArticleDOI
TL;DR: A statistical quality control method to assess a proposed consistency index of raw materials from different sources and/or final products manufactured at different sites is developed and an example concerning the development of a TCM is presented.
Abstract: The statistical quality control process on raw materials and/or the final product of traditional Chinese medicine (TCM) is examined. We develop a statistical quality control (QC) method to assess a proposed consistency index of raw materials from different sources and/or final products manufactured at different sites. The idea is to construct a 95% confidence interval for a proposed consistency index under a sampling plan. If the constructed 95% confidence lower limit is greater than a prespecified QC lower limit, then we claim that the raw material or final products have passed the QC and hence can be released for further processing or use; otherwise, the raw materials and/or final product should be rejected. For a given component (the most active component if possible), a sampling plan is derived to ensure that there is a desired probability for establishing consistency between sites when there is truly no difference in raw materials or final products between sites. An example concerning the development of a TCM is presented.
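The release rule can be sketched directly. The consistency index below (ratio of the smaller to the larger site mean) and the bootstrap confidence limit are illustrative stand-ins for the paper's proposed index and derived sampling plan.

```python
# Hedged sketch: release a batch only if the 95% lower confidence limit of a
# consistency index exceeds a prespecified QC limit.
import numpy as np

rng = np.random.default_rng(3)
site_a = rng.normal(10.0, 1.0, 30)         # assay values of the key component, site A
site_b = rng.normal(9.6, 1.2, 30)          # assay values, site B

def index(a, b):
    m1, m2 = a.mean(), b.mean()
    return min(m1, m2) / max(m1, m2)       # 1.0 = perfectly consistent sites

boots = [index(rng.choice(site_a, 30), rng.choice(site_b, 30)) for _ in range(5000)]
lower = np.percentile(boots, 5)            # one-sided 95% lower limit
print(f"lower limit = {lower:.3f}; pass QC (limit 0.90): {lower > 0.90}")
```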

Journal ArticleDOI
TL;DR: This work uses simulations to investigate seven omnibus test statistics and finds that the Anderson–Darling and Fisher's statistics are superior to the others.
Abstract: Tests of the overall null hypothesis in datasets with one outcome variable and many covariates can be based on various methods to combine the p-values for univariate tests of association of each covariate with the outcome. The overall p-value is computed by permuting the outcome variable. We discuss the situations in which this approach is useful and provide several examples. We use simulations to investigate seven omnibus test statistics and find that the Anderson–Darling and Fisher's statistics are superior to the others.
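A hedged sketch of the permutation scheme with Fisher's combination statistic (the Anderson-Darling variant would only change the combining function); data and settings are illustrative.

```python
# Hedged sketch: omnibus test by combining univariate p-values, calibrated
# by permuting the outcome vector.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n, k = 100, 10
X = rng.normal(size=(n, k))                 # covariates
y = 0.4 * X[:, 0] + rng.normal(size=n)      # outcome associated with covariate 0

def fisher_stat(y, X):
    pvals = [stats.pearsonr(X[:, j], y)[1] for j in range(X.shape[1])]
    return -2 * np.log(pvals).sum()

obs = fisher_stat(y, X)
perm = [fisher_stat(rng.permutation(y), X) for _ in range(2000)]
print("overall p =", (1 + sum(p >= obs for p in perm)) / (1 + len(perm)))
```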

Journal ArticleDOI
TL;DR: It is shown in this article that the multi-stage fallback procedure can be formulated as a closed testing procedure and thus controls the Type I error rate with respect to multiple dose-control comparisons as well as multiple endpoints.
Abstract: This article introduces a general testing procedure for performing dose-control comparisons in dose-response trials with one or more endpoints. The procedure (termed multi-stage fallback procedure) is an extension of the fallback test proposed by Wiens (2003). The multi-stage fallback procedure features a simple stepwise form and improves the power of dose-control tests at higher doses by taking into account the ordering of the doses. It also serves as an efficient tool for handling multiplicity caused by multiple endpoints. It is shown in this article that the multi-stage fallback procedure can be formulated as a closed testing procedure and thus controls the Type I error rate with respect to multiple dose-control comparisons as well as multiple endpoints. The proposed testing method is illustrated using examples from dose-response clinical trials with single and multiple endpoints.
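The single-stage fallback test of Wiens (2003), which the multi-stage procedure extends, is easy to state in code: split alpha across the ordered hypotheses and pass each rejected test's level on to the next. The p-values and weights are illustrative.

```python
# Hedged sketch of the (single-stage) fallback procedure.
def fallback(pvals, weights, alpha=0.025):
    carry, decisions = 0.0, []
    for p, w in zip(pvals, weights):
        level = alpha * w + carry          # this test's alpha plus any passed-on alpha
        reject = p <= level
        decisions.append(reject)
        carry = level if reject else 0.0   # only rejected tests pass alpha forward
    return decisions

# Doses ordered high to low, with most alpha placed where power is expected to be best:
print(fallback(pvals=[0.011, 0.030, 0.022], weights=[0.5, 0.3, 0.2]))  # [True, False, False]
```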

Journal ArticleDOI
Mike K. Smith, Scott Marshall
TL;DR: Simulations show that relatively modest sample sizes can yield informative results about the magnitude of the relative potency using this approach, and the operating characteristics are good when assessing model estimates against clinically important changes in relative potency.
Abstract: We wish to use prior information on an existing drug in the design and analysis of a dose-response study for a new drug candidate within the same pharmacological class. Using the Bayesian methodology, this prior information can be used quantitatively and the randomization can be weighted in favor of the new compound, where there is less information. An Emax model is used to describe the dose-response of the existing drug. The estimates from this model are used to provide informative prior information used for the design and analysis of the new study to establish the relative potency between the new compound and the existing drug therapy. The assumption is made that the data from previous trials and the new study are exchangeable. The impact of departures from this assumption can be quantified through simulations and by assessing the operating characteristics of various scenarios. Simulations show that relatively modest sample sizes can yield informative results about the magnitude of the relative potency, with good operating characteristics when assessing model estimates against clinically important changes in relative potency.

Journal ArticleDOI
TL;DR: Some of the various measures and study designs for evaluating different effects of vaccination are reviewed.
Abstract: Vaccination produces many different types of effects in individuals and in populations. The scientific and public health questions of interest determine the choice of measures of effect and study designs. Here we review some of the various measures and study designs for evaluating different effects of vaccination.

Journal ArticleDOI
TL;DR: Simulations show that the mixture model with diffuse priors can have better coverage probabilities for the prediction interval than the nonmixture models if a treatment effect is present and that with few events, these approaches produce substantially different results.
Abstract: Because power is primarily determined by the number of events in event-based clinical trials, the timing for interim or final analysis of data is often determined based on the accrual of events during the course of the study. Thus, it is of interest to predict early and accurately the time of a landmark interim or terminating event. Existing Bayesian methods may be used to predict the date of the landmark event, based on current enrollment, events, and loss to follow-up, if treatment arms are known. This work extends these methods to the case where the treatment arms are masked by using a parametric mixture model with a known mixture proportion. Posterior simulation using the mixture model is compared with methods assuming a single population. Comparison of the mixture model with the single-population approach shows that with few events, these approaches produce substantially different results and that these results converge as the prediction time is closer to the landmark event. Simulations show that the mixture model with diffuse priors can have better coverage probabilities for the prediction interval than the nonmixture models if a treatment effect is present.
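A stripped-down sketch of the single-population comparator (exponential event times, conjugate Gamma posterior for the rate, simulation of the wait until the landmark event count) conveys the prediction mechanics; the mixture extension would draw each masked subject's rate from the two-component mixture. Accrual and future dropout are ignored here, and all numbers are illustrative.

```python
# Hedged sketch: Bayesian prediction of time to a landmark event count.
import numpy as np

rng = np.random.default_rng(5)
d, exposure = 40, 800.0        # events so far, total observed follow-up (months)
at_risk, target = 120, 100     # subjects still at risk, total events needed

lam = rng.gamma(1e-3 + d, 1.0 / (1e-3 + exposure), size=4000)  # posterior rate draws
waits = []
for rate in lam:
    # gaps between successive events among a shrinking risk set:
    risk_set = np.arange(at_risk, at_risk - (target - d), -1)
    gaps = rng.exponential(1.0, size=target - d) / (rate * risk_set)
    waits.append(gaps.sum())
print("months to landmark (median, 90% interval):", np.percentile(waits, [50, 5, 95]).round(1))
```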

Journal ArticleDOI
TL;DR: This work investigates some approaches to p-value calculation in analyzing multi-stage Phase II clinical trials that have a binary variable, such as response, as the primary endpoint, and considers the orderings based on the maximum likelihood estimator and the uniformly minimum variance unbiased estimator.
Abstract: Due to ethical and practical issues, clinical trials are conducted in multiple stages, but the reported p-values often fail to reflect the design aspect of the trials. We investigate some approaches to p-value calculation in analyzing multi-stage Phase II clinical trials that have a binary variable, such as response, as the primary endpoint. The sample space consists of the paired outcomes of the stopping stage and the number of responses, which jointly define a complete and sufficient statistic for the true binomial proportion. Calculating a p-value requires an ordering of the paired outcomes so that outcomes more extreme than the observed can be identified. We consider the orderings based on the maximum likelihood estimator and the uniformly minimum variance unbiased estimator. We compare, using some examples, the p-values based on these alternative orderings and the one ignoring the multi-stage design aspect of Phase II trials.
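The MLE ordering is straightforward to enumerate for a two-stage design: each outcome is a (stopping stage, responses) pair, and the p-value sums the null probabilities of outcomes whose response proportion is at least the observed one. The design parameters below are an illustrative Simon-type layout, not from the paper.

```python
# Hedged sketch: design-respecting p-value for a two-stage Phase II trial,
# ordering outcomes by the MLE (observed response proportion).
from scipy.stats import binom

n1, r1, n2, p0 = 10, 1, 19, 0.10           # stop for futility if <= r1 responses in stage 1

def outcomes():
    for k in range(r1 + 1):                # trials stopped at stage 1
        yield k / n1, binom.pmf(k, n1, p0)
    for k1 in range(r1 + 1, n1 + 1):       # trials continuing to stage 2
        for k2 in range(n2 + 1):
            yield (k1 + k2) / (n1 + n2), binom.pmf(k1, n1, p0) * binom.pmf(k2, n2, p0)

observed = 7 / (n1 + n2)                   # e.g., 7 responses in 29 patients overall
pval = sum(pr for mle, pr in outcomes() if mle >= observed - 1e-12)
print(f"design-respecting p-value = {pval:.4f}")
```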

Journal ArticleDOI
TL;DR: The basic approach involved obtaining multiple contrasts for different problem-related contrast definitions; simultaneous confidence intervals for the ratio to placebo were used to assess superiority or noninferiority.
Abstract: According to the ICH E9 recommendation, the evaluation of randomized dose-finding trials focuses on the graphical presentation of different kinds of simultaneous confidence intervals: i) superiority of at least one dose vs. placebo with and without the assumption of order restriction, ii) noninferiority of at least one dose vs. active control, iii) identification of the minimum effective dose, iv) identification of the peak dose, v) identification of the maximum safe dose for a safety endpoint, and vi) estimation of simultaneous confidence intervals for “many-to-one-by-condition interaction contrasts.” Moreover, global tests for a monotone trend or a trend with a possible downturn effect are discussed. The basic approach involved obtaining multiple contrasts for different problem-related contrast definitions. For all approaches, definitions of relevance margins for superiority or noninferiority are needed. Because consensus on margins only exists for selected therapeutic areas and the definition of absolu...
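One item from the list, the many-to-one comparison against placebo, can be sketched with Dunnett's procedure; scipy >= 1.11 is assumed, the data are simulated, and the abstract's ratio-to-placebo intervals would need different machinery than the difference-based comparison shown here.

```python
# Hedged sketch: Dunnett-type many-to-one comparisons of doses vs. placebo.
import numpy as np
from scipy.stats import dunnett

rng = np.random.default_rng(6)
placebo = rng.normal(0.0, 1.0, 30)
low, mid, high = (rng.normal(m, 1.0, 30) for m in (0.2, 0.5, 0.8))

res = dunnett(low, mid, high, control=placebo, alternative="greater")
print("adjusted p-values (low, mid, high):", res.pvalue.round(4))
```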

Journal ArticleDOI
TL;DR: The local influence sensitivity tool is applied to a longitudinal depression trial, extending it to continuous outcomes from clinical trials; the optimal place for MNAR analyses is within a sensitivity analysis context.
Abstract: In the analyses of incomplete longitudinal clinical trial data, there has been a shift, away from simple ad hoc methods that are valid only if the data are missing completely at random (MCAR), to more principled (likelihood-based or Bayesian) ignorable analyses, which are valid under the less restrictive missing at random (MAR) assumption. The availability of the necessary standard statistical software allows for such analyses in practice. Although the possibility of data missing not at random (MNAR) cannot be ruled out, it is argued that analyses valid under MNAR are not well suited for the primary analysis in clinical trials. Therefore, rather than either forgetting about or blindly shifting to an MNAR framework, the optimal place for MNAR analyses is within a sensitivity analysis context. Such analyses can be used, for example, to assess how sensitive results from an ignorable analysis are to possible departures from MAR and how much results are affected by influential observations. In this article, we...

Journal ArticleDOI
TL;DR: A workshop on statistical thinking for scientists involved in pharmaceutical discovery research is described, aiming to improve the quality of research data through a structured approach to bias and variability and to establish a collaborative and informed relationship between scientists and statisticians by broadening their common basis.
Abstract: We describe a workshop on statistical thinking for scientists involved in pharmaceutical discovery research. The objectives were 1) to improve the quality of research data by developing a structured approach to bias and variability and 2) to establish a collaborative and informed relationship between scientists and statisticians by broadening their common basis. The cornerstone was the introduction of statistical thinking and the didactic route taken to achieve this goal.

Journal ArticleDOI
TL;DR: This Executive Summary of the PhRMA Working Group report discusses how the newly developed statistical methodology of adaptive design can be of help, identifying opportunities where these methods might be applied and addressing various statistical, logistical, and procedural issues that arise.
Abstract: The development process for a new drug is so lengthy and expensive that any acceleration of this process, however slight, to identify beneficial or problematic drugs early can lead to large savings...

Journal ArticleDOI
TL;DR: A conditional logistic regression model that accounts for within-randomization-unit correlation over time is described; it models risk of disease as a function of community-level covariates, with study arm and immunization levels forming the covariates of interest for the investigation of indirect effects.
Abstract: When a sufficiently high proportion of a population is immunized with a vaccine, reduction in secondary transmission of disease can confer significant protection to unimmunized population members. We propose a straightforward method to estimate the degree of this indirect effect of vaccination in the context of a community-randomized vaccine trial. A conditional logistic regression model that accounts for within-randomization unit correlation over time is described, which models risk of disease as a function of community-level covariates. The approach is applied to an example data set from a pneumococcal conjugate vaccine study, with study arm and immunization levels forming the covariates of interest for the investigation of indirect effects.

Journal ArticleDOI
TL;DR: The construction of optimal designs for dose-ranging trials with multiple periods is considered, where the outcome of the trial is considered to be a binary response: the success or failure of a drug to bring about a particular change in the subject after a given time.
Abstract: Pharmacodynamics (PD) is the study of the biochemical and physiological effects of drugs. The construction of optimal designs for dose-ranging trials with multiple periods is considered in this paper, where the outcome of the trial (the effect of the drug) is considered to be a binary response: the success or failure of a drug to bring about a particular change in the subject after a given amount of time. The carryover effect of each dose from one period to the next is assumed to be proportional to the direct effect. It is shown for a logistic regression model that the efficiency of an optimal parallel (single-period) or crossover (two-period) design is substantially greater than that of a balanced design. The optimal designs are also shown to be robust to misspecification of the values of the parameters. Finally, the parallel and crossover designs are combined to provide the experimenter with greater flexibility.

Journal ArticleDOI
TL;DR: A randomized two-treatment single period response adaptive design is developed by combining two contrasting aspects (i.e., ethics and optimality), where optimality is defined in a meaningful way.
Abstract: In the present work, we develop a randomized two-treatment single period response adaptive design by combining two contrasting aspects (i.e., ethics and optimality), where optimality is defined in a meaningful way. We compare this rule with some of the existing rules by computing various performance measures of the rules.

Journal ArticleDOI
TL;DR: A ratio hypothesis is defined directly in terms of the hazard; its natural test statistic is shown to have the desired asymptotic normality, and the demand on sample size is much reduced.
Abstract: There are essentially two kinds of non-inferiority hypotheses in an active control trial: fixed margin and ratio hypotheses. In a fixed margin hypothesis, the margin is a prespecified constant and the hypothesis is defined in terms of a single parameter that represents the effect of the active treatment relative to the control. The statistical inference for a fixed margin hypothesis is straightforward. The outstanding issue for a fixed margin non-inferiority hypothesis is how to select the margin, a task that may not be as simple as it appears. The selection of a fixed non-inferiority margin has been discussed in a few articles (Chi et al., 2003; Hung et al., 2003; Ng, 1993). In a ratio hypothesis, the control effect is also considered as an unknown parameter, and the noninferiority hypothesis is then formulated as a ratio in terms of these two parameters, the treatment effect and the control effect. This type of non-inferiority hypothesis has also been called the fraction retention hypothesis because the ratio hypothesis can be interpreted as a retention of certain fraction of the control effect. Rothmann et al. (2003) formulated a ratio non-inferiority hypothesis in terms of log hazards in the time-to-event setting. To circumvent the complexity of having to deal with a ratio test statistic, the ratio hypothesis was linearized to an equivalent hypothesis under the assumption that the control effect is positive. An associated test statistic for this linearized hypothesis was developed. However, there are three important issues that are not addressed by this method. First, the retention fraction being defined in terms of log hazard is difficult to interpret. Second, in order to linearize the ratio hypothesis, Rothmann's method has to assume that the true control effect is positive. Third, the test statistic is not powerful and thus requires a huge sample size, which renders the method impractical. In this paper, a ratio hypothesis is defined directly in terms of the hazard. A natural ratio test statistic can be defined and is shown to have the desired asymptotic normality. The demand on sample size is much reduced. In most commonly encountered situations, the sample size required is less than half of those needed by either the fixed margin approach or Rothmann's method.
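To make the two formulations concrete, a hedged LaTeX sketch in generic notation (ours, not the paper's, which works directly with hazards): theta is the effect of the new treatment relative to the control, and theta_T, theta_C are the treatment and control effects relative to placebo.

```latex
% Fixed-margin form, with prespecified constant \delta > 0:
H_0:\ \theta \le -\delta
  \quad\text{vs.}\quad
H_1:\ \theta > -\delta
% Ratio (fraction-retention) form, with retained fraction \rho \in (0, 1):
H_0:\ \frac{\theta_T}{\theta_C} \le \rho
  \quad\text{vs.}\quad
H_1:\ \frac{\theta_T}{\theta_C} > \rho
```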

Journal ArticleDOI
TL;DR: This work investigates the efficiencies of the conventional optimal designs that do not incorporate potential missing information relative to the proposed designs and examines the impact of restricted dose range on the resulting optimal designs.
Abstract: In a dose-response study, there are frequently multiple goals and not all planned observations are realized at the end of the study. Subjects drop out and the initial design can be quite different from the final design. Consequently, the final design can be inefficient. Single- and multiple-objective Bayesian optimal designs that account for potentially missing observations in quantal response models were recently proposed in Baek (2005). In this work, we investigate the efficiencies of the conventional optimal designs that do not incorporate potential missing information relative to our proposed designs. Furthermore, we examine the impact of restricted dose range on the resulting optimal designs. As an application, we used missing data information from a study by Yocum et al. (2003) to design a study for estimating dose levels of tacrolimus that will result in a certain percentage of rheumatoid arthritis patients having an ACR20 response at 6 months.

Journal ArticleDOI
TL;DR: This article takes a systematic approach to find an efficient estimate of the maximum tolerated dose under the assumption that the dose-response curve has a true underlying logistic distribution.
Abstract: Both parametric and nonparametric sequential designs and estimation methods are implemented in phase I clinical trials. In this article, we take a systematic approach, consisting of a start-up design, a follow-on design, a sequential dose-finding design, and an estimation method, to find an efficient estimate of the maximum tolerated dose under the assumption that the dose-response curve has a true underlying logistic distribution. In particular, for the problem of the nonexistence of the maximum likelihood estimates of the logistic parameters, a constraint on the probability of an undetermined maximum likelihood estimator (MLE) is incorporated into the parametric sequential designs. In addition, this approach can also be extended to incorporate ethical considerations, which prohibit an administered dose from exceeding the maximum acceptable dose. Comparisons based on simulation studies between the systematic designs and nonparametric designs are described, both for continuous and discrete dose spaces.

Journal ArticleDOI
TL;DR: The present paper provides a version of the recently proposed drop-the-loser rule for continuous responses, incorporating covariate information in the allocation procedure.
Abstract: An adaptive design was proposed and studied by Bandyopadhyay and Biswas (2001) for comparing two treatments having continuous responses, with covariates at hand, in a phase III clinical trial. Separately, a drop-the-loser urn design was proposed by Ivanova (2003), which is known to have the least variability among urn-based adaptive designs for binary responses. The drop-the-loser rule for continuous data was recently introduced by Ivanova et al. (2006). But neither of these works considered covariates in the allocation design. The present paper provides a version of the newly proposed adaptive design, the drop-the-loser rule, for continuous responses that incorporates covariate information in the allocation procedure. Several exact and limiting properties of the design, and also of a simpler version of it, are studied. We compare the design of Bandyopadhyay and Biswas (2001) with the covariate-adjusted drop-the-loser-type rule for continuous responses and conclude that, althou...

Journal ArticleDOI
TL;DR: Two different treatment effect parameters that are adopted in most analyses of clinical data: the study-end treatment effect and the last-observed treatment effect are discussed and compared.
Abstract: Patient dropout often occurs in clinical trials with multiple scheduled visits, which results in a great challenge in the analysis of incomplete data. As the first step, one has to define a relevant treatment effect parameter, which is not straightforward in the presence of dropout. We discuss and compare two different treatment effect parameters that are adopted in most analyses of clinical data: the study-end treatment effect and the last-observed treatment effect. Some related issues, such as the estimability of causal parameters, the dependence of study parameters on the dropout patterns, and the use of last observation carried forward, are also discussed.
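Since the last-observed estimand is what last observation carried forward (LOCF) actually targets, a short sketch of LOCF on an illustrative toy dataset makes the distinction tangible:

```python
# Hedged sketch: LOCF imputation carries each subject's last value forward.
import pandas as pd

visits = pd.DataFrame({
    "subject": [1, 1, 1, 2, 2, 2],
    "visit":   [1, 2, 3, 1, 2, 3],
    "score":   [10.0, 12.0, None, 9.0, None, None],   # missing after dropout
})
visits["score_locf"] = visits.groupby("subject")["score"].ffill()
print(visits)  # subject 1's visit-3 score becomes 12.0; subject 2's visits 2-3 become 9.0
```

An analysis of score_locf estimates something closer to the last-observed treatment effect than to the study-end effect, which is the abstract's central caution.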

Journal ArticleDOI
TL;DR: The effectiveness of the methodology is demonstrated on the data analyzed by Thompson and Pocock (1987), showing the power of the new approach to meta-analysis to find statistical agreement in what looks like great disagreement via a chi-squared test.
Abstract: This article addresses the problem of heterogeneity among various studies to be combined in a meta-analysis. We adopt quasi-empirical Bayes methodology to predict the odds ratios for each study. As a result, the predicted odds ratios are pulled toward the estimated common odds ratio of the studies under consideration. With strong heterogeneity among the studies, we jointly consider the display of the 95% CIs of the ORs and Dixon's test (1950) for "outliers" to exclude the "extreme" estimated ORs. We demonstrate the effectiveness of our methodology on the data analyzed by Thompson and Pocock (1987), showing the power of the new approach to find statistical agreement, via a chi-squared test, in what looks like great disagreement. We believe our technique (i.e., prediction in the minimum mean-square sense) will go a long way toward increasing the trustworthiness of meta-analysis.
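The shrinkage idea can be sketched with a standard empirical-Bayes stand-in for the paper's quasi-empirical Bayes predictor: pull each study's log odds ratio toward the inverse-variance common estimate, more strongly for noisier studies. All numbers are illustrative.

```python
# Hedged sketch: empirical-Bayes shrinkage of study odds ratios.
import numpy as np

log_or = np.array([0.45, -0.10, 0.80, 0.30])   # per-study log odds ratios
v = np.array([0.04, 0.06, 0.12, 0.05])         # their sampling variances

w = 1.0 / v
common = (w * log_or).sum() / w.sum()                 # fixed-effect common estimate
tau2 = max(0.0, np.var(log_or, ddof=1) - v.mean())    # crude between-study variance
shrunk = (tau2 * log_or + v * common) / (tau2 + v)    # posterior means, N(common, tau2) prior
print("common OR:", np.exp(common).round(3))
print("shrunken ORs:", np.exp(shrunk).round(3))
```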