
Showing papers in "Journal of Biopharmaceutical Statistics" in 2018


Journal ArticleDOI
TL;DR: This article proposes a stratified win ratio statistic in a similar way to the Mantel-Haenszel stratified odds ratio, derives a general form of its variance estimator with a plug-in of existing or potentially new variance/covariance estimators of the number of wins for the two treatment groups, and assesses its statistical performance using simulation studies.
Abstract: The win ratio was first proposed in 2012 by Pocock and his colleagues to analyze a composite endpoint while considering the clinical importance order and the relative timing of its components. It has attracted considerable attention since then, in applications as well as methodology. It is not uncommon that some clinical trials require a stratified analysis. In this article, we propose a stratified win ratio statistic in a similar way as the Mantel-Haenszel stratified odds ratio, derive a general form of its variance estimator with a plug-in of existing or potentially new variance/covariance estimators of the number of wins for the two treatment groups, and assess its statistical performance using simulation studies. Our simulations show that our proposed Mantel-Haenszel-type stratified win ratio performs similarly to the Mantel-Haenszel stratified odds ratio for the simplified situation when the win ratio reduces to the odds ratio, and our proposed stratified win ratio is preferred compared to the inverse-variance weighted win ratio and unweighted win ratio particularly when the data are sparse. We also formulate a homogeneity test following Cochran's approach that assesses whether the stratum-specific win ratios are homogeneous across strata, as this method is used frequently in meta-analyses and a better test for the win ratio homogeneity is not available yet.
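
A minimal sketch of how a Mantel-Haenszel-type stratified win ratio could be computed for a single ordinal outcome (larger is better), pooling pairwise wins and losses across strata. The inverse-stratum-size weights and the function names are illustrative assumptions, not the authors' exact estimator; the composite-endpoint hierarchy and the variance estimator from the paper are omitted.

```python
import numpy as np

def pairwise_wins(treat, control):
    """Count pairwise wins and losses for the treatment group within one stratum."""
    t = np.asarray(treat)[:, None]
    c = np.asarray(control)[None, :]
    return np.sum(t > c), np.sum(t < c)   # (wins, losses); ties count for neither

def stratified_win_ratio(strata):
    """strata: list of (treatment_outcomes, control_outcomes) tuples, one per stratum."""
    num = den = 0.0
    for treat, control in strata:
        wins, losses = pairwise_wins(treat, control)
        w = 1.0 / (len(treat) + len(control))   # Mantel-Haenszel-style weight (assumed form)
        num += w * wins
        den += w * losses
    return num / den

strata = [([3.2, 4.1, 5.0], [2.8, 3.0, 4.5]),   # stratum 1
          ([1.1, 2.2], [0.9, 1.5, 2.0])]        # stratum 2
print(stratified_win_ratio(strata))
```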

42 citations


Journal ArticleDOI
TL;DR: The most commonly used extensions of the CART and random forest algorithms to right-censored outcomes are described, with a focus on how the different splitting rules and methods for cost-complexity pruning affect these algorithms.
Abstract: A crucial component of making individualized treatment decisions is to accurately predict each patient’s disease risk. In clinical oncology, disease risks are often measured through time-to-event data, such as overall survival and progression/recurrence-free survival, and are often subject to censoring. Risk prediction models based on recursive partitioning methods are becoming increasingly popular largely due to their ability to handle nonlinear relationships, higher-order interactions, and/or high-dimensional covariates. The most popular recursive partitioning methods are versions of the Classification and Regression Tree (CART) algorithm, which builds a simple interpretable tree structured model. With the aim of increasing prediction accuracy, the random forest algorithm averages multiple CART trees, creating a flexible risk prediction model. Risk prediction models used in clinical oncology commonly use both traditional demographic and tumor pathological factors as well as high-dimensional gene...
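
As an illustration of the kind of splitting rule such survival trees use, the sketch below computes the two-sample log-rank statistic that scores a candidate split by the survival separation between the two child nodes. This is a generic textbook form, not any of the specific implementations reviewed in the article.

```python
import numpy as np

def logrank_statistic(time1, event1, time2, event2):
    """Chi-square(1)-scale log-rank statistic comparing two candidate child nodes."""
    times = np.concatenate([time1, time2])
    events = np.concatenate([event1, event2])
    group = np.concatenate([np.zeros(len(time1)), np.ones(len(time2))])
    obs_minus_exp, var = 0.0, 0.0
    for t in np.unique(times[events == 1]):            # distinct event times
        at_risk = times >= t
        n = at_risk.sum()
        n1 = (at_risk & (group == 0)).sum()
        d = ((times == t) & (events == 1)).sum()        # events at t, both groups
        d1 = ((times == t) & (events == 1) & (group == 0)).sum()
        obs_minus_exp += d1 - d * n1 / n
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return obs_minus_exp ** 2 / var

# Toy example: larger values indicate a better split
t1 = np.array([5, 8, 12, 20]); e1 = np.array([1, 1, 0, 1])
t2 = np.array([3, 4, 6, 9]);   e2 = np.array([1, 1, 1, 0])
print(logrank_statistic(t1, e1, t2, e2))
```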

36 citations


Journal ArticleDOI
TL;DR: This article reviews recent advances in p-value-based multiple test procedures (MTPs) and presents gatekeeping MTPs (Dmitrienko and Tamhane, 2007) for hierarchically ordered families of hypotheses with logical relations among them.
Abstract: In this article we review recent advances in p-value-based multiple test procedures (MTPs). We begin with a brief review of the basic tests of Bonferroni and Simes. Standard stepwise MTPs derived from them using the closure method of Marcus et al. (1976) are discussed next. They include the well-known MTPs of Holm (1979), Hochberg (1988) and Hommel (1988), and their extensions and improvements. This is followed by stepwise MTPs for a priori ordered hypotheses. Next we present gatekeeping MTPs (Dmitrienko and Tamhane, 2007) for hierarchically ordered families of hypotheses with logical relations among them. Finally, we give a brief review of the graphical approach (Bretz et al., 2009) to constructing and visualizing gatekeeping and other MTPs. Simple numerical examples are given to illustrate the various procedures.
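
For concreteness, the sketch below implements adjusted p-values for two of the reviewed procedures, Holm's step-down and Hochberg's step-up tests, in their standard textbook forms.

```python
import numpy as np

def holm(pvals):
    """Holm step-down adjusted p-values."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)                 # smallest p first
    adj = np.empty(m)
    running_max = 0.0
    for rank, idx in enumerate(order):
        running_max = max(running_max, (m - rank) * p[idx])
        adj[idx] = min(1.0, running_max)
    return adj

def hochberg(pvals):
    """Hochberg step-up adjusted p-values."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)[::-1]           # largest p first
    adj = np.empty(m)
    running_min = 1.0
    for rank, idx in enumerate(order):
        running_min = min(running_min, (rank + 1) * p[idx])
        adj[idx] = min(1.0, running_min)
    return adj

p = [0.011, 0.02, 0.005, 0.09]
print(holm(p))       # Holm-adjusted p-values
print(hochberg(p))   # Hochberg-adjusted p-values
```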

35 citations


Journal ArticleDOI
TL;DR: This paper shows that one-sample estimation, two-sample comparison and regression analysis of conditional survival distributions can be conducted using the regular methods for unconditional survival distributions that are provided by the standard statistical software, such as SAS and SPSS.
Abstract: We investigate the survival distribution of the patients who have survived over a certain time period. This is called a conditional survival distribution. In this paper, we show that one-sample estimation, two-sample comparison and regression analysis of conditional survival distributions can be conducted using the regular methods for unconditional survival distributions that are provided by the standard statistical software, such as SAS and SPSS. We conduct extensive simulations to evaluate the finite sample property of these conditional survival analysis methods. We illustrate these methods with real clinical data.
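
A minimal sketch of the core idea in a simple one-sample setting: keep only subjects still at risk at the landmark time t0, reset their clocks to zero, and apply an ordinary Kaplan-Meier estimator. The paper's two-sample and regression extensions using standard software are not reproduced here.

```python
import numpy as np

def kaplan_meier(time, event):
    """Return (event_times, survival_probabilities) from a standard KM estimator."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    ts = np.unique(time[event == 1])
    surv, out = 1.0, []
    for t in ts:
        n_risk = np.sum(time >= t)
        d = np.sum((time == t) & (event == 1))
        surv *= 1 - d / n_risk
        out.append(surv)
    return ts, np.array(out)

def conditional_km(time, event, t0):
    """KM estimate of S(t | T > t0) using only subjects followed beyond t0."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    keep = time > t0
    return kaplan_meier(time[keep] - t0, event[keep])

time  = np.array([2, 5, 7, 9, 12, 15, 18, 20])
event = np.array([1, 1, 0, 1, 1, 0, 1, 0])
print(conditional_km(time, event, t0=6))
```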

27 citations


Journal ArticleDOI
TL;DR: An overview of the class of trials known as "master protocols," including basket trials, umbrella trials, and platform trials, is provided, with standardized terminology and, for each, a motivating example with modeling details and decision rules.
Abstract: Within the field of cancer research, discovery of biomarkers and genetic mutations that are potentially predictive of treatment benefit is motivating a paradigm shift in how cancer clinical trials are conducted. In this review, we provide an overview of the class of trials known as "master protocols," including basket trials, umbrella trials, and platform trials. For each, we describe standardized terminology, provide a motivating example with modeling details and decision rules, and discuss statistical advantages and limitations. We conclude with a discussion of general statistical considerations and challenges encountered across these types of trials.

25 citations


Journal ArticleDOI
Yongqiang Tang
TL;DR: This paper derived the sample size formulae for comparing two negative binomial rates based on both the relative and absolute rate difference metrics in noninferiority and equivalence trials with unequal follow-up times, and established an approximate relationship between the sample sizes required for the treatment comparison based on the two treatment effect metrics.
Abstract: We derive the sample size formulae for comparing two negative binomial rates based on both the relative and absolute rate difference metrics in noninferiority and equivalence trials with unequal follow-up times, and establish an approximate relationship between the sample sizes required for the treatment comparison based on the two treatment effect metrics. The proposed method allows the dispersion parameter to vary by treatment groups. The accuracy of these methods is assessed by simulations. It is demonstrated that ignoring the between-subject variation in the follow-up time by setting the follow-up time for all individuals to be the mean follow-up time may greatly underestimate the required size, resulting in underpowered studies. Methods are provided for back-calculating the dispersion parameter based on the published summary results.
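
A rough sketch of a sample-size calculation of this general type on the log rate-ratio scale, using the common approximation Var(log rate estimate) ≈ (1/(t·λ) + k)/n with mean follow-up t and dispersion k. This simplified formula with 1:1 allocation and a common follow-up time is my own illustration, not the paper's formulae, which additionally handle between-subject variation in follow-up and the absolute rate difference metric.

```python
import numpy as np
from scipy.stats import norm

def n_per_arm(lam1, lam2, k1, k2, t, margin_log_rr=0.0, alpha=0.025, power=0.9):
    """Approximate n per arm (1:1) for a one-sided test of the log rate ratio."""
    z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)
    effect = abs(np.log(lam2 / lam1) - margin_log_rr)
    var_unit = (1 / (t * lam1) + k1) + (1 / (t * lam2) + k2)  # n * Var(log rate ratio)
    return int(np.ceil((z_a + z_b) ** 2 * var_unit / effect ** 2))

# Example: rates 0.8 vs 0.6 events/year, dispersion 0.7 in both arms, 1.5 years mean follow-up
print(n_per_arm(lam1=0.8, lam2=0.6, k1=0.7, k2=0.7, t=1.5))
```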

19 citations


Journal ArticleDOI
TL;DR: A detailed review of multiplicity issues arising in exploratory subgroup analysis is provided, and a case study based on a Phase III oncology trial is presented to discuss the details of subgroup search algorithms with resampling-based multiplicity adjustment procedures.
Abstract: The general topic of subgroup identification has attracted much attention in the clinical trial literature due to its important role in the development of tailored therapies and personalized medicine. Subgroup search methods are commonly used in late-phase clinical trials to identify subsets of the trial population with certain desirable characteristics. Post-hoc or exploratory subgroup exploration has been criticized for being extremely unreliable. Principled approaches to exploratory subgroup analysis based on recent advances in machine learning and data mining have been developed to address this criticism. These approaches emphasize fundamental statistical principles, including the importance of performing multiplicity adjustments to account for selection bias inherent in subgroup search. This article provides a detailed review of multiplicity issues arising in exploratory subgroup analysis. Multiplicity corrections in the context of principled subgroup search will be illustrated using the family of SIDES (subgroup identification based on differential effect search) methods. A case study based on a Phase III oncology trial will be presented to discuss the details of subgroup search algorithms with resampling-based multiplicity adjustment procedures.
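
The sketch below illustrates only the resampling principle behind such multiplicity adjustments: the best subgroup's statistic is referred to the permutation distribution of the maximum statistic over all candidate subgroups under shuffled treatment labels. It is not the SIDES algorithm itself, and the candidate subgroups and effect measure are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def subgroup_stats(y, trt, subgroups):
    """Z-statistics for the treatment effect within each candidate subgroup."""
    stats = []
    for s in subgroups:                       # s: boolean mask defining a subgroup
        y1, y0 = y[s & (trt == 1)], y[s & (trt == 0)]
        se = np.sqrt(y1.var(ddof=1) / len(y1) + y0.var(ddof=1) / len(y0))
        stats.append((y1.mean() - y0.mean()) / se)
    return np.array(stats)

def adjusted_pvalue(y, trt, subgroups, n_perm=2000):
    obs_max = subgroup_stats(y, trt, subgroups).max()
    count = 0
    for _ in range(n_perm):
        perm_trt = rng.permutation(trt)       # re-randomize treatment labels
        if subgroup_stats(y, perm_trt, subgroups).max() >= obs_max:
            count += 1
    return (count + 1) / (n_perm + 1)

# Toy data: two binary biomarkers define four candidate subgroups
n = 200
x1, x2 = rng.integers(0, 2, n).astype(bool), rng.integers(0, 2, n).astype(bool)
trt = rng.integers(0, 2, n)
y = 0.5 * trt * x1 + rng.normal(size=n)       # effect only in the x1-positive subgroup
print(adjusted_pvalue(y, trt, [x1, ~x1, x2, ~x2]))
```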

18 citations


Journal ArticleDOI
TL;DR: Some clinical trial designs finding active use in co-development of therapeutics and predictive biomarkers to inform their use in oncology are reviewed.
Abstract: The established molecular heterogeneity of human cancers has had profound effects on the design of cancer therapeutics. Most cancer drugs are today targeted to molecular alterations present in cancer cells. Tumors of the same primary site, however, often differ with regard to the alterations that they harbor. Consequently, this heterogeneity has required the development of new paradigms for clinical development. In this paper, we review some clinical trial designs finding active use in co-development of therapeutics and predictive biomarkers to inform their use in oncology.

17 citations


Journal ArticleDOI
TL;DR: The design, data monitoring, and analysis of clinical trials with co-primary endpoints are reviewed, and recently developed methods for fixed-sample and group-sequential settings are described.
Abstract: We review the design, data monitoring, and analyses of clinical trials with co-primary endpoints. Recently developed methods for fixed-sample and group-sequential settings are described. Practical considerations are discussed, and guidance for the application of these methods is provided.
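
A minimal fixed-sample sketch of the joint power such methods evaluate: with two one-sided co-primary tests that must both be significant, and z-statistics treated as bivariate normal with correlation rho, the success probability is a bivariate normal orthant probability. The noncentrality formula below assumes a standard two-sample z-test and is purely illustrative.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def copri_power(theta1, theta2, rho, alpha=0.025):
    """P(both one-sided tests significant) for bivariate normal z-statistics."""
    c = norm.ppf(1 - alpha)
    # P(Z1 > c, Z2 > c) written as a lower-orthant probability of (-Z1, -Z2)
    return multivariate_normal.cdf([-c, -c], mean=[-theta1, -theta2],
                                   cov=[[1, rho], [rho, 1]])

# Example: noncentralities from standardized effects d_i with n per arm
n, d1, d2, rho = 150, 0.35, 0.30, 0.5
theta1, theta2 = d1 * np.sqrt(n / 2), d2 * np.sqrt(n / 2)
print(copri_power(theta1, theta2, rho))
```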

15 citations


Journal ArticleDOI
TL;DR: Two new enrichment designs are compared with the biomarker stratified design (BSD) in terms of the number of randomized patients and the cost of the trial under scenarios mimicking real biomarker stratified trials.
Abstract: In the era of precision medicine, drugs are increasingly developed to target subgroups of patients with certain biomarkers. In large all-comer trials using a biomarker stratified design, the cost of treating and following patients for clinical outcomes may be prohibitive. With a fixed number of randomized patients, the efficiency of testing certain treatments parameters, including the treatment effect among biomarker-positive patients and the interaction between treatment and biomarker, can be improved by increasing the proportion of biomarker positives on study, especially when the prevalence rate of biomarker positives is low in the underlying patient population. When the cost of assessing the true biomarker is prohibitive, one can further improve the study efficiency by oversampling biomarker positives with a cheaper auxiliary variable or a surrogate biomarker that correlates with the true biomarker. To improve efficiency and reduce cost, we can adopt an enrichment strategy for both scenarios by concentrating on testing and treating patient subgroups that contain more information about specific treatment parameters of primary interest to the investigators. In the first scenario, an enriched biomarker stratified design enriches the cohort of randomized patients by directly oversampling the relevant patients with the true biomarker, while in the second scenario, an auxiliary-variable-enriched biomarker stratified design enriches the randomized cohort based on an inexpensive auxiliary variable, thereby avoiding testing the true biomarker on all screened patients and reducing treatment waiting time. For both designs, we discuss how to choose the optimal enrichment proportion when testing a single hypothesis or two hypotheses simultaneously. At a requisite power, we compare the two new designs with the BSD design in terms of the number of randomized patients and the cost of trial under scenarios mimicking real biomarker stratified trials. The new designs are illustrated with hypothetical examples for designing biomarker-driven cancer trials.
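
A small sketch of the basic trade-off: for the test of the treatment effect among biomarker-positive patients, only the gamma*N biomarker-positive randomized patients contribute, so under 1:1 allocation the one-sided power is roughly Phi(delta*sqrt(gamma*N/4) - z_alpha). Screening cost, the interaction test, and the auxiliary-variable version studied in the paper are not modeled here.

```python
import numpy as np
from scipy.stats import norm

def subgroup_power(N, gamma, delta, alpha=0.025):
    """Approximate power for the biomarker-positive subgroup test.
    N: total randomized; gamma: enrichment proportion of positives;
    delta: standardized treatment effect among positives."""
    return norm.cdf(delta * np.sqrt(gamma * N / 4) - norm.ppf(1 - alpha))

N, delta = 400, 0.4
for gamma in (0.2, 0.4, 0.6, 0.8, 1.0):   # 0.2 would be no enrichment if prevalence is 20%
    print(f"enrichment proportion {gamma:.1f}: power {subgroup_power(N, gamma, delta):.3f}")
```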

15 citations


Journal ArticleDOI
TL;DR: It is shown by comprehensive simulation studies that the proposed Bayesian adaptive design for dose finding in cancer phase I clinical trials is safe and can estimate the maximum tolerated dose (MTD) more efficiently than the original EWOC design.
Abstract: We present a Bayesian adaptive design for dose finding in cancer phase I clinical trials. The goal is to estimate the maximum tolerated dose (MTD) after possible modification of the dose range during the trial. Parametric models are used to describe the relationship between the dose and the probability of dose-limiting toxicity (DLT). We investigate model reparameterization in terms of the probabilities of DLT at the minimum and maximum available doses at the start of the trial. Trial design proceeds using escalation with overdose control (EWOC), where at each stage of the trial we seek the dose of the agent such that the posterior probability of exceeding the MTD of this agent is bounded by a feasibility bound. At any time during the trial, we test whether the MTD is below or above the minimum and maximum doses, respectively. If during the trial there is evidence that the MTD is outside the range of doses, we extend the range of doses and complete the trial with the planned sample size. At the end of the trial, a Bayes estimate of the MTD is proposed. We evaluate design operating characteristics in terms of safety of the trial design and efficiency of the MTD estimate under various scenarios and model misspecification. The methodology is further compared to the original EWOC design. We showed by comprehensive simulation studies that the proposed method is safe and can estimate the MTD more efficiently than the original EWOC design.
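
A bare-bones sketch of the EWOC allocation rule described above: form a posterior for a two-parameter logistic dose-toxicity model on a grid (a flat grid prior is assumed here), derive the implied MTD posterior, and assign the next patient the dose at the feasibility-bound quantile so the posterior probability of overdosing is controlled. The paper's reparameterization, priors, and dose-range extension are not reproduced.

```python
import numpy as np

def next_dose(doses, dlts, target=0.33, feasibility=0.25,
              a_grid=np.linspace(-6, 2, 81), b_grid=np.linspace(0.05, 3, 60)):
    """doses, dlts: arrays of administered (standardized) doses and 0/1 DLT outcomes."""
    A, B = np.meshgrid(a_grid, b_grid, indexing="ij")
    loglik = np.zeros_like(A)
    for x, y in zip(doses, dlts):
        p = 1 / (1 + np.exp(-(A + B * x)))          # logistic dose-toxicity model
        loglik += y * np.log(p) + (1 - y) * np.log(1 - p)
    post = np.exp(loglik - loglik.max())
    post /= post.sum()                               # flat prior on the grid (assumed)
    mtd = (np.log(target / (1 - target)) - A) / B    # dose with P(DLT) = target
    order = np.argsort(mtd, axis=None)
    cum = np.cumsum(post.ravel()[order])
    return mtd.ravel()[order][np.searchsorted(cum, feasibility)]

doses = np.array([0.0, 0.0, 0.5, 0.5, 1.0])
dlts  = np.array([0,   0,   0,   1,   0])
print(next_dose(doses, dlts))   # feasibility-bound quantile of the MTD posterior
```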

Journal ArticleDOI
TL;DR: The book by Dr. Herson concentrates on pharmaceutical industry-sponsored confirmatory clinical trials and can serve as an excellent source of knowledge and a useful reference for the medical community.
Abstract: The book by Dr. Herson is written amazingly well. The book concentrates on pharmaceutical industry-sponsored confirmatory clinical trials and can serve as excellent sources of knowledge for all the...

Journal ArticleDOI
TL;DR: An extensive simulation study examining the comparative performance of six multiple imputation methods available in the SAS MI procedure for longitudinal binary data suggested that results from the naive approaches of single imputation of non-responders and complete case analysis can be very sensitive to missing data.
Abstract: Longitudinal binary data are commonly encountered in clinical trials. Multiple imputation is an approach for obtaining valid estimates of treatment effects under the assumption of a missing-at-random mechanism. Although there are a variety of multiple imputation methods for longitudinal binary data, only a limited number of studies have reported on the relative performance of these methods. Moreover, when focusing on the treatment effect throughout a period, which has often been used in clinical evaluations of specific disease areas, no definitive investigations comparing the methods have been available. We conducted an extensive simulation study to examine the comparative performance of six multiple imputation methods available in the SAS MI procedure for longitudinal binary data, where two endpoints, responder rates at a specified time point and throughout a period, were assessed. The simulation study suggested that results from the naive approaches of single imputation of non-responders and complete case analysis can be very sensitive to missing data. The multiple imputation methods using a monotone method and a full conditional specification with a logistic regression imputation model are recommended for obtaining unbiased and robust estimates of the treatment effect. The methods are illustrated with data from a mental health study.
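
For intuition, the sketch below performs a bare-bones sequential logistic-regression imputation for monotone missing longitudinal binary data, in the spirit of the monotone and FCS-logistic options compared in the paper. It is not the SAS MI procedure, and it omits proper-imputation draws of the regression coefficients.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def impute_once(Y):
    """Y: (n_subjects, n_visits) array of 0/1 with np.nan for monotone dropout."""
    Y = Y.copy()
    for j in range(1, Y.shape[1]):                 # impute visit j from earlier visits
        miss = np.isnan(Y[:, j])
        if not miss.any():
            continue
        model = LogisticRegression().fit(Y[~miss, :j], Y[~miss, j])
        p = model.predict_proba(Y[miss, :j])[:, 1]
        Y[miss, j] = rng.binomial(1, p)            # draw, don't just plug in the mode
    return Y

# Toy data: 6 subjects, 3 visits, dropouts after visit 1 and visit 2
Y = np.array([[1, 1, 1], [0, 0, 0], [1, 0, 1],
              [0, 1, np.nan], [1, np.nan, np.nan], [0, 1, 0]], dtype=float)
imputations = [impute_once(Y) for _ in range(5)]   # 5 imputed datasets
print(np.mean([m[:, -1].mean() for m in imputations]))  # pooled responder rate at last visit
```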

Journal ArticleDOI
TL;DR: This work considers observational studies with a survival outcome and proposes to use Random Survival Forest with Weighted Bootstrap (RSFWB) to model the counterfactual outcomes while marginalizing over the auxiliary covariates.
Abstract: A personalized treatment policy requires defining the optimal treatment for each patient based on their clinical and other characteristics. Here we consider a commonly encountered situation in practice, when analyzing data from observational cohorts, that there are auxiliary variables which affect both the treatment and the outcome, yet these variables are not of primary interest to be included in a generalizable treatment strategy. Furthermore, there is not enough prior knowledge of the effect of the treatments or of the importance of the covariates for us to explicitly specify the dependency between the outcome and different covariates, thus we choose a model that is flexible enough to accommodate the possibly complex association of the outcome on the covariates. We consider observational studies with a survival outcome and propose to use Random Survival Forest with Weighted Bootstrap (RSFWB) to model the counterfactual outcomes while marginalizing over the auxiliary covariates. By maximizing the restricted mean survival time, we estimate the optimal regime for a target population based on a selected set of covariates. Simulation studies illustrate that the proposed method performs reliably across a range of different scenarios. We further apply RSFWB to a prostate cancer study.
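
The regime-selection step described above can be illustrated with a small sketch: given predicted counterfactual survival curves for a patient under each treatment (simple stand-in curves here, rather than random-survival-forest output), compute the restricted mean survival time up to a horizon tau and pick the treatment with the larger value.

```python
import numpy as np

def rmst(times, surv, tau):
    """Area under a predicted survival curve up to tau (trapezoidal rule)."""
    keep = times <= tau
    t = np.concatenate([[0.0], times[keep], [tau]])
    s = np.concatenate([[1.0], surv[keep], [surv[keep][-1] if keep.any() else 1.0]])
    return float(np.sum(0.5 * (s[1:] + s[:-1]) * np.diff(t)))

# Toy predictions for one patient under treatments A and B on a common time grid (months)
grid = np.linspace(0.5, 60, 120)
surv_A = np.exp(-0.02 * grid)      # stand-ins for forest-predicted curves
surv_B = np.exp(-0.03 * grid)
tau = 36.0
best = "A" if rmst(grid, surv_A, tau) > rmst(grid, surv_B, tau) else "B"
print(best)                        # treatment with the larger predicted RMST
```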

Journal ArticleDOI
TL;DR: Five imputation algorithms are described for imputing partially observed recurrent events modeled by a negative binomial process, or more generally by a mixed Poisson process when the mean function for the recurrent events is continuous over time.
Abstract: Five algorithms are described for imputing partially observed recurrent events modeled by a negative binomial process, or more generally by a mixed Poisson process when the mean function for the recurrent events is continuous over time. We also discuss how to perform the imputation when the mean function of the event process has jump discontinuities. The validity of these algorithms is assessed by simulations. These imputation algorithms are potentially very useful in the implementation of pattern mixture models, which have been popularly used as sensitivity analyses under the non-ignorability assumption in clinical trials. A chronic granulomatous disease trial is analyzed for illustrative purposes.
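
One of the simplest cases, a constant rate under a gamma-frailty (negative binomial) model, can be sketched as follows: draw the subject's frailty from its gamma posterior given the observed count and exposure, draw the future count, and place the imputed event times uniformly over the unobserved interval. This is my illustration of the general idea; the paper's five algorithms also cover time-varying mean functions and jump discontinuities.

```python
import numpy as np

rng = np.random.default_rng(2)

def impute_future_events(n_obs, t_obs, t_total, rate, k):
    """n_obs events seen on (0, t_obs]; impute events on (t_obs, t_total].
    rate: marginal event rate; k: dispersion (Var = mu + k*mu^2)."""
    shape0, rate0 = 1.0 / k, 1.0 / k                      # Gamma frailty with mean 1
    shape_post = shape0 + n_obs
    rate_post = rate0 + rate * t_obs                      # posterior given observed exposure
    frailty = rng.gamma(shape_post, 1.0 / rate_post)
    n_future = rng.poisson(frailty * rate * (t_total - t_obs))
    return np.sort(rng.uniform(t_obs, t_total, n_future))  # constant rate -> uniform times

# Example: patient with 3 events in the first 0.6 years of a 1-year trial
print(impute_future_events(n_obs=3, t_obs=0.6, t_total=1.0, rate=2.0, k=0.8))
```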

Journal ArticleDOI
TL;DR: When optimizing the regulatory rules, in terms of the minimal required sample size and the Type I error in Phase III, one has to consider how these rules will modify the commercial optimization made by the sponsor.
Abstract: For a new candidate drug to become an approved medicine, several decision points have to be passed. In this article, we focus on two of them: First, based on Phase II data, the commercial sponsor decides to invest (or not) in Phase III. Second, based on the outcome of Phase III, the regulator determines whether the drug should be granted market access. Assuming a population of candidate drugs with a distribution of true efficacy, we optimize the two stakeholders' decisions and study the interdependence between them. The regulator is assumed to seek to optimize the total public health benefit resulting from the efficacy of the drug and a safety penalty. In optimizing the regulatory rules, in terms of minimal required sample size and the Type I error in Phase III, we have to consider how these rules will modify the commercial optimization made by the sponsor. The results indicate that different Type I errors should be used depending on the rarity of the disease.

Journal ArticleDOI
TL;DR: This article studies the properties of two extreme CRS methods, i.e., combining multiple reference test results by the "any-positive" rule or by the "all-positive" rule, and proposes a new approach, "dual composite reference standards" (dCRS), based on these two extreme methods to reduce the biases of the estimates.
Abstract: A main challenge in molecular diagnostic research is to accurately evaluate the performance of a new nucleic acid amplification test when the reference standard is imperfect. Several approaches, such as discrepant analysis, composite reference standard (CRS) method, or latent class analysis (LCA), are commonly applied for this purpose by combining multiple imperfect (reference) test results. In discrepant analysis or LCA, test results from the new assay are often involved in the construction of a new pseudo-reference standard, which results in the potential risk of overestimating the parameters of interest. On the contrary, the CRS methods only combine the results of reference tests, which is more preferable in practice. In this article, we study the properties of two extreme CRS methods, i.e., combining multiple reference test results by the "any positive" rule or by the "all-positive" rule, and propose a new approach "dual composite reference standards (dCRS)" based on these two extreme methods to reduce the biases of the estimates. Simulations are performed for various scenarios and the proposed approach is applied to two real datasets. The results demonstrate that our approach outperforms other commonly used approaches and therefore is recommended for future applications.
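
A small sketch of the two extreme CRS rules discussed here, assuming two binary reference tests R1 and R2: the "any-positive" composite calls a subject positive if either reference is positive, the "all-positive" composite only if both are, and the new test's apparent sensitivity and specificity are computed against each. The dCRS combination itself is not reproduced.

```python
import numpy as np

def apparent_performance(T, R1, R2):
    """Apparent sensitivity/specificity of test T against the two composite references."""
    composites = {"any-positive": R1 | R2, "all-positive": R1 & R2}
    out = {}
    for name, ref in composites.items():
        sens = np.mean(T[ref == 1])
        spec = np.mean(1 - T[ref == 0])
        out[name] = (round(sens, 3), round(spec, 3))
    return out

rng = np.random.default_rng(3)
truth = rng.binomial(1, 0.3, 500)                      # unobserved true status (simulation only)
T  = np.where(truth == 1, rng.binomial(1, 0.95, 500), rng.binomial(1, 0.05, 500))
R1 = np.where(truth == 1, rng.binomial(1, 0.85, 500), rng.binomial(1, 0.03, 500))
R2 = np.where(truth == 1, rng.binomial(1, 0.80, 500), rng.binomial(1, 0.02, 500))
print(apparent_performance(T, R1, R2))
```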

Journal ArticleDOI
TL;DR: A simulated annealing approach is presented to search over the space of decision rules and other parameters for an adaptive enrichment design to minimize the expected number enrolled or expected duration, while preserving the appropriate power and Type I error rate.
Abstract: An adaptive enrichment design is a randomized trial that allows enrollment criteria to be modified at interim analyses, based on a preset decision rule. When there is prior uncertainty regarding treatment effect heterogeneity, these trial designs can provide improved power for detecting treatment effects in subpopulations. We present a simulated annealing approach to search over the space of decision rules and other parameters for an adaptive enrichment design. The goal is to minimize the expected number enrolled or expected duration, while preserving the appropriate power and Type I error rate. We also explore the benefits of parallel computation in the context of this goal. We find that optimized designs can be substantially more efficient than simpler designs using Pocock or O'Brien-Fleming boundaries.
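
A generic simulated annealing loop of the kind described can be sketched as below; the two design parameters and the smooth placeholder objective are purely hypothetical stand-ins for the simulation-based evaluation of expected sample size under power and Type I error constraints.

```python
import numpy as np

rng = np.random.default_rng(4)

def expected_size(params):
    """Placeholder objective: in practice this would simulate the adaptive
    enrichment trial and return expected enrollment subject to the constraints."""
    b_futility, b_enrich = params           # hypothetical decision-rule parameters
    return 300 + 80 * (b_futility - 0.3) ** 2 + 120 * (b_enrich - 0.6) ** 2

def anneal(obj, x0, n_iter=5000, temp0=10.0, step=0.05):
    x = np.array(x0, float)
    fx = obj(x)
    best, fbest = x.copy(), fx
    for i in range(n_iter):
        temp = temp0 * (1 - i / n_iter) + 1e-6
        cand = x + rng.normal(0, step, size=x.size)     # perturb the design parameters
        fcand = obj(cand)
        if fcand < fx or rng.random() < np.exp(-(fcand - fx) / temp):
            x, fx = cand, fcand                          # accept the move
            if fx < fbest:
                best, fbest = x.copy(), fx
    return best, fbest

print(anneal(expected_size, x0=[0.5, 0.5]))
```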

Journal ArticleDOI
TL;DR: Simple methods to address sample size calculations for a “new” study with different research questions and scenarios are provided and framed in terms of estimation/precision or statistical testing to allow investigators to choose the best suited method for their goals.
Abstract: Blinding is a critical component in randomized clinical trials along with treatment effect estimation and comparisons between the treatments. Various methods have been proposed for the statistical analyses of blinding-related data, but there is little guidance for determining the sample size for this type of data, especially if blinding assessment is done in pilot studies. In this paper, we try to fill this gap and provide simple methods to address sample size calculations for a "new" study with different research questions and scenarios. The proposed methods are framed in terms of estimation/precision or statistical testing to allow investigators to choose the best suited method for their goals. We illustrate the methods using worked examples with real data.
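
As one concrete and deliberately generic example of a precision-based calculation in this spirit, the sketch below sizes a blinding assessment so that a Wald confidence interval for a guess proportion has a target half-width; the paper's methods, including those built on formal blinding indices, are more elaborate than this.

```python
import numpy as np
from scipy.stats import norm

def n_for_precision(p_anticipated, half_width, conf=0.95):
    """Sample size so the Wald CI for a proportion has the desired half-width."""
    z = norm.ppf(1 - (1 - conf) / 2)
    return int(np.ceil(z ** 2 * p_anticipated * (1 - p_anticipated) / half_width ** 2))

# Example: anticipated 60% correct guesses, desired half-width of 10 percentage points
print(n_for_precision(p_anticipated=0.6, half_width=0.10))   # about 93 subjects
```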

Journal ArticleDOI
TL;DR: A “zone-finding stage” is proposed that determines the most admissible toxicity zone on the dose combination matrix and subsequently selects the doses allocated to the next patient from that zone in Phase I, which would eventually improve the accuracy of optimal dose combination identification in Phase II.
Abstract: In Phase I/II trials for a combination therapy of two agents, we ideally want to explore as many dose combinations as possible with limited sample size in Phase I and to reduce the number of untrie...

Journal ArticleDOI
TL;DR: This research seeks to use gene expression profiling in the development of a statistical decision-making algorithm to identify patients whose survival rates will improve from ACT treatment.
Abstract: In treating patients diagnosed with early Stage I non-small-cell lung cancer (NSCLC), doctors must choose surgery alone, Adjuvant Cisplatin-Based Chemotherapy (ACT) alone or both. For patients with resected stages IB to IIIA, clinical trials have shown a survival advantage of 4-15% with the adoption of ACT. However, due to the inherent toxicity of chemotherapy, it is necessary for doctors to identify patients whose chance of success with ACT is sufficient to justify the risks. This research seeks to use gene expression profiling in the development of a statistical decision-making algorithm to identify patients whose survival rates will improve from ACT treatment. Using data from the National Cancer Institute, the lasso method in the Cox proportional hazards regression model is used as the main method to determine a feasible number of genes that are strongly associated with treatment-related patient survival. Considering treatment groups separately, the patients are assigned a risk category based on the estimation of survival times. These risk categories are used to develop a Random Forests classification model to identify patients who are likely to benefit from chemotherapy treatment. This model allows the prediction of a new patient's prognosis and the likelihood of survival benefit from ACT treatment based on a feasible number of genomic biomarkers. The proposed methods are evaluated using a simulation study.

Journal ArticleDOI
TL;DR: A biomarker threshold adaptive design with survival endpoints is described, which determines subgroups for one or more biomarkers such that patients in these subgroups benefit the most from the new treatment.
Abstract: Due to the importance of precision medicine, it is essential to identify the right patients for the right treatment. Biomarkers, which have been commonly used in clinical research as well as in clinical practice, can facilitate selection of patients with a good response to the treatment. In this paper, we describe a biomarker threshold adaptive design with survival endpoints. In the first stage, we determine subgroups for one or more biomarkers such that patients in these subgroups benefit the most from the new treatment. The analysis in this stage can be based on historical or pilot studies. In the second stage, we sample subjects from the subgroups determined in the first stage and randomly allocate them to the treatment or control group. Extensive simulation studies are conducted to examine the performance of the proposed design. Application to a real data example is provided for implementation of the first-stage algorithms.

Journal ArticleDOI
Steven Sun
TL;DR: Statisticians often face unique challenges in the design and analysis of cancer clinical trials, and this review discusses a book addressing those challenges.
Abstract: Statisticians often face unique challenges with design or analysis of cancer clinical trials. While many books are available on the topic of cancer clinical trials, most of them are either for gene...

Journal ArticleDOI
TL;DR: Three variations of the regularization methods for response-adaptive randomization (RAR) are examined; the burn-in method showed the smallest variability compared with the clip and power transformation (PT) methods, and with an efficacy early stopping rule all three methods performed more similarly.
Abstract: We examine three variations of the regularization methods for response-adaptive randomization (RAR) and compare their operating characteristics. A power transformation (PT) is applied to refine the randomization probability. The clip method is used to bound the randomization probability within specified limits. A burn-in period of equal randomization (ER) can be added before adaptive randomization (AR). For each method, more patients are assigned to the superior arm and the overall response rate increases as the scheme approximates simple AR, while statistical power increases as it approximates ER. We evaluate the performance of the three methods by varying the tuning parameter to control the extent of AR so as to achieve the same statistical power. When there is no early stopping rule, the PT method generally performed best in yielding a higher proportion assigned to the superior arm and a higher overall response rate, but with larger variability. The burn-in method showed the smallest variability compared with the clip method and the PT method. With the efficacy early stopping rule, all three methods performed more similarly. The PT and clip methods are better than the burn-in method in achieving a higher proportion randomized to the superior arm and a higher overall response rate, but the burn-in method required fewer patients in the trial. By carefully choosing the method and the tuning parameter, RAR methods can be tailored to strike a balance between achieving the desired statistical power and enhancing the overall response rate.
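
The three regularization devices can be sketched for a two-arm binary-outcome trial with Beta(1,1) priors: the raw adaptive probability P(p_B > p_A | data) is refined by a power transformation, bounded by clipping, and preceded by an equal-randomization burn-in. The tuning constants below are arbitrary examples, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(5)

def prob_B_better(succ, n, draws=10000):
    """Posterior P(p_B > p_A) under independent Beta(1,1) priors, by Monte Carlo."""
    pa = rng.beta(1 + succ[0], 1 + n[0] - succ[0], draws)
    pb = rng.beta(1 + succ[1], 1 + n[1] - succ[1], draws)
    return np.mean(pb > pa)

def randomization_prob(succ, n, n_enrolled, power=0.5, clip=(0.2, 0.8), burn_in=20):
    if n_enrolled < burn_in:                             # burn-in: equal randomization
        return 0.5
    p = prob_B_better(succ, n)
    p = p ** power / (p ** power + (1 - p) ** power)     # power transformation (PT)
    return float(np.clip(p, *clip))                      # clip method

succ, n = np.array([4, 8]), np.array([15, 15])           # interim data: arm A, arm B
print(randomization_prob(succ, n, n_enrolled=30))        # probability of assigning arm B
```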

Journal ArticleDOI
TL;DR: A statistical strategy is proposed that takes into account the similarity evidence from analytical assessments and PK studies in the design and analysis of the clinical efficacy study in order to address residual uncertainty and enhance statistical power and precision.
Abstract: To improve patients’ access to safe and effective biological medicines, abbreviated licensure pathways for biosimilar and interchangeable biological products have been established in the US...

Journal ArticleDOI
TL;DR: In this article, the Fieller method has been applied to obtain a confidence interval for the ratio between the combined weighted kappa coefficient and the weighting index of each diagnostic test.
Abstract: The combination of two binary diagnostic tests in order to increase the accuracy of the diagnosis of a disease is a frequent procedure in clinical practice. When considering the losses associated w...
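
For reference, a generic form of Fieller's interval for a ratio theta = mu1/mu2, given estimates (a, b), variances (v11, v22), and covariance v12, is sketched below; the article's application to weighted kappa coefficients supplies these inputs, which are arbitrary numbers here.

```python
import numpy as np
from scipy.stats import norm

def fieller_ci(a, b, v11, v22, v12, conf=0.95):
    """Fieller CI for mu1/mu2: roots of (b^2 - z^2 v22) th^2 - 2(ab - z^2 v12) th + (a^2 - z^2 v11)."""
    z = norm.ppf(1 - (1 - conf) / 2)
    A = b ** 2 - z ** 2 * v22
    B = -2 * (a * b - z ** 2 * v12)
    C = a ** 2 - z ** 2 * v11
    disc = B ** 2 - 4 * A * C
    if A <= 0 or disc < 0:
        return None                 # region is unbounded or empty, not a finite interval
    lo = (-B - np.sqrt(disc)) / (2 * A)
    hi = (-B + np.sqrt(disc)) / (2 * A)
    return lo, hi

print(fieller_ci(a=0.72, b=0.85, v11=0.004, v22=0.003, v12=0.001))
```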

Journal ArticleDOI
TL;DR: This paper assesses 97 methods for making two-tailed asymptotic inferences about the difference d with independent proportions and selects the optimal methods, both for one tail and for two (methods related to the arcsine transformation and the Wald method).
Abstract: Two-tailed asymptotic inferences for the difference d = p2 − p1 with independent proportions have been widely studied in the literature. Nevertheless, the case of one tail has received less attenti...
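
Two of the generic building blocks behind such methods, a Wald z-statistic and an arcsine (variance-stabilizing) transformation statistic for one-tailed inference about d = p2 - p1, are sketched below; these textbook forms are illustrative only and are not the specific variants the paper recommends.

```python
import numpy as np
from scipy.stats import norm

def wald_one_sided(x1, n1, x2, n2):
    """Wald z-statistic and one-sided p-value for H1: p2 > p1."""
    p1, p2 = x1 / n1, x2 / n2
    se = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z = (p2 - p1) / se
    return z, norm.sf(z)

def arcsine_one_sided(x1, n1, x2, n2):
    """Arcsine-transformation z-statistic and one-sided p-value for H1: p2 > p1."""
    phi = np.arcsin(np.sqrt(x2 / n2)) - np.arcsin(np.sqrt(x1 / n1))
    z = phi / np.sqrt(1 / (4 * n1) + 1 / (4 * n2))
    return z, norm.sf(z)

print(wald_one_sided(12, 80, 22, 85))
print(arcsine_one_sided(12, 80, 22, 85))
```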

Journal ArticleDOI
TL;DR: A novel decision tree-based approach applicable in randomized clinical trials is proposed that models the prognostic effects of the biomarkers using additive regression trees and the biomarker-by-treatment effect using a single regression tree.
Abstract: Personalized medicine, or tailored therapy, has been an active and important topic in recent medical research. Many methods have been proposed in the literature for predictive biomarker detection and subgroup identification. In this article, we propose a novel decision tree-based approach applicable in randomized clinical trials. We model the prognostic effects of the biomarkers using additive regression trees and the biomarker-by-treatment effect using a single regression tree. A Bayesian approach is utilized to periodically revise the split variables and the split rules of the decision trees, which provides a better overall fit. A Gibbs sampler is implemented in the MCMC procedure, which updates the prognostic trees and the interaction tree separately. We use the posterior distribution of the interaction tree to construct the predictive scores of the biomarkers and to identify the subgroup where the treatment is superior to the control. Numerical simulations show that our proposed method performs well under various settings compared to existing methods. We also demonstrate an application of our method in a real clinical trial.

Journal ArticleDOI
TL;DR: The regulatory principles related to multiplicity issues in confirmatory clinical trials intended to support a marketing authorization application in the EU are outlined, and the reasons for the increasing complexity of multiple hypothesis testing are described.
Abstract: Recently, new draft guidelines on multiplicity issues in clinical trials have been issued by the European Medicines Agency (EMA) and the Food and Drug Administration (FDA), respectively. Multiplicity is an issue in clinical trials if the probability of a false-positive decision is increased by insufficiently accounting for testing multiple hypotheses. We outline the regulatory principles related to multiplicity issues in confirmatory clinical trials intended to support a marketing authorization application in the EU, describe the reasons for the increasing complexity of multiple hypothesis testing, and discuss the specific multiplicity issues that emerge within the regulatory context and are relevant for drug approval.

Journal ArticleDOI
TL;DR: A new Bayesian method, the Beta prior BInomial model for Risk Differences (B-BIRD), which takes into account prior information about rare events, is proposed; it performs well in low event rate settings.
Abstract: Bayesian meta-analysis has been more frequently utilized for synthesizing safety and efficacy information to support landmark decision-making due to its flexibility of incorporating prior information and availability of computing software. However, when the outcome is binary and the events are rare, where event counts can be zero, conventional meta-analysis methods including Bayesian methods may not work well. Several methods have been proposed to tackle this issue but the prior knowledge of event rate was not utilized to increase precision of risk difference estimates. To better estimate risk differences, we propose a new Bayesian method, Beta prior BInomial model for Risk Differences (B-BIRD), which takes into account the prior information of rare events. B-BIRD is illustrated using a real data set of 48 clinical trials about a type 2 diabetes drug. In simulation studies, it performs well in low event rate settings.