Author

Dominic Magirr

Other affiliations: Lancaster University, Medical University of Vienna, AstraZeneca
Bio: Dominic Magirr is an academic researcher from Novartis. The author has contributed to research in the topics of sample size determination and mathematics, has an h-index of 9, and has co-authored 24 publications receiving 369 citations. Previous affiliations of Dominic Magirr include Lancaster University and the Medical University of Vienna.

Papers
Journal ArticleDOI
TL;DR: In this article, the authors generalize the Dunnett test to derive efficacy and futility boundaries for a flexible multi-arm multi-stage clinical trial for a normally distributed endpoint with known variance.
Abstract: We generalize the Dunnett test to derive efficacy and futility boundaries for a flexible multi-arm multi-stage clinical trial for a normally distributed endpoint with known variance. We show that the boundaries control the familywise error rate in the strong sense. The method is applicable for any number of treatment arms, number of stages and number of patients per treatment per stage. It can be used for a wide variety of boundary types or rules derived from α-spending functions. Additionally, we show how sample size can be computed under a least favourable configuration power requirement and derive formulae for expected sample sizes.
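The Dunnett-type boundary idea can be illustrated numerically. The sketch below is a minimal single-stage Monte Carlo illustration (not the paper's implementation): for K experimental arms compared with a shared control and equal group sizes, the pairwise Z-statistics are equicorrelated with rho = 0.5, and the familywise critical value is the (1 - alpha) quantile of their maximum. Function name and simulation settings are illustrative.

```python
import numpy as np

def dunnett_critical_value(n_arms, alpha=0.025, n_sim=200_000, seed=1):
    """Monte Carlo estimate of a one-sided Dunnett-type critical value.

    Simulates Z_k = (E_k - E_0) / sqrt(2), which gives the rho = 0.5
    equicorrelation structure of treatment-vs-control comparisons, and
    returns the (1 - alpha) quantile of max_k Z_k so that the
    familywise error rate under the global null equals alpha.
    """
    rng = np.random.default_rng(seed)
    e0 = rng.standard_normal(n_sim)              # control-arm noise
    ek = rng.standard_normal((n_sim, n_arms))    # experimental-arm noise
    z = (ek - e0[:, None]) / np.sqrt(2.0)        # corr(Z_j, Z_k) = 0.5
    return float(np.quantile(z.max(axis=1), 1.0 - alpha))

c3 = dunnett_critical_value(n_arms=3)
```

For K = 3 and one-sided alpha = 0.025 this lands near 2.35, noticeably above the unadjusted value of 1.96, which is the price of strong familywise error control; with K = 1 it recovers the usual single-comparison critical value.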

127 citations

Journal ArticleDOI
TL;DR: A broad range of statistical issues related to multi-arm multi-stage trials is explored, including a comparison of different ways to power a multi-arm multi-stage trial; choosing the allocation ratio of the control group relative to the experimental arms; and the consequences of adding additional experimental arms during a multi-arm multi-stage trial, and how one might control the type-I error rate when this is necessary.
Abstract: Multi-arm multi-stage designs can improve the efficiency of the drug-development process by evaluating multiple experimental arms against a common control within one trial. This reduces the number of patients required compared to a series of trials testing each experimental arm separately against control. By allowing for multiple stages, experimental treatments can be eliminated early from the study if they are unlikely to be significantly better than control. Using the TAILoR trial as a motivating example, we explore a broad range of statistical issues related to multi-arm multi-stage trials including a comparison of different ways to power a multi-arm multi-stage trial; choosing the allocation ratio to the control group compared to other experimental arms; the consequences of adding additional experimental arms during a multi-arm multi-stage trial, and how one might control the type-I error rate when this is necessary; and modifying the stopping boundaries of a multi-arm multi-stage design to account for unknown variance in the treatment outcome. Multi-arm multi-stage trials represent a large financial investment, and so considering their design carefully is important to ensure efficiency and that they have a good chance of succeeding.
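One design question raised here, the control-group allocation ratio, has a classical answer: with K experimental arms of equal size, the variance of each treatment-vs-control contrast (for a fixed total sample size) is minimised when the control arm receives about sqrt(K) times as many patients as each experimental arm. A minimal numerical check, with illustrative names not taken from the paper:

```python
def comparison_variance(r, k, total_n=100.0):
    """Variance (up to sigma^2) of one treatment-vs-control contrast
    when the control gets r times the patients of each of k arms."""
    n1 = total_n / (r + k)   # patients per experimental arm
    n0 = r * n1              # patients in the control arm
    return 1.0 / n0 + 1.0 / n1

# Grid search over allocation ratios r for k = 4 experimental arms;
# the minimiser should sit near sqrt(4) = 2.
ratios = [1 + 0.01 * i for i in range(300)]
best = min(ratios, key=lambda r: comparison_variance(r, 4))
```

This is why equal allocation (r = 1) is inefficient in multi-arm designs: the control arm enters every comparison, so oversampling it reduces all contrast variances at once.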

82 citations

Journal ArticleDOI
10 Feb 2016-PLOS ONE
TL;DR: It is shown that the final test statistic may ignore a substantial subset of the observed event times; an alternative test incorporating all event times is derived, though a conservative assumption must be made in order to guarantee type I error control.
Abstract: Mid-study design modifications are becoming increasingly accepted in confirmatory clinical trials, so long as appropriate methods are applied such that error rates are controlled. It is therefore unfortunate that the important case of time-to-event endpoints is not easily handled by the standard theory. We analyze current methods that allow design modifications to be based on the full interim data, i.e., not only the observed event times but also secondary endpoint and safety data from patients who are yet to have an event. We show that the final test statistic may ignore a substantial subset of the observed event times. An alternative test incorporating all event times is found, where a conservative assumption must be made in order to guarantee type I error control. We examine the power of this approach using the example of a clinical trial comparing two cancer therapies.

39 citations

Journal ArticleDOI
TL;DR: In this paper, a new class of weighted logrank tests (WLRTs) is proposed to control the risk of concluding that a new drug is more efficacious than standard of care, when, in fact, it is uniformly inferior.
Abstract: We propose a new class of weighted logrank tests (WLRTs) that control the risk of concluding that a new drug is more efficacious than standard of care, when, in fact, it is uniformly inferior. Perhaps surprisingly, this risk is not controlled for WLRT in general. Tests from this new class can be constructed to have high power under a delayed-onset treatment effect scenario, as well as being almost as efficient as the standard logrank test under proportional hazards.
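As a rough illustration of the class of statistics involved, here is a minimal two-sample weighted logrank sketch. It implements a generic weight-per-event-time statistic, not the specific modestly weighted tests proposed in the paper, and all names and the toy data are illustrative; with unit weights it reduces to the standard logrank test.

```python
import math

def weighted_logrank_z(times, events, group, weights=None):
    """Two-sample weighted logrank Z-statistic (generic sketch).

    times: event/censoring times; events: 1 = event, 0 = censored;
    group: 0 = control, 1 = experimental; weights: one weight per
    distinct event time (defaults to 1, the standard logrank test).
    """
    data = sorted(zip(times, events, group))
    event_times = sorted({t for t, e, g in data if e == 1})
    if weights is None:
        weights = [1.0] * len(event_times)
    num, var = 0.0, 0.0
    for w, t in zip(weights, event_times):
        at_risk = [(ti, ei, gi) for ti, ei, gi in data if ti >= t]
        n = len(at_risk)
        if n < 2:
            continue  # hypergeometric variance undefined for n = 1
        n1 = sum(1 for ti, ei, gi in at_risk if gi == 1)
        d = sum(1 for ti, ei, gi in at_risk if ti == t and ei == 1)
        d1 = sum(1 for ti, ei, gi in at_risk
                 if ti == t and ei == 1 and gi == 1)
        num += w * (d1 - d * n1 / n)                    # observed - expected
        var += w * w * d * (n - d) * n1 * (n - n1) / (n * n * (n - 1))
    return num / math.sqrt(var)

# Toy data: group 1 has events at t = 1, 3; group 0 at t = 2, 4.
z = weighted_logrank_z([1, 2, 3, 4], [1, 1, 1, 1], [1, 0, 1, 0])
```

Choosing weights that grow over time (as in the late-event example in the test below) is what gives such tests power under delayed-onset effects; the paper's point is that unrestricted weights can sacrifice the error-control guarantee the abstract describes.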

33 citations

Journal ArticleDOI
TL;DR: The results show that an impressive overall procedure can be found by combining a well chosen pre-planned design with an application of the conditional error principle to allow flexible treatment selection.
Abstract: Adaptive designs that are based on group-sequential approaches have the benefit of being efficient as stopping boundaries can be found that lead to good operating characteristics with test decisions based solely on sufficient statistics. The drawback of these so called 'pre-planned adaptive' designs is that unexpected design changes are not possible without impacting the error rates. 'Flexible adaptive designs' on the other hand can cope with a large number of contingencies at the cost of reduced efficiency. In this work, we focus on two different approaches for multi-arm multi-stage trials, which are based on group-sequential ideas, and discuss how these 'pre-planned adaptive designs' can be modified to allow for flexibility. We then show how the added flexibility can be used for treatment selection and sample size reassessment and evaluate the impact on the error rates in a simulation study. The results show that an impressive overall procedure can be found by combining a well chosen pre-planned design with an application of the conditional error principle to allow flexible treatment selection.
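The conditional error principle mentioned in this abstract has a simple closed form in the two-stage normal case, which the sketch below illustrates; it is a generic textbook-style illustration under standard group-sequential notation, not the authors' procedure, and the parameter values in the test are invented. Under H0 the final statistic decomposes as Z2 = sqrt(t) * Z1 + sqrt(1 - t) * Z* with Z* ~ N(0, 1) independent, where t is the interim information fraction; any design modification that keeps the conditional rejection probability at or below this value preserves the type I error rate.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def conditional_error(z1, info_frac, final_crit):
    """Conditional type-I error of a two-stage test given interim z1:
    P(Z2 > final_crit | Z1 = z1) under H0, using the decomposition
    Z2 = sqrt(t) * z1 + sqrt(1 - t) * Z*.
    """
    t = info_frac
    return 1.0 - norm_cdf((final_crit - math.sqrt(t) * z1)
                          / math.sqrt(1.0 - t))
```

As expected, the conditional error is increasing in the interim statistic: a promising interim result leaves a larger error budget for the redesigned remainder of the trial.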

30 citations


Cited by
Journal ArticleDOI
TL;DR: Multiple Comparisons: Theory and Methods. Journal of Quality Technology: Vol. 29, No. 3, p. 359.
Abstract: (1997). Multiple Comparisons: Theory and Methods. Journal of Quality Technology: Vol. 29, No. 3, p. 359.

402 citations

Journal ArticleDOI
TL;DR: This tutorial paper provides guidance on key aspects of adaptive designs that are relevant to clinical triallists, and emphasises the general principles of transparency and reproducibility and suggest how best to put them into practice.
Abstract: Adaptive designs can make clinical trials more flexible by utilising results accumulating in the trial to modify the trial’s course in accordance with pre-specified rules. Trials with an adaptive design are often more efficient, informative and ethical than trials with a traditional fixed design since they often make better use of resources such as time and money, and might require fewer participants. Adaptive designs can be applied across all phases of clinical research, from early-phase dose escalation to confirmatory trials. The pace of the uptake of adaptive designs in clinical research, however, has remained well behind that of the statistical literature introducing new methods and highlighting their potential advantages. We speculate that one factor contributing to this is that the full range of adaptations available to trial designs, as well as their goals, advantages and limitations, remains unfamiliar to many parts of the clinical community. Additionally, the term adaptive design has been misleadingly used as an all-encompassing label to refer to certain methods that could be deemed controversial or that have been inadequately implemented. We believe that even if the planning and analysis of a trial is undertaken by an expert statistician, it is essential that the investigators understand the implications of using an adaptive design, for example, what the practical challenges are, what can (and cannot) be inferred from the results of such a trial, and how to report and communicate the results. This tutorial paper provides guidance on key aspects of adaptive designs that are relevant to clinical triallists. We explain the basic rationale behind adaptive designs, clarify ambiguous terminology and summarise the utility and pitfalls of adaptive designs. We discuss practical aspects around funding, ethical approval, treatment supply and communication with stakeholders and trial participants. 
Our focus, however, is on the interpretation and reporting of results from adaptive design trials, which we consider vital for anyone involved in medical research. We emphasise the general principles of transparency and reproducibility and suggest how best to put them into practice.

368 citations

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a bandit-based patient allocation rule that overcomes the issue of low power, thus removing a potential barrier for their use in practice, and evaluated their performance compared to other allocation rules, including fixed randomization.
Abstract: Multi-armed bandit problems (MABPs) are a special type of optimal control problem well suited to model resource allocation under uncertainty in a wide variety of contexts. Since the first publication of the optimal solution of the classic MABP by a dynamic index rule, the bandit literature quickly diversified and emerged as an active research topic. Across this literature, the use of bandit models to optimally design clinical trials became a typical motivating application, yet little of the resulting theory has ever been used in the actual design and analysis of clinical trials. To this end, we review two MABP decision-theoretic approaches to the optimal allocation of treatments in a clinical trial: the infinite-horizon Bayesian Bernoulli MABP and the finite-horizon variant. These models possess distinct theoretical properties and lead to separate allocation rules in a clinical trial design context. We evaluate their performance compared to other allocation rules, including fixed randomization. Our results indicate that bandit approaches offer significant advantages, in terms of assigning more patients to better treatments, and severe limitations, in terms of their resulting statistical power. We propose a novel bandit-based patient allocation rule that overcomes the issue of low power, thus removing a potential barrier for their use in practice.
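To make the trade-off described here concrete, below is a minimal Thompson sampling sketch for Bernoulli arms. It illustrates the generic bandit behaviour the abstract evaluates (most patients drift toward the better treatment), not the authors' novel allocation rule, and the response probabilities and seed are invented for illustration.

```python
import numpy as np

def thompson_allocation(p_true, n_patients, seed=7):
    """Bernoulli Thompson sampling allocation (illustrative sketch).

    Each arm keeps a Beta(1 + successes, 1 + failures) posterior; each
    patient is assigned to the arm with the largest posterior draw.
    Returns the number of patients assigned to each arm.
    """
    rng = np.random.default_rng(seed)
    k = len(p_true)
    succ, fail = np.zeros(k), np.zeros(k)
    for _ in range(n_patients):
        draws = rng.beta(1 + succ, 1 + fail)   # one posterior draw per arm
        arm = int(np.argmax(draws))
        if rng.random() < p_true[arm]:
            succ[arm] += 1
        else:
            fail[arm] += 1
    return succ + fail

# Two arms with true response rates 0.3 and 0.6.
counts = thompson_allocation([0.3, 0.6], 500)
```

The skewed allocation this produces is exactly what depletes the control-arm sample and drives the loss of statistical power discussed in the abstract.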

260 citations

Journal ArticleDOI
TL;DR: A flexible method of extending a study based on conditional power, where the significance of the treatment difference at the planned end is used to determine the number of additional observations needed and the critical value necessary after accruing those additional observations.

252 citations