
Showing papers by "Mark W. Fraser published in 2017"


Journal ArticleDOI
TL;DR: Monte Carlo simulations are used to demonstrate a Bayesian approach to sample-size determination in the development of interventions from one study to the next; the Bayesian approach tends to require smaller samples than the classical frequentist approach.
Abstract: Objective: In intervention research, the decision to continue developing a new program or treatment is dependent on both the change-inducing potential of a new strategy (i.e., its effect size) and the methods used to measure change, including the size of samples. This article describes a Bayesian approach to determining sample sizes in the sequential development of interventions. Description: Because sample sizes are related to the likelihood of detecting program effects, large samples are preferred. But in the design and development process that characterizes intervention research, smaller scale studies are usually required to justify more costly, larger scale studies. We present 4 scenarios designed to address common but complex questions regarding sample-size determination and the risk of observing misleading (e.g., false-positive) findings. From a Bayesian perspective, this article describes the use of decision rules composed of different target probabilities and prespecified effect sizes. Mon...
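The decision-rule logic described in the abstract can be illustrated with a short simulation. The sketch below is a minimal, hypothetical Python example, not the article's method: it assumes a normal-normal conjugate model with a weakly informative prior, and the prespecified effect size, target probability, and candidate pilot sample sizes are illustrative values rather than figures from the paper.

```python
# Minimal sketch of a Bayesian go/no-go decision rule: proceed to a larger
# study if the posterior probability that the effect exceeds a prespecified
# size reaches a target probability. All numeric settings are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

true_effect = 0.30        # assumed true standardized effect in the simulation
min_effect = 0.20         # prespecified smallest effect worth pursuing
target_prob = 0.70        # required posterior probability for a "go" decision
prior_mean, prior_sd = 0.0, 1.0   # weakly informative normal prior on the effect

def go_decision(pilot_n: int) -> bool:
    """Simulate one pilot study and apply the go/no-go rule."""
    control = rng.normal(0.0, 1.0, pilot_n)
    treated = rng.normal(true_effect, 1.0, pilot_n)
    est = treated.mean() - control.mean()
    se = np.sqrt(2.0 / pilot_n)               # known-variance approximation
    # Conjugate normal update of the prior with the pilot estimate.
    post_var = 1.0 / (1.0 / prior_sd**2 + 1.0 / se**2)
    post_mean = post_var * (prior_mean / prior_sd**2 + est / se**2)
    prob_exceeds = 1.0 - stats.norm.cdf(min_effect, post_mean, np.sqrt(post_var))
    return prob_exceeds >= target_prob

# How often does each candidate pilot sample size lead to a "go" decision?
for n in (20, 40, 80, 160):
    go_rate = np.mean([go_decision(n) for _ in range(5_000)])
    print(f"pilot n per arm = {n:>3}: P(go) = {go_rate:.2f}")
```

Repeating the rule over a grid of candidate sample sizes, as above, shows how the probability of a correct "go" decision grows with the pilot sample size, which is the trade-off the abstract's decision rules are meant to manage.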

9 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a simulation study comparing Bayesian and classical frequentist approaches to research design, describing and demonstrating a Bayesian perspective on intervention research.
Abstract: Objective: By presenting a simulation study that compares Bayesian and classical frequentist approaches to research design, this paper describes and demonstrates a Bayesian perspective on intervention research. Method: Using hypothetical pilot-study data where an effect size of 0.2 had been observed, we designed a 2-arm trial intended to compare an intervention with a control condition (e.g., usual services). We determined the trial sample size by a power analysis with a Type I error probability of 2.5% (1-sided) at 80% power. Following a Monte-Carlo computational algorithm, we simulated 1 million outcomes for this study and then compared the performance of the Bayesian perspective with the performance of the frequentist analytic perspective. Treatment effectiveness was assessed using a frequentist t-test and an empirical Bayesian t-test. Statistical power was calculated as the criterion for comparison of the 2 approaches to analysis. Results: In the simulations, the classical frequentist t-test y...
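The trial design described in the abstract can be reproduced in outline with a short Monte Carlo script. The sketch below is a hypothetical Python example: the effect size of 0.2, one-sided alpha of 2.5%, and 80% power are taken from the abstract, but the number of replications is reduced from 1 million for speed, and the posterior-probability rule used for the Bayesian analysis is an assumed stand-in for the paper's empirical Bayesian t-test.

```python
# Minimal sketch of the 2-arm trial simulation described in the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

effect_size = 0.2        # standardized mean difference from the pilot study
alpha = 0.025            # one-sided Type I error probability
target_power = 0.80

# Normal-approximation power analysis for a 2-arm trial (per-group n).
z_alpha = stats.norm.ppf(1 - alpha)
z_beta = stats.norm.ppf(target_power)
n_per_arm = int(np.ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2))

n_sims = 20_000          # abstract uses 1,000,000; reduced here for speed
freq_reject = 0
bayes_decide = 0
for _ in range(n_sims):
    control = rng.normal(0.0, 1.0, n_per_arm)
    treated = rng.normal(effect_size, 1.0, n_per_arm)

    # Frequentist analysis: one-sided two-sample t-test.
    t_stat, p_two_sided = stats.ttest_ind(treated, control)
    if t_stat > 0 and p_two_sided / 2 < alpha:
        freq_reject += 1

    # Assumed Bayesian analogue: posterior P(effect > 0) under a flat prior
    # and normal approximation; declare effectiveness if it exceeds 0.975.
    diff = treated.mean() - control.mean()
    se = np.sqrt(treated.var(ddof=1) / n_per_arm + control.var(ddof=1) / n_per_arm)
    post_prob = stats.norm.cdf(diff / se)
    if post_prob > 1 - alpha:
        bayes_decide += 1

print(f"n per arm from power analysis: {n_per_arm}")
print(f"Empirical power, frequentist t-test: {freq_reject / n_sims:.3f}")
print(f"Empirical power, Bayesian decision:  {bayes_decide / n_sims:.3f}")
```

With these settings the power analysis gives roughly 393 participants per arm, and both analyses recover empirical power near the planned 80%, which is the kind of head-to-head comparison the abstract describes.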

7 citations