
Showing papers by "Stanley E. Lazic published in 2018"


Journal ArticleDOI
TL;DR: It is argued that distinguishing between biological units, experimental units, and observational units clarifies where replication should occur; the criteria for genuine replication are described, and concrete examples of in vitro, ex vivo, and in vivo experimental designs are provided.
Abstract: Biologists determine experimental effects by perturbing biological entities or units. When done appropriately, independent replication of the entity–intervention pair contributes to the sample size (N) and forms the basis of statistical inference. If the wrong entity–intervention pair is chosen, an experiment cannot address the question of interest. We surveyed a random sample of published animal experiments from 2011 to 2016 where interventions were applied to parents and effects examined in the offspring, as regulatory authorities provide clear guidelines on replication with such designs. We found that only 22% of studies (95% CI = 17%–29%) replicated the correct entity–intervention pair and thus made valid statistical inferences. Nearly half of the studies (46%, 95% CI = 38%–53%) had pseudoreplication while 32% (95% CI = 26%–39%) provided insufficient information to make a judgement. Pseudoreplication artificially inflates the sample size, and thus the evidence for a scientific claim, resulting in false positives. We argue that distinguishing between biological units, experimental units, and observational units clarifies where replication should occur, describe the criteria for genuine replication, and provide concrete examples of in vitro, ex vivo, and in vivo experimental designs.
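The pseudoreplication problem described in the abstract can be illustrated with a small simulation (a hypothetical sketch, not the paper's analysis): litters are the experimental units, pups are the observational units, and analysing pups as if they were independent inflates the false-positive rate. All numbers (4 litters, 8 pups, equal litter and residual variance) are assumptions for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical design: 4 treated and 4 control litters (experimental units),
# 8 pups measured per litter (observational units). There is NO true
# treatment effect; pups within a litter share a random litter effect.
n_litters, n_pups, n_sim = 4, 8, 2000

def simulate_group():
    litter_effects = rng.normal(0.0, 1.0, n_litters)  # litter-to-litter variation
    return litter_effects[:, None] + rng.normal(0.0, 1.0, (n_litters, n_pups))

false_pos_pup = false_pos_litter = 0
for _ in range(n_sim):
    a, b = simulate_group(), simulate_group()
    # Pseudoreplicated analysis: every pup treated as independent (N = 32/group)
    p_pup = stats.ttest_ind(a.ravel(), b.ravel()).pvalue
    # Correct analysis: one litter mean per experimental unit (N = 4/group)
    p_lit = stats.ttest_ind(a.mean(axis=1), b.mean(axis=1)).pvalue
    false_pos_pup += p_pup < 0.05
    false_pos_litter += p_lit < 0.05

print(f"False-positive rate, pup-level analysis:    {false_pos_pup / n_sim:.2f}")
print(f"False-positive rate, litter-level analysis: {false_pos_litter / n_sim:.2f}")
```

The pup-level analysis rejects the (true) null far more often than the nominal 5%, while the litter-level analysis holds the error rate near 5%, which is the inflation of evidence the abstract warns about.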

143 citations


Journal ArticleDOI
Stanley E. Lazic1
TL;DR: It is shown how the design of an experiment and some analytical decisions can have a surprisingly large effect on power.
Abstract: Underpowered experiments have three problems: true effects are harder to detect, the true effects that are detected tend to have inflated effect sizes and as power decreases so does the probability that a statistically significant result represents a true effect. Many biology experiments are underpowered and recent calls to change the traditional 0.05 significance threshold to a more stringent value of 0.005 will further reduce the power of the average experiment. Increasing power by increasing the sample size is often the only option considered, but more samples increases costs, makes the experiment harder to conduct and is contrary to the 3Rs principles for animal research. We show how the design of an experiment and some analytical decisions can have a surprisingly large effect on power.
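The power loss from moving the significance threshold from 0.05 to 0.005 can be estimated by simulation (a sketch with assumed numbers, not the paper's calculations): here a two-sample t-test with an assumed effect size of d = 1.0 and 10 animals per group.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Assumed scenario: standardized effect size d = 1.0, n = 10 per group.
n_per_group, effect, n_sim = 10, 1.0, 5000
pvals = np.empty(n_sim)
for i in range(n_sim):
    a = rng.normal(0.0, 1.0, n_per_group)       # control group
    b = rng.normal(effect, 1.0, n_per_group)    # treated group, true effect present
    pvals[i] = stats.ttest_ind(a, b).pvalue

power_005 = np.mean(pvals < 0.05)    # power at the traditional threshold
power_0005 = np.mean(pvals < 0.005)  # power at the stricter threshold
print(f"Power at alpha = 0.05:  {power_005:.2f}")
print(f"Power at alpha = 0.005: {power_0005:.2f}")
```

With these assumed numbers, power drops from roughly the mid-50s (percent) to around 20% under the stricter threshold, illustrating why the proposed 0.005 cutoff further reduces the power of a typical experiment unless the design compensates.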

35 citations


Journal ArticleDOI
TL;DR: This paper argues that Bayesian methods address these problems in assessing drug toxicity, uses hERG-mediated QT prolongation as a case study, and includes R and Python code to encourage the adoption of these methods.
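As a minimal, hypothetical sketch of the kind of Bayesian reasoning the paper advocates (not the paper's actual model or code), a conjugate beta-binomial update gives the posterior probability that a compound's rate of QT prolongation exceeds a threshold of concern; the counts and the 10% threshold below are invented for illustration.

```python
from scipy import stats

# Invented data: 3 of 12 animals showed QT prolongation above a
# predefined limit after dosing.
prolonged, n = 3, 12

# Weakly informative Beta(1, 1) prior on the prolongation probability;
# by conjugacy the posterior is Beta(1 + 3, 1 + 9).
posterior = stats.beta(1 + prolonged, 1 + n - prolonged)

# Posterior summaries answer direct risk questions that p-values cannot,
# e.g. "what is the probability the true rate exceeds 10%?"
post_mean = posterior.mean()
lo, hi = posterior.ppf([0.025, 0.975])
p_gt_10 = 1 - posterior.cdf(0.10)
print(f"Posterior mean rate:    {post_mean:.2f}")
print(f"95% credible interval:  ({lo:.2f}, {hi:.2f})")
print(f"P(true rate > 10%):     {p_gt_10:.2f}")
```

The appeal of this framing for risk communication is that the output is a direct probability statement about the quantity of interest, rather than a statement about data under a null hypothesis.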

25 citations


Journal ArticleDOI
TL;DR: This Formal Comment responds to Jordan et al., and stresses that if scientific findings are to be robust, training in experimental design and statistics is critical to ensure that research questions, design considerations, and analyses are aligned.

1 citation