Book

Design of Experiments: An Introduction Based on Linear Models

31 May 2017
TL;DR: In this book, a model-matrix formulation is used to show how the choice of design influences estimation and hypothesis testing in CRDs, CBDs, LSDs, and BIBDs.
Abstract (table of contents):

• Introduction: Example: rainfall and grassland; Basic elements of an experiment; Experiments and experiment-like studies; Models and data analysis

• Linear Statistical Models: Linear vector spaces; Basic linear model; The hat matrix, least-squares estimates, and design information matrix; The partitioned linear model; The reduced normal equations; Linear and quadratic forms; Estimation and information; Hypothesis testing and information; Blocking and information

• Completely Randomized Designs: Introduction; Models; Matrix formulation; Influence of design on estimation; Influence of design on hypothesis testing

• Randomized Complete Blocks and Related Designs: Introduction; A model; Matrix formulation; Influence of design on estimation; Influence of design on hypothesis testing; Orthogonality and "Condition E"

• Latin Squares and Related Designs: Introduction; Replicated Latin squares; A model; Matrix formulation; Influence of design on quality of inference; More general constructions: Graeco-Latin squares

• Some Data Analysis for CRDs and Orthogonally Blocked Designs: Introduction; Diagnostics; Power transformations; Basic inference; Multiple comparisons

• Balanced Incomplete Block Designs: Introduction; A model; Matrix formulation; Influence of design on quality of inference; More general constructions

• Random Block Effects: Introduction; Inter- and intra-block analysis; CBDs and augmented CBDs; BIBDs; Combined estimator; Why can information be "recovered"?; CBD reprise

• Factorial Treatment Structure: Introduction; An overparameterized model; An equivalent full-rank model; Estimation; Partitioning of variability and hypothesis testing; Factorial experiments as CRDs, CBDs, LSDs, and BIBDs; Model reduction

• Split-Plot Designs: Introduction; SPD(R,B); SPD(B,B); More than two experimental factors; More than two strata of experimental units

• Two-Level Factorial Experiments: Basics: Introduction; Example: bacteria and nuclease; Two-level factorial structure; Estimation of treatment contrasts; Testing factorial effects; Additional guidelines for model editing

• Two-Level Factorial Experiments: Blocking: Introduction; Complete blocks; Balanced incomplete block designs; Regular blocks of size 2^(f-1); Regular blocks of size 2^(f-2); Regular blocks: general case

• Two-Level Factorial Experiments: Fractional Factorials: Introduction; Regular fractional factorial designs; Analysis; Example: bacteria and bacteriocin; Comparison of fractions; Blocking regular fractional factorial designs; Augmenting regular fractional factorial designs; Irregular fractional factorial designs

• Factorial Group Screening Experiments: Introduction; Example: semiconductors and simulation; Factorial structure of group screening designs; Group screening design considerations; Case study

• Regression Experiments: First-Order Polynomial Models: Introduction; Polynomial models; Designs for first-order models; Blocking experiments for first-order models; Split-plot regression experiments; Diagnostics

• Regression Experiments: Second-Order Polynomial Models: Introduction; Quadratic polynomial models; Designs for second-order models; Design scaling and information; Orthogonal blocking; Split-plot designs; Bias due to omitted model terms

• Introduction to Optimal Design: Introduction; Optimal design fundamentals; Optimality criteria; Algorithms

• Appendices; References; Index

A Conclusion and Exercises appear at the end of each chapter.
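The model-matrix machinery listed in the early chapters (hat matrix, least-squares estimates, design information matrix) can be made concrete with a small numerical sketch. The example below is not taken from the book: it assumes a completely randomized design with three treatments under a cell-means model y = Xτ + ε, and shows that X'X is diagonal with the replication counts, one direct way the design controls estimation precision.

```python
import numpy as np

# Illustrative sketch (not from the book): a completely randomized design (CRD)
# with t = 3 treatments, replication counts n = (2, 3, 3), and cell-means model
# y = X @ tau + eps.
reps = [2, 3, 3]
t = len(reps)

# Model matrix X: one indicator column per treatment, one row per run.
X = np.vstack([np.eye(t)[i] for i, r in enumerate(reps) for _ in range(r)])

rng = np.random.default_rng(0)
tau = np.array([10.0, 12.0, 15.0])        # true treatment means
y = X @ tau + rng.normal(scale=1.0, size=X.shape[0])

XtX = X.T @ X                             # design information matrix (up to 1/sigma^2)
H = X @ np.linalg.solve(XtX, X.T)         # hat matrix: orthogonal projector onto col(X)
tau_hat = np.linalg.solve(XtX, X.T @ y)   # least-squares estimates (the group means)

print("X'X =\n", XtX)                     # diag(2, 3, 3) for this CRD
print("trace(H) =", H.trace())            # equals rank(X) = t
print("tau_hat =", tau_hat)
```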
Citations
Journal ArticleDOI
TL;DR: Hippocampal neurophysiology post-injury revealed reduced axonal function, synaptic dysfunction, and regional hyperexcitability at one week following even "mild" injury levels, and these neurophysiological changes occurred in the apparent absence of intra-hippocampal neuronal or axonal degeneration.
Abstract: Hippocampal-dependent deficits in learning and memory formation are a prominent feature of traumatic brain injury (TBI); however, the role of the hippocampus in cognitive dysfunction after concussion (mild TBI) is unknown. We therefore investigated functional and structural changes in the swine hippocampus following TBI using a model of head rotational acceleration that closely replicates the biomechanics and neuropathology of closed-head TBI in humans. We examined neurophysiological changes using a novel ex vivo hippocampal slice paradigm with extracellular stimulation and recording in the dentate gyrus and CA1 occurring at 7 days following non-impact inertial TBI in swine. Hippocampal neurophysiology post-injury revealed reduced axonal function, synaptic dysfunction, and regional hyperexcitability at one week following even "mild" injury levels. Moreover, these neurophysiological changes occurred in the apparent absence of intra-hippocampal neuronal or axonal degeneration. Input-output curves demonstrated an elevated excitatory post-synaptic potential (EPSP) output for a given fiber volley input in injured versus sham animals, suggesting a form of homeostatic plasticity that manifested as a compensatory response to decreased axonal function in post-synaptic regions. These data indicate that closed-head rotational acceleration-induced TBI, the common cause of concussion in humans, may induce significant alterations in hippocampal circuitry function that have not resolved at 7 days post-injury. This circuitry dysfunction may underlie some of the post-concussion symptomatology associated with the hippocampus, such as post-traumatic amnesia and ongoing cognitive deficits.

37 citations

Journal ArticleDOI
TL;DR: The Fisher randomization test (FRT) is appropriate for any test statistic under a sharp null hypothesis that can recover all missing potential outcomes, as discussed by the authors, but it is often of interest to test a weak null hypothesis that the treatment does not affect the units on average.
Abstract: The Fisher randomization test (FRT) is appropriate for any test statistic, under a sharp null hypothesis that can recover all missing potential outcomes. However, it is often sought after to test a weak null hypothesis that the treatment does not affect the units on average.

34 citations

Journal ArticleDOI
TL;DR: In this article, the authors provide an evaluation of current methodology and propose some initial guidelines for future research in the emerging discipline of equitation science, including guidelines for experimental design in studies involving the ridden horse.
Abstract: Within the emerging discipline of Equitation Science, the application of consistent methodology, including robust objective measures, is required for sound scientific evaluation. This report aims to provide an evaluation of current methodology and to propose some initial guidelines for future research. The value of research, especially that involving small sample sizes, can be enhanced by the application of consistent methodology and reporting, enabling results to be compared across studies. This article includes guidelines for experimental design in studies involving the ridden horse. Equine ethograms currently used are reviewed, and factors to be considered in the development of a ridden-horse ethogram are evaluated. An assessment of methods used to collect behavioral and physiological data is included, and the use of equipment for measurements (e.g., rein-tension and pressure-sensing instruments) is discussed. Equitation science is a new discipline, subject to evolving viewpoints on research foci and design. Technological advances may improve the accuracy and detail of measurements but must be used within appropriate and valid experimental designs.

33 citations


Cites background from "Design of Experiments: An Introduction Based on Linear Models"

  • ...In any experiment variable levels of skill and bias in the experimenters and other personnel, such as riders and handlers, may affect the results (Kuehl, 2000; Morris, 2010)....

Journal ArticleDOI
Linxing Yao, Wen Zhou, Tong Wang, Muhua Liu, Chenxu Yu
TL;DR: To predict egg-yolk contamination levels in egg white with a spectroscopic method, a nonlinear prediction model (a detection function) was developed from 182 measurements, predicting yolk concentration when the storage time is known.
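The detection-function idea in the TL;DR is algorithmic enough to sketch. The paper's actual functional form and spectral features are not reproduced here; the snippet below only illustrates the general workflow with an assumed exponential-saturation form fitted by scipy's curve_fit, and synthetic data standing in for the 182 real measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical detection function (the paper's real model form is not shown
# here): a spectroscopic signal that saturates as yolk concentration c grows,
# at a fixed, known storage time.
def detection(c, a, b):
    return a * (1.0 - np.exp(-b * c))

rng = np.random.default_rng(1)
c = rng.uniform(0.0, 1.0, size=182)        # yolk concentration (synthetic)
signal = detection(c, 2.5, 4.0) + rng.normal(scale=0.05, size=c.size)

(a_hat, b_hat), _ = curve_fit(detection, c, signal, p0=[1.0, 1.0])
print(f"fitted parameters: a = {a_hat:.3f}, b = {b_hat:.3f}")

# Invert the fitted curve to predict concentration from a new signal reading.
s_new = 1.2
c_pred = -np.log(1.0 - s_new / a_hat) / b_hat
print(f"predicted concentration for signal {s_new}: {c_pred:.3f}")
```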

24 citations

Journal ArticleDOI
TL;DR: In this paper, the Fisher randomization test (FRT) is extended to weak null hypotheses by imputing the missing potential outcomes under a compatible sharp null hypothesis, in which the treatment does not affect the units on average, and pairing that imputation with a studentized test statistic.
Abstract: The Fisher randomization test (FRT) is appropriate for any test statistic, under a sharp null hypothesis that can recover all missing potential outcomes. However, it is often sought after to test a weak null hypothesis that the treatment does not affect the units on average. To use the FRT for a weak null hypothesis, we must address two issues. First, we need to impute the missing potential outcomes although the weak null hypothesis cannot determine all of them. Second, we need to choose a proper test statistic. For a general weak null hypothesis, we propose an approach to imputing missing potential outcomes under a compatible sharp null hypothesis. Building on this imputation scheme, we advocate a studentized statistic. The resulting FRT has multiple desirable features. First, it is model-free. Second, it is finite-sample exact under the sharp null hypothesis that we use to impute the potential outcomes. Third, it conservatively controls large-sample type I errors under the weak null hypothesis of interest. Therefore, our FRT is agnostic to the treatment effect heterogeneity. We establish a unified theory for general factorial experiments. We also extend it to stratified and clustered experiments.
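The abstract describes a concrete procedure: impute the missing potential outcomes under a compatible sharp null, then compare a studentized statistic to its randomization distribution. Below is a minimal two-group sketch of that idea; it simplifies the paper's general factorial setting, with the studentized difference in means as the two-group special case of the statistic the authors advocate.

```python
import numpy as np

def frt_studentized(y, z, draws=10_000, seed=0):
    """Fisher randomization test of the weak null of zero average effect,
    two groups, with a studentized difference in means.

    Sketch only: under the compatible sharp null used for imputation (zero
    effect for every unit), each missing potential outcome equals the
    observed one, so re-randomizing labels against the observed y suffices.
    """
    rng = np.random.default_rng(seed)

    def t_stat(y, z):
        y1, y0 = y[z == 1], y[z == 0]
        se = np.sqrt(y1.var(ddof=1) / y1.size + y0.var(ddof=1) / y0.size)
        return (y1.mean() - y0.mean()) / se

    t_obs = t_stat(y, z)
    t_null = np.array([t_stat(y, rng.permutation(z)) for _ in range(draws)])
    return np.mean(np.abs(t_null) >= abs(t_obs))   # two-sided p-value

rng = np.random.default_rng(42)
z = np.repeat([1, 0], [30, 50])
y = 0.5 * z + rng.normal(size=z.size)              # true average effect 0.5
print("p-value:", frt_studentized(y, z))
```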

24 citations


Cites background or methods from "Design of Experiments: An Introduction Based on Linear Models"

  • ...Although Morris (2010) has reiterated the usual OLS assumptions that justify the F test, practitioners do not always check them....


  • ...In both ANOVA and factorial designs, the FRT with B fails to control type I error, and dramatically so at level 0.02. A natural extension of the simulation just performed can be made to SREs. Morris (2010), for instance, suggests testing H_0N: Ȳ(1) = ... = Ȳ(J) with the F statistic from a linear regression of the observed response on stratum and treatment indicators, i.e. J + H predictors....


  • ...The textbook suggestion of Morris (2010) for testing our null hypotheses in the SRE case involves the F statistic from a linear regression of the observed response on stratum and treatment indicators, that is, J + H predictors....


  • ...C is a row vector, then B = X2. 3.3. Statistics From the Ordinary Least Squares: It is common to analyze experimental data based on the ordinary least squares (OLS) fit of a (Normal) linear model (e.g., Morris 2010). The design matrix is a block diagonal matrix X = diag(1_{N_1}, ..., 1_{N_J}), and the response vector has the corresponding observed outcomes from treatment groups 1, ..., J. The OLS coefficients are gi...


  • ...Ordinary least squares (OLS) tools are widespread in the analysis of experimental data (e.g., Morris 2010)....


References
Journal ArticleDOI
TL;DR: A discussion of matching, randomization, random sampling, and other methods of controlling extraneous variation is presented in this paper, where the objective is to specify the benefits of randomization in estimating causal effects of treatments.
Abstract: A discussion of matching, randomization, random sampling, and other methods of controlling extraneous variation is presented. The objective is to specify the benefits of randomization in estimating causal effects of treatments. The basic conclusion is that randomization should be employed whenever possible but that the use of carefully controlled nonrandomized data to estimate causal effects is a reasonable and necessary procedure in many cases. Recent psychological and educational literature has included extensive criticism of the use of nonrandomized studies to estimate causal effects of treatments (e.g., Campbell & Erlebacher, 1970). The implication in much of this literature is that only properly randomized experiments can lead to useful estimates of causal effects. If taken as applying to all fields of study, this position is untenable. Since the extensive use of randomized experiments is limited to the last half century, and in fact is not used in much scientific investigation today, one is led to the conclusion that most scientific "truths" have been established without using randomized experiments. In addition, most of us successfully determine the causal effects of many of our everyday actions, even interpersonal behaviors, without the benefit of randomization. Even if the position that causal effects of treatments can only be well established from randomized experiments is taken as applying only to the social sciences in which...
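As a concrete illustration of the benefit the abstract argues for (not an example from the paper), the simulation below compares a randomized assignment with a confounded, self-selected one. Both use the same outcome model, but only the randomized difference in means is unbiased for the causal effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n, effect, sims = 500, 1.0, 2000
bias_rand, bias_conf = [], []

for _ in range(sims):
    u = rng.normal(size=n)                             # unobserved covariate
    z_rand = rng.integers(0, 2, size=n)                # randomized: independent of u
    z_conf = (u + rng.normal(size=n) > 0).astype(int)  # self-selection on u
    for z, out in ((z_rand, bias_rand), (z_conf, bias_conf)):
        y = effect * z + u + rng.normal(size=n)        # same outcome model
        out.append(y[z == 1].mean() - y[z == 0].mean() - effect)

print(f"mean bias, randomized assignment: {np.mean(bias_rand):+.3f}")  # near 0
print(f"mean bias, confounded assignment: {np.mean(bias_conf):+.3f}")  # far from 0
```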

8,377 citations


"Design of Experiments: An Introduct..." refers background in this paper

  • ...This language, which has its roots in Rothman's "sufficient cause" classification (Rothman, 1976) and Rubin's "potential outcome" framework (Rubin, 1974), does not recognize modeling notions such as "processes," "omitted factors," or "causal mechanisms" that guide scientific thoughts, but forces one...


Journal ArticleDOI
TL;DR: This commentary sheds broader light on this comparison by considering the cumulative effects of conditioning on multiple covariates and showing that bias amplification may build up at a faster rate than bias reduction, and derives a partial order on sets of covariates which reveals preference for conditioning on outcome-related, rather than exposure-related, confounders.
Abstract: In choosing covariates for adjustment or inclusion in propensity score analysis, researchers must weigh the benefit of reducing confounding bias carried by those covariates against the risk of amplifying residual bias carried by unmeasured confounders. The latter is characteristic of covariates that act like instrumental variables, that is, variables that are more strongly associated with the exposure than with the outcome. In this issue of the Journal (Am J Epidemiol. 2011;174(11):1213-1222), Myers et al. compare the bias amplification of a near-instrumental variable with its bias-reducing potential and suggest that, in practice, the latter outweighs the former. The author of this commentary sheds broader light on this comparison by considering the cumulative effects of conditioning on multiple covariates and showing that bias amplification may build up at a faster rate than bias reduction. The author further derives a partial order on sets of covariates which reveals preference for conditioning on outcome-related, rather than exposure-related, confounders.
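The bias-amplification mechanism discussed here can be seen in a small simulation. This is an illustrative sketch, not Pearl's or Myers et al.'s analysis: Z is an instrument-like covariate (it affects only the exposure), U is an unmeasured confounder, and adjusting for Z enlarges the confounding bias in the exposure coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
true_effect = 0.0                      # exposure X has no causal effect on Y

z = rng.normal(size=n)                 # instrument-like covariate: affects X only
u = rng.normal(size=n)                 # unmeasured confounder: affects X and Y
x = 2.0 * z + u + rng.normal(size=n)   # exposure
y = true_effect * x + u + rng.normal(size=n)

def exposure_coef(y, cols):
    """OLS coefficient on the first regressor (all variables are mean zero)."""
    X = np.column_stack(cols)
    return np.linalg.lstsq(X, y, rcond=None)[0][0]

print("bias without adjustment: ", exposure_coef(y, [x]) - true_effect)     # ~ 1/6
print("bias adjusting for Z:    ", exposure_coef(y, [x, z]) - true_effect)  # ~ 1/2
```

Adjusting for Z removes the exposure variation that is not confounded, so the residual variation in X is driven more heavily by U and the bias grows, matching the commentary's preference for outcome-related over exposure-related covariates.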

162 citations


"Design of Experiments: An Introduct..." refers background or methods in this paper

  • ...If one assumes “ignorability,” bias disappears; if not, bias persists, and one remains at the mercy of the (wrong) assumption that adjusting for as many covariates as one can measure would reduce bias (Rubin, 2009; Pearl, 2009a, 2009b, 2011a)....


  • ...The proper choice of covariates into the propensity-score is dependent critically on modeling assumptions (Pearl, 2009a, 2009b, 2011a; Rubin, 2009)....


  • ...Most participants in a public discussion of the usages of principal strata, including former proponents of this framework now admit that principal strata has nothing to do with causal mediation (Joffe, 2011; Pearl, 2011b; Sjölander, 2011; VanderWeele, 2011)....


Journal ArticleDOI
TL;DR: The conceptual basis for this framework is analyzed and response is invited to clarify the value of principal stratification in estimating causal effects of interest.
Abstract: Principal stratification has recently become a popular tool to address certain causal inference questions, particularly in dealing with post-randomization factors in randomized trials. Here, we analyze the conceptual basis for this framework and invite response to clarify the value of principal stratification in estimating causal effects of interest.

110 citations


"Design of Experiments: An Introduct..." refers background or methods in this paper

  • ...If one assumes “ignorability,” bias disappears; if not, bias persists, and one remains at the mercy of the (wrong) assumption that adjusting for as many covariates as one can measure would reduce bias (Rubin, 2009; Pearl, 2009a, 2009b, 2011a)....


  • ...The proper choice of covariates into the propensity-score is dependent critically on modeling assumptions (Pearl, 2009a, 2009b, 2011a; Rubin, 2009)....


  • ...Most participants in a public discussion of the usages of principal strata, including former proponents of this framework now admit that principal strata has nothing to do with causal mediation (Joffe, 2011; Pearl, 2011b; Sjölander, 2011; VanderWeele, 2011)....


Journal ArticleDOI
TL;DR: The notion of principal stratification has shed light on problems of non-compliance, censoring-by-death, and the analysis of post-infection outcomes; it may be of use in considering problems of surrogacy, though further development is needed; but it is not the appropriate tool for assessing "mediation."
Abstract: Pearl (2011) asked for the causal inference community to clarify the role of the principal stratification framework in the analysis of causal effects. Here, I argue that the notion of principal stratification has shed light on problems of non-compliance, censoring-by-death, and the analysis of post-infection outcomes; that it may be of use in considering problems of surrogacy but further development is needed; that it is of some use in assessing “direct effects”; but that it is not the appropriate tool for assessing “mediation.” There is nothing within the principal stratification framework that corresponds to a measure of an “indirect” or “mediated” effect.

109 citations


"Design of Experiments: An Introduct..." refers background in this paper

  • ...Most participants in a public discussion of the usages of principal strata, including former proponents of this framework now admit that principal strata has nothing to do with causal mediation (Joffe, 2011; Pearl, 2011b; Sjölander, 2011; VanderWeele, 2011)....


01 May 2009
TL;DR: In this paper, the authors argue that the practice of conditioning on all observed covariates should be treated with great caution, and provide the scientific basis for principled selection of covariates, using graph-based methods.
Abstract: This letter argues that the practice of conditioning on all observed covariates, recently advocated by several analysts, should be treated with great caution. Graphical methods explain why, and provide the scientific basis for principled selection of covariates.

82 citations


"Design of Experiments: An Introduct..." refers background or methods in this paper

  • ...If one assumes “ignorability,” bias disappears; if not, bias persists, and one remains at the mercy of the (wrong) assumption that adjusting for as many covariates as one can measure would reduce bias (Rubin, 2009; Pearl, 2009a, 2009b, 2011a)....


  • ...Modern treatments of Simpson’s paradox can and should tell us how to make this determination directly from the causal story behind the example (See, for example, Pearl, 2009c, p. 383) without guessing relative sizes of strata and without going through the lengthy arithmetic....


  • ...The proper choice of covariates into the propensity-score is dependent critically on modeling assumptions (Pearl, 2009a, 2009b, 2011a; Rubin, 2009)....


  • ...Finally, the propensity-score is merely a powerful estimator, and conditioning on the propensity score would be theoretically equivalent (asymptotically) to controlling on its covariates, regardless of whether strong ignorability holds (Pearl, 2009c, p. 349)....
