Book

Design and Analysis of Experiments with R

17 Dec 2014
TL;DR: The book shows how to create and analyze two-factor factorial plans in R, including designs with fixed and random factors and the estimation of variance components.
Abstract:
Introduction: Statistics and Data Collection; Beginnings of Statistically Planned Experiments; Definitions and Preliminaries; Purposes of Experimental Design; Types of Experimental Designs; Planning Experiments; Performing the Experiments; Use of R Software
Completely Randomized Designs with One Factor: Introduction; Replication and Randomization; A Historical Example; Linear Model for Completely Randomized Design (CRD); Verifying Assumptions of the Linear Model; Analysis Strategies When Assumptions Are Violated; Determining the Number of Replicates; Comparison of Treatments after the F-Test
Factorial Designs: Introduction; Classical One at a Time versus Factorial Plans; Interpreting Interactions; Creating a Two-Factor Factorial Plan in R; Analysis of a Two-Factor Factorial in R; Factorial Designs with Multiple Factors - Completely Randomized Factorial Design (CRFD); Two-Level Factorials; Verifying Assumptions of the Model
Randomized Block Designs: Introduction; Creating a Randomized Complete Block (RCB) Design in R; Model for RCB; An Example of a RCB; Determining the Number of Blocks; Factorial Designs in Blocks; Generalized Complete Block Design; Two Block Factors; Latin Square Design (LSD)
Designs to Study Variances: Introduction; Random Sampling Experiments (RSE); One-Factor Sampling Designs; Estimating Variance Components; Two-Factor Sampling Designs - Factorial RSE; Nested SE; Staggered Nested SE; Designs with Fixed and Random Factors; Graphical Methods to Check Model Assumptions
Fractional Factorial Designs: Introduction to Completely Randomized Fractional Factorial (CRFF); Half Fractions of 2^k Designs; Quarter and Higher Fractions of 2^k Designs; Criteria for Choosing Generators for 2^(k-p) Designs; Augmenting Fractional Factorials; Plackett-Burman (PB) Screening Designs; Mixed-Level Fractional Factorials; Orthogonal Array (OA); Definitive Screening Designs
Incomplete and Confounded Block Designs: Introduction; Balanced Incomplete Block (BIB) Designs; Analysis of Incomplete Block Designs; Partially Balanced Incomplete Block (PBIB) Designs - Balanced Treatment Incomplete Block (BTIB); Row Column Designs; Confounded 2^k and 2^(k-p) Designs; Confounding 3-Level and p-Level Factorial Designs; Blocking Mixed-Level Factorials and OAs; Partially CBF
Split-Plot Designs: Introduction; Split-Plot Experiments with CRD in Whole Plots (CRSP); RCB in Whole Plots (RBSP); Analysis Unreplicated 2^k Split-Plot Designs; 2^(k-p) Fractional Factorials in Split Plots (FFSP); Sample Size and Power Issues for Split-Plot Designs
Crossover and Repeated Measures Designs: Introduction; Crossover Designs (COD); Simple AB, BA Crossover Designs for Two Treatments; Crossover Designs for Multiple Treatments; Repeated Measures Designs; Univariate Analysis of Repeated Measures Design
Response Surface Designs: Introduction; Fundamentals of Response Surface Methodology; Standard Designs for Second-Order Models; Creating Standard Response Surface Designs in R; Non-Standard Response Surface Designs; Fitting the Response Surface Model with R; Determining Optimum Operating Conditions; Blocked Response Surface (BRS) Designs; Response Surface Split-Plot (RSSP) Designs
Mixture Experiments: Introduction; Models and Designs for Mixture Experiments; Creating Mixture Designs in R; Analysis of Mixture Experiment; Constrained Mixture Experiments; Blocking Mixture Experiments; Mixture Experiments with Process Variables; Mixture Experiments in Split-Plot Arrangements
Robust Parameter Design Experiments: Introduction; Noise Sources of Functional Variation; Product Array Parameter Design Experiments; Analysis of Product Array Experiments; Single Array Parameter Design Experiments; Joint Modeling of Mean and Dispersion Effects
Experimental Strategies for Increasing Knowledge: Introduction; Sequential Experimentation; One-Step Screening and Optimization; An Example of Sequential Experimentation; Evolutionary Operation; Concluding Remarks
Appendix: Brief Introduction to R
Answers to Selected Exercises; Bibliography; Index
A Review and Exercises appear at the end of each chapter.
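
The factorial chapters cover creating and analyzing a two-factor factorial plan in R. As a minimal base-R sketch of that workflow (this is not code from the book; the factor names, levels, replicate count, and response values are placeholders):

```r
# A minimal base-R sketch (not from the book): create and analyze a
# completely randomized two-factor factorial plan with two replicates.
set.seed(101)                                       # reproducible randomization
plan <- expand.grid(A = factor(c("low", "high")),
                    B = factor(c("low", "high")))
plan <- plan[rep(seq_len(nrow(plan)), each = 2), ]  # two replicates per cell
plan <- plan[sample(nrow(plan)), ]                  # randomize the run order
plan$run <- seq_len(nrow(plan))

# After the runs are performed, the measured response is appended
# (placeholder values here, purely for illustration):
plan$y <- rnorm(nrow(plan))

fit <- aov(y ~ A * B, data = plan)                  # main effects plus the A:B interaction
summary(fit)                                        # ANOVA table with F-tests
```
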
Citations
Journal Article
TL;DR: In this work, a new and cost-effective concept of a mechanically assisted suction reed valve is introduced, and calorimeter and valve dynamics measurements show considerable improvements in the COP and the valve impact velocity.
Abstract: Reed valves are widely used in hermetic reciprocating compressors for domestic refrigeration. They are crucial components in terms of efficiency, cooling performance and reliability of the compressor. While reed valves already cause a significant proportion of the thermodynamic losses in fixed-speed compressors, they cause even more challenges in variable-speed compressors. Especially in variable-speed compressors, a further improvement of the reed valve dynamics requires the consideration of a new valve concept. In this work, a new and cost-effective concept of a mechanically assisted suction reed valve is introduced. Simulation-based response modelling and a multi-response optimization approach are applied to systematically optimize the design. Simulations between 1500 rpm and 5000 rpm indicate the strengths and weaknesses of individual optimized design variants over a wide compressor speed range. Calorimeter and valve dynamics measurements show considerable improvements in the COP and the valve impact velocity.
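
As a rough illustration of the multi-response optimization idea mentioned above (this is not the authors' code; the fitted response models, variable ranges, and desirability limits are invented for the sketch), two competing responses such as COP and valve impact velocity can be combined into a single desirability score and optimized in R:

```r
# Hedged sketch of multi-response optimization via desirability functions.
# The response models, ranges, and targets below are assumed, not the paper's.
cop    <- function(x) 1.8 + 0.30 * x[1] - 0.20 * x[1]^2 + 0.10 * x[2]   # to maximize
impact <- function(x) 2.0 - 0.50 * x[2] + 0.30 * x[2]^2                 # to minimize

d_larger  <- function(y, lo, hi) pmin(pmax((y - lo) / (hi - lo), 0), 1) # larger-is-better
d_smaller <- function(y, lo, hi) pmin(pmax((hi - y) / (hi - lo), 0), 1) # smaller-is-better

overall <- function(x) {                          # geometric mean of the two desirabilities
  sqrt(d_larger(cop(x), 1.5, 2.2) * d_smaller(impact(x), 1.0, 2.5))
}
best <- optim(c(0, 0), function(x) -overall(x))   # maximize the overall desirability
best$par                                          # compromise setting of the two design variables
```
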

8 citations

Posted Content
TL;DR: This article discusses D-optimal Bayesian crossover designs for generalized linear models; the proposed designs minimize the log determinant of the variance of the estimated treatment effects over all possible allocations of the n subjects to the treatment sequences.
Abstract: This article discusses D-optimal Bayesian crossover designs for generalized linear models. Crossover trials with t treatments and p periods, for $t \le p$, are considered. The designs proposed in this paper minimize the log determinant of the variance of the estimated treatment effects over all possible allocations of the n subjects to the treatment sequences. It is assumed that the p observations from each subject are mutually correlated, while the observations from different subjects are uncorrelated. Since the main interest is in estimating the treatment effects, the subject effect is treated as a nuisance, and generalized estimating equations are used to estimate the marginal means. To address the issue of parameter dependence, a Bayesian approach is employed: prior distributions are assumed on the model parameters and are then incorporated into the D-optimal design criterion by integrating it over the prior distribution. Three case studies, the first with binary outcomes in a 4$\times$4 crossover trial, the second based on count data in a 2$\times$2 trial, and the third with Gamma responses in a 3$\times$2 crossover trial, are used to illustrate the proposed method. The effect of the choice of prior distributions on the designs is also studied.
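
In symbols (the notation below is a paraphrase, not quoted from the article), the Bayesian D-optimality criterion described in the abstract amounts to:

```latex
% Bayesian D-optimal crossover design \xi^* over allocations \xi of the n subjects:
% \hat{\tau} is the GEE estimate of the treatment effects, \theta the model
% parameters, and \pi(\theta) their prior distribution.
\xi^{*} = \arg\min_{\xi} \int_{\Theta}
  \log \det \left\{ \operatorname{Var}\!\left( \hat{\boldsymbol{\tau}} \mid \boldsymbol{\theta}, \xi \right) \right\}
  \pi(\boldsymbol{\theta}) \, d\boldsymbol{\theta}
```
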

7 citations


Cites methods from "Design and Analysis of Experiments ..."

  • ...For normal response when t = p and t is even, for reduced models LSD and for full models WSD are variance balanced designs (Lawson (2014), page 361)....


Journal Article
TL;DR: In this article, the authors proposed the use of symbolic regression for assessing the remaining capacity of perforated steel tubular members from aged offshore units subjected to axial compressive forces.
Abstract: Offshore steel tubular structures may be subjected to damage perforations from long-term operation in a corrosive environment, which reduces their structural strength. There is a lack of standards and procedures to quantify the loss of structural capacity in view of the damage dimension and the geometrical and material characteristics of the tubular member. The present paper proposes the use of symbolic regression for assessing the remaining capacity of perforated steel tubular members from aged offshore units subjected to axial compressive forces. A Finite Element Analysis campaign was carried out in combination with a full factorial design of experiments. Length-to-diameter, diameter-to-thickness and damage extension ratios were addressed as potentially preponderant factors for the experimental design. Results from the numerical simulations were statistically evaluated, and symbolic regression was then applied to generate an optimized expression by minimizing the worst error case between predicted and numerical results. Capacity responses from the generated expression lie close to the Finite Element Analysis and experimental results, suggesting that the proposed methodology can be employed as an alternative to assess the remaining capacity of perforated steel tubular members subjected to axial compressive loads.
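
A brief R sketch of the two ingredients named above, a full factorial campaign over the three ratios and a fit that minimizes the worst-case error; the factor levels, the fixed model form, and the capacity values are placeholders, and true symbolic regression would search over expression forms rather than fit a single fixed one:

```r
# Hedged sketch: full factorial plan over the three ratios from the abstract
# (the levels below are invented for illustration).
campaign <- expand.grid(L_over_D = c(10, 20, 30),    # length-to-diameter ratio
                        D_over_t = c(20, 40, 60),    # diameter-to-thickness ratio
                        damage   = c(0.1, 0.2, 0.3)) # damage extension ratio

# Each row would be run through the finite element model; the normalized
# remaining capacity is a placeholder here.
set.seed(7)
campaign$capacity <- runif(nrow(campaign))

# Minimax fit of one fixed candidate expression: choose coefficients that
# minimize the worst absolute error, mirroring the criterion in the abstract.
worst_error <- function(b, d) {
  pred <- b[1] + b[2] * d$L_over_D + b[3] * d$D_over_t + b[4] * d$damage
  max(abs(d$capacity - pred))
}
fit <- optim(c(0, 0, 0, 0), worst_error, d = campaign)
fit$par   # coefficients of the illustrative expression
```
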

7 citations

Book Chapter
D.R. Fox
01 Jan 2016
TL;DR: In this article, the authors focus on contemporary statistical methods that are helping ecotoxicologists collect, process, and visualize data to develop meaningful, fair, and robust water quality objectives.
Abstract: The phenomenon known as “Big Data” has firmly entrenched itself in everyday life on the back of promises that “it” will transform our lives and lead to improvements in marketing, healthcare, finance, law enforcement, science, and even sports performance. But as we stand in awe of the scale and complexity of collecting and processing terabytes of data, we need to pause and reflect on aspects of our lives that rely on decision-making under extreme uncertainty: the sort of uncertainty that arises from working with incredibly tiny data sets such as those used in ecotoxicology. How to draw scientifically credible and statistically defensible inferences from very small samples is no less challenging than how to extract fundamental insights from massive data collections. This chapter focuses on contemporary statistical methods that are helping ecotoxicologists collect, process, and visualize data to develop meaningful, fair, and robust water quality objectives.

6 citations

Journal Article
TL;DR: The mixed logit regression model identified significant heterogeneity in SDM preferences among respondents, and sub-group analysis showed that preferences varied with respondents' sex, study program, and experience of visiting doctors.
Abstract: The objective of this study was to ascertain the importance rankings of factors affecting the implementation of shared decision-making (SDM) among medical students in China and to determine whether these rankings were consistent across the respondents' individual characteristics. Students studying clinical medicine were recruited from three medical universities in China. A cross-sectional online survey using best-worst object scaling with a balanced incomplete block design was adopted to investigate their preferences towards implementing SDM in China. Count analysis, multinomial logit analysis and mixed logit analysis were used to estimate the preference heterogeneity of the SDM factors among respondents. A total of 574 medical students completed the online survey. The three most important factors for implementing SDM were trust and respect, (providing) high-quality medical information, and multi-disciplinary collaboration. The mixed logit regression model identified significant heterogeneity in SDM preferences among respondents, and sub-group analysis showed that preferences varied with respondents' sex, study program and experience of visiting doctors. The importance rankings provide rich information for implementing SDM and can facilitate the reform of education in medical schools in China. However, the heterogeneity in SDM preferences needs further exploration.
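
As background on the design component, a small base-R sketch of a balanced incomplete block design of the kind used to build best-worst choice sets; the 7-object, block-size-3 layout below is a textbook construction, not the survey's actual design:

```r
# Hedged base-R sketch: a balanced incomplete block design for 7 objects in
# 7 blocks of size 3, built from the difference set {0, 1, 3} mod 7. Blocks
# like these could serve as the choice sets of a best-worst scaling survey.
blocks <- t(sapply(0:6, function(i) (c(0, 1, 3) + i) %% 7 + 1))
colnames(blocks) <- paste0("position", 1:3)
blocks                                      # one row = one choice set of 3 objects

# Verify the balance: each object appears in 3 blocks and every pair of
# objects appears together in exactly one block.
incidence <- matrix(0, nrow = 7, ncol = 7)  # block-by-object incidence matrix
for (b in 1:7) incidence[b, blocks[b, ]] <- 1
pair_counts <- t(incidence) %*% incidence
diag(pair_counts)                                # replications: all equal 3
all(pair_counts[upper.tri(pair_counts)] == 1)    # pairwise concurrences: all 1 -> TRUE
```
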

6 citations