Journal ArticleDOI

A comparison of design and model selection methods for supersaturated experiments

01 Dec 2010-Computational Statistics & Data Analysis (North-Holland)-Vol. 54, Iss: 12, pp 3158-3167
TL;DR: Simulated experiments are used to evaluate the use of E(s^2)-optimal and Bayesian D-optimal designs and to compare three analysis strategies representing regression, shrinkage and a novel model-averaging procedure.
About: This article is published in Computational Statistics & Data Analysis. The article was published on 2010-12-01 and is currently open access. It has received 76 citations to date. The article focuses on the topics: Single-subject research & Model selection.

Summary (2 min read)

2.1. Design construction criteria

  • The two definitions are equivalent for balanced designs, that is, where each factor is set to +1 and -1 equally often.
  • The balanced E(s2)-optimal designs used in this paper were found using the algorithm of Ryan and Bulutoglu (2007).
  • The prior information can be viewed as equivalent to having sufficient additional runs to allow estimation of all factor effects.
  • This method can generate supersaturated designs for any design size and any number of factors.
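
A minimal sketch of the E(s2) criterion summarised above may help: E(s2) is the average of the squared off-diagonal entries of X'X, ignoring the intercept (as in the Booth and Cox definition quoted later on this page). The sketch below is an illustration only, not the Ryan and Bulutoglu (2007) algorithm; the function name, toy design size and random construction are illustrative assumptions.

```python
import numpy as np

def e_s2(X):
    """E(s^2): average squared off-diagonal entry of X'X for an n x m
    +/-1 design matrix X (intercept column excluded)."""
    m = X.shape[1]
    S = X.T @ X                          # m x m information matrix
    upper = np.triu_indices(m, k=1)      # one copy of each off-diagonal pair
    return float(np.mean(S[upper] ** 2))

# Toy example: a random balanced design with n = 12 runs and m = 16 factors,
# i.e. each column contains +1 and -1 equally often, as in Section 2.1.
rng = np.random.default_rng(0)
n, m = 12, 16
balanced_col = np.array([1] * (n // 2) + [-1] * (n // 2))
X = np.column_stack([rng.permutation(balanced_col) for _ in range(m)])
print(e_s2(X))   # an E(s^2)-optimal design minimises this value over balanced designs
```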

2.2. Model selection methods

  • Three methods are examined: regression (forward selection), shrinkage (the Gauss-Dantzig selector), and model averaging.
  • This procedure starts with the null model and adds the most significant factor main effect at each step according to an F -test (Miller, 2002, pp. 39-42).
  • Candes and Tao (2007) also developed a two-stage estimation approach, the Gauss-Dantzig selector, which reduces underestimation bias and was used for the analysis of supersaturated designs by Phoa et al. (2009).
  • Model-averaged coefficients are obtained by calculating estimates for a set of models and then computing a weighted average where the weights represent the plausibility of each model (Burnham and Anderson, 2002, ch. 4).
  • Retain the m1 < m factors with the highest summed weights.
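
The model-averaging step in the last two bullets can be sketched as follows, as a hypothetical illustration only: small main-effects models are enumerated (exploiting effect sparsity), each model is weighted by an Akaike weight computed from AICc, one common plausibility weight in Burnham and Anderson (2002), and the m1 factors with the highest summed weights are retained. The paper's iterative procedure, motivated by Holcomb et al. (2007), differs in its details; the function name and default settings here are illustrative.

```python
from itertools import combinations
import numpy as np

def model_average(X, y, max_size=3, m1=5):
    """Weight small main-effects models by plausibility and score each factor."""
    n, m = X.shape
    fits, aicc = [], []
    for size in range(1, max_size + 1):
        for subset in combinations(range(m), size):
            cols = list(subset)
            Z = np.column_stack([np.ones(n), X[:, cols]])      # intercept + chosen factors
            beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
            rss = float(np.sum((y - Z @ beta) ** 2))
            k = Z.shape[1] + 1                                  # parameters incl. error variance
            aicc.append(n * np.log(rss / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1))
            fits.append((cols, beta[1:]))                       # drop the intercept estimate
    aicc = np.asarray(aicc)
    w = np.exp(-0.5 * (aicc - aicc.min()))
    w /= w.sum()                                                # Akaike weights = model plausibility
    summed_w, coef_avg = np.zeros(m), np.zeros(m)
    for (cols, beta), wi in zip(fits, w):
        summed_w[cols] += wi                                    # summed weight per factor
        coef_avg[cols] += wi * beta                             # model-averaged coefficient
    keep = np.argsort(summed_w)[::-1][:m1]                      # retain the m1 < m highest-weight factors
    return keep, summed_w, coef_avg
```

Enumerating all subsets up to max_size is feasible only because effect sparsity keeps the set of candidate models small; each factor's model-averaged coefficient is implicitly zero in models that exclude it.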

3.2. Experiment simulation

  • To obtain the coefficients for the active factors, a sample of size c was drawn from a N(µ, 0.2), and ± signs randomly allocated to each number.
  • Coefficients for the inactive factors were obtained as a random draw from a N(0, 0.2).
  • Data were generated from model (1), with errors randomly drawn from a N(0, 1), and analysed by each of the three model selection methods.
  • The random assignment of active factors to columns is important to remove selection bias.
  • The choice of distributions at steps 2 and 3 ensures separation between the realised coefficients of the active and inactive factors.
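
The data-generation steps above can be sketched as follows, assuming a main-effects model y = Xβ + ε for model (1), which is not reproduced on this page, and reading N(µ, 0.2) as a normal distribution with variance 0.2; the values n = 12, m = 16, c = 5 and µ = 2 in the example are purely illustrative.

```python
import numpy as np

def simulate_response(X, c, mu, rng):
    """Generate one simulated experiment from a supersaturated design X (n x m)."""
    n, m = X.shape
    active = rng.choice(m, size=c, replace=False)                  # random assignment of active factors
    beta = rng.normal(0.0, np.sqrt(0.2), size=m)                   # inactive coefficients: N(0, 0.2)
    signs = rng.choice([-1.0, 1.0], size=c)                        # random +/- signs
    beta[active] = signs * rng.normal(mu, np.sqrt(0.2), size=c)    # active coefficients: +/- N(mu, 0.2)
    y = X @ beta + rng.normal(0.0, 1.0, size=n)                    # errors: N(0, 1)
    return y, beta, active

# Example with a random balanced 12-run, 16-factor design (as in the sketch in Section 2.1).
rng = np.random.default_rng(1)
n, m = 12, 16
col = np.array([1] * (n // 2) + [-1] * (n // 2))
X = np.column_stack([rng.permutation(col) for _ in range(m)])
y, beta, active = simulate_response(X, c=5, mu=2.0, rng=rng)
```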

3.3. Choice of tuning constants

  • For each method, a comparison of different values for the tuning constants was carried out prior to the main simulation studies.
  • The aim was to find values of the tuning parameters that did not rely on detailed information from each simulation setting.
  • The authors found that removing a single factor overcame this problem.
  • Attempting to fit models that are too large in step 5, i.e. setting m3 too high, can result in a loss of power and also higher type I error rates.
  • The authors suggest that m3 be chosen broadly in line with effect sparsity, and a little larger than the anticipated number of active factors.

3.4. Simulation results

  • These show that the Gauss-Dantzig selector has values of π1 and π3 as high as, or higher than, those of the other analysis methods in almost all the simulations and often has very low values for π2.
  • From their comparisons, the Gauss-Dantzig selector is the most promising method, particularly in the more challenging settings.
  • The Bayesian D-optimal designs have consistently higher values for π1, ..., π4 than the E(s2)-optimal designs, although the differences are often small.
  • The results indicated poor performance with π1 and π3 less than 0.61 and 0.37 respectively.
  • In practice, the assignment of active factors to the columns of a design may influence the subsequent model selection.

3.5. No active factors

  • Further simulations were used to check the performance of the design and analysis methods when there are no active factors, a situation where π1 and π3 no longer apply.
  • From Table 5, the Gauss-Dantzig selector is clearly the best analysis method and rarely declares any factors active.
  • The other methods have considerably higher type I errors, typically declaring at least two factors active.
  • Table 5 also shows that the E(s2)-optimal designs perform better than the Bayesian D-optimal designs for the Gauss-Dantzig selector, agreeing with the results for π2 in Section 3.4.

3.6. What is ‘effect sparsity’?

  • A set of simulations was performed to assess how many active factors could be identified reliably using supersaturated designs.
  • Both the E(s2)-optimal and the Bayesian D-optimal designs perform well for up to 8 active factors.
  • The Bayesian D-optimal design has slightly higher π1, π2 and π3 values and thus tends to select slightly larger models.
  • The performance, particularly under π1 and π3, declines more rapidly as the number of active factors increases.
  • Again, slightly larger models are selected using the Bayesian D-optimal design, a difference which is not observed for other analysis methods.

4. Discussion

  • The results in this paper provide evidence that supersaturated designs may be a useful tool for screening experiments, particularly marginally supersaturated designs (where m is only slightly larger than n).
  • However, evidence from their study suggests that conditions 2 and 3 are those under which supersaturated designs are most likely to be successful.
  • Little difference was found in the performance of the E(s2)-optimal and Bayesian D-optimal designs, with the latter having slightly higher power to detect active effects at the cost of a slightly higher type I error rate.
  • Such designs are readily available in standard software such as SAS Proc Optex and JMP.
  • The simulations presented cover a broader range of conditions than previously considered, and investigate more aspects of design performance.

Acknowledgments

  • The authors wish to thank Bradley Jones (SAS) and Susan Lewis (University of Southampton) for helpful discussions.
  • The first author was supported by a PhD studentship from EPSRC and the Lubrizol Corporation.

Citations
Journal ArticleDOI
TL;DR: This paper provides a tutorial on Latin hypercube design of experiments, highlighting potential reasons for its widespread use and going all the way to the pitfalls of the indiscriminate use of Latin hypercube designs.
Abstract: The growing power of computers enabled techniques created for design and analysis of simulations to be applied to a large spectrum of problems and to reach high level of acceptance among practitioners. Generally, when simulations are time consuming, a surrogate model replaces the computer code in further studies (e.g., optimization, sensitivity analysis, etc.). The first step for a successful surrogate modeling and statistical analysis is the planning of the input configuration that is used to exercise the simulation code. Among the strategies devised for computer experiments, Latin hypercube designs have become particularly popular. This paper provides a tutorial on Latin hypercube design of experiments, highlighting potential reasons of its widespread use. The discussion starts with the early developments in optimization of the point selection and goes all the way to the pitfalls of the indiscriminate use of Latin hypercube designs. Final thoughts are given on opportunities for future research. Copyright © 2015 John Wiley & Sons, Ltd.

93 citations

Journal ArticleDOI
TL;DR: Supersaturated designs are fractional factorial designs in which the run size (n) is too small to estimate all the main effects. Under the effect sparsity assumption, the use of supersaturated designs can provide low-cost identification of the few, possibly dominating, factors, as mentioned in this paper.

58 citations

Journal ArticleDOI
TL;DR: In this paper, the authors consider screening experiments where an investigator wishes to study many factors using fewer observations, and they focus on experiments with two-level factors and a main effects model with intercept.
Abstract: We consider screening experiments where an investigator wishes to study many factors using fewer observations. Our focus is on experiments with two-level factors and a main effects model with intercept. Since the number of parameters is larger than the number of observations, traditional methods of inference and design are unavailable. In 1959, Box suggested the use of supersaturated designs and in 1962, Booth and Cox introduced measures for efficiency of these designs including E(s2), which is the average of squares of the off-diagonal entries of the information matrix, ignoring the intercept. For a design to be E(s2)-optimal, the main effect of every factor must be orthogonal to the intercept (factors are balanced), and among all designs that satisfy this condition, it should minimize E(s2). This is a natural approach since it identifies the most nearly orthogonal design, and orthogonal designs enjoy many desirable properties including efficient parameter estimation. Factor balance in an E(s2)-optimal d...

52 citations


Cites background or methods from "A comparison of design and model se..."

  • ...Marley and Woods (2010) have examined various design and model selection procedures by a careful simulation study....

    [...]

  • ...The goal of this work is to explore new approaches to obtain optimal supersaturated designs. We introduce two optimality criteria and study the properties and construction of optimal designs. We also establish connections between optimal designs derived from the two criteria, as well as the connections of these optimal designs with Bayes optimal designs and optimal chemical balance weighing designs. Our first approach is to obtain an optimal design by minimizing E(s2) in a larger class of designs. Note that E(s2)-optimality may be viewed as conditional optimality since it restricts the search to designs in which the factors are balanced. We introduce UE(s2)-optimality (unconditional E(s2)-optimality), where we use essentially the function E(s2), but do not restrict the search to designs with balanced factors. The second criterion is motivated by traditional Kiefer-style design optimality, even as we recognize that parameter estimation is not the primary goal of screening experiments. For experiments where the number of observations is greater than the number of parameters, an optimal design minimizes the covariance matrix of the best linear unbiased estimator of the parameters. Since for a supersaturated design all parameters do not have unbiased estimators, we first identify an estimator of the parameter vector that minimizes the estimation bias. We then obtain an optimal design by minimizing the covariance matrix of the minimum bias estimator. The minimization of the covariance matrix can be done using real-valued functions that are standard in design optimality theory, such as the A-, D-, or E-optimality criteria functions. Our focus in this work is on D-optimality. If X denotes the n × p model matrix of a supersaturated design with n < p and rank n, then for the D criterion, this reduces to maximizing ∣∣XX′∣∣. A design that achieves the maximum will be called a D-optimal supersaturated design. It turns out that a design is D-optimal supersaturated if and only if it is (essentially) the transpose of a D-optimal chemical balance weighing design. Since the latter designs have been studied comprehensively by several authors including Ehlich (1964a, b), Payne (1974), Mitchell (1974), Cheng (1980), and Galil and Kiefer (1980, 1982), D-optimal supersaturated designs can be obtained using the work of these authors....

    [...]

  • ...An UE(s2)-optimal design is given in Table 2, and an E(s2)-optimal design in Table 3 (E(s2)optimality may be established using Bulutoglu and Cheng (2004)); the column of 1’s is not shown in either table....

    [...]

Journal ArticleDOI
TL;DR: This paper showed that DSDs have high power for detecting all the main effects, as well as one two-factor interaction or one quadratic effect, as long as the true effects are much larger than the error standard deviation.
Abstract: Since their introduction by Jones and Nachtsheim in 2011, definitive screening designs (DSDs) have seen application in fields as diverse as bio-manufacturing, green energy production, and laser etching. One barrier to their routine adoption for screening is due to the difficulties practitioners experience in model selection when both main effects and second-order effects are active. Jones and Nachtsheim showed that for six or more factors, DSDs project to designs in any three factors that can fit a full quadratic model. In addition, they showed that DSDs have high power for detecting all the main effects as well as one two-factor interaction or one quadratic effect as long as the true effects are much larger than the error standard deviation. However, simulation studies of model selection strategies applied to DSDs can disappoint by failing to identify the correct set of active second-order effects when there are more than a few such effects. Standard model selection strategies such as stepwise re...

47 citations

References
Book
01 Jan 1998
K.P. Burnham, D.R. Anderson, Model Selection and Multimodel Inference.

4,406 citations

Journal ArticleDOI
TL;DR: In many important statistical applications, the number of variables or parameters p is much larger than the number of observations n; the authors show that β can nevertheless be estimated reliably from the noisy data y.
Abstract: In many important statistical applications, the number of variables or parameters p is much larger than the number of observations n. Suppose then that we have observations y=Xβ+z, where β∈Rp is a parameter vector of interest, X is a data matrix with possibly far fewer rows than columns, n≪p, and the zi’s are i.i.d. N(0, σ^2). Is it possible to estimate β reliably based on the noisy data y?

3,539 citations

Book
15 Apr 2002
TL;DR: A comprehensive treatment of subset selection in regression, covering forward selection, Efroymson's algorithm, backward elimination, sequential replacement, all-subsets search via branch-and-bound, ridge regression, the lasso, stopping rules, selection bias in coefficient estimation, and Bayesian methods including model averaging.
Abstract: OBJECTIVES Prediction, Explanation, Elimination or What? How Many Variables in the Prediction Formula? Alternatives to Using Subsets 'Black Box' Use of Best-Subsets Techniques LEAST-SQUARES COMPUTATIONS Using Sums of Squares and Products Matrices Orthogonal Reduction Methods Gauss-Jordan v. Orthogonal Reduction Methods Interpretation of Projections Appendix A: Operation Counts for All-Subsets Regression FINDING SUBSETS WHICH FIT WELL Objectives and Limitations of this Chapter Forward Selection Efroymson's Algorithm Backward Elimination Sequential Replacement Algorithm Replacing Two Variables at a Time Generating All Subsets Using Branch-and-Bound Techniques Grouping Variables Ridge Regression and Other Alternatives The Non-Negative Garrote and the Lasso Some Examples Conclusions and Recommendations HYPOTHESIS TESTING Is There any Information in the Remaining Variables? Is One Subset Better than Another? Appendix A: Spjftvoll's Method - Detailed Description WHEN TO STOP? What Criterion Should We Use? Prediction Criteria Cross-Validation and the PRESS Statistic Bootstrapping Likelihood and Information-Based Stopping Rules Appendix A. Approximate Equivalence of Stopping Rules ESTIMATION OF REGRESSION COEFFICIENTS Selection Bias Choice Between Two Variables Selection Bias in the General Case, and its Reduction Conditional Likelihood Estimation Estimation of Population Means Estimating Least-Squares Projections Appendix A: Changing Projections to Equate Sums of Squares BAYESIAN METHODS Bayesian Introduction 'Spike and Slab' Prior Normal prior for Regression Coefficients Model Averaging Picking the Best Model CONCLUSIONS AND SOME RECOMMENDATIONS REFERENCES INDEX

1,722 citations


"A comparison of design and model se..." refers methods in this paper

  • ...Forward selection This procedure starts with the null model and adds the most significant factor main effect at each step according to an F -test (Miller, 2002, pp. 39-42)....

    [...]

Journal ArticleDOI
TL;DR: The authors consider model selection and accounting for model uncertainty in high-dimensional contingency tables, motivated by expert system applications; in principle, a panacea is provided by the standard Bayesian formalism that averages the posterior distributions of the quantity of interest under each of the models, weighted by their posterior model probabilities.
Abstract: We consider the problem of model selection and accounting for model uncertainty in high-dimensional contingency tables, motivated by expert system applications. The approach most used currently is a stepwise strategy guided by tests based on approximate asymptotic P values leading to the selection of a single model; inference is then conditional on the selected model. The sampling properties of such a strategy are complex, and the failure to take account of model uncertainty leads to underestimation of uncertainty about quantities of interest. In principle, a panacea is provided by the standard Bayesian formalism that averages the posterior distributions of the quantity of interest under each of the models, weighted by their posterior model probabilities. Furthermore, this approach is optimal in the sense of maximizing predictive ability. But this has not been used in practice, because computing the posterior model probabilities is hard and the number of models is very large (often greater than 1...

1,313 citations


"A comparison of design and model se..." refers background or methods in this paper

  • ...Further, many models will be scientifically implausible and therefore should be excluded (Madigan and Raftery, 1994)....

    [...]

  • ...Further, many models will be scientifically implausible and therefore should be excluded (Madigan and Raftery, 1994). Effect sparsity suggests restriction to a set of models each of which contains only a few factors. We propose a new iterative approach, motivated by the many-models method of Holcomb et al. (2007):...

    [...]

Journal ArticleDOI
TL;DR: A more formal analysis is presented here, which may be used to supplement such plots and hence to facilitate the use of these unreplicated experimental arrangements.
Abstract: Loss of markets to Japan has recently caused attention to return to the enormous potential that experimental design possesses for the improvement of product design, for the improvement of the manufacturing process, and hence for improvement of overall product quality. In the screening stage of industrial experimentation it is frequently true that the “Pareto Principle” applies; that is, a large proportion of process variation is associated with a small proportion of the process variables. In such circumstances of “factor sparsity,” unreplicated fractional designs and other orthogonal arrays have frequently been effective when used as a screen for isolating preponderant factors. A useful graphical analysis due to Daniel (1959) employs normal probability plotting. A more formal analysis is presented here, which may be used to supplement such plots and hence to facilitate the use of these unreplicated experimental arrangements.

528 citations


"A comparison of design and model se..." refers background in this paper

  • ...It is widely accepted that the effectiveness of supersaturated designs in detecting active factors requires there being only a small number of such factors, known as effect sparsity (Box and Meyer, 1986)....

    [...]

Frequently Asked Questions (2)
Q1. What are the contributions in "A comparison of design and model selection methods for supersaturated experiments" ?

In this paper, simulated experiments are used to evaluate the use of E(s2)-optimal and Bayesian D-optimal designs, and to compare three analysis strategies representing regression, shrinkage and a novel model-averaging procedure. Suggestions are made for choosing the values of the tuning constants for each approach.

Further studies of interest include incorporating interaction effects in the models, and Bayesian methods of analysis; see, for example, Beattie et al. (2002).