Journal ArticleDOI

A Study of the Group Screening Method

01 Aug 1961-Technometrics (Taylor & Francis Group)-Vol. 3, Iss: 3, pp 371-388
TL;DR: In this article, group screening methods are discussed in which f factors are sub-divided into groups of k factors each, forming g "group-factors"; the group-factors are then studied using a Plackett and Burman design in g + 1 runs.
Abstract: This paper discusses the problem of group screening methods wherein f factors are sub-divided into groups of k factors each, forming g “group-factors”. The group-factors are then studied using a Plackett and Burman design in g + 1 runs. The two versions of the group-factors are formed by maintaining all component factors at their upper and lower levels respectively. All factors in groups found to have large effects are then studied in a second stage of experiments. The author discusses the problems of detection and false detection of factors, optimum group size, size of program, and the role of costs in this sequential form of experimentation.
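
The two-stage procedure can be illustrated with a short simulation. Below is a minimal Python sketch (hypothetical code, not from the paper): a noise-free linear response stands in for the experiment, a Hadamard matrix plays the role of the Plackett and Burman design at stage one, and the factors inside any flagged group are tested one at a time at stage two. The function names, the threshold, and the toy response are all assumptions for illustration.

```python
import numpy as np
from scipy.linalg import hadamard

def screen(effects, k, threshold=0.5):
    """Two-stage group screening on a noise-free linear response
    y = x @ effects (a toy stand-in; the paper treats the noisy case)."""
    f = len(effects)
    g = int(np.ceil(f / k))                       # number of group-factors
    groups = [np.arange(i * k, min((i + 1) * k, f)) for i in range(g)]
    y = lambda x: x @ effects                     # hypothetical experiment

    # Stage 1: study the g group-factors with a two-level orthogonal design.
    # A Sylvester Hadamard matrix stands in for the Plackett-Burman design
    # in g + 1 runs (they coincide when the order is a power of two).
    n = 2 ** int(np.ceil(np.log2(g + 1)))
    X_groups = hadamard(n)[:, 1:g + 1]            # g orthogonal +/-1 columns
    X = np.zeros((n, f))
    for j, grp in enumerate(groups):
        X[:, grp] = X_groups[:, [j]]              # all members move together
    ys = X @ effects
    group_eff = X_groups.T @ ys / n               # orthogonal-contrast estimates

    # Stage 2: test each factor inside the groups flagged at stage 1.
    active = []
    for j, grp in enumerate(groups):
        if abs(group_eff[j]) > threshold:
            for i in grp:
                x = np.zeros(f)
                x[i] = 1.0
                if abs(y(x) - y(-x)) / 2 > threshold:
                    active.append(int(i))
    return active

effects = np.zeros(12)
effects[[2, 7]] = [3.0, -2.5]                     # 2 real effects among 12
print(screen(effects, k=3))                       # -> [2, 7]
```

Note that effects of opposite sign within a group can cancel at stage one and escape detection; the paper's discussion of detection, false detection, and optimum group size concerns exactly these trade-offs.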
Citations
Journal Article
TL;DR: The proposed experimental plans are composed of individually randomized one-factor-at-a-time designs, and data analysis is based on the resulting random sample of observed elementary effects, those changes in an output due solely to changes in a particular input.
Abstract: A computational model is a representation of some physical or other system of interest, first expressed mathematically and then implemented in the form of a computer program; it may be viewed as a function of inputs that, when evaluated, produces outputs. Motivation for this article comes from computational models that are deterministic, complicated enough to make classical mathematical analysis impractical and that have a moderate-to-large number of inputs. The problem of designing computational experiments to determine which inputs have important effects on an output is considered. The proposed experimental plans are composed of individually randomized one-factor-at-a-time designs, and data analysis is based on the resulting random sample of observed elementary effects, those changes in an output due solely to changes in a particular input. Advantages of this approach include a lack of reliance on assumptions of relative sparsity of important inputs, monotonicity of outputs with respect to inputs, or ad...
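
As a concrete illustration of the elementary-effects idea, here is a minimal Python sketch (assumed code, not the authors'): each randomized one-factor-at-a-time trajectory steps every input once, the mean absolute elementary effect ranks input importance, and the spread of the effects flags nonlinearity or interaction. The step size, level grid, and toy model are illustrative assumptions.

```python
import numpy as np

def elementary_effects(model, num_inputs, num_trajectories=10, levels=4, seed=0):
    """Randomized one-factor-at-a-time sampling of elementary effects
    d_i = (f(x + delta * e_i) - f(x)) / delta  --  a minimal sketch."""
    rng = np.random.default_rng(seed)
    delta = levels / (2 * (levels - 1))          # standard step on [0, 1]
    effects = [[] for _ in range(num_inputs)]
    for _ in range(num_trajectories):
        # random base point on a grid that leaves room for the +delta step
        x = rng.integers(0, levels // 2, num_inputs) / (levels - 1)
        for i in rng.permutation(num_inputs):    # randomized OAT order
            x_step = x.copy()
            x_step[i] += delta
            effects[i].append((model(x_step) - model(x)) / delta)
            x = x_step                           # walk the trajectory onward
    # mean |d_i| ranks importance; the spread flags nonlinearity/interaction
    mu_star = np.array([np.mean(np.abs(d)) for d in effects])
    sigma = np.array([np.std(d) for d in effects])
    return mu_star, sigma

# toy model: inputs 0 and 2 matter, input 1 is inert
toy = lambda x: 4 * x[0] + x[0] * x[2] ** 2
mu_star, sigma = elementary_effects(toy, num_inputs=3)
print(mu_star.round(2), sigma.round(2))
```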

3,396 citations


Cites background from "A Study of the Group Screening Meth..."

  • ...conventional fractional factorial experiments, Watson (1961) and others developed the idea of group screening for instances in which it is believed that most factors have little or no effect on the response....

    [...]


Journal ArticleDOI
TL;DR: This work model the output of the computer code as the realization of a stochastic process, allowing nonlinear and interaction effects to emerge without explicitly modeling such effects.
Abstract: Many scientific phenomena are now investigated by complex computer models or codes. Given the input values, the code produces one or more outputs via a complex mathematical model. Often the code is expensive to run, and it may be necessary to build a computationally cheaper predictor to enable, for example, optimization of the inputs. If there are many input factors, an initial step in building a predictor is identifying (screening) the active factors. We model the output of the computer code as the realization of a stochastic process. This model has a number of advantages. First, it provides a statistical basis, via the likelihood, for a stepwise algorithm to determine the important factors. Second, it is very flexible, allowing nonlinear and interaction effects to emerge without explicitly modeling such effects. Third, the same data are used for screening and building the predictor, so expensive runs are efficiently used. We illustrate the methodology with two examples, both having 20 input variables. I...
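
The flavor of this screening approach can be reproduced with off-the-shelf tools. The sketch below is an approximation under stated assumptions, not the authors' stepwise likelihood algorithm: it fits a Gaussian-process surrogate with one correlation length-scale per input (scikit-learn's GaussianProcessRegressor with an anisotropic RBF kernel) and reads off active inputs as those with short fitted length-scales. The toy "code" and all sizes are invented for the demo.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy deterministic "code": 2 active inputs out of 8 (an assumption)
rng = np.random.default_rng(1)
X = rng.uniform(size=(60, 8))
y = np.sin(3 * X[:, 0]) + 2 * X[:, 3] ** 2       # only inputs 0 and 3 matter

# Anisotropic Gaussian kernel: one correlation length-scale per input
kernel = RBF(length_scale=np.ones(8), length_scale_bounds=(1e-2, 1e3))
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True,
                              n_restarts_optimizer=3).fit(X, y)

# Short fitted length-scales mark inputs that drive the output; inert
# inputs drift to long length-scales (the output barely varies with them)
for i, ls in enumerate(gp.kernel_.length_scale):
    print(f"input {i}: fitted length-scale {ls:8.2f}")
```

Inert inputs end up with long length-scales because the likelihood gains nothing from letting the correlation decay along them, which is what makes the fitted correlation parameters usable as a screening statistic.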

663 citations

Journal ArticleDOI
TL;DR: A survey of modeling and optimization strategies that may help to solve High-dimensional, Expensive (computationally), Black-box (HEB) problems; two promising approaches for solving HEB problems are identified.
Abstract: The integration of optimization methodologies with computational analyses/simulations has a profound impact on the product design. Such integration, however, faces multiple challenges. The most eminent challenges arise from high-dimensionality of problems, computationally-expensive analysis/simulation, and unknown function properties (i.e., black-box functions). The merger of these three challenges severely aggravates the difficulty and becomes a major hurdle for design optimization. This paper provides a survey on related modeling and optimization strategies that may help to solve High-dimensional, Expensive (computationally), Black-box (HEB) problems. The survey screens out 207 references including multiple historical reviews on relevant subjects from more than 1,000 papers in a variety of disciplines. This survey has been performed in three areas: strategies tackling high-dimensionality of problems, model approximation techniques, and direct optimization strategies for computationally-expensive black-box functions and promising ideas behind non-gradient optimization algorithms. Major contributions in each area are discussed and presented in an organized manner. The survey exposes that direct modeling and optimization strategies to address HEB problems are scarce and sporadic, partially due to the difficulty of the problem itself. Moreover, it is revealed that current modeling research tends to focus on sampling and modeling techniques themselves and neglect studying and taking the advantages of characteristics of the underlying expensive functions. Based on the survey results, two promising approaches are identified to solve HEB problems. Directions for future research are also discussed.

535 citations


Additional excerpts

  • ...Watson (1961) proposed a group screening method....

    [...]

Journal ArticleDOI
TL;DR: In this article, a general method is proposed for constructing supersaturated designs by augmenting a saturated 2^(N-1) design with interaction columns, and the efficiency of the constructed designs is studied using three criteria.
Abstract: An N x N Hadamard matrix can be used to construct a saturated 2^(N-1) design with N runs. Furthermore, if an interaction column for two of the columns of the matrix is not fully aliased with any column of the matrix, this interaction column can be used as a supplementary column for studying an additional factor. For some small Hadamard matrices studied by Plackett & Burman (1946), the number of such interaction columns is very large, thus allowing the construction of supersaturated designs whose number of factors far exceeds the number of runs. A general method of construction along these lines is proposed. The efficiency of the constructed designs is studied by using three criteria.
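
The construction is straightforward to reproduce for the 12-run Plackett & Burman design. The Python sketch below is illustrative, not the paper's own procedure: it cycles the standard 11-element generator row, forms all pairwise interaction columns, and keeps those that are not fully aliased (|inner product| < N) with any design column or with one another.

```python
import numpy as np
from itertools import combinations

# 12-run Plackett & Burman design: cycle the standard generator row,
# then append a row of -1s (prepending a +1 column gives a Hadamard matrix)
gen = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])
D = np.array([np.roll(gen, i) for i in range(11)] + [-np.ones(11, int)])
N = 12                          # runs; D is the saturated design for 11 factors

# Candidate supplementary columns: products of column pairs that are not
# fully aliased with any design column or with a column already kept
extra = []
for a, b in combinations(range(11), 2):
    col = D[:, a] * D[:, b]
    if (max(abs(col @ D[:, j]) for j in range(11)) < N
            and all(abs(col @ e) < N for e in extra)):
        extra.append(col)

print(f"{len(extra)} usable interaction columns -> "
      f"up to {11 + len(extra)} factors in {N} runs")
```

Running the sketch bears out the abstract's point for N = 12: the usable interaction columns far outnumber the runs, so the number of factors far exceeds the number of runs.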

245 citations
