# On the Experimental Attainment of Optimum Conditions

01 Jan 1951 · Journal of the Royal Statistical Society, Series B (Methodological), Vol. 13, Iss. 1, pp. 1–38

TL;DR: The work described in this article is the result of a several-year study by a chemist and a statistician, developed mainly in answer to problems of determining optimum conditions in chemical investigations; the authors believe the methods will also be of value in other fields where experimentation is sequential and the error fairly small.

Abstract: The work described is the result of a study extending over the past few years by a chemist and a statistician. Development has come about mainly in answer to problems of determining optimum conditions in chemical investigations, but we believe that the methods will be of value in other fields where experimentation is sequential and the error fairly small.
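The paper is commonly summarized as introducing the method of steepest ascent for response-surface work: fit a first-degree polynomial from a small factorial design around the current operating point, then move in the fitted gradient direction. A minimal sketch of that idea, using a hypothetical noisy response `yield_` and illustrative step sizes (all names and constants here are assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def yield_(x):
    # hypothetical response surface with small noise; optimum near (2, 3)
    return 10 - (x[0] - 2) ** 2 - (x[1] - 3) ** 2 + rng.normal(0, 0.05)

def steepest_ascent_step(x0, h=0.2, n_center=2):
    """Fit a first-degree polynomial from a 2^2 factorial around x0
    (plus centre points) and return the fitted gradient direction."""
    pts = [x0 + h * np.array(d) for d in ((-1, -1), (-1, 1), (1, -1), (1, 1))]
    pts += [x0.copy()] * n_center
    X = np.array([[1.0, *p] for p in pts])          # columns: 1, x1, x2
    y = np.array([yield_(p) for p in pts])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    g = beta[1:]                                     # fitted first-order effects
    return g / np.linalg.norm(g)

x = np.array([0.0, 0.0])
for _ in range(8):                                   # follow the path of steepest ascent
    x = x + 0.5 * steepest_ascent_step(x)
print(np.round(x, 2))  # climbs toward the region of the optimum
```

Once the first-order effects become small relative to their error, the paper's programme switches to fitting a second-degree surface to locate the optimum more precisely.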

##### Citations




TL;DR: This work considers approximate Bayesian inference in a popular subset of structured additive regression models, latent Gaussian models, where the latent field is Gaussian, controlled by a few hyperparameters, with non-Gaussian response variables; very accurate approximations to the posterior marginals can be computed directly.

Abstract: Structured additive regression models are perhaps the most commonly used class of models in statistical applications. It includes, among others, (generalized) linear models, (generalized) additive models, smoothing spline models, state space models, semiparametric regression, spatial and spatiotemporal models, log-Gaussian Cox processes and geostatistical and geoadditive models. We consider approximate Bayesian inference in a popular subset of structured additive regression models, latent Gaussian models, where the latent field is Gaussian, controlled by a few hyperparameters and with non-Gaussian response variables. The posterior marginals are not available in closed form owing to the non-Gaussian response variables. For such models, Markov chain Monte Carlo methods can be implemented, but they are not without problems, in terms of both convergence and computational time. In some practical applications, the extent of these problems is such that Markov chain Monte Carlo sampling is simply not an appropriate tool for routine analysis. We show that, by using an integrated nested Laplace approximation and its simplified version, we can directly compute very accurate approximations to the posterior marginals. The main benefit of these approximations is computational: where Markov chain Monte Carlo algorithms need hours or days to run, our approximations provide more precise estimates in seconds or minutes. Another advantage with our approach is its generality, which makes it possible to perform Bayesian analysis in an automatic, streamlined way, and to compute model comparison criteria and various predictive measures so that models can be compared and the model under study can be challenged.

4,164 citations


TL;DR: The proposed experimental plans are composed of individually randomized one-factor-at-a-time designs, and data analysis is based on the resulting random sample of observed elementary effects, those changes in an output due solely to changes in a particular input.

Abstract: A computational model is a representation of some physical or other system of interest, first expressed mathematically and then implemented in the form of a computer program; it may be viewed as a function of inputs that, when evaluated, produces outputs. Motivation for this article comes from computational models that are deterministic, complicated enough to make classical mathematical analysis impractical and that have a moderate-to-large number of inputs. The problem of designing computational experiments to determine which inputs have important effects on an output is considered. The proposed experimental plans are composed of individually randomized one-factor-at-a-time designs, and data analysis is based on the resulting random sample of observed elementary effects, those changes in an output due solely to changes in a particular input. Advantages of this approach include a lack of reliance on assumptions of relative sparsity of important inputs, monotonicity of outputs with respect to inputs, or ad...

3,396 citations

### Cites background from "On the Experimental Attainment of O..."

...[An alternative justification might be based on the fact that since B is a foldover (Box and Wilson 1951) of a resolution III o....

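The elementary-effects idea in the abstract above can be sketched in a few lines: perturb one input at a time from a random base point and record the resulting change in the output, scaled by the step. The model `f`, the step `delta`, and the number of trajectories `r` below are illustrative assumptions, not taken from the cited article:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # toy deterministic model: input 0 matters strongly, input 2 not at all
    return 3.0 * x[0] + x[1] ** 2 + 0.0 * x[2]

def elementary_effects(f, k, r, delta=0.25, rng=rng):
    """One randomized one-factor-at-a-time pass per trajectory:
    perturb each input once, in random order, and record
    (f(x + delta * e_i) - f(x)) / delta."""
    effects = np.zeros((r, k))
    for t in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=k)   # random base point in [0, 1]
        for i in rng.permutation(k):                # random factor order
            x_new = x.copy()
            x_new[i] += delta
            effects[t, i] = (f(x_new) - f(x)) / delta
            x = x_new                               # one-at-a-time: keep the step
    return effects

ee = elementary_effects(f, k=3, r=20)
mu_star = np.abs(ee).mean(axis=0)   # mean |elementary effect|: screening measure
print(mu_star)  # input 0 has a large effect, input 2 none
```

The random sample of elementary effects is then summarized (e.g. by its mean magnitude and spread) to screen inputs for importance without assuming sparsity or monotonicity.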


TL;DR: In this paper, a class of incomplete three-level factorial designs useful for estimating the coefficients in a second-degree graduating polynomial is described; the designs either meet, or approximately meet, the criterion of rotatability and for the most part can be orthogonally blocked.

Abstract: A class of incomplete three-level factorial designs useful for estimating the coefficients in a second-degree graduating polynomial is described. The designs either meet, or approximately meet, the criterion of rotatability and for the most part can be orthogonally blocked. A fully worked example is included.

3,194 citations
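A small sketch of the construction described above: every pair of factors is set to ±1 with the remaining factors at 0, centre points are added, and the resulting three-level design supports least-squares fitting of a full second-degree polynomial. The construction details and the test polynomial below are illustrative assumptions:

```python
import itertools
import numpy as np

def three_level_pair_design(k, n_center=3):
    """Incomplete three-level design: each pair of factors at +/-1 with the
    rest at 0, plus replicated centre points (a common construction)."""
    rows = []
    for i, j in itertools.combinations(range(k), 2):
        for a, b in itertools.product((-1.0, 1.0), repeat=2):
            x = np.zeros(k)
            x[i], x[j] = a, b
            rows.append(x)
    rows += [np.zeros(k)] * n_center
    return np.array(rows)

def quadratic_model_matrix(X):
    """Columns: 1, x_i, x_i^2, x_i*x_j — a full second-degree polynomial."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    cols += [X[:, i] * X[:, j] for i, j in itertools.combinations(range(k), 2)]
    return np.column_stack(cols)

X = three_level_pair_design(3)                       # 12 edge points + 3 centres
y = 5 + 2 * X[:, 0] - X[:, 1] ** 2 + 0.5 * X[:, 0] * X[:, 2]   # known quadratic
beta, *_ = np.linalg.lstsq(quadratic_model_matrix(X), y, rcond=None)
print(np.round(beta, 6))  # recovers the generating coefficients
```

Because the noiseless response lies in the model's column space and the design has full column rank, the fit recovers the generating coefficients exactly; with a real noisy response the same design gives the least-squares estimates of the second-degree surface.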


22 Jun 2009

TL;DR: This book provides a complete background on metaheuristics and shows readers how to design and implement efficient algorithms to solve complex optimization problems across a diverse range of applications, from networking and bioinformatics to engineering design, routing, and scheduling.

Abstract: A unified view of metaheuristics. This book provides a complete background on metaheuristics and shows readers how to design and implement efficient algorithms to solve complex optimization problems across a diverse range of applications, from networking and bioinformatics to engineering design, routing, and scheduling. It presents the main design questions for all families of metaheuristics and clearly illustrates how to implement the algorithms under a software framework to reuse both the design and code. Throughout the book, the key search components of metaheuristics are considered as a toolbox for:

- Designing efficient metaheuristics (e.g. local search, tabu search, simulated annealing, evolutionary algorithms, particle swarm optimization, scatter search, ant colonies, bee colonies, artificial immune systems) for optimization problems
- Designing efficient metaheuristics for multi-objective optimization problems
- Designing hybrid, parallel, and distributed metaheuristics
- Implementing metaheuristics on sequential and parallel machines

Using many case studies and treating design and implementation independently, this book gives readers the skills necessary to solve large-scale optimization problems quickly and efficiently. It is a valuable reference for practicing engineers and researchers from diverse areas dealing with optimization or machine learning, and for graduate students in computer science, operations research, control, engineering, business and management, and applied mathematics.

2,735 citations

### Cites methods from "On the Experimental Attainment of O..."

...The best-known methods belonging to this class are the least-squares method (quadratic polynomials) of Box and Wilson [90] and design of experiments (DOE) of Taguchi [668]....



##### References



01 Jan 1936

TL;DR: In this paper, the authors consider the approximate representation of equidistant, equally weighted, and uncorrelated observations under the following assumptions: (i) the data being u1, u2, …, un, the representation is to be given by linear combinations; (ii) the linear combinations are to be such as would reproduce any set of values that were already values of a polynomial of degree not higher than the kth.

Abstract: In a series of papers W. F. Sheppard (1912, 1914) has considered the approximate representation of equidistant, equally weighted, and uncorrelated observations under the following assumptions:

(i) the data being u1, u2, …, un, the representation is to be given by linear combinations;
(ii) the linear combinations are to be such as would reproduce any set of values that were already values of a polynomial of degree not higher than the kth;
(iii) the sum of squared coefficients, which measures the mean square error of yi, is to be a minimum for each value of i.

795 citations


01 Jan 1947

441 citations


213 citations