Author

Robert J. Buck

Bio: Robert J. Buck is an academic researcher from Western Michigan University. The author has contributed to research on topics including computer experiments and circuit design. The author has an h-index of 6 and has co-authored 8 publications receiving 947 citations. Previous affiliations of Robert J. Buck include City University London and Northampton Community College.

Papers
Journal ArticleDOI
TL;DR: This work models the output of the computer code as the realization of a stochastic process, allowing nonlinear and interaction effects to emerge without explicitly modeling such effects.
Abstract: Many scientific phenomena are now investigated by complex computer models or codes. Given the input values, the code produces one or more outputs via a complex mathematical model. Often the code is expensive to run, and it may be necessary to build a computationally cheaper predictor to enable, for example, optimization of the inputs. If there are many input factors, an initial step in building a predictor is identifying (screening) the active factors. We model the output of the computer code as the realization of a stochastic process. This model has a number of advantages. First, it provides a statistical basis, via the likelihood, for a stepwise algorithm to determine the important factors. Second, it is very flexible, allowing nonlinear and interaction effects to emerge without explicitly modeling such effects. Third, the same data are used for screening and building the predictor, so expensive runs are efficiently used. We illustrate the methodology with two examples, both having 20 input variables.
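
A minimal sketch of the screening idea described above, using a Gaussian-process surrogate with one length-scale per input; this is an illustration in Python with scikit-learn and SciPy (not the authors' code), and the toy function stands in for the expensive computer model:

import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

d = 20                                                   # 20 input variables, as in the paper's examples
X = qmc.LatinHypercube(d=d, seed=0).random(n=100)        # space-filling design on [0, 1]^d
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * X[:, 2]   # toy stand-in for the expensive code

# Anisotropic (ARD) kernel: one length-scale per input dimension.
kernel = RBF(length_scale=np.ones(d), length_scale_bounds=(1e-2, 1e3))
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Inputs with short fitted length-scales drive the response; long length-scales
# suggest inert factors, which is the screening step.
activity = 1.0 / gp.kernel_.length_scale
print(np.argsort(activity)[::-1][:5])                    # indices of the most active inputs

The same fitted surrogate can then serve as the cheap predictor, mirroring the paper's point that screening and prediction reuse the same expensive runs.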

663 citations

Journal ArticleDOI
TL;DR: A combination of system decomposition using a sparse-matrix method, experimental design, and modelling is applied to an example electrical circuit simulator, producing a usable emulator of the circuit for use in optimization and sensitivity analysis.
Abstract: Large systems require new methods of experimental design suitable for the highly adaptive models which are employed to cope with complex non-linear responses and the high dimensionality of input spaces. The area of computer experiments has started to provide such designs, especially Latin hypercube and lattice designs. System decomposition, prevalent in several branches of engineering, can be employed to decrease complexity. A combination of system decomposition using a sparse-matrix method, experimental design and modelling is applied to an example electrical circuit simulator, producing a usable emulator of the circuit for use in optimization and sensitivity analysis.
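
As a small illustration of the design side mentioned above, the following Python/SciPy sketch generates a Latin hypercube design over the input ranges of one hypothetical decomposed sub-circuit (the ranges and run count are made up):

import numpy as np
from scipy.stats import qmc

# Hypothetical ranges for four inputs of one decomposed sub-circuit.
lower = np.array([1.0, 0.5, 10.0, 0.0])
upper = np.array([5.0, 2.0, 50.0, 1.0])

sampler = qmc.LatinHypercube(d=4, seed=1)
design = qmc.scale(sampler.random(n=40), lower, upper)   # 40 simulator runs, space-filling in each margin
print(design.shape)                                      # (40, 4)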

192 citations

Journal ArticleDOI
TL;DR: A modelling approach which is appropriate for the simulator’s deterministic input–output relationships is described, and non‐linearities and interactions are identified without explicit assumptions about the functional form.
Abstract: In electrical engineering, circuit designs are now often optimized via circuit simulation computer models. Typically, many response variables characterize the circuit’s performance. Each response is a function of many input variables, including factors that can be set in the engineering design and noise factors representing manufacturing conditions. We describe a modelling approach which is appropriate for the simulator’s deterministic input–output relationships. Non-linearities and interactions are identified without explicit assumptions about the functional form. These models lead to predictors to guide the reduction of the ranges of the designable factors in a sequence of experiments. Ultimately, the predictors are used to optimize the engineering design. We also show how a visualization of the fitted relationships facilitates an understanding of the engineering trade-offs between responses. The example used to demonstrate these methods, the design of a buffer circuit, has multiple targets for the responses, representing different trade-offs between the key performance measures.
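
A minimal sketch of how a fitted emulator can guide design optimization as described above, with a toy response in place of the buffer-circuit simulator (Python with scikit-learn and SciPy; the data and factors here are assumed, not taken from the paper):

import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(2)
X = rng.uniform(size=(60, 3))                                        # designable factors of a toy circuit
y = (X[:, 0] - 0.3) ** 2 + np.abs(X[:, 1] - 0.7) + 0.1 * X[:, 2]     # toy performance measure
emulator = GaussianProcessRegressor(normalize_y=True).fit(X, y)

# Optimize the cheap emulator instead of the expensive simulator; the result
# suggests where to zoom in or run the next confirmation experiment.
res = minimize(lambda x: emulator.predict(x.reshape(1, -1))[0],
               x0=np.full(3, 0.5), bounds=[(0, 1)] * 3)
print(res.x)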

53 citations

Journal ArticleDOI
TL;DR: This paper presents a probabilistic, multimedia, multipathway exposure model and assessment for chlorpyrifos developed as part of the National Human Exposure Assessment Survey (NHEXAS); the greatest source of uncertainty in the model stems from defining no household pesticide use as no use in the past year.
Abstract: This paper presents a probabilistic, multimedia, multipathway exposure model and assessment for chlorpyrifos developed as part of the National Human Exposure Assessment Survey (NHEXAS). The model was constructed using available information prior to completion of the NHEXAS study. It simulates the distribution of daily aggregate and pathway-specific chlorpyrifos absorbed dose in the general population of the State of Arizona (AZ) and in children aged 3–12 years residing in Minneapolis–St. Paul, Minnesota (MSP). Pathways included were inhalation of indoor and outdoor air, dietary ingestion, non-dietary ingestion of dust and soil, and dermal contact with dust and soil. Probability distributions for model input parameters were derived from the available literature, and input values were chosen to represent chlorpyrifos concentrations and demographics in AZ and MSP to the extent possible. When the NHEXAS AZ and MSP data become available, they can be compared to the distributions derived in this and other prototype modeling assessments to test the adequacy of this pre-NHEXAS model assessment. Although pathway-specific absorbed dose estimates differed between AZ and MSP due to differences in model inputs between simulated adults and children, the aggregate model results and general findings for simulated AZ and MSP populations were similar. The major route of chlorpyrifos intake was food ingestion, followed by indoor air inhalation. Two-stage Monte Carlo simulation was used to derive estimates of both inter-individual variability and uncertainty in the estimated distributions. The variability in the model results reflects the difference in activity patterns, exposure factors, and concentrations contacted by individuals during their daily activities. Based on the coefficient of variation, indoor air inhalation and dust ingestion were most variable relative to the mean, primarily because of variability in concentrations due to use or no-use of pesticides. Uncertainty analyses indicated a factor of 10–30 for uncertainty of model predictions of 10th, 50th, and 90th percentiles. The greatest source of uncertainty in the model stems from the definition of no household pesticide use as no use in the past year. Because chlorpyrifos persists in the residential environment for longer than a year, the modeled estimates are likely to be low. More information on pesticide usage and environmental concentrations measured at different post-application times is needed to refine and evaluate this and other pesticide exposure models.
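
The two-stage Monte Carlo simulation mentioned above separates uncertainty about the inputs from inter-individual variability. A minimal Python sketch of the nesting, with made-up distributions standing in for the NHEXAS model inputs:

import numpy as np

rng = np.random.default_rng(3)
n_uncertainty, n_variability = 200, 1000
median_dose = np.empty(n_uncertainty)

for i in range(n_uncertainty):
    # Outer stage: draw uncertain parameters (assumed forms, not the paper's).
    mean_conc = rng.lognormal(mean=0.0, sigma=0.5)
    intake_rate = rng.normal(loc=1.0, scale=0.1)
    # Inner stage: variability across simulated individuals.
    conc = rng.lognormal(mean=np.log(mean_conc), sigma=1.0, size=n_variability)
    median_dose[i] = np.percentile(conc * intake_rate, 50)

# The spread of a percentile across outer draws expresses uncertainty about that percentile.
print(np.percentile(median_dose, [5, 95]))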

35 citations


Cited by
Journal ArticleDOI
TL;DR: This paper introduces the reader to a response surface methodology that is especially good at modeling the nonlinear, multimodal functions that often occur in engineering and shows how these approximating functions can be used to construct an efficient global optimization algorithm with a credible stopping rule.
Abstract: In many engineering optimization problems, the number of function evaluations is severely limited by time or cost. These problems pose a special challenge to the field of global optimization, since existing methods often require more function evaluations than can be comfortably afforded. One way to address this challenge is to fit response surfaces to data collected by evaluating the objective and constraint functions at a few points. These surfaces can then be used for visualization, tradeoff analysis, and optimization. In this paper, we introduce the reader to a response surface methodology that is especially good at modeling the nonlinear, multimodal functions that often occur in engineering. We then show how these approximating functions can be used to construct an efficient global optimization algorithm with a credible stopping rule. The key to using response surfaces for global optimization lies in balancing the need to exploit the approximating surface (by sampling where it is minimized) with the need to improve the approximation (by sampling where prediction error may be high). Striking this balance requires solving certain auxiliary problems which have previously been considered intractable, but we show how these computational obstacles can be overcome.
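
A minimal sketch of the expected-improvement criterion that drives this kind of response-surface global optimization, assuming a Gaussian-process surrogate (Python with scikit-learn and SciPy; the objective is a toy function, not from the paper):

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, size=(15, 1))
y = np.sin(6 * X[:, 0])                                  # toy expensive objective
gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)

def expected_improvement(x_new):
    mu, sd = gp.predict(x_new, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sd, 1e-12)
    # Large where the surface is predicted low (exploitation) or uncertain (exploration).
    return (best - mu) * norm.cdf(z) + sd * norm.pdf(z)

grid = np.linspace(0, 1, 1001).reshape(-1, 1)
print(grid[np.argmax(expected_improvement(grid))])       # next point to evaluate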

6,914 citations

Journal ArticleDOI
TL;DR: This paper models deterministic computer-code output as the realization of a stochastic process, providing a statistical basis for designing computer experiments (choosing the inputs) for efficient prediction, together with estimates of prediction uncertainty.
Abstract: Many scientific phenomena are now investigated by complex computer models or codes. A computer experiment is a number of runs of the code with various inputs. A feature of many computer experiments is that the output is deterministic: rerunning the code with the same inputs gives identical observations. Often, the codes are computationally expensive to run, and a common objective of an experiment is to fit a cheaper predictor of the output to the data. Our approach is to model the deterministic output as the realization of a stochastic process, thereby providing a statistical basis for designing experiments (choosing the inputs) for efficient prediction. With this model, estimates of uncertainty of predictions are also available. Recent work in this area is reviewed, a number of applications are discussed, and we demonstrate our methodology with an example.
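
A minimal sketch of the point made above about deterministic output: a (near) noise-free Gaussian-process model interpolates the code exactly at the design points and supplies a standard error at untried inputs (Python with scikit-learn; the one-dimensional function is a toy stand-in for the code):

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

X = np.linspace(0, 1, 8).reshape(-1, 1)
y = np.exp(-3 * X[:, 0]) * np.cos(5 * X[:, 0])           # deterministic "code"
gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-10).fit(X, y)

mu, sd = gp.predict(np.array([[0.0], [0.55]]), return_std=True)
print(sd)                                                # ~0 at a design point, positive between design points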

6,583 citations

Journal ArticleDOI
TL;DR: This paper presents a Bayesian calibration technique which improves on the traditional approach in two respects: the predictions allow for all sources of uncertainty, and they attempt to correct for any inadequacy of the model revealed by a discrepancy between the observed data and the model predictions from even the best-fitting parameter values.
Abstract: We consider prediction and uncertainty analysis for systems which are approximated using complex mathematical models. Such models, implemented as computer codes, are often generic in the sense that by a suitable choice of some of the model's input parameters the code can be used to predict the behaviour of the system in a variety of specific applications. However, in any specific application the values of necessary parameters may be unknown. In this case, physical observations of the system in the specific context are used to learn about the unknown parameters. The process of fitting the model to the observed data by adjusting the parameters is known as calibration. Calibration is typically effected by ad hoc fitting, and after calibration the model is used, with the fitted input values, to predict the future behaviour of the system. We present a Bayesian calibration technique which improves on this traditional approach in two respects. First, the predictions allow for all sources of uncertainty, including the remaining uncertainty over the fitted parameters. Second, they attempt to correct for any inadequacy of the model which is revealed by a discrepancy between the observed data and the model predictions from even the best-fitting parameter values. The method is illustrated by using data from a nuclear radiation release at Tomsk, and from a more complex simulated nuclear accident exercise.
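
The model structure described above can be stated compactly (notation assumed here, not quoted from the paper): each field observation z_i at input x_i is modelled as

    z_i = \eta(x_i, \theta) + \delta(x_i) + \varepsilon_i,

where \eta is the computer code run at the unknown calibration parameters \theta, \delta is a discrepancy term that absorbs model inadequacy, and \varepsilon_i is observation error; \theta and \delta are given prior distributions and updated jointly from the data, so predictions carry both parameter and discrepancy uncertainty.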

3,745 citations

Journal ArticleDOI
TL;DR: The included papers present an interesting mixture of recent developments in the field as they cover fundamental research on the design of experiments, models and analysis methods as well as more applied research connected to real-life applications.
Abstract: The design and analysis of computer experiments as a relatively young research field is not only of high importance for many industrial areas but also presents new challenges and open questions for statisticians. This editorial introduces a special issue devoted to the topic. The included papers present an interesting mixture of recent developments in the field as they cover fundamental research on the design of experiments, models and analysis methods as well as more applied research connected to real-life applications.

2,583 citations

Journal ArticleDOI
TL;DR: This paper reviews the literature on Bayesian experimental design, for both linear and nonlinear models, and presents a unified view of the topic by putting experimental design in a decision-theoretic framework.
Abstract: This paper reviews the literature on Bayesian experimental design. A unified view of this topic is presented, based on a decision-theoretic approach. This framework casts criteria from the Bayesian literature of design as part of a single coherent approach. The decision-theoretic structure incorporates both linear and nonlinear design problems and it suggests possible new directions to the experimental design problem, motivated by the use of new utility functions. We show that, in some special cases of linear design problems, Bayesian solutions change in a sensible way when the prior distribution and the utility function are modified to allow for the specific structure of the experiment. The decision-theoretic approach also gives a mathematical justification for selecting the appropriate optimality criterion.
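
The decision-theoretic framing reviewed above can be summarized in one criterion (notation assumed): a design d is scored by its expected utility

    U(d) = \int \int u(d, \theta, y) \, p(y \mid \theta, d) \, p(\theta) \, dy \, d\theta,

and the Bayesian optimal design is d^* = \arg\max_d U(d). Particular choices of the utility u recover familiar optimality criteria, which is the sense in which the framework gives a mathematical justification for selecting a criterion.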

1,903 citations