
Surrogate model

About: Surrogate model is a research topic. Over its lifetime, 5,019 publications have been published within this topic, receiving 77,441 citations.


Papers
Journal Article
TL;DR: This paper presents a framework for the design and analysis of computer experiments, covering prediction of output from training data and criteria-based designs of the experiments themselves.
Abstract: Many scientific phenomena are now investigated by complex computer models or codes. A computer experiment is a number of runs of the code with various inputs. A feature of many computer experiments is that the output is deterministic: rerunning the code with the same inputs gives identical observations. Often, the codes are computationally expensive to run, and a common objective of an experiment is to fit a cheaper predictor of the output to the data. Our approach is to model the deterministic output as the realization of a stochastic process, thereby providing a statistical basis for designing experiments (choosing the inputs) for efficient prediction. With this model, estimates of uncertainty of predictions are also available. Recent work in this area is reviewed, a number of applications are discussed, and we demonstrate our methodology with an example.

6,583 citations
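
The core idea above, fitting a cheap statistical emulator with uncertainty estimates to deterministic code output, can be sketched with scikit-learn's Gaussian process regressor. The one-dimensional `expensive_code` function and the eight-run design below are toy assumptions standing in for a real simulator.

```python
# Minimal sketch: treat deterministic simulator output as the realization of a
# Gaussian process, fit a cheap predictor, and obtain predictive uncertainty.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_code(x):
    # Placeholder for a deterministic, expensive computer code.
    return np.sin(3 * x) + 0.5 * x

# Design: a small number of code runs (the "computer experiment").
X_train = np.linspace(0.0, 2.0, 8).reshape(-1, 1)
y_train = expensive_code(X_train).ravel()

# Deterministic output -> essentially noise-free interpolation
# (tiny alpha only for numerical stability).
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(),
                              alpha=1e-10, normalize_y=True)
gp.fit(X_train, y_train)

# Cheap predictor of the code's output, with pointwise uncertainty.
X_new = np.linspace(0.0, 2.0, 100).reshape(-1, 1)
mean, std = gp.predict(X_new, return_std=True)
print(f"max predictive std: {std.max():.4f}")
```

Because the output is deterministic, the fitted GP interpolates the training runs and its predictive standard deviation collapses to roughly zero at the observed inputs, which is exactly the uncertainty estimate the paper exploits for design.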

Book Chapter
08 Sep 2018
TL;DR: In this article, a sequential model-based optimization (SMBO) strategy is proposed to search for structures in order of increasing complexity, while simultaneously learning a surrogate model to guide the search through structure space.
Abstract: We propose a new method for learning the structure of convolutional neural networks (CNNs) that is more efficient than recent state-of-the-art methods based on reinforcement learning and evolutionary algorithms. Our approach uses a sequential model-based optimization (SMBO) strategy, in which we search for structures in order of increasing complexity, while simultaneously learning a surrogate model to guide the search through structure space. Direct comparison under the same search space shows that our method is up to 5 times more efficient than the RL method of Zoph et al. (2018) in terms of number of models evaluated, and 8 times faster in terms of total compute. The structures we discover in this way achieve state-of-the-art classification accuracies on CIFAR-10 and ImageNet.

1,592 citations
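
The SMBO loop described above can be sketched as follows, under toy assumptions: candidates are random feature vectors, `true_score` stands in for the expensive evaluation of a candidate structure, and a random forest plays the surrogate. The paper's progressive widening from simple to complex structures is omitted; this shows only the surrogate-guided select-evaluate-retrain cycle.

```python
# Minimal SMBO sketch: a surrogate predicts the score of unevaluated
# candidates, the most promising one is evaluated for real, and the
# surrogate is retrained on the enlarged data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def true_score(x):
    # Stand-in for an expensive evaluation (e.g., training a candidate model).
    return -np.sum((x - 0.3) ** 2)

# Candidate pool: each row is a feature encoding of one candidate structure.
candidates = rng.random((200, 5))
evaluated_idx = [0, 1, 2]                      # seed with a few evaluations
scores = [true_score(candidates[i]) for i in evaluated_idx]

for _ in range(10):                            # SMBO iterations
    surrogate = RandomForestRegressor(n_estimators=50, random_state=0)
    surrogate.fit(candidates[evaluated_idx], scores)
    preds = surrogate.predict(candidates)
    preds[evaluated_idx] = -np.inf             # do not re-evaluate
    best = int(np.argmax(preds))               # surrogate-guided choice
    evaluated_idx.append(best)
    scores.append(true_score(candidates[best]))  # one expensive evaluation

print("best candidate score found:", max(scores))
```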

Journal Article
TL;DR: This paper develops an efficient reliability analysis method that accurately characterizes the limit state throughout the random variable space and is both accurate for any arbitrarily shaped limit state and computationally efficient even for expensive response functions.
Abstract: Many engineering applications are characterized by implicit response functions that are expensive to evaluate and sometimes nonlinear in their behavior, making reliability analysis difficult. This paper develops an efficient reliability analysis method that accurately characterizes the limit state throughout the random variable space. The method begins with a Gaussian process model built from a very small number of samples, and then adaptively chooses where to generate subsequent samples to ensure that the model is accurate in the vicinity of the limit state. The resulting Gaussian process model is then sampled using multimodal adaptive importance sampling to calculate the probability of exceeding (or failing to exceed) the response level of interest. By locating multiple points on or near the limit state, more complex and nonlinear limit states can be modeled, leading to more accurate probability integration. By concentrating the samples in the area where accuracy is important (i.e., in the vicinity of the limit state), only a small number of true function evaluations are required to build a quality surrogate model. The resulting method is both accurate for any arbitrarily shaped limit state and computationally efficient even for expensive response functions. This new method is applied to a collection of example problems, including one that analyzes the reliability of a microelectromechanical system device that currently available methods have difficulty solving either accurately or efficiently.

804 citations
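
The adaptive strategy above can be sketched as follows, with toy assumptions throughout: `g` is an illustrative limit state, the selection rule (pick the pool point whose predicted sign of g is most ambiguous) is a simple stand-in for the paper's refinement criterion, and plain Monte Carlo on the surrogate replaces the paper's multimodal adaptive importance sampling.

```python
# Simplified sketch: fit a Gaussian process to the limit state g(x), add true
# evaluations where the GP is most uncertain about the sign of g (i.e., near
# the limit state), then estimate the failure probability from the cheap GP.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

def g(x):
    # Toy limit state: failure when g(x) < 0.
    return x[:, 0] ** 2 + x[:, 1] - 1.5

X = rng.standard_normal((6, 2))           # very small initial design
y = g(X)
pool = rng.standard_normal((5000, 2))     # candidates from the input distribution

for _ in range(20):                       # adaptive refinement
    gp = GaussianProcessRegressor(kernel=RBF(), alpha=1e-8,
                                  normalize_y=True).fit(X, y)
    mean, std = gp.predict(pool, return_std=True)
    # Most ambiguous sign of g: small |mean| relative to std.
    pick = np.argmin(np.abs(mean) / np.maximum(std, 1e-12))
    X = np.vstack([X, pool[pick]])
    y = np.append(y, g(pool[pick:pick + 1]))  # one true function evaluation

gp.fit(X, y)                              # final surrogate on all samples
p_fail = np.mean(gp.predict(pool) < 0)    # plain MC instead of the paper's MAIS
print(f"estimated P(failure) ~ {p_fail:.4f}")
```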

Book
02 Dec 2013
TL;DR: Uncertainty Quantification: Theory, Implementation, and Applications provides readers with the basic concepts, theory, and algorithms necessary to quantify input and response uncertainties for simulation models arising in a broad range of disciplines.
Abstract: The field of uncertainty quantification is evolving rapidly because of increasing emphasis on models that require quantified uncertainties for large-scale applications, novel algorithm development, and new computational architectures that facilitate implementation of these algorithms. Uncertainty Quantification: Theory, Implementation, and Applications provides readers with the basic concepts, theory, and algorithms necessary to quantify input and response uncertainties for simulation models arising in a broad range of disciplines. The book begins with a detailed discussion of applications where uncertainty quantification is critical for both scientific understanding and policy. It then covers concepts from probability and statistics, parameter selection techniques, frequentist and Bayesian model calibration, propagation of uncertainties, quantification of model discrepancy, surrogate model construction, and local and global sensitivity analysis. The author maintains a complementary web page where readers can find data used in the exercises and other supplementary material.

The book includes a large number of definitions and examples that use a suite of relatively simple models to illustrate concepts; numerous references to current and open research issues; and exercises that illustrate basic concepts and guide readers through the numerical implementation of algorithms for prototypical problems. It also features a wide range of applications, including weather and climate models, subsurface hydrology and geology models, nuclear power plant design, and models for biological phenomena, along with recent advances and topics that have appeared in the research literature within the last 15 years, including aspects of Bayesian model calibration, surrogate model development, parameter selection techniques, and global sensitivity analysis.

Audience: The text is intended for advanced undergraduates, graduate students, and researchers in mathematics, statistics, operations research, computer science, biology, science, and engineering. It can be used as a textbook for one- or two-semester courses on uncertainty quantification or as a resource for researchers in a wide array of disciplines. A basic knowledge of probability, linear algebra, ordinary and partial differential equations, and introductory numerical analysis techniques is assumed.

Contents: Chapter 1: Introduction; Chapter 2: Large-Scale Applications; Chapter 3: Prototypical Models; Chapter 4: Fundamentals of Probability, Random Processes, and Statistics; Chapter 5: Representation of Random Inputs; Chapter 6: Parameter Selection Techniques; Chapter 7: Frequentist Techniques for Parameter Estimation; Chapter 8: Bayesian Techniques for Parameter Estimation; Chapter 9: Uncertainty Propagation in Models; Chapter 10: Stochastic Spectral Methods; Chapter 11: Sparse Grid Quadrature and Interpolation Techniques; Chapter 12: Prediction in the Presence of Model Discrepancy; Chapter 13: Surrogate Models; Chapter 14: Local Sensitivity Analysis; Chapter 15: Global Sensitivity Analysis; Appendix A: Concepts from Functional Analysis; Bibliography; Index

782 citations
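
As a taste of one topic the book covers, the sketch below propagates input uncertainty through a surrogate by Monte Carlo: a Gaussian process is fit to a handful of expensive runs, and output statistics are then estimated from cheap surrogate evaluations. The `model` function, the uniform input distribution, and the surrogate choice are toy assumptions, not material from the book.

```python
# Minimal illustration: forward propagation of input uncertainty through a
# surrogate by Monte Carlo sampling.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)

def model(x):
    # Stand-in for an expensive simulation with uncertain inputs.
    return np.exp(-x[:, 0]) * np.sin(x[:, 1])

# Build the surrogate from a handful of expensive runs.
X_train = rng.uniform(0, 1, size=(30, 2))
surrogate = GaussianProcessRegressor(kernel=RBF(), normalize_y=True)
surrogate.fit(X_train, model(X_train))

# Propagate input uncertainty: sample the (assumed) input distribution and
# push the samples through the cheap surrogate, not the expensive model.
X_samples = rng.uniform(0, 1, size=(100_000, 2))
y_samples = surrogate.predict(X_samples)
print(f"output mean {y_samples.mean():.3f}, std {y_samples.std():.3f}")
```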

Journal Article
TL;DR: This paper extends the utility of an ensemble of surrogate models to identify regions of possible high error, where the surrogates' predictions differ widely, and to provide a more robust approximation approach.
Abstract: The custom in surrogate-based modeling of complex engineering problems is to fit one or more surrogate models and select the one surrogate model that performs best. In this paper, we extend the utility of an ensemble of surrogates to (1) identify regions of possible high errors at locations where predictions of surrogates widely differ, and (2) provide a more robust approximation approach. We explore the possibility of using the best surrogate or a weighted average surrogate model instead of individual surrogate models. The weights associated with each surrogate model are determined based on the errors in the surrogates. We demonstrate the advantages of an ensemble of surrogates using analytical problems and one engineering problem. We show that for a single problem the choice of best surrogate can depend on the design of experiments.

599 citations
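
The weighted-average ensemble and the disagreement-based error flag described above can be sketched as follows. The inverse-cross-validation-error weights used here are one simple choice, not the paper's exact weighting scheme, and the three surrogate types and the toy response are assumptions for illustration.

```python
# Ensemble-of-surrogates sketch: weight each surrogate by the inverse of its
# cross-validation error, and use the spread of the surrogates' predictions
# to flag regions of possible high error.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, size=(40, 2))
y = X[:, 0] ** 2 + np.sin(2 * X[:, 1])          # toy response

surrogates = [GaussianProcessRegressor(normalize_y=True),
              RandomForestRegressor(n_estimators=100, random_state=0),
              SVR(C=10.0)]

# Cross-validation MSE per surrogate -> inverse-error weights.
errors = [-cross_val_score(s, X, y, cv=5,
                           scoring="neg_mean_squared_error").mean()
          for s in surrogates]
weights = np.array([1.0 / e for e in errors])
weights /= weights.sum()

for s in surrogates:
    s.fit(X, y)

X_new = rng.uniform(-2, 2, size=(5, 2))
preds = np.stack([s.predict(X_new) for s in surrogates])
ensemble = weights @ preds                  # weighted-average surrogate
disagreement = preds.std(axis=0)            # large spread -> possible high error
print("ensemble prediction:", np.round(ensemble, 3))
print("surrogate disagreement:", np.round(disagreement, 3))
```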


Network Information
Related Topics (5)

Topic                            Papers     Citations    Related
Optimization problem             96.4K      2.1M         82%
Finite element method            178.6K     3M           79%
Robustness (computer science)    94.7K      1.6M         79%
Artificial neural network        207K       4.5M         78%
Support vector machine           73.6K      1.7M         76%
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    528
2022    981
2021    840
2020    729
2019    547
2018    458