Topic

Surrogate model

About: Surrogate model is a research topic. Over its lifetime, 5,019 publications have been published within this topic, receiving 77,441 citations.
Papers

Journal ArticleDOI
TL;DR: This paper presents a meta-modelling framework for computer experiments, predicting output from training data and selecting inputs through criteria-based designs.
Abstract: Many scientific phenomena are now investigated by complex computer models or codes. A computer experiment is a number of runs of the code with various inputs. A feature of many computer experiments is that the output is deterministic: rerunning the code with the same inputs gives identical observations. Often, the codes are computationally expensive to run, and a common objective of an experiment is to fit a cheaper predictor of the output to the data. Our approach is to model the deterministic output as the realization of a stochastic process, thereby providing a statistical basis for designing experiments (choosing the inputs) for efficient prediction. With this model, estimates of uncertainty of predictions are also available. Recent work in this area is reviewed, a number of applications are discussed, and we demonstrate our methodology with an example.
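The recipe in this abstract (model deterministic code output as a stochastic-process realization, fit a cheap predictor, and report prediction uncertainty) can be sketched with an off-the-shelf Gaussian process. The toy simulator, design size, and kernel below are illustrative assumptions, not the paper's setup:

```python
# Minimal sketch: a GP surrogate for a deterministic, expensive simulation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_code(x):
    # stand-in for a deterministic, expensive computer code
    return np.sin(3.0 * x) + 0.5 * x**2

# "Design" the experiment: choose a small set of inputs and run the code once each.
X_train = np.linspace(0.0, 2.0, 8).reshape(-1, 1)
y_train = expensive_code(X_train).ravel()

# Model the deterministic output as a GP realization; the near-zero noise term
# (alpha) reflects that rerunning the code gives identical observations.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), alpha=1e-10,
                              n_restarts_optimizer=5)
gp.fit(X_train, y_train)

# Cheap predictor with pointwise uncertainty at unsampled inputs.
X_new = np.linspace(0.0, 2.0, 100).reshape(-1, 1)
y_mean, y_std = gp.predict(X_new, return_std=True)
print(y_mean[:3], y_std[:3])
```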

6,351 citations


Book ChapterDOI
08 Sep 2018
Abstract: We propose a new method for learning the structure of convolutional neural networks (CNNs) that is more efficient than recent state-of-the-art methods based on reinforcement learning and evolutionary algorithms. Our approach uses a sequential model-based optimization (SMBO) strategy, in which we search for structures in order of increasing complexity, while simultaneously learning a surrogate model to guide the search through structure space. Direct comparison under the same search space shows that our method is up to 5 times more efficient than the RL method of Zoph et al. (2018) in terms of number of models evaluated, and 8 times faster in terms of total compute. The structures we discover in this way achieve state-of-the-art classification accuracies on CIFAR-10 and ImageNet.
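The search strategy described above (sequential model-based optimization, with a learned surrogate ranking candidates of increasing complexity) can be outlined generically. This is a sketch of the idea rather than the authors' implementation; `enumerate_candidates`, `encode`, `train_and_evaluate`, and the random-forest surrogate are placeholder assumptions:

```python
# Generic SMBO loop: rank candidates with a cheap surrogate, evaluate only the best.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def smbo_search(enumerate_candidates, encode, train_and_evaluate,
                max_complexity=5, evals_per_level=10):
    surrogate = RandomForestRegressor(n_estimators=100)
    history_X, history_y, best = [], [], None

    for level in range(1, max_complexity + 1):            # increasing complexity
        candidates = enumerate_candidates(level)
        if history_X:
            # Rank all candidates cheaply with the surrogate; evaluate only the top few.
            scores = surrogate.predict(np.array([encode(c) for c in candidates]))
            order = np.argsort(-scores)
            candidates = [candidates[i] for i in order[:evals_per_level]]
        else:
            candidates = candidates[:evals_per_level]

        for cand in candidates:                           # expensive evaluations
            acc = train_and_evaluate(cand)
            history_X.append(encode(cand))
            history_y.append(acc)
            if best is None or acc > best[1]:
                best = (cand, acc)

        # Refit the surrogate on everything evaluated so far.
        surrogate.fit(np.array(history_X), np.array(history_y))
    return best
```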

1,359 citations


Book
02 Dec 2013
TL;DR: Uncertainty Quantification: Theory, Implementation, and Applications provides readers with the basic concepts, theory, and algorithms necessary to quantify input and response uncertainties for simulation models arising in a broad range of disciplines.
Abstract: The field of uncertainty quantification is evolving rapidly because of increasing emphasis on models that require quantified uncertainties for large-scale applications, novel algorithm development, and new computational architectures that facilitate implementation of these algorithms. Uncertainty Quantification: Theory, Implementation, and Applications provides readers with the basic concepts, theory, and algorithms necessary to quantify input and response uncertainties for simulation models arising in a broad range of disciplines. The book begins with a detailed discussion of applications where uncertainty quantification is critical for both scientific understanding and policy. It then covers concepts from probability and statistics, parameter selection techniques, frequentist and Bayesian model calibration, propagation of uncertainties, quantification of model discrepancy, surrogate model construction, and local and global sensitivity analysis. The author maintains a complementary web page where readers can find data used in the exercises and other supplementary material.

The book includes a large number of definitions and examples that use a suite of relatively simple models to illustrate concepts; numerous references to current and open research issues; and exercises that illustrate basic concepts and guide readers through the numerical implementation of algorithms for prototypical problems. It also features a wide range of applications, including weather and climate models, subsurface hydrology and geology models, nuclear power plant design, and models for biological phenomena, along with recent advances and topics that have appeared in the research literature within the last 15 years, including aspects of Bayesian model calibration, surrogate model development, parameter selection techniques, and global sensitivity analysis.

Audience: The text is intended for advanced undergraduates, graduate students, and researchers in mathematics, statistics, operations research, computer science, biology, science, and engineering. It can be used as a textbook for one- or two-semester courses on uncertainty quantification or as a resource for researchers in a wide array of disciplines. A basic knowledge of probability, linear algebra, ordinary and partial differential equations, and introductory numerical analysis techniques is assumed.

Contents: Chapter 1: Introduction; Chapter 2: Large-Scale Applications; Chapter 3: Prototypical Models; Chapter 4: Fundamentals of Probability, Random Processes, and Statistics; Chapter 5: Representation of Random Inputs; Chapter 6: Parameter Selection Techniques; Chapter 7: Frequentist Techniques for Parameter Estimation; Chapter 8: Bayesian Techniques for Parameter Estimation; Chapter 9: Uncertainty Propagation in Models; Chapter 10: Stochastic Spectral Methods; Chapter 11: Sparse Grid Quadrature and Interpolation Techniques; Chapter 12: Prediction in the Presence of Model Discrepancy; Chapter 13: Surrogate Models; Chapter 14: Local Sensitivity Analysis; Chapter 15: Global Sensitivity Analysis; Appendix A: Concepts from Functional Analysis; Bibliography; Index
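One recurring theme of the book, surrogate-based uncertainty propagation, can be illustrated in a few lines. The simulator, input distributions, and sample sizes below are made up for the sketch and are not drawn from the book:

```python
# Sketch: fit a surrogate to a few expensive runs, then push input uncertainty
# through the cheap surrogate by Monte Carlo.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulator(x):
    # placeholder for an expensive model with two uncertain inputs
    return np.exp(-x[:, 0]) * np.cos(2.0 * x[:, 1])

rng = np.random.default_rng(0)
X_design = rng.uniform(0.0, 1.0, size=(30, 2))        # small training design
y_design = simulator(X_design)

surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=[1.0, 1.0]))
surrogate.fit(X_design, y_design)

# Propagate input uncertainty: sample the uncertain inputs and evaluate the
# surrogate instead of the expensive simulator.
X_mc = rng.normal(loc=0.5, scale=0.1, size=(100_000, 2))
y_mc = surrogate.predict(X_mc)
print("mean response:", y_mc.mean(), "std:", y_mc.std())
```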

672 citations


Journal ArticleDOI
01 Oct 2008 - AIAA Journal
TL;DR: This paper develops an efficient reliability analysis method that accurately characterizes the limit state throughout the random variable space and is both accurate for any arbitrarily shaped limit state and computationally efficient even for expensive response functions.
Abstract: Many engineering applications are characterized by implicit response functions that are expensive to evaluate and sometimes nonlinear in their behavior, making reliability analysis difficult. This paper develops an efficient reliability analysis method that accurately characterizes the limit state throughout the random variable space. The method begins with a Gaussian process model built from a very small number of samples, and then adaptively chooses where to generate subsequent samples to ensure that the model is accurate in the vicinity of the limit state. The resulting Gaussian process model is then sampled using multimodal adaptive importance sampling to calculate the probability of exceeding (or failing to exceed) the response level of interest. By locating multiple points on or near the limit state, more complex and nonlinear limit states can be modeled, leading to more accurate probability integration. By concentrating the samples in the area where accuracy is important (i.e., in the vicinity of the limit state), only a small number of true function evaluations are required to build a quality surrogate model. The resulting method is both accurate for any arbitrarily shaped limit state and computationally efficient even for expensive response functions. This new method is applied to a collection of example problems, including one that analyzes the reliability of a microelectromechanical system device that currently available methods have difficulty solving either accurately or efficiently.
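The adaptive idea described in the abstract (refine a Gaussian process surrogate near the limit state, then integrate the failure probability on the cheap model) can be sketched as follows. This is a simplified illustration, not the paper's exact algorithm: it uses a heuristic refinement criterion and plain Monte Carlo in place of multimodal adaptive importance sampling, and the limit-state function is invented:

```python
# Sketch: adaptive GP refinement near the limit state, then failure-probability
# estimation on the surrogate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def g(x):                               # limit state: failure when g(x) < 0
    return 3.0 - x[:, 0]**2 - x[:, 1]

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 2))            # very small initial design
y = g(X)

gp = GaussianProcessRegressor(kernel=RBF())
for _ in range(20):                     # adaptive refinement loop
    gp.fit(X, y)
    cand = rng.normal(size=(2000, 2))
    mu, sd = gp.predict(cand, return_std=True)
    # add the candidate whose predicted sign of g is least certain,
    # i.e. the point closest to the limit state relative to its uncertainty
    x_new = cand[np.argmin(np.abs(mu) / (sd + 1e-12))].reshape(1, -1)
    X = np.vstack([X, x_new])
    y = np.append(y, g(x_new))

# Probability of failure from sampling the cheap surrogate (plain Monte Carlo
# here; the paper uses multimodal adaptive importance sampling instead).
samples = rng.normal(size=(200_000, 2))
pf = (gp.predict(samples) < 0).mean()
print("estimated failure probability:", pf)
```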

581 citations


Posted Content
Abstract: Bayesian optimization is an effective methodology for the global optimization of functions with expensive evaluations. It relies on querying a distribution over functions defined by a relatively cheap surrogate model. An accurate model for this distribution over functions is critical to the effectiveness of the approach, and is typically fit using Gaussian processes (GPs). However, since GPs scale cubically with the number of observations, it has been challenging to handle objectives whose optimization requires many evaluations, and as such, massively parallelizing the optimization. In this work, we explore the use of neural networks as an alternative to GPs to model distributions over functions. We show that performing adaptive basis function regression with a neural network as the parametric form performs competitively with state-of-the-art GP-based approaches, but scales linearly with the number of data rather than cubically. This allows us to achieve a previously intractable degree of parallelism, which we apply to large scale hyperparameter optimization, rapidly finding competitive models on benchmark object recognition tasks using convolutional networks, and image caption generation using neural language models.
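The key modelling step in this abstract, adaptive basis function regression with a neural network as the parametric form, amounts to Bayesian linear regression on the network's last hidden layer. The sketch below illustrates that step only; the "network" here is a fixed random feature map standing in for a trained model, and the names and hyperparameters are assumptions rather than the paper's implementation:

```python
# Sketch: Bayesian linear regression on learned basis functions phi(x).
import numpy as np

rng = np.random.default_rng(0)
D, H = 3, 50                                   # input dim, number of basis functions
W, b = rng.normal(size=(D, H)), rng.normal(size=H)

def phi(X):
    # stand-in for the hidden representation of a trained network
    return np.tanh(X @ W + b)

def fit_bayes_linear(X, y, alpha=1.0, beta=25.0):
    # Posterior over last-layer weights: precision matrix A and mean m
    # (standard Bayesian linear regression with prior precision alpha,
    # observation precision beta).
    Phi = phi(X)
    A = alpha * np.eye(H) + beta * Phi.T @ Phi
    m = beta * np.linalg.solve(A, Phi.T @ y)
    return m, A, beta

def predict(X, m, A, beta):
    Phi = phi(X)
    mean = Phi @ m
    var = 1.0 / beta + np.einsum('ij,ij->i', Phi @ np.linalg.inv(A), Phi)
    return mean, np.sqrt(var)

# Usage inside a Bayesian-optimization loop: fit on observed (X, y), then feed
# the predictive mean/std into an acquisition function such as expected improvement.
X_obs = rng.uniform(-1, 1, size=(40, D))
y_obs = np.sin(X_obs).sum(axis=1)
m, A, beta = fit_bayes_linear(X_obs, y_obs)
mu, sd = predict(rng.uniform(-1, 1, size=(5, D)), m, A, beta)
print(mu, sd)
```

Because the basis is fixed once the network is trained, fitting reduces to a single H-by-H solve and prediction is linear in the number of observations, which is what enables the degree of parallelism the abstract describes.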

524 citations


Network Information
Related Topics (5)
Polynomial chaos: 3.7K papers, 86.2K citations (87% related)
Multi-objective optimization: 28.3K papers, 743.8K citations (86% related)
Uncertainty quantification: 8.5K papers, 132.5K citations (86% related)
Discretization: 53K papers, 1M citations (85% related)
Latin hypercube sampling: 2.6K papers, 74.7K citations (85% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2022    37
2021    826
2020    725
2019    547
2018    458
2017    373

Top Attributes

Topic's top 5 most impactful authors

Slawomir Koziel: 169 papers, 2K citations
Kwang-Yong Kim: 44 papers, 699 citations
Tom Dhaene: 33 papers, 1.2K citations
Sankaran Mahadevan: 32 papers, 1.3K citations
Leifur Leifsson: 31 papers, 409 citations