
Surrogate-Based Optimization Using Multifidelity Models with Variable Parameterization and Corrected Space Mapping

Robinson, T., Eldred, M. S., Willcox, K. E., & Haimes, R. (2008). Surrogate-Based Optimization Using Multifidelity Models with Variable Parameterization and Corrected Space Mapping. AIAA Journal, 46(11), 2814-2822. https://doi.org/10.2514/1.36043

Published in: AIAA Journal

Queen's University Belfast - Research Portal: Link to publication record in Queen's University Belfast Research Portal

Surrogate-Based Optimization Using Multifidelity Models with
Variable Parameterization and Corrected Space Mapping
T. D. Robinson∗
Queen's University Belfast, Belfast, Northern Ireland BT7 1NN, United Kingdom
M. S. Eldred†
Sandia National Laboratories, Albuquerque, New Mexico 87185
and
K. E. Willcox‡ and R. Haimes§
Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
DOI: 10.2514/1.36043
Surrogate-based-optimization methods provide a means to achieve high-fidelity design optimization at reduced computational cost by using a high-fidelity model in combination with lower-fidelity models that are less expensive to evaluate. This paper presents a provably convergent trust-region model-management methodology for variable-parameterization design models: that is, models for which the design parameters are defined over different spaces. Corrected space mapping is introduced as a method to map between the variable-parameterization design spaces. It is then used with a sequential-quadratic-programming-like trust-region method for two aerospace-related design optimization problems. Results for a wing design problem and a flapping-flight problem show that the method outperforms direct optimization in the high-fidelity space. On the wing design problem, the new method achieves 76% savings in high-fidelity function calls. On a bat-flight design problem, it achieves approximately 45% time savings, although it converges to a different local minimum than did the benchmark.
Introduction
AS COMPUTATIONAL capabilities continue to grow, designers of engineering systems have available an increasing range of numerical analysis models. These models range from low-fidelity simple-physics models to high-fidelity detailed computational simulation models. The drive toward including higher-fidelity analyses in the design process (for example, through the use of computational fluid dynamic analyses) leads to an increase in computational expense. As a result, design optimization, which requires large numbers of analyses of objectives and constraints, becomes prohibitively expensive for many systems of interest. This paper presents a methodology for improving the computational efficiency of a high-fidelity design. This method exploits variable fidelity and variable parameterization (that is, inexpensive models of lower physical resolution combined with coarser design descriptions) in a design optimization framework.
Surrogate-based optimization (SBO) methods have previously been proposed to achieve high-fidelity design optimization at reduced computational cost. In SBO, a surrogate, or less expensive and lower-fidelity model, is used for the majority of the optimization, with recourse to the high-fidelity analysis less frequently. The surrogate can be developed in a number of ways: for example, by using a simplified-physics model with a different set of governing equations. However, an improvement in a design predicted by a low-fidelity model does not guarantee an improvement in the high-fidelity problem.
Past work has focused on providing surrogates that are computationally efficient to evaluate. These models can be roughly divided into three categories: data-fit surrogates, such as response surfaces [1,2], kriging [3], radial basis functions [4], or extended radial basis functions [5,6]; reduced-order models, derived using techniques such as proper orthogonal decomposition [7] and modal analysis [8]; and hierarchical models, also called multifidelity, variable-fidelity, or variable-complexity models. In the latter case, a physics-based model of lower fidelity and reduced computational cost is used as the surrogate in place of the high-fidelity model. The multifidelity case can be further divided based on the means by which the fidelity is reduced in the lower-fidelity model. The low-fidelity model can be the same as the high-fidelity model, but converged to a higher residual tolerance [9]; it can be the same model on a coarser grid [10]; or it can use a simpler engineering model that neglects some physics modeled by the high-fidelity method [11]. Jones [12] compared a number of surrogates for use in global optimization.
Much work has been performed on developing SBO methods that are provably convergent to an optimum of the high-fidelity problem. Queipo et al. [13] reviewed a broad spectrum of SBO work. One promising group of methods is based on trust-region model management (TRMM), which imposes limits on the amount of optimization performed using the low-fidelity model, based on a quantitative assessment of that model's predictive capability. TRMM evolved from classical trust-region algorithms [14], which use quadratic surrogates, and has more recently been used for surrogates of any type [15]. These TRMM methods are provably convergent to an optimum of the high-fidelity model [16,17], provided the low-fidelity model is corrected to be at least first-order consistent with the high-fidelity model. Correcting to second-order or quasi-second-order consistency provides improved performance [18]. Yuan [19] presented a survey of unconstrained trust-region methods.
A number of researchers have developed SBO methods for
constrained problems. Booker et al. [20] developed a direct-search
SBO framework that converges to a minimum of an expensive
objective function subject only to bounds on the design variables and
that does not require derivative evaluations. Audet et al. [21]
Received 7 December 2007; revision received 4 April 2008; accepted for
publication 5 May 2008. Copyright © 2008 by T. D. Robinson, M. S. Eldred,
K. E. Willcox, and R. Haimes. Published by the American Institute of
Aeronautics and Astronautics, Inc., with permission. Copies of this paper may
be made for personal or internal use, on condition that the copier pay the
$10.00 per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood
Drive, Danvers, MA 01923; include the code 0001-1452/08 $10.00 in
correspondence with the CCC.
∗Lecturer in Aerospace Engineering; t.d.robinson@qub.ac.uk. Member AIAA.
†Principal Member of the Technical Staff, Optimization and Uncertainty Estimation Department. Associate Fellow AIAA.
‡Associate Professor of Aeronautics and Astronautics, Aerospace Computational Design Laboratory. Senior Member AIAA.
§Principal Research Engineer, Aerospace Computational Design Laboratory. Member AIAA.

extended that framework to handle general nonlinear constraints using a filter method for step acceptance [22]. Rodriguez et al. [23] developed a gradient-based TRMM augmented-Lagrangian strategy using response surfaces and showed that using separate response surfaces for the objective and constraints provided faster convergence than using a single response surface for the augmented Lagrangian. Alexandrov et al. [10] developed the MAESTRO class of methods, which use gradient-based optimization and trust-region model management, and compared them to a sequential quadratic programming (SQP)-like TRMM method. Under fairly mild conditions on the models, these methods are also convergent to a local minimum of the constrained high-fidelity problem [16,24]. Sadjadi and Ponnambalam [25] reviewed a broad spectrum of trust-region methods for constrained optimization, and Conn et al. [16] gave an extensive bibliography relating to both classical trust-region methods and more recent TRMM methods, for both the unconstrained and the constrained cases.
The SBO methods developed to date achieve computational gain by performing most of the analysis on the low-fidelity model; however, they require that the high- and low-fidelity models operate with the same set of design variables. For practical design applications, however, multifidelity models are often defined over different design spaces.

New methodology is therefore required for expanding surrogate-based design optimization to the case in which the low- and high-fidelity models use different design variables. Further, combining a low-fidelity model with a coarser parameterization of the design offers the opportunity for additional reduction in computational complexity and cost beyond current SBO methods. To achieve this, new design methodology is required that incorporates variable-parameterization models into SBO methods.
We consider a general design problem posed using the following nonlinear optimization formulation:

$$\min_{x} \; f(x) \quad \text{subject to} \quad c(x) \le 0$$  (1)

where $f: \mathbb{R}^n \to \mathbb{R}$ represents the scalar objective to be minimized, and $x \in \mathbb{R}^n$ is the vector of $n$ design variables that describe the design. The vector function $c: \mathbb{R}^n \to \mathbb{R}^m$ contains $m$ constraints, which provide a mathematical description of requirements that the design must satisfy. Both $f$ and $c$ are assumed to be continuous and differentiable over the design space of interest. For realistic design problems of engineering relevance, the complexity of the optimization problem (1) is twofold: first, the simulations required to evaluate $f(x)$ and $c(x)$ may be computationally expensive, and second, the dimensionality of $x$ may be large.
It is assumed in this discussion that a lower-fidelity model is available. This model is both less accurate and less computationally expensive. The lower-fidelity model for $f(x)$ is denoted as $\hat{f}(\hat{x})$, and that for $c(x)$ is $\hat{c}(\hat{x})$. The dimension of $\hat{x}$, denoted as $\hat{n}$, may be different from the dimension of $x$, denoted as $n$.
Some terminology is required as part of this discussion. A variable-fidelity design problem is a physical problem for which at least two mathematical or computational models exist: $f(x)$ with $c(x)$, and $\hat{f}(\hat{x})$ with $\hat{c}(\hat{x})$. The parameterization of a model is the set of design variables $x$ or $\hat{x}$ used as inputs to the model. A variable-parameterization problem is a variable-fidelity problem in which each of the models has a different parameterization, meaning that for the same physical design, $x \ne \hat{x}$. A mapping is a method for linking the design variables in a variable-parameterization problem: given a set of design variables in one parameterization, it provides a set of design variables in another parameterization. The dimension of a model is the number of design variables. A variable-dimensional problem is a variable-parameterization problem in which each of the models has a different dimension: that is, where $n \ne \hat{n}$.
This paper first presents the TRMM framework, including the SQP-like constrained optimization method. It then outlines design variable mapping and specifically introduces corrected space mapping. It then presents the results of two example problems: a wing planform design and the design of a batlike flapping wing. Finally, it draws some conclusions.
Trust-Region Model Management
Surrogates can be incorporated into optimization by using a formal model-management strategy. One such strategy is a TRMM framework [26]. TRMM imposes limits on the amount of optimization performed using the low-fidelity model, based on a quantitative assessment of that model's predictive capability. TRMM developed from the classical trust-region optimization method based on quadratic Taylor series models [27].

TRMM methods are provably convergent to an optimum of the high-fidelity model, as long as the two models satisfy a number of conditions, including that the low-fidelity model is corrected to be at least first-order consistent with the high-fidelity model. The complete list of conditions and a proof are available in [16]. The general approach in TRMM is to solve a sequence of optimization subproblems using only the low-fidelity model, with an additional constraint that requires the solution of the subproblem to lie within a specified trust region. The radius of the trust region is adaptively managed on each subproblem iteration using a merit function to quantitatively assess the predictive capability of the low-fidelity model.
The SQP-like method is modified from [10]. It is similar to sequential quadratic programming (SQP) in that on each subproblem it minimizes a surrogate of the Lagrangian subject to linear approximations to the high-fidelity constraints.

The Lagrangian is defined as

$$L(x, \lambda) = f(x) + \lambda^T c(x)$$  (2)

where $\lambda$ is the vector of Lagrange multipliers.
The SQP-like TRMM algorithm is as follows:

1) Choose an initial point $x_0$ and an initial trust-region radius $\Delta_0 > 0$. Choose an initial approximation $\lambda_0$ to the Lagrange multipliers. Set $k = 0$. Choose constants $\Delta^* > 0$, $0 < c_1 < 1$, $c_2 > 1$, $0 < r_1 < r_2 < 1$, and $\Delta_0 \le \Delta^*$.
2) Create a surrogate $\tilde{L}$ for the Lagrangian $L$. The surrogate must be at least first-order consistent with the Lagrangian at the center of the trust region: that is,

$$\tilde{L}_k(x_k, \lambda_k) = L(x_k, \lambda_k)$$  (3)

$$\nabla_x \tilde{L}_k(x_k, \lambda_k) = \nabla_x L(x_k, \lambda_k)$$  (4)

In this work, the surrogate for the Lagrangian is created by using separate surrogates for the objective and each constraint: that is,

$$\tilde{L}(x, \lambda) = \tilde{f}(x) + \lambda^T \tilde{c}(x)$$  (5)

The surrogates $\tilde{f}$ for $f$ and $\tilde{c}_j$ for each $c_j$ are created using mapping and correction on the low-fidelity model $\hat{f}$ and $\hat{c}_j$. Mapping and correction are described in the next section.
3) Solve the kth trust-region subproblem

$$\min_{s} \; \tilde{L}_k(x_k + s, \lambda_k) \quad \text{subject to} \quad c(x_k) + \nabla_x c(x_k)^T s \le 0, \quad \|s\|_2 \le \Delta_k$$  (6)

and set the trial step $s_k$ to the minimizing step. The method for solving the problem must result in a step satisfying the fraction-of-Cauchy-decrease condition [16] on the subproblem.
4) Compute $f(x_k + s_k)$ and $c(x_k + s_k)$. The acceptance criterion uses dominance, a concept borrowed from multiobjective optimization [28]. A point $x_1$ dominates a point $x_2$ if both of the following conditions are satisfied:

$$f(x_1) \le f(x_2), \qquad \|c^{+}(x_1)\|_2 \le \|c^{+}(x_2)\|_2$$  (7)

where $c^{+}(x)$ is a vector with elements defined by

$$c_i^{+}(x) = \max(0, c_i(x))$$  (8)

A filter is a set of points, none of which dominate any other. In a filter method, the initial filter is empty. The trial point $x_k + s_k$ is accepted and added to the filter if it is not dominated by any point in the filter. If any element of the filter dominates the trial point, the trial point is rejected and not added to the filter. Significant detail on filter methods is available in Chapter 5 of [16]. If the trial step is accepted, $x_{k+1} = x_k + s_k$. If the trial step is rejected, $x_{k+1} = x_k$.
5) Define the trust-region ratio

$$\rho_k = \frac{L(x_k, \lambda_k) - L(x_k + s_k, \lambda_k)}{\tilde{L}_k(x_k, \lambda_k) - \tilde{L}_k(x_k + s_k, \lambda_k)}$$  (9)

Set

$$\Delta_{k+1} = \begin{cases} c_1 \|s_k\| & \text{if } \rho_k < r_1 \\ \min(c_2 \Delta_k, \Delta^*) & \text{if } \rho_k > r_2 \\ \|s_k\| & \text{otherwise} \end{cases}$$  (10)

If both the numerator and the denominator in Eq. (9) are zero or very small, and the step is accepted using the filter rules, the trust-region size is increased. If only the denominator is very small, the trust-region size is decreased.
6) Calculate new values for the Lagrange multipliers. The Lagrange multipliers are updated by solving the nonnegative least-squares problem

$$\min_{\lambda} \left\| \nabla_x f(x_{k+1}) + \sum_{i \in S} \lambda_i \nabla_x c_i(x_{k+1}) \right\|_2^2 \quad \text{subject to} \quad \lambda \ge 0$$  (11)

where $S$ is the set of active constraints, using the nonnegative least-squares algorithm in Sec. 23.3 of [29]. The solution to this problem is $\lambda_{k+1}$. Increment $k$ by 1 and go to step 2.
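The following is a minimal Python sketch of the loop above (not the authors' implementation), using SciPy. To stay short and self-contained it makes simplifying assumptions: the low- and high-fidelity models share the design space, only a first-order additive correction is used in place of the mapped, quasi-second-order-corrected surrogates described in the following sections, gradients are taken by finite differences, and the multiplier update of Eq. (11) uses all constraints rather than only the active set.

```python
import numpy as np
from scipy.optimize import minimize, nnls

def grad_fd(fun, x, h=1e-6):
    """Forward-difference gradient/Jacobian of a scalar- or vector-valued function."""
    f0 = np.atleast_1d(fun(x))
    g = np.zeros((f0.size, x.size))
    for i in range(x.size):
        xp = x.copy(); xp[i] += h
        g[:, i] = (np.atleast_1d(fun(xp)) - f0) / h
    return g.squeeze()

def trmm_sqp(f, c, f_lo, c_lo, x0, delta0=1.0, delta_max=4.0,
             c1=0.5, c2=2.0, r1=0.1, r2=0.75, max_iter=30):
    """Sketch of the SQP-like TRMM loop (steps 1-6), assuming a shared design space."""
    x, delta = np.array(x0, float), delta0
    lam = np.zeros(np.atleast_1d(c(x)).size)           # step 1: initial multipliers
    filt = []                                           # filter of (f, ||c+||) pairs
    for k in range(max_iter):
        fx, cx = f(x), np.atleast_1d(c(x))
        gf, Jc = grad_fd(f, x), np.atleast_2d(grad_fd(c, x))
        # step 2: first-order additive corrections so the surrogate Lagrangian
        # matches L and its gradient at the trust-region center (Eqs. 3-5)
        a = fx - f_lo(x); g = gf - grad_fd(f_lo, x)
        f_sur = lambda z: f_lo(z) + a + g @ (z - x)
        ac = cx - np.atleast_1d(c_lo(x)); Bc = Jc - np.atleast_2d(grad_fd(c_lo, x))
        c_sur = lambda z: np.atleast_1d(c_lo(z)) + ac + Bc @ (z - x)
        L_sur = lambda z: f_sur(z) + lam @ c_sur(z)
        # step 3: subproblem with linearized constraints and trust region (Eq. 6)
        cons = [{'type': 'ineq', 'fun': lambda s: -(cx + Jc @ s)},
                {'type': 'ineq', 'fun': lambda s: delta - np.linalg.norm(s)}]
        s = minimize(lambda s: L_sur(x + s), np.zeros_like(x),
                     method='SLSQP', constraints=cons).x
        # step 4: filter acceptance based on dominance (Eqs. 7-8)
        f_new, c_new = f(x + s), np.atleast_1d(c(x + s))
        viol = np.linalg.norm(np.maximum(0.0, c_new))
        dominated = any(fe <= f_new and ve <= viol for fe, ve in filt)
        # step 5: trust-region ratio and radius update (Eqs. 9-10)
        L = lambda z: f(z) + lam @ np.atleast_1d(c(z))
        num, den = L(x) - L(x + s), L_sur(x) - L_sur(x + s)
        rho = num / den if abs(den) > 1e-12 else 1.0
        if rho < r1:
            delta = c1 * np.linalg.norm(s)
        elif rho > r2:
            delta = min(c2 * delta, delta_max)
        else:
            delta = np.linalg.norm(s)
        if not dominated:
            filt.append((f_new, viol)); x = x + s
            # step 6: multiplier update by nonnegative least squares (Eq. 11)
            gf_new, Jc_new = grad_fd(f, x), np.atleast_2d(grad_fd(c, x))
            lam, _ = nnls(Jc_new.T, -gf_new)
    return x
```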
Mapping
SBO methods have until now been applicable only to models in which both the high-fidelity model $f(x)$ [$c(x)$] and the low-fidelity model $\hat{f}(\hat{x})$ [$\hat{c}(\hat{x})$] are defined over the same space, $x = \hat{x}$. To use a low-fidelity model with a different number of design variables from the high-fidelity function to be optimized, it is necessary to find a relationship between the two sets of design vectors: that is, $\hat{x} = P(x)$. Then $\hat{f}(P(x))$ is corrected to a surrogate for $f(x)$, and $\hat{c}(P(x))$ is corrected to a surrogate for $c(x)$. The optimization algorithm then calculates trial steps in the high-fidelity space. Another option is to calculate steps in the low-fidelity space and to correct $\hat{f}(\hat{x})$ to a surrogate for $f(Q(\hat{x}))$ and $\hat{c}(\hat{x})$ to a surrogate for $c(Q(\hat{x}))$. The latter option requires constraints on the Jacobian of the mapping to ensure that the projection of the gradient is finite for a finite gradient [30] and will not be addressed here.
In some cases, this design space mapping can be obvious and problem-specific. For instance, if the high- and low-fidelity models are the same set of physical equations but on a fine grid and a coarse grid, and the design vectors in each case are geometric parameters defined on those grids, the low-fidelity design vector can be a subset of the high-fidelity design vector, or the high-fidelity design vector can be an interpolation of the low-fidelity design vector. However, in other problems, there is no obvious mathematical relationship between the design vectors. In this case, an empirical mapping is needed. One example of such a problem is the flapping-flight problem described in this paper. Another is the multifidelity supersonic business jet problem used by Choi et al. [31]. Because then-existing SBO methods cannot be applied to problems in which the low- and high-fidelity models use different design variables, Choi et al. used the two models sequentially, optimizing first using the low-fidelity model, with kriging corrections applied, and using the result of that optimization as a starting point for optimization using the high-fidelity model. This also required an additional step of manually mapping the low-fidelity optimum to the high-fidelity space to provide a starting point for high-fidelity optimization.
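As an illustration of the "obvious" problem-specific mapping described above, the sketch below (an assumed example, not from the paper) treats the design variables as values on fine and coarse grids: the low-fidelity vector is a subset of the high-fidelity vector, and the high-fidelity vector is recovered by linear interpolation. Because both maps are linear, their Jacobians are constant matrices, which is convenient for the chain-rule corrections introduced later.

```python
import numpy as np

# Hypothetical example: geometric design variables defined at grid points.
# High-fidelity: n = 9 points on [0, 1]; low-fidelity: nhat = 3 points.
x_fine = np.linspace(0.0, 1.0, 9)
x_coarse = np.linspace(0.0, 1.0, 3)

def P_subset(x):
    """xhat = P(x): take the high-fidelity values at the coarse grid points."""
    idx = [0, 4, 8]                    # coarse nodes are a subset of fine nodes
    return x[idx]

def Q_interp(xhat):
    """x = Q(xhat): interpolate the coarse design vector onto the fine grid."""
    return np.interp(x_fine, x_coarse, xhat)

# Jacobian of P is a constant selection matrix (and Q's is a constant
# interpolation matrix), since both maps are linear in the design variables.
J_P = np.zeros((3, 9)); J_P[[0, 1, 2], [0, 4, 8]] = 1.0

if __name__ == "__main__":
    x = np.sin(np.pi * x_fine)         # some fine-grid design
    print(P_subset(x))                 # coarse description of the same design
    print(Q_interp(P_subset(x)))       # round trip back to the fine grid
```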
Space Mapping
Space mapping, first introduced by Bandler et al. [32], links the high- and low-fidelity models through their input parameters. The goal of space mapping is to vary the input parameters to the low-fidelity model to match the output of the high-fidelity model. In microwave circuit design, for which space mapping was first developed, it is often appropriate to make corrections to the input of a model, rather than to its output.

The first space-mapping-based optimization algorithm used a linear mapping between the high- and low-fidelity design spaces. It used a least-squares solution of the linear equations resulting from associating corresponding data points in the two spaces. Space-mapping optimization consists of optimizing in the low-fidelity space and inverting the mapping to find a trial point in the high-fidelity space. New data points near the trial point are then used to construct the mapping for the next iteration. This process is repeated until no further progress is made. Although this method can result in substantial improvement (as demonstrated by several design problems, most in circuit design [33], but some in other disciplines [34]), it is not provably convergent to even a local minimum of the high-fidelity space. In fact, although improvement in the high-fidelity model is often possible when the low-fidelity model is similar to the high-fidelity model, it is not guaranteed.
Space mapping was further improved with the introduction of aggressive space mapping [35]. Aggressive space mapping descends more quickly toward the optimum than space mapping, but requires the assumptions that the mapping between the spaces is bijective and that it is always possible to find a set of low-fidelity design vectors that, when fed into the low-fidelity model, provide an output almost identical to the high-fidelity model evaluated at any given high-fidelity design vector. It also requires that the design variables are the same dimension in both spaces. Because the method does not ensure first-order accuracy, the proofs of convergence of trust-region methods do not extend to those methods using space mapping. However, Madsen and Søndergaard [36] developed a provably convergent algorithm by using a hybrid method in which the surrogate is a convex combination of the space-mapped low-fidelity function and a Taylor series approximation to the high-fidelity function.

The space-mapping examples available in the literature consider only the case in which the design vectors have the same length. Therefore, this work expands it to include the variable-parameterization case, including when the design vectors are not the same length.
In space mapping, a particular form is assumed for the relationship $P$ between the high- and low-fidelity design vectors. This form is described by some set of space-mapping parameters, contained here in a vector $p$, that are found by solving an optimization problem:

$$p^* \in \arg\min_{p} \sum_{i=1}^{q} \left\| \phi(x_i) - \hat{\phi}(P(x_i; p)) \right\|^2$$  (12)

This optimization problem seeks to minimize the difference between some high-fidelity function $\phi(x)$ and the corresponding low-fidelity function $\hat{\phi}(\hat{x}) = \hat{\phi}(P(x; p))$ over a set of $q$ sample points $x_i$, where $x_i$ denotes the ith sample point. Both the choice of sample points and the particular form of the mapping $P$ are left to the implementation.
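A minimal sketch of solving problem (12) for a linear map $\hat{x} = Mx + b$ using SciPy's nonlinear least squares. The responses phi_hi and phi_lo and the sample points are hypothetical stand-ins for $\phi$ and $\hat{\phi}$; the form of the map and the choice of samples are, as noted above, implementation choices.

```python
import numpy as np
from scipy.optimize import least_squares

n, nhat = 4, 2          # dimensions of x and xhat

# Hypothetical high- and low-fidelity responses phi(x) and phihat(xhat)
def phi_hi(x):
    return np.sum(x ** 2) + 0.2 * x[0] * x[1]

def phi_lo(xhat):
    return np.sum(xhat ** 2)

def P(x, p):
    """Linear space map xhat = M x + b, with p packing M (nhat*n) and b (nhat)."""
    M = p[: nhat * n].reshape(nhat, n)
    b = p[nhat * n:]
    return M @ x + b

def residuals(p, samples):
    """Residuals of Eq. (12): phi(x_i) - phihat(P(x_i; p)) over the sample points."""
    return np.array([phi_hi(x) - phi_lo(P(x, p)) for x in samples])

rng = np.random.default_rng(0)
samples = rng.uniform(-1.0, 1.0, size=(6, n))      # q = 6 sample points
p0 = np.zeros(nhat * (n + 1))                      # nhat*(n+1) mapping parameters
p_star = least_squares(residuals, p0, args=(samples,)).x
```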
Corrections
As mentioned in the preceding SQP-like algorithm, provable convergence of the TRMM requires at least first-order consistency between the high-fidelity model and the surrogate model. This can be accomplished using corrections. Corrections can be additive or multiplicative; this work uses additive corrections. Although only first-order corrections are required, quasi-second-order corrections have been shown to accelerate convergence of a TRMM [18] and are therefore used in this work.

For some low-fidelity function $\hat{\phi}(\hat{x})$, the corresponding high-fidelity function $\phi(x)$, and a mapping $\hat{x} = P(x)$, the kth additive-corrected surrogate is defined as

$$\tilde{\phi}_k(x) = \hat{\phi}(P(x)) + A_k(x)$$  (13)

To obtain quasi-second-order consistency between $\tilde{\phi}_k(x_k)$ and $\phi(x_k)$, we define the correction function $A_k(x)$ using a quadratic Taylor series expansion of the difference $A(x)$ between the two functions $\phi$ and $\hat{\phi}$ about the point $x_k$:

$$A_k(x) = A(x_k) + \nabla_x A(x_k)^T (x - x_k) + \frac{1}{2} (x - x_k)^T \nabla_x^2 A(x_k) (x - x_k)$$  (14)
The elements in this expansion are calculated using

$$A(x_k) = \phi(x_k) - \hat{\phi}(P(x_k))$$  (15)

$$\frac{\partial A(x_k)}{\partial x_p} = \frac{\partial \phi}{\partial x_p}(x_k) - \sum_{j=1}^{\hat{n}} \frac{\partial \hat{\phi}}{\partial \hat{x}_j}(P(x_k)) \, \frac{\partial \hat{x}_j}{\partial x_p}, \quad p = 1, \ldots, n$$  (16)

$$\frac{\partial^2 A(x_k)}{\partial x_p \partial x_q} = H^k_{pq} - \sum_{j=1}^{\hat{n}} \frac{\partial \hat{\phi}}{\partial \hat{x}_j}(P(x_k)) \, \frac{\partial^2 \hat{x}_j}{\partial x_p \partial x_q} - \sum_{j=1}^{\hat{n}} \sum_{\ell=1}^{\hat{n}} \hat{H}^k_{j\ell} \, \frac{\partial \hat{x}_j}{\partial x_p} \frac{\partial \hat{x}_\ell}{\partial x_q}, \quad p = 1, \ldots, n, \quad q = 1, \ldots, n$$  (17)
where $x_p$ denotes the pth element of the vector $x$, $H^k$ is the Broyden-Fletcher-Goldfarb-Shanno (BFGS) approximation to the Hessian matrix of the high-fidelity function at $x_k$, $\hat{H}^k$ is the BFGS approximation to the Hessian matrix of the low-fidelity function $\hat{\phi}$ at $P(x_k)$, and $H^k_{pq}$ denotes the pqth element of the matrix $H^k$.
For each subproblem k, Eq. (15) computes the difference between the value of the high-fidelity function and the low-fidelity function at the center of the trust region. Using the chain rule, Eq. (16) computes the difference between the gradient of the high-fidelity function and the gradient of the low-fidelity function at the center of the trust region, in which the gradients are computed with respect to the high-fidelity design vector $x$. The second term in Eq. (16) therefore requires the Jacobian of the mapping, $\partial \hat{x}_j / \partial x_p$. Similarly, Eq. (17) computes the difference between the BFGS approximation of the Hessian matrices of the high-fidelity and low-fidelity functions at the center of the trust region. Once again, derivatives are required with respect to $x$ and are computed using the chain rule.
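The sketch below assembles the additive correction of Eqs. (13-17) for the special case of a linear mapping $P(x) = Mx + b$, so that the Jacobian of the mapping is simply $M$ and its second derivatives vanish. The BFGS Hessian approximations $H^k$ and $\hat{H}^k$ are taken as given inputs, and all model functions are hypothetical placeholders supplied by the caller.

```python
import numpy as np

def additive_correction(phi_hi, grad_hi, H, phi_lo, grad_lo, H_lo, M, b, xk):
    """Return the quasi-second-order corrected surrogate of Eqs. (13)-(17).

    phi_hi, grad_hi, H   : high-fidelity value fn, gradient fn, BFGS Hessian at xk
    phi_lo, grad_lo, H_lo: low-fidelity value fn, gradient fn (w.r.t. xhat), BFGS Hessian at P(xk)
    M, b                 : linear mapping xhat = P(x) = M x + b (Jacobian dxhat/dx = M)
    """
    xhat_k = M @ xk + b
    # Eq. (15): value difference at the trust-region center
    A0 = phi_hi(xk) - phi_lo(xhat_k)
    # Eq. (16): gradient difference, chain rule through the mapping
    g_A = grad_hi(xk) - M.T @ grad_lo(xhat_k)
    # Eq. (17): Hessian difference; second derivatives of a linear map are zero
    H_A = H - M.T @ H_lo @ M

    def phi_tilde(x):
        """Eqs. (13)-(14): corrected surrogate phihat(P(x)) + A_k(x)."""
        d = x - xk
        A = A0 + g_A @ d + 0.5 * d @ H_A @ d
        return phi_lo(M @ x + b) + A

    return phi_tilde

if __name__ == "__main__":
    # Hypothetical usage with quadratic placeholder models
    n, nhat = 3, 2
    M = np.array([[1.0, 0.5, 0.0], [0.0, 0.5, 1.0]]); b = np.zeros(nhat)
    phi_hi = lambda x: float(np.sum(x ** 2))
    grad_hi = lambda x: 2.0 * x
    H = 2.0 * np.eye(n)                      # exact Hessian as a stand-in for BFGS
    phi_lo = lambda xh: float(1.5 * np.sum(xh ** 2))
    grad_lo = lambda xh: 3.0 * xh
    H_lo = 3.0 * np.eye(nhat)
    xk = np.array([0.2, -0.1, 0.4])
    surr = additive_correction(phi_hi, grad_hi, H, phi_lo, grad_lo, H_lo, M, b, xk)
    print(surr(xk), phi_hi(xk))              # values agree at the trust-region center
```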
Corrected Space Mapping
Because space mapping does not provide provable convergence within a TRMM framework, but any surrogate that is first-order accurate does, one approach is to correct the space-mapping framework to at least first order. This can be done with the corrections described previously. However, if the input parameters are first selected to match the output function at some number of control points and a correction is subsequently applied, it is likely that the correction will unnecessarily distort the match performed in the space-mapping step. This can be resolved by performing the space-mapping and correction steps simultaneously, which is achieved by embedding the correction within the space mapping.

This concept is illustrated in Fig. 1, which shows the available data points and the center of the trust region. The dotted curve is a cubic function found with a least-squares fit to the available data. It provides no consistency at the trust-region center. The dashed curve shows the result of adding a linear additive correction to that fit to enforce first-order accuracy at the center of the trust region. The local correction distorts the global data fitting. The solid curve is also a cubic function, generated by first enforcing first-order accuracy at the center and then performing a least-squares fit with the remaining degrees of freedom. This last curve is more globally accurate than the sequential fitting and correction steps.

Using this concept, corrected space mapping performs the space-mapping and correction steps simultaneously. That is, it incorporates a correction, and with the remaining degrees of freedom it performs the best match possible to the control points by varying the input mapping.
The corrected space-mapping optimization problem is

$$p_k \in \arg\min_{p} \sum_{i=1}^{q} \left\| \phi(x_i) - \tilde{\phi}_k(x_i; p) \right\|^2$$  (18)

Equation (18) is the same as Eq. (12) with $\hat{\phi}$, the uncorrected low-fidelity function, replaced by $\tilde{\phi}_k$, the corrected surrogate of the high-fidelity function on the kth subproblem. The optimization problem (18) seeks to minimize the difference between the high-fidelity and surrogate objective functions over a set of $q$ sample points $x_i$, where $x_i$ denotes the ith sample (or control) point. Both the choice of sample points and the particular form of the mapping $P$ are left to the implementation. The correction, because it depends on the Jacobian and Hessian matrices of the mapping, must be updated for each new value of $p$.
In the implementation employed in this work, the sample points used in Eq. (12) are the previous $q$ accepted steps in the TRMM algorithm ($x_{k-q+1}, \ldots, x_k$) at which high-fidelity function values are already available. A linear relationship is chosen for the mapping $P$:

$$\hat{x} = P(x) = Mx + b$$  (19)

where $M$ is a matrix with $\hat{n} \times n$ elements and $b$ is a vector of length $\hat{n}$, for a total of $\hat{n}(n + 1)$ space-mapping parameters. It should be noted that other forms of the mapping could also be used. The space-mapping parameters must be determined at each iteration of the TRMM method by solving the optimization problem (18). This additional optimization problem in $\hat{n}(n + 1)$-dimensional space adds computational cost that increases with the number of design variables. However, in many applications, such as computational fluid dynamics problems, this additional algorithm overhead is significantly less than the cost of a function evaluation. Thus, the algorithm provides net computational savings. This is illustrated in the example problems.
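A sketch of the corrected space-mapping fit of Eq. (18) with the linear map of Eq. (19). For brevity the embedded correction is only first order here: for every candidate $p$, the correction is rebuilt so that value and gradient consistency with the high-fidelity function hold at the trust-region center, and the remaining degrees of freedom are then fit to the control points. All model functions, gradients, and sample points are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

n, nhat = 4, 2

def phi_hi(x):                 # hypothetical high-fidelity response
    return np.sum(x ** 2) + 0.2 * x[0] * x[1]

def grad_hi(x):
    g = 2.0 * x; g[0] += 0.2 * x[1]; g[1] += 0.2 * x[0]
    return g

def phi_lo(xhat):              # hypothetical low-fidelity response
    return np.sum(xhat ** 2)

def grad_lo(xhat):
    return 2.0 * xhat

def corrected_surrogate(p, xk):
    """phi_tilde_k(x; p): space-mapped low-fidelity model plus a first-order
    additive correction enforcing consistency with phi_hi at xk."""
    M = p[: nhat * n].reshape(nhat, n); b = p[nhat * n:]
    xhat_k = M @ xk + b
    A0 = phi_hi(xk) - phi_lo(xhat_k)                  # value consistency
    g_A = grad_hi(xk) - M.T @ grad_lo(xhat_k)         # gradient consistency (chain rule)
    return lambda x: phi_lo(M @ x + b) + A0 + g_A @ (x - xk)

def residuals(p, xk, samples):
    """Residuals of Eq. (18) over the q control points."""
    surr = corrected_surrogate(p, xk)
    return np.array([phi_hi(x) - surr(x) for x in samples])

rng = np.random.default_rng(1)
xk = np.zeros(n)                                      # current trust-region center
samples = xk + 0.5 * rng.standard_normal((5, n))      # previous accepted steps (q = 5)
p0 = np.zeros(nhat * (n + 1))
p_k = least_squares(residuals, p0, args=(xk, samples)).x
```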
Example Problems
This paper presents two constrained example problems: a wing planform design problem and the design of a batlike flapping-wing vehicle. Previous work has addressed unconstrained problems,
[Fig. 1 Demonstration of simultaneous vs sequential data fitting and enforcement of first-order accuracy. Legend: data points; trust-region center; least-squares fit; fit with additive correction; constrained fit.]
