Journal ArticleDOI

Reliability-based design optimization using kriging surrogates and subset simulation

TL;DR: The aim of the present paper is to develop a strategy for solving reliability-based design optimization (RBDO) problems that remains applicable when the performance models are expensive to evaluate.
Abstract: The aim of the present paper is to develop a strategy for solving reliability-based design optimization (RBDO) problems that remains applicable when the performance models are expensive to evaluate. Starting with the premise that simulation-based approaches are not affordable for such problems, and that most-probable-failure-point-based approaches do not make it possible to quantify the error on the estimate of the failure probability, an approach based on both metamodels and advanced simulation techniques is explored. The kriging metamodeling technique is chosen to surrogate the performance functions because it allows one to genuinely quantify the surrogate error. The surrogate error on the limit-state surfaces is propagated to the failure probability estimates in order to provide an empirical error measure. This error is then sequentially reduced by means of a population-based adaptive refinement technique until the kriging surrogates are accurate enough for reliability analysis. This original refinement strategy makes it possible to add several observations to the design of experiments at a time. Reliability and reliability sensitivity analyses are performed by means of the subset simulation technique for the sake of numerical efficiency. The adaptive surrogate-based strategy for reliability estimation is finally embedded in a classical gradient-based optimization algorithm in order to solve the RBDO problem. The kriging surrogates are built in a so-called augmented reliability space, making them reusable from one nested RBDO iteration to the next. The strategy is compared to other approaches available in the literature on three academic examples in the field of structural mechanics.
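The "empirical error measure" can be made concrete as follows (a reading consistent with kriging-confidence-interval approaches; the abstract itself does not give the formula, so the constant k and the exact form are assumptions). If μ_Ĝ and σ_Ĝ denote the kriging mean and standard deviation of a performance function, bounding its sign by a confidence interval yields bracketing failure-probability estimates:

    \hat{P}_f^{\pm} = \mathbb{P}\!\left[ \mu_{\hat G}(\mathbf{X}) \mp k\,\sigma_{\hat G}(\mathbf{X}) \le 0 \right], \qquad k = \Phi^{-1}(1 - \alpha/2),

and the design of experiments is enriched until the relative gap (P̂_f⁺ − P̂_f⁻)/P̂_f falls below a prescribed tolerance.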
Citations
Journal ArticleDOI
TL;DR: In this paper, the authors propose to use a Kriging surrogate for the performance function as a means to build a quasi-optimal importance sampling density, which can be applied to analytical and finite element reliability problems and proves efficient up to 100 basic random variables.
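The TL;DR does not spell out the construction; in the metamodel-based importance sampling literature it is typically the following (stated here as an assumption). The intractable optimal density h*(x) ∝ 1{g(x) ≤ 0} f_X(x) is approximated by replacing the indicator with the kriging probabilistic classification function:

    \pi(\mathbf{x}) = \Phi\!\left( \frac{-\mu_{\hat G}(\mathbf{x})}{\sigma_{\hat G}(\mathbf{x})} \right), \qquad \tilde{h}(\mathbf{x}) \propto \pi(\mathbf{x})\, f_{\mathbf{X}}(\mathbf{x}),

so that sampling concentrates where the surrogate deems failure probable.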

389 citations

01 Jan 2004
TL;DR: In this article, a guide for selecting an appropriate method in RBDO is provided by comparing probabilistic design approaches from the perspective of various numerical considerations; the literature finds PMA more efficient and stable than RIA, and PMA offers major advantages over AMA in terms of numerical accuracy, simplicity, and stability.
Abstract: During the past decade, numerous endeavors have been made to develop effective reliability-based design optimization (RBDO) methods. Because the evaluation of probabilistic constraints defined in the RBDO formulation is the most difficult part to deal with, a number of different probabilistic design approaches have been proposed to evaluate probabilistic constraints in RBDO. In the first approach, statistical moments are approximated to evaluate the probabilistic constraint; this is referred to as the approximate moment approach (AMA). The second approach, called the reliability index approach (RIA), describes the probabilistic constraint as a reliability index. Last, the performance measure approach (PMA) was proposed by converting the probability measure to a performance measure. A guide for selecting an appropriate method in RBDO is provided by comparing the probabilistic design approaches from the perspective of various numerical considerations. It has been found in the literature that PMA is more efficient and stable than RIA in the RBDO process. It is found that PMA is accurate enough and stable at an allowable efficiency, whereas AMA has some difficulties in the RBDO process, such as the second-order design sensitivities required for design optimization, inaccuracy in measuring the probability of failure, and numerical instability due to this inaccuracy. Consequently, PMA has several major advantages over AMA in terms of numerical accuracy, simplicity, and stability. Some numerical examples are shown to demonstrate several numerical observations on the three different RBDO approaches.
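For reference, the two inverse formulations compared above can be written compactly in the standard-normal space u with target reliability index β_t (textbook notation, not a quote from the paper):

    \text{RIA:} \quad \beta_i(\mathbf{d}) = \min \{ \|\mathbf{u}\| : g_i(\mathbf{u}, \mathbf{d}) \le 0 \} \;\ge\; \beta_t
    \text{PMA:} \quad g_{p,i}(\mathbf{d}) = \min \{ g_i(\mathbf{u}, \mathbf{d}) : \|\mathbf{u}\| = \beta_t \} \;\ge\; 0

RIA searches for the most probable failure point on the limit state, whereas PMA minimizes the performance function on the sphere of radius β_t, which is generally the better-conditioned of the two problems.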

204 citations

Journal ArticleDOI
TL;DR: A novel approach is proposed to build a surrogate model from a design of experiments by using selected polynomials as regression functions in a universal Kriging model, which appears to offer an optimal compromise between the two other classical approaches.

199 citations


Cites methods from "Reliability-based design optimizati..."

  • ...Later, this approach has been widely used in the computer experiments domain [25, 10, 11, 12], in sequential design of experiments [26, 27, 28] and in global optimization [29]....


  • ...In future investigations, this approach could be applied to global optimization problems and sequential design of experiments oriented to the evaluation of quantiles or oriented to reliability analysis [27, 28, 52, 53]....


Journal ArticleDOI
TL;DR: In this paper, the authors propose a new method to provide local metamodel error estimates based on bootstrap resampling and sparse polynomial chaos expansions (PCE).
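The mechanism summarized in this TL;DR can be sketched generically: refit the expansion on bootstrap resamples of the experimental design and take the spread of the refitted predictions at a point as the local error estimate there. In the minimal Python sketch below, an ordinary polynomial least-squares fit stands in for the sparse PCE (an assumption made for brevity):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, 30)                  # experimental design
    y = np.sin(3.0 * x) + 0.05 * rng.normal(size=30)

    def fit_predict(xs, ys, xq, deg=5):
        # Plain polynomial least squares stands in for the sparse PCE here.
        c = np.polynomial.polynomial.polyfit(xs, ys, deg)
        return np.polynomial.polynomial.polyval(xq, c)

    xq = np.linspace(-1.0, 1.0, 200)
    B = 200
    preds = np.empty((B, xq.size))
    for b in range(B):
        idx = rng.integers(0, x.size, x.size)       # resample the design with replacement
        preds[b] = fit_predict(x[idx], y[idx], xq)

    local_err = preds.std(axis=0)                   # bootstrap local error estimate at xq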

150 citations

Journal ArticleDOI
TL;DR: It is shown that a surrogate model that correctly predicts only the sign of the performance function can meet the accuracy requirement of HRA, and a methodology based on an active learning Kriging (ALK) model, named ALK-HRA, is proposed.
Abstract: Hybrid reliability analysis (HRA) with both random and interval variables is investigated in this paper. First, it is shown that a surrogate model that correctly predicts only the sign of the performance function can meet the accuracy requirement of HRA. Following this idea, a methodology based on an active learning Kriging (ALK) model, named ALK-HRA, is proposed. When constructing the Kriging model, the presented method finely approximates the performance function only in the region of interest: the region where the sign tends to be wrongly predicted. Based on the constructed Kriging model, Monte Carlo simulation (MCS) is carried out to estimate both the lower and upper bounds of the failure probability. ALK-HRA is accurate while calling the performance function as few times as possible. Four numerical examples and one engineering application are investigated to demonstrate the performance of the proposed method.
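The "sign of the performance function" idea pairs naturally with a sign-confidence learning function. The abstract does not state ALK-HRA's exact criterion, so the sketch below uses the well-known U-function from the closely related AK-MCS literature as a stand-in:

    import numpy as np

    rng = np.random.default_rng(0)
    # Stand-in kriging predictions at Monte Carlo candidate points:
    mu = rng.normal(size=10_000)                   # predictive means of g(x)
    sigma = rng.uniform(0.05, 1.0, size=10_000)    # predictive standard deviations

    U = np.abs(mu) / sigma         # small U: the sign of g(x) is uncertain there
    next_point = np.argmin(U)      # candidate whose exact evaluation helps most
    converged = U.min() >= 2.0     # P(wrong sign) <= Phi(-2), about 2.3%, everywhere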

143 citations

References
01 Jan 1967
TL;DR: The k-means procedure partitions an N-dimensional population into k sets on the basis of a sample; the k-means concept generalizes the ordinary sample mean, and the procedure is shown to give partitions that are reasonably efficient in the sense of within-class variance.
Abstract: The main purpose of this paper is to describe a process for partitioning an N-dimensional population into k sets on the basis of a sample. The process, which is called 'k-means,' appears to give partitions which are reasonably efficient in the sense of within-class variance. That is, if p is the probability mass function for the population, S = {S_1, S_2, ..., S_k} is a partition of E_N, and u_i, i = 1, 2, ..., k, is the conditional mean of p over the set S_i, then W²(S) = Σ_{i=1}^{k} ∫_{S_i} ‖z − u_i‖² dp(z) tends to be low for the partitions S generated by the method. We say 'tends to be low,' primarily because of intuitive considerations, corroborated to some extent by mathematical analysis and practical computational experience. Also, the k-means procedure is easily programmed and is computationally economical, so that it is feasible to process very large samples on a digital computer. Possible applications include methods for similarity grouping, nonlinear prediction, approximating multivariate distributions, and nonparametric tests for independence among several variables. In addition to suggesting practical classification methods, the study of k-means has proved to be theoretically interesting. The k-means concept represents a generalization of the ordinary sample mean, and one is naturally led to study the pertinent asymptotic behavior, the object being to establish some sort of law of large numbers for the k-means. This problem is sufficiently interesting, in fact, for us to devote a good portion of this paper to it. The k-means are defined in section 2.1, and the main results which have been obtained on the asymptotic behavior are given there. The rest of section 2 is devoted to the proofs of these results. Section 3 describes several specific possible applications, and reports some preliminary results from computer experiments conducted to explore the possibilities inherent in the k-means idea. The extension to general metric spaces is indicated briefly in section 4. The original point of departure for the work described here was a series of problems in optimal classification (MacQueen [9]) which represented special
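A minimal numpy sketch of the partitioning procedure follows. Note it is the batch (Lloyd-style) variant; MacQueen's original algorithm updates the means sequentially as each sample is assigned, but both aim at partitions with low within-class variance W²(S):

    import numpy as np

    def kmeans(z, k, iters=100, seed=0):
        """Batch k-means: alternate nearest-mean assignment and mean update."""
        rng = np.random.default_rng(seed)
        means = z[rng.choice(len(z), k, replace=False)]        # initial means
        for _ in range(iters):
            d = np.linalg.norm(z[:, None, :] - means[None, :, :], axis=2)
            labels = d.argmin(axis=1)                          # assign to nearest mean
            new = np.array([z[labels == j].mean(axis=0) if np.any(labels == j)
                            else means[j] for j in range(k)])  # keep empty clusters put
            if np.allclose(new, means):
                break
            means = new
        return means, labels

    # e.g. reduce a large candidate population to k representative points,
    # as done in the RBDO paper above:
    sample = np.random.default_rng(1).normal(size=(5000, 2))
    centers, labels = kmeans(sample, k=10)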

24,320 citations


"Reliability-based design optimizati..." refers methods in this paper

  • ...This population of candidates is then reduced to a smaller one that has essentially the same statistical properties (i.e. it uniformly covers M) by means of the K-means clustering technique (MacQueen, 1967)....

Journal ArticleDOI
TL;DR: This paper introduces the reader to a response surface methodology that is especially good at modeling the nonlinear, multimodal functions that often occur in engineering and shows how these approximating functions can be used to construct an efficient global optimization algorithm with a credible stopping rule.
Abstract: In many engineering optimization problems, the number of function evaluations is severely limited by time or cost. These problems pose a special challenge to the field of global optimization, since existing methods often require more function evaluations than can be comfortably afforded. One way to address this challenge is to fit response surfaces to data collected by evaluating the objective and constraint functions at a few points. These surfaces can then be used for visualization, tradeoff analysis, and optimization. In this paper, we introduce the reader to a response surface methodology that is especially good at modeling the nonlinear, multimodal functions that often occur in engineering. We then show how these approximating functions can be used to construct an efficient global optimization algorithm with a credible stopping rule. The key to using response surfaces for global optimization lies in balancing the need to exploit the approximating surface (by sampling where it is minimized) with the need to improve the approximation (by sampling where prediction error may be high). Striking this balance requires solving certain auxiliary problems which have previously been considered intractable, but we show how these computational obstacles can be overcome.
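The exploit/explore balance described in this abstract is struck by the expected improvement criterion. A minimal sketch (the formula is the classical one from this line of work; the code framing is illustrative):

    import numpy as np
    from scipy.stats import norm

    def expected_improvement(mu, sigma, f_min):
        """Expected improvement of a Gaussian surrogate prediction below the
        current best observed value f_min: large where the predicted mean is
        low (exploitation) or the prediction error is high (exploration)."""
        sigma = np.maximum(sigma, 1e-12)      # guard against zero predicted variance
        z = (f_min - mu) / sigma
        return (f_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)

The next design point is the maximizer of this criterion over the domain, and the search stops once the maximal expected improvement becomes negligible.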

6,914 citations

Journal ArticleDOI
TL;DR: The included papers present an interesting mixture of recent developments in the field as they cover fundamental research on the design of experiments, models and analysis methods as well as more applied research connected to real-life applications.
Abstract: The design and analysis of computer experiments as a relatively young research field is not only of high importance for many industrial areas but also presents new challenges and open questions for statisticians. This editorial introduces a special issue devoted to the topic. The included papers present an interesting mixture of recent developments in the field as they cover fundamental research on the design of experiments, models and analysis methods as well as more applied research connected to real-life applications.

2,583 citations


"Reliability-based design optimizati..." refers background in this paper

  • ...Kriging (Santner et al., 2003) is one particular emulator that is able to give a probabilistic response Ŷ(x) whose variance (spread) depends on the quantity of available knowledge....
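For completeness, the probabilistic response referred to in this quote is given by the standard universal kriging equations (consistent with Santner et al., though not quoted in this snippet): with observations y at a design with regression matrix F, correlation matrix R, regression functions f(x) and cross-correlation vector r(x),

    \mu_{\hat Y}(\mathbf{x}) = \mathbf{f}(\mathbf{x})^{\mathsf T} \hat{\boldsymbol\beta} + \mathbf{r}(\mathbf{x})^{\mathsf T} \mathbf{R}^{-1} (\mathbf{y} - \mathbf{F}\hat{\boldsymbol\beta}),
    \sigma^2_{\hat Y}(\mathbf{x}) = \sigma^2_Y \left( 1 - \mathbf{r}(\mathbf{x})^{\mathsf T} \mathbf{R}^{-1} \mathbf{r}(\mathbf{x}) + \mathbf{u}(\mathbf{x})^{\mathsf T} (\mathbf{F}^{\mathsf T} \mathbf{R}^{-1} \mathbf{F})^{-1} \mathbf{u}(\mathbf{x}) \right),

with u(x) = Fᵀ R⁻¹ r(x) − f(x). The variance vanishes at observed points and grows away from them, which is exactly the "quantity of available knowledge" behaviour the quote describes.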

BookDOI

Statistics for Spatial Data

10 Sep 1993

2,500 citations


"Reliability-based design optimizati..." refers methods in this paper

  • ...In this field, the autocovariance structure is usually estimated from the data using variographic analysis (Cressie, 1993)....


Journal ArticleDOI
TL;DR: This work discusses how to make practical use of spatial statistics in day-to-day analytical work, a task made difficult by the scattered and uneven treatments of the subject in the scientific literature.
Abstract: Spatial statistics — analyzing spatial data through statistical models — has proven exceptionally versatile, encompassing problems ranging from the microscopic to the astronomic. However, for the scientist and engineer faced only with scattered and uneven treatments of the subject in the scientific literature, learning how to make practical use of spatial statistics in day-to-day analytical work is very difficult.

2,238 citations


"Reliability-based design optimizati..." refers background or methods in this paper

  • ...Note that the idea is inspired by Hurtado (2004); Deheeger and Lemaire (2007); Deheeger (2008); Bourinet et al....

  • ...In this field, the autocovariance structure is usually estimated from the data using variographic analysis (Cressie, 1993). Then, provided the empirical variogram features some required properties, it can be turned into an autocovariance model. However, this methodology is not well-suited to our purpose because of its user-interactivity. In computer experiments, the most widely used methodology is the MLE technique. Provided a functional set f ∈ L²(D_x, ℝ) and a stationary autocorrelation model R(·, l) are chosen, one can express the likelihood of the data with respect to the model and maximize it with respect to the sought parameters (l, β and σ²_Y). One can show that β and σ²_Y can be derived analytically (using the first-order optimality conditions) and solely depend on the autocovariance parameters l, which are the solution of a numerically tractable global optimization problem – see e.g. Welch et al. (1992); Lophaven et al. (2002) for more details. This technique, implemented within the DACE toolbox by Lophaven et al. (2002), was used for the applications presented in this paper....
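The MLE passage quoted above can be sketched concretely: for a fixed vector of correlation lengths l, β and σ²_Y have closed-form optima, leaving a concentrated likelihood to be maximized over l alone. A minimal Python sketch with a constant trend and Gaussian autocorrelation (an illustrative assumption, not the DACE toolbox implementation):

    import numpy as np

    def neg_profiled_loglik(l, X, y):
        """Concentrated (profiled) negative log-likelihood: beta and sigma^2
        are replaced by their analytic optima (first-order optimality), so
        only the correlation lengths l remain to be optimized."""
        l = np.asarray(l, dtype=float)
        n = len(y)
        d2 = (((X[:, None, :] - X[None, :, :]) / l) ** 2).sum(axis=2)
        R = np.exp(-d2) + 1e-10 * np.eye(n)          # Gaussian correlation + nugget
        F = np.ones((n, 1))                          # constant (ordinary kriging) trend
        Ri = np.linalg.inv(R)
        beta = np.linalg.solve(F.T @ Ri @ F, F.T @ Ri @ y)
        res = y - F @ beta
        sigma2 = (res @ Ri @ res) / n                # analytic optimum of sigma^2
        _, logdet = np.linalg.slogdet(R)
        return 0.5 * (n * np.log(sigma2) + logdet)   # up to additive constants

    # l is then found with a global optimizer, e.g.:
    # from scipy.optimize import differential_evolution
    # result = differential_evolution(neg_profiled_loglik, bounds, args=(X, y))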