Journal ArticleDOI

An adaptive PCE-HDMR metamodeling approach for high-dimensional problems

Xinxin Yue, Jian Zhang, Weijie Gong, Min Luo, Libin Duan
27 Feb 2021-Structural and Multidisciplinary Optimization (Springer Berlin Heidelberg)-Vol. 64, Iss: 1, pp 141-162
TL;DR: The results show that the proposed PCE-HDMR has far superior accuracy and robustness in terms of both global and local error metrics while requiring fewer samples, and its superiority becomes more significant for polynomial-like functions, higher-dimensional problems, and relatively larger PCE degrees.
Abstract: Metamodel-based high-dimensional model representation (HDMR) has recently been developed as a promising tool for approximating high-dimensional and computationally expensive problems in engineering design and optimization. However, current stand-alone Cut-HDMRs usually suffer from prediction uncertainty, while combining an ensemble of metamodels with Cut-HDMR results in an implicit and inefficient response-approximation process. To this end, a novel stand-alone Cut-HDMR is proposed in this article by taking advantage of the explicit polynomial chaos expansion (PCE) and hierarchical Cut-HDMR (named PCE-HDMR). An intelligent dividing rectangles (DIRECT) sampling method is adopted to adaptively refine the model. The novelty of PCE-HDMR lies in its multi-hierarchical algorithm structure: by integrating PCE with Cut-HDMR, it can efficiently and robustly provide simple and explicit approximations for a wide class of high-dimensional problems. An analytical function is first used to illustrate the modeling principles and procedures of the algorithm, and a comprehensive comparison between the proposed PCE-HDMR and other well-established Cut-HDMRs is then made on fourteen representative mathematical functions and five engineering examples with a wide range of dimensionalities. The results show that the proposed PCE-HDMR has far superior accuracy and robustness in terms of both global and local error metrics while requiring fewer samples, and its superiority becomes more significant for polynomial-like functions, higher-dimensional problems, and relatively larger PCE degrees.
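The abstract above builds on the Cut-HDMR decomposition, which approximates a high-dimensional function by its value at a cut center plus low-order component terms along the cut lines. A minimal sketch of the first-order idea — not the authors' algorithm (no DIRECT sampling, no adaptive refinement, and plain Legendre least-squares fits standing in for a full PCE) — is:

```python
import numpy as np
from numpy.polynomial import legendre

def cut_hdmr_first_order(f, x0, lb, ub, degree=4, n_samp=9):
    """First-order Cut-HDMR: f(x) ~ f(x0) + sum_i [f_i(x_i) - f(x0)],
    with each 1-D component fit by a Legendre expansion (PCE-style)."""
    d = len(x0)
    f0 = f(x0)
    coefs = []
    for i in range(d):
        xi = np.linspace(lb[i], ub[i], n_samp)      # samples along the i-th cut line
        ys = []
        for v in xi:
            x = x0.copy()
            x[i] = v
            ys.append(f(x))
        t = 2 * (xi - lb[i]) / (ub[i] - lb[i]) - 1  # map to [-1, 1] for the fit
        coefs.append(legendre.legfit(t, np.array(ys), degree))

    def predict(x):
        out = f0
        for i, c in enumerate(coefs):
            t = 2 * (x[i] - lb[i]) / (ub[i] - lb[i]) - 1
            out += legendre.legval(t, c) - f0       # first-order component f_i - f0
        return out

    return predict

# Additively separable test function: first-order Cut-HDMR is exact up to fit error.
f = lambda x: x[0] ** 2 + np.sin(x[1]) + 3.0 * x[2]
x0, lb, ub = np.full(3, 0.5), np.zeros(3), np.ones(3)
fhat = cut_hdmr_first_order(f, x0, lb, ub)
x = np.array([0.2, 0.8, 0.6])
err = abs(fhat(x) - f(x))
```

For functions with strong variable interactions, second-order terms f_ij over cut planes would be added on top of this; the hierarchy in the paper extends the same pattern.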
Citations
Journal ArticleDOI
TL;DR: In this paper, a prediction-oriented active sparse polynomial chaos expansion (PAS-PCE) is proposed for reliability analysis, which uses Bregman-iterative greedy coordinate descent to efficiently solve the least absolute shrinkage and selection operator (LASSO) regression for a sparse PCE approximation from a small set of initial samples.

11 citations

Journal ArticleDOI
TL;DR: This paper is among the first to use the XFEM to study robust topology optimization under uncertainty; because the XFEM handles boundary representation directly, no post-processing techniques are needed, and the effectiveness of the method is evidenced by the clear and smooth boundaries obtained.
Abstract: This research presents a novel algorithm for robust topology optimization of continuous structures under material and loading uncertainties by combining an evolutionary structural optimization (ESO) method with an extended finite element method (XFEM). Conventional topology optimization approaches (e.g. ESO) often require additional post-processing to generate a manufacturable topology with smooth boundaries. By adopting the XFEM for boundary representation in the finite element (FE) framework, the proposed method eliminates this time-consuming post-processing stage and produces more accurate evaluation of the elements along the design boundary for ESO-based topology optimization methods. A truncated Gaussian random field (without negative values) using a memory-less translation process is utilized for the random uncertainty analysis of the material property and load angle distribution. The superiority of the proposed method over Monte Carlo, solid isotropic material with penalization (SIMP) and polynomial chaos expansion (PCE) using classical finite element method (FEM) is demonstrated via two practical examples with compliances in material uncertainty and loading uncertainty improved by approximately 11% and 10%, respectively. The novelty of the present method lies in the following two aspects: (1) this paper is among the first to use the XFEM in studying the robust topology optimization under uncertainty; (2) due to the adopted XFEM for boundary elements in the FE framework, there is no need for any post-processing techniques. The effectiveness of this method is justified by the clear and smooth boundaries obtained.

9 citations
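The truncated Gaussian random field mentioned in the abstract above is obtained by a memory-less translation: a standard Gaussian sample g is pushed through Y = F_trunc^{-1}(Phi(g)), giving the truncated marginal without negative values. A minimal one-point-statistics sketch with SciPy, using made-up bounds and moments (the paper's actual field parameters are not given here):

```python
import numpy as np
from scipy.stats import norm, truncnorm

# Memory-less translation: Y = F_trunc^{-1}(Phi(g)) maps each standard Gaussian
# sample g to a truncated Gaussian marginal (no negative values).  The moments
# and bounds below are hypothetical, chosen only for illustration.
rng = np.random.default_rng(0)
mu, sigma = 200.0, 20.0                       # hypothetical material-property moments
lo, hi = 150.0, 260.0                         # truncation bounds
a, b = (lo - mu) / sigma, (hi - mu) / sigma   # standardized bounds for truncnorm

g = rng.standard_normal(10_000)               # underlying Gaussian samples
y = truncnorm.ppf(norm.cdf(g), a, b, loc=mu, scale=sigma)
```

In a full random-field setting the same pointwise map is applied to a correlated Gaussian field, which preserves the spatial correlation structure up to the translation distortion.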

Journal ArticleDOI
TL;DR: In this article, a new stand-alone high-dimensional model representation (HDMR) metamodeling technique, termed Dendrite-HDMR, is proposed based on the hierarchical Cut-HDMR and the white-box machine learning algorithm Dendrite Net.
Abstract: High-dimensional model representation (HDMR), which decomposes a high-dimensional problem into summands of component terms of different order, has been widely researched as a way around the "curse of dimensionality" when using surrogate techniques to approximate high-dimensional problems in engineering design. However, available one-metamodel-based HDMRs usually suffer from prediction uncertainty, while current multi-metamodel-based HDMRs cannot provide simple explicit expressions for black-box problems and have high computational complexity, both in constructing the model from the explored points and in predicting the responses at unobserved locations. Aimed at such problems, a new stand-alone HDMR metamodeling technique, termed Dendrite-HDMR, is proposed in this study based on the hierarchical Cut-HDMR and the white-box machine learning algorithm Dendrite Net. The proposed Dendrite-HDMR not only provides succinct, explicit expressions in the form of a Taylor expansion but also achieves higher accuracy and stronger stability for most mathematical functions than other classical HDMRs, with the assistance of the proposed adaptive sampling strategy, named KKMC, in which the k-means clustering algorithm, the k-nearest-neighbor classification algorithm, and the maximum curvature information of the provided expression are utilized to sample new points to refine the model. Finally, the Dendrite-HDMR technique is applied to the design optimization of a solid launch vehicle propulsion system with the purpose of improving the impulse-weight ratio, which represents the design level of the propulsion system.

5 citations

Journal ArticleDOI
TL;DR: Wang et al. propose an efficient structural reliability analysis method based on stochastic configuration networks (SCNs), constructed by learning the real performance function of the structure.
Abstract: Structural reliability analysis is of critical importance for ensuring the safety of many engineering systems. This paper proposes an efficient structural reliability analysis method based on stochastic configuration networks (SCNs), constructed by learning the real performance function of the structure. Two kinds of innovative surrogate models of the performance function are built from the trained SCNs. To improve the approximation accuracy and efficiency of SCNs for complex high-dimensional performance functions, an improved high-dimensional model representation is applied to decompose the multivariate function into a combination of several low-dimensional functions, and an SCN-based learning strategy is proposed to approximate all the low-dimensional component functions simultaneously. By performing Monte Carlo simulation on the established SCN-based surrogate models, the failure probability of complex multi-dimensional engineering structures can be evaluated. The effectiveness of the proposed method in terms of accuracy, efficiency, and stability is demonstrated through several numerical experiments.

5 citations
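The final step described in the abstract above — Monte Carlo simulation on a surrogate of the performance function — reduces to counting the fraction of samples in the failure domain g(x) < 0. A minimal sketch on a toy limit state with a known answer (the SCN surrogate itself is not reproduced; any surrogate would replace the call to g):

```python
import numpy as np

# Monte Carlo failure-probability estimate on a (surrogate) performance
# function: failure when g(x) < 0.  Toy limit state with a known answer:
# g = 3 - x with x ~ N(0, 1), so P_f = P(x > 3) = 1 - Phi(3) ~ 1.35e-3.
rng = np.random.default_rng(42)
x = rng.standard_normal(1_000_000)
g = 3.0 - x                      # an SCN surrogate would replace this line
p_f = float(np.mean(g < 0.0))
```

The point of the surrogate is precisely that these million evaluations hit a cheap approximation rather than the expensive finite element model.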

Journal ArticleDOI
TL;DR: In this work, a robust prediction method is proposed based on the Kriging method and the fuzzy c-means algorithm; it achieves much better outlier-detection accuracy and prediction accuracy than the conventional outlier detection method and the Kriging method.

4 citations

References
Book
D.L. Donoho
01 Jan 2004
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients; a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program (Basis Pursuit in signal processing).
Abstract: Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^(1/4) log^(5/2)(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so that the coefficients belong to an l^p ball for 0 < p <= 1.

18,609 citations

Journal ArticleDOI
TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions.
Abstract: The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries --- stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.

9,950 citations
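The l1-minimization principle described above becomes a linear program once the coefficients are split into positive and negative parts, c = u - v with u, v >= 0 — which is why the 8192-sample problem in the abstract doubles to 212,992 LP variables. A small sketch with SciPy's LP solver on a synthetic Gaussian dictionary (problem sizes and the sparse signal are illustrative, not from the paper):

```python
import numpy as np
from scipy.optimize import linprog

# Basis pursuit:  min ||c||_1  subject to  A c = y.  Splitting c = u - v with
# u, v >= 0 yields the LP:  min 1'u + 1'v  s.t.  [A, -A][u; v] = y.
rng = np.random.default_rng(1)
n, d = 20, 40                             # measurements, dictionary atoms
A = rng.standard_normal((n, d)) / np.sqrt(n)
c_true = np.zeros(d)
c_true[[3, 17, 29]] = [1.5, -2.0, 0.7]    # 3-sparse coefficient vector
y = A @ c_true

cost = np.ones(2 * d)                     # objective: sum(u) + sum(v) = ||c||_1
res = linprog(cost, A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=(0, None), method="highs")
c_hat = res.x[:d] - res.x[d:]             # recover c from the split variables
```

For a sufficiently sparse c_true and a generic Gaussian dictionary, the LP recovers the sparsest decomposition exactly, which is the phenomenon the compressed-sensing reference above quantifies.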

Journal ArticleDOI
TL;DR: It is demonstrated theoretically and empirically that a greedy algorithm called orthogonal matching pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.
Abstract: This paper demonstrates theoretically and empirically that a greedy algorithm called orthogonal matching pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m2) measurements. The new results for OMP are comparable with recent results for another approach called basis pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.

8,604 citations
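The greedy OMP procedure analyzed above is short enough to state in full: at each of m steps, add the dictionary column most correlated with the current residual to the support, then re-fit all selected columns by least squares. A plain NumPy sketch (sizes and the sparse signal are illustrative):

```python
import numpy as np

def omp(A, y, m):
    """Orthogonal matching pursuit: at each step add the column most
    correlated with the residual, then re-fit the support by least squares."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(m):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef   # orthogonal to chosen columns
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(7)
n, d, m = 30, 60, 3                           # n = O(m ln d) measurements suffice
A = rng.standard_normal((n, d))
A /= np.linalg.norm(A, axis=0)                # unit-norm dictionary columns
x_true = np.zeros(d)
x_true[[5, 22, 48]] = [2.0, -1.0, 0.5]
x_hat = omp(A, A @ x_true, m)
```

Because the residual is kept orthogonal to every selected column, no column is ever picked twice — the property that distinguishes OMP from plain matching pursuit.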

01 Aug 2007
TL;DR: In this report, a greedy algorithm called Orthogonal Matching Pursuit (OMP) is shown to reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.
Abstract: This report demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(mln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m2) measurements. The new results for OMP are comparable with recent results for another approach called Basis Pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.

7,124 citations

Book
20 Dec 1990
TL;DR: This book develops the spectral stochastic finite element method, covering the representation of stochastic processes, the response representation and response statistics of the stochastic finite element method, and numerical examples.
Abstract: Contents: representation of stochastic processes; stochastic finite element method - response representation; stochastic finite element method - response statistics; numerical examples.

5,495 citations
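The spectral representation this book develops expands a random response in orthogonal polynomials of Gaussian variables — the polynomial chaos underlying the PCE-HDMR method above. A minimal one-dimensional sketch: the lognormal Y = exp(xi) with xi ~ N(0, 1) has probabilists' Hermite chaos coefficients c_k = e^(1/2)/k!, which Gauss-Hermite quadrature recovers:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

# Polynomial chaos in one dimension: expand Y = exp(xi), xi ~ N(0, 1), in
# probabilists' Hermite polynomials He_k.  Exact coefficients: c_k = e^(1/2)/k!.
nodes, weights = He.hermegauss(40)       # Gauss quadrature for weight exp(-x^2/2)
weights /= np.sqrt(2.0 * np.pi)          # normalize to the standard normal density

def pce_coef(k):
    basis = He.hermeval(nodes, np.eye(k + 1)[k])         # He_k at the nodes
    # c_k = E[exp(xi) He_k(xi)] / E[He_k(xi)^2], with E[He_k^2] = k!
    return float(np.sum(weights * np.exp(nodes) * basis)) / math.factorial(k)

c = [pce_coef(k) for k in range(4)]      # ~ e^0.5 * [1, 1, 1/2, 1/6]
```

In the multivariate setting the same projection is taken against tensor products of He_k, and truncating the expansion by total degree gives the finite PCE used by the surrogate methods above.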