Journal Article•DOI•

Sparse polynomial chaos expansion based on Bregman-iterative greedy coordinate descent for global sensitivity analysis

Jian Zhang, Xinxin Yue, Jiajia Qiu, Lijun Zhuo, Jianguo Zhu •
01 Aug 2021 - Mechanical Systems and Signal Processing (Academic Press) - Vol. 157, pp. 107727
TL;DR: A novel methodology for developing sparse PCE is proposed that combines the efficiency of greedy coordinate descent in sparsity exploitation with the capability of Bregman iteration in accuracy enhancement; results show that the proposed method is more accurate than the benchmark methods while maintaining a better balance among accuracy, complexity, and computational efficiency.
About: This article is published in Mechanical Systems and Signal Processing. The article was published on 2021-08-01. It has received 14 citations to date. The article focuses on the topics: Polynomial chaos & Coordinate descent.
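To make the technique concrete, here is a minimal Python sketch of sparse-PCE-based global sensitivity analysis: orthonormal Legendre coefficients are fitted by an ℓ1-penalized regression (scikit-learn's Lasso is used as a stand-in for the paper's Bregman-iterative greedy coordinate descent solver), and first-order Sobol indices are then read off from the squared coefficients. The Ishigami-style test function, sample size, degree, and penalty level are illustrative assumptions, not values from the paper.

import itertools
import numpy as np
from numpy.polynomial.legendre import legval
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
dim, degree, n_samples = 3, 5, 300

def model(x):  # Ishigami-like test function, an illustrative stand-in
    return np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2 + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])

X = rng.uniform(-np.pi, np.pi, size=(n_samples, dim))
y = model(X)
U = X / np.pi  # rescale inputs to [-1, 1], where Legendre polynomials are orthogonal

# Multi-indices up to total degree `degree`, and the orthonormal design matrix.
alphas = [a for a in itertools.product(range(degree + 1), repeat=dim) if sum(a) <= degree]

def phi(u, n):  # Legendre polynomial of order n, orthonormal under the uniform density
    c = np.zeros(n + 1)
    c[n] = 1.0
    return np.sqrt(2 * n + 1) * legval(u, c)

A = np.column_stack([np.prod([phi(U[:, i], a[i]) for i in range(dim)], axis=0) for a in alphas])

coef = Lasso(alpha=1e-3, max_iter=100000).fit(A, y).coef_  # sparse PCE coefficients
var = sum(c ** 2 for a, c in zip(alphas, coef) if sum(a) > 0)  # PCE estimate of total variance
for i in range(dim):
    # First-order Sobol index: share of variance carried by terms involving x_i alone.
    S_i = sum(c ** 2 for a, c in zip(alphas, coef) if a[i] > 0 and sum(a) == a[i]) / var
    print(f"S_{i + 1} ≈ {S_i:.3f}")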
Citations
Journal Article•DOI•
Xinxin Yue, Jian Zhang, Weijie Gong, Min Luo, Libin Duan •
TL;DR: The results show that the proposed PCE-HDMR has superior accuracy and robustness in terms of both global and local error metrics while requiring fewer samples, and its superiority becomes more significant for polynomial-like functions, higher-dimensional problems, and relatively larger PCE degrees.
Abstract: Metamodel-based high-dimensional model representation (HDMR) has recently been developed as a promising tool for approximating high-dimensional and computationally expensive problems in engineering design and optimization. However, current stand-alone Cut-HDMRs usually suffer from prediction uncertainty, while combining an ensemble of metamodels with Cut-HDMR results in an implicit and inefficient response-approximation process. To this end, a novel stand-alone Cut-HDMR is proposed in this article by taking advantage of the explicit polynomial chaos expansion (PCE) and hierarchical Cut-HDMR (named PCE-HDMR). An intelligent dividing rectangles (DIRECT) sampling method is adopted to adaptively refine the model. The novelty of PCE-HDMR is that its multi-hierarchical algorithm structure, which integrates PCE with Cut-HDMR, can efficiently and robustly provide simple and explicit approximations for a wide class of high-dimensional problems. An analytical function is first used to illustrate the modeling principles and procedures of the algorithm, and a comprehensive comparison between the proposed PCE-HDMR and other well-established Cut-HDMRs is then made on fourteen representative mathematical functions and five engineering examples spanning a wide range of dimensionalities. The results show that PCE-HDMR has superior accuracy and robustness in terms of both global and local error metrics while requiring fewer samples, and its superiority becomes more significant for polynomial-like functions, higher-dimensional problems, and relatively larger PCE degrees.

16 citations
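For readers unfamiliar with the decomposition underlying this work, below is a minimal Python sketch of a plain first-order Cut-HDMR surrogate. The full PCE-HDMR additionally fits each component function with a polynomial chaos expansion and refines the design adaptively via DIRECT sampling; the simple additive test function and grid sizes here are illustrative assumptions.

import numpy as np

def cut_hdmr_first_order(f, cut, grids):
    """First-order Cut-HDMR surrogate of f around the cut point `cut`.

    grids[i] is a 1-D array of sample points for variable i; each univariate
    component f_i(x_i) = f(x_i, cut_-i) - f(cut) is tabulated and interpolated.
    """
    f0 = f(cut)
    tables = []
    for i, g in enumerate(grids):
        pts = np.tile(cut, (len(g), 1))
        pts[:, i] = g                     # vary only variable i along its grid
        tables.append(np.array([f(p) for p in pts]) - f0)

    def surrogate(x):
        return f0 + sum(np.interp(x[i], grids[i], tables[i]) for i in range(len(grids)))
    return surrogate

# Usage: an additive test function is reproduced exactly by first-order Cut-HDMR.
f = lambda x: x[0] ** 2 + np.sin(x[1]) + 3.0 * x[2]
grids = [np.linspace(-1, 1, 9)] * 3
s = cut_hdmr_first_order(f, np.zeros(3), grids)
x = np.array([0.3, -0.5, 0.8])
print(f(x), s(x))  # the two values agree up to interpolation error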

Journal Article•DOI•
TL;DR: In this paper, a prediction-oriented active sparse polynomial chaos expansion (PAS-PCE) method is proposed for reliability analysis; it uses Bregman-iterative greedy coordinate descent to efficiently solve the least-absolute-shrinkage-and-selection-operator (lasso) regression underlying the sparse PCE approximation from a small set of initial samples.

11 citations

Journal Article•DOI•
TL;DR: This paper is among the first to use the XFEM to study robust topology optimization under uncertainty; because no post-processing is required, the method's effectiveness is evidenced by the clear, smooth boundaries it produces.
Abstract: This research presents a novel algorithm for robust topology optimization of continuous structures under material and loading uncertainties by combining an evolutionary structural optimization (ESO) method with an extended finite element method (XFEM). Conventional topology optimization approaches (e.g., ESO) often require additional post-processing to generate a manufacturable topology with smooth boundaries. By adopting the XFEM for boundary representation in the finite element (FE) framework, the proposed method eliminates this time-consuming post-processing stage and produces more accurate evaluation of the elements along the design boundary for ESO-based topology optimization methods. A truncated Gaussian random field (without negative values) generated by a memory-less translation process is used for the random uncertainty analysis of the material property and load-angle distribution. The superiority of the proposed method over Monte Carlo, solid isotropic material with penalization (SIMP), and polynomial chaos expansion (PCE) using the classical finite element method (FEM) is demonstrated via two practical examples, with compliance under material uncertainty and loading uncertainty improved by approximately 11% and 10%, respectively. The novelty of the present method lies in two aspects: (1) this paper is among the first to use the XFEM to study robust topology optimization under uncertainty; (2) owing to the XFEM treatment of boundary elements in the FE framework, no post-processing techniques are needed. The effectiveness of the method is justified by the clear and smooth boundaries obtained.

9 citations
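As one concrete ingredient, the "memory-less translation process" mentioned in the abstract can be sketched in a few lines of Python: a stationary Gaussian field sample is pushed through the standard normal CDF and then the inverse CDF of a truncated normal, yielding a strictly positive random field for the material property. The mean, standard deviation, and correlation length below are illustrative assumptions.

import numpy as np
from scipy.stats import norm, truncnorm

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
ell = 0.2                                              # assumed correlation length
C = np.exp(-((x[:, None] - x[None, :]) / ell) ** 2)    # squared-exponential covariance
L = np.linalg.cholesky(C + 1e-10 * np.eye(len(x)))     # jitter for numerical stability
g = L @ rng.normal(size=len(x))                        # one standard Gaussian field sample

# Translate: target marginal is normal(mean=1.0, sd=0.2) truncated to [0, inf).
a = (0.0 - 1.0) / 0.2                                  # lower truncation bound in sigma units
field = truncnorm.ppf(norm.cdf(g), a, np.inf, loc=1.0, scale=0.2)
print(field.min())                                     # strictly positive material property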

Journal Article•DOI•
TL;DR: In this work, a robust prediction method based on Kriging and the fuzzy c-means algorithm is proposed; it achieves much better outlier-detection accuracy and prediction accuracy than the conventional outlier detection method and the standard Kriging method.

4 citations

Journal Article•DOI•
01 Mar 2022
TL;DR: In this article, a blind-Kriging-based natural-frequency prediction scheme for industrial robots is proposed, using the Latin hypercube sampling (LHS) technique; a reliable dataset of 120 FEM-generated samples is used to build the surrogate models.
Abstract: High-precision assembly conditions tend to necessitate consideration of the vibration modes of industrial robots. The modal characteristics of complex systems such as industrial robots are highly nonlinear, so the mechanical experiments and finite element methods (FEM) used to evaluate them are usually expensive. Surrogate models combined with simulation-based design are widely used in engineering, yet few investigations apply surrogate models to the modal analysis of industrial robots. We propose a practical scheme, i.e., blind-Kriging (KRG-B) based natural-frequency prediction of the industrial robot, utilizing the Latin hypercube sampling (LHS) technique. A reliable dataset with 120 samples is generated for the surrogate models based on the FEM. Fourteen surrogate models with different optimization algorithms are then evaluated to identify the optimal model for the natural frequency. In addition, the accuracy and robustness of the optimal surrogate model are investigated under different training-sample sizes. The KRG-B model shows better robustness (good fitting accuracy for both higher- and lower-order modes) and higher computational efficiency (1.133 s, the shortest time among all models). The proposed scheme, which maps the robot's joint angles to its natural frequencies, offers a valuable basis for further study of dynamic characteristics in industrial robotics.

3 citations
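The surrogate-modeling recipe described above can be sketched in a few lines of Python: LHS samples, a Kriging (Gaussian-process) fit, and a held-out accuracy check. The smooth target function below is a hypothetical stand-in for the FEM modal response, and the input ranges and kernel are assumptions; only the 120-sample budget comes from the abstract.

import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

sampler = qmc.LatinHypercube(d=2, seed=1)
lo, hi = [0.0, 0.0], [np.pi, np.pi]                    # assumed joint-angle ranges
X = qmc.scale(sampler.random(n=120), lo, hi)           # 120 LHS training samples

def natural_freq(x):  # hypothetical smooth response standing in for the FEM output
    return 25.0 + 3.0 * np.cos(x[:, 0]) * np.sin(x[:, 1]) + 0.5 * x[:, 0]

y = natural_freq(X)
kernel = ConstantKernel() * RBF(length_scale=[1.0, 1.0])
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)  # Kriging fit

X_test = qmc.scale(sampler.random(n=40), lo, hi)       # held-out test points
rmse = np.sqrt(np.mean((gp.predict(X_test) - natural_freq(X_test)) ** 2))
print(f"held-out RMSE: {rmse:.4f}")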

References
Journal Article•DOI•
TL;DR: A new method for estimation in linear models called the lasso, which minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant, is proposed.
Abstract: We propose a new method for estimation in linear models. The 'lasso' minimizes the residual sum of squares subject to the sum of the absolute values of the coefficients being less than a constant. Because of the nature of this constraint it tends to produce some coefficients that are exactly 0 and hence gives interpretable models. Our simulation studies suggest that the lasso enjoys some of the favourable properties of both subset selection and ridge regression. It produces interpretable models like subset selection and exhibits the stability of ridge regression. There is also an interesting relationship with recent work in adaptive function estimation by Donoho and Johnstone. The lasso idea is quite general and can be applied in a variety of statistical models: extensions to generalized regression models and tree-based models are briefly described.

40,785 citations
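A short Python illustration of the property highlighted above (the data and penalty levels are arbitrary): the ℓ1 constraint sets some coefficients exactly to zero, giving an interpretable model, whereas ridge regression only shrinks them.

import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
beta = np.zeros(10)
beta[:3] = [3.0, -2.0, 1.5]                 # only three truly active features
y = X @ beta + 0.1 * rng.normal(size=100)

lasso = Lasso(alpha=0.1).fit(X, y)
ridge = Ridge(alpha=1.0).fit(X, y)
print("lasso zero coefficients:", int(np.sum(lasso.coef_ == 0)))  # typically 7
print("ridge zero coefficients:", int(np.sum(ridge.coef_ == 0)))  # 0: shrunk, never exact zeros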

Book•
D.L. Donoho •
01 Jan 2004
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients; a good approximation to those N coefficients is extracted from the n measurements by solving a linear program (Basis Pursuit in signal processing).
Abstract: Suppose x is an unknown vector in ℝ^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^(1/4) log^(5/2)(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an ℓp ball for 0 < p ≤ 1.

18,609 citations
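A small numerical illustration of the "compressible" premise above (the power-law decay rate is an arbitrary choice): when the coefficients lie in an ℓp ball with small p, keeping only the N largest terms already yields a rapidly decaying approximation error, which is what makes n = O(N log(m)) measurements plausible.

import numpy as np

m = 4096
k = np.arange(1, m + 1)
coeffs = k ** (-1.5)                       # power-law decay: an ℓp-ball signal, p just above 2/3
order = np.argsort(-np.abs(coeffs))
for N in (16, 64, 256):
    approx = np.zeros(m)
    approx[order[:N]] = coeffs[order[:N]]  # best N-term approximation: keep N largest terms
    err = np.linalg.norm(coeffs - approx)
    print(f"N={N:4d}  relative l2 error = {err / np.linalg.norm(coeffs):.4f}")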

Journal Article•DOI•
TL;DR: In comparative timings, the new algorithms are considerably faster than competing methods, can handle large problems, and deal efficiently with sparse features.
Abstract: We develop fast algorithms for estimation of generalized linear models with convex penalties. The models include linear regression, two-class logistic regression, and multinomial regression problems, while the penalties include ℓ1 (the lasso), ℓ2 (ridge regression), and mixtures of the two (the elastic net). The algorithms use cyclical coordinate descent, computed along a regularization path. The methods can handle large problems and can also deal efficiently with sparse features. In comparative timings we find that the new algorithms are considerably faster than competing methods.

13,656 citations
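The core update behind these algorithms is simple enough to sketch directly. Below is a bare-bones cyclical coordinate descent for the lasso in Python, assuming standardized columns; glmnet adds warm starts along a regularization path, active-set screening, and sparse-matrix handling, none of which are shown here.

import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iters=200):
    n, p = X.shape
    beta = np.zeros(p)
    r = y.copy()                       # running residual y - X @ beta
    for _ in range(n_iters):
        for j in range(p):
            r += X[:, j] * beta[j]     # add back coordinate j to form the partial residual
            beta[j] = soft_threshold(X[:, j] @ r / n, lam)  # closed-form coordinate update
            r -= X[:, j] * beta[j]
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
X /= np.linalg.norm(X, axis=0) / np.sqrt(200)   # standardize so each column has norm sqrt(n)
y = X[:, 0] * 2.0 - X[:, 1] + 0.05 * rng.normal(size=200)
print(np.round(lasso_cd(X, y, lam=0.05), 3))    # only the first two coefficients survive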

Journal Article•DOI•
TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest ℓ1 norm of coefficients among all such decompositions.
Abstract: The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries: stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest ℓ1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.

9,950 citations
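Basis pursuit's reduction to a linear program can be sketched compactly: writing c = u - v with u, v >= 0 turns min ||c||_1 subject to A c = b into a standard LP. The Python sketch below uses scipy's linprog on a small random instance; the problem sizes and sparsity level are illustrative.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m = 30, 80                                  # 30 measurements of a length-80 coefficient vector
A = rng.normal(size=(n, m)) / np.sqrt(n)
c_true = np.zeros(m)
c_true[[3, 17, 42]] = [1.0, -2.0, 0.5]         # sparse ground truth
b = A @ c_true

# min sum(u + v)  s.t.  A(u - v) = b,  u, v >= 0   (so u - v = c and u + v = |c|)
res = linprog(c=np.ones(2 * m), A_eq=np.hstack([A, -A]), b_eq=b, bounds=(0, None))
c_hat = res.x[:m] - res.x[m:]
print("max recovery error:", np.max(np.abs(c_hat - c_true)))  # essentially zero here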

Journal Article•DOI•
TL;DR: It is demonstrated theoretically and empirically that a greedy algorithm called orthogonal matching pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.
Abstract: This paper demonstrates theoretically and empirically that a greedy algorithm called orthogonal matching pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m^2) measurements. The new results for OMP are comparable with recent results for another approach called basis pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.

8,604 citations
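A bare-bones Python version of the greedy loop described above (the dimensions are chosen arbitrarily to comfortably satisfy the O(m ln d) measurement count): at each step the column most correlated with the residual joins the support, and the coefficients on the support are re-fitted by least squares.

import numpy as np

def omp(A, b, m_nonzero):
    support, r = [], b.copy()
    for _ in range(m_nonzero):
        j = int(np.argmax(np.abs(A.T @ r)))   # column most correlated with the residual
        if j not in support:
            support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)  # orthogonal re-fit
        r = b - A[:, support] @ x_s           # residual is orthogonal to chosen columns
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(0)
d, m, n_meas = 200, 5, 60                     # n_meas well above m * ln(d) ≈ 27
A = rng.normal(size=(n_meas, d)) / np.sqrt(n_meas)
x_true = np.zeros(d)
x_true[rng.choice(d, m, replace=False)] = rng.normal(size=m)
print("max recovery error:", np.max(np.abs(omp(A, A @ x_true, m) - x_true)))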