Open Access · Journal Article · DOI

A non-adapted sparse approximation of PDEs with stochastic inputs

TLDR
The method converges in probability as a consequence of sparsity and a concentration of measure phenomenon on the empirical correlation between samples, and it is shown that the method is well suited for truly high-dimensional problems.
About
This article is published in the Journal of Computational Physics. The article was published on 2011-04-01 and is currently open access. It has received 479 citations to date. The article focuses on the topics: Sparse approximation & Sampling (statistics).


Citations
Book

Active Subspaces: Emerging Ideas for Dimension Reduction in Parameter Studies

TL;DR: Active subspaces are an emerging set of dimension-reduction tools that identify important directions in the parameter space; they can be used to enable parameter studies when the model is expensive to evaluate and has many inputs.
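The active-subspace idea summarized above can be sketched in a few lines of numpy: estimate the gradient covariance C = E[∇f ∇fᵀ] by Monte Carlo and take the dominant eigenvectors as the important directions. The model f below is a made-up toy, not from the book; it varies strongly along one direction w and only weakly elsewhere.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: f(x) = sin(w·x) + 0.01*||x||^2 varies mostly along w.
d = 10
w = np.zeros(d)
w[0], w[1] = 1.0, 2.0
w /= np.linalg.norm(w)

def grad_f(x):
    # Gradient of the toy model: cos(w·x)*w + 0.02*x
    return np.cos(w @ x) * w + 0.02 * x

# Monte Carlo estimate of the gradient covariance C = E[grad f grad f^T]
X = rng.uniform(-1.0, 1.0, size=(2000, d))
G = np.array([grad_f(x) for x in X])
C = G.T @ G / len(X)

# Eigenvector of the largest eigenvalue spans the 1-D active subspace
eigvals, eigvecs = np.linalg.eigh(C)   # ascending eigenvalues
active_dir = eigvecs[:, -1]
alignment = abs(active_dir @ w)        # close to 1 for this toy model
```

The large gap between the top eigenvalue and the rest is what signals that a one-dimensional parameter study along `active_dir` suffices for this toy model.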
Journal ArticleDOI

Sparse Legendre expansions via ℓ1-minimization

TL;DR: It is shown that a Legendre s-sparse polynomial of maximal degree N can be recovered from on the order of s log^4(N) random samples that are chosen independently according to the Chebyshev probability measure.
Journal ArticleDOI

Stochastic finite element methods for partial differential equations with random input data

TL;DR: Several approaches to quantifying probabilistic uncertainty in the outputs of physical, biological, and social systems governed by partial differential equations with random inputs are surveyed; in practice these approaches require the discretization of the governing equations, and they include intrusive approaches such as stochastic Galerkin methods as well as non-intrusive approaches.
Journal ArticleDOI

Polynomial-Chaos-based Kriging

TL;DR: PC-Kriging is derived as a new non-intrusive meta-modeling approach combining PCE and Kriging: the PCE part approximates the global behavior of the computational model, whereas Kriging manages the local variability of the model output.
Journal ArticleDOI

Compressive sampling of polynomial chaos expansions

TL;DR: The coherence-optimal sampling scheme is proposed: a Markov chain Monte Carlo sampling scheme that directly uses the basis functions under consideration to achieve statistical optimality among all sampling schemes with identical support.
References
More filters
Journal ArticleDOI

Regression Shrinkage and Selection via the Lasso

TL;DR: A new method for estimation in linear models called the lasso, which minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant, is proposed.
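In the special case of an orthonormal design, the lasso solution described above has a closed form: soft-thresholding of the least-squares coefficients. A minimal numpy illustration (the numbers are made up):

```python
import numpy as np

def soft_threshold(z, t):
    """Closed-form lasso solution for an orthonormal design:
    argmin_c 0.5*(z - c)^2 + t*|c|  =  sign(z) * max(|z| - t, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

z = np.array([3.0, -0.5, 1.2, 0.1])   # least-squares coefficients
c = soft_threshold(z, 1.0)            # ≈ [2.0, 0.0, 0.2, 0.0]
```

Coefficients smaller in magnitude than the threshold are set exactly to zero, which is how the ℓ1 constraint produces sparse models.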
Book

Compressed sensing

TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients; a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program, Basis Pursuit in signal processing.
Journal ArticleDOI

Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information

TL;DR: In this paper, the authors considered the model problem of reconstructing an object from incomplete frequency samples and showed that, with probability at least 1 − O(N^(−M)), f can be reconstructed exactly as the solution to the ℓ1 minimization problem.
Journal ArticleDOI

A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems

TL;DR: A new fast iterative shrinkage-thresholding algorithm (FISTA) which preserves the computational simplicity of ISTA but with a global rate of convergence which is proven to be significantly better, both theoretically and practically.
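FISTA, as summarized above, is ISTA plus a momentum extrapolation step. A compact numpy sketch for the problem min_x 0.5‖Ax − b‖² + λ‖x‖₁ (the problem sizes and λ below are illustrative, not from the paper):

```python
import numpy as np

def fista(A, b, lam, n_iter=500):
    """FISTA for min_x 0.5*||A x - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        g = A.T @ (A @ y - b)              # gradient of the smooth part at y
        z = y - g / L
        # Proximal (soft-thresholding) step — this alone is one ISTA iteration
        x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + (t - 1) / t_new * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.5, -2.0, 1.0]
b = A @ x_true                             # noiseless measurements
x_hat = fista(A, b, lam=1e-3)
```

The momentum step is the only difference from ISTA, yet it improves the worst-case convergence rate from O(1/k) to O(1/k²).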
Journal ArticleDOI

Atomic Decomposition by Basis Pursuit

TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions.
Frequently Asked Questions (14)
Q1. What have the authors contributed in "A non-adapted sparse approximation of PDEs with stochastic inputs"?

The authors propose a method for the approximation of solutions of PDEs with stochastic coefficients based on the direct, i.e., non-adapted, sampling of solutions. The authors show that the method is well suited for truly high-dimensional problems (with slow decay in the spectrum).

The purpose of weighting the ℓ1 cost function with W is to prevent the optimization from biasing toward the non-zero entries in c whose corresponding columns in Ψ have large norms. 
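The weighting described above is equivalent to normalizing the columns of Ψ to unit norm before solving an unweighted ℓ1 problem. A minimal numpy sketch (the matrix here is synthetic, with deliberately uneven column norms):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic measurement matrix whose columns have very different norms,
# as happens when PC basis functions are evaluated at random samples.
Psi = rng.standard_normal((30, 50)) * rng.uniform(0.1, 10.0, size=50)

# W holds the column norms; minimizing ||W c||_1 subject to Psi c ≈ u is
# equivalent to normalizing the columns and solving an unweighted problem.
w = np.linalg.norm(Psi, axis=0)
Psi_normalized = Psi / w            # every column now has unit norm

# After solving the normalized system for c_tilde, map back via c = c_tilde / w
col_norms = np.linalg.norm(Psi_normalized, axis=0)
```

Without this normalization, the ℓ1 penalty is cheaper for coefficients paired with large-norm columns, biasing the optimizer toward them.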

The recovery is stable under the truncation error ‖Ψc− u‖2 and is within a distance of the exact solution that is proportional to the error tolerance δ. 

Sparsity is salient in the analysis of high-dimensional problems where the number of energetic basis functions (those with large coefficients) is small relative to the cardinality of the full basis.

In the context of the spectral stochastic methods [36, 27, 61, 2], the solution u(x, y) of (2) is represented by an infinite series of the form

u(x, y) = ∑_{α ∈ N0^d} cα(x) ψα(y),   (8)

where N0^d := {(α1, · · · , αd) : αj ∈ N ∪ {0}} is the set of multi-indices of size d defined on non-negative integers.

This is simply motivated by the fact that the truncation error on the validation samples is large for values of δr considerably larger and smaller than ‖Ψc0 − u‖2 evaluated using the reconstruction samples. 
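The selection of the tolerance δr described above can be sketched with a simple cross-validation loop. The sketch below uses orthogonal matching pursuit as the sparse solver (a stand-in for illustration, not necessarily the solver used in the paper), splitting the data into reconstruction and validation samples:

```python
import numpy as np

def omp(Psi, u, delta):
    """Orthogonal matching pursuit: greedily add columns of Psi
    until the residual satisfies ||Psi c - u||_2 <= delta."""
    n, p = Psi.shape
    support, r = [], u.copy()
    c, coef = np.zeros(p), np.zeros(0)
    for _ in range(n):
        if np.linalg.norm(r) <= delta:
            break
        j = int(np.argmax(np.abs(Psi.T @ r)))   # most correlated column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Psi[:, support], u, rcond=None)
        r = u - Psi[:, support] @ coef
    c[support] = coef
    return c

rng = np.random.default_rng(3)
P, N_rec, N_val, noise = 60, 40, 20, 0.01
c_true = np.zeros(P)
c_true[[2, 11, 30]] = [1.0, -0.7, 0.4]
Psi_rec = rng.standard_normal((N_rec, P))
Psi_val = rng.standard_normal((N_val, P))
u_rec = Psi_rec @ c_true + noise * rng.standard_normal(N_rec)
u_val = Psi_val @ c_true + noise * rng.standard_normal(N_val)

# Pick the tolerance that minimizes the truncation error on validation samples.
deltas = [5.0, 1.0, 0.2, 0.05, 0.01]
val_errs = [np.linalg.norm(Psi_val @ omp(Psi_rec, u_rec, d) - u_val)
            for d in deltas]
best_delta = deltas[int(np.argmin(val_errs))]
```

Tolerances much larger than the true truncation error underfit (too few basis functions), while much smaller ones overfit the reconstruction samples; both inflate the validation error.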

The authors first observe that, by the orthogonality of the Legendre PC basis, the mutual coherence µ(Ψ) converges to zero almost surely for asymptotically large random sample sizes N . 
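The decay of the mutual coherence with sample size can be observed numerically. The sketch below (a 1-D illustration with sample sizes chosen arbitrarily) builds a Legendre design matrix from uniform random samples and computes µ(Ψ) as the largest off-diagonal entry of the normalized Gram matrix:

```python
import numpy as np
from numpy.polynomial import legendre

def mutual_coherence(N, max_degree, seed=0):
    """Mutual coherence of a 1-D Legendre design matrix built
    from N uniform random samples on [-1, 1]."""
    rng = np.random.default_rng(seed)
    y = rng.uniform(-1.0, 1.0, size=N)
    # Column j holds the degree-j Legendre polynomial evaluated at the samples.
    Psi = np.column_stack([legendre.legval(y, np.eye(max_degree + 1)[j])
                           for j in range(max_degree + 1)])
    Psi /= np.linalg.norm(Psi, axis=0)          # unit-norm columns
    G = np.abs(Psi.T @ Psi)
    np.fill_diagonal(G, 0.0)
    return G.max()                              # largest off-diagonal correlation

mu_small = mutual_coherence(50, 5)
mu_large = mutual_coherence(5000, 5)            # coherence shrinks as N grows
```

Because the Legendre polynomials are orthogonal with respect to the uniform measure, the empirical correlations are Monte Carlo errors that vanish as N → ∞, which is the almost-sure convergence of µ(Ψ) referred to above.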

In this work, using concentration of measure inequalities and compressive sampling techniques, the authors derive a method for PC expansion of sparse solutions to stochastic PDEs. 

It hinges on the idea that a set of incomplete random observations of a sparse signal can be used to accurately, or even exactly, recover the signal (provided that the basis in which the signal is sparse is known).
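This recovery idea can be demonstrated with a small basis-pursuit experiment: min ‖c‖₁ subject to Ψc = u, posed as a linear program via the split c = c⁺ − c⁻ (the sizes and entries below are arbitrary illustrations):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
N, P = 30, 60                      # N incomplete observations of a length-P signal
Psi = rng.standard_normal((N, P))
c_true = np.zeros(P)
c_true[[7, 22, 41]] = [1.0, -2.0, 0.5]   # sparse signal: 3 nonzeros out of 60
u = Psi @ c_true                   # noiseless random observations

# Basis pursuit: min ||c||_1 s.t. Psi c = u, as an LP with c = c_plus - c_minus.
res = linprog(c=np.ones(2 * P),
              A_eq=np.hstack([Psi, -Psi]), b_eq=u,
              bounds=[(0, None)] * (2 * P))
c_hat = res.x[:P] - res.x[P:]      # recovers c_true exactly (with high probability)
```

With only 30 of 60 "measurements" the linear system is underdetermined, yet the ℓ1 solution coincides with the sparse signal: this is the exact-recovery phenomenon that compressive sampling exploits.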

The covariance function Caa(x1, x2) is piecewise analytic on D × D [52, 8], implying that there exist real constants c1 and c2 such that, for i = 1, · · · , d,

0 ≤ λi ≤ c1 e^(−c2 i^κ)   (5)

and

∀α ∈ N^d : √λi ‖∂^α φi‖_{L∞(D)} ≤ c1 e^(−c2 i^κ),   (6)

where κ := 1/D.

In this case, under certain conditions, the sparse PC coefficients c may be computed accurately and robustly using only N ≪ P random samples of u(ω) via compressive sampling. 

As their construction is primarily based on the input parameter space, the computational cost of both stochastic Galerkin and collocation techniques increases rapidly for a large number of independent input uncertainties.

It is well understood that these methods are generally inefficient for large-scale systems due to their slow rate of convergence.

The nested sampling property of their scheme is of paramount importance in large scale calculations where the computational cost of each solution evaluation is enormous.