Q2. What is the purpose of weighting the ℓ1 cost function with W?
The purpose of weighting the ℓ1 cost function with W is to prevent the optimization from biasing toward the non-zero entries in c whose corresponding columns in Ψ have large norms.
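The weighting can be illustrated with a small sketch. This is not the paper's solver or data: the matrix Ψ, the signal, and the use of `scipy.optimize.linprog` as a basis-pursuit solver are all assumptions for illustration. The weighted problem min ‖Wc‖₁ subject to Ψc = u, with W the diagonal of column norms, is recast as a linear program via the split c = c⁺ − c⁻:

```python
# Illustrative sketch (not the paper's code): weighted basis pursuit
#   min ||W c||_1  s.t.  Psi c = u,
# recast as a linear program in the nonnegative split c = cp - cm.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, P = 20, 40                       # N measurements, P basis functions
Psi = rng.standard_normal((N, P))
Psi[:, 0] *= 10.0                   # one column with a much larger norm

c_true = np.zeros(P)
c_true[[3, 17]] = [1.5, -2.0]       # sparse exact coefficients
u = Psi @ c_true

w = np.linalg.norm(Psi, axis=0)     # weights W = diag(column norms)

# LP in the stacked variable z = [cp; cm], z >= 0:
#   minimize  w.(cp + cm)   s.t.  Psi cp - Psi cm = u
cost = np.concatenate([w, w])
A_eq = np.hstack([Psi, -Psi])
res = linprog(cost, A_eq=A_eq, b_eq=u, bounds=(0, None), method="highs")
c_hat = res.x[:P] - res.x[P:]
```

Without the weights, the large-norm first column would make its coefficient artificially cheap in the ℓ1 objective; the weighting restores a fair comparison between columns.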
Q3. Is the recovery stable under the truncation error ‖Ψc − u‖_2?
The recovery is stable under the truncation error ‖Ψc − u‖_2 and is within a distance of the exact solution that is proportional to the error tolerance δ.
Q4. What is the significance of sparsity in the analysis of high-dimensional problems?
Sparsity is salient in the analysis of high-dimensional problems where the number of energetic basis functions (those with large coefficients) is small relative to the cardinality of the full basis.
Q5. What is the spectral stochastic discretization of u(x,y)?
In the context of spectral stochastic methods [36, 27, 61, 2], the solution u(x,y) of (2) is represented by an infinite series of the form

u(x,y) = ∑_{α ∈ N_0^d} c_α(x) ψ_α(y),   (8)

where N_0^d := {(α_1, · · · , α_d) : α_j ∈ ℕ ∪ {0}} is the set of multi-indices of size d defined on non-negative integers.
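In practice the series (8) is truncated, e.g. to total degree p. A minimal sketch of such a truncation, assuming a tensor-product Legendre basis and placeholder coefficients (the coefficients here are not computed from any PDE):

```python
# Illustrative sketch: total-degree truncation of the PC expansion (8),
#   u(y) ~ sum_{|alpha| <= p} c_alpha * psi_alpha(y),
# with psi_alpha a product of 1-D Legendre polynomials.
import itertools
import numpy as np
from numpy.polynomial.legendre import Legendre

def multi_indices(d, p):
    """All alpha in N_0^d with total degree |alpha| <= p."""
    return [a for a in itertools.product(range(p + 1), repeat=d)
            if sum(a) <= p]

def psi(alpha, y):
    """Tensor-product Legendre basis function psi_alpha(y)."""
    val = 1.0
    for aj, yj in zip(alpha, y):
        val *= Legendre.basis(aj)(yj)
    return val

d, p = 2, 3
alphas = multi_indices(d, p)                  # P = C(d+p, p) = 10 terms
c = {a: 1.0 / (1 + sum(a)) for a in alphas}   # placeholder coefficients

y = np.array([0.3, -0.5])
u_trunc = sum(c[a] * psi(a, y) for a in alphas)
```

The cardinality P = (d+p)!/(d! p!) of the truncated basis grows rapidly with the stochastic dimension d, which is precisely what motivates exploiting sparsity in c.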
Q6. What is the reason for the truncation error on the Nv validation samples?
This is simply motivated by the fact that the truncation error on the validation samples is large for values of δ_r considerably larger or smaller than ‖Ψc_0 − u‖_2 evaluated using the reconstruction samples.
Q7. What is the mutual coherence of the random measurement matrix?
The authors first observe that, by the orthogonality of the Legendre PC basis, the mutual coherence µ(Ψ) converges to zero almost surely for asymptotically large random sample sizes N .
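This behavior can be checked numerically. A small sketch, assuming a 1-D Legendre basis sampled at uniform random points (an illustrative setup, not the paper's experiment), computes µ(Ψ) = max_{j≠k} |⟨ψ_j, ψ_k⟩| / (‖ψ_j‖‖ψ_k‖) for two sample sizes:

```python
# Illustrative sketch: mutual coherence of a sampled Legendre matrix
# shrinks as the number of random samples N grows.
import numpy as np
from numpy.polynomial.legendre import legvander

def mutual_coherence(Psi):
    """Largest absolute normalized inner product between distinct columns."""
    G = Psi / np.linalg.norm(Psi, axis=0)
    C = np.abs(G.T @ G)
    np.fill_diagonal(C, 0.0)
    return C.max()

rng = np.random.default_rng(1)
P = 6                                    # Legendre degrees 0..5
mus = []
for N in (50, 5000):
    y = rng.uniform(-1.0, 1.0, size=N)   # samples of the uniform input
    Psi = legvander(y, P - 1)            # N x P measurement matrix
    mus.append(mutual_coherence(Psi))
# mus[1] << mus[0]: consistent with mu(Psi) -> 0 a.s. as N -> infinity
```

As N grows, the empirical inner products between distinct columns approach the exact orthogonality relations of the Legendre basis, driving the coherence toward zero.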
Q8. What techniques are used to expand sparse solutions to stochastic PDEs?
In this work, using concentration of measure inequalities and compressive sampling techniques, the authors derive a method for PC expansion of sparse solutions to stochastic PDEs.
Q9. What is the idea behind recovering a sparse signal from incomplete random observations?
It hinges on the idea that a set of incomplete random observations of a sparse signal can be used to accurately, or even exactly, recover the signal (provided that the basis in which the signal is sparse is known).
Q10. What do the eigenpairs of the covariance function C_aa(x_1, x_2) satisfy?
The covariance function C_aa(x_1, x_2) is piecewise analytic on D × D [52, 8], implying that there exist real constants c_1 and c_2 such that, for i = 1, · · · , d,

0 ≤ λ_i ≤ c_1 e^{−c_2 i^κ}   (5)

and

∀α ∈ N^d : √λ_i ‖∂^α φ_i‖_{L∞(D)} ≤ c_1 e^{−c_2 i^κ},   (6)

where κ := 1/D.
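The rapid decay of the eigenvalues λ_i can be observed numerically. A sketch, assuming an exponential covariance kernel on D = [0,1] with correlation length 0.2 (illustrative choices, not necessarily the paper's C_aa), approximates the eigenpairs via a midpoint-rule discretization:

```python
# Illustrative sketch: eigenvalues of a discretized covariance operator
# on [0,1] decay rapidly, mirroring the bound 0 <= lambda_i <= c1 e^{-c2 i^kappa}.
import numpy as np

n = 200
x = (np.arange(n) + 0.5) / n                 # midpoint grid on [0,1]
ell = 0.2                                    # correlation length (assumed)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / ell)

# Quadrature-weighted kernel matrix: its eigenpairs approximate the
# eigenvalues lambda_i and (discretized) eigenfunctions phi_i.
lam, phi = np.linalg.eigh(C / n)
lam = lam[::-1]                              # sort in decreasing order
phi = phi[:, ::-1]
```

The leading few eigenvalues dominate the spectrum, which is what justifies truncating the Karhunen-Loève representation of the input after d terms.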
Q11. What is the way to compute the sparsest PC coefficients?
In this case, under certain conditions, the sparse PC coefficients c may be computed accurately and robustly using only N ≪ P random samples of u(ω) via compressive sampling.
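A toy recovery with N ≪ P can be sketched as follows. Orthogonal matching pursuit is used here as a simple stand-in sparse solver (the problem data and the choice of solver are assumptions for illustration, not the paper's method):

```python
# Illustrative sketch: recovering s-sparse coefficients from N << P
# random measurements via orthogonal matching pursuit (OMP).
import numpy as np

def omp(Psi, u, n_iters):
    """Greedy OMP: repeatedly pick the column most correlated with the residual."""
    support, r = [], u.copy()
    for _ in range(n_iters):
        j = int(np.argmax(np.abs(Psi.T @ r)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(Psi[:, support], u, rcond=None)
        r = u - Psi[:, support] @ coef
    c = np.zeros(Psi.shape[1])
    c[support] = coef
    return c

rng = np.random.default_rng(2)
N, P, s = 100, 400, 3                        # N << P, sparsity s
Psi = rng.standard_normal((N, P)) / np.sqrt(N)
c_true = np.zeros(P)
c_true[[5, 50, 120]] = [2.0, -1.5, 1.0]      # s-sparse exact coefficients
u = Psi @ c_true
c_hat = omp(Psi, u, s)
```

With only N = 100 samples against P = 400 unknowns, the sparse coefficient vector is still identified, which is the essential economy compressive sampling offers over solving a square or overdetermined system.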
Q12. What is the main difference between stochastic and collocation techniques?
As their construction is primarily based on the input parameter space, the computational cost of both stochastic Galerkin and collocation techniques increases rapidly for a large number of independent input uncertainties.
Q13. What is the main reason why the Monte Carlo methods are inefficient?
It is well understood that these methods are generally inefficient for large-scale systems due to their slow rate of convergence.
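The slow O(N^{-1/2}) rate can be demonstrated on a toy integrand with a known answer, E[Y²] = 1/3 for Y ~ U(−1,1). This is purely illustrative, not a computation from the paper:

```python
# Illustrative sketch: Monte Carlo error decays like C / sqrt(N),
# so 100x more samples buys only ~10x accuracy.
import numpy as np

rng = np.random.default_rng(3)
exact = 1.0 / 3.0                  # E[Y^2] for Y ~ U(-1, 1)

def mean_abs_error(N, reps=200):
    """Average MC error of the sample mean of Y^2 over reps independent runs."""
    errs = [abs(np.mean(rng.uniform(-1.0, 1.0, N) ** 2) - exact)
            for _ in range(reps)]
    return float(np.mean(errs))

e_small = mean_abs_error(100)
e_large = mean_abs_error(10000)
# e_small / e_large is close to 10 = sqrt(10000 / 100)
```

Halving the error thus requires quadrupling the number of solution evaluations, which is prohibitive when each evaluation is an expensive PDE solve.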
Q14. What is the significance of the nested sampling property of their scheme?
The nested sampling property of their scheme is of paramount importance in large scale calculations where the computational cost of each solution evaluation is enormous.