A new algorithm of proper generalized decomposition for parametric symmetric elliptic problems
Summary
1 Introduction
- The Karhunen-Loève expansion (KLE) is a widely used tool that provides a reliable procedure for a low-dimensional representation of spatiotemporal signals (see [13, 23]); a minimal computational sketch of this baseline is given after this list.
- Also, in [11] the convergence of a recursive approximation of the solution of a linear elliptic PDE is proved, based on the existence of optimal subspaces of rank 1 that minimize the elliptic norm of the current residual.
- This is the case, for instance, of design analysis in computational mechanics.
- In Section 2 the authors state the general problem of finding optimal subspaces of a given dimension.
- Section 6 explains why the method introduced is a genuine extension of both the POD and PGD algorithms, and provides a theoretical analysis for the latter.
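As a point of reference for the POD/KLE baseline mentioned above, the following is a minimal sketch, assuming a snapshot-based discrete setting, of how a POD/KLE basis is typically computed via the singular value decomposition. The function name pod_basis, the tolerance tol, and the synthetic data are illustrative and not taken from the paper.

```python
# Minimal, illustrative POD/KLE sketch (not the paper's algorithm): given snapshots
# u(gamma_1), ..., u(gamma_m) of a parameter-dependent field stored as columns,
# a truncated SVD of the snapshot matrix yields a low-dimensional orthonormal basis.
import numpy as np

def pod_basis(snapshots: np.ndarray, tol: float = 1e-8) -> np.ndarray:
    """snapshots: (n_dof, n_samples) matrix whose columns are solution samples.
    Returns the POD modes capturing a fraction 1 - tol of the snapshot energy."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)          # cumulative captured energy
    k = int(np.searchsorted(energy, 1.0 - tol) + 1)  # number of retained modes
    return U[:, :k]                                  # orthonormal POD/KLE modes

# Usage with synthetic rank-5 data: 1000 degrees of freedom, 50 parameter samples.
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((1000, 5)) @ rng.standard_normal((5, 50))
modes = pod_basis(snapshots)
print(modes.shape)  # expected (1000, 5): the rank-5 structure is recovered
```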
2 Statement of the problem
- Let H be a separable Hilbert space endowed with the scalar product (·, ·).
- The authors denote by Bs(H) the space of bilinear, symmetric, continuous forms on H. A measure space (Γ, B, µ) is assumed given, with standard notation, where µ is σ-finite.
- In the present minimization problem, the authors use the norm of L2(Γ, H; dµ) instead of the norm of L∞(Γ, H; dµ) as used there.
- The authors consider an orthonormal basis {zk} of R(v)⊥.
- As announced above, the next proposition provides an equivalent formulation of (8) that does not depend on knowledge of the solution u of (3), but only on the data f.
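For orientation, the optimal-subspace problem referred to as (8) can be expected to take roughly the following form; this is a hedged reconstruction from the summary, not the paper's verbatim statement, and the notation W and Π_W^{a(γ)} is introduced here only for illustration:

```latex
% Sketch of the optimal-subspace problem: among all subspaces W of H of dimension k,
% minimize the mean parametric error of the a(gamma)-orthogonal projection of u(gamma).
\min_{\substack{W \subset H \\ \dim W = k}}
  \int_{\Gamma} \big\| u(\gamma) - \Pi_{W}^{a(\gamma)} u(\gamma) \big\|_{a(\gamma)}^{2} \, d\mu(\gamma)
```

Here Π_W^{a(γ)} denotes the a(γ)-orthogonal projection onto W; the proposition mentioned above recasts such a problem in terms of the data f alone.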
3 One-dimensional approximations
- In Section 4 the authors show the existence of a solution of problem (8) for arbitrary k.
- The authors dedicate this section to this special case.
- The problem to solve can be reformulated as follows.
- The authors now prove the existence of a solution to problem (19); a sketch of the general form of this one-dimensional problem is given after this list.
- Since the proof can be carried out with wn replaced by any of its subsequences, the authors conclude that the whole sequence wn (extracted just after (22) under the assumption that it converges weakly to some w) actually converges strongly to w.
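For the one-dimensional case (k = 1) treated in this section, the problem referred to as (19) can be expected, up to notation, to seek a single direction w ∈ H that minimizes the mean parametric error of the projection onto span{w}. Again, this is a hedged reconstruction, not the paper's exact statement:

```latex
% Sketch of the one-dimensional (rank-one) subspace problem.
\min_{w \in H,\; w \neq 0}
  \int_{\Gamma} \big\| u(\gamma) - \Pi_{\operatorname{span}\{w\}}^{a(\gamma)} u(\gamma) \big\|_{a(\gamma)}^{2} \, d\mu(\gamma)
```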
5 An iterative algorithm by deflation
- In the previous section, for any given k ≥ 1, the authors have proved the existence of an optimal subspace for problem (8).
- The authors use this fact to build an iterative approximation of the solution of (3) by a deflation approach.
- The authors build recursive approximations on finite-dimensional optimal subspaces by minimizing the mean parametric error of the current residual, similar to the approach introduced in [11]; a minimal sketch of such a deflation loop is given after this list.
- Note that si (and therefore ui) is in general not uniquely defined.
- This proves that ei converges strongly to zero in L2(Γ, H; dµ).
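The deflation loop described above can be summarized by the following illustrative sketch. It is an assumption-laden simplification, not the paper's algorithm: it works on a discretized parametric problem whose solution samples are stored as snapshot columns, and it deflates the residual by its best rank-one approximation computed with an SVD in the Euclidean norm, whereas the paper minimizes the mean parametric elliptic norm associated with the forms a(γ).

```python
# Illustrative deflation sketch (a simplified stand-in for the paper's algorithm):
# at each step, approximate the current residual by a rank-one term w_i * s_i(gamma)
# and subtract it, so that the residual e_i decreases as terms accumulate.
import numpy as np

def deflation_approximation(U: np.ndarray, n_terms: int):
    """U: (n_dof, n_samples) snapshot matrix of the parametric solution.
    Returns the list of (mode w_i, parametric coefficients s_i) and the final residual."""
    residual = U.copy()
    terms = []
    for _ in range(n_terms):
        # best rank-one approximation of the current residual (Euclidean norm)
        W, sv, Vt = np.linalg.svd(residual, full_matrices=False)
        w_i = W[:, 0]                   # spatial mode
        s_i = sv[0] * Vt[0, :]          # parametric coefficients
        terms.append((w_i, s_i))
        residual -= np.outer(w_i, s_i)  # deflate the residual
    return terms, residual

# Usage: for synthetic rank-8 data, eight deflation steps drive the residual to zero.
rng = np.random.default_rng(1)
U = rng.standard_normal((200, 8)) @ rng.standard_normal((8, 40))
terms, res = deflation_approximation(U, n_terms=8)
print(np.linalg.norm(res))  # close to machine precision
```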
6 Relationship with POD and PGD methods
- The “intrinsic” PGD method developed in the previous sections is a genuine extension of both the POD and PGD methods.
- In contrast, when a depends on γ, problem (63) does not appear to correspond to an eigenvalue problem.
- However, problem (64) may admit several solutions, and some of these may not provide a solution of the optimization problem (45).
- The preceding analysis differs in some respects from earlier works on the convergence of PGD methods applied to the solution of PDEs and optimization problems.
- This is a generalization of the Eckart-Young theorem, whose classical finite-dimensional statement is recalled below.
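For reference, the classical result being generalized is the Eckart-Young theorem: the truncated singular value decomposition gives the best rank-k approximation of a matrix in the Frobenius norm,

```latex
% Eckart-Young theorem (classical matrix case), with singular values sigma_1 >= sigma_2 >= ...
\min_{\operatorname{rank}(B) \le k} \| A - B \|_{F}
  = \Big( \sum_{i > k} \sigma_i(A)^{2} \Big)^{1/2},
\qquad \text{attained by } B = \sum_{i \le k} \sigma_i(A)\, u_i v_i^{\top}
```

where (u_i, v_i) are the singular vectors of A.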
7 Conclusion
- In this paper the authors have introduced an iterative deflation algorithm to solve parametric symmetric elliptic equations.
- It is a Proper Generalized Decomposition algorithm as it builds a tensorized representation of the parameterized solutions, by means of optimal subspaces that minimize the residual in mean quadratic norm.
- Also, the authors have proved the strong convergence in the parametric elliptic norm of the deflation algorithm for quite general parametric elliptic operators.
- The authors will analyze whether the standard PGD provides the optimal subspaces, and compare the convergence rates with those of the POD expansion, to determine whether the use of optimal modes provides improved convergence rates.
Frequently Asked Questions
Q2. What are the future works in "A new algorithm of proper generalized decomposition for parametric symmetric elliptic problems" ?
In a future work the authors will consider the non-symmetric case.
Q3. What is the purpose of this paper?
The present paper is aimed at the direct determination of a reduced-dimension variety for the solution of parameterized symmetric elliptic PDEs.
Q4. What is the way to design a material?
The optimal design of heterogeneous materials with a linear behavior law fits into the framework considered, since the parameters model the structural configuration of the various materials (cf. [29, 33]).
Q5. What is the correct definition of R(v) for a function v ∈ L2(Γ, H; dµ)?
For a function v ∈ L2(Γ, H; dµ), the authors denote by R(v) the closure of the vector space spanned by v(γ) as γ ranges over Γ. More precisely, taking into account that v is only defined up to sets of zero measure, the correct definition of R(v) is

R(v) = ⋂_{µ(N)=0} Span{ v(γ) : γ ∈ Γ \ N }.  (14)

The following result proves that in (14) the intersection can be replaced by a single closed spanned space corresponding to a single set M ∈ B.
Q6. What is the way to solve a parabolic problem?
Galerkin-POD strategies are well suited to solve parabolic problems, where the POD basis is obtained from the previous solution of the underlying elliptic operator (see [19, 26]).
Q7. How can such a wn as in (60) be obtained?
The existence of such a wn can be obtained by reasoning as in the proof of Theorem 3.3, or simply by using the Weierstrass theorem, because the dimension of Hn is finite.
Q8. What is the convergence of the PGD for the optimization problem?
The work [9] proves the convergence of the PGD for the optimization problem: find u ∈ L2(Ω, H1(I)) such that u ∈ arg min_{v ∈ L2(Ω, H1(I))} E(v), where E is a strongly convex functional with Lipschitz gradient on bounded sets.
Q9. What is the convergence of the PGD algorithm for the Laplace problem?
In [8] the authors prove the convergence of the PGD algorithm applied to the Laplace problem in a tensor product domain, −∆u = f in Ωx × Ωy, u = 0 on ∂(Ωx × Ωy), where Ωx ⊂ R and Ωy ⊂ R are two bounded domains.
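For this tensor-product setting, the PGD builds a separated representation of the solution, adding one rank-one term at a time; schematically (the standard separated form, not a statement quoted from [8]):

```latex
% Separated (tensorized) representation built by the PGD in a tensor-product domain.
u(x, y) \;\approx\; u_n(x, y) \;=\; \sum_{i=1}^{n} X_i(x)\, Y_i(y),
\qquad (x, y) \in \Omega_x \times \Omega_y
```

where each pair (X_i, Y_i) is computed from the residual of the previous approximation.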
Q10. What is the connection between the PGD method and the standard POD method?
In particular, it is strongly related to the PGD method in the sense that the standard formulation of the PGD method actually provides the optimality conditions of the minimization problem satisfied by the optimal one-dimensional subspaces.
Q11. What is the recursive approximation of wn?
The authors build recursive approximations on finite-dimensional optimal subspaces by minimizing the mean parametric error of the current residual, similar to the one introduced in [11].