Analysis of continuous H−1 least-squares methods for the steady Navier-Stokes system
Summary
1. Introduction
- From a purely analytical perspective, the following is a well-known existence theorem.
- The minimization of this functional leads to a so-called continuous H−1-least-squares type method, following the terminology of [5] and its later use in [3].
- Least-squares methods for solving nonlinear boundary value problems have been the subject of intensive development in the last decades, as they present several advantages, notably from the computational and stability viewpoints.
- The main purpose of this work is to show that, under the assumptions of Theorem 1.1, minimizing sequences for this so-called error functional do actually converge strongly to the solution of (1.1) (a hedged sketch of the functional is given after this list).
- The authors first consider in Section 2 the case where minimizing sequences live in V × L20(Ω), with V defined below by (2.1) as a space of divergence-free fields.
- Then, in Section 3, the authors discuss the general case where the field y is not a priori assumed to be divergence-free.
- In both cases, the authors provide a sufficient condition for the convergence of the values of the error functional E in terms of the convergence of the values of its derivative.
- Section 5 describes the conjugate gradient algorithm associated with the error functional E, while Section 6 discusses numerically the celebrated example of the 2D channel with a backward-facing step.
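For readers unfamiliar with the continuous H−1 least-squares idea, the following display sketches the error functional referred to above in the divergence-free setting of Section 2. The precise definition and the numbering (2.3)-(2.4) are in the paper; the formulation below is a plausible reconstruction under that convention, not a quotation.

```latex
% Hedged sketch: assumed form of the H^{-1} least-squares functional over V.
% For y in V, a corrector v = v(y) in V is defined through the linear
% (Stokes-type) problem whose datum is the Navier-Stokes residual of y,
\[
  \int_\Omega \nabla v : \nabla w \,dx
  \;=\; -\,\nu \int_\Omega \nabla y : \nabla w \,dx
        \;-\; \int_\Omega (y\cdot\nabla) y \cdot w \,dx
        \;+\; \langle f, w \rangle_{H^{-1}(\Omega),\,H^1_0(\Omega)}
  \qquad \forall\, w \in V,
\]
% and the error functional measures the residual of y through this corrector:
\[
  E(y) \;=\; \tfrac12 \int_\Omega |\nabla v(y)|^2 \,dx ,
\]
% so that E(y) = 0 exactly when y is a weak solution of (1.1).
```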
2. Steady case under the div-free constraint
- As indicated in the Introduction, in order to solve the boundary value problem (1.1), the authors use a least-squares type approach.
- This computation is performed below in the proof of Proposition 2.4.
- In that direction, their main theorem is Theorem 2.1.
- This proposition establishes that, as the error E is driven to zero, the approximations get closer, in the strong norm, to the solution of the problem; it thus justifies why a promising strategy for finding good approximations of the solution of problem (1.1) is to look for global minimizers of (2.4).
- By the second part of Lemma 2.3, the last term above vanishes, while for the third term the first part of the same lemma leads to the corresponding integral identity.
- A practical way of driving a functional to its minimum is through some (clever) use of descent directions, i.e., the use of its derivative.
- The authors' computations here follow closely those in [15].
- Before proceeding to the second step of the full proof of Theorem 2.1, the authors stress that the error functional E(y) is coercive, in the sense that E(y) → ∞ if ‖y‖H10(Ω) → ∞; indeed, from (2.3), using y itself as a test function, the claimed bound follows elementarily (a hedged sketch of this argument is given after this list).
- The relevant issue is to check coercivity.
- According to the brief discussion before the statement of the proposition being proved, the vector fields y remain in a bounded set of V.
- The authors then need to check that the corresponding solutions given by Lemma 2.5 remain in a bounded set as well.
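The coercivity claim in the bullets above can be made concrete with the following hedged sketch; it assumes the corrector formulation sketched after the Introduction, and the exact chain of inequalities and constants are those of the paper, so this is a plausible reconstruction of the standard argument rather than a quotation.

```latex
% Hedged sketch of the coercivity E(y) -> infinity as ||y||_{H^1_0(\Omega)} -> infinity.
% Testing the corrector equation with w = y, and using that the trilinear term
% vanishes for divergence-free y with zero boundary trace,
% \int_\Omega (y\cdot\nabla) y \cdot y \,dx = 0, one gets
\[
  \nu \int_\Omega |\nabla y|^2 \,dx
  \;=\; \langle f, y\rangle \;-\; \int_\Omega \nabla v : \nabla y \,dx
  \;\le\; \bigl( \|f\|_{H^{-1}(\Omega)} + \|\nabla v\|_{L^2(\Omega)} \bigr)\,
          \|\nabla y\|_{L^2(\Omega)} ,
\]
% hence
\[
  \sqrt{2\,E(y)} \;=\; \|\nabla v\|_{L^2(\Omega)}
  \;\ge\; \nu\,\|\nabla y\|_{L^2(\Omega)} \;-\; \|f\|_{H^{-1}(\Omega)} ,
\]
% which forces E(y) -> infinity whenever ||y||_{H^1_0(\Omega)} -> infinity.
```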
3. Steady case without the div-free constraint
- In practice, implementing the div-free constraint, as done in [3], is possibly expensive, as it requires, at each iteration, several resolutions of the steady Stokes equation.
- The authors would like to explore to what extent a similar approach can be implemented that allows for fields without the div-free constraint.
- This new framework forces us to take into account the pressure field.
- Before getting into the proof of results similar to those of the preceding section, it is instructive to spend some time on the following discussion.
- Focus next on the operator taking F in (3.4) into the corresponding minimizer π0.
- The authors are now ready to show a main result, analogous to that of the previous section, in this more general context.
- The authors' strategy proceeds again in two steps.
- The theorem is, however, exactly the same.
- The first step of the proof involves an upper bound of the difference y − y0 in terms of the quantity E(y, π).
- Recall, however, that y0 is divergence-free.
- It is then elementary to obtain the statement of the proposition from this inequality.
- Concerning the second step, there are just minor changes in the proof.
- As before, the authors plan to use several appropriate choices for the direction (Y,Π).
- To this end, by definition, the corrector v solves the corresponding variational formulation, for some Π ∈ L2(Ω) and for every w ∈ H10(Ω), provided the fields y are taken from the same ball as in the statement of that lemma (a hedged reconstruction of this formulation is sketched after this list).
- This together with (3.13) finishes the proof.
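Since the bullets above only allude to the variational formulation solved by the corrector in the non-divergence-free setting, here is a hedged reconstruction of what that Stokes-type problem plausibly looks like. The multiplier Π and the test space H10(Ω) are taken from the bullets; the right-hand side and the divergence constraint on v are assumptions made here for illustration.

```latex
% Hedged reconstruction: corrector problem in the non-divergence-free setting.
% For a pair (y, \pi), a corrector v and a multiplier \Pi in L^2(\Omega) are
% assumed to solve the Stokes-type problem
\[
  \int_\Omega \nabla v : \nabla w \,dx \;-\; \int_\Omega \Pi \,\nabla\!\cdot w \,dx
  \;=\; -\,\nu \int_\Omega \nabla y : \nabla w \,dx
        \;-\; \int_\Omega (y\cdot\nabla) y \cdot w \,dx
        \;+\; \int_\Omega \pi \,\nabla\!\cdot w \,dx
        \;+\; \langle f, w\rangle
  \qquad \forall\, w \in H^1_0(\Omega),
\]
\[
  \nabla\cdot v = 0 \ \text{ in } \Omega ,
\]
% so that E(y, \pi) again measures the size of this corrector (possibly together
% with a measure of \nabla\cdot y; the precise combination is in the paper).
```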
4. Minimizing sequence
- In practice, however, one would typically use a gradient method to calculate iteratively such sequences.
- Given that the exact solution y0 of the problem corresponds to an absolute minimum of the smooth functional E, one can ensure, for a certain small positive constant c2, that ‖y⁰ − y0‖H10(Ω) ≤ c2 implies that the sequences computed through a gradient procedure starting from the initial guess y⁰ will converge to y0.
- It would be interesting to have more explicit information about the size of the constant c2, which could eventually help decide in practice how to select the initial guess.
- The authors' strategy is to show that the quantity E′(y) · (y0 − y) becomes non-positive if y is sufficiently close, in a precise quantitative way, to the exact solution y0 (a hedged restatement of this quantitative picture is given after this list).
- This holds for some constant C, provided ν−2‖f‖H−1(Ω) is small enough.
- It remains, hence, to quantify the continuity of E at the solution y0.
- It is a matter of keeping track of the constants in all the inequalities used above in the proof to obtain an expression for the constant C4 guaranteeing the claimed convergence.
- There are four quantities involved: viscosity ν, size of the source term ‖f‖ = ‖f‖H−1(Ω), Poincaré’s constant C for Ω, and the constant C(n) of the Sobolev compact embedding of H1(Ω) into L4(Ω).
- The non-divergence-free situation is a bit more involved, though a parallel proof proceeds along the same lines as in the case the authors have just explored.
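The quantitative thread of Section 4 can be gathered into a single display; the exact inequality and the explicit constants are in the paper, so the formulation below is only a hedged synthesis of the bullets above (smallness condition from the bullet mentioning ν−2‖f‖, dependence of the constant from the bullet listing the four quantities).

```latex
% Hedged synthesis of the quantitative picture of Section 4:
\[
  E'(y)\cdot(y_0 - y) \;\le\; 0
  \quad\text{whenever}\quad
  \|y - y_0\|_{H^1_0(\Omega)} \;\le\; C_4
     = C_4\bigl(\nu,\ \|f\|_{H^{-1}(\Omega)},\ C,\ C(n)\bigr),
\]
% provided \nu^{-2}\|f\|_{H^{-1}(\Omega)} is small enough; gradient-type
% sequences started within this ball are then expected to converge to y_0.
```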
5. Conjugate gradient algorithm
- The introduction of the ε term makes it possible to fix the additive constant of the pressure π.
- The appropriate tool to produce minimizing sequences for the functional Eε is a gradient method.
- Among them, the Polak-Ribière version of the conjugate gradient algorithm (CG for short in the sequel; see [8]) has shown its efficiency in the similar context analyzed in [13, 14, 12] (a generic sketch of the iteration is given after this list).
- The CG algorithm associated with the minimization over V of the functional E defined in Section 2 is very similar: the Poisson problems are simply replaced by Stokes problems (the authors refer to [3]).
- In both cases, the matrix (to be inverted) associated with those four problems is the same and does not change from one iteration to the next.
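The bullets above describe the Polak-Ribière conjugate gradient iteration only in words. The sketch below shows the generic structure of such an iteration in an abstract inner-product setting; it is not the paper's implementation (which works in the space A, with residuals gk obtained through (5.1) via Poisson or Stokes solves), and the names grad, inner, line_search, x0, tol and max_iter are placeholders introduced here for illustration.

```python
import numpy as np

def polak_ribiere_cg(grad, inner, line_search, x0, tol=1e-3, max_iter=500):
    """Generic Polak-Ribiere nonlinear CG sketch (illustrative, not the paper's code).

    grad(x)           -> gradient/residual of the functional at x (a plain array here;
                         in the paper's setting it would come from solving (5.1))
    inner(a, b)       -> inner product of the working space (Euclidean here, H^1-type there)
    line_search(x, d) -> step length minimizing the functional along direction d
    """
    x = x0
    g = grad(x)                    # residual g^0
    d = -g                         # first descent direction
    for k in range(max_iter):
        if np.sqrt(inner(g, g)) <= tol:   # stopping test on the residual norm
            break
        alpha = line_search(x, d)  # (approximately) optimal step along d
        x = x + alpha * d
        g_new = grad(x)
        # Polak-Ribiere conjugacy parameter, with the usual PR+ safeguard
        beta = max(inner(g_new, g_new - g) / inner(g, g), 0.0)
        d = -g_new + beta * d      # new conjugate direction
        g = g_new
    return x, k

# Minimal usage example on a quadratic functional 0.5*x'Ax - b'x
if __name__ == "__main__":
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, 1.0])
    grad = lambda x: A @ x - b
    inner = lambda a, c: float(a @ c)
    line_search = lambda x, d: -float(d @ (A @ x - b)) / float(d @ (A @ d))
    x, k = polak_ribiere_cg(grad, inner, line_search, np.zeros(2))
    print(x, k)
```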
6. Numerical illustration: Two-dimensional channel with a backward-facing step
- The authors consider the celebrated test problem of a two-dimensional channel with a backward-facing step, described for instance in Section 45 of [6] (see also [10]).
- The authors use exactly the geometry and boundary conditions from this reference.
- The authors now comment on some computations performed with the FreeFem++ package developed at the University Paris 6 (see [9]).
- Similarly, Table 3 reports the norms obtained when minimizing the functional E over A (section 3).
- In both cases, the Polak-Ribière version of the conjugate gradient algorithm is used, initialized with the solution of the corresponding Stokes problem.
- The P1/P1 finite element approximation (which does not satisfy the Ladyzhenskaya-Babuška-Brezzi condition) provides similar results in terms of accuracy and convergence.
- Figure 4 depicts the evolution of the norm of the gradient with respect to the iterates.
- The values of the cost √ E(yk, πk) in both cases are however similar, which suggests that the functional E is flat near local minima.
- The last line of Table 2 also displays the results of the Barzilai-Borwein (BB) algorithm for the minimization of E over V, leading to results similar to those of the CG method in terms of speed of convergence (a sketch of the BB step is given after this list).
- The corresponding mesh is composed of 19714 triangles and 10208 vertices.
- This method leads to similar values but does not allow a reduction of the computational cost.
- The BB algorithm (from Stokes to ν = 1/700) converges after 510 iterates and leads to similar values: again it allows a reduction of the computational cost (2 × 510 resolutions of Stokes problems for BB, whereas CG requires 4 × 357 resolutions of Stokes problems).
- Finally, as expected from the observations for ν = 1/50, the minimization of the functional E of Section 3, defined over A, using the CG and BB algorithms requires more iterates (1020 and 439, respectively), leading to a larger computational cost but similar numerical values.
- For ν = 1/700, the minimization of E using P1/P1 finite element approximation remains stable, contrary to the previous cases.
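The Barzilai-Borwein variant mentioned in the bullets above replaces the line search and the conjugacy update by an explicit step length computed from the last two iterates, which is why each iterate is cheaper (two solves instead of four, as reported above). The sketch below shows the standard BB1 rule in the same abstract setting as the CG sketch of Section 5; all names are illustrative placeholders and this is not the authors' implementation.

```python
import numpy as np

def barzilai_borwein(grad, inner, x0, alpha0=1e-2, tol=1e-3, max_iter=1000):
    """Standard (BB1) Barzilai-Borwein gradient sketch: one gradient evaluation
    per iterate and no line search, hence the lower cost per iterate."""
    x_prev, g_prev = x0, grad(x0)
    x = x_prev - alpha0 * g_prev          # first step with a small fixed length
    for k in range(1, max_iter + 1):
        g = grad(x)
        if np.sqrt(inner(g, g)) <= tol:   # residual-based stopping test
            break
        s = x - x_prev                    # change in iterates
        z = g - g_prev                    # change in gradients
        alpha = inner(s, s) / inner(s, z) # BB1 step length
        x_prev, g_prev = x, g
        x = x - alpha * g                 # plain gradient step with BB length
    return x, k

# Usage on the same quadratic as in the CG sketch
if __name__ == "__main__":
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, 1.0])
    grad = lambda x: A @ x - b
    inner = lambda a, c: float(a @ c)
    x, k = barzilai_borwein(grad, inner, np.zeros(2))
    print(x, k)
```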
7. Conclusions and perspectives
- The authors have analyzed two H−1-least-squares methods and shown that they allow the construction of strongly convergent sequences toward the solution (assumed unique) of the steady Navier-Stokes system.
- This study justifies in particular the least-squares approach introduced without proof in [3], which assumes that the sequences are divergence-free.
- Numerical experiments on a 2D channel with Poiseuille flow confirm their analysis and highlight the robustness of such methods with respect to the initial guess and also with respect to the approximation.
- The second least-squares functional, coupled with a conjugate gradient method, however requires many more iterates to achieve a satisfactory approximation.
- A natural extension of this study is the unsteady case: using ideas from [15], the authors may, at least in the divergence-free situation of Section 2, obtain a result similar to Theorem 2.1 in the dynamic setting and then, adapting [13, 12], examine the corresponding controllability issue.
Frequently Asked Questions (13)
Q2. How are vector-valued functions and spaces denoted?
Bold letters and symbols denote vector-valued functions and spaces; for instance L2(Ω) is the Hilbert space of the functions v = (v1, . . . , vN ) with vi ∈ L2(Ω) for all i.
Q3. What is the strategy to show that the quantity E′(y) · (y0 − y) becomes non-positive?
Their strategy is to show that the quantity E′(y) · (y0 − y) becomes non-positive if y is sufficiently close, in a precise quantitative way, to the exact solution y0.
Q4. What is the main interest of the BB algorithm?
The main interest of the BB algorithm is from the computational viewpoint, as it requires only two resolutions of a Poisson problem, namely (5.4)-(5.5), per iterate.
Q5. What is the recent Barzilai-Borwein algorithm?
The more recent Barzilai-Borwein algorithm allows a significant reduction of the number of iterations together with the computational cost.
Q6. What is the appeal of least-squares methods for solving nonlinear boundary value problems?
Least-squares methods for solving nonlinear boundary value problems have been the subject of intensive development in the last decades, as they present several advantages, notably from the computational and stability viewpoints.
Q7. What is the way to take a functional to its minimum?
A practical way of taking a functional to its minimum is through some (clever) use of descent directions, i.e. the use of its derivative.
Q8. How many iterations are necessary to satisfy the stopping criterion ‖gk‖H1(Ω) ≤ 10−3?
The criterion ‖gk‖H1(Ω) ≤ 10−3 (where gk denotes the residual at iterate k) is achieved in 39 iterates and leads to results very close to those from the resolution of (6.1); see Table 2.
Q9. What is the strategy for the Stokes system?
Their strategy is to use a least-squares approach, much in the spirit of [2], [3], [7], but in a systematic way as in [15], having in mind some applications to control problems as described in [12, 13] for the Stokes system.
Q10. What do the authors conclude about the least-squares methods studied?
The authors have analyzed two H−1-least-squares methods and shown that they allow the construction of strongly convergent sequences toward the solution (assumed unique) of the steady Navier-Stokes system.
Q11. What is the case for the search for an element y solution of (1.1)?
In such a case, the search for an element y solution of (1.1) is reduced to the minimization of E, as indicated in the preceding paragraph.
Q12. What is the CG algorithm for the functional E?
For the functional Eε, the CG algorithm reads as follows. Step 0 (initialization): given any η > 0 and any z0 = (y0, π0) ∈ A, compute the residual g0 ∈ A, solution of (5.1): (g0, (Y, Π))A = E′ε(y0, π0) · (Y, Π), for all (Y, Π) ∈ A.
Q13. What is the evolution of the gradient?
Figure 3 depicts the evolution (in log scale) of the norm ‖gk‖H1(Ω) of the gradient and of √E(yk) = |vk|H10(Ω) with respect to the iterates: the convergence to zero is super-linear, as the authors observe √E(yk) = O(e−0.15k).