Book

Augmented Lagrangian and Operator-Splitting Methods in Nonlinear Mechanics

TL;DR: This book presents augmented Lagrangian and operator-splitting methods for the solution of variational problems in nonlinear mechanics, with applications to viscoplasticity and elastoviscoplasticity, limit load analysis, incompressible viscoplastic flow, finite elasticity, and flexible rods.
Abstract: Contents: 1. Some continuous media and their mathematical modeling; 2. Variational formulations of the mechanical problems; 3. Augmented Lagrangian methods for the solution of variational problems; 4. Viscoplasticity and elastoviscoplasticity in small strains; 5. Limit load analysis; 6. Two-dimensional flow of incompressible viscoplastic fluids; 7. Finite elasticity; 8. Large displacement calculations of flexible rods; References; Index.
Citations
Journal ArticleDOI
TL;DR: A large selection of solution methods for linear systems in saddle point form is presented, with an emphasis on iterative methods for large and sparse problems.
Abstract: Large linear systems of saddle point type arise in a wide variety of applications throughout computational science and engineering. Due to their indefiniteness and often poor spectral properties, such linear systems represent a significant challenge for solver developers. In recent years there has been a surge of interest in saddle point problems, and numerous solution techniques have been proposed for this type of system. The aim of this paper is to present and discuss a large selection of solution methods for linear systems in saddle point form, with an emphasis on iterative methods for large and sparse problems.
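As a concrete illustration of the systems the survey addresses, the following sketch (not taken from the paper; the sizes, sparsity, and block choices are assumptions) assembles a sparse symmetric saddle-point matrix K = [[A, B^T], [B, 0]] and solves it with MINRES, an iterative Krylov method for symmetric indefinite systems:

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Illustrative sizes; in applications n is typically (much) larger than m
n, m = 200, 50
rng = np.random.default_rng(0)

# (1,1) block A: symmetric positive definite; (2,1) block B: constraint matrix
A = sp.diags(rng.uniform(1.0, 2.0, n)) + sp.eye(n)
B = sp.random(m, n, density=0.1, random_state=0)

# Assemble the symmetric indefinite saddle-point matrix [[A, B^T], [B, 0]]
K = sp.bmat([[A, B.T], [B, None]], format="csr")
rhs = rng.standard_normal(n + m)

# MINRES handles symmetric indefinite systems; plain CG would not apply here
sol, info = spla.minres(K, rhs)
print("converged:", info == 0, " residual norm:", np.linalg.norm(K @ sol - rhs))

Because K is indefinite and often ill conditioned, Krylov methods of this kind are usually combined with block or constraint preconditioners, which surveys of this type discuss at length.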

2,253 citations


Cites background from "Augmented Lagrangian and Operator-Splitting Methods in Nonlinear Mechanics"

  • ...Most of the results and algorithms reviewed in this paper admit straightforward extensions to the complex case....

  • ...Finally we mention that in most applications n is larger than m, often much larger....

Posted Content
Abstract: The proximity operator of a convex function is a natural extension of the notion of a projection operator onto a convex set. This tool, which plays a central role in the analysis and the numerical solution of convex optimization problems, has recently been introduced in the arena of signal processing, where it has become increasingly important. In this paper, we review the basic properties of proximity operators which are relevant to signal processing and present optimization methods based on these operators. These proximal splitting methods are shown to capture and extend several well-known algorithms in a unifying framework. Applications of proximal methods in signal recovery and synthesis are discussed.
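The proximity operator referred to in the abstract is easy to make concrete. In the sketch below (the vectors and parameters are illustrative assumptions), the proximity operator of gamma*||.||_1 is componentwise soft-thresholding, while the proximity operator of the indicator function of a convex set reduces to the Euclidean projection onto that set, which is the generalization the abstract describes:

import numpy as np

def prox_l1(v, gamma):
    # Proximity operator of gamma * ||x||_1: componentwise soft-thresholding
    return np.sign(v) * np.maximum(np.abs(v) - gamma, 0.0)

def prox_box(v, lo, hi):
    # Proximity operator of the indicator of the box [lo, hi]: the projection
    return np.clip(v, lo, hi)

v = np.array([-2.0, -0.3, 0.1, 1.5])
print(prox_l1(v, gamma=0.5))   # shrinks each entry toward 0 by 0.5, e.g. -2.0 -> -1.5
print(prox_box(v, -1.0, 1.0))  # clips each entry into [-1, 1], e.g. -2.0 -> -1.0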

2,095 citations

Book ChapterDOI
01 Jan 2011
TL;DR: The basic properties of proximity operators relevant to signal processing are reviewed, optimization methods based on these operators are presented, and these proximal splitting methods are shown to capture and extend several well-known algorithms in a unifying framework.
Abstract: The proximity operator of a convex function is a natural extension of the notion of a projection operator onto a convex set. This tool, which plays a central role in the analysis and the numerical solution of convex optimization problems, has recently been introduced in the arena of inverse problems and, especially, in signal processing, where it has become increasingly important. In this paper, we review the basic properties of proximity operators which are relevant to signal processing and present optimization methods based on these operators. These proximal splitting methods are shown to capture and extend several well-known algorithms in a unifying framework. Applications of proximal methods in signal recovery and synthesis are discussed.
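One of the well-known algorithms captured by this framework is forward-backward splitting, which for min_x 0.5*||Ax - b||^2 + lam*||x||_1 alternates a gradient step on the smooth term with a proximal (soft-thresholding) step on the l1 term; in this setting it is the classical ISTA iteration. The sketch below is illustrative only; the problem data, step size, and iteration count are assumptions:

import numpy as np

def soft_threshold(v, t):
    # Proximity operator of t * ||x||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(1)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, 8, replace=False)] = rng.standard_normal(8)
b = A @ x_true
lam = 0.1

step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient
x = np.zeros(200)
for _ in range(500):
    grad = A.T @ (A @ x - b)                          # forward (gradient) step on the smooth term
    x = soft_threshold(x - step * grad, step * lam)   # backward (proximal) step on the l1 term
print("nonzero entries in the estimate:", np.count_nonzero(np.abs(x) > 1e-3))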

1,942 citations

Posted Content
TL;DR: In this article, alternating direction algorithms are proposed for several $\ell_1$-norm minimization problems arising in compressive sensing, including the basis pursuit problem and the basis-pursuit denoising problem in both its unconstrained and constrained forms.
Abstract: In this paper, we propose and study the use of alternating direction algorithms for several $\ell_1$-norm minimization problems arising from sparse solution recovery in compressive sensing, including the basis pursuit problem, the basis-pursuit denoising problems of both unconstrained and constrained forms, as well as others. We present and investigate two classes of algorithms derived from either the primal or the dual forms of the $\ell_1$-problems. The construction of the algorithms consists of two main steps: (1) to reformulate an $\ell_1$-problem into one having partially separable objective functions by adding new variables and constraints; and (2) to apply an exact or inexact alternating direction method to the resulting problem. The derived alternating direction algorithms can be regarded as first-order primal-dual algorithms because both primal and dual variables are updated at each and every iteration. Convergence properties of these algorithms are established or restated when they already exist. Extensive numerical results in comparison with several state-of-the-art algorithms are given to demonstrate that the proposed algorithms are efficient, stable and robust. Moreover, we present numerical results to emphasize two practically important but perhaps overlooked points. One point is that algorithm speed should always be evaluated relative to appropriate solution accuracy; another is that whenever erroneous measurements possibly exist, the $\ell_1$-norm fidelity should be the fidelity of choice in compressive sensing.
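To make the two-step construction described in the abstract concrete, the sketch below (illustrative assumptions throughout; this is not the authors' implementation) applies an alternating direction method to the unconstrained basis-pursuit denoising problem min_x 0.5*||Ax - b||^2 + lam*||x||_1. The problem is first reformulated with an auxiliary variable z and the constraint x = z so that the objective becomes partially separable, and the method then alternates the x-subproblem (a linear solve), the z-subproblem (soft-thresholding), and a multiplier update:

import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(2)
m, n = 60, 150
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 6, replace=False)] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(m)
lam, rho = 0.05, 1.0

# Factor the x-update system (A^T A + rho I) once, outside the loop
AtA, Atb = A.T @ A, A.T @ b
L = np.linalg.cholesky(AtA + rho * np.eye(n))

x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
for _ in range(300):
    # x-update: quadratic subproblem, solved with the cached Cholesky factor
    x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
    # z-update: proximal step on the l1 term (soft-thresholding)
    z = soft_threshold(x + u, lam / rho)
    # scaled dual update for the constraint x = z
    u = u + x - z
print("estimated support size:", np.count_nonzero(np.abs(z) > 1e-3))

Updating both the primal pair (x, z) and the multiplier u at every sweep is what makes such schemes first-order primal-dual methods in the sense used in the abstract.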

1,062 citations

Proceedings ArticleDOI
03 Oct 2011
TL;DR: In this article, the generalized AMP (G-AMP) algorithm is proposed to estimate a random vector observed through a linear transform followed by a componentwise probabilistic measurement channel.
Abstract: We consider the estimation of a random vector observed through a linear transform followed by a componentwise probabilistic measurement channel. Although such linear mixing estimation problems are generally highly non-convex, Gaussian approximations of belief propagation (BP) have proven to be computationally attractive and highly effective in a range of applications. Recently, Bayati and Montanari have provided a rigorous and extremely general analysis of a large class of approximate message passing (AMP) algorithms that includes many Gaussian approximate BP methods. This paper extends their analysis to a larger class of algorithms to include what we call generalized AMP (G-AMP). G-AMP incorporates general (possibly non-AWGN) measurement channels. Similar to the AWGN output channel case, we show that the asymptotic behavior of the G-AMP algorithm under large i.i.d. Gaussian transform matrices is described by a simple set of state evolution (SE) equations. The general SE equations recover and extend several earlier results, including SE equations for approximate BP on general output channels by Guo and Wang.
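For orientation, the sketch below shows the basic AMP iteration for an AWGN output channel with a soft-threshold denoiser, i.e. the special case that G-AMP generalizes to arbitrary componentwise measurement channels. The threshold rule, problem sizes, and data are illustrative assumptions, and this is not the authors' algorithm or code:

import numpy as np

def eta(v, t):
    # Soft-threshold denoiser applied componentwise
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(3)
n, m = 400, 200
A = rng.standard_normal((m, n)) / np.sqrt(m)   # i.i.d. Gaussian transform, as in the state evolution analysis
x_true = np.zeros(n)
x_true[rng.choice(n, 20, replace=False)] = rng.standard_normal(20)
y = A @ x_true + 0.01 * rng.standard_normal(m)

x = np.zeros(n)
z = y.copy()
for _ in range(30):
    theta = np.linalg.norm(z) / np.sqrt(m)       # heuristic threshold tied to the residual energy
    pseudo = A.T @ z + x                         # pseudo-data handed to the denoiser
    x_new = eta(pseudo, theta)
    onsager = (np.count_nonzero(x_new) / m) * z  # Onsager correction term
    z = y - A @ x_new + onsager                  # corrected residual
    x = x_new
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))

The Onsager correction added to the residual is what distinguishes AMP from plain iterative thresholding and underlies the state evolution description of its asymptotic behavior.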

1,030 citations