Author

Roland Glowinski

Bio: Roland Glowinski is an academic researcher from the University of Houston. The author has contributed to research on topics including the finite element method and the conjugate gradient method. The author has an h-index of 61 and has co-authored 393 publications receiving 20,599 citations. Previous affiliations of Roland Glowinski include Paris Dauphine University and the French Institute for Research in Computer Science and Automation (INRIA).


Papers
Book
01 Jan 1987
TL;DR: In this article, an augmented Lagrangian method for the solution of variational problems is proposed, with applications to continuous media and their mathematical modeling, including viscoplasticity and elastoviscoplasticity.
Abstract: 1. Some continuous media and their mathematical modeling 2. Variational formulations of the mechanical problems 3. Augmented Lagrangian methods for the solution of variational problems 4. Viscoplasticity and elastoviscoplasticity in small strains 5. Limit load analysis 6. Two-dimensional flow of incompressible viscoplastic fluids 7. Finite elasticity 8. Large displacement calculations of flexible rods References Index.

1,329 citations
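The augmented Lagrangian approach surveyed in this book can be illustrated on a toy equality-constrained problem. The sketch below is a hypothetical illustration of the method of multipliers, not code from the book: it finds the minimum-norm point satisfying a linear constraint by alternating a penalized minimization with a multiplier update.

```python
import numpy as np

def augmented_lagrangian(A, b, rho=10.0, iters=50):
    """Minimize 0.5*||x||^2 subject to A x = b by the method of multipliers."""
    m, n = A.shape
    lam = np.zeros(m)                  # Lagrange multiplier estimate
    x = np.zeros(n)
    for _ in range(iters):
        # x-step: minimize 0.5||x||^2 + lam.(Ax - b) + (rho/2)||Ax - b||^2;
        # the optimality condition is (I + rho A^T A) x = A^T (rho b - lam)
        x = np.linalg.solve(np.eye(n) + rho * A.T @ A, A.T @ (rho * b - lam))
        lam = lam + rho * (A @ x - b)  # multiplier (dual) update
    return x

# Minimum-norm point on the line x1 + x2 = 1 is (0.5, 0.5):
x = augmented_lagrangian(np.array([[1.0, 1.0]]), np.array([1.0]))
```

Unlike a pure penalty method, the multiplier update lets the constraint be satisfied exactly in the limit without sending rho to infinity.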

Journal ArticleDOI
TL;DR: In this article, a new Lagrange-multiplier based fictitious-domain method is presented for the direct numerical simulation of viscous incompressible flow with suspended solid particles, which uses a finite-element discretization in space and an operator-splitting technique for discretisation in time.

1,072 citations


Cited by
Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: an odd beast, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Book
23 May 2011
TL;DR: It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
Abstract: Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for l1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.

17,433 citations
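The ADMM iteration described in the abstract above can be sketched for the lasso, one of the applications it lists. The following is a minimal illustrative implementation (with the scaled dual variable), not the reference code accompanying the review:

```python
import numpy as np

def soft_threshold(v, k):
    # proximal operator of k*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, alpha, rho=1.0, iters=200):
    """Minimize 0.5*||Ax - b||^2 + alpha*||x||_1 by ADMM with x/z splitting."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)  # u: scaled dual variable
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(iters):
        # x-step: ridge-like solve of the quadratic subproblem
        x = np.linalg.solve(AtA + rho * np.eye(n), Atb + rho * (z - u))
        z = soft_threshold(x + u, alpha / rho)       # z-step: prox of the l1 term
        u = u + x - z                                # dual update on x = z
    return z

# With A = I the lasso solution is soft-thresholding of b:
z = admm_lasso(np.eye(3), np.array([3.0, 0.5, -2.0]), alpha=1.0)
```

The splitting is what makes the method attractive for distributed settings: the x-step and z-step each touch only one term of the objective, and the dual update coordinates them.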

Book
27 Nov 2013
TL;DR: The many different interpretations of proximal operators and algorithms are discussed, their connections to many other topics in optimization and applied mathematics are described, some popular algorithms are surveyed, and a large number of examples of proxiesimal operators that commonly arise in practice are provided.
Abstract: This monograph is about a class of optimization algorithms called proximal algorithms. Much like Newton's method is a standard tool for solving unconstrained smooth optimization problems of modest size, proximal algorithms can be viewed as an analogous tool for nonsmooth, constrained, large-scale, or distributed versions of these problems. They are very generally applicable, but are especially well-suited to problems of substantial recent interest involving large or high-dimensional datasets. Proximal methods sit at a higher level of abstraction than classical algorithms like Newton's method: the base operation is evaluating the proximal operator of a function, which itself involves solving a small convex optimization problem. These subproblems, which generalize the problem of projecting a point onto a convex set, often admit closed-form solutions or can be solved very quickly with standard or simple specialized methods. Here, we discuss the many different interpretations of proximal operators and algorithms, describe their connections to many other topics in optimization and applied mathematics, survey some popular algorithms, and provide a large number of examples of proximal operators that commonly arise in practice.

3,627 citations
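Two of the closed-form proximal operators this monograph catalogs, and the proximal gradient method built from them, can be sketched as follows (an illustrative example under the standard definitions, not code from the monograph):

```python
import numpy as np

def prox_l1(v, t):
    # prox of t*||.||_1: soft-thresholding, a closed-form proximal operator
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_box(v, t, lo=-1.0, hi=1.0):
    # prox of the indicator of [lo, hi]^n: projection onto the box
    return np.clip(v, lo, hi)

def prox_grad(grad_f, prox_g, x0, step, iters=100):
    # proximal gradient method: x <- prox_{step*g}(x - step*grad_f(x))
    x = x0
    for _ in range(iters):
        x = prox_g(x - step * grad_f(x), step)
    return x

# Minimize 0.5*||x - c||^2 + ||x||_1; the minimizer is soft-thresholding of c.
c = np.array([1.5, -0.2, 3.0])
x = prox_grad(lambda x: x - c, prox_l1, np.zeros(3), step=0.5)
```

Each iteration solves a small convex subproblem (here in closed form), which is exactly the "base operation" the abstract describes.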

Journal ArticleDOI
TL;DR: The term immersed boundary (IB) method is used to encompass all such methods that simulate viscous flows with immersed (or embedded) boundaries on grids that do not conform to the shape of these boundaries.
Abstract: The term “immersed boundary method” was first used in reference to a method developed by Peskin (1972) to simulate cardiac mechanics and associated blood flow. The distinguishing feature of this method was that the entire simulation was carried out on a Cartesian grid, which did not conform to the geometry of the heart, and a novel procedure was formulated for imposing the effect of the immersed boundary (IB) on the flow. Since Peskin introduced this method, numerous modifications and refinements have been proposed and a number of variants of this approach now exist. In addition, there is another class of methods, usually referred to as “Cartesian grid methods,” which were originally developed for simulating inviscid flows with complex embedded solid boundaries on Cartesian grids (Berger & Aftosmis 1998, Clarke et al. 1986, Zeeuw & Powell 1991). These methods have been extended to simulate unsteady viscous flows (Udaykumar et al. 1996, Ye et al. 1999) and thus have capabilities similar to those of IB methods. In this review, we use the term immersed boundary (IB) method to encompass all such methods that simulate viscous flows with immersed (or embedded) boundaries on grids that do not conform to the shape of these boundaries. Furthermore, this review focuses mainly on IB methods for flows with immersed solid boundaries. Application of these and related methods to problems with liquid-liquid and liquid-gas boundaries was covered in previous reviews by Anderson et al. (1998) and Scardovelli & Zaleski (1999). Consider the simulation of flow past a solid body shown in Figure 1a. The conventional approach to this would employ structured or unstructured grids that conform to the body. Generating these grids proceeds in two sequential steps. First, a surface grid covering the boundary Ωb is generated. This is then used as a boundary condition to generate a grid in the volume Ωf occupied by the fluid.
If a finite-difference method is employed on a structured grid, then the differential form of the governing equations is transformed to a curvilinear coordinate system aligned with the grid lines (Ferziger & Peric 1996). Because the grid conforms to the surface of the body, the transformed equations can then be discretized in the …

3,184 citations
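The defining idea above, imposing a boundary's effect on a Cartesian grid that does not conform to it, can be sketched with a toy diffusion problem. This is a hypothetical "direct forcing" style illustration where the immersed body is a mask whose value is reimposed each step; it is not Peskin's original IB scheme:

```python
import numpy as np

# Heat conduction on a uniform Cartesian grid with an immersed circular
# body held at a fixed temperature. The grid does not conform to the body;
# its effect is imposed by overwriting the masked cells every time step.
N = 64
h = 1.0 / (N - 1)
dt = 0.25 * h * h                       # explicit-diffusion stability limit
x = np.linspace(0.0, 1.0, N)
X, Y = np.meshgrid(x, x, indexing="ij")
solid = (X - 0.5) ** 2 + (Y - 0.5) ** 2 < 0.15 ** 2   # immersed body mask

T = np.zeros((N, N))
for _ in range(2000):
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0)
           + np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0 * T) / h ** 2
    T = T + dt * lap
    T[:, 0], T[:, -1] = 1.0, 0.0        # hot left wall, cold right wall
    T[0, :], T[-1, :] = T[1, :], T[-2, :]   # insulated top/bottom walls
    T[solid] = 0.5                      # impose the immersed-boundary value
```

No body-fitted grid generation is needed; the cost is that the boundary is represented only to within a grid cell, which is why the refinements surveyed in the review exist.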