
Showing papers in "SIAM Review in 2010"


Journal ArticleDOI
TL;DR: It is shown that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum-rank solution can be recovered by solving a convex optimization problem, namely, the minimization of the nuclear norm over the given affine space.
Abstract: The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, the general affine rank minimization problem is NP-hard because it contains vector cardinality minimization as a special case. In this paper, we show that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum-rank solution can be recovered by solving a convex optimization problem, namely, the minimization of the nuclear norm over the given affine space. We present several random ensembles of equations where the restricted isometry property holds with overwhelming probability, provided the codimension of the subspace is sufficiently large. The techniques used in our analysis have strong parallels in the compressed sensing framework. We discuss how affine rank minimization generalizes this preexisting concept and outline a dictionary relating concepts from cardinality minimization to those of rank minimization. We also discuss several algorithmic approaches to minimizing the nuclear norm and illustrate our results with numerical examples.
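As a concrete illustration of the convex relaxation described above, the following sketch recovers a low-rank matrix from random Gaussian linear measurements by minimizing the nuclear norm with the cvxpy modeling package. The matrix size, rank, and number of measurements are our illustrative choices, not values from the paper.

```python
# Sketch: nuclear-norm minimization for affine rank minimization (needs cvxpy).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, r, m = 10, 2, 80                       # 10x10 matrix, rank 2, 80 measurements

M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # ground truth
A = [rng.standard_normal((n, n)) for _ in range(m)]            # random ensemble
b = np.array([np.sum(Ai * M) for Ai in A])                     # b_i = <A_i, M>

X = cp.Variable((n, n))
prob = cp.Problem(cp.Minimize(cp.norm(X, "nuc")),
                  [cp.sum(cp.multiply(Ai, X)) == bi for Ai, bi in zip(A, b)])
prob.solve()
print("relative recovery error:", np.linalg.norm(X.value - M) / np.linalg.norm(M))
```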

3,432 citations


Journal ArticleDOI
TL;DR: This work proposes and analyzes a stochastic collocation method for solving elliptic partial differential equations with random coefficients and forcing terms and provides a rigorous convergence analysis and demonstrates exponential convergence of the “probability error” with respect to the number of Gauss points in each direction of the probability space.
Abstract: This work proposes and analyzes a stochastic collocation method for solving elliptic partial differential equations with random coefficients and forcing terms. These input data are assumed to depend on a finite number of random variables. The method consists of a Galerkin approximation in space and a collocation in the zeros of suitable tensor product orthogonal polynomials (Gauss points) in the probability space, and naturally leads to the solution of uncoupled deterministic problems as in the Monte Carlo approach. It treats easily a wide range of situations, such as input data that depend nonlinearly on the random variables, diffusivity coefficients with unbounded second moments, and random variables that are correlated or even unbounded. We provide a rigorous convergence analysis and demonstrate exponential convergence of the “probability error” with respect to the number of Gauss points in each direction of the probability space, under some regularity assumptions on the random input data. Numerical examples show the effectiveness of the method. Finally, we include a section with developments posterior to the original publication of this work. There we review sparse grid stochastic collocation methods, which are effective collocation strategies for problems that depend on a moderately large number of random variables.
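The collocation idea can be demonstrated on a one-dimensional toy version of this problem class: solve an uncoupled deterministic problem at each Gauss point of the random parameter and combine the results with quadrature weights. The specific equation, diffusivity, and discretization below are our assumptions for illustration.

```python
# Sketch: stochastic collocation for -(a(y) u'(x))' = 1 on (0,1), u(0)=u(1)=0,
# with random diffusivity a(y) = 2 + y, y ~ Uniform(-1, 1).
import numpy as np

def solve_deterministic(a, n=100):
    """Finite-difference solve of -a u'' = 1 with homogeneous Dirichlet BCs."""
    h = 1.0 / n
    x = np.linspace(0, 1, n + 1)
    main = 2 * a / h**2 * np.ones(n - 1)
    off = -a / h**2 * np.ones(n - 2)
    T = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(T, np.ones(n - 1))
    return x, u

# Collocate at Gauss-Legendre nodes; for a Uniform(-1,1) density the weights
# are scaled by 1/2 (leggauss weights integrate against dx, not the pdf).
nodes, weights = np.polynomial.legendre.leggauss(5)
mean_mid = sum(0.5 * w * solve_deterministic(2 + y)[1][50]
               for y, w in zip(nodes, weights))
print("E[u(0.5)] ~", mean_mid, " exact:", np.log(3) / 16)
```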

468 citations


Journal ArticleDOI
TL;DR: A general mathematical and experimental methodology is defined to compare and classify classical image denoising algorithms, and a nonlocal means (NL-means) algorithm is proposed to address the preservation of structure in a digital image.
Abstract: The search for efficient image denoising methods is still a valid challenge at the crossing of functional analysis and statistics. In spite of the sophistication of the recently proposed methods, most algorithms have not yet attained a desirable level of applicability. All show an outstanding performance when the image model corresponds to the algorithm assumptions but fail in general and create artifacts or remove fine structures in images. The main focus of this paper is, first, to define a general mathematical and experimental methodology to compare and classify classical image denoising algorithms and, second, to propose a nonlocal means (NL-means) algorithm addressing the preservation of structure in a digital image. The mathematical analysis is based on the analysis of the “method noise,” defined as the difference between a digital image and its denoised version. The NL-means algorithm is proven to be asymptotically optimal under a generic statistical image model. The denoising performance of all considered methods is compared in four ways. Mathematical: the asymptotic order of magnitude of the method noise under regularity assumptions; perceptual-mathematical: the algorithms' artifacts and their explanation as a violation of the image model; quantitative experimental: tables of $L^2$ distances of the denoised version to the original image. The fourth and perhaps most powerful evaluation method is, however, the visualization of the method noise on natural images. The more this method noise looks like a real white noise, the better the method.
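A minimal, unoptimized sketch of the NL-means idea follows: each pixel is replaced by a weighted average of pixels whose surrounding patches look similar. The patch size, search window, filtering parameter, and test image are arbitrary choices of ours, not the paper's tuned settings.

```python
# Sketch of a plain (slow) NL-means filter on a grayscale image array.
import numpy as np

def nl_means(img, patch=3, search=7, h=0.1):
    p, s = patch // 2, search // 2
    padded = np.pad(img, p + s, mode="reflect")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ci, cj = i + p + s, j + p + s
            ref = padded[ci - p:ci + p + 1, cj - p:cj + p + 1]
            weights, values = [], []
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    ni, nj = ci + di, cj + dj
                    nb = padded[ni - p:ni + p + 1, nj - p:nj + p + 1]
                    d2 = np.mean((ref - nb) ** 2)      # patch distance
                    weights.append(np.exp(-d2 / h**2)) # similarity weight
                    values.append(padded[ni, nj])
            w = np.array(weights)
            out[i, j] = np.dot(w, values) / w.sum()
    return out

rng = np.random.default_rng(1)
clean = np.tile(np.linspace(0, 1, 32), (32, 1))           # simple test image
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
denoised = nl_means(noisy)
print("method noise std:", (noisy - denoised).std())      # cf. the paper's "method noise"
```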

445 citations


Journal ArticleDOI
TL;DR: A general class of measures based on matrix functions is introduced, and it is shown that a particular case involving a matrix resolvent arises naturally from graph-theoretic arguments.
Abstract: The emerging field of network science deals with the tasks of modeling, comparing, and summarizing large data sets that describe complex interactions. Because pairwise affinity data can be stored in a two-dimensional array, graph theory and applied linear algebra provide extremely useful tools. Here, we focus on the general concepts of centrality, communicability, and betweenness, each of which quantifies important features in a network. Some recent work in the mathematical physics literature has shown that the exponential of a network's adjacency matrix can be used as the basis for defining and computing specific versions of these measures. We introduce here a general class of measures based on matrix functions, and show that a particular case involving a matrix resolvent arises naturally from graph-theoretic arguments. We also point out connections between these measures and the quantities typically computed when spectral methods are used for data mining tasks such as clustering and ordering. We finish with computational examples showing the new matrix resolvent version applied to real networks.
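A small sketch contrasting the exponential-based subgraph centrality with the resolvent measure $\mathrm{diag}((I-\alpha A)^{-1})$ discussed in the paper; the example graph and the choice $\alpha = 0.5/\lambda_{\max}$ are ours.

```python
# Sketch: exponential vs. resolvent-based centrality on a small graph.
import numpy as np
from scipy.linalg import expm

# Adjacency matrix of a 5-node path graph (illustrative choice).
A = np.diag(np.ones(4), 1)
A = A + A.T

subgraph_centrality = np.diag(expm(A))                 # f(A) = e^A
lam_max = np.max(np.linalg.eigvalsh(A))
alpha = 0.5 / lam_max                                  # safely inside the radius
resolvent_centrality = np.diag(np.linalg.inv(np.eye(5) - alpha * A))

print("exp-based:      ", np.round(subgraph_centrality, 3))
print("resolvent-based:", np.round(resolvent_centrality, 3))
```

Both measures rank the middle node of the path highest, as expected; they differ in how heavily long walks are discounted.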

333 citations


Journal ArticleDOI
TL;DR: An overview is given of the numerical problems encountered when determining the electronic structure of materials and of the rich variety of techniques used to solve these problems, with emphasis on pseudopotential-density functional theory.
Abstract: The goal of this article is to give an overview of numerical problems encountered when determining the electronic structure of materials and the rich variety of techniques used to solve these problems. The paper is intended for a diverse scientific computing audience. For this reason, we assume the reader does not have an extensive background in the related physics. Our overview focuses on the nature of the numerical problems to be solved, their origin, and the methods used to solve the resulting linear algebra or nonlinear optimization problems. It is common knowledge that the behavior of matter at the nanoscale is, in principle, entirely determined by the Schrödinger equation. In practice, this equation in its original form is not tractable. Successful but approximate versions of this equation, which allow one to study nontrivial systems, took about five or six decades to develop. In particular, the last two decades saw a flurry of activity in developing effective software. One of the main practical variants of the Schrödinger equation is based on what is referred to as density functional theory (DFT). The combination of DFT with pseudopotentials allows one to obtain in an efficient way the ground state configuration for many materials. This article will emphasize pseudopotential-density functional theory, but other techniques will be discussed as well.
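The linear algebra at the heart of such calculations is a large symmetric eigenvalue problem. A toy one-dimensional analogue, assuming a finite-difference discretization and a harmonic potential of our choosing:

```python
# Sketch: the kind of algebraic eigenvalue problem that discretized
# electronic-structure calculations produce, on the 1D toy Hamiltonian
# H = -(1/2) d^2/dx^2 + V(x) with V(x) = x^2/2.
import numpy as np

n, L = 500, 20.0
x = np.linspace(-L / 2, L / 2, n)
h = x[1] - x[0]

# Finite-difference kinetic term plus diagonal potential.
T = (-0.5 / h**2) * (np.diag(np.ones(n - 1), 1)
                     + np.diag(np.ones(n - 1), -1)
                     - 2 * np.eye(n))
V = np.diag(0.5 * x**2)
H = T + V

print(np.linalg.eigvalsh(H)[:4])   # should approximate 0.5, 1.5, 2.5, 3.5
```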

264 citations


Journal ArticleDOI
TL;DR: A survey of more advanced topics in automatic differentiation includes an introduction to the reverse mode (the authors' implementation is forward mode) and considerations in arbitrary-order multivariable series computation.
Abstract: An introduction to both automatic differentiation and object-oriented programming can enrich a numerical analysis course that typically incorporates numerical differentiation and basic MATLAB computation. Automatic differentiation consists of exact algorithms on floating-point arguments. This implementation overloads standard elementary operators and functions in MATLAB with a derivative rule in addition to the function value; for example, $\sin u$ will also compute $(\cos u)\ast u^{\prime}$, where $u$ and $u^{\prime }$ are numerical values. These methods are mostly one-line programs that operate on a class of value-and-derivative objects, providing a simple example of object-oriented programming in MATLAB using the new (as of release 2008a) class definition structure. The resulting powerful tool computes derivative values and multivariable gradients, and is applied to Newton's method for root-finding in both single and multivariable settings. To compute higher-order derivatives of a single-variable function, another class of series objects keeps Taylor polynomial coefficients up to some order. Overloading multiplication on series objects is a combination (discrete convolution) of coefficients. This idea leads to algorithms for other operations and functions on series objects. A survey of more advanced topics in automatic differentiation includes an introduction to the reverse mode (our implementation is forward mode) and considerations in arbitrary-order multivariable series computation.
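The value-and-derivative idea translates directly into any language with operator overloading. Below is a minimal Python analogue of the MATLAB classes described (the class name, and the use of Python rather than MATLAB, are ours):

```python
# Sketch: forward-mode AD via value-and-derivative ("dual number") objects.
import math

class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.der * o.val + self.val * o.der)   # product rule
    __rmul__ = __mul__

def sin(u):
    # sin(u) also carries (cos u) * u', exactly as described in the abstract.
    return Dual(math.sin(u.val), math.cos(u.val) * u.der)

x = Dual(1.0, 1.0)          # seed: dx/dx = 1
f = x * sin(x * x)          # f(x) = x sin(x^2)
print(f.val, f.der)         # f(1) and f'(1) = sin(1) + 2 cos(1)
```

The series-object idea works the same way: a Taylor-coefficient array replaces the single derivative field, and multiplication becomes a discrete convolution of coefficients.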

200 citations


Journal ArticleDOI
TL;DR: This work describes microbial communities denoted biofilms and efforts to model some of their important aspects, including quorum sensing, growth, mechanics, and antimicrobial tolerance mechanisms.
Abstract: We describe microbial communities denoted biofilms and efforts to model some of their important aspects, including quorum sensing, growth, mechanics, and antimicrobial tolerance mechanisms.

192 citations


Journal ArticleDOI
TL;DR: A survey is made of different representations for lattice sums for the Helmholtz equation, showing how the various forms depend on the dimension $d$ of the underlying space and the lattice dimension $d_\Lambda$.
Abstract: A survey of different representations for lattice sums for the Helmholtz equation is made. These sums arise naturally when dealing with wave scattering by periodic structures. One of the main objectives is to show how the various forms depend on the dimension $d$ of the underlying space and the lattice dimension $d_\Lambda$. Lattice sums are related to, and can be calculated from, the quasi-periodic Green's function, and this object serves as the starting point of the analysis.
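To see why accelerated representations are needed, one can truncate the defining sum for the quasi-periodic Green's function directly; it converges very slowly. The sketch below does this for a one-dimensional lattice in two-dimensional space ($d=2$, $d_\Lambda=1$), with wavenumber, Bloch phase, and observation point chosen arbitrarily by us.

```python
# Sketch: brute-force truncation of the 2D quasi-periodic Green's function,
# G = -(i/4) * sum_n H_0^(1)(k |x - n a e1|) exp(i beta n a).
import numpy as np
from scipy.special import hankel1

k, beta, a = 2.0, 0.3, 1.0          # wavenumber, Bloch phase, lattice period
x, y = 0.25, 0.4                    # observation point off the lattice line

def G_truncated(N):
    n = np.arange(-N, N + 1)
    r = np.hypot(x - n * a, y)
    return -0.25j * np.sum(hankel1(0, k * r) * np.exp(1j * beta * n * a))

for N in (100, 1000, 10000):
    print(N, G_truncated(N))        # drifts with N: slow conditional convergence
```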

161 citations


Journal ArticleDOI
TL;DR: This work surveys essential properties of the so-called copositive matrices, the study of which has been spread over more than fifty-five years, with special emphasis on variational aspects related to the concept of copositivity.
Abstract: This work surveys essential properties of the so-called copositive matrices, the study of which has been spread over more than fifty-five years. Special emphasis is given to variational aspects related to the concept of copositivity. In addition, some new results on the geometry of the cone of copositive matrices are presented here for the first time.
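Copositivity means $x^{\mathsf T}Mx \ge 0$ for all entrywise nonnegative $x$, equivalently a nonnegative minimum of the quadratic form over the standard simplex. The sketch below probes that minimum numerically by multistart local optimization; since deciding copositivity is co-NP-hard, this is only a heuristic screen, not a certificate.

```python
# Sketch: a numerical (non-certifying) copositivity probe.
import numpy as np
from scipy.optimize import minimize

def min_quadratic_on_simplex(M, starts=20, seed=0):
    rng = np.random.default_rng(seed)
    n = M.shape[0]
    best = np.inf
    for _ in range(starts):
        x0 = rng.random(n)
        x0 /= x0.sum()
        res = minimize(lambda x: x @ M @ x, x0, method="SLSQP",
                       bounds=[(0, None)] * n,
                       constraints={"type": "eq", "fun": lambda x: x.sum() - 1})
        best = min(best, res.fun)
    return best

# Horn matrix: a classical copositive matrix that is not PSD-plus-nonnegative.
H = np.array([[ 1, -1,  1,  1, -1],
              [-1,  1, -1,  1,  1],
              [ 1, -1,  1, -1,  1],
              [ 1,  1, -1,  1, -1],
              [-1,  1,  1, -1,  1]], dtype=float)
print("min x'Hx on simplex ~", min_quadratic_on_simplex(H))  # ~ 0: consistent
```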

157 citations


Journal ArticleDOI
TL;DR: A model is proposed for an economy where risk-neutral firms produce goods to satisfy an inelastic demand and are endowed with permits in order to offset their pollution at compliance time and avoid having to pay a penalty; existence of an equilibrium and uniqueness of emissions credit prices are shown.
Abstract: This paper is concerned with the mathematical analysis of emissions markets. We review the existing quantitative analyses on the subject and introduce some of the mathematical challenges posed by the implementation of the new phase of the European Union Emissions Trading Scheme as well as the cap-and-trade schemes touted by the U.S., Canada, Australia, and Japan. From a practical point of view, the main thrust of the paper is the design and numerical analysis of new cap-and-trade schemes for the control and reduction of atmospheric pollution. We develop tools intended to help policy makers and regulators understand the pros and cons of the emissions markets. We propose a model for an economy where risk neutral firms produce goods to satisfy an inelastic demand and are endowed with permits in order to offset their pollution at compliance time and avoid having to pay a penalty. Firms that can easily reduce emissions do so, while those for which it is harder buy permits from firms that anticipate they will not need all their permits, creating a financial market for pollution credits. Our equilibrium model elucidates the joint price formation for goods and pollution allowances, capturing most of the features of the first phase of the European Union Emissions Trading Scheme. We show existence of an equilibrium and uniqueness of emissions credit prices. We also characterize the equilibrium prices of goods and the optimal production and trading strategies of the firms. We use the electricity market in Texas to numerically illustrate the qualitative properties of these cap-and-trade schemes. Comparing the numerical implications of cap-and-trade schemes to the business-as-usual benchmark, we show that our numerical results match those observed during the implementation of the first phase of the European Union cap-and-trade CO${}_2$ emissions scheme. In particular, we confirm the presence of windfall profits criticized by the opponents of these markets. We also demonstrate the shortcomings of tax and subsidy alternatives. Finally we introduce a relative allocation scheme which, while easy to implement, leads to smaller windfall profits than the standard scheme.

141 citations


Journal ArticleDOI
TL;DR: A new method is proposed for reconstructing small absorbing regions inside a bounded domain from boundary measurements of the induced acoustic signal, and the focusing property of the back-propagated acoustic signal is shown.
Abstract: This paper is devoted to mathematical modeling in photoacoustic imaging of small absorbers. We propose a new method for reconstructing small absorbing regions inside a bounded domain from boundary measurements of the induced acoustic signal. We also show the focusing property of the back-propagated acoustic signal. Indeed, we provide two different methods for locating a targeted optical absorber from boundary measurements of the induced acoustic signal. The first method consists of a MUltiple SIgnal Classification (MUSIC)-type algorithm and the second one uses a multifrequency approach. We also show results of computational experiments to demonstrate efficiency of the algorithms.
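A generic sketch of a MUSIC-type localization in the spirit of the first method: build a multistatic response matrix, extract the signal subspace by SVD, and image with the reciprocal distance of the test vector to that subspace. The geometry, wavenumber, and Born-type point-absorber model below are our illustrative stand-ins, not the paper's photoacoustic model.

```python
# Sketch of MUSIC-type localization from a multistatic response matrix.
import numpy as np

k = 10.0                                         # wavenumber
thetas = np.linspace(0, 2 * np.pi, 32, endpoint=False)
sensors = 5.0 * np.column_stack([np.cos(thetas), np.sin(thetas)])
absorbers = np.array([[0.5, 0.2], [-0.6, -0.4]])

def g(points, z):
    """Helmholtz Green's-function-like vector from point z to all sensors."""
    r = np.linalg.norm(points - z, axis=1)
    return np.exp(1j * k * r) / np.sqrt(r)

# Multistatic response matrix under a Born-type point-absorber model.
M = sum(np.outer(g(sensors, z), g(sensors, z)) for z in absorbers)

U, s, _ = np.linalg.svd(M)
Us = U[:, :len(absorbers)]                       # signal subspace

def music_value(z):
    v = g(sensors, z)
    v = v / np.linalg.norm(v)
    residual = v - Us @ (Us.conj().T @ v)        # component in the noise space
    return 1.0 / np.linalg.norm(residual)

grid = np.linspace(-1, 1, 81)
image = np.array([[music_value(np.array([gx, gy])) for gx in grid] for gy in grid])
iy, ix = np.unravel_index(np.argmax(image), image.shape)
print("brightest pixel:", grid[ix], grid[iy])    # near one of the absorbers
```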

Journal ArticleDOI
TL;DR: It is popular to use a quadratic profile as an approximation of the temperature, but it is shown that a cubic profile is far more accurate in most circumstances.
Abstract: The work in this paper concerns the study of conventional and refined heat balance integral methods for a number of phase change problems. These include standard test problems, both with one and two phase changes, which have exact solutions that enable us to test the accuracy of the approximate solutions. We also consider situations where no analytical solution is available and compare these to numerical solutions. It is popular to use a quadratic profile as an approximation of the temperature, but we show that a cubic profile, seldom considered in the literature, is far more accurate in most circumstances. In addition, the refined integral method can give greater improvement still, and we develop a variation on this method which turns out to be optimal in some cases. We assess which integral method is better for various problems, showing that it is largely dependent on the specified boundary conditions.
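The flavor of the heat balance integral method can be seen on the half-space problem $u_t = u_{xx}$, $u(0,t)=1$, with the one-parameter profile family $u=(1-x/\delta)^n$, for which the integral balance gives $\delta(t)=\sqrt{2n(n+1)t}$. Note that the paper's cubic profile determines its coefficients from boundary conditions rather than from this simple family, so the sketch below only illustrates the mechanics of the method.

```python
# Sketch: classical HBIM profiles of degree n versus the exact erfc solution.
import numpy as np
from scipy.special import erfc

t = 1.0
x = np.linspace(0, 6, 400)
exact = erfc(x / (2 * np.sqrt(t)))               # exact half-space solution

for n in (2, 3):
    delta = np.sqrt(2 * n * (n + 1) * t)         # from the integral balance
    approx = np.clip(1 - x / delta, 0, None) ** n
    rms = np.sqrt(np.mean((approx - exact) ** 2))
    print(f"degree {n} profile: RMS error vs exact = {rms:.4f}")
```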

Journal ArticleDOI
TL;DR: The objective is to disseminate widely the most efficient numerical algorithms useful for applications in image processing (distance transforms, generalized distance transforms, and mathematical morphology operators), partial differential equations, max-plus algebra, and other fields.
Abstract: Computational convex analysis algorithms have been rediscovered several times in the past by researchers from different fields. To further communications between practitioners, we review the field of computational convex analysis, which focuses on the numerical computation of fundamental transforms arising from convex analysis. Current models use symbolic, numeric, and hybrid symbolic-numeric algorithms. Our objective is to disseminate widely the most efficient numerical algorithms useful for applications in image processing (computing the distance transform, the generalized distance transform, and mathematical morphology operators), partial differential equations (solving Hamilton-Jacobi equations and using differential equations numerical schemes to compute the convex envelope), max-plus algebra (computing the equivalent of the fast Fourier transform), multifractal analysis, etc. The fields of applications include, among others, computer vision, robot navigation, thermodynamics, electrical networks, medical imaging, and network communication.
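The most basic object here is the discrete Legendre-Fenchel conjugate $f^*(s)=\max_x\,[sx - f(x)]$. A naive $O(n^2)$ version is sketched below (the algorithms surveyed in the paper reduce this to linear time), checked on $f(x)=x^2/2$, which is its own conjugate.

```python
# Sketch: naive discrete Legendre-Fenchel transform on a grid.
import numpy as np

def conjugate(x, fx, s):
    # f*(s) = max over grid points x of (s*x - f(x)), for each slope s.
    return np.max(s[:, None] * x[None, :] - fx[None, :], axis=1)

x = np.linspace(-5, 5, 2001)
s = np.linspace(-2, 2, 9)
fstar = conjugate(x, 0.5 * x**2, s)
print(np.max(np.abs(fstar - 0.5 * s**2)))   # small discretization error
```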

Journal ArticleDOI
TL;DR: This work examines condition numbers, preconditioners, and iterative methods for finite element discretizations of coercive PDEs in the context of the fundamental solvability result, the Lax-Milgram lemma.
Abstract: We examine condition numbers, preconditioners, and iterative methods for finite element discretizations of coercive PDEs in the context of the fundamental solvability result, the Lax-Milgram lemma. Working in this Hilbert space context is justified because finite element operators are restrictions of infinite-dimensional Hilbert space operators to finite-dimensional subspaces. Moreover, useful insight is gained as to the relationship between Hilbert space and matrix condition numbers, and translating Hilbert space fixed point iterations into matrix computations provides new ways of motivating and explaining some classic iteration schemes. In this framework, the “simplest” preconditioner for an operator from a Hilbert space into its dual is the Riesz isomorphism. Simple analysis gives spectral bounds and iteration counts bounded independent of the finite element subspaces chosen. Moreover, the abstraction allows us to consider not only Riesz map preconditioning for convection-diffusion equations in $H^1$ but also operators on other Hilbert spaces, such as planar elasticity in $\left(H^1\right)^2$.
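The practical payoff of Riesz-map preconditioning is iteration counts that do not grow under mesh refinement. A sketch with a 1D convection-diffusion discretization preconditioned by the discrete Laplacian, our stand-in for the $H^1$ Riesz map; the discretization and parameters are illustrative choices, not the paper's examples.

```python
# Sketch: roughly mesh-independent GMRES iterations with a Laplacian
# ("Riesz map") preconditioner for 1D convection-diffusion.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def matrices(n, wind=20.0):
    h = 1.0 / n
    lap = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n - 1, n - 1)) / h
    conv = wind * sp.diags([-1, 0, 1], [-1, 0, 1], shape=(n - 1, n - 1)) / 2
    return (lap + conv).tocsc(), lap.tocsc()

for n in (100, 400, 1600):
    A, L = matrices(n)
    Linv = spla.splu(L)
    M = spla.LinearOperator(A.shape, Linv.solve)   # preconditioner action
    count = [0]
    cb = lambda rk: count.__setitem__(0, count[0] + 1)
    b = np.ones(A.shape[0])
    spla.gmres(A, b, M=M, callback=cb, callback_type="pr_norm")
    print(n, "iterations:", count[0])              # stays roughly flat in n
```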

Journal ArticleDOI
TL;DR: Techniques for approximating hybrid dynamical systems that generalize classical linearization techniques are proposed and the degree of homogeneity of a hybrid system to the Zeno phenomenon that can appear in the solutions of the system is related.
Abstract: Hybrid dynamical systems are systems that combine features of continuous-time dynamical systems and discrete-time dynamical systems, and can be modeled by a combination of differential equations or inclusions, difference equations or inclusions, and constraints. Preasymptotic stability is a concept that results from separating the conditions that asymptotic stability places on the behavior of solutions from issues related to existence of solutions. In this paper, techniques for approximating hybrid dynamical systems that generalize classical linearization techniques are proposed. The approximation techniques involve linearization, tangent cones, homogeneous approximations of functions and set-valued mappings, and tangent homogeneous cones, where homogeneity is considered with respect to general dilations. The main results deduce preasymptotic stability of an equilibrium point for a hybrid dynamical system from preasymptotic stability of the equilibrium point for an approximate system. Further results relate the degree of homogeneity of a hybrid system to the Zeno phenomenon that can appear in the solutions of the system.
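The Zeno phenomenon mentioned in the last sentence is easy to exhibit on the classic bouncing-ball hybrid system, where impact times accumulate at a finite instant. A sketch with parameters of our choosing:

```python
# Sketch: Zeno behavior of a bouncing ball with restitution c < 1.
g, c = 9.81, 0.8
v, t = 5.0, 0.0          # speed at first impact, time of first impact
times = []
for _ in range(40):
    times.append(t)
    v *= c               # restitution at the jump
    t += 2 * v / g       # flight duration until the next impact
print("first impacts:", [round(s, 4) for s in times[:5]])
print("Zeno time ~", t, " closed form:", 2 * 5.0 * c / (g * (1 - c)))
```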

Journal ArticleDOI
TL;DR: An accessible account of the essential idea of cloaking, aimed at nonspecialists and undergraduates who have had some vector calculus, Fourier series, and linear algebra, and shows how to cloak an object against detection from impedance tomography.
Abstract: In this article we provide an accessible account of the essential idea behind cloaking, aimed at nonspecialists and undergraduates who have had some vector calculus, Fourier series, and linear algebra. The goal of cloaking is to render an object invisible to detection from electromagnetic energy by surrounding the object with a specially engineered “metamaterial” that redirects electromagnetic waves around the object. We show how to cloak an object against detection from impedance tomography, an imaging technique of much recent interest, though the mathematical ideas apply to much more general forms of imaging. We also include some exercises and ideas for undergraduate research projects.
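The construction rests on a change of variables: pushing the homogeneous conductivity forward under a map that blows up a point to a disk yields a strongly anisotropic, nearly singular conductivity at the cloak's inner boundary. A numerical sketch using the standard radial map and the transformation rule $\sigma' = DF\,DF^{\mathsf T}/|\det DF|$ (our implementation, not code from the paper):

```python
# Sketch: push-forward conductivity for the radial cloaking map in 2D.
import numpy as np

def F(x):
    """Map the punctured disk |x|<2 to the annulus 1<|y|<2 (radial stretch)."""
    r = np.linalg.norm(x)
    return (1 + r / 2) * x / r

def pushforward_conductivity(x, eps=1e-6):
    # Jacobian of F by central differences (sigma = identity before mapping).
    J = np.zeros((2, 2))
    for j in range(2):
        dx = np.zeros(2); dx[j] = eps
        J[:, j] = (F(x + dx) - F(x - dx)) / (2 * eps)
    return J @ J.T / abs(np.linalg.det(J))

for r in (1.5, 0.5, 0.05):
    sigma = pushforward_conductivity(np.array([r, 0.0]))
    print(f"r = {r}: eigenvalues =", np.round(np.linalg.eigvalsh(sigma), 4))
    # One eigenvalue tends to 0 and the other blows up as r -> 0.
```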

Journal ArticleDOI
TL;DR: A multigrid method and its implementation on parallel computers are presented for solving the bidomain equation that appears in excitation propagation analysis of the human heart with the torso; the formulation is shown to naturally satisfy the conservation property of the electric currents and to fit into the multilevel adaptive solution technique framework.
Abstract: In this paper, we present a multigrid method and its implementation on parallel computers to solve the bidomain equation that appears in excitation propagation analysis of the human heart with the torso. The bidomain equation is discretized with the finite element method on a composite mesh composed of a fine voxel mesh around the heart and a coarse voxel mesh covering the torso. The extracellular potential problem on the torso is formulated as a variational problem with a constraint at the interface of the fine and coarse meshes. We show that this formulation naturally satisfies the conservation property of the electric currents and fits into the multilevel adaptive solution technique framework. We also present our special treatment of the Purkinje fiber network in the multigrid algorithm where it is modeled as multiway branching lines connected to the nodes in the voxel mesh of the heart. A parallel implementation of the proposed multigrid algorithm on distributed memory computers is presented and its performance is evaluated using real-life applications.
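The multigrid idea itself (smooth on the fine grid, correct the remaining smooth error on a coarse grid) can be sketched independently of the paper's composite-mesh bidomain setting. Below is a generic two-grid cycle for the 1D Poisson equation with textbook transfer operators; it is purely illustrative and is not the authors' solver.

```python
# Sketch: a two-grid correction scheme for the 1D Poisson equation.
import numpy as np

def poisson(n):
    h = 1.0 / n
    return (np.diag(2 * np.ones(n - 1)) - np.diag(np.ones(n - 2), 1)
            - np.diag(np.ones(n - 2), -1)) / h**2

def jacobi(A, u, f, sweeps=3, omega=2/3):
    D = np.diag(A)
    for _ in range(sweeps):
        u = u + omega * (f - A @ u) / D
    return u

n = 64
A, Ac = poisson(n), poisson(n // 2)
# Full-weighting restriction and linear interpolation (standard stencils).
R = np.zeros((n // 2 - 1, n - 1))
for i in range(n // 2 - 1):
    R[i, 2 * i:2 * i + 3] = [0.25, 0.5, 0.25]
P = 2 * R.T

f = np.ones(n - 1)
u = np.zeros(n - 1)
for cycle in range(10):
    u = jacobi(A, u, f)                     # pre-smooth
    r = f - A @ u
    u = u + P @ np.linalg.solve(Ac, R @ r)  # coarse-grid correction
    u = jacobi(A, u, f)                     # post-smooth
    print(cycle, np.linalg.norm(f - A @ u)) # residual drops each cycle
```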

Journal ArticleDOI
TL;DR: If a polygon is repeatedly replaced by the polygon obtained by connecting the midpoints of its edges and then normalizing the resulting vertex vectors, its vertices converge in the limit to an ellipse that is centered at the origin and whose semiaxes are tilted forty-five degrees from the coordinate axes.
Abstract: Suppose $x$ and $y$ are unit 2-norm $n$-vectors whose components sum to zero. Let ${\cal P}(x,y)$ be the polygon obtained by connecting $(x_{1},y_{1}),\ldots,(x_{n},y_{n}),(x_{1},y_{1})$ in order. We say that $\widehat{{\cal P}}(\widehat{x},\widehat{y})$ is the normalized average of ${\cal P}(x,y)$ if it is obtained by connecting the midpoints of its edges and then normalizing the resulting vertex vectors $\widehat{x}$ and $\widehat{y}$ so that they have unit 2-norm. If this process is repeated starting with ${\cal P}_{0} = {\cal P}(x^{(0)},y^{(0)})$, then in the limit the vertices of the polygon iterates ${\cal P}(x^{(k)},y^{(k)})$ converge to an ellipse ${\cal E}$ that is centered at the origin and whose semiaxes are tilted forty-five degrees from the coordinate axes. An eigenanalysis together with the singular value decomposition is used to explain this phenomenon. The problem and its solution are a metaphor for matrix-based research in computational science and engineering.
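The iteration is easy to reproduce numerically, and the singular value decomposition of the limiting coordinates reveals the forty-five-degree tilt. A sketch with a random initial polygon of our choosing:

```python
# Sketch: the polygon-averaging iteration and its limiting 45-degree ellipse.
import numpy as np

rng = np.random.default_rng(3)
n = 30
x, y = rng.standard_normal(n), rng.standard_normal(n)
for v in (x, y):
    v -= v.mean()                           # components sum to zero
x, y = x / np.linalg.norm(x), y / np.linalg.norm(y)

for _ in range(500):
    x = (x + np.roll(x, -1)) / 2            # connect midpoints of edges
    y = (y + np.roll(y, -1)) / 2
    x, y = x / np.linalg.norm(x), y / np.linalg.norm(y)

# Principal axes of the limiting point set:
U, s, Vt = np.linalg.svd(np.column_stack([x, y]), full_matrices=False)
angle = np.degrees(np.arctan2(Vt[0, 1], Vt[0, 0]))
print("major-axis angle (deg):", round(angle, 2))   # ~ +/- 45
```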

Journal ArticleDOI
TL;DR: A single model is introduced for the structure, while three models of increasing complexity are proposed for the fluid flow solver, leading to the arbitrary Lagrangian Eulerian (ALE) approach.
Abstract: Structure and fluid models need to be combined, or coupled, when problems of fluid-structure interaction (FSI) are addressed. We first present the basic knowledge required for building and then evaluating a simple coupling. The approach proposed is to consider a dedicated solver for each of the two physical systems involved. We illustrate this approach by examining the interaction between a gas contained in a one-dimensional chamber closed by a moving piston attached to an external and fixed point with a spring. A single model is introduced for the structure, while three models of increasing complexity are proposed for the fluid flow solver. The most complex fluid flow model leads us to the arbitrary Lagrangian Eulerian (ALE) approach. The pros and cons of each model are discussed. The computer implementations of the structure model, the fluid model, and the coupling use MATLAB scripts, downloadable from either http://www.utc.fr/~elefra02/ifs or http://www.hds.utc.fr/~boufflet/ifs.
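A flavor of partitioned coupling can be given by treating the gas as an adiabatic spring, $pV^\gamma = \text{const}$, coupled to the piston's equation of motion with an explicit staggered scheme. This is a toy analogue of the simplest coupling, written in Python rather than the paper's MATLAB, with all parameter values ours; the downloadable scripts implement the actual models.

```python
# Sketch: piston-spring structure coupled to an adiabatic-gas "fluid solver".
m, k, A, gamma = 1.0, 50.0, 0.1, 1.4       # piston mass, spring, area, gas index
q0, p0 = 1.0, 1.0e4                        # rest length, reference pressure
p_ext = 1.0e4                              # external pressure

q, v = 1.2 * q0, 0.0                       # piston starts displaced
dt = 1e-4
for step in range(5000):
    p = p0 * (q0 / q) ** gamma             # "fluid" step: adiabatic gas law
    force = A * (p - p_ext) - k * (q - q0) # pressure plus spring on the piston
    v += dt * force / m                    # "structure" step: explicit update
    q += dt * v
print("piston position after 0.5 s:", q)   # oscillates about equilibrium
```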

Journal ArticleDOI
TL;DR: In this paper, a complete and consistent formal development of complex singularities of the Lorenz system using the psi series is given, implying a two-parameter family of singular solutions of the system.
Abstract: The Lorenz attractor is one of the best-known examples of applied mathematics. However, much of what is known about it is a result of numerical calculations and not of mathematical analysis. As a step toward mathematical analysis, we allow the time variable in the three-dimensional Lorenz system to be complex, hoping that solutions that have resisted analysis on the real line will give up their secrets in the complex plane. Knowledge of singularities being fundamental to any investigation in the complex plane, we build upon earlier work and give a complete and consistent formal development of complex singularities of the Lorenz system using the psi series. The psi series contain two undetermined constants. In addition, the location of the singularity is undetermined as a consequence of the autonomous nature of the Lorenz system. We prove that the psi series converge, using a technique that is simpler and more powerful than that of Hille, thus implying a two-parameter family of singular solutions of the Lorenz system. We pose three questions, answers to which may bring us closer to understanding the connection of complex singularities to Lorenz dynamics.

Journal ArticleDOI
TL;DR: In this article, the authors show that the square block of the canonical block triangular form has a special fine structure and that the uncovered symmetry helps to permute the matrix into a special form which is symmetric along the main diagonal while exhibiting the blocks of the original block triangular form.
Abstract: We present some observations on the block triangular form (btf) of structurally symmetric, square, sparse matrices. If the matrix is structurally rank deficient, its canonical btf has at least one underdetermined and one overdetermined block. We prove that these blocks are transposes of each other. We further prove that the square block of the canonical btf, if present, has a special fine structure. These findings help us recover symmetry around the antidiagonal in the block triangular matrix. The uncovered symmetry helps us to permute the matrix in a special form which is symmetric along the main diagonal while exhibiting the blocks of the original btf. As the square block of the canonical btf has full structural rank, the observation relating to the square block applies to structurally nonsingular, square symmetric matrices as well.
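Structural rank, the size of a maximum matching of the nonzero pattern, is the quantity whose deficiency produces the under- and overdetermined blocks discussed above. scipy exposes it directly; the finer Dulmage-Mendelsohn block structure analyzed in the paper is not in scipy, so the sketch below only probes the deficiency that triggers it.

```python
# Sketch: structural rank of a structurally symmetric, rank-deficient pattern.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import structural_rank

# Rows 1 and 2 can only be matched to column 0, so any matching has size 2.
pattern = np.array([[0, 1, 1],
                    [1, 0, 0],
                    [1, 0, 0]], dtype=float)
S = csr_matrix(pattern)
print("structural rank:", structural_rank(S), "of", S.shape[0])
# Per the paper, the canonical btf of such a matrix has an underdetermined
# block and an overdetermined block that are transposes of each other.
```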

Journal ArticleDOI
TL;DR: The present paper, "Mathematical Description of Microbial Biofilms", by Isaac Klapper and Jack Dockery, provides a very engaging introduction to the subject of biofilms and an overview of mathematical modeling efforts in this field.
Abstract: An alternate title to this issue's Survey and Review article could be "Understanding Slime". Biofilms, the subject of this review, are colonies of microorganisms living on a surface. These microorganisms live within a self-produced matrix of extracellular material which we often associate with slime. Biofilms are everywhere! They are found on rocks at the bottom of rivers and represent an important part of the food chain. They can thrive in hostile environments such as in very hot, briny springs, or inside the walls of sewer pipes. On seagoing vessels, biofilms make it easy for other larger organisms, such as barnacles, to attach. Biofilms can also be useful for cleaning up oil spills. But they are not only "out there"; they are also inside us. Dental plaque is an example of biofilm, and we know that their presence leads to gum disease and tooth decay. In fact, they are responsible for many microbial infections in the body. They also like to form on implanted medical devices such as pacemakers and artificial heart valves. Understanding them is key to controlling their destructive power and to harnessing their beneficial properties. The present paper, "Mathematical Description of Microbial Biofilms", by Isaac Klapper and Jack Dockery, provides a very engaging introduction to the subject of biofilms and an overview of mathematical modeling efforts in this field. It covers four aspects of slime: communication, growth, material properties, and survival mechanisms. Although much more work is needed to gain further understanding of these microbial biofilms, this very nicely written article provides a starting point for someone to "play with slime" (mathematically) and indicates important research directions.