
Showing papers in "SIAM Review in 2012"


Journal ArticleDOI
TL;DR: It is shown that fractional Laplacian and fractional derivative models for anomalous diffusion are special cases of the nonlocal model for diffusion that the authors consider.
Abstract: A recently developed nonlocal vector calculus is exploited to provide a variational analysis for a general class of nonlocal diffusion problems described by a linear integral equation on bounded domains in $\mathbb{R}^n$. The nonlocal vector calculus also enables striking analogies to be drawn between the nonlocal model and classical models for diffusion, including a notion of nonlocal flux. The ubiquity of the nonlocal operator in applications is illustrated by a number of examples ranging from continuum mechanics to graph theory. In particular, it is shown that fractional Laplacian and fractional derivative models for anomalous diffusion are special cases of the nonlocal model for diffusion that we consider. The numerous applications elucidate different interpretations of the operator and the associated governing equations. For example, a probabilistic perspective explains that the nonlocal spatial operator appearing in our model corresponds to the infinitesimal generator for a symmetric jump process. Sufficie...

566 citations


Journal ArticleDOI
TL;DR: A survey of different types of MMOs is given, concentrating on MMOs whose small-amplitude oscillations are produced by a local, multiple-time-scale “mechanism.”
Abstract: Mixed-mode oscillations (MMOs) are trajectories of a dynamical system in which there is an alternation between oscillations of distinct large and small amplitudes. MMOs have been observed and studied for over thirty years in chemical, physical, and biological systems. Few attempts have been made thus far to classify different patterns of MMOs, in contrast to the classification of the related phenomena of bursting oscillations. This paper gives a survey of different types of MMOs, concentrating its analysis on MMOs whose small-amplitude oscillations are produced by a local, multiple-time-scale “mechanism.” Recent work gives substantially improved insight into the mathematical properties of these mechanisms. In this survey, we unify diverse observations about MMOs and establish a systematic framework for studying their properties. Numerical methods for computing different types of invariant manifolds and their intersections are an important aspect of the analysis described in this paper.

509 citations


Journal ArticleDOI
TL;DR: The coupled Laplace-Beltrami and Poisson-Boltzmann-Nernst-Planck (LB-PBNP) equations are proposed for charge transport in heterogeneous systems and a number of computational algorithms are developed to implement the proposed new variational multiscale models in an efficient manner.
Abstract: This work presents a few variational multiscale models for charge transport in complex physical, chemical, and biological systems and engineering devices, such as fuel cells, solar cells, battery cells, nanofluidics, transistors, and ion channels. An essential ingredient of the present models, introduced in an earlier paper (Bulletin of Mathematical Biology, 72, 1562-1622, 2010), is the use of the differential geometry theory of surfaces as a natural means to geometrically separate the macroscopic domain from the microscopic domain while dynamically coupling discrete and continuum descriptions. Our main strategy is to construct the total energy functional of a charge transport system to encompass the polar and nonpolar free energies of solvation, and chemical-potential-related energy. By using the Euler-Lagrange variation, coupled Laplace-Beltrami and Poisson-Nernst-Planck (LB-PNP) equations are derived. The solution of the LB-PNP equations leads to the minimization of the total free energy, and explicit profiles of electrostatic potential and densities of charge species. To further reduce the computational complexity, the Boltzmann distribution obtained from the Poisson-Boltzmann (PB) equation is utilized to represent the densities of certain charge species so as to avoid the computationally expensive solution of some Nernst-Planck (NP) equations. Consequently, the coupled Laplace-Beltrami and Poisson-Boltzmann-Nernst-Planck (LB-PBNP) equations are proposed for charge transport in heterogeneous systems. A major emphasis of the present formulation is the consistency between the equilibrium LB-PB theory and the non-equilibrium LB-PNP theory at equilibrium. Another major emphasis is the capability of the reduced LB-PBNP model to fully recover the prediction of the LB-PNP model in non-equilibrium settings.
To account for the fluid impact on the charge transport, we derive coupled Laplace-Beltrami, Poisson-Nernst-Planck, and Navier-Stokes equations from the variational principle for chemo-electro-fluid systems. A number of computational algorithms are developed to implement the proposed new variational multiscale models in an efficient manner. A set of ten protein molecules and a realistic ion channel, Gramicidin A, are employed to confirm the consistency and verify the capability of the models. Extensive numerical experiments are designed to validate the proposed variational multiscale models. A good quantitative agreement between our model prediction and the experimental measurement of current-voltage curves is observed for the Gramicidin A channel transport. This paper also provides a brief review of the field.

133 citations


Journal ArticleDOI
TL;DR: A review of the theories used in the biomechanical modeling of growing tissues, categorized according to whether the tissue is considered as a continuum object or a collection of cells, concludes by assessing the prospects for reconciliation between these two fundamentally different approaches to tissue growth.
Abstract: The biomechanical modeling of growing tissues has recently become an area of intense interest. In particular, the interplay between growth patterns and mechanical stress is of great importance, with possible applications to arterial mechanics, embryo morphogenesis, tumor development, and bone remodeling. This review aims to give an overview of the theories that have been used to model these phenomena, categorized according to whether the tissue is considered as a continuum object or a collection of cells. Among the continuum models discussed is the deformation gradient decomposition method, which allows a residual stress field to develop from an incompatible growth field. The cell-based models are further subdivided into cellular automata, center-dynamics, and vertex-dynamics models. Of these the second two are considered in more detail, especially with regard to their treatment of cell-cell interactions and cell division. The review concludes by assessing the prospects for reconciliation between these two fundamentally different approaches to tissue growth, and by identifying possible avenues for further research.

132 citations


Journal ArticleDOI
TL;DR: It is seen in this article that the path leading to modern computational methods and theory involved a long struggle over three centuries requiring the efforts of many great mathematicians.
Abstract: The so-called Ritz–Galerkin method is one of the most fundamental tools of modern computing. Its origins lie in Hilbert's “direct” approach to the variational calculus of Euler–Lagrange and in the thesis of Walther Ritz, who died 100 years ago at the age of 31 after a long battle with tuberculosis. The thesis was submitted in 1902 in Göttingen, during a period of dramatic developments in physics. Ritz tried to explain the phenomenon of the Balmer series in spectroscopy using eigenvalue problems of partial differential equations on rectangular domains. While this physical model quickly turned out to be completely obsolete, his mathematics later enabled him to solve difficult problems in applied sciences. He thereby revolutionized the variational calculus and became one of the fathers of modern computational mathematics. We will see in this article that the path leading to modern computational methods and theory involved a long struggle over three centuries requiring the efforts of many great mathematicians.

96 citations


Journal ArticleDOI
TL;DR: A novel derivation of the Kalman filter using Newton's method for root finding is described; the approach is quite general, as it can also be used to derive a number of variations of the Kalman filter, including recursive estimators for both prediction and smoothing, estimators with fading memory, and the extended Kalman filter for nonlinear systems.
Abstract: In this paper, we discuss the Kalman filter for state estimation in noisy linear discrete-time dynamical systems. We give an overview of its history, its mathematical and statistical formulations, and its use in applications. We describe a novel derivation of the Kalman filter using Newton's method for root finding. This approach is quite general as it can also be used to derive a number of variations of the Kalman filter, including recursive estimators for both prediction and smoothing, estimators with fading memory, and the extended Kalman filter for nonlinear systems.
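Although the paper's novelty is the Newton's-method derivation, the recursion it recovers is the familiar predict/update form of the filter, which can be sketched in a few lines. The constant-velocity tracking model, the parameter values, and the function name below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update step of the discrete-time Kalman filter."""
    # Predict: propagate state estimate and covariance through the dynamics.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: blend the prediction with the measurement z via the Kalman gain.
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Track a constant-velocity state [position, velocity] from noisy positions.
F = np.array([[1.0, 1.0], [0.0, 1.0]])  # dynamics (unit time step)
H = np.array([[1.0, 0.0]])              # we observe position only
Q = 1e-4 * np.eye(2)                    # small process noise
R = np.array([[0.25]])                  # measurement noise variance
x, P = np.zeros(2), np.eye(2)
rng = np.random.default_rng(0)
for t in range(50):
    true_pos = 0.5 * t                  # true velocity is 0.5
    z = np.array([true_pos + rng.normal(scale=0.5)])
    x, P = kalman_step(x, P, z, F, H, Q, R)
```

With matched dynamics and small process noise, the velocity estimate settles close to the true slope of 0.5.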

83 citations


Journal ArticleDOI
TL;DR: The goal of this paper is to examine this trace ratio optimization problem in detail, to consider different algorithms for solving it, and to illustrate the use of these algorithms for dimensionality reduction.
Abstract: This paper considers the problem of optimizing the ratio $\mathrm{Tr}[V^{T}AV]/\mathrm{Tr}[V^{T}BV]$ over all unitary matrices $V$ with $p$ columns, where $A,B$ are two positive definite matrices. This problem is common in supervised learning techniques. However, because its numerical solution is typically expensive, it is often replaced by the simpler optimization problem of optimizing $\mathrm{Tr}[V^{T}AV]$ under the constraint that $V^{T}BV=I$, the identity matrix. The goal of this paper is to examine this trace ratio optimization problem in detail, to consider different algorithms for solving it, and to illustrate the use of these algorithms for dimensionality reduction.
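One standard way to attack the trace ratio problem directly is a Dinkelbach-type fixed-point iteration: alternately update the ratio ρ and take V as the top-p eigenvectors of A − ρB. The sketch below, with random test matrices and a dense eigensolver, is an illustrative assumption and not necessarily any of the algorithms studied in the paper.

```python
import numpy as np

def trace_ratio(A, B, p, iters=50):
    """Maximize tr(V^T A V) / tr(V^T B V) over orthonormal V (n x p)
    via a Dinkelbach-type fixed-point iteration."""
    n = A.shape[0]
    V = np.eye(n)[:, :p]
    rho = np.trace(V.T @ A @ V) / np.trace(V.T @ B @ V)
    for _ in range(iters):
        # V <- top-p eigenvectors of A - rho*B, then refresh the ratio.
        _, U = np.linalg.eigh(A - rho * B)
        V = U[:, -p:]
        rho = np.trace(V.T @ A @ V) / np.trace(V.T @ B @ V)
    return rho, V

rng = np.random.default_rng(1)
G = rng.standard_normal((5, 5)); A = G @ G.T + np.eye(5)  # positive definite
G = rng.standard_normal((5, 5)); B = G @ G.T + np.eye(5)  # positive definite
rho, V = trace_ratio(A, B, p=2)
```

Each pass solves one symmetric eigenproblem, so the per-iteration cost matches that of the simpler Tr[V^T AV] relaxation.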

82 citations


Journal ArticleDOI
TL;DR: It is shown that Lax pairs provide the generalization of the divergence formulation from a separable linear PDE to an integrable nonlinear PDE, whose defining feature is the existence of a Lax pair formulation.
Abstract: Every applied mathematician has used separation of variables. For a given boundary value problem (BVP) in two dimensions, the starting point of this powerful method is the separation of the given PDE into two ODEs. If the spectral analysis of either of these ODEs yields an appropriate transform pair, i.e., a transform consistent with the given boundary conditions, then the given BVP can be reduced to a BVP for an ODE. For simple BVPs it is straightforward to choose an appropriate transform and hence the spectral analysis can be avoided. In spite of its enormous applicability, this method has certain limitations. In particular, it requires the given domain, PDE, and boundary conditions to be separable, and also may not be applicable if the BVP is non-self-adjoint. Furthermore, it expresses the solution as either an integral or a series, neither of which are uniformly convergent on the boundary of the domain (for nonvanishing boundary conditions), which renders such expressions unsuitable for numerical computations. This paper describes a recently introduced transform method that can be applied to certain nonseparable and non-self-adjoint problems. Furthermore, this method expresses the solution as an integral in the complex plane that is uniformly convergent on the boundary of the domain. The starting point of the method is to write the PDE as a one-parameter family of equations formulated in a divergence form, and this allows one to consider the variables together. In this sense, the method is based on the “synthesis” as opposed to the “separation” of variables. The new method has already been applied to a plethora of BVPs and furthermore has led to the development of certain novel numerical techniques. However, a large number of related analytical and numerical questions remain open. 
This paper illustrates the method by applying it to two particular non-self-adjoint BVPs: one for the linearized KdV equation formulated on the half-line, and the other for the Helmholtz equation in the exterior of the disc (the latter is non-self-adjoint due to the radiation condition). The former problem played a crucial role in the development of the new method, whereas the latter problem was instrumental in the full development of the classical transform method. Although the new method can now be presented using only classical techniques, it actually originated in the theory of certain nonlinear PDEs called integrable, whose crucial feature is the existence of a Lax pair formulation. It is shown here that Lax pairs provide the generalization of the divergence formulation from a separable linear to an integrable nonlinear PDE.

72 citations


Journal ArticleDOI
TL;DR: A detailed account is given of various connections between several classes of objects: Hankel, Hurwitz, Toeplitz, Vandermonde, and other structured matrices; Stieltjes- and Jacobi-type continued fractions; Cauchy indices; moment problems; total positivity; and root localization of univariate polynomials.
Abstract: We give a detailed account of various connections between several classes of objects: Hankel, Hurwitz, Toeplitz, Vandermonde, and other structured matrices, Stieltjes- and Jacobi-type continued fractions, Cauchy indices, moment problems, total positivity, and root localization of univariate polynomials. Along with a survey of many classical facts, we provide a number of new results.
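One of the classical root-localization facts in this circle of ideas, the Hurwitz criterion, is easy to state in code: a real polynomial with positive leading coefficient has all its zeros in the open left half-plane iff all leading principal minors of its Hurwitz matrix are positive. A minimal sketch under that classical statement (function names are illustrative):

```python
import numpy as np

def hurwitz_matrix(coeffs):
    """Hurwitz matrix of p(s) = a0 s^n + a1 s^{n-1} + ... + an, coeffs=[a0..an]."""
    n = len(coeffs) - 1
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            k = 2 * j - i + 1          # 1-indexed rule H_{ij} = a_{2j-i}
            if 0 <= k <= n:
                H[i, j] = coeffs[k]
    return H

def is_hurwitz_stable(coeffs):
    """True iff every leading principal minor of the Hurwitz matrix is positive."""
    H = hurwitz_matrix(coeffs)
    return all(np.linalg.det(H[:k, :k]) > 0 for k in range(1, H.shape[0] + 1))
```

For example, (s+1)(s+2)(s+3) is stable while (s-1)(s+2)(s+3) is not, and the minors detect this.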

46 citations


Journal ArticleDOI
TL;DR: The renormalization group (RG) method of Chen, Goldenfeld, and Oono is presented in a pedagogical way to increase its visibility in applied mathematics and to argue favorably for its incorporation into the corresponding graduate curriculum.
Abstract: In this paper the renormalization group (RG) method of Chen, Goldenfeld, and Oono [Phys. Rev. Lett., 73 (1994), pp. 1311-1315; Phys. Rev. E, 54 (1996), pp. 376-394] is presented in a pedagogical way to increase its visibility in applied mathematics and to argue favorably for its incorporation into the corresponding graduate curriculum. The method is illustrated by some linear and nonlinear singular perturbation problems.

35 citations


Journal ArticleDOI
TL;DR: A classical finite-dimensional example is detailed: the use of frequencies of vibration to recover the positions and masses of beads vibrating on a string. One such recovery method, based on orthogonal polynomials, is presented in a manner suitable for advanced undergraduates.
Abstract: To what extent do the vibrations of a mechanical system reveal its composition? Despite innumerable applications and mathematical elegance, this question often slips through those cracks that separate courses in mechanics, differential equations, and linear algebra. We address this omission by detailing a classical finite dimensional example: the use of frequencies of vibration to recover positions and masses of beads vibrating on a string. First we derive the equations of motion, then compare the eigenvalues of the resulting linearized model against vibration data measured from our laboratory's monochord. More challenging is the recovery of masses and positions of the beads from spectral data, a problem for which a variety of elegant algorithms exist. After presenting one such method based on orthogonal polynomials in a manner suitable for advanced undergraduates, we confirm its efficacy through physical experiment. We encourage readers to conduct their own explorations using the numerous data sets we provide.
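The forward problem sketched in the abstract (frequencies from masses and positions) reduces to a small generalized eigenvalue problem K v = λ M v. The toy below assumes three equally spaced beads on a taut string with unit tension and spacing; these normalizations, and the symmetrization trick used, are illustrative choices rather than the paper's setup.

```python
import numpy as np

# Three beads at equal spacing on a taut string (tension T=1, spacing h=1):
# the stiffness matrix K is the Dirichlet Laplacian, M = diag(masses).
masses = np.array([1.0, 2.0, 1.0])
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
M = np.diag(masses)

# Squared frequencies solve K v = lambda M v; symmetrize with M^{-1/2}
# so the ordinary symmetric eigensolver applies.
Minv_half = np.diag(1.0 / np.sqrt(masses))
freqs_sq = np.linalg.eigvalsh(Minv_half @ K @ Minv_half)  # ascending order
```

For this symmetric mass pattern the middle squared frequency is exactly 2, belonging to the antisymmetric mode that leaves the heavy center bead at rest.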

Journal ArticleDOI
TL;DR: Vibrating plates have many interesting applications, from which the Chladni figures, sand ornaments that form on a vibrating plate, and the Tacoma Bridge are chosen; with these examples, both the QR algorithm and the Lanczos method can be well illustrated.
Abstract: Teaching linear algebra routines for computing eigenvalues of a matrix can be well motivated for students by using interesting examples. We propose in this paper to use vibrating plates for two reasons: First, they have many interesting applications, from which we chose the Chladni figures, representing sand ornaments which form on a vibrating plate, and the Tacoma Bridge, one of the most spectacular bridge failures. Second, the partial differential operator that arises from vibrating plates is the biharmonic operator, which one does not encounter often in a first course on numerical partial differential equations, and which is more challenging to discretize than the standard Laplacian seen in most textbooks. In addition, the history of vibrating plates is interesting, and we will show both spectral discretizations, leading to small dense matrix eigenvalue problems, and a finite difference discretization, leading to large-scale sparse matrix eigenvalue problems. Hence both the QR algorithm and the Lanczos method can be well illustrated.
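The teaching idea can be miniaturized in one dimension: check a dense eigensolver against the known eigenvalues of the finite-difference Dirichlet Laplacian, whose square shares eigenvectors with a simple discrete biharmonic stand-in. This 1D toy is an illustrative assumption; the paper itself works with 2D plate operators.

```python
import numpy as np

n = 50
h = 1.0 / (n + 1)
# 1D Dirichlet Laplacian (tridiagonal -1, 2, -1 stencil divided by h^2).
# Its square L @ L is a crude stand-in for a discrete biharmonic operator
# with the same eigenvectors and squared eigenvalues.
L = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

computed = np.sort(np.linalg.eigvalsh(L))
k = np.arange(1, n + 1)
exact = (4.0 / h**2) * np.sin(k * np.pi * h / 2.0) ** 2  # classical formula
```

Matching the computed spectrum against the closed-form eigenvalues is exactly the kind of sanity check the paper recommends before moving to the harder biharmonic discretizations.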

Journal ArticleDOI
TL;DR: It is shown how reasonably well-understood thin-layer phenomena associated with any one of the four generic equations may translate into less well-known effects associated with the others, shedding light on the relationship between matched asymptotic, WKB, and multiple-scales expansions.
Abstract: This paper concerns a certain class of two-dimensional solutions to four generic partial differential equations—the Helmholtz, modified Helmholtz, and convection-diffusion equations, and the heat conduction equation in the frequency domain—and the connections between these equations for this particular class of solutions. Specifically, we consider “thin-layer” solutions, valid in narrow regions across which there is rapid variation, in the singularly perturbed limit as the coefficient of the Laplacian tends to zero. For the well-studied Helmholtz equation, this is the high-frequency limit and the solutions in question underpin the conventional ray theory/WKB approach in that they provide descriptions valid in some of the regions where these classical techniques fail. Examples are caustics, shadow boundaries, whispering-gallery and creeping waves, and focusing and bouncing-ball modes. It transpires that virtually all such thin-layer models reduce to a class of generalized parabolic wave equations, of which the heat conduction equation is a special case. Moreover, in most situations, we will find that the appropriate parabolic wave equation solutions can be derived as limits of exact solutions of the Helmholtz equation. We also show how reasonably well-understood thin-layer phenomena associated with any one of the four generic equations may translate into less well-known effects associated with the others. In addition, our considerations shed some light on the relationship between the methods of matched asymptotic, WKB, and multiple-scales expansions.

Journal ArticleDOI
TL;DR: It is proved that after a suitable rescaling the solution to the Kramers-Smoluchowski equation converges, in the limit of high activation energy, to the solution of a simpler system modeling the spatial diffusion of A and B combined with the reaction A\rightleftharpoons B.
Abstract: We study the limit of high activation energy of a special Fokker-Planck equation known as the Kramers-Smoluchowski equation (KS). This equation governs the time evolution of the probability density of a particle performing a Brownian motion under the influence of a chemical potential $H/\varepsilon$. We choose $H$ having two wells corresponding to two chemical states $A$ and $B$. We prove that after a suitable rescaling the solution of the KS equation converges, in the limit of high activation energy ($\varepsilon\to0$), to the solution of a simpler system modeling the spatial diffusion of $A$ and $B$ combined with the reaction $A\rightleftharpoons B$. With this result we give a rigorous proof of Kramers's formal derivation, and we show how chemical reactions and diffusion processes can be embedded in a common framework. This allows one to derive a chemical reaction as a singular limit of a diffusion process, thus establishing a connection between two worlds often regarded as separate. The proof rests on two main ingredients. One is the formulation of the two disparate equations as evolution equations for measures. The second is a variational formulation of both equations that allows us to use the tools of variational calculus and, specifically, $\Gamma$-convergence.

Journal ArticleDOI
TL;DR: Convex graph invariants are convex functions of the adjacency matrix of a graph that do not depend on the labeling of the nodes; they can be used to construct convex sets that impose various structural constraints on graphs.
Abstract: The structural properties of graphs are usually characterized in terms of invariants, which are functions of graphs that do not depend on the labeling of the nodes. In this paper we study convex graph invariants, which are graph invariants that are convex functions of the adjacency matrix of a graph. Some examples include functions of a graph such as the maximum degree, the MAXCUT value (and its semidefinite relaxation), and spectral invariants such as the sum of the $k$ largest eigenvalues. Such functions can be used to construct convex sets that impose various structural constraints on graphs and thus provide a unified framework for solving a number of interesting graph problems via convex optimization. We give a representation of all convex graph invariants in terms of certain elementary invariants, and we describe methods to compute or approximate convex graph invariants tractably. We discuss the interesting subclass of spectral invariants, and also compare convex and nonconvex invariants. Finally, we...
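Two of the invariants listed, the maximum degree and the sum of the k largest eigenvalues, are easy to compute and to sanity-check for convexity numerically. The midpoint-convexity check on random symmetric matrices below is an illustrative sketch, not the paper's representation theory; function names are assumptions.

```python
import numpy as np

def max_degree(A):
    """Maximum row sum of the adjacency matrix -- a max of linear functionals,
    hence a convex graph invariant."""
    return A.sum(axis=1).max()

def sum_k_largest_eigs(A, k):
    """Sum of the k largest eigenvalues -- convex by Fan's theorem."""
    return np.sort(np.linalg.eigvalsh(A))[-k:].sum()

# Numerically check midpoint convexity along a segment between two
# symmetric matrices (convexity is in the matrix argument, not the graph).
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 6)); X = (X + X.T) / 2
Y = rng.standard_normal((6, 6)); Y = (Y + Y.T) / 2
mid = (X + Y) / 2
```

For both functions, f(mid) ≤ (f(X) + f(Y)) / 2 holds for any symmetric X, Y, which is exactly the property the paper's convex-optimization framework exploits.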

Journal ArticleDOI
TL;DR: This work derives explicit solutions for the error rate and probability distribution function of decision times for a group of independent, (possibly) nonidentical decision makers using one of three simple rules: Race, Majority Total, and Majority First.
Abstract: The sequential probability ratio test (SPRT) and related drift-diffusion model (DDM) are optimal for choosing between two hypotheses using the minimal (average) number of samples and relevant for modeling the decision-making process in human observers. This work extends these models to group decision making. Previous works have focused almost exclusively on group accuracy; here, we explicitly address group decision time. First, we derive explicit solutions for the error rate and probability distribution function of decision times for a group of independent, (possibly) nonidentical decision makers using one of three simple rules: Race, Majority Total, and Majority First. We illustrate our solutions with a group of $N$ i.i.d. decision makers who each make an individual decision using the SPRT-based DDM, then compare the performance of each group rule under different constraints. We then generalize these group rules to the $\eta$-Total and $\eta$-First schemes, to demonstrate the flexibility and power of our approach in characterizing the performance of a group, given the performance of its individual members.
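The Race rule (the group decides as soon as any one member does) is easy to explore by Monte Carlo with the SPRT-based DDM as the individual model. The simulation below, with arbitrary drift, noise, and threshold values, is a toy sketch; the paper derives the error rates and decision-time distributions exactly rather than by simulation.

```python
import numpy as np

def ddm_first_passage(drift, noise, threshold, dt, rng, max_steps=100000):
    """Simulate one drift-diffusion trial via Euler-Maruyama;
    return (decision_time, hit_upper_boundary)."""
    x, t = 0.0, 0.0
    for _ in range(max_steps):
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if abs(x) >= threshold:
            return t, x > 0   # upper boundary = correct choice
    return t, x > 0           # fallback if no boundary was hit

rng = np.random.default_rng(0)
N, trials = 5, 200            # group size and number of Monte Carlo trials
times = np.array([[ddm_first_passage(0.5, 1.0, 1.0, 0.01, rng)[0]
                   for _ in range(N)] for _ in range(trials)])
race_mean = times.min(axis=1).mean()   # Race: first member to decide
individual_mean = times[:, 0].mean()   # a single decision maker, for contrast
```

As expected, the Race rule trades accuracy for speed: the minimum over N i.i.d. decision times is substantially faster than any single member.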

Journal ArticleDOI
TL;DR: This paper presents a first attempt at enhancing the SNR of the array data by suppressing medium reverberations, and introduces filters for this purpose in sensor array imaging.
Abstract: Sensor array imaging arises in applications such as nondestructive evaluation of materials with ultrasonic waves, seismic exploration, and radar. The sensors probe a medium with signals and record the resulting echoes, which are then processed to determine the location and reflectivity of remote reflectors. These could be defects in materials such as voids, fault lines or salt bodies in the earth, and cars, buildings, or aircraft in radar applications. Imaging is relatively well understood when the medium through which the signals propagate is smooth, and therefore nonscattering. But in many problems the medium is heterogeneous, with numerous small inhomogeneities that scatter the waves. We refer to the collection of inhomogeneities as clutter, which introduces an uncertainty in imaging because it is unknown and impossible to estimate in detail. We model the clutter as a random process. The array data is measured in one realization of the random medium, and the challenge is to mitigate cumulative clutter scattering so as to obtain robust images that are statistically stable with respect to different realizations of the inhomogeneities. Scatterers that are not buried too deep in clutter can be imaged reliably with the coherent interferometric (CINT) approach. But in heavy clutter the signal-to-noise ratio (SNR) is low and CINT alone does not work. The "signal," the echoes from the scatterers to be imaged, is overwhelmed by the "noise," the strong clutter reverberations. There are two existing approaches for imaging at low SNR: The first operates under the premise that data are incoherent so that only the intensity of the scattered field can be used. The unknown coherent scatterers that we want to image are modeled as changes in the coefficients of diffusion or radiative transport equations satisfied by the intensities, and the problem becomes one of parameter estimation.
Because the estimation is severely ill-posed, the results have poor resolution, unless very good prior information is available and large arrays are used. The second approach recognizes that if there is some residual coherence in the data, that is, some reliable phase information is available, it is worth trying to extract it and use it with well-posed coherent imaging methods to obtain images with better resolution. This paper takes the latter approach and presents a first attempt at enhancing the SNR of the array data by suppressing medium reverberations. It introduces filters, or annihila...

Journal ArticleDOI
TL;DR: This paper uses the structure of the companion matrix to obtain smaller zero inclusion regions, thereby providing some nonstandard results to accompany and illustrate this frequently covered topic in numerical and matrix analysis.
Abstract: All the zeros of a polynomial are contained in the union of Gershgorin disks derived from its companion matrix, a consequence of Gershgorin's theorem. However, this theorem does not exploit the structure of the companion matrix. We will use this structure to obtain smaller zero inclusion regions, thereby providing some nonstandard results to accompany and illustrate this frequently covered topic in numerical and matrix analysis.
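The starting point (Gershgorin disks of the companion matrix) already yields the classical Cauchy bound on polynomial zeros. A sketch, with this particular companion-matrix layout and the function names as assumptions:

```python
import numpy as np

def companion(coeffs):
    """Companion matrix of the monic polynomial
    x^n + c[n-1] x^{n-1} + ... + c[0], coeffs = [c_0, ..., c_{n-1}]."""
    n = len(coeffs)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)       # subdiagonal of ones
    C[:, -1] = -np.asarray(coeffs)   # last column holds -c_0, ..., -c_{n-1}
    return C

def gershgorin_radius(coeffs):
    """Radius of a disk about 0 containing all zeros, from the row
    Gershgorin disks of the companion matrix (the Cauchy-type bound)."""
    C = companion(coeffs)
    centers = np.diag(C)
    radii = np.abs(C).sum(axis=1) - np.abs(centers)
    return max(abs(c) + r for c, r in zip(centers, radii))
```

Because every zero of the polynomial is an eigenvalue of its companion matrix, Gershgorin's theorem applied row by row gives the familiar bound max(|c_0|, 1 + max_i |c_i|); the paper's point is that sharper regions come from exploiting the companion structure further.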

Journal ArticleDOI
TL;DR: It turns out that the Alcuin number of a graph is closely related to the size of a minimum vertex cover in the graph, and several surprising connections between these two graph parameters are unraveled.
Abstract: We consider a planning problem that generalizes Alcuin's river crossing problem to scenarios with arbitrary conflict graphs. This generalization leads to the so-called Alcuin number of the underlying conflict graph. We derive a variety of combinatorial, structural, algorithmic, and complexity-theoretic results around the Alcuin number. Our technical main result is an NP-certificate for the Alcuin number. It turns out that the Alcuin number of a graph is closely related to the size of a minimum vertex cover in the graph, and we unravel several surprising connections between these two graph parameters. We provide hardness results and a fixed-parameter tractability result for computing the Alcuin number. Furthermore we demonstrate that the Alcuin number of chordal graphs, bipartite graphs, and planar graphs is substantially easier to analyze than the Alcuin number of general graphs.
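The vertex-cover connection can be made concrete on Alcuin's original instance: the wolf-goat-cabbage conflict graph is a path on three vertices, and its minimum vertex cover is the single vertex {goat}, exactly the item the ferryman keeps shuttling. A brute-force sketch (function and vertex names are illustrative, and this enumeration is only sensible for tiny conflict graphs):

```python
from itertools import combinations

def min_vertex_cover(vertices, edges):
    """Return a smallest set of vertices touching every edge,
    by enumerating candidate sets in order of increasing size."""
    for k in range(len(vertices) + 1):
        for S in combinations(vertices, k):
            if all(u in S or v in S for u, v in edges):
                return set(S)

# Alcuin's classic instance: wolf/goat and goat/cabbage are the conflicts.
cover = min_vertex_cover(["wolf", "goat", "cabbage"],
                         [("wolf", "goat"), ("goat", "cabbage")])
```

The paper's results relate the Alcuin number (the minimum boat capacity for a feasible schedule) to this vertex cover size; the brute force here only illustrates the second parameter.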