
Showing papers in "SIAM Review in 2011"


Journal ArticleDOI
TL;DR: This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation, and presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions.
Abstract: Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed—either explicitly or implicitly—to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, robustness, and/or speed. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the $k$ dominant components of the singular value decomposition of an $m \times n$ matrix. (i) For a dense input matrix, randomized algorithms require $O(mn \log(k))$ floating-point operations (flops) in contrast to $O(mnk)$ for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multiprocessor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to $O(k)$ passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data.
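The two-stage scheme described in the abstract (random sampling to capture the range, then a deterministic factorization of the compressed matrix) can be sketched in a few lines. The matrix sizes, target rank, and oversampling parameter below are illustrative choices, not the paper's recommended settings.

# A sketch of a randomized partial SVD in the spirit of the framework described above:
# a Gaussian test matrix samples the range of A, the sample is orthonormalized, and the
# small projected matrix is factored deterministically.
import numpy as np

def randomized_svd(A, k, p=10, rng=np.random.default_rng(0)):
    m, n = A.shape
    Omega = rng.standard_normal((n, k + p))     # random test matrix
    Y = A @ Omega                               # sample the range of A
    Q, _ = np.linalg.qr(Y)                      # orthonormal basis for the sample
    B = Q.T @ A                                 # compress A to the identified subspace
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]       # approximate k dominant SVD components

# Quick check on a matrix with rapidly decaying singular values.
rng = np.random.default_rng(1)
A = (rng.standard_normal((500, 200)) * (0.5 ** np.arange(200))) @ rng.standard_normal((200, 300))
U, s, Vt = randomized_svd(A, k=10)
print("relative error:", np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))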

3,248 citations


Journal ArticleDOI
TL;DR: This paper surveys the primary research, both theoretical and applied, in the area of robust optimization (RO), focusing on the computational attractiveness of RO approaches, as well as the modeling power and broad applicability of the methodology.
Abstract: In this paper we survey the primary research, both theoretical and applied, in the area of robust optimization (RO). Our focus is on the computational attractiveness of RO approaches, as well as the modeling power and broad applicability of the methodology. In addition to surveying prominent theoretical results of RO, we also present some recent results linking RO to adaptable models for multistage decision-making problems. Finally, we highlight applications of RO across a wide spectrum of domains, including finance, statistics, learning, and various areas of engineering.
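As a concrete illustration of the tractability that drives this literature (a textbook example, not a result specific to this survey), a linear constraint with ellipsoidal uncertainty in its coefficient vector has an explicit deterministic robust counterpart:

$$ a^T x \le b \quad \text{for all } a \in \{\bar{a} + P u : \|u\|_2 \le 1\} \quad\Longleftrightarrow\quad \bar{a}^T x + \|P^T x\|_2 \le b, $$

so the robust version of a linear program becomes a second-order cone program, which remains efficiently solvable.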

1,863 citations


Journal ArticleDOI
TL;DR: This article reviews identifiability analysis methodologies for nonlinear ODE models developed in the past one to two decades, including structural identifiability analysis, practical identifiability analysis, and sensitivity-based identifiability analysis.
Abstract: Ordinary differential equations (ODEs) are a powerful tool for modeling dynamic processes with wide applications in a variety of scientific fields. Over the last two decades, ODEs have also emerged as a prevailing tool in various biomedical research fields, especially in infectious disease modeling. In practice, it is important and necessary to determine unknown parameters in ODE models based on experimental data. Identifiability analysis is the first step in determining unknown parameters in ODE models and such analysis techniques for nonlinear ODE models are still under development. In this article, we review identifiability analysis methodologies for nonlinear ODE models developed in the past couple of decades, including structural identifiability analysis, practical identifiability analysis, and sensitivity-based identifiability analysis. Some advanced topics and ongoing research are also briefly reviewed. Finally, some examples from modeling viral dynamics of HIV and influenza viruses are given to illustrate how to apply these identifiability analysis methods in practice.
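The sensitivity-based approach mentioned in the abstract can be illustrated with a short sketch. The two-parameter decay model, observation function, and tolerances below are hypothetical stand-ins, not examples from the paper; a nearly rank-deficient sensitivity matrix signals parameter combinations the data cannot distinguish.

# A minimal sketch of sensitivity-based identifiability analysis for an ODE model
# (illustrative model x' = -a*x with observation y = b*x).
import numpy as np
from scipy.integrate import solve_ivp

def simulate(params, t_obs, x0=1.0):
    a, b = params
    sol = solve_ivp(lambda t, x: -a * x, (0, t_obs[-1]), [x0],
                    t_eval=t_obs, rtol=1e-10, atol=1e-12)
    return b * sol.y[0]              # observation y(t) = b * x(t)

t_obs = np.linspace(0.1, 5.0, 25)
theta = np.array([0.8, 2.0])         # nominal parameter values (a, b)

# Finite-difference sensitivities dy/dtheta_i, stacked into an (n_obs x n_par) matrix.
eps = 1e-5
S = np.column_stack([
    (simulate(theta + eps * np.eye(2)[i], t_obs) - simulate(theta, t_obs)) / eps
    for i in range(2)
])

# Near-zero singular values (relative to the largest) indicate practically
# non-identifiable parameter combinations.
sv = np.linalg.svd(S, compute_uv=False)
print("singular values:", sv, "condition number:", sv[0] / sv[-1])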

506 citations


Journal ArticleDOI
TL;DR: This study examines the importance of common high school affiliation at large state universities and the varying degrees of influence that common major can have on the social structure at different universities, indicating that university networks typically have multiple organizing factors rather than a single dominant one.
Abstract: We study the structure of social networks of students by examining the graphs of Facebook “friendships” at five U.S. universities at a single point in time. We investigate the community structure of each single-institution network and employ visual and quantitative tools, including standardized pair-counting methods, to measure the correlations between the network communities and a set of self-identified user characteristics (residence, class year, major, and high school). We review the basic properties and statistics of the employed pair-counting indices and recall, in simplified notation, a useful formula for the $z$-score of the Rand coefficient. Our study illustrates how to examine different instances of social networks constructed in similar environments, emphasizes the array of social forces that combine to form “communities,” and leads to comparative observations about online social structures, which reflect offline social structures. We calculate the relative contributions of different characteristics to the community structure of individual universities and compare these relative contributions at different universities. For example, we examine the importance of common high school affiliation at large state universities and the varying degrees of influence that common major can have on the social structure at different universities. The heterogeneity of the communities that we observe indicates that university networks typically have multiple organizing factors rather than a single dominant one.
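The pair-counting comparison used in the study can be illustrated with a small sketch. The node labels below are toy data, not the Facebook networks: the Rand coefficient is simply the fraction of node pairs that two partitions classify consistently.

# A small sketch of the pair-counting Rand coefficient between a detected
# community assignment and a self-identified attribute (toy labels only).
from itertools import combinations

communities = {1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}   # detected communities
dorms       = {1: "X", 2: "X", 3: "X", 4: "Y", 5: "Y"}   # self-identified attribute

agree = total = 0
for u, v in combinations(communities, 2):
    same_comm = communities[u] == communities[v]
    same_dorm = dorms[u] == dorms[v]
    agree += (same_comm == same_dorm)   # pair classified consistently by both partitions
    total += 1

print("Rand coefficient:", agree / total)   # fraction of consistently classified pairs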

476 citations


Journal ArticleDOI
TL;DR: A review and critical analysis of the mathematical literature concerning the modeling of vehicular traffic and crowd phenomena is presented, concluding with a discussion of research perspectives on the development of a unified modeling strategy.
Abstract: This paper presents a review and critical analysis of the mathematical literature concerning the modeling of vehicular traffic and crowd phenomena. The survey of models deals with the representation scales and the mathematical frameworks that are used for the modeling approach. The paper also considers the challenging objective of modeling complex systems consisting of large systems of individuals interacting in a nonlinear manner, where one of the modeling difficulties is the fact that these systems are difficult to model at a global level when based only on the description of the dynamics of individual elements. The review is concluded with a critical analysis focused on research perspectives that consider the development of a unified modeling strategy.

434 citations


Journal ArticleDOI
TL;DR: The Gaussian plume model is reviewed, its derivation from the advection-diffusion equation is discussed, and the key properties of the plume solution are summarized; the results are then applied to solving an inverse problem in which emission source rates are determined from a given set of ground-level contaminant measurements.
Abstract: The Gaussian plume model is a standard approach for studying the transport of airborne contaminants due to turbulent diffusion and advection by the wind. This paper reviews the assumptions underlying the model, its derivation from the advection-diffusion equation, and the key properties of the plume solution. The results are then applied to solving an inverse problem in which emission source rates are determined from a given set of ground-level contaminant measurements. This source identification problem can be formulated as an overdetermined linear system of equations that is most easily solved using the method of least squares. Various generalizations of this problem are discussed, and we illustrate our results with an application to the study of zinc emissions from a smelting operation.
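The least-squares formulation of the source identification problem can be sketched as follows. The plume kernel uses the standard ground-level Gaussian plume form, but the dispersion coefficients, source locations, and measurement data are made-up placeholders, not the smelter data from the paper.

# Each ground-level receptor sees a linear combination of the unknown emission rates q_j,
# so stacking the plume kernel into a matrix A gives an overdetermined system A q ~= c
# that is solved by least squares.
import numpy as np

def plume_kernel(receptor, source, u=5.0, H=10.0):
    # Contribution of a unit-rate source to the ground-level (z = 0) concentration,
    # with reflection at the ground giving the factor 2.
    x, y = receptor[0] - source[0], receptor[1] - source[1]
    if x <= 0:                                   # receptor upwind of the source sees nothing
        return 0.0
    sigma_y, sigma_z = 0.08 * x, 0.06 * x        # crude linear dispersion coefficients
    return (np.exp(-y**2 / (2 * sigma_y**2)) * 2 * np.exp(-H**2 / (2 * sigma_z**2))
            / (2 * np.pi * u * sigma_y * sigma_z))

sources   = [(0.0, 0.0), (50.0, 30.0)]           # hypothetical stack locations
receptors = [(200.0, y) for y in (-50.0, 0.0, 50.0, 100.0)]
q_true    = np.array([3.0, 1.5])                 # "unknown" emission rates

A = np.array([[plume_kernel(r, s) for s in sources] for r in receptors])
c = A @ q_true + 1e-9 * np.random.default_rng(0).standard_normal(len(receptors))

q_est, *_ = np.linalg.lstsq(A, c, rcond=None)
print("recovered emission rates:", q_est)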

281 citations


Journal ArticleDOI
TL;DR: An asymmetric form of RIP that gives tighter bounds than the usual symmetric one is presented, and the best known bounds on the RIP constants for matrices from the Gaussian ensemble are given.
Abstract: Compressed sensing (CS) seeks to recover an unknown vector with $N$ entries by making far fewer than $N$ measurements; it posits that the number of CS measurements should be comparable to the information content of the vector, not simply $N$. CS combines directly the important task of compression with the measurement task. Since its introduction in 2004 there have been hundreds of papers on CS, a large fraction of which develop algorithms to recover a signal from its compressed measurements. Because of the paradoxical nature of CS—exact reconstruction from seemingly undersampled measurements—it is crucial for acceptance of an algorithm that rigorous analyses verify the degree of undersampling the algorithm permits. The restricted isometry property (RIP) has become the dominant tool used for the analysis in such cases. We present here an asymmetric form of RIP that gives tighter bounds than the usual symmetric one. We give the best known bounds on the RIP constants for matrices from the Gaussian ensemble. Our derivations illustrate the way in which the combinatorial nature of CS is controlled. Our quantitative bounds on the RIP allow precise statements as to how aggressively a signal can be undersampled, the essential question for practitioners. We also document the extent to which RIP gives precise information about the true performance limits of CS, by comparison with approaches from high-dimensional geometry.
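The asymmetric restricted-isometry behavior of a Gaussian matrix can be probed empirically at small sizes. The sketch below uses illustrative dimensions and, because it only samples random submatrices, yields lower bounds on the true constants rather than the rigorous bounds developed in the paper.

# Estimate asymmetric RIP behavior of a Gaussian matrix: for random k-column submatrices
# A_S of A (entries N(0, 1/n)), record the extreme eigenvalues of A_S^T A_S.
import numpy as np

rng = np.random.default_rng(0)
n, N, k, trials = 64, 256, 8, 2000
A = rng.standard_normal((n, N)) / np.sqrt(n)

lo, hi = np.inf, -np.inf
for _ in range(trials):
    S = rng.choice(N, size=k, replace=False)
    sv = np.linalg.svd(A[:, S], compute_uv=False)
    lo, hi = min(lo, sv[-1] ** 2), max(hi, sv[0] ** 2)

# Asymmetric constants: 1 - lo tracks the lower deviation, hi - 1 the upper one.
print(f"empirical RIP estimates: delta_min ~ {1 - lo:.3f}, delta_max ~ {hi - 1:.3f}")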

188 citations


Journal ArticleDOI
TL;DR: MREIT is reviewed from its mathematical framework to the most recent human experiment outcomes; numerical simulations of the harmonic $B_z$ algorithm showed that high-resolution conductivity image reconstructions are possible.
Abstract: Magnetic resonance electrical impedance tomography (MREIT) is a recently developed medical imaging modality visualizing conductivity images of an electrically conducting object. MREIT was motivated by the well-known ill-posedness of the image reconstruction problem of electrical impedance tomography (EIT). Numerous experiences have shown that practically measurable data sets in an EIT system are insufficient for a robust reconstruction of a high-resolution static conductivity image due to its ill-posed nature and the influences of errors in forward modeling. To overcome the inherent ill-posed characteristics of EIT, the MREIT system was proposed in the early 1990s to use the internal data of magnetic flux density ${\bf B}=(B_x,B_y,B_z)$, which is induced by an externally injected current. MREIT uses an MRI scanner as a tool to measure the $z$-component $B_z$ of the magnetic flux density, where $z$ is the axial magnetization direction of the MRI scanner. In 2001, a constructive $B_z$-based MREIT algorithm called the harmonic $B_z$ algorithm was developed and its numerical simulations showed that high-resolution conductivity image reconstructions are possible. This novel algorithm is based on the key observation that the Laplacian $\Delta B_z$ probes changes in the log of the conductivity distribution along any equipotential curve having its tangent to the vector field ${\bf J}\times (0,0,1)$, where ${\bf J}=(J_x,J_y,J_z)$ is the induced current density vector. Since then, imaging techniques in MREIT have advanced rapidly and have now reached the stage of in vivo animal and human experiments. This paper reviews MREIT from its mathematical framework to the most recent human experiment outcomes.

180 citations


Journal ArticleDOI
TL;DR: It is shown that a simple adaptation of a consensus algorithm leads to an averaging algorithm, and lower bounds on the worst-case convergence time for various classes of linear, time-invariant, distributed consensus methods are proved.
Abstract: We study the convergence speed of distributed iterative algorithms for the consensus and averaging problems, with emphasis on the latter. We first consider the case of a fixed communication topology. We show that a simple adaptation of a consensus algorithm leads to an averaging algorithm. We prove lower bounds on the worst-case convergence time for various classes of linear, time-invariant, distributed consensus methods, and provide an algorithm that essentially matches those lower bounds. We then consider the case of a time-varying topology, and provide a polynomial-time averaging algorithm.
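The linear averaging iterations analyzed in the paper can be illustrated with a short sketch. The ring topology and equal-weight matrix below are illustrative choices, not the paper's constructions; any doubly stochastic weight matrix on a connected graph preserves the mean and drives all values to it.

# A sketch of linear distributed averaging on a fixed topology: each node repeatedly
# replaces its value with a weighted average of its own and its neighbors' values.
import numpy as np

n = 6
values = np.arange(1.0, n + 1)           # initial values; their mean is preserved

# Doubly stochastic weight matrix for a ring: each node averages with its two neighbors.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 1 / 3
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1 / 3

x = values.copy()
for t in range(200):
    x = W @ x                            # one synchronous averaging round

print("limit:", x, "true average:", values.mean())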

164 citations


Journal ArticleDOI
TL;DR: It is shown that no stable procedure for approximating functions from equally spaced samples can converge exponentially for analytic functions, and one must settle for root-exponential convergence to avoid instability.
Abstract: It is shown that no stable procedure for approximating functions from equally spaced samples can converge exponentially for analytic functions. To avoid instability, one must settle for root-exponential convergence. The proof combines a Bernstein inequality of 1912 with an estimate due to Coppersmith and Rivlin in 1992.

143 citations


Journal ArticleDOI
TL;DR: The key idea of the paper is to attribute sliding bifurcations to singularities in the manifold's projection along the flow, namely, to points where the projection contains folds, cusps, and two-folds (saddles and bowls).
Abstract: Using the singularity theory of scalar functions, we derive a classification of sliding bifurcations in piecewise-smooth flows. These are global bifurcations which occur when distinguished orbits become tangent to surfaces of discontinuity, called switching manifolds. The key idea of the paper is to attribute sliding bifurcations to singularities in the manifold's projection along the flow, namely, to points where the projection contains folds, cusps, and two-folds (saddles and bowls). From the possible local configurations of orbits we obtain sliding bifurcations. In this way we derive a complete classification of generic one-parameter sliding bifurcations at a smooth codimension one switching manifold in $n$ dimensions for $n\ge3$. We uncover previously unknown sliding bifurcations, all of which are catastrophic in nature. We also describe how the method can be extended to sliding bifurcations of codimension two or higher.

Journal ArticleDOI
TL;DR: Just when modern computers were being invented, John von Neumann and Herman Goldstine wrote a paper to illustrate the mathematical analyses that they believed would be needed to use the new machines effectively and to guide the development of still faster computers.
Abstract: Just when modern computers (digital, electronic, and programmable) were being invented, John von Neumann and Herman Goldstine wrote a paper to illustrate the mathematical analyses that they believed would be needed to use the new machines effectively and to guide the development of still faster computers. Their foresight and the congruence of historical events made their work the first modern paper in numerical analysis. Von Neumann once remarked that to found a mathematical theory one had to prove the first theorem, which he and Goldstine did for the accuracy of mechanized Gaussian elimination—but their paper was about more than that. Von Neumann and Goldstine described what they surmised would be the significant questions once computers became available for computational science, and they suggested enduring ways to answer them.

Journal ArticleDOI
TL;DR: The problem of optimizing rod stirring devices from a topological viewpoint is discussed; for the cost given by topological entropy per braid-group generator, the optimal growth rate is shown to be the logarithm of the golden ratio, and for a more realistic cost it is the logarithm of the silver ratio.
Abstract: There are many industrial situations where rods are used to stir a fluid, or where rods repeatedly knead a material such as bread dough or taffy. The goal in these applications is to stretch either material lines (in a fluid) or the material itself (for dough or taffy) as rapidly as possible. The growth rate of material lines is conveniently given by the topological entropy of the rod motion. We discuss the problem of optimizing such rod devices from a topological viewpoint. We express rod motions in terms of generators of the braid group and assign a cost based on the minimum number of generators needed to write the braid. We show that for one cost function—the topological entropy per generator—the optimal growth rate is the logarithm of the golden ratio. For a more realistic cost function, involving the topological entropy per operation where rods are allowed to move together, the optimal growth rate is the logarithm of the silver ratio, $1+\sqrt{2}$. We show how to construct devices that realize this optimal growth, which we call silver mixers.

Journal ArticleDOI
TL;DR: The symbol smoothness conditions obeyed by many operators arising from smooth linear partial differential equations allow fast-converging, nonasymptotic expansions in suitable systems of rational Chebyshev functions or hierarchical splines.
Abstract: This paper deals with efficient numerical representation and manipulation of differential and integral operators as symbols in phase-space, i.e., functions of space $x$ and frequency $\xi$. The symbol smoothness conditions obeyed by many operators in connection to smooth linear partial differential equations allow fast-converging, nonasymptotic expansions in adequate systems of rational Chebyshev functions or hierarchical splines to be written. The classical results of closedness of such symbol classes under multiplication, inversion, and taking the square root translate into practical iterative algorithms for realizing these operations directly in the proposed expansions. Because symbol-based numerical methods handle operators and not functions, their complexity depends on the desired resolution $N$ very weakly, typically only through $\log N$ factors. We present three applications to computational problems related to wave propagation: (1) preconditioning the Helmholtz equation, (2) decomposing wave fields into one-way components, and (3) depth extrapolation in reflection seismology. The software is made available in the software sections of math.mit.edu/$\sim$laurent and www.math.utexas.edu/users/lexing.

Journal ArticleDOI
TL;DR: This review of recent developments of fast analytical methods for macroscopic electrostatic calculations in biological applications, including the Poisson-Boltzmann and the generalized Born models for electrostatic solvation energy, focuses on analytical approaches for hybrid solvation models.
Abstract: We review recent developments of fast analytical methods for macroscopic electrostatic calculations in biological applications, including the Poisson-Boltzmann (PB) and the generalized Born models for electrostatic solvation energy. The focus is on analytical approaches for hybrid solvation models, especially the image charge method for a spherical cavity, and also the generalized Born theory as an approximation to the PB model. This review places much emphasis on the mathematical details behind these methods.
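For orientation, the generalized Born approximation mentioned above is commonly written in the Still form, quoted here from the general literature (up to unit conventions; the review may use different notation):

$$ \Delta G_{\mathrm{GB}} = -\frac{1}{2}\left(\frac{1}{\epsilon_{\mathrm{in}}}-\frac{1}{\epsilon_{\mathrm{out}}}\right) \sum_{i,j}\frac{q_i q_j}{f_{\mathrm{GB}}(r_{ij})}, \qquad f_{\mathrm{GB}}(r_{ij}) = \sqrt{r_{ij}^2 + R_i R_j\, e^{-r_{ij}^2/(4R_iR_j)}}, $$

where the $q_i$ are atomic partial charges, the $R_i$ are effective Born radii, and $\epsilon_{\mathrm{in}}$, $\epsilon_{\mathrm{out}}$ are the solute and solvent dielectric constants.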

Journal ArticleDOI
TL;DR: It is shown that, in terms of similarity, or scaling, variables in an algebraically weighted $L^2$ space, the self-similar diffusion waves correspond to a one-dimensional global center manifold of stationary solutions, through each of which passes an attractive invariant manifold corresponding to the diffusive N-waves.
Abstract: The large-time behavior of solutions to the Burgers equation with small viscosity is described using invariant manifolds. In particular, a geometric explanation is provided for a phenomenon known as metastability, which in the present context means that solutions spend a very long time near the family of solutions known as diffusive N-waves before finally converging to a stable self-similar diffusion wave. More precisely, it is shown that in terms of similarity, or scaling, variables in an algebraically weighted $L^2$ space, the self-similar diffusion waves correspond to a one-dimensional global center manifold of stationary solutions. Through each of these fixed points there exists a one-dimensional, global, attractive, invariant manifold corresponding to the diffusive N-waves. Thus, metastability corresponds to a fast transient in which solutions approach this “metastable” manifold of diffusive N-waves, followed by a slow decay along this manifold, and, finally, convergence to the self-similar diffusion wave.
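For reference, the viscous Burgers equation and the similarity (scaling) variables alluded to above take the standard form (notation follows common usage and may differ from the paper's conventions):

$$ u_t + u u_x = \nu u_{xx}, \qquad u(x,t) = \frac{1}{\sqrt{1+t}}\, w(\xi,\tau), \quad \xi = \frac{x}{\sqrt{1+t}}, \quad \tau = \log(1+t), $$

so that the self-similar diffusion waves become stationary solutions $w = w(\xi)$ of the rescaled equation, and convergence to them can be studied as convergence to fixed points.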

Journal ArticleDOI
TL;DR: The content of a CSE curriculum, the skills needed by successful graduates, the structure and experiences of some recently developed CSE undergraduate programs, and the potential career paths following a C SE undergraduate education are outlined.
Abstract: It is widely acknowledged that computational science and engineering (CSE) will play a critical role in the future of the scientific discovery process and engineering design. However, in recent years computational skills have been deemphasized in the curricula of many undergraduate programs in science and engineering. There is a clear need to provide training in CSE fundamentals at the undergraduate level. An undergraduate CSE program can train students for careers in industry, education, and for graduate CSE study. The courses developed for such a program will have an impact throughout the science, technology, engineering, and mathematics (STEM) undergraduate curriculum. This paper outlines the content of a CSE curriculum, the skills needed by successful graduates, the structure and experiences of some recently developed CSE undergraduate programs, and the potential career paths following a CSE undergraduate education.

Journal ArticleDOI
TL;DR: A collection of typical examples shows the exotic behavior of numerical methods when applied to singular perturbation problems, even on layer-adapted meshes.
Abstract: A collection of typical examples shows the exotic behavior of numerical methods when applied to singular perturbation problems. While standard meshes are used in the first six examples, even on layer-adapted meshes several surprising phenomena are shown to occur.

Journal ArticleDOI
TL;DR: It is shown that linear probing using a 2-wise independent hash function may have expected logarithmic cost per operation, whereas 5-wise independence is enough to ensure constant expected time per operation.
Abstract: Hashing with linear probing dates back to the 1950s and is among the most studied algorithms for storing (key, value) pairs. In recent years it has become one of the most important hash table organizations since it uses the cache of modern computers very well. Unfortunately, previous analyses rely either on complicated and space-consuming hash functions, or on the unrealistic assumption of free access to a hash function with random and independent function values. Carter and Wegman, in their seminal paper on universal hashing, raised the question of extending their analysis to linear probing. However, we show in this paper that linear probing using a 2-wise independent hash function may have expected logarithmic cost per operation. Recently, Patrascu and Thorup have shown that 3- and 4-wise independent hash functions may also give rise to logarithmic expected query time. On the positive side, we show that 5-wise independence is enough to ensure constant expected time per operation. This resolves the question of finding a space and time efficient hash function that provably ensures good performance for hashing with linear probing.
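The table organization being analyzed can be sketched in a few lines. The simple multiplicative hash below is only a placeholder, not the k-wise independent families studied in the paper, and the sketch omits deletion and resizing.

# A minimal sketch of hashing with linear probing (open addressing): on a collision,
# the operation scans forward to the next empty slot.
class LinearProbingTable:
    def __init__(self, capacity=16):
        self.slots = [None] * capacity          # each slot holds (key, value) or None

    def _hash(self, key):
        return (key * 2654435761) % len(self.slots)   # placeholder hash function

    def insert(self, key, value):
        i = self._hash(key)
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) % len(self.slots)       # linear probe: step to the next slot
        self.slots[i] = (key, value)

    def lookup(self, key):
        i = self._hash(key)
        while self.slots[i] is not None:
            if self.slots[i][0] == key:
                return self.slots[i][1]
            i = (i + 1) % len(self.slots)
        return None

table = LinearProbingTable()
for k in (3, 19, 35):                           # keys chosen to collide under the placeholder hash
    table.insert(k, str(k))
print(table.lookup(19))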

Journal ArticleDOI
TL;DR: Divide-and-conquer (D&C) as discussed by the authors is a proportional cake-cutting algorithm that minimizes the maximum number of players that any single player can envy.
Abstract: We analyze a class of proportional cake-cutting algorithms that use a minimal number of cuts ($n-1$ if there are $n$ players) to divide a cake that the players value along one dimension. While these algorithms may not produce an envy-free or efficient allocation—as these terms are used in the fair-division literature—one, divide-and-conquer (D&C), minimizes the maximum number of players that any single player can envy. It works by asking $n \ge 2$ players successively to place marks on a cake—valued along a line—that divide it into equal halves (when $n$ is even) or nearly equal halves (when $n$ is odd), then halves of these halves, and so on. Among other properties, D&C ensures players of at least $1/n$ shares, as they each value the cake, if and only if they are truthful. However, D&C may not allow players to obtain proportional, connected pieces if they have unequal entitlements. Possible applications of D&C to land division are briefly discussed.
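The recursion described in the abstract can be sketched as follows. The players and their cumulative value functions are hypothetical, and the sketch only illustrates the mark-and-cut recursion with a proportional guarantee for truthful players; it does not reproduce the paper's full analysis of envy or entitlements.

# A sketch of divide-and-conquer proportional cake cutting on [0, 1]. Each player is a
# (name, F) pair, where F is an increasing cumulative value function with F(0)=0, F(1)=1.
from scipy.optimize import brentq

def divide(players, a, b, assignment):
    if len(players) == 1:
        name, _ = players[0]
        assignment[name] = (a, b)
        return
    n = len(players)
    k = (n + 1) // 2                     # number of players sent to the left piece
    marks = []
    for name, F in players:
        total = F(b) - F(a)
        # Mark m where the player's value of [a, m] is a k/n fraction of their value of [a, b].
        m = brentq(lambda x: F(x) - (F(a) + (k / n) * total), a, b)
        marks.append((m, name, F))
    marks.sort()
    cut = marks[k - 1][0]                # cut at the k-th smallest mark
    left  = [(name, F) for m, name, F in marks[:k]]
    right = [(name, F) for m, name, F in marks[k:]]
    divide(left, a, cut, assignment)
    divide(right, cut, b, assignment)

# Example: two players with uniform value and one who values the right end more.
players = [("P1", lambda x: x), ("P2", lambda x: x), ("P3", lambda x: x**2)]
assignment = {}
divide(players, 0.0, 1.0, assignment)
print(assignment)                        # each truthful player gets a piece worth >= 1/3 to them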

Journal ArticleDOI
TL;DR: A mathematical model is proposed for an infestation of a wooded area by a beetle species in which the larvae develop deep in the wood of living trees; if the infestation is successfully eradicated, a number of trees always completely escape infestation.
Abstract: We propose a mathematical model for an infestation of a wooded area by a beetle species in which the larvae develop deep in the wood of living trees. Due to the difficulties of detection, we presume that only a certain proportion of infested trees will be detected and that detection, if it happens, will occur only after some delay, which could be long. An infested tree once detected is immediately cut down and burned. The model is stage structured and contains a second time delay, which is the development time of the beetle from egg to adult. There is a delicate interplay between the two time delays due to the possibility in one case for a larva to mature even in a tree destined for destruction. We present conditions sufficient for infestation eradication and discuss the significance of the conditions, particularly in terms of the proportion of infested trees that need to be detected and removed. If the infestation is successfully eradicated, there are always a number of trees that completely escape infestation, and we compute lower bounds and an approximation for this number. Finally, we present the results of some numerical simulations.

Journal ArticleDOI
TL;DR: This work revisits a physiological standing gradient problem of Lin and Segel with a view to giving it an up-to-date perspective and shows that the problem can be analyzed using the tools of singular perturbation theory and matched asymptotic expansions.
Abstract: We revisit a physiological standing gradient problem of Lin and Segel from their landmark text on mathematical modeling [C. C. Lin and L. A. Segel, Mathematics Applied to Deterministic Problems in the Natural Sciences, SIAM, Philadelphia, 1988] with a view to giving it an up-to-date perspective. In particular, via an alternative nondimensionalization, we show that the problem can be analyzed using the tools of singular perturbation theory and matched asymptotic expansions. In the spirit of the aforementioned authors, the development is didactic in style. Solving the problem requires many of the necessary skills of continuous modern mathematical modeling: formulation from a physical description of the process, scaling and asymptotic simplification, and solution using advanced perturbation (boundary layer) techniques.

Journal ArticleDOI
TL;DR: The two papers in this issue are concerned with the analysis of differential equations: the first discusses the solution of partial differential equations that describe heat transfer, while the second analyzes hybrid dynamical systems whose behavior alternates between continuous and discrete modes.
Abstract: The two papers in this issue are concerned with the analysis of differential equations: the first paper discusses the solution of partial differential equations that describe heat transfer, while the second one analyzes hybrid dynamical systems whose behavior alternates between continuous and discrete modes. The paper “Application of Standard and Refined Heat Balance Integral Methods to One-Dimensional Stefan Problems,” by Sarah Mitchell and Tim Myers, deals with a heat transfer problem on a semi-infinite region. Consider an experiment on a long metal bar occupying the positive real axis. The experiment starts by heating the origin, thereby raising the bar above the melting temperature. The bar melts near the origin, and as the heat diffuses, the solid-liquid interface propagates slowly but surely towards infinity. The temperature is modeled by heat equations across the two regions, and by a Stefan condition at their interface. The authors use heat balance integral methods to solve for the temperature. These methods reduce a partial differential equation to an ordinary differential equation. The authors investigate different boundary conditions and different approximating functions for the temperature. Readers interested in heat balance integral methods will find this to be a valuable survey with new results that preserve the simplicity of the method. The second paper is concerned with so-called hybrid dynamical systems. A bouncing ball, for instance, is a hybrid dynamical system. The movement of the ball above ground can be described by Newton's law. However, at the very moment the ball hits the ground and bounces, an instantaneous reversal of velocity occurs along with some dissipation of energy. After the bounce, the ball moves again according to Newton's law until the next bounce, and so on. Mathematically one can show that the time points at which the ball bounces represent a convergent sequence. The convergence of this sequence implies that infinitely many bounces occur in a finite amount of time. This is called “Zeno behavior”: infinitely many switches of mode in a finite amount of time. If Zeno behavior occurs in a control system, a numerical simulation of the system is extremely difficult, if not impossible. In the terminology of dynamical systems, the movement of the ball above ground is a “flow” and the bounce is a “jump.” Hybrid dynamical systems alternate between continuous (flow) and discrete (jump) modes. Rafal Goebel and Andrew Teel, in their paper “Preasymptotic Stability and Homogeneous Approximations of Hybrid Dynamical Systems,” model hybrid dynamical systems and approximate them by simpler systems obtained from linearization and tangent cones. The authors analyze preasymptotic stability, homogeneity, and convergence. A variety of well-chosen simple examples helps us to understand the general concepts and results.
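The Zeno behavior described above can be checked with a short computation; the drop height, gravity, and restitution coefficient below are arbitrary illustrative values. With restitution less than one, the flight times shrink geometrically, so infinitely many bounces occur before a finite "Zeno time."

# A sketch of the bouncing-ball example: the bounce times form a convergent sequence,
# so infinitely many mode switches accumulate before the finite Zeno time.
import math

g, h0, e = 9.81, 1.0, 0.7            # gravity, drop height, coefficient of restitution

t = math.sqrt(2 * h0 / g)            # time of the first impact
v = math.sqrt(2 * g * h0)            # impact speed at the first bounce
for k in range(1, 31):
    v *= e                           # instantaneous velocity reversal with energy loss
    t += 2 * v / g                   # flight time of the k-th bounce arc
    if k % 10 == 0:
        print(f"after {k:2d} bounces, elapsed time = {t:.6f} s")

zeno_time = math.sqrt(2 * h0 / g) * (1 + 2 * e / (1 - e))
print(f"Zeno time (sum of the geometric series) = {zeno_time:.6f} s")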

Journal ArticleDOI
TL;DR: Two papers of the analytic kind are presented: the first deals with symbol calculus, and the second with compressed sensing.
Abstract: The two papers in this issue are of the analytic kind. The first one deals with symbol calculus, and the second one with compressed sensing. In their paper Discrete Symbol Calculus, Laurent Demanet...