
Showing papers in "Siam Review in 2015"


Journal ArticleDOI
TL;DR: Model reduction aims to reduce the computational burden of large-scale simulation by generating reduced models that are faster and cheaper to simulate, yet accurately represent the original large-scale system behavior. Model reduction of linear, nonparametric dynamical systems has reached a considerable level of maturity; this survey addresses the more recent, vibrant area of parametric model reduction.
Abstract: Numerical simulation of large-scale dynamical systems plays a fundamental role in studying a wide range of complex physical phenomena; however, the inherent large-scale nature of the models often leads to unmanageable demands on computational resources. Model reduction aims to reduce this computational burden by generating reduced models that are faster and cheaper to simulate, yet accurately represent the original large-scale system behavior. Model reduction of linear, nonparametric dynamical systems has reached a considerable level of maturity, as reflected by several survey papers and books. However, parametric model reduction has emerged only more recently as an important and vibrant research area, with several recent advances making a survey paper timely. Thus, this paper aims to provide a resource that draws together recent contributions in different communities to survey the state of the art in parametric model reduction methods. Parametric model reduction targets the broad class of problems for wh...
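The snapshot-based reduction idea in the abstract can be sketched with proper orthogonal decomposition (POD), one standard projection technique in this family; the snapshot matrix below is synthetic and all sizes are made-up illustration values, not taken from the paper.

```python
import numpy as np

# Synthetic snapshot matrix: solutions of a (hypothetical) large-scale
# system sampled at several parameter/time values, constructed with
# rapidly decaying singular values so a low-dimensional basis suffices.
rng = np.random.default_rng(0)
n, m = 200, 30                      # state dimension, number of snapshots
U0, _ = np.linalg.qr(rng.standard_normal((n, m)))
decay = 2.0 ** -np.arange(m)        # fast singular-value decay
X = U0 @ np.diag(decay) @ rng.standard_normal((m, m))

# POD: a truncated SVD of the snapshot matrix gives the reduced basis.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = 10
V = U[:, :r]                        # rank-r POD basis

# The projection error equals the energy in the discarded singular
# values: ||X - V V^T X||_F^2 = sum_{k>r} s_k^2.
err = np.linalg.norm(X - V @ (V.T @ X), "fro")
print(err, np.sqrt(np.sum(s[r:] ** 2)))
```

The printed pair agrees to machine precision, which is the standard a priori error statement used to choose the reduced dimension r.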

1,230 citations


Journal ArticleDOI
TL;DR: Google's PageRank method was developed to evaluate the importance of web pages via their link structure; the mathematics of PageRank, however, are entirely general and apply to any graph or network in any domain, so the same method can be used to evaluate importance in networks far beyond the web.
Abstract: Google's PageRank method was developed to evaluate the importance of web-pages via their link structure. The mathematics of PageRank, however, are entirely general and apply to any graph or network in any domain. Thus, PageRank is now regularly used in bibliometrics, social and information network analysis, and for link prediction and recommendation. It's even used for systems analysis of road networks, as well as biology, chemistry, neuroscience, and physics. We'll see the mathematics and ideas that unite these diverse applications.
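The "entirely general" mathematics amounts to a power iteration on a damped link matrix; a minimal sketch on a hypothetical four-node directed graph (the damping factor 0.85 is the customary choice, not a value from the paper):

```python
import numpy as np

# Tiny directed graph as an adjacency list: node -> outgoing links.
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n, alpha = 4, 0.85                  # damping factor

# Column-stochastic link matrix: P[j, i] = 1/outdeg(i) if i -> j.
P = np.zeros((n, n))
for i, outs in links.items():
    for j in outs:
        P[j, i] = 1.0 / len(outs)

# Power iteration on the PageRank equation x = alpha*P*x + (1-alpha)*v.
v = np.full(n, 1.0 / n)
x = v.copy()
for _ in range(100):
    x = alpha * (P @ x) + (1 - alpha) * v
print(x, x.sum())
```

Node 3, which nobody links to, ends up with the minimum score (1 - alpha)/n; nothing in the iteration refers to web pages, which is the point of the survey.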

569 citations


Journal ArticleDOI
TL;DR: This paper develops a novel framework for phase retrieval, a problem which arises in X-ray crystallography, diffraction imaging, astronomical imaging, and many other applications, and combines multiple structured illuminations with ideas from convex programming to recover the phase from intensity measurements.
Abstract: This paper develops a novel framework for phase retrieval, a problem which arises in X-ray crystallography, diffraction imaging, astronomical imaging, and many other applications. Our approach, cal...

533 citations


Journal ArticleDOI
TL;DR: This paper provides a rigorous mathematical formulation of the influence network dynamics, characterizes its equilibria, and establishes its convergence properties for all possible structures of the relative interpersonal weights and corresponding eigenvector centrality scores.
Abstract: This paper studies the evolution of self-appraisal, social power, and interpersonal influences for a group of individuals who discuss and form opinions about a sequence of issues. Our empirical model combines the averaging rule of DeGroot to describe opinion formation processes and the reflected appraisal mechanism of Friedkin to describe the dynamics of individuals' self-appraisal and social power. Given a set of relative interpersonal weights, the DeGroot--Friedkin model predicts the evolution of the influence network governing the opinion formation process. We provide a rigorous mathematical formulation of the influence network dynamics, characterize its equilibria, and establish its convergence properties for all possible structures of the relative interpersonal weights and corresponding eigenvector centrality scores. The model predicts that the social power ranking among individuals is asymptotically equal to their centrality ranking, that social power tends to accumulate at the top of the hierarchy,...
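The opinion-formation half of the model, the DeGroot averaging rule, can be sketched in a few lines; the influence weights and initial opinions below are hypothetical, and the self-appraisal feedback of the full DeGroot--Friedkin model is not included.

```python
import numpy as np

# Row-stochastic influence matrix W: W[i, j] is the weight agent i
# gives to agent j's opinion (made-up values for three agents).
W = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])
y = np.array([1.0, 0.0, -1.0])      # initial opinions on one issue

# DeGroot update: y(t+1) = W y(t).  For an irreducible, aperiodic W,
# opinions converge to a consensus weighted by the left dominant
# eigenvector of W -- the eigenvector centrality scores in the paper.
for _ in range(200):
    y = W @ y
print(y)
```

All three entries converge to the same value, and that value lies in the convex hull of the initial opinions; the centrality vector determines how much each agent's initial opinion counts.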

273 citations


Journal ArticleDOI
TL;DR: A wide range of problems can be modeled as Mixed Integer Linear Programming (MIP) problems using standard formulation techniques; in some cases, however, the resulting MIP can be either too weak or...
Abstract: A wide range of problems can be modeled as Mixed Integer Linear Programming (MIP) problems using standard formulation techniques. However, in some cases the resulting MIP can be either too weak or ...

211 citations



Journal ArticleDOI
TL;DR: The basic concepts of nonlinear composition and preconditioning are described, and a number of solvers applicable to nonlinear partial differential equations are presented; the performance gains from composed solvers can be substantial compared with standard Newton--Krylov methods.
Abstract: Most efficient linear solvers use composable algorithmic components, with the most common model being the combination of a Krylov accelerator and one or more preconditioners. A similar set of concepts may be used for nonlinear algebraic systems, where nonlinear composition of different nonlinear solvers may significantly improve the time to solution. We describe the basic concepts of nonlinear composition and preconditioning and present a number of solvers applicable to nonlinear partial differential equations. We have developed a software framework in order to easily explore the possible combinations of solvers. We show that the performance gains from using composed solvers can be substantial compared with gains from standard Newton--Krylov methods.
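One very simple instance of composing nonlinear solvers is wrapping an inner Newton step inside an outer backtracking line search; the sketch below uses a made-up 2x2 test system and is only meant to show the inner/outer composition pattern, not the paper's framework.

```python
import numpy as np

# Hypothetical test system F(x) = 0: a circle/hyperbola intersection.
def F(z):
    x, y = z
    return np.array([x**2 + y**2 - 4.0, x*y - 1.0])

def J(z):
    x, y = z
    return np.array([[2*x, 2*y], [y, x]])

# Outer solver: backtracking (globalization); inner solver: Newton step.
z = np.array([2.0, 1.0])
for _ in range(50):
    r = F(z)
    if np.linalg.norm(r) < 1e-12:
        break
    d = np.linalg.solve(J(z), -r)   # inner Newton direction
    t = 1.0
    while np.linalg.norm(F(z + t*d)) > (1 - 1e-4*t)*np.linalg.norm(r):
        t *= 0.5                    # outer composition: damp the step
        if t < 1e-8:
            break
    z = z + t*d
print(z, np.linalg.norm(F(z)))
```

The outer method never needs to know how the inner direction was produced, which is the composability the abstract exploits on a much larger scale.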

112 citations


Journal ArticleDOI
TL;DR: This survey concerns iterative solution methods for quadratic extremum problems and shows how the problem formulation leads to natural preconditioners which guarantee a fast rate of convergence of the relevant iterative methods.
Abstract: The solution of quadratic or locally quadratic extremum problems subject to linear(ized) constraints gives rise to linear systems in saddle point form. This is true whether in the continuous or the discrete setting, so saddle point systems arising from the discretization of partial differential equation problems, such as those describing electromagnetic problems or incompressible flow, lead to equations with this structure, as do, for example, interior point methods and the sequential quadratic programming approach to nonlinear optimization. This survey concerns iterative solution methods for these problems and, in particular, shows how the problem formulation leads to natural preconditioners which guarantee a fast rate of convergence of the relevant iterative methods. These preconditioners are related to the original extremum problem and their effectiveness---in terms of rapidity of convergence---is established here via a proof of general bounds on the eigenvalues of the preconditioned saddle point matri...
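A classical special case of the "natural preconditioners" with provable eigenvalue bounds is the block-diagonal preconditioner with the exact Schur complement: the preconditioned saddle point matrix then has only the three eigenvalues 1 and (1 +/- sqrt(5))/2 (Murphy, Golub, and Wathen). A small random instance, with made-up block sizes, verifies this numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 3
G = rng.standard_normal((n, n))
A = G @ G.T + n*np.eye(n)           # SPD (1,1) block
B = rng.standard_normal((m, n))     # full-rank constraint block

# Saddle point matrix K and the ideal block-diagonal preconditioner
# P = diag(A, S) with the exact Schur complement S = B A^{-1} B^T.
K = np.block([[A, B.T], [B, np.zeros((m, m))]])
S = B @ np.linalg.solve(A, B.T)
P = np.block([[A, np.zeros((n, m))], [np.zeros((m, n)), S]])

# Eigenvalues of P^{-1} K fall exactly on {1, (1+sqrt5)/2, (1-sqrt5)/2},
# so a preconditioned Krylov method converges in at most 3 iterations.
ev = np.sort(np.linalg.eigvals(np.linalg.solve(P, K)).real)
print(np.unique(ev.round(8)))
```

In practice S is replaced by a cheap spectrally equivalent approximation, which is what turns this three-eigenvalue identity into the eigenvalue bounds the survey proves.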

76 citations


Journal ArticleDOI
TL;DR: It is reported that, although Bayesian methods are robust when the number of possible outcomes is finite or when only a finite number of marginals of the data-generating distribution are unknown, they could be generically brittle when applied to continuous systems with finite information.
Abstract: With the advent of high-performance computing, Bayesian methods are becoming increasingly popular tools for the quantification of uncertainty throughout science and industry. Since these methods can impact the making of sometimes critical decisions in increasingly complicated contexts, the sensitivity of their posterior conclusions with respect to the underlying models and prior beliefs is a pressing question to which there currently exist positive and negative answers. We report new results suggesting that, although Bayesian methods are robust when the number of possible outcomes is finite or when only a finite number of marginals of the data-generating distribution are unknown, they could be generically brittle when applied to continuous systems (and their discretizations) with finite information on the data-generating distribution. If closeness is defined in terms of the total variation (TV) metric or the matching of a finite system of generalized moments, then (1) two practitioners who use arbitrarily close models and observe the same (possibly arbitrarily large amount of) data may reach opposite conclusions; and (2) any given prior and model can be slightly perturbed to achieve any desired posterior conclusion. The mechanism causing brittleness/robustness suggests that learning and robustness are antagonistic requirements, which raises the possibility of a missing stability condition when using Bayesian inference in a continuous world under finite information.

66 citations


Journal ArticleDOI
TL;DR: The amplitude of the gradient of a potential inside a wire cage is investigated, with particular attention to the 2D configuration of a ring of n disks of radius r held at equal potential.
Abstract: The amplitude of the gradient of a potential inside a wire cage is investigated, with particular attention to the 2D configuration of a ring of n disks of radius r held at equal potential. The Faraday shielding effect depends upon the wires having finite radius and is weaker than one might expect, scaling as | log r|/n in an appropriate regime of small r and large n. Both numerical results and a mathematical theorem are provided. By the method of multiple scales, a continuum approximation is then derived in the form of a homogenized boundary condition for the Laplace equation along a curve. The homogenized equation reveals that in a Faraday cage, charge moves so as to somewhat cancel an external field, but not enough for the cancellation to be fully effective. Physically, the effect is one of electrostatic induction in a surface of limited capacitance. An alternative discrete model of the effect is also derived based on a principle of energy minimization. Extensions to electromagnetic waves and 3D geometries are mentioned.

56 citations


Journal ArticleDOI
TL;DR: In this article, the Frisch-Kalman dictum is used to identify a noise contribution that allows a maximal number of simultaneous linear relations among variables in a noisy data set.
Abstract: We address the problem of identifying linear relations among variables based on noisy measurements. This is a central question in the search for structure in large data sets. Often a key assumption is that measurement errors in each variable are independent. This basic formulation has its roots in the work of Charles Spearman in 1904 and of Ragnar Frisch in the 1930s. Various topics such as errors-in-variables, factor analysis, and instrumental variables all refer to alternative viewpoints on this problem and on ways to account for the anticipated way that noise enters the data. In the present paper we begin by describing certain fundamental contributions by the founders of the field and provide alternative modern proofs to certain key results. We then go on to consider a modern viewpoint and novel numerical techniques to the problem. The central theme is expressed by the Frisch--Kalman dictum, which calls for identifying a noise contribution that allows a maximal number of simultaneous linear relations a...

Journal ArticleDOI
TL;DR: Through isostable reduction, this work is able to implement sophisticated control strategies in a high-dimensional model of cardiac activity for the termination of alternans, a precursor to cardiac fibrillation.
Abstract: Phase reduction methods have been tremendously useful for understanding the dynamics of nonlinear oscillators, but have been difficult to extend to systems with a stable fixed point, such as an excitable system. Using the notion of isostables, which measure the time it takes for a given initial condition in phase space to approach a stable fixed point, we present a general method for isostable reduction for excitable systems. We also devise an adjoint method for calculating infinitesimal isostable response curves, which are analogous to infinitesimal phase response curves for oscillatory systems. Through isostable reduction, we are able to implement sophisticated control strategies in a high-dimensional model of cardiac activity for the termination of alternans, a precursor to cardiac fibrillation.

Journal ArticleDOI
TL;DR: The course is based on realistic ecological principles, such as using nutrient concentration to measure populations together with explicit resource availability to constrain population growth, and it considers simple Lotka--Volterra systems within this theoretical framework.
Abstract: Mathematical biology/ecology teaching for undergraduates has generally relied on the Lotka--Volterra competition and predator-prey models to introduce students to population dynamics. Students are provided with an understanding of the application of dynamical system theory in simulating and understanding the behavior of the natural world, and they are provided with opportunities to practice phase plane analysis techniques such as determining the stability of equilibrium points and bifurcation analysis. This paper outlines a course in ecological modeling suitable for all students in the life sciences. The course is based on realistic ecological principles, such as using nutrient concentration to measure populations together with explicit resource availability to constrain population growth, and it considers simple Lotka--Volterra systems within this theoretical framework. An advantage of this approach is that the widely experimentally observed models of mixotrophy and mutualism can be naturally and simply ...
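The Lotka--Volterra competition system that anchors such a course is easy to explore numerically; the sketch below uses made-up parameters in the weak-competition regime (a12*a21 < 1), where phase-plane analysis predicts stable coexistence at the nullcline intersection.

```python
import numpy as np

# Two-species Lotka--Volterra competition:
#   dN1/dt = r1*N1*(1 - (N1 + a12*N2)/K1)
#   dN2/dt = r2*N2*(1 - (N2 + a21*N1)/K2)
# Hypothetical parameters with a12*a21 < 1 (weak competition).
r1, r2, K1, K2, a12, a21 = 1.0, 0.8, 10.0, 8.0, 0.5, 0.4

def rhs(N):
    N1, N2 = N
    return np.array([r1*N1*(1 - (N1 + a12*N2)/K1),
                     r2*N2*(1 - (N2 + a21*N1)/K2)])

# Forward Euler with a small step suffices for a phase-plane sketch.
N = np.array([1.0, 1.0])
dt = 0.01
for _ in range(10000):
    N = N + dt*rhs(N)

# Coexistence equilibrium from the nullclines:
# N1* + a12*N2* = K1,  N2* + a21*N1* = K2.
Nstar = np.linalg.solve(np.array([[1, a12], [a21, 1]]), np.array([K1, K2]))
print(N, Nstar)
```

The trajectory settles on the coexistence equilibrium, the kind of result students can then confirm by the linearized stability analysis the course teaches.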

Journal ArticleDOI
TL;DR: This article aims to describe in a unified framework all plateau-generating random effects models in terms of plausible distributions for the hazard and the random effect as well as the impact of frailty on the baseline hazard.
Abstract: This article aims to describe in a unified framework all plateau-generating random effects models in terms of (i) plausible distributions for the hazard (baseline mortality) and the random effect (unobserved heterogeneity, frailty) as well as (ii) the impact of frailty on the baseline hazard. Mortality plateaus result from multiplicative (proportional) and additive hazards, but not from accelerated failure time models. Frailty can have any distribution with regularly-varying-at-0 density and the distribution of frailty among survivors to each subsequent age converges to a gamma distribution. In a multiplicative setting the baseline cumulative hazard can be represented as the inverse of the negative logarithm of any completely monotone function. If the plateau is reached, the only meaningful solution at the plateau is provided by the gamma-Gompertz model.
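The plateau mechanism in the multiplicative (gamma-Gompertz) case can be checked directly: with a Gompertz baseline hazard and gamma frailty of mean 1 and variance s2, the population hazard converges to the plateau b/s2. Parameter values below are hypothetical.

```python
import math

# Gamma-Gompertz: baseline hazard mu(x) = a*exp(b*x), gamma frailty
# with mean 1 and variance s2.  The population (marginal) hazard is
#   mubar(x) = a*exp(b*x) / (1 + s2*(a/b)*(exp(b*x) - 1)),
# which approaches the plateau b/s2 as x -> infinity.
a, b, s2 = 1e-5, 0.1, 0.2           # made-up illustration values

def mubar(x):
    return a*math.exp(b*x) / (1 + s2*(a/b)*(math.exp(b*x) - 1))

for x in (60, 90, 120, 150):
    print(x, mubar(x))
print("plateau b/s2 =", b/s2)
```

The hazard rises roughly exponentially at first (the robust survive-the-frail effect has not yet kicked in) and then flattens at b/s2, exactly the mortality plateau the article characterizes.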

Journal ArticleDOI
TL;DR: In this paper, Allmaras et al. describe statistical methods and data analysis tools to check the choices of likelihood and prior distributions, and provide examples of how to compare Bayesian results with those obtained by non-Bayesian methods based on different types of assumptions.
Abstract: Most mathematical models include parameters that need to be determined from measurements. The estimated values of these parameters and their uncertainties depend on assumptions made about noise levels, models, or prior knowledge. But what can we say about the validity of such estimates, and the influence of these assumptions? This paper is concerned with methods to address these questions, and for didactic purposes it is written in the context of a concrete nonlinear parameter estimation problem. We will use the results of a physical experiment conducted by Allmaras et al. at Texas A&M University [M. Allmaras et al., SIAM Rev., 55 (2013), pp. 149--167] to illustrate the importance of validation procedures for statistical parameter estimation. We describe statistical methods and data analysis tools to check the choices of likelihood and prior distributions, and provide examples of how to compare Bayesian results with those obtained by non-Bayesian methods based on different types of assumptions. We explain...

Journal ArticleDOI
TL;DR: This work gives an accessible treatment of some criteria for the existence of such points (including the mountain pass lemma) and describes a method that can be used to find them.
Abstract: Variational methods find solutions of equations by considering a solution as a critical point of an appropriately chosen function. Local minima and maxima are well-known types of critical points. We explore methods for finding critical points that are neither local maxima nor minima, but instead are mountain passes or saddle points. Criteria for the existence of minima or maxima are well known, but those for mountain passes or saddle points are less well known. We give an accessible treatment of some criteria for the existence of such points (including the mountain pass lemma), as well as describe a method that could be used to find such points.
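One concrete way to locate a critical point that is neither a maximum nor a minimum is to run Newton's method on the gradient, which converges to whichever critical point is nearby, saddles included; the function below is a made-up example, not one from the paper.

```python
import numpy as np

# f(x, y) = (x^2 - 1)^2 + y^2 has minima at (+-1, 0) and a saddle at
# (0, 0).  Newton on grad f = 0 finds the saddle, which plain
# gradient descent never would.
def grad(z):
    x, y = z
    return np.array([4*x*(x**2 - 1), 2*y])

def hess(z):
    x, y = z
    return np.array([[12*x**2 - 4, 0.0], [0.0, 2.0]])

z = np.array([0.1, 0.1])            # start near the saddle
for _ in range(30):
    z = z - np.linalg.solve(hess(z), grad(z))

# The Hessian at the limit has one negative and one positive
# eigenvalue: a genuine saddle point, not a minimum or maximum.
print(z, np.linalg.eigvalsh(hess(z)))
```

The mountain pass lemma addresses the harder part the code sidesteps: proving such a point exists at all, and between which pair of "valleys" to look for it.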

Journal ArticleDOI
TL;DR: In this paper, a matrix-valued function that is analytic on some simply connected domain is considered, and the points of this domain at which the function becomes singular (its eigenvalues) are studied.
Abstract: Let $T : \Omega \rightarrow \mathbb{C}^{n \times n}$ be a matrix-valued function that is analytic on some simply connected domain $\Omega \subset \mathbb{C}$. A point $\lambda \in \Omega$ is an eig...
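A familiar concrete instance of such an analytic matrix-valued function is the quadratic T(lam) = lam^2*M + lam*C + K, whose eigenvalues can be found by companion linearization; the 2x2 matrices below are made up for illustration.

```python
import numpy as np

# Quadratic eigenvalue problem T(lam) = lam^2*M + lam*C + K
# (hypothetical small matrices, with M = I for simplicity).
M = np.eye(2)
C = np.array([[0.4, 0.1], [0.1, 0.3]])
K = np.array([[2.0, -1.0], [-1.0, 2.0]])

# Companion linearization: with M = I, the eigenvalues of the 4x4
# matrix [[0, I], [-K, -C]] are exactly the eigenvalues of T.
n = 2
L = np.block([[np.zeros((n, n)), np.eye(n)], [-K, -C]])
lams = np.linalg.eigvals(L)

# Each computed lam makes T(lam) singular: det T(lam) ~ 0.
for lam in lams:
    print(lam, abs(np.linalg.det(lam**2*M + lam*C + K)))
```

For general analytic T (not polynomial in lam) no finite linearization exists, which is what makes the nonlinear eigenvalue problem of the abstract genuinely harder.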

Journal ArticleDOI
TL;DR: In this article, a structural model for the European Union Emissions Trading Scheme (EU ETS) is presented, which is positioned between existing complex full equilibrium models and pure reduced-form models.
Abstract: We present a novel approach to the pricing of financial instruments in emission markets---for example, the European Union Emissions Trading Scheme (EU ETS). The proposed structural model is positioned between existing complex full equilibrium models and pure reduced-form models. Using an exogenously specified demand for a polluting good, it gives a causal explanation for the accumulation of CO$_2$ emissions and takes into account the feedback effect from the cost of carbon to the rate at which the market emits CO$_2$. We derive a forward-backward stochastic differential equation for the price process of the allowance certificate and solve the associated semilinear partial differential equation numerically. We also show that derivatives written on the allowance certificate satisfy a linear partial differential equation. The model is extended to emission markets with multiple compliance periods, and we analyze the impact different intertemporal connecting mechanisms, such as borrowing, banking, and withdraw...

Journal ArticleDOI
TL;DR: This article draws attention to the literature on the comparison of various perturbation methods for problems with multiple timescales.
Abstract: The method of multiple timescales is widely used in engineering and mathematical physics. In this article we draw attention to the literature on the comparison of various perturbation methods. We i...
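The flavor of the method is easy to convey on the textbook example of a weakly damped oscillator, where the leading-order two-timing approximation is exp(-eps*t/2)*cos(t); the comparison below against a direct RK4 integration is illustrative only and not taken from the article.

```python
import math

# Weakly damped oscillator x'' + eps*x' + x = 0, x(0)=1, x'(0)=0.
# Leading-order multiple-timescales approximation:
#   x(t) ~ exp(-eps*t/2) * cos(t).
eps = 0.05

def rk4_step(x, v, dt):
    def f(x, v):
        return v, -eps*v - x
    k1 = f(x, v)
    k2 = f(x + dt/2*k1[0], v + dt/2*k1[1])
    k3 = f(x + dt/2*k2[0], v + dt/2*k2[1])
    k4 = f(x + dt*k3[0], v + dt*k3[1])
    x += dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    v += dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return x, v

x, v, dt, err = 1.0, 0.0, 0.01, 0.0
for i in range(1, 5001):            # integrate to t = 50
    x, v = rk4_step(x, v, dt)
    t = i*dt
    approx = math.exp(-eps*t/2)*math.cos(t)
    err = max(err, abs(x - approx))
print("max |numeric - two-timing| =", err)
```

The discrepancy stays O(eps) out to t = 50, i.e. over many oscillation periods, whereas a naive regular perturbation expansion would have blown up by a secular term growing like eps*t.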

Journal ArticleDOI
TL;DR: It is proved that the infimum of $R$ is exactly $1/2$.
Abstract: Here we solve the problem posed by Comte and Lachand-Robert in [SIAM J. Math. Anal., 34 (2002), pp. 101--120]. Take a bounded domain $\Omega \subset \mathbb{R}^2$ and a piecewise smooth nonpositive function $u : \bar\Omega \to \mathbb{R}$ vanishing on $\partial\Omega$. Consider a flow of point particles falling vertically down and reflected elastically from the graph of $u$. It is assumed that each particle is reflected no more than once (no multiple reflections are allowed); then the resistance of the graph to the flow is expressed as $R(u;\Omega) = \frac{1}{|\Omega|} \int_\Omega (1 + |\nabla u(x)|^2)^{-1}\, dx$. We need to find $\inf_{\Omega,u} R(u;\Omega)$. One can easily see that $|\nabla u(x)| \le 1$, and therefore $R(u;\Omega) \ge 1/2$. We prove that the infimum of $R$ is exactly $1/2$. This result is somewhat paradoxical, and the proof is inspired by, and partly similar to, the paradoxical solution given by Besicovitch to the Kakeya problem [Amer. Math. Month...
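The resistance functional is concrete enough to evaluate for a specific shape; a sketch for the paraboloid dimple u(x) = -(1 - |x|^2)/2 on the unit disk (a made-up admissible example with |grad u| = |x| <= 1), where the integral reduces to a closed form:

```python
import math

# R(u; Omega) = (1/|Omega|) * integral of (1 + |grad u|^2)^(-1).
# For u(x) = -(1 - |x|^2)/2 on the unit disk, |grad u| = r, so in
# polar coordinates R = int_0^1 2*r/(1 + r^2) dr = ln 2 ~ 0.693,
# comfortably above the infimum 1/2 proved in the paper.
N = 100000
R = 0.0
for k in range(N):                  # midpoint rule in r
    r = (k + 0.5) / N
    R += 2.0 * r / (1.0 + r**2) / N
print(R, math.log(2.0), R > 0.5)
```

No single smooth shape attains 1/2; the paper's point is that a Besicovitch-style construction drives R arbitrarily close to it.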

Journal ArticleDOI
TL;DR: This note presents a game-theoretic justification of the homing guidance law called proportional navigation, in which a moving pursuer homes in on a moving target by turning with a rate proportional to the turn rate of the line-of-sight.
Abstract: This note presents a game-theoretic justification of the homing guidance law called proportional navigation, in which a moving pursuer homes in on a moving target by turning with a rate proportional to the turn rate of the line-of-sight. It also uses optimal control to show that the optimal evasive maneuver must jink, i.e., use a sequence of perfectly timed hard turns to the left and to the right. Throughout the note, the developments are at a level suitable for classroom use.
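The guidance law itself fits in a few lines: turn the pursuer at N times the line-of-sight rotation rate. The planar simulation below uses made-up parameters (gain, speeds, geometry) purely to illustrate the law, at the classroom level the note aims for.

```python
import math

# Toy planar pursuit under proportional navigation: heading rate of
# the pursuer = N * rotation rate of the line of sight (LOS).
N_gain, dt = 3.0, 0.01
px, py, heading, vp = 0.0, 0.0, 0.0, 2.0     # pursuer state and speed
tx, ty, vt = 10.0, 0.0, 1.0                  # target, flying straight north

los = math.atan2(ty - py, tx - px)
min_dist = math.hypot(tx - px, ty - py)
for _ in range(2000):                        # simulate 20 time units
    ty += vt*dt                              # non-maneuvering target
    new_los = math.atan2(ty - py, tx - px)
    d_los = math.atan2(math.sin(new_los - los), math.cos(new_los - los))
    heading += N_gain * d_los                # PN law: N * (LOS rate) * dt
    los = new_los
    px += vp*math.cos(heading)*dt
    py += vp*math.sin(heading)*dt
    min_dist = min(min_dist, math.hypot(tx - px, ty - py))
print("closest approach:", min_dist)
```

Against this straight-flying target the pursuer settles onto a collision course and the closest approach is essentially zero; the note's game-theoretic result is that against an optimally jinking target this is the best a pursuer can do.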