
Showing papers in "Siam Review in 2006"


Journal ArticleDOI
TL;DR: This review describes mathematical models for legged animal locomotion, focusing on rapidly running insects and highlighting past achievements and challenges that remain.
Abstract: Cheetahs and beetles run, dolphins and salmon swim, and bees and birds fly with grace and economy surpassing our technology. Evolution has shaped the breathtaking abilities of animals, leaving us the challenge of reconstructing their targets of control and mechanisms of dexterity. In this review we explore a corner of this fascinating world. We describe mathematical models for legged animal locomotion, focusing on rapidly running insects and highlighting past achievements and challenges that remain. Newtonian body--limb dynamics are most naturally formulated as piecewise-holonomic rigid body mechanical systems, whose constraints change as legs touch down or lift off. Central pattern generators and proprioceptive sensing require models of spiking neurons and simplified phase oscillator descriptions of ensembles of them. A full neuromechanical model of a running animal requires integration of these elements, along with proprioceptive feedback and models of goal-oriented sensing, planning, and learning. We outline relevant background material from biomechanics and neurobiology, explain key properties of the hybrid dynamical systems that underlie legged locomotion models, and provide numerous examples of such models, from the simplest, completely soluble "peg-leg walker" to complex neuromuscular subsystems that are yet to be assembled into models of behaving animals. This final integration in a tractable and illuminating model is an outstanding challenge.

728 citations


Journal ArticleDOI
TL;DR: In the asymptotic limit of a small spatial region and a large spherical harmonic bandwidth, the spherical concentration problem reduces to its planar equivalent, which exhibits self-similarity when the Shannon number is kept invariant.
Abstract: We pose and solve the analogue of Slepian's time-frequency concentration problem on the surface of the unit sphere to determine an orthogonal family of strictly bandlimited functions that are optimally concentrated within a closed region of the sphere or, alternatively, of strictly spacelimited functions that are optimally concentrated in the spherical harmonic domain. Such a basis of simultaneously spatially and spectrally concentrated functions should be a useful data analysis and representation tool in a variety of geophysical and planetary applications, as well as in medical imaging, computer science, cosmology, and numerical analysis. The spherical Slepian functions can be found by solving either an algebraic eigenvalue problem in the spectral domain or a Fredholm integral equation in the spatial domain. The associated eigenvalues are a measure of the spatiospectral concentration. When the concentration region is an axisymmetric polar cap, the spatiospectral projection operator commutes with a Sturm--Liouville operator; this enables the eigenfunctions to be computed extremely accurately and efficiently, even when their area-bandwidth product, or Shannon number, is large. In the asymptotic limit of a small spatial region and a large spherical harmonic bandwidth, the spherical concentration problem reduces to its planar equivalent, which exhibits self-similarity when the Shannon number is kept invariant.

353 citations


Journal ArticleDOI
TL;DR: Analysis of the PageRank formula provides a wonderful applied topic for a linear algebra course and complements the discussion of Markov chains in matrix algebra.
Abstract: Google's success derives in large part from its PageRank algorithm, which ranks the importance of web pages according to an eigenvector of a weighted link matrix. Analysis of the PageRank formula provides a wonderful applied topic for a linear algebra course. Instructors may assign this article as a project to more advanced students or spend one or two lectures presenting the material with assigned homework from the exercises. This material also complements the discussion of Markov chains in matrix algebra. Maple and Mathematica files supporting this material can be found at www.rose-hulman.edu/~bryan.
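As a taste of the kind of exercise the article proposes, the rank vector can be computed by power iteration on the damped link matrix. The 4-page web and the damping factor 0.85 below are illustrative choices, not taken from the article's exercises or its Maple/Mathematica files; this is a minimal NumPy sketch:

```python
import numpy as np

# A hypothetical 4-page web: entry (i, j) = 1 if page j links to page i.
links = np.array([
    [0, 0, 1, 1],
    [1, 0, 0, 0],
    [1, 1, 0, 1],
    [1, 1, 0, 0],
], dtype=float)
A = links / links.sum(axis=0)              # column-stochastic link matrix

d = 0.85                                   # damping factor (illustrative)
n = A.shape[0]
G = d * A + (1 - d) / n * np.ones((n, n))  # the "Google matrix"

# Power iteration: repeatedly apply G; the rank vector converges to the
# dominant eigenvector (eigenvalue 1 for a column-stochastic matrix).
r = np.full(n, 1.0 / n)
for _ in range(100):
    r = G @ r
r /= r.sum()
print(r)
```

Convergence follows from the Perron–Frobenius theorem: damping makes G positive, so the eigenvalue 1 is simple and dominant.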

257 citations


Journal ArticleDOI
TL;DR: A dual of the FMMP problem is formulated and it is shown that it has a natural geometric interpretation as a maximum variance unfolding (MVU) problem, the problem of choosing a set of points to be as far apart as possible, measured by their variance, while respecting local distance constraints.
Abstract: We consider a Markov process on a connected graph, with edges labeled with transition rates between the adjacent vertices. The distribution of the Markov process converges to the uniform distribution at a rate determined by the second smallest eigenvalue $\lambda_2$ of the Laplacian of the weighted graph. In this paper we consider the problem of assigning transition rates to the edges so as to maximize $\lambda_2$ subject to a linear constraint on the rates. This is the problem of finding the fastest mixing Markov process (FMMP) on the graph. We show that the FMMP problem is a convex optimization problem, which can in turn be expressed as a semidefinite program, and therefore effectively solved numerically. We formulate a dual of the FMMP problem and show that it has a natural geometric interpretation as a maximum variance unfolding (MVU) problem, i.e., the problem of choosing a set of points to be as far apart as possible, measured by their variance, while respecting local distance constraints. This MVU problem is closely related to a problem recently proposed by Weinberger and Saul as a method for “unfolding” high-dimensional data that lies on a low-dimensional manifold. The duality between the FMMP and MVU problems sheds light on both problems, and allows us to characterize and, in some cases, find optimal solutions.
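The objective being maximized is easy to evaluate for a fixed rate assignment. The sketch below, assuming a small 4-cycle with uniform unit rates (an illustrative choice, not from the paper), builds the weighted Laplacian and reads off $\lambda_2$; solving the full FMMP semidefinite program would additionally require an SDP solver.

```python
import numpy as np

# Hypothetical 4-cycle graph; each edge carries a transition rate w_ij.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
rates = {e: 1.0 for e in edges}     # uniform rates (illustrative assignment)

n = 4
L = np.zeros((n, n))
for (i, j), w in rates.items():     # assemble the weighted graph Laplacian
    L[i, i] += w
    L[j, j] += w
    L[i, j] -= w
    L[j, i] -= w

# lambda_2, the second smallest eigenvalue, sets the convergence rate of
# the Markov process to the uniform distribution.
lam = np.sort(np.linalg.eigvalsh(L))
lambda2 = lam[1]
print(lambda2)   # for the uniform unit-rate 4-cycle this is 2.0
```

The FMMP problem then asks: over all nonnegative rate assignments with, say, a fixed total budget, which choice makes `lambda2` largest?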

190 citations


Journal ArticleDOI
TL;DR: Although the numerous mode-locking models are considerably different, they are unified by the fact that stable solitons are exhibited in each case due to the intensity discrimination perturbation in the laser cavity.
Abstract: A comprehensive treatment is given for the formation of mode-locked soliton pulses in optical fiber and solid state lasers. The pulse dynamics is dominated by the interaction of the cubic Kerr nonlinearity and chromatic dispersion with an intensity-dependent perturbation provided by the mode-locking element in the laser cavity. The intensity-dependent perturbation preferentially attenuates low intensity electromagnetic radiation which makes the mode-locked pulses attractors of the laser cavity. A review of the broad spectrum of mode-locked laser models, both qualitative and quantitative, is considered with the basic pulse formation phenomena highlighted. The strengths and weaknesses of each model are considered with two key instabilities studied in detail: Q-switching and harmonic mode-locking. Although the numerous mode-locking models are considerably different, they are unified by the fact that stable solitons are exhibited in each case due to the intensity discrimination perturbation in the laser cavity.

183 citations


Journal ArticleDOI
TL;DR: Micromagnetics is a continuum variational theory describing magnetization patterns in ferromagnetic media that leads to rich behavior and pattern formation and is also the reason for severe problems in analysis, model validation, reductions, and numerics.
Abstract: Micromagnetics is a continuum variational theory describing magnetization patterns in ferromagnetic media. Its multiscale nature due to different inherent spatiotemporal physical and geometric scales, together with nonlocal phenomena and a nonconvex side-constraint, leads to rich behavior and pattern formation. This variety of effects is also the reason for severe problems in analysis, model validation, reductions, and numerics, which have only recently been accessed and are reviewed in this work.

178 citations


Journal ArticleDOI
TL;DR: This paper reviews several representative globalizations of Newton-Krylov methods, discusses their properties, and reports on a numerical study aimed at evaluating their relative merits on large-scale two- and three-dimensional problems involving the steady-state Navier-Stokes equations.
Abstract: A Newton-Krylov method is an implementation of Newton’s method in which a Krylov subspace method is used to solve approximately the linear subproblems that determine Newton steps. To enhance robustness when good initial approximate solutions are not available, these methods are usually globalized, i.e., augmented with auxiliary procedures (globalizations) that improve the likelihood of convergence from a starting point that is not near a solution. In recent years, globalized Newton-Krylov methods have been used increasingly for the fully coupled solution of large-scale problems. In this paper, we review several representative globalizations, discuss their properties, and report on a numerical study aimed at evaluating their relative merits on large-scale two- and three-dimensional problems involving the steady-state Navier-Stokes equations.
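A minimal sketch of a globalized Newton iteration on a toy 2-D system, with a backtracking line search as the globalization. The test function is an invented example, not one of the paper's Navier-Stokes problems, and a dense direct solve stands in for the inner Krylov (e.g., GMRES) iteration:

```python
import numpy as np

# Toy 2-D nonlinear system; in the paper's setting F would come from a
# discretized PDE and the Newton step would be computed only approximately
# by a Krylov method.
def F(x):
    return np.array([x[0]**2 + x[1]**2 - 4.0,
                     np.exp(x[0]) + x[1] - 1.0])

def J(x):
    return np.array([[2 * x[0], 2 * x[1]],
                     [np.exp(x[0]), 1.0]])

def globalized_newton(x, tol=1e-10, max_iter=50):
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        step = np.linalg.solve(J(x), -f)   # stand-in for the inner Krylov solve
        # Backtracking line search: accept the step only if it sufficiently
        # reduces ||F|| -- a simple globalization of the kind reviewed here.
        t = 1.0
        while np.linalg.norm(F(x + t * step)) > (1 - 1e-4 * t) * np.linalg.norm(f):
            t /= 2
            if t < 1e-12:
                break
        x = x + t * step
    return x

x = globalized_newton(np.array([-2.0, 1.0]))
print(x, F(x))
```

Far from a solution the line search shortens steps to preserve progress; near the solution full steps are accepted and the usual fast local convergence of Newton's method takes over.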

87 citations


Journal ArticleDOI
TL;DR: This survey gives an introduction to a recently developed technique to analyze this extremal problem in polynomial approximation theory in the case of symmetric matrices.
Abstract: Krylov subspace iterations are among the best-known and most widely used numerical methods for solving linear systems of equations and for computing eigenvalues of large matrices. These methods are polynomial methods whose convergence behavior is related to the behavior of polynomials on the spectrum of the matrix. This leads to an extremal problem in polynomial approximation theory: How small can a monic polynomial of a given degree be on the spectrum? This survey gives an introduction to a recently developed technique to analyze this extremal problem in the case of symmetric matrices. It is based on global information on the spectrum in the sense that the eigenvalues are assumed to be distributed according to a certain measure. Then, depending on the number of iterations, the Lanczos method for the calculation of eigenvalues finds those eigenvalues that lie in a certain region, which is characterized by means of a constrained equilibrium problem from potential theory. The same constrained equilibrium problem also describes the superlinear convergence of conjugate gradients and other iterative methods for solving linear systems.
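The superlinear effect is easy to observe numerically. The sketch below runs a textbook conjugate gradient iteration on a synthetic symmetric positive definite matrix with one outlying eigenvalue (an illustrative spectrum, not one from the survey): once the Krylov space has effectively captured the outlier, convergence proceeds at the faster rate dictated by the remaining spectrum.

```python
import numpy as np

rng = np.random.default_rng(0)

# SPD test matrix: eigenvalues spread on [1, 10] plus a single outlier.
eigs = np.concatenate([np.linspace(1.0, 10.0, 49), [100.0]])
Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))
A = Q @ np.diag(eigs) @ Q.T
b = rng.standard_normal(50)

def cg(A, b, iters):
    """Plain conjugate gradient iteration, recording residual norms."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    res = [np.linalg.norm(r)]
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
        res.append(np.linalg.norm(r))
    return x, res

x, res = cg(A, b, 50)
print(res[-1] / res[0])   # relative residual after 50 iterations
```

Plotting `res` on a log scale would show the kink characteristic of superlinear convergence: slow initial decay, then a steeper slope once the outlying eigenvalue is resolved.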

77 citations


Journal ArticleDOI
TL;DR: An algorithm is presented that finds a bisection whose cost is within a factor of $O(\log^{1.5} n)$ from the minimum, and for graphs excluding any fixed graph as a minor (e.g., planar graphs) the previously known approximation ratio for bisection is improved.
Abstract: A bisection of a graph with $n$ vertices is a partition of its vertices into two sets, each of size $n/2$. The bisection cost is the number of edges connecting the two sets. The problem of finding a bisection of minimum cost is prototypical to graph partitioning problems, which arise in numerous contexts. This problem is NP-hard. We present an algorithm that finds a bisection whose cost is within a factor of $O(\log^{1.5} n)$ from the minimum. For graphs excluding any fixed graph as a minor (e.g., planar graphs) we obtain an improved approximation ratio of $O(\log n)$. The previously known approximation ratio for bisection was roughly $\sqrt{n}$.

64 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider the self-similarity (or dynamical scaling) of the cluster size distribution for the "solvable" rate kernels, and prove uniform convergence of densities to a self-similar solution with exponential tail.
Abstract: Smoluchowski’s coagulation equation is a fundamental mean-field model of clustering dynamics. We consider the approach to self-similarity (or dynamical scaling) of the cluster size distribution for the “solvable” rate kernels $K(x,y)=2$, $x+y$, and $xy$. In the case of continuous cluster size distributions, we prove uniform convergence of densities to a self-similar solution with exponential tail, under the regularity hypothesis that a suitable moment have an integrable Fourier transform. For discrete size distributions, we prove uniform convergence under optimal moment hypotheses. Our results are completely analogous to classical local convergence theorems for the normal law in probability theory. The proofs rely on the Fourier inversion formula and the solution for the Laplace transform by the method of characteristics in the complex plane.
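For the constant kernel the dynamics are easy to simulate directly. Below is a forward-Euler discretization of the discrete-size equation with $K = 2$ and monodisperse initial data, truncated at a maximum cluster size (scheme and parameters are illustrative, unrelated to the paper's proofs); total mass is conserved while the total number of clusters decays like $1/(1+t)$.

```python
import numpy as np

# Forward-Euler time stepping of the discrete Smoluchowski coagulation
# equation with the constant "solvable" kernel K(i, j) = 2, truncated at a
# maximum cluster size. n[k] is the density of clusters of size k + 1.
K_MAX = 200
n = np.zeros(K_MAX)
n[0] = 1.0                      # monodisperse initial data: only monomers

dt, steps = 0.001, 2000         # integrate to t = 2
sizes = np.arange(1, K_MAX + 1)
for _ in range(steps):
    # gain: (1/2) * sum over i + j = k of K n_i n_j; the 1/2 cancels K = 2,
    # and the discrete convolution collects all pairs with sizes summing to k
    gain = np.zeros(K_MAX)
    gain[1:] = np.convolve(n, n)[:K_MAX - 1]
    # loss: n_k times the total collision rate with all other clusters
    loss = 2.0 * n * n.sum()
    n = n + dt * (gain - loss)

# Mass is conserved (up to truncation); the count follows N(t) = 1/(1 + t).
print(sizes @ n, n.sum())
```

The exact solution for this kernel has the geometric-tail, self-similar profile the paper analyzes; the simulation reproduces its integral invariants.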

33 citations


Journal ArticleDOI
TL;DR: A more physically realistic and systematic derivation of the wave equation suitable for a typical undergraduate course in partial differential equations is advocated and three applications that follow naturally are described: strings, hanging chains, and jump ropes.
Abstract: Following Antman [Amer. Math. Mon., 87 (1980), pp. 359-370], we advocate a more physically realistic and systematic derivation of the wave equation suitable for a typical undergraduate course in partial differential equations. To demonstrate the utility of this derivation, three applications that follow naturally are described: strings, hanging chains, and jump ropes.

Journal ArticleDOI
TL;DR: In this paper, the probability of winning a game, set, match, or single elimination tournament in tennis is computed using Monte Carlo simulations based on each player's probability to win a point on serve, which can be held constant or varied from point to point, game to game, or match to match.
Abstract: The probability of winning a game, set, match, or single elimination tournament in tennis is computed using Monte Carlo simulations based on each player’s probability of winning a point on serve, which can be held constant or varied from point to point, game to game, or match to match. The theory, described in Newton and Keller [Stud. Appl. Math., 114 (2005), pp. 241-269], is based on the assumption that points in tennis are independent, identically distributed (i.i.d.) random variables. This is used as a baseline to compare with the simulations, which under similar circumstances are shown to converge quickly to the analytical curves in accordance with the weak law of large numbers. The concept of the importance of a point, game, and set to winning a match is described based on conditional probabilities and is used as a starting point to model non-i.i.d. effects, allowing each player to vary, from point to point, his or her probability of winning on serve. Several non-i.i.d. models are investigated, including the “hot-hand” effect, in which we increase each player’s probability of winning a point on serve on the next point after a point is won. The “back-to-the-wall” effect is modeled by increasing each player’s probability of winning a point on serve on the next point after a point is lost. In all cases, we find that the results provided by the theoretical curves based on the i.i.d. assumption are remarkably robust and accurate, even when relatively strong non-i.i.d. effects are introduced. We end by showing examples of tournament predictions from the 2002 men’s and women’s U.S. Open draws based on the Monte Carlo simulations. We also describe Arrow’s impossibility theorem and discuss its relevance with regard to sports ranking systems, and we argue for the development of probability-based ranking systems as a way to soften its consequences.
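Under the i.i.d. assumption the game-level probability has a classical closed form, and a Monte Carlo simulation of the kind the paper describes converges to it quickly. A sketch with illustrative parameters (a standard deuce game; p is the server's point-winning probability):

```python
import random

def play_game(p, rng):
    """Simulate one tennis game; server wins each point with probability p."""
    server, returner = 0, 0
    while True:
        if rng.random() < p:
            server += 1
        else:
            returner += 1
        # a game ends at >= 4 points with a lead of at least 2
        if server >= 4 and server - returner >= 2:
            return True
        if returner >= 4 and returner - server >= 2:
            return False

def game_win_prob(p):
    """Closed-form i.i.d. probability that the server wins a game."""
    q = 1 - p
    deuce = 20 * p**3 * q**3 * p**2 / (1 - 2 * p * q)  # reach 3-3, then win
    return p**4 * (1 + 4 * q + 10 * q**2) + deuce

rng = random.Random(1)
p, trials = 0.6, 200_000
mc = sum(play_game(p, rng) for _ in range(trials)) / trials
print(mc, game_win_prob(p))
```

By the weak law of large numbers the Monte Carlo estimate approaches the analytic value at rate $O(1/\sqrt{\text{trials}})$, the convergence behavior the paper uses as its baseline.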

Journal ArticleDOI
TL;DR: The present paper outlines the recent discovery of the role of Faa di Bruno's formula in Laplace's method, gives examples of the application of the explicit expression for the coefficients $c_{s},$ and provides grounds for a possible generalization of the result.
Abstract: Laplace's method is a preeminent technique in the asymptotic approximation of integrals. Its utility was enhanced enormously in 1956 when Erdelyi found a way to apply Watson's lemma and thereby obtain an infinite asymptotic expansion valid, in principle, for any integral of Laplace type. Erdelyi's formulation requires tedious computation of coefficients $c_{s}$ for each specific application of the method, and traditionally this has involved reverting a series. Recently, it was shown that the coefficients $c_{s}$ can be computed via a simple, explicit expression that is probably computationally optimal, which avoids the reversion approach altogether. The formula is made possible by recognizing the central role of Faa di Bruno's formula, alongside Watson's lemma, in Erdelyi's formulation of Laplace's classical method. Laplace's method can now be implemented cleanly and relatively quickly, provided one has the luck and the patience to get to the point where implementation becomes automatic. The present paper outlines the recent discovery of the role of Faa di Bruno's formula in Laplace's method, gives examples of the application of the explicit expression for the coefficients $c_{s},$ and provides grounds for a possible generalization of the result.
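The leading-order case of Laplace's method, before any of the higher coefficients $c_s$ enter, is already instructive: applied to the Gamma-function integral it yields Stirling's formula. A numerical check (the quadrature grid is an arbitrary choice):

```python
import math
import numpy as np

# Leading-order Laplace approximation of Gamma(x + 1) = int_0^inf t^x e^{-t} dt:
# expanding the exponent x*ln(t) - t about its maximum at t = x gives
# Stirling's formula, the textbook instance of Laplace's method.
def laplace_leading(x):
    return math.sqrt(2 * math.pi * x) * (x / math.e) ** x

x = 20.0
t = np.linspace(1e-6, 200.0, 2_000_001)
f = t**x * np.exp(-t)
exact = float(np.sum((f[1:] + f[:-1]) * np.diff(t)) / 2)   # trapezoid rule
print(laplace_leading(x) / exact)   # approx 1 - 1/(12 x): first correction
```

The ratio exposes the first Erdelyi-type correction term, $1/(12x)$; computing further coefficients systematically is exactly where the Faa di Bruno machinery enters.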

Journal ArticleDOI
TL;DR: In this paper, the authors extended the classical coupon collector's problem to one in which two collectors are simultaneously and independently seeking collections of d coupons, and obtained the evaluation in finite terms of certain infinite series whose coefficients are powers and products of Stirling numbers of the second kind.
Abstract: We extend the classical coupon collector's problem to one in which two collectors are simultaneously and independently seeking collections of d coupons. We find, in finite terms, the probability that the two collectors finish at the same trial, and we find, using the methods of Gessel and Viennot, the probability that the game has the following "ballot-like" character: the two collectors are tied with each other for some initial number of steps, and after that the player who first gains the lead remains ahead throughout the game. As a by-product we obtain the evaluation in finite terms of certain infinite series whose coefficients are powers and products of Stirling numbers of the second kind. We study the variant of the original coupon collector's problem in which a single collector wants to obtain at least h copies of each coupon. Here we give a simpler derivation of results of Newman and Shepp and extend those results. Finally, we obtain the distribution of the number of coupons that have been obtained exactly once ("singletons") at the conclusion of a successful coupon collecting sequence.
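The single-collector baseline is easy to simulate, and a tie probability of the kind studied here can be estimated the same way. A sketch with illustrative parameters (d = 10 coupon types):

```python
import random

def draws_to_complete(d, rng):
    """Number of uniform draws until a collector owns all d coupon types."""
    seen, draws = set(), 0
    while len(seen) < d:
        seen.add(rng.randrange(d))
        draws += 1
    return draws

rng = random.Random(7)
d, trials = 10, 50_000
mean = sum(draws_to_complete(d, rng) for _ in range(trials)) / trials

# Classical expectation: d * H_d, with H_d the d-th harmonic number.
harmonic = sum(1 / k for k in range(1, d + 1))
print(mean, d * harmonic)

# Two independent collectors: estimate the probability that they finish
# on exactly the same trial -- the event the paper evaluates in finite terms.
ties = sum(draws_to_complete(d, rng) == draws_to_complete(d, rng)
           for _ in range(trials))
print(ties / trials)
```

The paper's contribution is the exact, finite-terms evaluation of such tie and "ballot-like" probabilities; the simulation only gives a consistency check.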

Journal ArticleDOI
TL;DR: The next term in such an asymptotic approximation for a particular ionic model consisting of two ODEs is derived, showing excellent quantitative agreement with the actual restitution curve, whereas the leading-order approximation significantly underestimates actual APD values.
Abstract: If spatial extent is neglected, ionic models of cardiac cells consist of systems of ordinary differential equations (ODEs) which have the property of excitability, i.e., a brief stimulus produces a prolonged evolution (called an action potential in the cardiac context) before the eventual return to equilibrium. Under repeated stimulation, or pacing, cardiac tissue exhibits electrical restitution: the steady-state action potential duration (APD) at a given pacing period B shortens as B is decreased. Independent of ionic models, restitution is often modeled phenomenologically by a one-dimensional mapping of the form APDnext = f(B - APDprevious). Under some circumstances, a restitution function f can be derived as an asymptotic approximation to the behavior of an ionic model. In this paper, extending previous work, we derive the next term in such an asymptotic approximation for a particular ionic model consisting of two ODEs. The two-term approximation exhibits excellent quantitative agreement with the actual restitution curve, whereas the leading-order approximation significantly underestimates actual APD values.
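The restitution mapping itself is simple to iterate. The sketch below uses a generic exponential restitution function, a common phenomenological choice, not the asymptotic f derived in the paper, to show the steady-state APD shortening as the pacing period B decreases:

```python
import math

# Illustrative exponential restitution function (APD in ms as a function of
# the preceding diastolic interval); the paper instead derives f
# asymptotically from a two-variable ionic model.
def f(di):
    return 300.0 * (1.0 - math.exp(-di / 100.0))

def steady_apd(B, apd0=200.0, iters=500):
    """Iterate the restitution map APD_next = f(B - APD_previous)."""
    apd = apd0
    for _ in range(iters):
        apd = f(B - apd)
    return apd

# Electrical restitution: steady-state APD shortens as B decreases.
for B in (800.0, 600.0, 400.0):
    print(B, steady_apd(B))
```

The fixed point is stable while |f'| < 1 at the steady state; when the restitution slope exceeds 1 the iteration instead settles into the period-2 alternans familiar from the cardiac literature.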

Journal ArticleDOI
TL;DR: The effectiveness of the optimal approximate inverse preconditioners (parametrized by any vectorial structure) improves at the same time as the smallest singular value (or the smallest eigenvalue's modulus) of the corresponding preconditioned matrices increases to $1$.
Abstract: Many strategies for constructing different structures of sparse approximate inverse preconditioners for large linear systems have been proposed in the literature. In a more general framework, this paper analyzes the theoretical effectiveness of the optimal preconditioner (in the Frobenius norm) of a linear system over an arbitrary subspace of $M_{n}\left( \mathbb{R}\right)$. For this purpose, the spectral analysis of the Frobenius orthogonal projections of the identity matrix onto the linear subspaces of $M_{n}\left( \mathbb{R}\right)$ is performed. This analysis leads to a simple, general criterion: The effectiveness of the optimal approximate inverse preconditioners (parametrized by any vectorial structure) improves at the same time as the smallest singular value (or the smallest eigenvalue's modulus) of the corresponding preconditioned matrices increases to $1$.
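The simplest instance of this framework is the subspace of diagonal matrices, where the Frobenius-optimal (left) preconditioner minimizing $\|MA - I\|_F$ has a closed form obtained row by row: $m_{ii} = a_{ii}/\|A_{i,:}\|^2$. A sketch on a random test matrix (the matrix and subspace are illustrative; the paper's analysis covers arbitrary subspaces of $M_n(\mathbb{R})$):

```python
import numpy as np

# Frobenius-optimal preconditioner over the subspace of diagonal matrices:
# each row of M A - I depends on a single m_ii, so minimizing ||M A - I||_F
# decouples into n scalar least-squares problems with solution
# m_ii = a_ii / ||A_{i,:}||^2.
rng = np.random.default_rng(3)
n = 10
A = np.diag(np.arange(1.0, n + 1.0)) + 0.1 * rng.standard_normal((n, n))

m = np.diag(A) / (A * A).sum(axis=1)
M = np.diag(m)

resid = np.linalg.norm(M @ A - np.eye(n))
print(resid, np.linalg.norm(A - np.eye(n)))  # preconditioning shrinks ||.-I||_F
```

Because the objective is a strictly convex quadratic in each $m_{ii}$, perturbing any diagonal entry of the computed M can only increase the Frobenius residual, which is the orthogonal-projection property the paper exploits.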

Journal ArticleDOI
TL;DR: An approximating recursive filter is proposed, which is designed using differential-geometric methods in a suitably chosen space of unnormalized probability densities and can be interpreted as an adaptive version of the celebrated Shiryayev--Wonham equation for the detection of a priori known changes.
Abstract: A benchmark change detection problem is considered which involves the detection of a change of unknown size at an unknown time. Both unknown quantities are modeled by stochastic variables, which allows the problem to be formulated within a Bayesian framework. It turns out that the resulting nonlinear filtering problem is much harder than the well-known detection problem for known sizes of the change, and in particular that it can no longer be solved in a recursive manner. An approximating recursive filter is therefore proposed, which is designed using differential-geometric methods in a suitably chosen space of unnormalized probability densities. The new nonlinear filter can be interpreted as an adaptive version of the celebrated Shiryayev--Wonham equation for the detection of a priori known changes, combined with a modified Kalman filter structure to generate estimates of the unknown size of the change. This intuitively appealing interpretation of the nonlinear filter and its excellent performance in simulation studies indicates that it may be of practical use in realistic change detection problems.

Journal ArticleDOI
TL;DR: An alternative approach is presented that uses a computer algebra system to calculate a limit and allows one to bypass the use of differential equations.
Abstract: Exponentially decaying functions can be successfully introduced as early as high school. However, obtaining the equations for the combined effect of drug absorption and elimination processes acting simultaneously, even in simplified models like the single-dose case in a one-compartment model with first-order kinetics (the Bateman equation), requires one to solve a nontrivial differential equation. Therefore, this equation is normally taught to second- or third-year students in schools of medicine and pharmacy. An alternative approach is presented that uses a computer algebra system to calculate a limit and allows one to bypass the use of differential equations.
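The limit-based shortcut the article advocates can also be checked numerically: evaluating the Bateman formula with the absorption rate very close to the elimination rate reproduces the degenerate closed form $k\,t\,e^{-kt}$. A sketch (rate constants and scaling are illustrative):

```python
import numpy as np

# Bateman equation for a single dose in a one-compartment model with
# first-order absorption (ka) and elimination (ke); dose and volume are
# scaled to 1 for illustration.
def bateman(t, ka, ke):
    return ka / (ka - ke) * (np.exp(-ke * t) - np.exp(-ka * t))

# The article's point: as ka -> ke = k the formula becomes 0/0, and the
# limit -- obtainable with a computer algebra system instead of an ODE --
# is C(t) = k * t * exp(-k * t).
t = np.linspace(0.0, 10.0, 101)
k = 0.8
near = bateman(t, k + 1e-7, k)      # ka very close to ke: numerical limit
exact = k * t * np.exp(-k * t)      # closed-form limit
print(np.max(np.abs(near - exact)))
```

A one-line l'Hopital (or Taylor) argument gives the same result: $e^{-k_e t} - e^{-k_a t} \approx (k_a - k_e)\,t\,e^{-kt}$ as $k_a \to k_e$.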

Journal ArticleDOI
TL;DR: The work here is motivated by a problem in engineering design, namely, the allocation of reliability to the subsystems of a system, or to the nodes of a network, and uses results from probability on the crossing properties of star-ordered distributions to provide a foundation for addressing a class of optimization problems in reliability.
Abstract: The work here is motivated by a problem in engineering design, namely, the allocation of reliability to the subsystems of a system, or to the nodes of a network. Allocation problems in reliability have been considered before, but the focus has been on allocating redundancy rather than reliability. Attempts at the latter topic suffer from a drawback, namely, that component interdependencies have not been considered. Here we overcome this drawback, and then provide a foundation for addressing a class of optimization problems in reliability. This boils down to finding the fixed points of a function in a unit square. For this we use results from probability on the crossing properties of star-ordered distributions. We illustrate our approach by considering series, parallel, and bridge-structured networks. In the latter two cases, we are able to show that an optimal allocation of reliability could lead to system collapsibility, i.e., a simplification of the system's architecture. For the case of series systems with dependent life lengths, we observe the result that independence when incorrectly assumed could result in an overallocation of reliability.

Journal ArticleDOI
TL;DR: A water-supply problem considered by the optimization and hydrology communities for benchmarking purposes is introduced and a nonclassical shock wave previously unknown to exist in thin liquid films is discovered.
Abstract: Communicating Applied Mathematics is a writing- and speaking-intensive graduate course at North Carolina State University. The purpose of this article is to provide a brief description of the course objectives and the assignments. Parts A--D of this article represent the class projects and illustrate the outcome of the course:

* The Evolution of an Optimization Test Problem: From Motivation to Implementation, by Daniel E. Finkel and Jill P. Reese
* Finding the Volume of a Powder from a Single Surface Height Measurement, by Christopher Kuster
* Finding Oscillations in Resonant Tunneling Diodes, by Matthew Lasater
* A Shocking Discovery: Nonclassical Waves in Thin Liquid Films, by Rachel Levy

We introduce a water-supply problem considered by the optimization and hydrology communities for benchmarking purposes. The objective is to drill five wells so that the cost of pumping water out of the ground is minimized. Using the implicit filtering optimization algorithm to locate the wells, we save approximately $2,500 over the cost of a given initial well configuration. The volume of powder poured into a bin with obstructions is found by calculating the height of the surface at every point. The surface of the powder satisfies a two-dimensional eikonal equation, which is solved using the fast marching method. We look at two different bin geometries and determine the volumes as a function of the powder height under the spout. Resonant tunneling diodes (RTDs) are ultrasmall semiconductor devices that have potential as very high-frequency oscillators. To describe the electron transport within these devices, physicists use the Wigner--Poisson equations, which incorporate quantum mechanics to describe the distribution of electrons within the RTD. Continuation methods are employed to determine the steady-state electron distributions as a function of the voltage difference across the device.
These simulations predict the operating state of the RTD under different applied voltages and will be a tool to help physicists understand how changing the voltage applied to the device leads to the development of current oscillations. When a thin film flows down an inclined plane, a bulge of fluid, known as a capillary ridge, forms on the leading edge and is subject to a fingering instability in which the fluid is channeled into rivulets. This process is familiar to us in everyday experiments such as painting a wall or pouring syrup over a stack of pancakes. It is also observed that changes in surface tension due to a temperature gradient can draw fluid up an inclined plane. Amazingly, in this situation the capillary ridge broadens and no fingering instability is observed. Numerical and analytical studies of a mathematical model of this process led to the discovery that these observations are associated with a nonclassical shock wave previously unknown to exist in thin liquid films.

Journal ArticleDOI
TL;DR: Some methods for analyzing the existence of solutions and obtaining the set of all solutions, based on the theory of cones and polyhedra, are given.
Abstract: This paper introduces the problem of solving ordinary differential equations with extra linear conditions written in terms of ranges, and deals with the corresponding existence and uniqueness problems. Some methods for analyzing the existence of solutions and obtaining the set of all solutions, based on the theory of cones and polyhedra, are given. These solutions are found by first converting the problem to a system of linear algebraic equations and then applying the corresponding well-known theory for solving and discussing the existence and uniqueness of solutions of these systems. Finally, the methods are illustrated by their application to some practical examples of the beam problem.


Journal ArticleDOI
TL;DR: Properties of the difference of two sums containing products of binomial coefficients and their logarithms which arise in the application of Shannon's information theory to a certain class of covert channels are deduced.
Abstract: Properties of the difference of two sums containing products of binomial coefficients and their logarithms which arise in the application of Shannon's information theory to a certain class of covert channels are deduced. Some allied consequences of the latter are also recorded.