
Showing papers in "SIAM Review" in 2002


Journal ArticleDOI
TL;DR: A condensed, selective look at classical material and recent research on interior methods for nonlinearly constrained optimization, showing how their influence has transformed both the theory and practice of constrained optimization.
Abstract: Interior methods are an omnipresent, conspicuous feature of the constrained optimization landscape today, but it was not always so. Primarily in the form of barrier methods, interior-point techniques were popular during the 1960s for solving nonlinearly constrained problems. However, their use for linear programming was not even contemplated because of the total dominance of the simplex method. Vague but continuing anxiety about barrier methods eventually led to their abandonment in favor of newly emerging, apparently more efficient alternatives such as augmented Lagrangian and sequential quadratic programming methods. By the early 1980s, barrier methods were almost without exception regarded as a closed chapter in the history of optimization. This picture changed dramatically with Karmarkar's widely publicized announcement in 1984 of a fast polynomial-time interior method for linear programming; in 1985, a formal connection was established between his method and classical barrier methods. Since then, interior methods have advanced so far, so fast, that their influence has transformed both the theory and practice of constrained optimization. This article provides a condensed, selective look at classical material and recent research about interior methods for nonlinearly constrained optimization.

693 citations
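To make the barrier idea concrete, here is a minimal sketch (a toy problem of our own, not taken from the article): Newton's method applied to the log-barrier subproblem for minimizing x² subject to x ≥ 1, following the central path as the barrier parameter μ shrinks.

```python
import math

def newton_barrier(mu, x0, tol=1e-12, itmax=100):
    """Newton's method on the barrier function phi(x) = x**2 - mu*log(x - 1),
    which is strictly convex on the interior x > 1."""
    x = x0
    for _ in range(itmax):
        g = 2*x - mu/(x - 1)            # phi'(x)
        if abs(g) < tol:
            break
        h = 2 + mu/(x - 1)**2           # phi''(x) > 0
        step = g/h
        while x - step <= 1:            # damp the step to stay strictly feasible
            step *= 0.5
        x -= step
    return x

def barrier_path(mu0=1.0, shrink=0.1, n=8):
    """Follow the central path: solve the barrier subproblem for a
    decreasing sequence of mu, warm-starting each solve."""
    x, mu, path = 2.0, mu0, []
    for _ in range(n):
        x = newton_barrier(mu, x)
        path.append((mu, x))
        mu *= shrink
    return path

path = barrier_path()
```

As μ → 0 the subproblem minimizers x(μ) = (1 + √(1 + 2μ))/2 approach the constrained solution x* = 1 from the interior, which is the defining behavior of a barrier method.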


Journal ArticleDOI
TL;DR: The results show that modeling by reaction-diffusion equations is an appropriate tool for investigating fundamental mechanisms of complex spatiotemporal plankton dynamics, fractal properties of planktivorous fish school movements, and their interrelationships.
Abstract: Nonlinear dynamics and chaotic and complex systems constitute some of the most fascinating developments of late twentieth century mathematics and physics. The implications have changed our understanding of important phenomena in almost every field of science, including biology and ecology. This article investigates complexity and chaos in the spatiotemporal dynamics of aquatic ecosystems. The dynamics of these biological communities exhibit an interplay between processes acting on a scale from hundreds of meters to kilometers, controlled by biology, and processes acting on a scale from dozens to hundreds of kilometers, dominated by the heterogeneity of hydrophysical fields. We focus on how biological processes affect spatiotemporal pattern formation. Our results show that modeling by reaction-diffusion equations is an appropriate tool for investigating fundamental mechanisms of complex spatiotemporal plankton dynamics, fractal properties of planktivorous fish school movements, and their interrelationships.

441 citations
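The reaction-diffusion modeling approach can be sketched in a few lines (a generic Fisher--KPP equation with assumed coefficients, not the article's plankton model):

```python
import numpy as np

# Fisher-KPP: u_t = D*u_xx + r*u*(1 - u), periodic boundary, explicit Euler.
D, r = 0.01, 5.0
N, dx, dt = 100, 0.01, 1e-3          # dt < dx**2/(2*D) for stability
u = np.zeros(N)
u[45:55] = 1.0                       # localized initial patch of "plankton"

for _ in range(2000):                # integrate to t = 2
    lap = (np.roll(u, 1) - 2*u + np.roll(u, -1)) / dx**2
    u = u + dt*(D*lap + r*u*(1 - u))
```

The patch invades the empty domain as a traveling front with speed approximately 2√(Dr), one of the basic spatiotemporal mechanisms reaction-diffusion models capture.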


Journal ArticleDOI
TL;DR: A simple and efficient adaptive finite element method (FEM) for elliptic partial differential equations is constructed that guarantees an error reduction rate based on a posteriori error estimators, together with a reduction rate of data oscillation (information missed by the underlying averaging process).
Abstract: Adaptive finite element methods (FEMs) have been widely used in applications for over 20 years now. In practice, they converge starting from coarse grids, although no mathematical theory has been able to prove this assertion. Ensuring an error reduction rate based on a posteriori error estimators, together with a reduction rate of data oscillation (information missed by the underlying averaging process), we construct a simple and efficient adaptive FEM for elliptic partial differential equations. We prove that this algorithm converges with linear rate without any preliminary mesh adaptation or explicit knowledge of constants. Any prescribed error tolerance is thus achieved in a finite number of steps. A number of numerical experiments in two and three dimensions yield quasi-optimal meshes along with a competitive performance. Extensions to higher order elements and applications to saddle point problems are discussed as well.

313 citations
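The estimate-mark-refine loop behind adaptive FEM can be caricatured in one dimension (our own simplified sketch: a lumped load vector, a crude residual indicator, and maximum marking; not the article's algorithm or convergence proof):

```python
import numpy as np

def solve_p1(x, f):
    """P1 FEM for -u'' = f on (0,1) with u(0) = u(1) = 0 on nodes x."""
    h = np.diff(x)
    n = len(x) - 2                       # number of interior nodes
    K = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(1, n + 1):
        K[i-1, i-1] = 1/h[i-1] + 1/h[i]
        if i < n:
            K[i-1, i] = K[i, i-1] = -1/h[i]
        b[i-1] = f(x[i]) * (h[i-1] + h[i]) / 2   # lumped load vector
    u = np.zeros(len(x))
    u[1:-1] = np.linalg.solve(K, b)
    return u

def refine(x, f, theta=0.5):
    """Bisect elements whose residual indicator h_T**2 * |f(mid_T)|
    exceeds theta times the largest indicator (maximum marking)."""
    h, mids = np.diff(x), (x[:-1] + x[1:]) / 2
    eta = h**2 * np.abs(f(mids))
    return np.sort(np.concatenate([x, mids[eta > theta*eta.max()]]))

f = lambda t: np.pi**2 * np.sin(np.pi*t)     # exact solution: sin(pi*x)
x = np.linspace(0.0, 1.0, 5)
err0 = np.abs(solve_p1(x, f) - np.sin(np.pi*x)).max()
for _ in range(4):                           # estimate-mark-refine iterations
    x = refine(x, f)
err1 = np.abs(solve_p1(x, f) - np.sin(np.pi*x)).max()
```

Even this crude loop drives the nodal error down by concentrating new nodes where the indicator is large; the article's contribution is proving a guaranteed linear error reduction for a carefully designed version of such a loop.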


Journal ArticleDOI
TL;DR: The theory of optical wavefront reconstruction is detailed, some numerical methods for this problem are reviewed, and a novel numerical technique called extended least squares is presented.
Abstract: Optical wavefront reconstruction algorithms played a central role in the effort to identify gross manufacturing errors in NASA's Hubble Space Telescope (HST). NASA's success with reconstruction algorithms on the HST has led to an effort to develop software that can aid and in some cases replace complicated, expensive, and error-prone hardware. Among the many applications is HST's replacement, the Next Generation Space Telescope (NGST). This work details the theory of optical wavefront reconstruction, reviews some numerical methods for this problem, and presents a novel numerical technique that we call extended least squares. We compare the performance of these numerical methods for potential inclusion in prototype NGST optical wavefront reconstruction software. We begin with a tutorial on Rayleigh--Sommerfeld diffraction theory.

204 citations


Journal ArticleDOI
TL;DR: This paper classifies the MCMs and presents the important methods for each class, with emphasis placed on correct application and interpretation.
Abstract: Multiple comparison methods (MCMs) are used to investigate differences between pairs of population means or, more generally, between subsets of population means using sample data. Although several such methods are commonly available in statistical software packages, users may be poorly informed about the appropriate method(s) to use and/or the correct way to interpret the results. This paper classifies the MCMs and presents the important methods for each class. Both simulated and real data are used to compare methods, and emphasis is placed on correct application and interpretation. We include suggestions for choosing the best method. Mathematica programs developed by the authors are used to compare MCMs. By taking advantage of Mathematica's notebook structure, an interested student can use these programs to explore the subject more deeply. The programs and examples used in the article are available at http://www.cs.gasou.edu/faculty/rafter/MCMM/.

118 citations
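As a Python stand-in for the article's Mathematica programs, here is one simple multiple-comparison scheme (hypothetical data; pairwise permutation tests with a Bonferroni correction, which is only one member of the families the paper surveys):

```python
import itertools
import random

def perm_test(a, b, n_perm=2000, seed=0):
    """Two-sided permutation test for a difference in group means."""
    rng = random.Random(seed)
    obs = abs(sum(a)/len(a) - sum(b)/len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa)/len(pa) - sum(pb)/len(pb)) >= obs:
            hits += 1
    return hits / n_perm

groups = {
    "A": [5.1, 4.9, 5.3, 5.0, 5.2, 4.8],
    "B": [5.1, 4.9, 5.3, 5.0, 5.2, 4.8],      # same values as A
    "C": [9.8, 10.1, 10.3, 9.9, 10.2, 10.0],  # clearly shifted mean
}
alpha = 0.05
pairs = list(itertools.combinations(groups, 2))
bonf = alpha / len(pairs)                      # Bonferroni-adjusted level
pvals = {(g1, g2): perm_test(groups[g1], groups[g2]) for g1, g2 in pairs}
```

The Bonferroni division of alpha by the number of pairwise comparisons is the simplest way to control the familywise error rate; the paper's point is precisely that choosing among such corrections requires care.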


Journal ArticleDOI
TL;DR: A generalization of the RQI that computes a p-dimensional invariant subspace of A; cubic convergence is preserved, and the cost per iteration is low compared to other methods proposed in the literature.
Abstract: The classical Rayleigh quotient iteration (RQI) allows one to compute a one-dimensional invariant subspace of a symmetric matrix A. Here we propose a generalization of the RQI which computes a p-dimensional invariant subspace of A. Cubic convergence is preserved and the cost per iteration is low compared to other methods proposed in the literature.

96 citations
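The classical one-dimensional RQI that the paper generalizes can be sketched as follows (a generic NumPy version, not the authors' block method):

```python
import numpy as np

def rayleigh_quotient_iteration(A, x0, iters=10):
    """Classical RQI for symmetric A: shift by the Rayleigh quotient and
    solve a linear system each step; convergence is locally cubic."""
    x = x0 / np.linalg.norm(x0)
    n = A.shape[0]
    for _ in range(iters):
        rho = x @ A @ x                        # Rayleigh quotient
        try:
            y = np.linalg.solve(A - rho*np.eye(n), x)
        except np.linalg.LinAlgError:
            break                              # rho is an exact eigenvalue
        x = y / np.linalg.norm(y)
    return x @ A @ x, x

rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
A = (B + B.T) / 2                              # symmetric test matrix
evals = np.linalg.eigvalsh(A)
x0 = np.linalg.eigh(A)[1][:, 0] + 0.1*rng.standard_normal(6)
rho, x = rayleigh_quotient_iteration(A, x0)
```

Each iteration costs one shifted linear solve; the paper's generalization replaces the vector x by a p-dimensional subspace while keeping both the cubic convergence and the low per-iteration cost.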


Journal ArticleDOI
TL;DR: In this paper, links are established between optimality conditions for quadratic optimization problems, qualitative properties in the nonlinear selection replicator dynamics, and central solution concepts of evolutionary game theory.
Abstract: In this paper, links are established between optimality conditions for quadratic optimization problems, qualitative properties in the nonlinear selection replicator dynamics, and central solution concepts of evolutionary game theory, with particular emphasis on several regularity conditions that are desirable in any of the three fields mentioned above: as strictness conditions for locally optimal solutions, as hyperbolicity conditions for fixed points, and as quasi-strictness conditions for game equilibria.

75 citations
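One concrete point of contact between the three fields is easy to demonstrate numerically: the replicator dynamics of a Hawk--Dove game converge to its evolutionarily stable mixed equilibrium. A minimal sketch with standard textbook payoffs (V = 2, C = 4; not the paper's general setting):

```python
import numpy as np

A = np.array([[-1.0, 2.0],
              [ 0.0, 1.0]])          # Hawk-Dove payoffs with V=2, C=4

def replicator(x, A, dt=0.01, steps=5000):
    """Euler integration of the replicator dynamics
    x_i' = x_i * ((A x)_i - x . A x)."""
    for _ in range(steps):
        f = A @ x
        x = x + dt * x * (f - x @ f)
        x = x / x.sum()              # project back onto the simplex
    return x

x = replicator(np.array([0.9, 0.1]), A)
```

The interior fixed point x* = (V/C, 1 − V/C) = (0.5, 0.5) is simultaneously the ESS of the game, an asymptotically stable rest point of the dynamics, and the solution of the associated quadratic program, which is exactly the kind of link the paper formalizes.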


Journal ArticleDOI
TL;DR: Two time-frequency methods, spectrograms and scalograms, are shown to extend the classic Fourier approach, providing time-frequency portraits of the sounds produced by musical instruments that seem to correlate well with perceptions of those sounds and of the differences between each instrument.
Abstract: This paper describes several approaches to analyzing the frequency, or pitch, content of the sounds produced by musical instruments. The classic method, using Fourier analysis, identifies fundamentals and overtones of individual notes. A second method, using spectrograms, analyzes the changes in fundamentals and overtones over time as several notes are played. Spectrograms produce a time-frequency description of a musical passage. A third method, using scalograms, produces far more detailed time-frequency descriptions within the region of the time-frequency plane typically occupied by musical sounds. Scalograms allow one to zoom in on selected regions of the time-frequency plane in a more flexible manner than is possible with spectrograms, and they have a natural interpretation in terms of a musical scale. All three of these techniques will be employed in analyzing music played on a piano, a flute, and a guitar. The two time-frequency methods, spectrograms and scalograms, will be shown to extend the classic Fourier approach, providing time-frequency portraits of the sounds produced by these instruments. Among other advantages, these time-frequency portraits seem to correlate well with our perceptions of the sounds produced by these instruments and of the differences between each instrument. There are many additional applications of time-frequency methods, such as compression of audio and resolution of closely spaced spectral lines in spectroscopy. Brief discussions of these additional applications are included in the paper.

67 citations
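The spectrogram method can be sketched with a short short-time Fourier transform (a generic NumPy implementation applied to a synthetic two-note signal, not the paper's instrument recordings):

```python
import numpy as np

def spectrogram(signal, fs, win=256, hop=128):
    """Magnitude STFT with a Hann window: rows = time frames, cols = bins."""
    w = np.hanning(win)
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start+win] * w
        frames.append(np.abs(np.fft.rfft(seg)))
    return np.array(frames), np.fft.rfftfreq(win, 1/fs)

# two successive "notes": 440 Hz (A4) for half a second, then 880 Hz (A5)
fs = 8000
t = np.arange(fs) / fs
sig = np.where(t < 0.5, np.sin(2*np.pi*440*t), np.sin(2*np.pi*880*t))
S, freqs = spectrogram(sig, fs)
```

Unlike a single Fourier transform of the whole signal, the frame-by-frame transform resolves *when* each fundamental is present; the frequency resolution here is fs/win = 31.25 Hz per bin.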


Journal ArticleDOI
TL;DR: The existence of a critical field $\overline{h}$, above which the normal states are the only solutions to the Ginzburg--Landau equations, is shown analytically.
Abstract: We study the behavior of a superconducting material subjected to a constant applied magnetic field, $\mathbf{H}_a = h\mathbf{e}$ with $|\mathbf{e}| = 1$, using the Ginzburg--Landau theory. We analytically show the existence of a critical field $\overline{h}$ such that, when $h > \overline{h}$, the normal states are the only solutions to the Ginzburg--Landau equations. We estimate $\overline{h}$. As $\kappa \downarrow 0$ we derive $\overline{h} = O(1)$, while as $\kappa \to \infty$ we obtain $\overline{h} = O(\kappa)$.

45 citations


Journal ArticleDOI
TL;DR: A simple method to formulate an explicit expression for the roots of any analytic transcendental function is presented; it is based on Cauchy's integral theorem and uses only basic concepts of complex integration.
Abstract: A simple method to formulate an explicit expression for the roots of any analytic transcendental function is presented. The method is based on Cauchy's integral theorem and uses only basic concepts of complex integration. One convenient method for numerically evaluating the exact expression is presented. The application of both the formulation and evaluation of the exact expression is illustrated for several classical root finding problems.

38 citations
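The identity behind such methods is the residue formula z0 = (1/(2πi)) ∮_C z f′(z)/f(z) dz for a single simple zero z0 of f inside the contour C. A minimal numerical version (our own sketch using the trapezoid rule on a circle, which converges spectrally for analytic integrands; the paper's exact formulation and evaluation scheme may differ):

```python
import numpy as np

def root_inside(f, df, center, radius, n=400):
    """Recover the single simple zero of f inside |z - center| = radius
    from z0 = (1/(2*pi*i)) * contour integral of z*f'(z)/f(z) dz."""
    theta = 2*np.pi*np.arange(n)/n
    z = center + radius*np.exp(1j*theta)
    dz = 1j*radius*np.exp(1j*theta)            # dz/dtheta on the circle
    w = df(z)/f(z)*dz                          # logarithmic derivative * dz
    count = (w.sum()/(n*1j)).real              # winding number = # of zeros
    z0 = (z*w).sum()/(n*1j)                    # first moment gives the zero
    return complex(z0), count

# classic transcendental root-finding problem: the real root of x = exp(-x)
f = lambda z: z - np.exp(-z)
df = lambda z: 1 + np.exp(-z)
z0, count = root_inside(f, df, center=0.5, radius=0.5)
```

The zeroth moment of f′/f counts the zeros enclosed (here it should be 1), and the first moment then evaluates the explicit root expression; the recovered value is the Omega constant 0.567143…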


Journal ArticleDOI
TL;DR: Nine MATLAB programs that implement the binomial method for valuing a European put option are given, showing how execution times in MATLAB can be dramatically reduced by using high-level operations on arrays rather than computing with individual components, a principle that applies in many scientific computing environments.
Abstract: In the context of a real-life application that is of interest to many students, we illustrate how the choices made in translating an algorithm into a high-level computer code can affect the execution time. More precisely, we give nine MATLAB programs that implement the binomial method for valuing a European put option. The first program is a straightforward translation of the pseudocode in Figure 10.4 of The Mathematics of Financial Derivatives, by P. Wilmott, S. Howison, and J. Dewynne, Cambridge University Press, 1995. Four variants of this program are then presented that improve the efficiency by avoiding redundant computation, vectorizing, and accessing subarrays via MATLAB's colon notation. We then consider reformulating the problem via a binomial coefficient expansion. Here, a straightforward implementation is seen to be improved by vectorizing, avoiding overflow and underflow, and exploiting sparsity. Overall, the fastest of the binomial method programs has an execution time that is within a factor 2 of direct evaluation of the Black--Scholes formula. One of the vectorized versions is then used as the basis for a program that values an American put option. The programs show how execution times in MATLAB can be dramatically reduced by using high-level operations on arrays rather than computing with individual components, a principle that applies in many scientific computing environments. The relevant files are downloadable from the World Wide Web.
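In the same spirit, here is a vectorized NumPy translation of the backward-induction binomial method (our own sketch with hypothetical parameters, not one of the nine programs from the article):

```python
import math
import numpy as np

def binomial_put(S, K, r, sigma, T, M=1000):
    """Vectorized Cox-Ross-Rubinstein binomial price of a European put."""
    dt = T / M
    u = math.exp(sigma*math.sqrt(dt))
    d = 1/u
    p = (math.exp(r*dt) - d)/(u - d)       # risk-neutral up-probability
    j = np.arange(M + 1)
    ST = S * u**j * d**(M - j)             # terminal asset prices
    V = np.maximum(K - ST, 0.0)            # put payoff at expiry
    disc = math.exp(-r*dt)
    for _ in range(M):                     # backward induction, vectorized
        V = disc*(p*V[1:] + (1 - p)*V[:-1])
    return float(V[0])

def black_scholes_put(S, K, r, sigma, T):
    N = lambda x: 0.5*(1 + math.erf(x/math.sqrt(2)))
    d1 = (math.log(S/K) + (r + 0.5*sigma**2)*T)/(sigma*math.sqrt(T))
    d2 = d1 - sigma*math.sqrt(T)
    return K*math.exp(-r*T)*N(-d2) - S*N(-d1)

p_tree = binomial_put(10.0, 9.0, 0.06, 0.3, 1.0)
p_bs = black_scholes_put(10.0, 9.0, 0.06, 0.3, 1.0)
```

The backward-induction step works on the whole array of option values at once rather than looping over nodes, which is the vectorization principle the article quantifies; the tree price converges to the Black--Scholes value as M grows.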

Journal ArticleDOI
TL;DR: Algorithms from linear algebra are used to prove the existence of max-plus-algebraic analogues of the QR decomposition and the singular value decomposition.
Abstract: This paper is an updated and extended version of the paper "The QR Decomposition and the Singular Value Decomposition in the Symmetrized Max-Plus Algebra" (B. De Schutter and B. De Moor, SIAM J. Matrix Anal. Appl., 19 (1998), pp. 378--406). The max-plus algebra, which has maximization and addition as its basic operations, can be used to describe and analyze certain classes of discrete-event systems, such as flexible manufacturing systems, railway networks, and parallel processor systems. In contrast to conventional algebra and conventional (linear) system theory, the max-plus algebra and the max-plus-algebraic system theory for discrete-event systems are at present far from fully developed, and many fundamental problems still have to be solved. Currently, much research is going on to deal with these problems, to further extend the max-plus algebra, and to develop a complete max-plus-algebraic system theory for discrete-event systems. In this paper we address one of the remaining gaps in the max-plus algebra by considering matrix decompositions in the symmetrized max-plus algebra. The symmetrized max-plus algebra is an extension of the max-plus algebra obtained by introducing a max-plus-algebraic analogue of the minus operator. We show that we can use well-known linear algebra algorithms to prove the existence of max-plus-algebraic analogues of basic matrix decompositions from linear algebra such as the QR decomposition, the singular value decomposition, the Hessenberg decomposition, the LU decomposition, and so on. These max-plus-algebraic matrix decompositions could play an important role in the max-plus-algebraic system theory for discrete-event systems.
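The basic max-plus operations are easy to state in code (a generic sketch of ⊕ = max and ⊗ = + lifted to matrices; the decompositions themselves require much more machinery):

```python
import numpy as np

NEG_INF = -np.inf                  # the max-plus zero element

def mp_matmul(A, B):
    """Max-plus matrix product: (A (x) B)_ij = max_k (A_ik + B_kj)."""
    return np.max(A[:, :, None] + B[None, :, :], axis=1)

A = np.array([[1.0, NEG_INF],
              [3.0, 2.0]])
B = np.array([[0.0, 4.0],
              [1.0, NEG_INF]])
C = mp_matmul(A, B)

# the max-plus identity: 0 on the diagonal, -inf elsewhere
E = np.array([[0.0, NEG_INF],
              [NEG_INF, 0.0]])
```

Ordinary matrix multiplication with (+, ×) replaced by (max, +); in discrete-event models the entries are typically event times, and -∞ plays the role of zero.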

Journal ArticleDOI
TL;DR: In this paper, the authors consider the one-dimensional bin packing problem under the discrete uniform distribution and show that the average case performance of heuristics can differ substantially between the two types of distributions.
Abstract: We consider the one-dimensional bin packing problem under the discrete uniform distributions $U\{j,k\}$, $1 \leq j \leq k-1$, in which the bin capacity is $k$ and item sizes are chosen uniformly from the set $\{1,2,\ldots,j\}$. Note that for $0 < u = j/k \leq 1$ this is a discrete version of the previously studied continuous uniform distribution $U(0,u]$, where the bin capacity is 1 and item sizes are chosen uniformly from the interval $(0,u]$. We show that the average-case performance of heuristics can differ substantially between the two types of distributions. In particular, there is an online algorithm that has constant expected wasted space under $U\{j,k\}$ for any $j,k$ with $1 \leq j < k-1$, whereas no online algorithm can have $o(n^{1/2})$ expected waste under $U(0,u]$ for any $0 < u \leq 1$. Our $U\{j,k\}$ result is an application of a general theorem of Courcoubetis and Weber that covers all discrete distributions. Under each such distribution, the optimal expected waste for a random list of $n$ items must be either $\Theta (n)$, $\Theta (n^{1/2} )$, or $O(1)$, depending on whether certain ``perfect'' packings exist. The perfect packing theorem needed for the $U\{j,k\}$ distributions is an intriguing result of independent combinatorial interest, and its proof is a cornerstone of the paper. We also survey other recent results comparing the behavior of heuristics under discrete and continuous uniform distributions.
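The flavor of such average-case experiments is easy to reproduce (First Fit is used here purely as an illustrative online heuristic; it is not the constant-expected-waste algorithm of the article):

```python
import random

def first_fit(items, capacity):
    """Online First Fit: place each item in the first bin with room."""
    bins = []
    for it in items:
        for i, load in enumerate(bins):
            if load + it <= capacity:
                bins[i] = load + it
                break
        else:
            bins.append(it)                  # open a new bin
    return bins

def waste(bins, capacity):
    """Total empty space across all opened bins."""
    return sum(capacity - b for b in bins)

# discrete uniform distribution U{j, k}: capacity k, sizes uniform on 1..j
random.seed(1)
k, j, n = 10, 5, 2000
items = [random.randint(1, j) for _ in range(n)]
bins = first_fit(items, k)
w = waste(bins, k)
```

Repeating such runs for growing n and plotting the waste is how one observes empirically whether a heuristic's expected waste behaves like Θ(n), Θ(n^{1/2}), or O(1) under a given distribution.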

Journal ArticleDOI
TL;DR: Interactive software has been designed and made available to aid in the simulation and inversion of gravitational lenses in a classroom setting, and a relatively self-contained outline of the basic concepts and mathematics behind gravitational lensing is given.
Abstract: Gravitational lensing provides a powerful tool to study a number of fundamental questions in astrophysics. Fortuitously, one can begin to explore some nontrivial issues associated with this phenomenon without a lot of very sophisticated mathematics, making an elementary treatment of this topic tractable even to senior undergraduates. In this paper, we give a relatively self-contained outline of the basic concepts and mathematics behind gravitational lensing as a recent and exciting topic for courses in mathematical modeling or scientific computing. To this end, we have designed and made available some interactive software to aid in the simulation and inversion of gravitational lenses in a classroom setting.
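The simplest lens model already exhibits multiple imaging. For a point-mass lens, the one-dimensional lens equation β = θ − θ_E²/θ has two solutions, one on each side of the lens (a standard textbook formula, not the article's software):

```python
import math

def point_lens_images(beta, theta_E=1.0):
    """Image positions for a point-mass lens: the two solutions of the
    1-D lens equation beta = theta - theta_E**2/theta."""
    disc = math.sqrt(beta**2 + 4*theta_E**2)
    return (beta + disc)/2, (beta - disc)/2

theta_plus, theta_minus = point_lens_images(0.5)
ring = point_lens_images(0.0)          # source exactly behind the lens
```

When the source lies exactly behind the lens (β = 0) the two images merge into an Einstein ring of radius θ_E, which is the degenerate case that makes the inversion problem interesting.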

Journal ArticleDOI
TL;DR: A piecewise smooth mapping of the three-dimensional Euclidean space is derived from a discrete-time model of combat; the analysis focuses on the effects of discontinuity caused by the defender's withdrawal strategy, a prime component of the original model.
Abstract: A piecewise smooth mapping of the three-dimensional Euclidean space is derived from a discrete-time model of combat. The mathematical analysis of this mapping focuses on the effects of discontinuity caused by the defender's withdrawal strategy---a prime component of the original model. Both the asymptotics and the transient behavior are discussed, and all the behavior types noted in the title are established as possible outcomes.

Journal ArticleDOI
TL;DR: Graphical analysis reveals that any line fitting method must have ``singularities,'' i.e., data sets near which the line fitting method is unstable, and these ideas are illustrated in principal component analysis and least squares and least absolute deviation linear regression.
Abstract: Linear regression and principal components analysis are examples of plane fitting methods. Plane fitting is a very important activity in multivariate statistical analysis. The geometry of plane fitting is surprisingly complex, but general insights into it can be gained by considering the problem of fitting a line (``one-dimensional plane'') to only three or four bivariate, quantitative data points. Graphical analysis reveals that any line fitting method must have ``singularities,'' i.e., data sets near which the line fitting method is unstable. (For example, collinear data sets are the singularities of least squares linear regression.) Singularities can be classified according to the effects they have on the behavior of the line fitting method and those effects can be quantified as well. The dimension of (``degrees of freedom'' in) the set of all singularities of a line fitting method is related to the probability of getting a data set near a singularity. These ideas are illustrated in principal component analysis and least squares and least absolute deviation linear regression.
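The singularity phenomenon is easy to reproduce: for least squares regression, data sets with all x-values equal are singular, and near them the fitted slope is unstable (a three-point toy example of our own):

```python
import numpy as np

def ls_slope(x, y):
    """Ordinary least-squares slope of y regressed on x."""
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc*yc).sum() / (xc**2).sum())

# Three points within eps of the singular configuration x = (0, 0, 0):
# a tiny perturbation of the x-values flips the fitted slope's sign.
eps = 1e-6
y = np.array([0.0, 0.0, 1.0])
s1 = ls_slope(np.array([-eps, 0.0,  eps]), y)
s2 = ls_slope(np.array([ eps, 0.0, -eps]), y)
```

A perturbation of size 2·10⁻⁶ in the data moves the fitted slope from roughly +5·10⁵ to −5·10⁵, which is exactly the instability-near-a-singularity behavior the paper classifies and quantifies.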

Journal ArticleDOI
TL;DR: The objective is to design a control mechanism to decide whether bets should be accepted or rejected in the four-digit number game, and a nonlinear optimization model is proposed for this problem.
Abstract: The four-digit number game is a popular game of chance played in Southeast Asia. The players in this game choose a four-digit number and place their bets on it. In this paper, we study the design of a control mechanism for managing bets in this game. Our objective is to design a control mechanism to decide whether bets should be accepted or rejected. We propose a nonlinear optimization model for this problem and provide the mathematical justification for the control mechanism used by several operators in this region. We also suggest a simple improved control mechanism. Using data provided by a company in the region, we show that our control mechanism can accept more money per draw, while the risk exposure of the proposed mechanism can be considerably smaller than the current system.
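A deliberately oversimplified stand-in for such a control mechanism (a per-number liability cap of our own invention with hypothetical odds, not the paper's nonlinear optimization model) shows the accept/reject logic:

```python
def run_draw(bets, odds=6500.0, liability_cap=100000.0):
    """Accept each bet online unless it would push the potential payout on
    that four-digit number above the cap. A bet is a (number, stake) pair
    paying stake*odds if the number is drawn. Odds and cap are hypothetical."""
    exposure = {}                      # number -> total accepted stake
    accepted, rejected = [], []
    for number, stake in bets:
        if (exposure.get(number, 0.0) + stake) * odds <= liability_cap:
            exposure[number] = exposure.get(number, 0.0) + stake
            accepted.append((number, stake))
        else:
            rejected.append((number, stake))
    return accepted, rejected, exposure

bets = [("1234", 10.0), ("1234", 5.0), ("1234", 2.0), ("8888", 1.0)]
accepted, rejected, exposure = run_draw(bets)
```

The rule guarantees the operator's worst-case payout on any single number never exceeds the cap; the paper's optimization model goes further by trading off accepted turnover against this risk exposure.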

Journal ArticleDOI
TL;DR: The authors show that for dozens of problems involving behavioral choices by animals, quantitative predictions can be made if one assumes that the behavior has been approximately optimized by natural selection.
Abstract: We can all understand intuitively why a female bear is more aggressive when she has cubs, why a squirrel hoards food as winter approaches, or why a blackbird may sing to attract a mate though he risks attracting a cat instead. As soon as more complex behavioral choices are involved, however, one must graduate from intuition to mathematics, and this is the subject of this issue's article by John McNamara, Alasdair Houston, and Sean Collins of the Centre for Behavioural Biology at the University of Bristol. The authors show that for dozens of problems involving behavioral choices by animals, quantitative predictions can be made if one assumes that the behavior has been approximately optimized by natural selection. Many mathematical tools are employed along the way, ranging from calculus for a simple problem concerning one individual to game theory when multiple individuals interact and compete. I note with pleasure that SIAM Review has published several Survey and Review articles in recent years on biological topics. Perhaps this is an occasion to review the subjects we have addressed since SIAM Review turned blue in 1999. Sixteen Survey and Review articles have appeared in this time; the average one is 40 pages long and cites 110 items in the bibliography. One might also add that our average author is rather eminent! 
Here is a rough attempt at the procrustean task of classifying this very special set of articles into categories:

Modeling of biological systems: Durrett, Stochastic spatial models; Hethcote, Infectious diseases; McNamara, Houston, and Collins, Behavioral biology; Perelson and Nelson, HIV-1 dynamics.

Modeling of physical systems: Chapman, Type-II superconductors; Stewart, Rigid-body dynamics; Xin, Front propagation in heterogeneous media.

Computations and algorithms: Colton, Coyle, and Monk, Inverse scattering theory; Du, Faber, and Gunzburger, Centroidal Voronoi tessellations; Sethian, Fast marching methods; Tisseur and Meerbergen, Quadratic eigenproblems.

Fundamental mathematics and methods: Berry and Keating, Riemann zeros and eigenvalue asymptotics; Chapman, Lawry, Ockendon, and Tew, Complex rays; Diaconis and Freedman, Iterated random functions; Thunberg, One-dimensional dynamics.

Finance: Steinbach, Portfolio analysis.

Are you tempted to interpret this list as SIAM's map of the territory of applied mathematics? Please don't! There are a hundred important topics missing from this list on which we would be equally happy to publish articles. If a particular omission strikes you as especially glaring, and if you have the perfect author in mind for an authoritative review, we would be glad to hear from you.


Journal ArticleDOI
TL;DR: Spectroscopic ellipsometry, a nondestructive optically based technique for real-time in-situ characterization of materials, is described and its use in the design, implementation, and testing of a feedback control system for the etching process is discussed.
Abstract: A dynamic model for the thermal chlorine etching of gallium arsenide is formulated and validated. The model consists of three ordinary differential equations. One models the chemical reaction between the chlorine gas and the gallium arsenide substrate being etched. The second equation, which is based on an inflow/outflow paradigm, models the dynamics of the pressure in the etching chamber. The third equation models the dynamics of a throttle valve which controls the chamber pressure. The entire model is based upon a combination of empirical and first principle physics-based reasoning, and is formulated using sophomore-level elementary chemistry, physics, and differential equations. Spectroscopic ellipsometry, a nondestructive optically based technique for real-time in-situ characterization of materials, is described. Offline and real-time ellipsometry measurements of sample thickness are used to identify or estimate otherwise unmeasurable parameters which appear in the model and to verify or validate our model via comparison with simulation results based upon the model. The use of the model in the design, implementation, and testing of a feedback control system for the etching process is discussed.
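The three-equation structure can be caricatured in a few lines (hypothetical coefficients and a simple integral-control valve law of our own; not the validated model of the article):

```python
# Toy three-state etching model: etch depth d, chamber pressure P, valve A.
#   d' = k_etch * P                (surface reaction driven by gas pressure)
#   P' = (Q_in - C_v * A * P)/V   (inflow/outflow pressure balance)
#   A' = K_i * (P - P_set)        (integral control of the throttle valve)
# All coefficients below are hypothetical, chosen only for illustration.
k_etch, Q_in, C_v, V, K_i, P_set = 0.1, 1.0, 1.0, 1.0, 1.0, 0.5

d, P, A = 0.0, 0.8, 1.0
dt = 0.01
for _ in range(5000):              # integrate to t = 50 with explicit Euler
    dd = k_etch * P
    dP = (Q_in - C_v * A * P) / V
    dA = K_i * (P - P_set)
    d, P, A = d + dt*dd, P + dt*dP, max(A + dt*dA, 0.0)
```

With integral action the valve settles at the opening A = Q_in/(C_v·P_set) that holds the chamber exactly at the pressure setpoint, while the etch depth grows monotonically; the article's model couples a validated version of such equations to real-time ellipsometry measurements.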