
Showing papers on "Maxima and minima published in 1991"


Journal ArticleDOI
TL;DR: A method for numerically determining local minima of differentiable functions of several variables; by suitable choice of starting values, and without modifying the procedure, linear constraints can be imposed on the variables.
Abstract: This is a method for determining numerically local minima of differentiable functions of several variables. In the process of locating each minimum, a matrix which characterizes the behavior of the function about the minimum is determined. For a region in which the function depends quadratically on the variables, no more than N iterations are required, where N is the number of variables. By suitable choice of starting values, and without modification of the procedure, linear constraints can be imposed upon the variables.

1,010 citations
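
The matrix mentioned in the abstract is an estimate of the inverse Hessian, which is what makes at-most-N-step convergence on a quadratic possible. Below is a minimal quasi-Newton sketch in that spirit, using the classical DFP inverse-Hessian update and a one-dimensional line search; the quadratic test problem, the SciPy line search, and all tolerances are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def dfp_minimize(f, grad, x0, max_iter=50, tol=1e-8):
    """Quasi-Newton minimization with the DFP inverse-Hessian update (illustrative sketch)."""
    x = np.asarray(x0, dtype=float)
    H = np.eye(len(x))                 # running estimate of the inverse Hessian
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g                     # quasi-Newton search direction
        alpha = minimize_scalar(lambda a: f(x + a * d)).x   # 1-D line search
        s = alpha * d                  # step taken
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g                  # change in gradient
        if abs(s @ y) < 1e-12:         # nothing left to learn about the curvature
            x, g = x_new, g_new
            break
        # DFP update of the inverse-Hessian estimate
        H = H + np.outer(s, s) / (s @ y) - (H @ np.outer(y, y) @ H) / (y @ H @ y)
        x, g = x_new, g_new
    return x, H

# Quadratic test: the exact minimum is reached in at most N iterations.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x_min, H = dfp_minimize(f, grad, [5.0, 5.0])
print(x_min, np.linalg.solve(A, b))    # the two should agree
```

At the solution, H approximates the inverse of A, i.e. the matrix that characterizes the behavior of the function about the minimum.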


Journal ArticleDOI
TL;DR: In this article, the authors investigated the problem of computing mu in the case of mixed real parametric and complex uncertainty and showed that the problem is equivalent to a smooth constrained finite-dimensional optimization problem.
Abstract: Continuing the development of the structured singular value approach to robust control design, the authors investigate the problem of computing mu in the case of mixed real parametric and complex uncertainty. The problem is shown to be equivalent to a smooth constrained finite-dimensional optimization problem. In view of the fact that the functional to be maximized may have several local extrema, an upper bound on mu whose computation is numerically tractable is established; this leads to a sufficient condition of robust stability and performance. A historical perspective on the development of the mu theory is included.

801 citations


Proceedings ArticleDOI
09 Apr 1991
TL;DR: A novel formulation of the artificial potential approach to obstacle avoidance for a mobile robot or a manipulator in a known environment is presented, together with a control strategy for the real-time control of the robot.
Abstract: A novel formulation of the artificial potential approach to the obstacle avoidance problem for a mobile robot or a manipulator in a known environment is presented. Previous formulations of artificial potentials for obstacle avoidance have exhibited local minima in a cluttered environment. To build an artificial potential field, the authors use harmonic functions that completely eliminate local minima even for a cluttered environment. The panel method is used to represent arbitrarily shaped obstacles and to derive the potential over the whole space. Based on this potential function, an elegant control strategy for the real-time control of a robot is proposed. Simulation results are presented for a bar-shaped mobile robot and a three-degree-of-freedom planar redundant manipulator.

667 citations
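
Harmonic functions satisfy Laplace's equation, so they obey a minimum principle: a potential assembled from them can only attain minima on the boundary of the domain, never in its interior, which is what removes the spurious minima. The sketch below is a simplified stand-in for the paper's panel method: obstacles are represented by a few logarithmic point sources and the goal by a logarithmic sink, and the robot simply follows the negative gradient. Source strengths, geometry, and step sizes are arbitrary illustrative choices.

```python
import numpy as np

goal = np.array([9.0, 9.0])
# Obstacle boundary sample points (a crude stand-in for the paper's panels).
obstacles = np.array([[4.0, 4.0], [4.0, 5.0], [5.0, 4.0], [5.0, 5.0]])

def potential(p):
    """Harmonic potential: logarithmic sink at the goal, logarithmic sources on obstacles."""
    phi = np.log(np.linalg.norm(p - goal) + 1e-9)           # attractive term (sink)
    for q in obstacles:
        phi -= 0.2 * np.log(np.linalg.norm(p - q) + 1e-9)   # repulsive terms (sources)
    return phi

def grad(p, h=1e-5):
    """Central-difference gradient of the potential."""
    g = np.zeros(2)
    for i in range(2):
        e = np.zeros(2); e[i] = h
        g[i] = (potential(p + e) - potential(p - e)) / (2 * h)
    return g

# Simple "controller": take small unit steps downhill until the goal is reached.
p = np.array([0.0, 2.0])
for _ in range(2000):
    if np.linalg.norm(p - goal) < 0.2:
        break
    g = grad(p)
    p = p - 0.05 * g / (np.linalg.norm(g) + 1e-12)
print("final position:", p)
```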


Journal ArticleDOI
01 Sep 1991
TL;DR: The parallel genetic algorithm PGA is applied to the optimization of continuous functions and is able to find the global minimum of Rastrigin's function of dimension 400 on a 64 processor system!
Abstract: In this paper, the parallel genetic algorithm PGA is applied to the optimization of continuous functions. The PGA uses a mixed strategy. Subpopulations try to locate good local minima. If a subpopulation does not progress after a number of generations, hillclimbing is done. Good local minima of a subpopulation are diffused to neighboring subpopulations. Many simulation results are given with popular test functions. The PGA is at least as good as other genetic algorithms on simple problems. A comparison with mathematical optimization methods is done for very large problems. Here a breakthrough can be reported. The PGA is able to find the global minimum of Rastrigin's function of dimension 400 on a 64 processor system! Furthermore, we give an example of a superlinear speedup.

647 citations
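
Rastrigin's function, the hardest test case mentioned above, is a classic multimodal benchmark: the global minimum is 0 at the origin, surrounded by a regular lattice of local minima. The sketch below defines the function and runs a naive random-restart hill climber as a serial baseline; it is not the PGA (no subpopulations, diffusion, or parallelism), and the dimension and iteration counts are small illustrative values.

```python
import numpy as np

def rastrigin(x):
    """Rastrigin's function: 10*n + sum(x_i^2 - 10*cos(2*pi*x_i)); global minimum 0 at x = 0."""
    x = np.asarray(x)
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def hill_climb(x, step=0.05, iters=5000, rng=None):
    """Naive stochastic hill climbing (serial baseline, not the PGA)."""
    rng = rng or np.random.default_rng(0)
    fx = rastrigin(x)
    for _ in range(iters):
        cand = x + rng.normal(scale=step, size=x.size)
        fc = rastrigin(cand)
        if fc < fx:
            x, fx = cand, fc
    return x, fx

rng = np.random.default_rng(1)
results = [hill_climb(rng.uniform(-5.12, 5.12, size=10), rng=rng) for _ in range(20)]
best_x, best_f = min(results, key=lambda t: t[1])
print("best value found:", best_f)   # typically well above 0: the climber gets stuck in local minima
```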


Journal ArticleDOI
TL;DR: A backpropagation algorithm is presented that varies the number of hidden units; it is expected to escape local minima and makes it unnecessary to decide on the number of hidden units in advance.

525 citations


Book
01 Jun 1991
TL;DR: An example is shown of how simulated evolution can be applied to a practical optimization problem and, more specifically, how the addition of co-evolving parasites can improve the procedure by preventing the system from becoming stuck at local maxima.
Abstract: This paper shows an example of how simulated evolution can be applied to a practical optimization problem, and more specifically, how the addition of co-evolving parasites can improve the procedure by preventing the system from sticking at local maxima. Firstly an optimization procedure based on simulated evolution and its implementation on a parallel computer are described. Then an application of this system to the problem of generating minimal sorting networks is described. Finally it is shown how the introduction of a species of co-evolving parasites improves the efficiency and effectiveness of the procedure.

463 citations
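
In this co-evolutionary scheme the hosts are sorting networks and the parasites are sets of test sequences: a host is scored on how many of its parasite's sequences it sorts correctly, and the parasite on how many it makes the host fail, so selection pressure stays focused on exactly the cases the current networks get wrong. A minimal sketch of that scoring is below, assuming a network is represented as a list of compare-exchange index pairs; the evolutionary loop, spatial population, and parallel implementation from the paper are omitted.

```python
import random

def apply_network(network, seq):
    """Run a sorting network (list of compare-exchange index pairs) on a sequence."""
    seq = list(seq)
    for i, j in network:
        if seq[i] > seq[j]:
            seq[i], seq[j] = seq[j], seq[i]
    return seq

def host_fitness(network, test_cases):
    """Fraction of the parasite's test cases that the network sorts correctly."""
    ok = sum(apply_network(network, t) == sorted(t) for t in test_cases)
    return ok / len(test_cases)

def parasite_fitness(network, test_cases):
    """Parasites are rewarded for test cases the current host fails on."""
    return 1.0 - host_fitness(network, test_cases)

# Example: a correct 5-comparator network for 4 inputs versus a random parasite.
net = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]
parasite = [[random.randint(0, 9) for _ in range(4)] for _ in range(16)]
print(host_fitness(net, parasite))   # 1.0, since this network sorts every input
```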


Journal ArticleDOI
TL;DR: In this paper, the authors apply simulated annealing to seismic waveform inversion and show that starting the annealing very near but just above the critical temperature, followed by very slow cooling, reaches a state very close to the global minimum of the error energy rapidly.
Abstract: The seismic inverse problem involves finding a model m that either minimizes the error energy between the data and theoretical seismograms or maximizes the cross-correlation between the synthetics and the observations. We are, however, faced with two problems: (1) the model space is very large, typically of the order of 50^50; and, (2) the error energy function is multimodal. Existing calculus-based methods are local in scope and easily get trapped in local minima of the energy function. Other methods such as 'simulated annealing' and 'genetic algorithms' can be applied to such global optimization problems and they do not depend on the starting model. Both of these methods bear analogy to natural systems and are robust in nature. For example, simulated annealing is the analog to a physical process in which a solid in a 'heat bath' is heated by increasing the temperature, followed by slow cooling until it reaches the global minimum energy state where it forms a crystal. To use simulated annealing efficiently for 1-D seismic waveform inversion, we require a modeling method that rapidly performs the forward modeling calculation and a cooling schedule that will enable us to find the global minimum of the energy function rapidly. With the advent of vector computers, the reflectivity method has proved successful and the time of the calculation can be reduced substantially if only plane-wave seismograms are required. Thus, the principal problem with simulated annealing is to find the critical temperature, i.e., the temperature at which crystallization occurs. By initiating the simulated annealing process with different starting temperatures for a fixed number of iterations with a very slow cooling, we noticed that by starting very near but just above the critical temperature, we reach very close to the global minimum energy state very rapidly. We have applied this technique successfully to band-limited synthetic data in the presence of random noise. In most cases we find that we are able to obtain very good solutions using only a few plane wave seismograms.

458 citations
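
The practical point stressed in the abstract is the cooling schedule: start just above the critical temperature and cool slowly. The sketch below is a generic Metropolis-style simulated annealing loop with a geometric schedule; the seismic forward modelling (reflectivity method, plane-wave seismograms) is replaced by a toy multimodal energy function, and the starting temperature, cooling rate, and step size are illustrative guesses rather than values from the paper.

```python
import math, random

def energy(m):
    """Toy multimodal 'error energy' standing in for the seismic misfit."""
    return (m - 2.0) ** 2 + 1.5 * math.sin(5.0 * m) ** 2

def simulated_annealing(t_start, cooling=0.995, steps=20000, step_size=0.3, seed=0):
    rng = random.Random(seed)
    m = rng.uniform(-5.0, 5.0)                    # starting model
    e = energy(m)
    t = t_start
    best_m, best_e = m, e
    for _ in range(steps):
        cand = m + rng.gauss(0.0, step_size)      # perturb the current model
        de = energy(cand) - e
        # Metropolis rule: always accept downhill moves, uphill with probability exp(-dE/T)
        if de < 0 or rng.random() < math.exp(-de / t):
            m, e = cand, e + de
        if e < best_e:
            best_m, best_e = m, e
        t *= cooling                              # slow geometric cooling
    return best_m, best_e

# Initialize just above a trial "critical" temperature, as the abstract suggests.
print(simulated_annealing(t_start=1.0))
```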


Journal ArticleDOI
TL;DR: The author proposes a technique based on the idea that for most of the data, only a few dimensions of the input may be necessary to compute the desired output function, and it can be used to reduce the number of required measurements in situations where there is a cost associated with sensing.
Abstract: Nonlinear function approximation is often solved by finding a set of coefficients for a finite number of fixed nonlinear basis functions. However, if the input data are drawn from a high-dimensional space, the number of required basis functions grows exponentially with dimension, leading many to suggest the use of adaptive nonlinear basis functions whose parameters can be determined by iterative methods. The author proposes a technique based on the idea that for most of the data, only a few dimensions of the input may be necessary to compute the desired output function. Additional input dimensions are incorporated only where needed. The learning procedure grows a tree whose structure depends upon the input data and the function to be approximated. This technique has a fast learning algorithm with no local minima once the network shape is fixed, and it can be used to reduce the number of required measurements in situations where there is a cost associated with sensing. Three examples are given: controlling the dynamics of a simulated planar two-joint robot arm, predicting the dynamics of the chaotic Mackey-Glass equation, and predicting pixel values in real images from pixel values above and to the left.

135 citations


Proceedings ArticleDOI
03 Nov 1991
TL;DR: A method is presented to generate curvature-continuous trajectories whose curvature profile is a polynomial function of arc length; an algorithm based on the deformation of a curve by energy minimization makes it possible to satisfy general geometric constraints, which was not possible with previous methods.
Abstract: The trajectory generation problem for mobile robots consists in providing a set of trajectories that are 'smooth' and meet certain boundary conditions. The authors present a method to generate curvature-continuous trajectories for which the curvature profile is a polynomial function of arc length. An algorithm based on the deformation of a curve by energy minimization allows one to satisfy general geometric constraints, which was not possible with previous methods. Furthermore, it is able to take into account the limitation on the robot's radius of curvature by controlling the extrema of curvature along the path.

99 citations
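
A path whose curvature is a polynomial in arc length can be recovered by integrating the heading (the running integral of the curvature) and then the position (the running integral of the heading's cosine and sine). The sketch below performs this integration numerically for a cubic curvature profile; the coefficients are arbitrary placeholders, and the paper's energy-minimizing deformation, which is what actually chooses them to satisfy boundary conditions and curvature bounds, is not shown.

```python
import numpy as np

def integrate_path(coeffs, length, n=1000, x0=0.0, y0=0.0, theta0=0.0):
    """Integrate a planar path whose curvature kappa(s) is a polynomial in arc length s.

    coeffs are given lowest order first: kappa(s) = c0 + c1*s + c2*s^2 + ...
    """
    s = np.linspace(0.0, length, n)
    kappa = np.polyval(list(coeffs)[::-1], s)      # np.polyval expects highest order first
    ds = s[1] - s[0]
    theta = theta0 + np.cumsum(kappa) * ds         # heading angle along the path
    x = x0 + np.cumsum(np.cos(theta)) * ds
    y = y0 + np.cumsum(np.sin(theta)) * ds
    return x, y, kappa

# Cubic curvature profile (placeholder coefficients), path length 5.
x, y, kappa = integrate_path(coeffs=[0.0, 0.5, -0.2, 0.02], length=5.0)
print("end point:", x[-1], y[-1])
print("max |curvature| along the path:", np.abs(kappa).max())   # the quantity the paper bounds
```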


Journal ArticleDOI
01 Jul 1991
TL;DR: A general-purpose program, INTEROPT, is described, which finds the minimum of arbitrary functions, with user-friendly, quasi-natural-language input, and optimizes functions of up to 30 variables.
Abstract: A numerical method for finding the global minimum of nonconvex functions is presented. The method is based on the principles of simulated annealing, but handles continuously valued variables in a natural way. The method is completely general, and optimizes functions of up to 30 variables. Several examples are presented. A general-purpose program, INTEROPT, is described, which finds the minimum of arbitrary functions, with user-friendly, quasi-natural-language input.

97 citations


Proceedings ArticleDOI
19 Jun 1991
TL;DR: In this article, the authors investigate a path planning approach that consists of concurrently building and searching a graph connecting the local minima of a numerical potential field defined over the robot's configuration space.
Abstract: The authors investigate a path planning approach that consists of concurrently building and searching a graph connecting the local minima of a numerical potential field defined over the robot's configuration space. They describe techniques for constructing 'good' potentials and efficient methods for dealing with the local minima of these functions. These techniques have been implemented in fast planners that can deal with single and/or multiple robot systems with few and/or many degrees of freedom. Some experimental results with these planners are described.
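
One standard way to build a numerical potential over a discretized configuration space is a wavefront (breadth-first) expansion from the goal, so that each free cell stores its grid distance to the goal; following decreasing values from any reachable cell cannot get stuck. The planners described above are considerably more elaborate, since they also build and search a graph over whatever local minima remain, so the sketch below, which assumes a simple 2-D occupancy grid, only illustrates the potential-construction step.

```python
from collections import deque

def wavefront_potential(grid, goal):
    """Distance-to-goal potential via BFS over a grid; cells: 0 = free, 1 = obstacle."""
    rows, cols = len(grid), len(grid[0])
    INF = float("inf")
    pot = [[INF] * cols for _ in range(rows)]
    gr, gc = goal
    pot[gr][gc] = 0
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and pot[nr][nc] == INF:
                pot[nr][nc] = pot[r][c] + 1
                queue.append((nr, nc))
    return pot

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
for row in wavefront_potential(grid, goal=(0, 3)):
    print(row)   # descending these values from any free cell leads to the goal
```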

Journal ArticleDOI
TL;DR: The theoretical results of this paper explain and quantify the empirical observation that the cross-validation function often has multiple local minima, and show that spurious local minima are more likely to occur at too-small values of the bandwidth than at too-large values.
Abstract: SUMMARY The method of least squares cross-validation for choosing the bandwidth of a kernel density estimator has been the object of considerable research, through both theoretical analysis and simulation studies. The method involves the minimization of a certain function of the bandwidth. One of the less attractive features of this method, which has been observed in simulation studies but has not previously been understood theoretically, is that rather often the cross-validation function has multiple local minima. The theoretical results of this paper provide an explanation and quantification of this empirical observation, through modelling the cross-validation function as a Gaussian stochastic process. Asymptotic analysis reveals that the degree of wiggliness of the cross-validation function depends on the underlying density through a fairly simple functional, but dependence on the kernel function is much more complicated. A simulation study explores the extent to which the asymptotic analysis describes the actual situation. Our techniques may also be used to obtain other related results-e.g. to show that spurious local minima of the cross-validation function are more likely to occur at too small values of the bandwidth, rather than at too large values.
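
The function being minimized here is the least-squares cross-validation score: the integral of the squared kernel density estimate minus twice the average leave-one-out density at the observations. The sketch below evaluates that criterion on a grid of bandwidths for a Gaussian kernel, computing the integral numerically; the bimodal test sample, the sample size, and the bandwidth grid are illustrative choices, and with such data the resulting curve often exhibits the multiple local minima analysed in the paper.

```python
import numpy as np

def kde(x_eval, data, h):
    """Gaussian kernel density estimate evaluated at the points x_eval."""
    u = (x_eval[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

def lscv(h, data, grid):
    """Least-squares cross-validation score for bandwidth h."""
    fhat = kde(grid, data, h)
    int_f2 = np.trapz(fhat**2, grid)              # integral of the squared estimate
    loo = np.empty(len(data))
    for i in range(len(data)):                    # leave-one-out density at each observation
        rest = np.delete(data, i)
        loo[i] = kde(np.array([data[i]]), rest, h)[0]
    return int_f2 - 2.0 * loo.mean()

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(4.0, 0.5, 40)])  # bimodal sample
grid = np.linspace(data.min() - 3, data.max() + 3, 400)
bandwidths = np.linspace(0.05, 1.5, 60)
scores = [lscv(h, data, grid) for h in bandwidths]
print("bandwidth minimizing LSCV:", bandwidths[int(np.argmin(scores))])
```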

Journal ArticleDOI
TL;DR: A new method, stochastic region contraction (SRC), is proposed that achieves a computational speedup of 30-50 when compared to the commonly used simulated-annealing method and is ideally suited for coarse-grain parallel processing.
Abstract: The authors deal with optimal microphone placement and gain for a linear one-dimensional array often in a confined environment. A power spectral dispersion function (PSD) is used as a core element for a min-max objective function (PSDX). Derivation of the optimal spacings and gains of the microphones is a hard computational problem since the min-max objective function exhibits multiple local minima (hundreds or thousands). The authors address the computational problem of finding the global optimal solution of the PSDX function. A new method, stochastic region contraction (SRC), is proposed. It achieves a computational speedup of 30-50 when compared to the commonly used simulated-annealing method. SRC is ideally suited for coarse-grain parallel processing.
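
The idea behind stochastic region contraction is simple: sample the objective at random points in the current search region, keep the best fraction of the samples, shrink the region to a box around the survivors, and repeat until the region is small. The sketch below applies that idea to a generic multimodal test function; the microphone-array objective (PSDX) and the published sample counts and contraction parameters are not reproduced here.

```python
import numpy as np

def src_minimize(f, lo, hi, n_samples=400, keep_frac=0.1, iters=30, seed=0):
    """Stochastic region contraction: sample, keep the best points, shrink the box around them."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(lo, float), np.array(hi, float)
    for _ in range(iters):
        pts = rng.uniform(lo, hi, size=(n_samples, len(lo)))
        vals = np.apply_along_axis(f, 1, pts)
        keep = pts[np.argsort(vals)[: max(1, int(keep_frac * n_samples))]]
        lo, hi = keep.min(axis=0), keep.max(axis=0)      # contract the search region
        if np.all(hi - lo < 1e-6):
            break
    best = keep[0]                                       # best sample of the last round
    return best, f(best)

# Multimodal test objective with many local minima (a stand-in for the PSDX function).
f = lambda x: np.sum(x**2) + 3.0 * np.sum(np.sin(4.0 * x) ** 2)
print(src_minimize(f, lo=[-3, -3], hi=[3, 3]))   # global minimum is 0 at the origin
```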

Journal ArticleDOI
TL;DR: It is demonstrated that these techniques for plotting scalar fields on an N-dimensional lattice work for such data visualization tasks as the location of maxima, minima, saddle points, and other features, as well as for visually fitting multivariate data and the visual determination of dominant and weak or irrelevant variables.
Abstract: The problem of visualizing a scalar-dependent variable that is a function of many independent variables is addressed, focusing on cases with three or more independent variables. A hierarchical axis using different metrics for each independent variable is used, as are hierarchical data symbols. The technique is described for the case in which each independent variable is sampled in a regular grid or lattice-like fashion (that is, in equal increments), but it can be generalized to a variety of less restrictive domains. Rather than presenting a formal mathematical description, the authors use a visual means of describing the technique for a simple three-dimensional data case, and then demonstrate by example how to extend it to higher dimensions. It is demonstrated that these techniques for plotting scalar fields on an N-dimensional lattice work for such data visualization tasks as the location of maxima, minima, saddle points, and other features, as well as for visually fitting multivariate data and the visual determination of dominant and weak or irrelevant variables.
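
One common reading of the hierarchical-axis idea is dimensional stacking: an N-dimensional lattice is embedded in the plane by letting two variables act as coarse outer axes and nesting the remaining variables as progressively finer inner axes inside each outer cell. The sketch below shows only that index mapping for four lattice variables; the per-axis metrics and hierarchical data symbols described in the paper are not modelled, and the assignment of variables to axes is an assumption for illustration.

```python
import numpy as np

def stacked_coords(idx, dims):
    """Map a lattice index to 2-D coordinates by dimensional stacking.

    Even-numbered variables nest along x, odd-numbered along y; earlier variables
    vary slowest (outer axes), later ones fastest (inner axes).
    """
    x = 0
    for i, d in zip(idx[0::2], dims[0::2]):
        x = x * d + i
    y = 0
    for i, d in zip(idx[1::2], dims[1::2]):
        y = y * d + i
    return x, y

# Four independent variables on a 3 x 4 x 2 x 5 lattice -> a (3*2) x (4*5) picture.
dims = (3, 4, 2, 5)
picture = np.zeros((3 * 2, 4 * 5))
for idx in np.ndindex(*dims):
    value = sum(idx)                       # stand-in for the scalar dependent variable
    x, y = stacked_coords(idx, dims)
    picture[x, y] = value
print(picture.shape)                       # each cell of the 3x4 outer grid holds a 2x5 inner grid
```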

Journal ArticleDOI
B.W. Lee1, Bing J. Sheu
TL;DR: This modified Hopfield network has been successfully applied to the construction of analog-to-digital converters with optimal solutions and experimental results on the voltage transfer characteristics of data converters are presented.
Abstract: Due to the rugged energy function of the original Hopfield networks, the output is usually one local minimum in the energy function. An analysis on the locations of local minima in Hopfield networks is presented, and a modified network architecture to eliminate such local minima is described. In particular, another amplifier is introduced at the processor nodes to give correction terms. This modified Hopfield network has been successfully applied to the construction of analog-to-digital converters with optimal solutions. Experimental results on the voltage transfer characteristics of data converters are presented.

Journal ArticleDOI
TL;DR: The proposed elastic tracking algorithm is shown to be very robust to noise and measurement error and extends tracking capabilities to much higher track densities than possible via local road finding or even the novel Denby-Peterson neural network tracking algorithms.

Journal ArticleDOI
TL;DR: In this article, a method of constructing extremal models using simulated annealing optimization is developed, which can be applied to construct models which extremize a linear or non-linear objective function in any inverse problem for which the corresponding forward solution exists.
Abstract: SUMMARY Conductivity models derived from magnetotelluric measurements can be appraised by constructing extremal models which minimize and maximize localized conductivity averages. These extremal models provide lower and upper bounds for the conductivity average over the region of interest. Previous applications of this method have constructed extremal models via (iterated) linearized inversion; however, it is difficult to verify that the computed bounds represent global (rather than local) extrema. In this paper, a method of constructing extremal models using simulated annealing optimization is developed. Simulated annealing requires no approximations and is renowned for its ability to avoid unfavourable local minima. The optimization procedure is flexible and general, and can be applied to construct models which extremize a linear or non-linear objective function in any inverse problem for which the corresponding forward solution exists. Appraisal via simulated annealing is demonstrated using synthetic data and field measurements, and the results are compared with those based on linearization. The comparisons suggest that the bounds calculated via linearization represent excellent approximations to the global extrema.

Proceedings ArticleDOI
03 Jun 1991
TL;DR: An analytical study is presented which allows one to know exactly what the behavior of a detector is around trihedral vertices; it is shown that near three surfaces, two elliptic maxima of DET exist, and their location is inside the extremal contrast surface.
Abstract: A formal representation of corner and vertex detection is presented. In particular, an analytical study is presented which allows one to know exactly what the behavior of a detector is around trihedral vertices. It is shown that near three surfaces, two elliptic maxima of DET exist, and their location is inside the extremal contrast surface. The intermediate surface always shows a hyperbolic minimum. It is shown that the detector allows one to find the exact position of the vertex. The approach proposed has been tested on many noisy synthetic data and real images and its robustness seems promising.
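
The DET operator referred to is the determinant of the Hessian of the smoothed image intensity, DET = Ixx*Iyy - Ixy^2; corners and vertices appear as local extrema of this quantity. Below is a short sketch that computes DET with Gaussian derivative filters and picks its local maxima on a synthetic image; it illustrates the operator itself, not the paper's analytical characterization of trihedral vertices, and the scale and threshold values are arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def det_operator(image, sigma=2.0):
    """Determinant of the Hessian of the Gaussian-smoothed image: Ixx * Iyy - Ixy^2."""
    ixx = gaussian_filter(image, sigma, order=(2, 0))
    iyy = gaussian_filter(image, sigma, order=(0, 2))
    ixy = gaussian_filter(image, sigma, order=(1, 1))
    return ixx * iyy - ixy**2

def corner_candidates(image, sigma=2.0, rel_thresh=0.2):
    """Local maxima of DET above a relative threshold: candidate corners/vertices."""
    det = det_operator(image.astype(float), sigma)
    local_max = (det == maximum_filter(det, size=5)) & (det > rel_thresh * det.max())
    return np.argwhere(local_max)

# Synthetic test image: a bright rectangle on a dark background has four corners.
img = np.zeros((64, 64))
img[20:44, 16:48] = 1.0
print(corner_candidates(img))   # expect responses near the four rectangle corners
```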

Journal ArticleDOI
TL;DR: A graph-like data structure is constructed on these shape features, called the Characteristic Region Configuration Graph, which represents the surface in an effective and concise way.
Abstract: A method is described for the extraction of morphological information from a terrain approximated by a Delaunay triangulation, in order to find a combinatorially simpler surface description while maintaining its basic features. Characteristic regions (i.e., regions with concave, convex, planar or saddle shape) are considered the basic descriptive elements of the surface morphology, and are defined by taking into account the type of adjacency between triangles. Adjacencies between regions define the surface characteristic lines, which are classified as ridges, ravines or generic creases, and characteristic points, which are classified as maxima, minima or saddle points. A graph-like data structure is constructed on these shape features, called the Characteristic Region Configuration Graph, which represents the surface in an effective and concise way.
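
Characteristic points can be classified from purely local information: a vertex is a maximum if all of its neighbours are lower, a minimum if all are higher, and a saddle if the sign of the elevation difference changes four or more times going around the ring of neighbours. The sketch below applies that rule on a regular grid rather than a Delaunay triangulation, and it handles only the point classification, not the regions or characteristic lines described above.

```python
import numpy as np

def classify_point(z, r, c):
    """Classify interior grid vertex (r, c) as 'maximum', 'minimum', 'saddle' or 'regular'."""
    ring = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    diffs = [z[r + dr, c + dc] - z[r, c] for dr, dc in ring]
    if all(d < 0 for d in diffs):
        return "maximum"
    if all(d > 0 for d in diffs):
        return "minimum"
    # count sign changes of the differences while walking around the neighbour ring
    signs = [1 if d > 0 else -1 for d in diffs]
    changes = sum(signs[i] != signs[(i + 1) % len(signs)] for i in range(len(signs)))
    return "saddle" if changes >= 4 else "regular"

# Synthetic terrain: two peaks with a saddle (pass) between them.
xx, yy = np.meshgrid(np.linspace(-2, 2, 41), np.linspace(-1, 1, 21))
z = np.exp(-((xx - 1) ** 2 + yy**2)) + np.exp(-((xx + 1) ** 2 + yy**2))
labels = {classify_point(z, r, c)
          for r in range(1, z.shape[0] - 1) for c in range(1, z.shape[1] - 1)}
print(labels)   # expect 'maximum', 'saddle' and 'regular' to appear
```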

Journal ArticleDOI
TL;DR: In this article, a complete solution of the excitation values which may occur at the local minima of the XOR problem is obtained analytically for two-layered networks in the two most commonly quoted configurations, using the gradient backpropagation algorithm.
Abstract: A complete solution of the excitation values which may occur at the local minima of the XOR problem is obtained analytically for two-layered networks in the two most commonly quoted configurations, using the gradient backpropagation algorithm. The role of direct connections which bypass the two-layered system is discussed in connection to the XOR problem and other related training tasks.

Journal ArticleDOI
TL;DR: The importance is stressed of adding self-coupling terms to the energy functions; these are redundant from the encoding point of view, but beneficial when it comes to ignoring local minima and to stabilizing the good solutions in the annealing process.
Abstract: A convenient mapping and an efficient algorithm for solving scheduling problems within the neural network paradigm is presented. It is based on a reduced encoding scheme and a mean field annealing prescription which was recently successfully applied to TSP. Most scheduling problems are characterized by a set of hard and soft constraints. The prime target of this work is the hard constraints. In this domain the algorithm persistently finds legal solutions for quite difficult problems. We also make some exploratory investigations by adding soft constraints with very encouraging results. Our numerical studies cover problem sizes up to O(10^5) degrees of freedom with no parameter tuning. We stress the importance of adding self-coupling terms to the energy functions which are redundant from the encoding point of view but beneficial when it comes to ignoring local minima and to stabilizing the good solutions in the annealing process.

Journal ArticleDOI
TL;DR: The authors show that the least-squares problem that arises when one attempts to train a feedforward net with no hidden neurons has no nonglobal minima.

Journal ArticleDOI
TL;DR: Three approaches to minimizing mesh error are presented: adaptive meshes, edge elements, and crunched meshes, the last of which is shown to be significantly faster for optimization, although the field solutions in the iterations have accuracy depending on the fineness of the initial mesh.
Abstract: The authors point out the nature of the discontinuities in the object function as a result of mesh error. That is, they are artificial and have no physical basis. These discontinuities, however accurate the mesh might be, persist. As such, the solution of inverse problems gets to be slow. The authors present three approaches to minimizing this error: adaptive meshes, edge elements, and crunched meshes. The last of these is shown to be significantly faster for optimization, although the field solutions in the iterations have accuracy depending on the fineness of the initial mesh. Adaptive approaches on the other hand significantly slow down convergence. Edge elements improve flux-density-based object functions by making them C^1 continuous because no derivative of the potential is required, although multiple minima continue to exist; but the C^1 continuity makes it possible to utilize faster algorithms using the Hessian.

Patent
28 Jun 1991
TL;DR: In this paper, a linear probing method was proposed for determining local minimum values, computed first along the gradient of the weight space, and then adjusting the slope and direction of a linear probe line after determining the likelihood that a "ravine" has been encountered in the terrain of the weight space.
Abstract: A method and apparatus for speeding and enhancing the "learning" function of a computer configured as a multilayered, feedforward artificial neural network using logistic functions as an activation function. The enhanced learning method provides a linear probing method for determining local minimum values computed first along the gradient of the weight space and then adjusting the slope and direction of a linear probe line after determining the likelihood that a "ravine" has been encountered in the terrain of the weight space.

Proceedings ArticleDOI
08 Jul 1991
TL;DR: It is shown that, in a feedforward net of logistic units, if there are as many hidden nodes as patterns to learn then almost certainly a solution exists, and the error function has no local minima.
Abstract: It is shown that, in a feedforward net of logistic units, if there are as many hidden nodes as patterns to learn then almost certainly a solution exists, and the error function has no local minima. A large enough feedforward net can reproduce almost any finite set of targets for almost any set of input patterns, and will almost certainly not be trapped in a local minimum while learning to do so.

Journal ArticleDOI
TL;DR: It is shown that the behavior of the validation function depends critically on the initial conditions and on the characteristics of the noise, and that under certain simple assumptions, if the initial weights are sufficiently small, the validation function has a unique minimum corresponding to an optimal stopping time for training.
Abstract: We study generalization in a simple framework of feedforward linear networks with n inputs and n outputs, trained from examples by gradient descent on the usual quadratic error function. We derive analytical results on the behavior of the validation function corresponding to the LMS error function calculated on a set of validation patterns. We show that the behavior of the validation function depends critically on the initial conditions and on the characteristics of the noise. Under certain simple assumptions, if the initial weights are sufficiently small, the validation function has a unique minimum corresponding to an optimal stopping time for training, for which simple bounds can be calculated. There also exist situations where the validation function can have more complicated and somewhat unexpected behavior such as multiple local minima (at most n) of variable depth and long but finite plateau effects. Additional results and possible extensions are briefly discussed.
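
The setting analysed is a linear network trained by gradient descent on a quadratic error with noisy targets; the validation error then typically falls and later rises again, and its minimum defines the optimal stopping time. The sketch below reproduces that qualitative experiment in numpy under simplified assumptions (small random initial weights, Gaussian data and noise, arbitrary learning rate and sample sizes); it is an illustration of the setup, not of the paper's analytical bounds.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p_train, p_val = 8, 12, 200
W_true = rng.normal(size=(n, n))                       # underlying linear map

X_tr = rng.normal(size=(p_train, n))
Y_tr = X_tr @ W_true.T + 0.5 * rng.normal(size=(p_train, n))   # noisy training targets
X_va = rng.normal(size=(p_val, n))
Y_va = X_va @ W_true.T + 0.5 * rng.normal(size=(p_val, n))     # noisy validation targets

W = 0.01 * rng.normal(size=(n, n))                     # small initial weights
lr, epochs = 0.01, 3000
val_curve = []
for _ in range(epochs):
    err = X_tr @ W.T - Y_tr
    W -= lr * (err.T @ X_tr) / p_train                 # gradient descent on the LMS error
    val_curve.append(np.mean((X_va @ W.T - Y_va) ** 2))

best_epoch = int(np.argmin(val_curve))
print("optimal stopping epoch:", best_epoch)
print("validation error there vs. at the end:", val_curve[best_epoch], val_curve[-1])
```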

Journal ArticleDOI
TL;DR: A general technique is presented for updating the maximum (minimum) value of a decomposable function as elements are inserted into and deleted from the set S, under a semi-online model of dynamization in which, when an element is inserted, we are told how long it will stay.
Abstract: Let S be a set, f: S×S→R+ a bivariate function, and f(x,S) the maximum value of f(x,y) over all elements y∈S. We say that f is decomposable with respect to the maximum if f(x,S) = max{f(x,S_1), f(x,S_2), …, f(x,S_k)} for any decomposition S = S_1 ∪ S_2 ∪ … ∪ S_k. Computing the maximum (minimum) value of a decomposable function is inherent in many problems of computational geometry and robotics. In this paper, a general technique is presented for updating the maximum (minimum) value of a decomposable function as elements are inserted into and deleted from the set S. Our result holds for a semi-online model of dynamization: when an element is inserted, we are told how long it will stay. Applications of this technique include efficient algorithms for dynamically computing the diameter or closest pair of a set of points, minimum separation among a set of rectangles, smallest distance between a set of points and a set of hyperplanes, and largest or smallest area (perimeter) rectangles determined by a set of points. These problems are fundamental to application areas such as robotics, VLSI masking, and optimization.

Proceedings ArticleDOI
26 Jun 1991
TL;DR: In this article, an easily computed upper bound for the number of extrema expected in the step response of a transfer function is given; in several cases this bound is sufficient to determine whether a closed-loop system will display overshoot.
Abstract: Despite the power of modern control techniques, the problem of determining the number of extrema expected in the step response of a transfer function remains open. This note gives an easily computed upper bound for the number of extrema. In several cases this bound is sufficient to determine whether the step response of a closed-loop system will display overshoot. The analysis also permits determining the existence of inverse-response behavior. Criteria helpful in a pole-placement design context are given.

Journal ArticleDOI
TL;DR: It is shown that the distance geometry approach has some nice geometric structure, not shared by other methods, that allows one to prove detailed results about the location of local minima; this structure is exploited to develop algorithms which are faster and find more minima than the algorithms presently used.

Book
01 Jan 1991
TL;DR: Linear and Nonlinear Optimization (book).
Abstract: Table of contents (partial): 1.1 Optimization; 1.2 Types of Problems; 1.3 Size of Problems; 1.4 Iterative Algorithms and Convergence; Part I: Linear Programming; Chapter 2: Basic Properties of Linear Programs.