
Showing papers on "Maxima and minima" published in 1989


Journal ArticleDOI
TL;DR: The main result is a complete description of the landscape attached to E in terms of principal component analysis, showing that E has a unique minimum corresponding to the projection onto the subspace generated by the first principal vectors of a covariance matrix associated with the training patterns.

1,456 citations


Journal ArticleDOI
TL;DR: In this paper, the optimal orientation of an anisotropic material with respect to the actual strain condition was investigated and complete analytical results were derived, including local as well as global maxima and minima.
Abstract: In order to use an anisotropic material effectively it should be oriented optimally with respect to the actual strain condition. Orientations with extreme energy density are obtained for orthotropic materials. It is found that the optimal orientation depends on one non-dimensional material parameter only, plus the ratio of the two principal strains. Complete analytical results are derived, including local as well as global maxima and minima.

330 citations


Journal ArticleDOI
TL;DR: In this paper, a combinatorial optimization procedure is proposed, based on the physical idea of using the quantum tunnel effect to allow the search for the global minima of a function of many Boolean variables to escape from poor local minima.

165 citations


Journal Article
TL;DR: It is shown that the objective function of a least squares type nonlinear parameter estimation problem can be any non-negative real function, and therefore this class of problems corresponds to global optimization.
Abstract: In this paper we first show that the objective function of a least squares type nonlinear parameter estimation problem can be any non-negative real function, and therefore this class of problems corresponds to global optimization. Two non-derivative implementations of a global optimization method are presented, and nine standard test functions are applied to measure their efficiency. A new nonlinear test problem is then presented for testing the reliability of global optimization algorithms. This test function has a countable infinity of local minima and only one global minimizer. The region of attraction of the global minimum is of zero measure. The results of the efficiency and reliability tests are given.

157 citations


01 Jan 1989
TL;DR: The SUMT algorithm as discussed by the authors transforms one or more objective functions into reduced objective functions, which are analogous to goal constraints used in the goal programming method, and an envelope of the entire function set is computed using the Kreisselmeier-Steinhauser function.
Abstract: A technique is described for converting a constrained optimization problem into an unconstrained problem. The technique transforms one or more objective functions into reduced objective functions, which are analogous to goal constraints used in the goal programming method. These reduced objective functions are appended to the set of constraints, and an envelope of the entire function set is computed using the Kreisselmeier-Steinhauser function. This envelope function is then searched for an unconstrained minimum. The technique may be categorized as a SUMT algorithm. Advantages of this approach are that unconstrained optimization methods can be used to find a constrained minimum without the draw-down factor typical of penalty function methods, and that the technique may be started from the feasible or the infeasible design space. In multiobjective applications, the approach has the advantage of locating a compromise minimum design without the need to optimize each individual objective function separately.
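
A minimal sketch of the general idea, loosely in the spirit of the approach described above: constraints and a "reduced" objective (beat the current design's objective value) are wrapped in a Kreisselmeier-Steinhauser envelope, which is then minimized without constraints in a SUMT-style outer loop. The test problem, goal update, and value of rho are illustrative assumptions, not the paper's.

```python
# Sketch: sequential unconstrained minimization of a Kreisselmeier-Steinhauser
# (KS) envelope over the constraints and a "reduced" objective.
import numpy as np
from scipy.optimize import minimize

rho = 50.0                                   # envelope sharpness parameter

def ks(values):
    """KS envelope: a smooth, conservative approximation of max(values)."""
    m = np.max(values)
    return m + np.log(np.sum(np.exp(rho * (values - m)))) / rho

def objective(x):                            # minimize (x0-3)^2 + (x1-2)^2
    return (x[0] - 3.0) ** 2 + (x[1] - 2.0) ** 2

def constraints(x):                          # feasible iff g(x) <= 0
    return np.array([x[0] + x[1] - 4.0, -x[0], -x[1]])

x = np.array([5.0, 5.0])                     # may start feasible or infeasible
for _ in range(15):                          # SUMT-style outer loop
    goal = objective(x)                      # reduced objective: do at least
    envelope = lambda z: ks(np.append(constraints(z), objective(z) - goal))
    x = minimize(envelope, x, method="BFGS").x   # unconstrained search

print("design:", x, "objective:", objective(x), "constraints:", constraints(x))
```

On this toy problem the iterates approach the constrained optimum near (2.5, 1.5); printing the constraint values shows how closely feasibility is recovered.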

109 citations


Journal Article
TL;DR: An example is given of a neural net without hidden layers and with a sigmoid transfer function, together with a training set of binary vectors, for which the sum of the squared errors, regarded as a function of the weights, has a local minimum that is not a global minimum.
Abstract: We give an example of a neural net without hidden layers and with a sigmoid transfer function, together with a training set of binary vectors, for which the sum of the squared errors, regarded as a function of the weights, has a local minimum which is not a global minimum. The example consists of a set of 125 training instances, with four weights and a threshold to be learnt. We do not know if substantially smaller binary examples exist.
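
A minimal sketch of the kind of error surface in question: the sum of squared errors of a net with no hidden layer and a sigmoid output, probed by gradient descent from several random starts. The tiny four-pattern training set below is illustrative, not the 125-instance example of the paper.

```python
# Sketch: sum-of-squared-errors surface of a net with no hidden layer and a
# sigmoid output, probed by plain gradient descent from several random starts.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # binary inputs
t = np.array([0.0, 1.0, 1.0, 0.0])                           # binary targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sse(w, b):
    return np.sum((sigmoid(X @ w + b) - t) ** 2)

def grad(w, b):
    y = sigmoid(X @ w + b)
    d = 2.0 * (y - t) * y * (1.0 - y)        # dE/d(pre-activation), per pattern
    return X.T @ d, np.sum(d)

final_errors = set()
for _ in range(20):                           # several random initializations
    w, b = rng.normal(size=2), rng.normal()
    for _ in range(5000):
        gw, gb = grad(w, b)
        w, b = w - 0.5 * gw, b - 0.5 * gb
    final_errors.add(round(sse(w, b), 3))

# Several distinct limiting error values would hint at distinct local minima.
print(sorted(final_errors))
```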

101 citations


Journal ArticleDOI
TL;DR: The results provide converging evidence, from two different methods, supporting Hoffman and Richards's minima rule, which divides three-dimensional shapes into parts at negative minima of curvature.
Abstract: Three experiments were conducted to test Hoffman and Richards's (1984) hypothesis that, for purposes of visual recognition, the human visual system divides three-dimensional shapes into parts at negative minima of curvature. In the first two experiments, subjects observed a simulated object (surface of revolution) rotating about a vertical axis, followed by a display of four alternative parts. They were asked to select a part that was from the object. Two of the four parts were divided at negative minima of curvature and two at positive maxima. When both a minima part and a maxima part from the object were presented on each trial (experiment 1), most of the correct responses were minima parts (101 versus 55). When only one part from the object—either a minima part or a maxima part—was shown on each trial (experiment 2), accuracy on trials with correct minima parts and correct maxima parts did not differ significantly. However, some subjects indicated that they reversed figure and ground, thereby changing ...

84 citations


Journal ArticleDOI
TL;DR: In this paper, the Lyapunov stabilities of some "semiclassical" bound states of the non-homogeneous nonlinear Schrödinger equation were studied.
Abstract: In this paper, we study the Lyapunov stabilities of some “semiclassical” bound states of the (nonhomogeneous) nonlinear Schrödinger equation. We prove that, among those bound states, those which are “concentrated” near local minima (respectively maxima) of the potential V are stable (respectively unstable). We also prove that those bound states are positive if ℏ is sufficiently small.

79 citations


Journal ArticleDOI
TL;DR: A design technique to eliminate local minima in the Hopfield neural-based analog-to-digital converter has been developed, and experimental data agree well with theoretical results in the output characteristics of the neural-based data converter.
Abstract: The architecture associated with the Hopfield network can be utilized in the VLSI realization of several important engineering optimization functions for signal processing purposes. The properties of local minima in the energy function of Hopfield networks are investigated. A design technique to eliminate these local minima in the Hopfield neural-based analog-to-digital converter has been developed. Experimental data agree well with theoretical results in the output characteristics of the neural-based data converter.
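
A simplified discrete sketch (not the analog circuit or the local-minima elimination technique of the paper) of the quantity such a converter minimizes: enumerating all 4-bit codes shows which code is the global minimum of an idealized quantization-error energy for a given analog input.

```python
# Sketch: energy of an idealized 4-bit neural A/D converter, evaluated over all
# binary codes. The analog Hopfield network performs gradient-like descent on a
# continuous relaxation of such an energy and can stop at local minima.
from itertools import product

def energy(bits, x):
    """E(V) = 0.5 * (x - sum_i 2^i * V_i)^2 for bits V_i in {0, 1} (LSB first)."""
    value = sum(v * 2 ** i for i, v in enumerate(bits))
    return 0.5 * (x - value) ** 2

x = 9.3                                             # analog input to convert
codes = sorted(product([0, 1], repeat=4), key=lambda bits: energy(bits, x))
best = codes[0]
print("global-minimum code (LSB first):", best,
      "-> digital value", sum(v * 2 ** i for i, v in enumerate(best)))
```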

74 citations


Patent
08 Aug 1989
TL;DR: In this paper, a method and apparatus which assesses the mechanical properties of a material by launching an ultrasound signal at the material while varying the angle of incidence and analyzing the amplitude of the ultrasound wave reflected by the material was presented.
Abstract: A method and apparatus which assesses the mechanical properties of a material by launching an ultrasound signal at the material while varying the angle of incidence and analyzing the amplitude of the ultrasound wave reflected by the material. The method and apparatus correlates extrema (maxima or minima inflection points) in the reflected angle with the angle of incidence of the transmitted signal to identify critical angles of incidence. The velocity of the pressure wave in the material has been found to be a function of a first critical angle corresponding to a first maxima as the angle of incidence is increased in the range 0°-90°. The velocity of the shear wave in the material has been found to be a function of a second critical angle corresponding to a second maxima following the first maxima. Young's modulus of elasticity, Poissons's modulus, and density can be approximated using the velocity of the pressure wave and shear wave for isotropic materials. A third critical angle corresponding to a minima after the first critical angle (reflected amplitude approaching 0) has been found particularly useful in conjunction with the first and second critical angles in assessing bone density and in determining whether the second critical point is at a maximum or an inflection point. The extension of the method in which the plane of scattering is rotated around the normal to bone while keeping the point of observation fixed has been found particularly useful in assessing the mechanical properties of anisotropic materials such as cortical bone.
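
A small sketch of the underlying relations, using standard Snell's-law and isotropic-elasticity formulas rather than anything specific to the patent: the first and second critical angles give the pressure- and shear-wave velocities, from which Poisson's ratio and Young's modulus follow for an isotropic material of known density. The coupling-fluid velocity, angles, and density below are illustrative assumptions.

```python
# Sketch: recover wave velocities from critical angles of incidence (Snell's
# law) and estimate isotropic elastic constants. Illustrative numbers only.
import math

c_fluid = 1480.0                  # m/s, assumed coupling fluid (water) velocity
theta1 = math.radians(14.5)       # first critical angle  -> pressure (P) wave
theta2 = math.radians(31.0)       # second critical angle -> shear (S) wave
rho = 1850.0                      # kg/m^3, assumed material density

v_p = c_fluid / math.sin(theta1)  # sin(theta_c) = c_fluid / v_material
v_s = c_fluid / math.sin(theta2)

nu = (v_p**2 - 2.0 * v_s**2) / (2.0 * (v_p**2 - v_s**2))  # Poisson's ratio
E = 2.0 * rho * v_s**2 * (1.0 + nu)                       # Young's modulus, Pa

print(f"v_p = {v_p:.0f} m/s, v_s = {v_s:.0f} m/s, "
      f"nu = {nu:.3f}, E = {E / 1e9:.1f} GPa")
```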

68 citations


Journal ArticleDOI
TL;DR: The class of acceptance-rejection based algorithms is introduced in order to investigate procedures intermediate between the classical multistart and tunneling algorithms; the motivation for such an approach comes from recent work on the simulated annealing approach to global optimization.
Abstract: Any global minimization algorithm is made up of several local searches performed sequentially. In the classical multistart algorithm, the starting point for each new local search is selected at random, uniformly in the region of interest. In the tunneling algorithm, such a starting point is required to have the same function value obtained by the last local minimization. We introduce the class of acceptance-rejection based algorithms in order to investigate intermediate procedures. A particular instance is to choose the new point at random, approximately according to a Boltzmann distribution whose temperature T is updated during the algorithm. As T → 0, this distribution peaks around the global minima of the cost function, producing a kind of random tunneling effect. The motivation for such an approach comes from recent works on the simulated annealing approach to global optimization. The resulting algorithm has been tested on several examples proposed in the literature.
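
A minimal sketch of the acceptance-rejection idea: a candidate starting point is accepted roughly according to a Boltzmann law at temperature T before a local minimization is launched, and T is reduced as the run proceeds. The test function, cooling schedule, and local search below are illustrative choices, not the paper's.

```python
# Sketch: acceptance-rejection selection of starting points for local searches.
# A candidate start y is accepted with probability min(1, exp(-(f(y)-f_last)/T)),
# where f_last is the value reached by the previous local minimization.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def f(x):                                   # multimodal test function
    return np.sum(x**2) + 10.0 * np.sum(1.0 - np.cos(2.0 * np.pi * x))

lo, hi = -5.0, 5.0
best_x, best_f = None, np.inf
f_last, T = np.inf, 10.0

for _ in range(25):
    start = None
    for _ in range(2000):                   # cap rejections so the sketch terminates
        cand = rng.uniform(lo, hi, size=2)
        if rng.random() < np.exp(-max(0.0, f(cand) - f_last) / T):
            start = cand
            break
    if start is None:                       # T too low to accept anything new
        break
    res = minimize(f, start, method="Nelder-Mead")   # local search
    f_last = res.fun
    if res.fun < best_f:
        best_x, best_f = res.x, res.fun
    T *= 0.85       # as T -> 0, accepted starts concentrate near low function
                    # values, giving the "random tunneling" effect

print(best_x, best_f)
```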

Journal ArticleDOI
TL;DR: The existence of a manifold of exact solutions (mean-square error E = 0) of weights and thresholds for sigmoidal networks for XOR and other 2-variable Boolean functions is proved.
Abstract: We prove the existence of a manifold of exact solutions (mean-square error E = 0) of weights and thresholds for sigmoidal networks for XOR and other 2-variable Boolean functions. We also prove the existence of a manifold of local minima of E where E > 0.

Journal ArticleDOI
TL;DR: In this article, a method for determining the stationary phase points for multidimensional path integrals employed in the calculation of finite-temperature quantum time correlation functions is presented, where steepest descent and simulated annealing procedures are utilized to search for extrema in the action functional.
Abstract: A method is presented for determining the stationary phase points for multidimensional path integrals employed in the calculation of finite‐temperature quantum time correlation functions. The method can be used to locate stationary paths at any physical time; in the case that t≫βℏ, the stationary points are the classical paths linking two points in configuration space. Both steepest descent and simulated annealing procedures are utilized to search for extrema in the action functional. Only the first derivatives of the action functional are required. Examples are presented first of the harmonic oscillator for which the analytical solution is known, and then for anharmonic systems, where multiple stationary phase points exist. Suggestions for Monte Carlo sampling strategies utilizing the stationary points are made. The existence of many and closely spaced stationary paths as well as caustics presents no special problems. The method is applicable to a range of problems involving functional integration, where optimal paths linking two end points are desired.
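
A minimal sketch of the core numerical step for the simplest case: a first-derivative-only search for a stationary path of the discretized real-time classical action of a 1D harmonic oscillator, the case for which the abstract notes the analytical solution is known. The discretization, time interval, and optimizer are illustrative; for this short interval the classical path is an action minimum, so a plain minimizer suffices.

```python
# Sketch: stationary (here minimal) path of the discretized classical action of
# a 1D harmonic oscillator with fixed endpoints, found using only first
# derivatives of the action, then compared with the known classical path.
import numpy as np
from scipy.optimize import minimize

m, omega = 1.0, 1.0
t_total = 1.0                   # shorter than half a period
N = 50
dt = t_total / N
xa, xb = 0.0, 1.0               # fixed endpoints of the path

def action(interior):
    x = np.concatenate(([xa], interior, [xb]))
    kinetic = 0.5 * m * np.sum(((x[1:] - x[:-1]) / dt) ** 2) * dt
    potential = 0.5 * m * omega**2 * np.sum(x[:-1] ** 2) * dt
    return kinetic - potential

def action_grad(interior):
    x = np.concatenate(([xa], interior, [xb]))
    j = np.arange(1, N)
    return (m * (2 * x[j] - x[j - 1] - x[j + 1]) / dt
            - m * omega**2 * x[j] * dt)

x0 = np.linspace(xa, xb, N + 1)[1:-1]            # straight-line initial path
res = minimize(action, x0, jac=action_grad, method="CG")

tau = np.linspace(0.0, t_total, N + 1)[1:-1]
exact = (xb * np.sin(omega * tau)
         + xa * np.sin(omega * (t_total - tau))) / np.sin(omega * t_total)
print("max deviation from classical path:", np.max(np.abs(res.x - exact)))
```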

Proceedings ArticleDOI
J. M. Holtzman
13 Dec 1989
TL;DR: It has been pointed out that, in stochastic approximation contexts, there is an advantage in using derivatives rather than differences, whereas for sensitivity analysis there are advantages in using differences rather than derivatives.
Abstract: One use of perturbation analysis is to compute the derivative of a performance measure P with respect to a parameter theta for use in iterative optimization schemes that locate local maxima by driving the derivative to zero. It has been pointed out that in stochastic approximation contexts there is an advantage in using derivatives as opposed to differences. Another use of perturbation analysis is simply to do a sensitivity analysis with respect to a parameter, without relation to any optimization. For sensitivity analysis, advantages in using differences rather than derivatives are pointed out. For this use of differences, three simulation evaluations are needed. This motivates the use of noninfinitesimal perturbation analysis (or other techniques) whereby perturbation information can be obtained from a single run.
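
A small sketch contrasting the two estimators on a toy stochastic model (not a discrete-event simulation): a pathwise, perturbation-analysis-style derivative obtained from a single run versus a central difference that needs additional runs at theta ± delta.

```python
# Sketch: sensitivity dP/dtheta of a simulated performance measure, estimated
# (a) from a single run via a pathwise derivative and (b) via a central finite
# difference that requires extra runs. Toy model with common random numbers.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
U = rng.random(n)                          # common random numbers for all runs

def performance(theta, u):
    """Toy simulation output: mean of exponential 'times' with mean theta."""
    return np.mean(-theta * np.log(u))

theta, delta = 2.0, 0.1

# (a) pathwise derivative d(-theta*log u)/dtheta = -log u, from one run only
pathwise = np.mean(-np.log(U))

# (b) central finite difference: two extra runs at theta +/- delta
fd = (performance(theta + delta, U) - performance(theta - delta, U)) / (2 * delta)

print("pathwise:", pathwise, "central difference:", fd, "true value:", 1.0)
```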

Book ChapterDOI
TL;DR: This chapter presents an alternative approach with the added capability of handling Hessians that may contain negative eigenvalues, which proves especially useful for starting geometries well removed from the desired minima.
Abstract: Publisher Summary This chapter examines the various optimization procedures that build up information on the second derivatives as they minimize the energy. The focus is only on energy minima, although the connection with techniques for finding saddle points, such as transition states, is reasonably straightforward, at least in concept. The descriptions of the various potentially useful methods are reasonably complete, although examples are given only for the methods found most promising. For comparison, the exact Hessian matrix is calculated and true Newton steps are performed. Newton's method includes the quadratic nature of the energy hypersurface in the search-direction computation. The quasi-Newton methods are very powerful techniques for locating local minima. The success of these procedures can largely be attributed to the retention of the positive definiteness of the Hessian. The chapter presents an alternative approach that has the added capability of handling Hessians which may contain negative eigenvalues. Such procedures will prove especially useful for starting geometries which are well removed from the desired minima.
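
A generic sketch, not the chapter's specific procedure, of one common way to use a Hessian with negative eigenvalues: diagonalize it and take the absolute value of (and floor) the offending eigenvalues before forming the Newton-like step, so the step remains a sensible descent direction far from the minimum. The test function and parameters are illustrative.

```python
# Sketch: Newton-like minimization in which negative or tiny Hessian eigenvalues
# are replaced by their floored absolute value, so steps stay well defined even
# when the starting point is far from a minimum. Generic illustration.
import numpy as np

def rosenbrock(x):
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def grad(x):
    return np.array([-400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
                     200.0 * (x[1] - x[0] ** 2)])

def hess(x):
    return np.array([[1200.0 * x[0] ** 2 - 400.0 * x[1] + 2.0, -400.0 * x[0]],
                     [-400.0 * x[0], 200.0]])

x = np.array([0.0, 1.0])                     # here the exact Hessian is indefinite
for _ in range(100):
    g, H = grad(x), hess(x)
    if np.linalg.norm(g) < 1e-8:
        break
    w, V = np.linalg.eigh(H)
    w = np.maximum(np.abs(w), 1e-3)          # eigenvalue modification
    step = -V @ ((V.T @ g) / w)              # modified Newton step
    t = 1.0                                   # crude backtracking line search
    while rosenbrock(x + t * step) > rosenbrock(x) and t > 1e-8:
        t *= 0.5
    x = x + t * step

print("minimum near", x, "gradient norm", np.linalg.norm(grad(x)))
```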

01 Jan 1989
TL;DR: An extremely simple linear-time ordered algorithm is presented for finding column minima in triangular totally monotone matrices; its constant is essentially the same as that of the Galil-Park algorithm, and since the algorithm is so simple to program, it is expected to be the algorithm of choice in implementations.
Abstract: Following [KK89] we will say that an algorithm for finding the column minima of a matrix is ordered if the algorithm never evaluates the $(i,j)$ entry of the matrix until the minima of columns $1, 2, \ldots, i$ are known. This note presents an extremely simple linear time ordered algorithm for finding column minima in triangular totally monotone matrices. Analogous to [KK89], this immediately yields a linear time algorithm for the concave one-dimensional dynamic programming problem. Wilber [W88] gave the first linear time algorithm for the concave one-dimensional dynamic programming problem, but his algorithm was not ordered and hence could not be applied in some situations. Examples of these situations are given in [GP89] and [L89]. Galil and Park [GP89] and Larmore [L89] independently found quite different ordered linear time algorithms. All of these algorithms, and ours as well, rely on the original linear-time algorithm known as SMAWK for finding column minima in totally monotone matrices [AKMSW87]. The constant in our algorithm is essentially the same as that of the Galil-Park algorithm, and since our algorithm is so simple to program, we expect it to be the algorithm of choice in implementations.
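
For orientation, a short sketch of the column-minima problem itself: in a monotone matrix the row index of each column's minimum never decreases from left to right, which is the structure all of these algorithms exploit. The sketch below is the standard O(n log m) divide-and-conquer for monotone matrices, not the linear-time ordered algorithm of the note, and the example matrix is illustrative.

```python
# Sketch: divide-and-conquer column minima for a monotone matrix, i.e. one in
# which the row index of each column's minimum is nondecreasing from left to
# right (totally monotone matrices have this property). O(n log m) time, not
# the linear time of the ordered algorithms discussed above.
def column_minima(matrix):
    n_rows, n_cols = len(matrix), len(matrix[0])
    argmin = [0] * n_cols

    def solve(col_lo, col_hi, row_lo, row_hi):
        if col_lo > col_hi:
            return
        mid = (col_lo + col_hi) // 2
        best = min(range(row_lo, row_hi + 1), key=lambda r: matrix[r][mid])
        argmin[mid] = best
        solve(col_lo, mid - 1, row_lo, best)     # minima to the left lie above
        solve(mid + 1, col_hi, best, row_hi)     # minima to the right lie below

    solve(0, n_cols - 1, 0, n_rows - 1)
    return argmin

# Illustrative monotone matrix: A[i][j] = (i - j)^2 + i.
A = [[(i - j) ** 2 + i for j in range(6)] for i in range(6)]
print(column_minima(A))   # row index of each column's minimum -> [0, 0, 1, 2, 3, 4]
```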

Proceedings ArticleDOI
Sontag, Sussmann
01 Jan 1989
TL;DR: In this article, the authors show that the continuous gradient adjustment procedure is such that from any initial weight configuration, a separating set of weights is obtained in finite time, and they have a precise analog of the perceptron learning theorem.
Abstract: Consideration is given to the behavior of the least-squares problem that arises when one attempts to train a feedforward net with no hidden neurons. It is assumed that the net has monotonic nonlinear output units. Under the assumption that a training set is separable, that is, that there is a set of achievable outputs for which the error is zero, the authors show that there are no nonglobal minima. More precisely, they assume that the error is of a threshold least-mean square (LMS) type, in that the error function is zero for values beyond the target value. The authors' proof gives, in addition, the following stronger result: the continuous gradient adjustment procedure is such that from any initial weight configuration a separating set of weights is obtained in finite time. Thus they have a precise analog of the perceptron learning theorem. The authors contrast their results with the more classical pattern recognition problem of threshold LMS with linear output units.

Proceedings ArticleDOI
01 Jan 1989
TL;DR: A discussion is presented of the results of the exploration of the error surface for two networks, and the discovery of a true local minimum is documented.
Abstract: Summary form only given, as follows. The possible existence of local minima in the error surfaces of backpropagation neural networks has been an important unanswered question. Evidence has demonstrated that error surface regions of small slope with a high mean square error are frequently encountered during training. Such regions are often mistakenly believed to be local minima since no significant decrease in error occurs over considerable training time. In many cases, if training is continued, the shallow region is traversed. Given these experiences, it became plausible to suggest that backpropagation error surfaces have no local minima. A discussion is presented of the results of the exploration of the error surface for two networks, and the discovery of a true local minimum is documented.


Book ChapterDOI
TL;DR: In this paper, a dependence function for extreme random pairs with Gumbel margins was obtained for the distribution function in the case of maxima, or for the survival function in the case of minima.
Abstract: Bivariate extreme random pairs, with extreme margins, have a dependence function for the distribution function, in the case of maxima, or for the survival function, in the case of minima. For the cases of Gumbel margins for maxima or of exponential margins for minima, an index of dependence as well as the correlation coefficient are obtained; analogous results could be obtained for other margins, and the non-parametric correlation coefficients could also be studied.
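
For reference, a standard (Pickands) way of writing such a dependence function for the two margin choices mentioned; the notation and the convention for the argument of A are assumptions here and may differ from the chapter's.

```latex
% Pickands dependence function A : [0,1] -> [1/2, 1], convex,
% with max(w, 1-w) <= A(w) <= 1 for all w.
% Maxima with Gumbel margins (joint distribution function):
\[
  G(x,y) \;=\; \exp\!\Big[-\big(e^{-x}+e^{-y}\big)\,
    A\!\Big(\tfrac{e^{-y}}{e^{-x}+e^{-y}}\Big)\Big].
\]
% Minima with standard exponential margins (joint survival function):
\[
  \Pr(X>x,\;Y>y) \;=\; \exp\!\Big[-(x+y)\,A\!\Big(\tfrac{y}{x+y}\Big)\Big],
  \qquad x,y \ge 0.
\]
% A \equiv 1 corresponds to independence; A(w) = \max(w, 1-w) to complete dependence.
```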

Journal ArticleDOI
TL;DR: A procedure of practical use for representing functions of several variables as superpositions of functions of only one variable is presented, and it is shown how the procedure works when applied, for example, to the location of global minima.
Abstract: We present a procedure of practical use for representing functions of several variables as superpositions of functions of only one variable. We show how the procedure works when applied, for example, to the location of global minima. Our numerical examples are restricted here, for simplicity, to functions of two variables. The straightforward extension to functions of more variables will be discussed elsewhere.

Journal ArticleDOI
TL;DR: A simple heuristic technique to deal with multiple local minima in nonconvex, nonlinear, power system optimization problems by solving a sequence of interior point subproblems is presented.
Abstract: This paper presents a simple heuristic technique to deal with multiple local minima in nonconvex, nonlinear, power system optimization problems by solving a sequence of interior point subproblems. Both the real-valued and the mixed-integer cases are discussed separately. The method is then applied to the unit commitment problem, and its performance on realistic cases is compared with that of a genetic algorithm.

Proceedings ArticleDOI
Foo
01 Jan 1989
TL;DR: The authors present a neural network algorithm based on a divide-and-conquer strategy for solving large-scale traveling-salesman problems (TSPs) and confirm the invariant property that the sum of the energy fixed points is equal to that of the fixed points of its parts.
Abstract: The authors present a neural network algorithm that is based on a divide-and-conquer strategy for solving large-scale traveling-salesman problems (TSPs). This ad hoc cut-and-splice algorithm utilizes a two-layer network with a partitioned first layer and a combining second layer. The cities of a large-scale TSP are first divided into clusters, or local sets. Each local city set is represented by a neural network in the first layer, which upon convergence yields the local minimum tour. The second layer optimizes the interconnections between the local sets, forming a global final tour. The regional global (or local) minima of each cluster can be combined to form the global minima of the overall energy functions. Results of the cut-and-splice algorithm are compared with those of the single-layer modified Hopfield-Tank neural network algorithm. The invariant property is confirmed in the sense that the sum of the energy fixed points is equal to that of the fixed points of its parts, and the sum is insensitive to the partitioning of the energy landscape. This means that the linear boundary perturbation created by the partitioning is relatively small as the quadratic area of the problem increases.

Journal ArticleDOI
TL;DR: In this article, an analytical expression for the effective interaction force of a bound kink-antikink state in a general potential possessing two minima is obtained, and it is shown how the interaction force depends on the anharmonicity near the minima.

Journal ArticleDOI
TL;DR: In this paper, the authors present some basic stability results for minima of proper, lower semicontinuous, quasi-convex functions with respect to the topology of epi-convergence.
Abstract: This paper presents, in finite dimensions, some basic stability results for minima of proper, lower semicontinuous, quasi-convex functions with respect to the topology of epi-convergence. Equipped with this topology, the proper, lower semicontinuous, quasi-convex functions form a Baire space, and most such functions have unique minimizers (in the sense of Baire category).

Journal ArticleDOI
TL;DR: In this paper, a procedure for computing molecular electrostatic potentials (MEPs) at low computational cost is tested using a mixed basis set, which assigns split-valence and minimal basis sets to heavy and hydrogen atoms, respectively.

Proceedings ArticleDOI
Caviglia, Bisio, Curatelli, Giovannacci, Raffo 
01 Jan 1989
TL;DR: The authors present a modified Tank and Hopfield neural network for solving the problem of cell placement in integrated circuits, a constrained optimization problem that is NP-complete.
Abstract: The authors present a modified Tank and Hopfield neural network for solving the problem of cell placement in integrated circuits, a constrained optimization problem that is NP-complete. The neural network is composed of two mutually interconnected subcircuits. One determines the configuration of cells on the plane for which the bounding box area and connections reach a minimum, whereas the other satisfies the nonoverlapping constraints among cells. The global-local minima issue is addressed and solved in two steps. First, the initial X-Y condition from which the system is permitted to evolve toward minima is determined by solving a relaxed problem that has global minima located in regions of the state space close to those of the original problem. Second, the initial orientation of blocks is determined by a more detailed analysis of connectivity requirements. The proposed neural network paradigm has been simulated and tested for small and medium-sized integrated circuits.

Journal ArticleDOI
TL;DR: In this article, a simple approximation of the Rain-Flow-Cycle (RFC) distribution for an ergodic random load is presented, based on a discrete Markov chain approximation.

Journal ArticleDOI
TL;DR: In this paper, static corrections for reflection seismic data shot on high-velocity crystalline rock are calculated directly from the data using a global, iterative stack-power optimization method.
Abstract: Static corrections for reflection seismic data shot on high-velocity crystalline rock can be calculated directly from the data using a global, iterative stack-power optimization. The reflections consist of several peaks, creating the possibility of aligning the wrong peak in the signals. Due to misalignments, a large number of local maxima in the stack power exist in a cyclic manner, only slightly smaller than the stack power of the ‘best’ stack. Therefore the search must be of a global nature. A Monte Carlo search requires long run times. A global search method is presented using a varying sequence of parameters within each iteration. The ability of a set of static corrections and the associated stack power to move from a local maximum to a larger maximum is enhanced. The performance of the iterative process is improved, so a relatively small number (about 20) of iterations is needed to obtain the optimum set of corrections. The risk of misaligning traces by using the wrong peak in a signal consisting of several peaks is diminished, and it is less important that the initial set of corrections be close to the final set. The method is illustrated on a synthetic example and on a data set shot on the granites of Dalarna, Sweden.
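
A minimal sketch of the quantity being optimized: the stack power of a gather as a function of per-trace static shifts, improved here by one greedy pass over the traces. The synthetic data and the integer-shift coordinate search are illustrative, not the paper's varying-parameter global search.

```python
# Sketch: stack power of a gather as a function of per-trace static shifts,
# with one greedy pass that improves each trace's shift in turn. Synthetic data.
import numpy as np

rng = np.random.default_rng(3)
n_traces, n_samples = 24, 200
true_shifts = rng.integers(-8, 9, size=n_traces)

# Synthetic gather: a common wavelet shifted per trace, plus noise.
wavelet = np.exp(-0.5 * ((np.arange(n_samples) - 100) / 4.0) ** 2)
traces = np.array([np.roll(wavelet, s) for s in true_shifts])
traces = traces + 0.1 * rng.standard_normal((n_traces, n_samples))

def stack_power(shifts):
    stack = np.sum([np.roll(tr, -s) for tr, s in zip(traces, shifts)], axis=0)
    return np.sum(stack ** 2)

shifts = np.zeros(n_traces, dtype=int)          # initial static corrections
for i in range(n_traces):                        # one greedy pass over traces
    shifts[i] = max(range(-10, 11),
                    key=lambda s: stack_power(np.r_[shifts[:i], s, shifts[i + 1:]]))

print("stack power before/after:",
      stack_power(np.zeros(n_traces, dtype=int)), stack_power(shifts))
```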

Journal ArticleDOI
TL;DR: In this paper, the relationship between the convergence properties of a family of extended real-valued functions and the convergence of the sets of their saddle points is investigated; the use of Γ-limits on convergence spaces makes it possible to refine several theorems and to obtain entirely new results.