Showing papers on "Maxima and minima published in 1973"


Journal ArticleDOI
TL;DR: A simple parallel procedure for selecting significant curvature maxima and minima on a digital curve is described.
Abstract: A simple parallel procedure for selecting significant curvature maxima and minima on a digital curve is described.
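
As a rough illustration of the idea (not the paper's parallel procedure itself), the sketch below estimates a discrete curvature at each point of a closed digital curve from the turning angle over a small arc and keeps only the local extrema whose magnitude exceeds a threshold; the arc length k and the threshold are hypothetical tuning parameters.

```python
import numpy as np

def curvature_extrema(points, k=3, thresh=0.3):
    """Pick out significant curvature maxima/minima on a closed digital curve.

    A rough illustration, not the paper's parallel procedure: the curvature at
    point i is estimated from the turning angle between the chord arriving from
    the point k steps behind and the chord leaving toward the point k steps
    ahead; local extrema of |angle| above `thresh` (radians) are kept.
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    idx = np.arange(n)
    v_in = pts - pts[(idx - k) % n]          # incoming chord over k steps
    v_out = pts[(idx + k) % n] - pts         # outgoing chord over k steps
    turn = np.angle(np.exp(1j * (np.arctan2(v_out[:, 1], v_out[:, 0])
                                 - np.arctan2(v_in[:, 1], v_in[:, 0]))))
    mag = np.abs(turn)
    keep = (mag >= np.roll(mag, 1)) & (mag >= np.roll(mag, -1)) & (mag > thresh)
    return np.flatnonzero(keep)

# Example: points sampled along the boundary of a square; the selected
# indices are exactly the four corners (0, 10, 20, 30).
side = np.linspace(0.0, 1.0, 10, endpoint=False)
square = np.concatenate([np.column_stack([side, np.zeros_like(side)]),
                         np.column_stack([np.ones_like(side), side]),
                         np.column_stack([1 - side, np.ones_like(side)]),
                         np.column_stack([np.zeros_like(side), 1 - side])])
print(curvature_extrema(square, k=2, thresh=0.5))
```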

524 citations


Journal ArticleDOI
TL;DR: In this article, a nonlinear programming problem with inequality constraints and with unknown vector x is converted to an unconstrained minimization problem in unknowns x and λ, where λ is a vector of Lagrange multipliers.
Abstract: A nonlinear programming problem with inequality constraints and with unknown vector x is converted to an unconstrained minimization problem in unknowns x and λ, where λ is a vector of Lagrange multipliers. It is shown that, if the original problem possesses standard convexity properties, then local minima of the associated unconstrained problem are in fact global minima of that problem and, consequently, Kuhn-Tucker points for the original problem. A computational procedure based on the conjugate residual scheme is applied in the xλ-space to solve the associated unconstrained problem. The resulting algorithm requires only first-order derivative information on the functions involved and will solve a quadratic programming problem in a finite number of steps.
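
One way to picture a joint xλ-space formulation (a minimal sketch, not the paper's Lagrangian construction or its conjugate-residual algorithm) is to minimize a merit function built from the Kuhn-Tucker residual over x and λ together; for problems that possess Kuhn-Tucker points, the global minima of this merit function, with value zero, are exactly those points. The small example problem, the complementarity residual, and the Nelder-Mead solver below are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

# Example problem: minimize f(x) = (x1-2)^2 + (x2-1)^2  subject to  g(x) = x1^2 + x2^2 - 1 <= 0.
def f_grad(x):
    return np.array([2 * (x[0] - 2), 2 * (x[1] - 1)])

def g(x):
    return x[0] ** 2 + x[1] ** 2 - 1

def g_grad(x):
    return np.array([2 * x[0], 2 * x[1]])

def kkt_residual_sq(z):
    """Squared norm of a Kuhn-Tucker residual in the joint (x, lambda) space.

    Stationarity:     grad f(x) + lambda * grad g(x) = 0
    Complementarity:  min(lambda, -g(x)) = 0  (forces lambda >= 0, g <= 0, lambda*g = 0)
    """
    x, lam = z[:2], z[2]
    stat = f_grad(x) + lam * g_grad(x)
    comp = min(lam, -g(x))
    return float(stat @ stat + comp ** 2)

res = minimize(kkt_residual_sq, np.array([0.0, 0.0, 1.0]), method="Nelder-Mead")
print(res.x)   # expect roughly (0.894, 0.447, 1.236), i.e. (2/sqrt(5), 1/sqrt(5), sqrt(5)-1)
```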

41 citations


Journal ArticleDOI
TL;DR: In this article, a self-contained theory of extrema (viz., suprema, maxima, minima, and infima) of differentiable functions of several (possibly infinitely many) variables mapping into finite-dimensional integrally closed directed partially ordered linear spaces is reported.
Abstract: A self-contained theory of extrema (viz., suprema, maxima, minima, and infima) of differentiable functions of several (possibly infinitely many) variables mapping into finite-dimensional integrally closed directed partially ordered linear spaces is reported. The applicability of the theory to the analysis of the linear least-squares vector estimation problem is demonstrated.

22 citations


Journal ArticleDOI
TL;DR: Methods for finding the minima of convex functions, with constraints of the recursive type on the range of variation of the argument, are proposed and used to solve discontinuous games and to find saddle points.
Abstract: Methods for finding the minima of convex functions, with constraints of the recursive type on the range of variation of the argument, are proposed. A generalization of the algorithms for minimizing non-smooth functions is given. The methods are used to solve discontinuous games and to find saddle points. Some results of numerical calculations are presented.
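
For readers unfamiliar with minimizing non-smooth convex functions over a constraint set, the projected-subgradient iteration below gives the flavor of such methods; it is a generic sketch, not the recursive-constraint scheme of the paper, and the example objective and feasible set are made up for illustration.

```python
import numpy as np

def projected_subgradient(f, subgrad, project, x0, steps=2000):
    """Minimal projected-subgradient sketch for minimizing a convex,
    possibly non-smooth function over a convex set.
    f(x)       -> objective value
    subgrad(x) -> any subgradient of f at x
    project(x) -> projection of x onto the feasible set
    """
    x = project(np.asarray(x0, dtype=float))
    best_x, best_f = x.copy(), f(x)
    for k in range(1, steps + 1):
        step = 1.0 / np.sqrt(k)                  # diminishing step size
        x = project(x - step * subgrad(x))
        if f(x) < best_f:                        # subgradient steps are not monotone
            best_x, best_f = x.copy(), f(x)
    return best_x

# Example: minimize |x1| + |x2 - 1| over the unit ball.
f = lambda x: abs(x[0]) + abs(x[1] - 1)
subgrad = lambda x: np.sign(x - np.array([0.0, 1.0]))
project = lambda x: x / max(1.0, np.linalg.norm(x))
print(projected_subgradient(f, subgrad, project, [2.0, -2.0]))  # -> approx [0, 1]
```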

15 citations


Journal ArticleDOI
TL;DR: This article departs from the usual iterative procedure for obtaining the optimal dimensions of a four-bar function generator by imposing additional constraints upon the problem, which results in a more extensive set of equations to be solved than in the conventional method.
Abstract: This paper is a departure from the usual procedure for obtaining the optimal dimensions of a four-bar function generator by iteration. In the usual procedure, the accuracy points are first chosen by means of Chebyshev spacing or some other means. Using these accuracy points, a four-bar linkage is synthesized and the error calculated. Freudenstein’s respacing formula may then be used to respace the accuracy points so as to minimize the errors. After the respacing of the accuracy points is calculated, a new mechanism is synthesized. The process is repeated until the magnitudes of the extreme errors occurring between accuracy points are equalized. The procedure adopted in this paper is to immediately force the extreme errors between accuracy points to be equal in magnitude by imposing additional constraints upon the problem. These constraints eliminate the arbitrary choice of the first set of accuracy points. This procedure results in a more extensive set of equations to be solved than the conventional method. However, once the equations are solved, they lead directly to equalized (and thus minimized) extrema of the magnitude of structural error between the precision points. Thus there is no need to perform the iterative steps of conventional optimization. The proposed method is illustrated with an example.
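
The Chebyshev spacing mentioned above places the n accuracy (precision) points at the projections onto the interval of points spaced uniformly around a semicircle; a short sketch of the standard formula (the interval endpoints in the example are illustrative, not taken from the paper):

```python
import numpy as np

def chebyshev_spacing(a, b, n):
    """Chebyshev spacing of n accuracy (precision) points on the interval [a, b],
    the conventional starting point described in the abstract above:
    x_j = (a + b)/2 - (b - a)/2 * cos((2j - 1) * pi / (2n)),  j = 1..n.
    """
    j = np.arange(1, n + 1)
    return 0.5 * (a + b) - 0.5 * (b - a) * np.cos((2 * j - 1) * np.pi / (2 * n))

# e.g. three accuracy points over an input range of 30 to 120 degrees
print(chebyshev_spacing(30.0, 120.0, 3))
```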

15 citations


Journal ArticleDOI
01 Jan 1973
TL;DR: A heuristic search is described which has the aim of finding practically all the extrema of a given nonlinear functional; it is very efficient for functionals of dimension 15-20 with 20-25 extrema.
Abstract: A heuristic search is described which has the aim of finding practically all the extrema of a given nonlinear functional. A standard unimodal descent algorithm is employed for finding individual extrema. This basic algorithm is applied repeatedly using various computed initial points and starting directions. Through the additional use of several learning cycles most of the available extrema can be found. Numerical experiments indicate that the method is very efficient for functionals of dimension 15-20 with 20-25 extrema.
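
The skeleton of such a multistart search is easy to sketch. The version below simply repeats a standard descent from random starting points and keeps the distinct minima, omitting the paper's computed starting directions and learning cycles; the Himmelblau test function is an illustrative choice, not one used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def multistart_minima(f, bounds, n_starts=100, tol=1e-3, seed=0):
    """Repeated local descent from varied starting points, keeping the distinct
    local minima found.  A bare-bones sketch of the multistart idea only.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    found = []                                   # list of (x*, f(x*)) pairs
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)
        res = minimize(f, x0, method="BFGS")     # standard unimodal descent
        if res.success and not any(np.linalg.norm(res.x - x) < tol for x, _ in found):
            found.append((res.x, res.fun))
    return found

# Example: the Himmelblau function has four distinct global minima.
himmelblau = lambda x: (x[0]**2 + x[1] - 11)**2 + (x[0] + x[1]**2 - 7)**2
for x, fx in multistart_minima(himmelblau, [(-5, 5), (-5, 5)], n_starts=50):
    print(np.round(x, 3), round(fx, 6))
```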

13 citations


Journal ArticleDOI
TL;DR: In this article, sufficient conditions are developed such that any local minimum of such an objective function is also a global minimum, and a two-parameter design problem associated with an M/Ek/1 system is used as an example to show how the conditions are utilized.
Abstract: This research was motivated by design problems in queueing theory where the objective function is composed of a discrete variable and a continuous variable. Sufficient conditions are developed such that any local minimum of such an objective function is also a global minimum. A two-parameter design problem associated with an M/Ek/1 system is used as an example to show how the conditions are utilized.
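
A sketch of how conditions of this kind are exploited in practice (with a made-up cost function, not the paper's M/Ek/1 model or its exact conditions): if f(n, t) is convex in the continuous variable t for each discrete n, and the reduced objective g(n) = min over t of f(n, t) is unimodal in n, then the first local minimum met while stepping through n is also the global one.

```python
from scipy.optimize import minimize_scalar

def mixed_min(f, n_min, n_max, t_bounds):
    """Minimize f(n, t) with n a positive integer and t continuous, assuming
    (in the spirit of the conditions above) that f(n, .) is convex for each n
    and that g(n) = min_t f(n, t) is unimodal in n, so the search can stop at
    the first local minimum in n.
    """
    def g(n):
        res = minimize_scalar(lambda t: f(n, t), bounds=t_bounds, method="bounded")
        return res.fun, res.x
    best_n, (best_val, best_t) = n_min, g(n_min)
    for n in range(n_min + 1, n_max + 1):
        val, t = g(n)
        if val >= best_val:            # g(n) has started to rise: local = global, stop
            break
        best_n, best_val, best_t = n, val, t
    return best_n, best_t, best_val

# Made-up design cost (illustrative only): n servers at unit cost 3,
# a congestion proxy 50/(n*t), and a service-rate cost 2*t.
cost = lambda n, t: 3.0 * n + 50.0 / (n * t) + 2.0 * t
print(mixed_min(cost, 1, 20, (0.1, 50.0)))   # -> n = 2, t about 3.54
```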

12 citations


Journal ArticleDOI
TL;DR: The algorithm is structured in a way that enables many of the operations to be performed in parallel; this parallelism has had an impact on the conceptual design of an array computer.

8 citations



Journal ArticleDOI
TL;DR: Several sufficient conditions for global constrained minima are given in this paper, which consist of necessary conditions for local minima together with generalized convexity assumptions and convexity-type assumptions on the Lagrangian function.
Abstract: Several sufficient conditions for global constrained minima are given. These conditions consist of necessary conditions for local minima together with generalized convexity assumptions. The convexity-type assumptions are made on the Lagrangian function, which presents the advantage of not requiring any (generalized convexity) assumption on each function involved in the problem and of allowing problems with several local extrema. Previously obtained results are used to replace pseudo-convexity by a more workable condition.

4 citations


Journal ArticleDOI
TL;DR: In this paper, sufficient conditions for global constrained minima, and for global minima in some restricted domain of the problem, are given; both express the fact that the Lagrangian function is convex, which does not require any convexity property of the functional or the constraints.
Abstract: Sufficient conditions for global constrained minima and for global minima in some restricted domain of the problem are given. Both conditions express the fact that the Lagrangian function is convex, which does not require any convexity property of the functional and the constraints.
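
For orientation, the generic shape of such convex-Lagrangian sufficient conditions (a standard statement for illustration, not the precise hypotheses of either paper above) is:

```latex
% For the problem  min f(x)  subject to  g_i(x) <= 0,  with
% L(x, \lambda) = f(x) + \sum_i \lambda_i g_i(x):
\[
\left.
\begin{aligned}
& g_i(x^\ast) \le 0,\quad \lambda_i^\ast \ge 0,\quad \lambda_i^\ast g_i(x^\ast) = 0,\\
& \nabla_x L(x^\ast,\lambda^\ast) = 0,\qquad L(\,\cdot\,,\lambda^\ast)\ \text{convex}
\end{aligned}
\right\}
\;\Longrightarrow\;
f(x) \;\ge\; L(x,\lambda^\ast) \;\ge\; L(x^\ast,\lambda^\ast) \;=\; f(x^\ast)
\quad\text{for all feasible } x,
\]
% so the Kuhn-Tucker point x* is a global constrained minimum, even if f and
% the g_i are not individually convex.
```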


Book ChapterDOI
01 Jan 1973
TL;DR: The polynomial is, in many respects, the best-behaved of all mathematical functions as discussed by the authors, and it is continuous, differentiable, and easy to evaluate for all values of its argument.
Abstract: The polynomial is, in many respects, the best-behaved of all mathematical functions. It is continuous, differentiable, and easy to evaluate for all values of its argument. It may readily be differentiated or integrated, the result of either operation being, moreover, another polynomial. Well-established methods exist for the location of its zeros, and the problem of locating its maxima or minima is simply that of finding the zeros of a derived polynomial.
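
For example, with NumPy's polynomial utilities the extrema are obtained directly from the real zeros of the derivative, as the chapter describes, and classified by the sign of the second derivative (a minimal sketch):

```python
import numpy as np
from numpy.polynomial import Polynomial

def polynomial_extrema(coeffs):
    """Locate the maxima and minima of a polynomial (coefficients given in
    increasing order of power) by finding the real zeros of its derivative
    and classifying each with the second derivative.
    """
    p = Polynomial(coeffs)
    dp, d2p = p.deriv(), p.deriv(2)
    crit = [r.real for r in dp.roots() if abs(r.imag) < 1e-9]
    return [(x, p(x), "min" if d2p(x) > 0 else "max" if d2p(x) < 0 else "flat")
            for x in sorted(crit)]

# p(x) = 1 - 3x + x^3 has a maximum at x = -1 and a minimum at x = +1.
print(polynomial_extrema([1, -3, 0, 1]))
```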


Book ChapterDOI
01 Jan 1973
TL;DR: In this article, approximate solutions of variational principles are obtained by considering some closed subset of the space of permissible functions to provide an upper and lower bound for the theoretical solution of the variational principle.
Abstract: A physical problem may be formulated as a variational principle rather than as a differential equation with associated conditions. The basic problem of the variational principle is to determine the function from an admissible class of functions such that a certain definite integral involving the function and some of its derivatives takes on a maximum or minimum value in a closed region R. This is a generalisation of the elementary theory of maxima and minima of the calculus which is concerned with the problem of finding a point in a closed region at which a function has a maximum or minimum value compared with neighboring points in the region. The definite integral in the variational principle is referred to as a functional, since it depends on the entire course of a function rather than on a number of variables. The domain of the functional is the space of the admissible functions. The main difficulty with the variational principle approach is that problems which can be meaningfully formulated as variational principles may not have solutions. This is reflected in mathematical terms by the domain of admissible functions of the functional not forming a closed set. Thus the existence of an extremum (maximum or minimum) cannot be assumed for a variational principle. However, in this text we are concerned with approximate solutions of variational principles. These are obtained by considering some closed subset of the space of permissible functions to provide an upper and lower bound for the theoretical solution of the variational principle.
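
A minimal Rayleigh-Ritz sketch of this idea follows; the functional, boundary conditions, and sine basis are illustrative choices, not taken from the chapter. Restricting the admissible functions to a finite basis turns the functional into an ordinary quadratic function of the coefficients, so its extremum is found by elementary calculus as a linear system.

```python
import numpy as np

def ritz_minimize(f, n_basis=7, n_quad=401):
    """Rayleigh-Ritz sketch: minimize the functional
        J[u] = integral over [0, 1] of ( u'(x)^2 / 2 - f(x) u(x) ) dx,
    with u(0) = u(1) = 0, over the subset of trial functions
        u(x) = sum_i c_i sin(i*pi*x).
    On this subset J is an ordinary quadratic function of c, so its minimum
    solves the linear system K c = F.
    """
    x = np.linspace(0.0, 1.0, n_quad)
    w = np.full(n_quad, x[1] - x[0])                  # trapezoid-rule weights
    w[0] *= 0.5
    w[-1] *= 0.5
    i = np.arange(1, n_basis + 1)
    phi = np.sin(np.outer(i, np.pi * x))              # basis functions (one per row)
    dphi = (i[:, None] * np.pi) * np.cos(np.outer(i, np.pi * x))
    K = (dphi * w) @ dphi.T                           # K_ij = integral of phi_i' phi_j'
    F = phi @ (w * f(x))                              # F_i  = integral of f * phi_i
    c = np.linalg.solve(K, F)
    return lambda t: np.sin(np.outer(np.atleast_1d(t), i * np.pi)) @ c

# For f(x) = 1 the exact minimizer is u(x) = x(1 - x)/2, so u(0.5) = 0.125.
u = ritz_minimize(lambda x: np.ones_like(x))
print(u(0.5)[0])   # close to 0.125
```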

01 Apr 1973
TL;DR: In this article, the authors propose an algorithm that finds the global minimum by permitting the use of series expansions of arbitrary order, exploiting the a priori knowledge that the addition of a particular function, corresponding to a new column in A, will not improve the goodness of the approximation.
Abstract: An algorithm is described which solves for the parameters x = (x1, x2, ..., xm) and p in an approximation problem Ax ≈ y(p), where the parameter p occurs nonlinearly in y. Instead of linearization methods, which require an approximate value of p to be supplied as a priori information and which may lead to the finding of local minima, the proposed algorithm finds the global minimum by permitting the use of series expansions of arbitrary order, exploiting the a priori knowledge that the addition of a particular function, corresponding to a new column in A, will not improve the goodness of the approximation.
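
The separable structure of such problems is easy to exhibit: for fixed p the best x is an ordinary linear least-squares solution, so the residual can be studied as a function of p alone. The sketch below simply scans p over a grid (a stand-in for the paper's series-expansion device, not the algorithm itself) and returns the grid-global minimizer; the toy model in the example is hypothetical.

```python
import numpy as np

def separable_global_fit(A, y_of_p, p_grid):
    """Sketch of the separable structure in  A x ≈ y(p): for each fixed p the
    best x is a linear least-squares solution, so the reduced residual is a
    function of p alone, examined here by a simple grid scan over p.
    """
    best = None
    for p in p_grid:
        y = y_of_p(p)
        x, *_ = np.linalg.lstsq(A, y, rcond=None)
        r = np.linalg.norm(A @ x - y)                 # reduced residual for this p
        if best is None or r < best[0]:
            best = (r, p, x)
    return best                                       # (residual, p, x)

# Tiny illustration: y(p) = exp(-p*t) sampled at a few t, approximated by [1, t];
# the residual is smallest where exp(-p*t) is closest to a straight line.
t = np.linspace(0.0, 1.0, 20)
A = np.column_stack([np.ones_like(t), t])
res, p, x = separable_global_fit(A, lambda p: np.exp(-p * t), np.linspace(0.1, 5.0, 50))
print(p, x, res)
```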