
Showing papers on "Nonlinear programming" published in 1975


Journal ArticleDOI
Shih Han1
TL;DR: In this paper, a stepsize procedure is proposed to maintain monotone decrease of an exact penalty function, thereby globalizing the convergence of locally convergent Newton and quasi-Newton methods for nonlinear programming.
Abstract: Recently developed Newton and quasi-Newton methods for nonlinear programming possess only local convergence properties. Adopting the concept of the damped Newton method in unconstrained optimization, we propose a stepsize procedure to maintain monotone decrease of an exact penalty function. In so doing, the convergence of the method is globalized. Keywords: nonlinear programming, global convergence, exact penalty function.

1,077 citations
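
As a rough illustration of the stepsize idea in the entry above, the sketch below applies a Newton-type step to a small equality-constrained problem and backtracks until an l1 exact penalty function decreases sufficiently. The problem data, the penalty weight, and the sufficient-decrease test are assumptions for illustration, not the paper's exact procedure.

```python
# Minimal sketch: globalize a Newton-type step by backtracking on an l1 exact penalty.
import numpy as np

def f(x):                       # illustrative objective
    return x[0]**2 + 2 * x[1]**2

def grad_f(x):
    return np.array([2 * x[0], 4 * x[1]])

def c(x):                       # single equality constraint c(x) = 0
    return np.array([x[0] + x[1] - 1.0])

def jac_c(x):
    return np.array([[1.0, 1.0]])

def exact_penalty(x, mu):       # merit function whose monotone decrease is enforced
    return f(x) + mu * np.abs(c(x)).sum()

def newton_step(x, lam):
    # Newton step on the Lagrangian optimality conditions (Hessian is constant here).
    H = np.array([[2.0, 0.0], [0.0, 4.0]])
    A = jac_c(x)
    K = np.block([[H, A.T], [A, np.zeros((1, 1))]])
    sol = np.linalg.solve(K, -np.concatenate([grad_f(x) + A.T @ lam, c(x)]))
    return sol[:2], lam + sol[2:]

x, lam, mu = np.array([3.0, -2.0]), np.zeros(1), 10.0
for _ in range(20):
    dx, lam = newton_step(x, lam)
    t = 1.0
    # Damped step: halve t until the exact penalty shows sufficient decrease.
    while exact_penalty(x + t * dx, mu) > exact_penalty(x, mu) - 1e-4 * t * (dx @ dx):
        t *= 0.5
        if t < 1e-8:
            break
    x = x + t * dx
print(x)   # approaches the constrained minimizer (2/3, 1/3)
```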


Journal ArticleDOI
TL;DR: A dual problem, constructed from second derivatives of the functions constituting the primal problem, is associated with a primal nonlinear programming problem in this paper, and duality results for this pair of problems are given.

225 citations


Journal ArticleDOI
01 Nov 1975
TL;DR: In this article, the elastic analysis of structures with "unilateral contact" and "friction" boundary conditions is considered and it is proved that the considered inequality boundary value problems can be formulated equivalently as variational inequalities, which permit the derivation of theorems of minimum potential and complementary energy, to account for this type of boundary conditions.
Abstract: In the present paper the elastic analysis of structures with “unilateral contact” — and “friction” — boundary conditions is considered. It is proved that the considered inequality boundary value problems can be formulated equivalently as variational inequalities, which permit the derivation of the theorems of minimum potential and complementary energy, to account for this type of boundary conditions. These minimum theorems are used to formulate the analysis as a nonlinear programming problem. Numerical examples on structures with coupled unilateral contact- and friction-boundary conditions illustrate the theory.

216 citations
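
To make the "analysis as a nonlinear programming problem" concrete, here is a toy version of a unilateral contact condition, posed with an assumed one-spring model rather than anything taken from the paper: minimize the potential energy subject to the displacement not exceeding the gap to a rigid stop.

```python
# Toy illustration of a unilateral contact condition as a constrained minimization.
from scipy.optimize import minimize

k, p, g = 100.0, 250.0, 2.0     # spring stiffness, applied load, gap to the rigid stop

def potential(u):               # total potential energy: strain energy minus work of the load
    return 0.5 * k * u[0]**2 - p * u[0]

# Unilateral contact: the displacement may not exceed the gap, u <= g.
res = minimize(potential, x0=[0.0],
               constraints=[{"type": "ineq", "fun": lambda u: g - u[0]}])
print(res.x)   # p/k = 2.5 exceeds the gap, so the stop is active and u = g = 2.0
```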


Journal ArticleDOI
TL;DR: In this paper, the collocation method meshed with non-linear programming techniques provides an efficient strategy for the numerical solution of optimal control problems, and good accuracy can be obtained for the state and the control trajectories as well as for the value of the objective function.
Abstract: The collocation method meshed with non-linear programming techniques provides an efficient strategy for the numerical solution of optimal control problems. Good accuracy can be obtained for the state and the control trajectories as well as for the value of the objective function. In addition, the control strategy can be quite flexible in form. However, it is necessary to select the appropriate number of collocation points and number of parameters in the approximating functions with care.

137 citations
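
A minimal direct-collocation sketch in the spirit of the entry above, with an assumed toy problem: drive a scalar system x' = u from x(0) = 0 to x(1) = 1 with minimum control effort, enforcing the dynamics through trapezoidal defect constraints and handing the resulting nonlinear program to a general-purpose solver.

```python
# Direct collocation of a toy optimal control problem, solved as a nonlinear program.
import numpy as np
from scipy.optimize import minimize

N = 10                          # number of mesh intervals
h = 1.0 / N

def unpack(z):                  # decision vector holds states x_0..x_N and controls u_0..u_N
    return z[:N + 1], z[N + 1:]

def objective(z):               # trapezoidal quadrature of the control effort u^2
    x, u = unpack(z)
    return h * np.sum((u[:-1]**2 + u[1:]**2) / 2.0)

def defects(z):                 # collocation (defect) constraints enforcing x' = u
    x, u = unpack(z)
    return x[1:] - x[:-1] - h * (u[:-1] + u[1:]) / 2.0

cons = [{"type": "eq", "fun": defects},
        {"type": "eq", "fun": lambda z: unpack(z)[0][0]},          # x(0) = 0
        {"type": "eq", "fun": lambda z: unpack(z)[0][-1] - 1.0}]   # x(1) = 1

res = minimize(objective, np.zeros(2 * (N + 1)), constraints=cons, method="SLSQP")
x_opt, u_opt = unpack(res.x)
print(u_opt)                    # the exact optimal control is u(t) = 1 for all t
```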


Journal ArticleDOI
TL;DR: An algorithm is developed for solving the convex programming problem by constructing a cutting plane through the center of a polyhedral approximation to the optimum, which generates a sequence of primal feasible points whose limit points satisfy the Kuhn–Tucker conditions of the problem.
Abstract: An algorithm is developed for solving the convex programming problem which iteratively proceeds to the optimum by constructing a cutting plane through the center of a polyhedral approximation to the optimum. This generates a sequence of primal feasible points whose limit points satisfy the Kuhn–Tucker conditions of the problem. Additionally, we present a simple, effective rule for dropping prior cuts, an easily calculated bound on the objective function, and a rate of convergence.

137 citations
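
The paper's algorithm cuts through the center of the current polyhedral approximation; as a simpler, related illustration (not the paper's method), the sketch below runs the classical Kelley cutting-plane loop on an assumed convex problem, accumulating tangent-plane cuts in epigraph form.

```python
# Kelley-style cutting-plane loop for min f(x) over the box [-2, 2]^2 (data assumed).
import numpy as np
from scipy.optimize import linprog

f = lambda x: x[0]**2 + x[1]**2
grad = lambda x: 2 * np.array(x)

# Epigraph form: minimize t subject to t >= f(x_k) + grad(x_k)'(x - x_k) for every cut.
A_ub, b_ub = [], []
x = np.array([2.0, -2.0])
for _ in range(30):
    g = grad(x)
    A_ub.append([g[0], g[1], -1.0])          # cut: g'x - t <= g'x_k - f(x_k)
    b_ub.append(g @ x - f(x))
    res = linprog(c=[0.0, 0.0, 1.0], A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(-2, 2), (-2, 2), (None, None)])
    x = res.x[:2]
print(x)   # the iterates approach the minimizer (0, 0)
```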


Book ChapterDOI
01 Jan 1975
TL;DR: A systematic approach for minimization of a wide class of non-differentiable functions based on approximation of the nondifferentiable function by a smooth function and related to penalty and multiplier methods for constrained minimization is presented.
Abstract: This paper presents a systematic approach for minimization of a wide class of non-differentiable functions. The technique is based on approximation of the nondifferentiable function by a smooth function and is related to penalty and multiplier methods for constrained minimization. Some convergence results are given and the method is illustrated by means of examples from nonlinear programming.

125 citations
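
A hedged sketch of the general idea described above: replace a nondifferentiable term by a smooth approximation controlled by a parameter and tighten the parameter between unconstrained minimizations. The specific square-root smoothing and the example function are assumptions for illustration, not necessarily the paper's construction.

```python
# Smoothing a nondifferentiable objective and driving the smoothing parameter to zero.
import numpy as np
from scipy.optimize import minimize

def smooth_abs(t, eps):
    return np.sqrt(t * t + eps)              # smooth approximation of |t|

def f_smooth(x, eps):
    return smooth_abs(x[0] - 1.0, eps) + 0.5 * x[0]**2

x = np.array([3.0])
for eps in [1.0, 1e-2, 1e-4, 1e-8]:          # tighten the approximation gradually
    x = minimize(lambda x: f_smooth(x, eps), x, method="BFGS").x
print(x)   # the nonsmooth minimizer of |x - 1| + 0.5 x^2 is x = 1
```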


Journal ArticleDOI
TL;DR: The main purpose of this work is to associate a wide class of Lagrangian functions with a nonconvex, inequality and equality constrained optimization problem in such a way that unconstrained stationary points and local saddle points of each Lagrangian are related to Kuhn–Tucker points or local or global solutions of the optimization problem.
Abstract: The main purpose of this work is to associate a wide class of Lagrangian functions with a nonconvex, inequality and equality constrained optimization problem in such a way that unconstrained stationary points and local saddle points of each Lagrangian are related to Kuhn–Tucker points or local or global solutions of the optimization problem. As a consequence of this we are able to obtain duality results and two computational algorithms for solving the optimization problem. One algorithm is a Newton algorithm which has a local superlinear or quadratic rate of convergence. The other method is a locally linearly convergent method for finding stationary points of the Lagrangian and is an extension of the method of multipliers of Hestenes and Powell to inequalities.

79 citations
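
As a rough illustration of a multiplier method applied to an inequality constraint, the sketch below uses a common quadratic augmented-Lagrangian form with a max(0, ·) term and the standard multiplier update; the problem data and penalty parameter are assumptions, and this is not claimed to be the particular Lagrangian class studied in the paper.

```python
# Multiplier (augmented Lagrangian) iteration for min f(x) subject to g(x) <= 0.
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 2.0)**2            # objective
g = lambda x: x[0] - 1.0                 # inequality constraint g(x) <= 0

def aug_lagrangian(x, lam, c):
    return f(x) + (max(0.0, lam + c * g(x))**2 - lam**2) / (2.0 * c)

lam, c, x = 0.0, 10.0, np.array([0.0])
for _ in range(10):
    x = minimize(lambda x: aug_lagrangian(x, lam, c), x).x   # inner unconstrained solve
    lam = max(0.0, lam + c * g(x))                           # multiplier update
print(x, lam)   # optimum x = 1 with multiplier 2 for min (x - 2)^2 s.t. x <= 1
```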


Journal ArticleDOI
Rein Luus1
TL;DR: The effectiveness of the method is shown with a 15-variable problem, which requires about 1 day's FORTRAN programming effort and 8 seconds of computer time for its solution on an IBM 370/165 digital computer.
Abstract: This paper presents a useful procedure for solving nonlinear integer programming problems. It finds, first, a pseudo-solution to the problem, as if the variables were continuous. Then it uses direct search in the neighbourhood of the pseudo-solution to find the optimum. The effectiveness of the method is shown with a 15-variable problem, which requires about 1 day's FORTRAN programming effort and 8 seconds of computer time for its solution on an IBM 370/165 digital computer.

70 citations
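
A small sketch of the two-phase procedure described above, with an assumed toy objective and an assumed neighbourhood radius: first find the continuous "pseudo-solution", then directly search integer points near it.

```python
# Phase 1: continuous relaxation; Phase 2: direct search over nearby integer points.
import itertools
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 2.3)**2 + (x[1] - 3.7)**2 + 0.5 * x[0] * x[1]

# Phase 1: treat the variables as continuous.
relaxed = minimize(f, x0=[0.0, 0.0]).x

# Phase 2: enumerate integer points within +/-2 of the rounded pseudo-solution.
base = np.round(relaxed).astype(int)
candidates = itertools.product(*[range(b - 2, b + 3) for b in base])
best = min(candidates, key=lambda p: f(np.array(p, dtype=float)))
print(relaxed, best)
```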



Book
01 Jan 1975

56 citations


Journal ArticleDOI
TL;DR: In this article, an iterative procedure using conjugate directions to minimize a nonlinear function subject to linear inequality constraints is presented, which converges to a stationary point assuming only first-order differentiability, and has ann-q step superlinear or quadratic rate of convergence with stronger assumptions.
Abstract: An iterative procedure is presented which uses conjugate directions to minimize a nonlinear function subject to linear inequality constraints. The method (i) converges to a stationary point assuming only first-order differentiability, (ii) has an n-q step superlinear or quadratic rate of convergence with stronger assumptions (n is the number of variables, q is the number of constraints which are binding at the optimum), (iii) requires the computation of only the objective function and its first derivatives, and (iv) is experimentally competitive with well-known methods.

ReportDOI
01 Mar 1975
TL;DR: A modified version of an earlier preliminary design of a Generalized Reduced Gradient (GRG) computer code is described, including the experiences that led to the modifications.
Abstract: : Generalized Reduced Gradient (GRG) Methods are algorithms for solving nonlinear programs of general structure. An earlier paper discussed the basic principles of GRG and presented the preliminary design of a GRG computer code. This paper describes a modified version of that initial design, including the experiences that led to the modifications. This paper also is intended to serve as partial system documentation. The code is compared computationally with an interior penalty function code, and anticipated future work on the algorithm is outlined.

Journal ArticleDOI
TL;DR: A modified technique of separable programming was used to maximize the squared correlation ratio of weighted responses to partially ordered categories.
Abstract: A modified technique of separable programming was used to maximize the squared correlation ratio of weighted responses to partially ordered categories. The technique employs a polygonal approximation to each single-variable function by choosing mesh points around the initial approximation supplied by Nishisato's method. The major characteristics of this approach are: (i) it does not require any grid refinement; (ii) the entire process of computation quickly converges to an acceptable level of accuracy; and (iii) the method employs specific sets of mesh points for specific variables, whereby it reduces the number of variables for the separable programming technique. Numerical examples were provided to illustrate the procedure.
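
A hedged sketch of the polygonal approximation underlying separable programming: a single-variable function is replaced by linear interpolation over a few mesh points chosen around an initial guess, and the interpolation weights become the decision variables. The mesh and the function below are assumptions for illustration; the adjacency restrictions a full separable-programming code would impose on the weights are omitted.

```python
# Piecewise-linear (polygonal) approximation of a single-variable function over mesh points.
import numpy as np

def polygonal(fun, mesh):
    """Return a function of weights lam (intended: lam >= 0, sum lam = 1) approximating fun."""
    mesh = np.asarray(mesh, dtype=float)
    vals = np.array([fun(m) for m in mesh])
    def approx(lam):
        lam = np.asarray(lam, dtype=float)
        x = lam @ mesh                 # the point represented by the weights
        return x, lam @ vals           # the piecewise-linear value at that point
    return approx

# Mesh points chosen around an initial guess x ~ 1.3 (values assumed for illustration).
approx = polygonal(lambda t: t**2, mesh=[1.0, 1.2, 1.4, 1.6])
print(approx([0.0, 0.5, 0.5, 0.0]))   # x = 1.3, approximate value 1.70 vs exact 1.69
```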

Journal ArticleDOI
TL;DR: In this paper, necessary and sufficient conditions for a local minimum to be global are derived, and the main result is that every local minimum is global if, and only if, its level sets are lower-semicontinuous point-to-set mappings.
Abstract: In this paper, necessary and sufficient conditions for a local minimum to be global are derived. The main result is that a real function, defined on a subset of R^n, has the property that every local minimum is global if, and only if, its level sets are lower-semicontinuous point-to-set mappings.

Journal ArticleDOI
TL;DR: In this article, the problem of selecting the points in an (n×n) Vandermonde matrix so as to minimize the condition number of the matrix has been discussed, and numerical answers for 2 ≤ n ≤ 6 in the case of symmetric point configurations have been given.
Abstract: We discuss the problem of selecting the points in an (n×n) Vandermonde matrix so as to minimize the condition number of the matrix. We give numerical answers for 2 ≤ n ≤ 6 in the case of symmetric point configurations. We also consider points on the non-negative real line, and give numerical results for n = 2 and n = 3. For general n, the problem can be formulated as a nonlinear minimax problem with nonlinear constraints or, equivalently, as a nonlinear programming problem.
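
The entry above reduces a matrix-conditioning question to a nonlinear program; a tiny numerical sketch of that reduction, under assumed choices (n = 3, a symmetric node configuration parameterized by a single spread, and a derivative-free solver), looks like this:

```python
# Minimize the condition number of a 3x3 Vandermonde matrix over symmetric nodes (-a, 0, a).
import numpy as np
from scipy.optimize import minimize

n = 3

def cond_of_nodes(a):
    nodes = np.array([-a[0], 0.0, a[0]])
    V = np.vander(nodes, n)
    return np.linalg.cond(V)

res = minimize(cond_of_nodes, x0=[0.5], method="Nelder-Mead")
print(res.x, res.fun)   # the best spread a and the resulting condition number
```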

Journal ArticleDOI
TL;DR: A comprehensive engine/airframe screening methodology has been developed based on surface fitting and nonlinear optimization procedures, including a gradient-based method for nonlinear constrained minimization.
Abstract: A comprehensive engine/airframe screening methodology has been developed based on surface fitting and nonlinear optimization procedures. These procedures include the use of experimental design techniques, and a gradient-based method for nonlinear constrained minimization. The methodology has been programmed for use on the CDC 6600 computer and has been successfully demonstrated on extensive test cases. One test case involved selection of an optimum airplane design using performance data for only 28 designs. A total of 256 designs were required to locate this optimum graphically.

Journal ArticleDOI
TL;DR: In this article, the NIP problem is reformulated into a 0-1 linear programming (ZOLP) problem and a one-to-one correspondence is shown between this NIP and the ZOLP problem.
Abstract: Mathematical models for reliability of a redundant system with two classes of failure modes are usually formulated as a nonlinear integer programming (NIP) problem. This paper reformulates the NIP problem into a 0-1 linear programming (ZOLP) problem and a one-to-one correspondence is shown between this NIP problem and the ZOLP problem. A NIP example treated by Tillman is formulated into a ZOLP problem and optimal solutions, identical to Tillman's, are obtained by an implicit enumeration method. Calculating the new coefficients of the objective function and the constraints in the ZOLP is straightforward. There are not many constraints or variables in the proposed ZOLP. Consequently, the computation (CPU) time is less.


Journal ArticleDOI
TL;DR: A finite procedure for locating a global minimum of a problem whose objective and constraints are linear except for one nonlinear constraint, which is of the “reverse convex” variety; that is, the direction of the inequality is the opposite of that required for a convex constraint.
Abstract: This paper describes a finite procedure for locating a global minimum of a problem whose objective and constraints are linear except for one nonlinear constraint, which is of the “reverse convex” variety; that is, the direction of the inequality is the opposite of that required for a convex constraint. Budget constraints in which the cost functions are subject to economies of scale are typically of this form. An illustrative example of the procedure is provided.

Journal ArticleDOI
Richard A. Tapia1
TL;DR: An alternate Newton-like approach to mathematical programming problems with equality constraints is presented in which all linear systems are of order at most n and, when the Hessian of the Lagrangian at the solution is positive definite, are themselves positive definite.
Abstract: The usual approach to Newton's method for mathematical programming problems with equality constraints leads to the solution of linear systems of n + m equations in n + m unknowns, where n is the dimension of the space and m is the number of constraints. Moreover, these linear systems are never positive definite. It is our feeling that this approach is somewhat artificial, since in the unconstrained case the linear systems are very often positive definite. With this in mind, we present an alternate Newton-like approach for the constrained problem in which all the linear systems are of order less than or equal to n. Furthermore, when the Hessian of the Lagrangian at the solution is positive definite (a situation frequently occurring), all our systems will be positive definite. Hence, in all cases, our Newton-like method offers greater numerical stability. We demonstrate that the convergence properties of this Newton-like method are superior to those of the standard approach to Newton's method. The operation count for the new method using Gaussian elimination is of the same order as the operation count for the standard method. However, if the Hessian of the Lagrangian at the solution is positive definite and we use Cholesky decomposition, then the order of the operation count for the new method is half that for the standard approach to Newton's method. This theory is generalized to problems with both equality and inequality constraints.
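
A small numerical illustration of the contrast drawn in the abstract: the standard Newton step solves an indefinite system of order n + m, while a reduced (null-space) formulation of order n − m is positive definite whenever the Hessian of the Lagrangian is. The null-space construction below is a standard textbook device chosen for illustration, not necessarily the paper's exact reduction, and the data are assumed.

```python
# Full KKT system versus a reduced (null-space) system for one equality-constrained step.
import numpy as np

H = np.array([[4.0, 1.0], [1.0, 3.0]])    # Hessian of the Lagrangian (positive definite)
A = np.array([[1.0, 1.0]])                # Jacobian of the single equality constraint
g = np.array([1.0, -2.0])                 # gradient of the Lagrangian
c = np.array([0.5])                       # constraint residual

# (a) Full KKT system of order n + m = 3 (indefinite).
K = np.block([[H, A.T], [A, np.zeros((1, 1))]])
full = np.linalg.solve(K, -np.concatenate([g, c]))

# (b) Reduced approach: feasibility-restoring step plus a null-space component.
dx_p = A.T @ np.linalg.solve(A @ A.T, -c)          # least-norm step with A dx_p = -c
Z = np.array([[1.0], [-1.0]])                      # basis of the null space of A
Zt_H_Z = Z.T @ H @ Z                               # order n - m = 1, positive definite
dz = np.linalg.solve(Zt_H_Z, -Z.T @ (g + H @ dx_p))
dx = dx_p + Z @ dz
print(full[:2], dx)                                # the two computed steps coincide
```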

Journal ArticleDOI
P. Huard1
TL;DR: Two general nonlinear optimization algorithms generating a sequence of feasible solutions based on the concept of point-to-set mapping continuity are described and the results unify these apparently diverse approaches.
Abstract: Two general nonlinear optimization algorithms generating a sequence of feasible solutions are described. The justifications for their convergence are based on the concept of point-to-set mapping continuity. These two algorithms cover many conventional feasible solution methods. The convergence results unify these apparently diverse approaches.

Journal ArticleDOI
TL;DR: In this article, necessary and sufficient conditions are obtained for the existence of an optimal solution, and appropriate duality theorems are established for this problem, where the objective function includes the square root of a quadratic form.
Abstract: Duality relations for various classes of complex nonlinear programming problems have recently appeared in the literature. These problems are special cases of a complex programming problem whose objective function includes the square root of a quadratic form, and so may not be differentiable. For this problem, necessary and sufficient conditions are obtained for the existence of an optimal solution, and appropriate duality theorems are established.

Journal ArticleDOI
TL;DR: An effective algorithm is described for solving the general constrained parameter optimization problem and a rank-one optimization algorithm is developed that takes advantage of the special properties of the augmented performance index.
Abstract: An effective algorithm is described for solving the general constrained parameter optimization problem. The method is quasi-second-order and requires only function and gradient information. An exterior point penalty function method is used to transform the constrained problem into a sequence of unconstrained problems. The penalty weight r is chosen as a function of the point x such that the sequence of optimization problems is computationally easy. A rank-one optimization algorithm is developed that takes advantage of the special properties of the augmented performance index. The optimization algorithm accounts for the usual difficulties associated with discontinuous second derivatives of the augmented index. Finite convergence is exhibited for a quadratic performance index with linear constraints; accelerated convergence is demonstrated for nonquadratic indices and nonlinear constraints. A computer program has been written to implement the algorithm and its performance is illustrated in fourteen test problems.
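
A minimal sketch of the exterior-point penalty transformation mentioned above: the constrained problem is replaced by a sequence of unconstrained minimizations of an augmented index in which only constraint violations are charged. The quadratic-loss penalty, the weight schedule, and the toy problem are assumptions; the paper's rank-one algorithm and its adaptive choice of the weight r are not reproduced.

```python
# Exterior-point penalty: unconstrained minimizations with an increasing penalty weight.
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0]**2 + x[1]**2            # performance index
g = lambda x: 1.0 - x[0] - x[1]            # constraint g(x) <= 0, i.e. x1 + x2 >= 1

def augmented(x, r):
    return f(x) + r * max(0.0, g(x))**2    # only violations are penalized

x = np.array([0.0, 0.0])
for r in [1.0, 10.0, 100.0, 1000.0]:       # increasing penalty weights
    x = minimize(lambda x: augmented(x, r), x).x
print(x)   # approaches the constrained optimum (0.5, 0.5) from outside the feasible set
```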

01 Sep 1975
TL;DR: In this article, a theory and methods for analyzing sensitivity of the optimal value and optimal solution set to perturbations in problem data in nonlinear bounded optimization problems with discrete variables are presented.
Abstract: : Theory and methods are presented for analyzing sensitivity of the optimal value and optimal solution set to perturbations in problem data in nonlinear bounded optimization problems with discrete variables. Emphasis is given to studying behavior of the optimal value function. Theory is developed primarily for mixed integer programming (MIP) problems, where the domain is a subset of a Euclidean vector space.

Journal ArticleDOI
TL;DR: In this article, the cable behavior is described by a multilinear force-elongation law; a simple analytical representation of this type of law is used and is shown to be easily adjusted to allow for irreversible plastic strains.

Journal ArticleDOI
TL;DR: Under fairly general conditions, a nonlinear fractional program, where the function to be maximized has the form f(x)/g(x), is shown to be equivalent to a nonlinear program not involving fractions.
Abstract: Under fairly general conditions, a nonlinear fractional program, where the function to be maximized has the form f(x)/g(x), is shown to be equivalent to a nonlinear program not involving fractions. The latter program is not generally a convex program, but there is often a convex program equivalent to it, to which the known algorithms for convex programming may be applied. An application to duality of a fractional program is discussed.
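
The paper removes the fraction through an equivalence with a non-fractional program; as a related but different illustration (named plainly: Dinkelbach's parametric scheme, not the paper's transformation), the sketch below maximizes an assumed ratio f(x)/g(x) by repeatedly solving subproblems that contain no fraction.

```python
# Dinkelbach-style parametric scheme for max f(x)/g(x) on [0, 3] (problem data assumed).
from scipy.optimize import minimize_scalar

f = lambda x: -x**2 + 4 * x        # numerator (concave on the interval)
g = lambda x: x + 1.0              # denominator (positive on [0, 3])

q = 0.0
for _ in range(20):
    # Maximize f(x) - q g(x), a problem without fractions, over the feasible interval.
    res = minimize_scalar(lambda x: -(f(x) - q * g(x)), bounds=(0.0, 3.0), method="bounded")
    x = res.x
    q = f(x) / g(x)                # update the ratio estimate
print(x, q)                        # the maximizer of f/g on [0, 3] and its value
```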

Journal ArticleDOI
TL;DR: The application of a new algorithm for minimax optimization that uses to its advantage certain obvious properties of the minimax function, namely, that the discontinuities in the first derivatives can be characterized by projections is investigated.
Abstract: The application of a new algorithm for minimax optimization is investigated. Unlike most of the previously published algorithms the new algorithm uses to its advantage certain obvious properties of the minimax function, namely, that the discontinuities in the first derivatives can be characterized by projections. An N-section transmission-line transformer is used as a test problem.
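
For context on what a minimax problem looks like when posed as a nonlinear program, here is the standard bound-variable reformulation solved with an off-the-shelf NLP method; this is a generic device with assumed data, not the projection-based algorithm the paper investigates.

```python
# Minimax as an NLP: minimize an auxiliary bound t subject to f_i(x) <= t for every i.
import numpy as np
from scipy.optimize import minimize

funcs = [lambda x: (x[0] - 1.0)**2,
         lambda x: (x[0] + 1.0)**2,
         lambda x: 0.5 * x[0] + 1.0]

# Variables z = (x, t): minimize t subject to f_i(x) - t <= 0.
cons = [{"type": "ineq", "fun": (lambda z, fi=fi: z[1] - fi(z[:1]))} for fi in funcs]
res = minimize(lambda z: z[1], x0=[0.5, 5.0], constraints=cons, method="SLSQP")
print(res.x)   # x minimizing max_i f_i(x) (here x = 0) and the attained minimax value t = 1
```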

Journal ArticleDOI
TL;DR: In this paper, necessary and sufficient conditions for the existence of finite optimum solutions to programming problems with absolute-value objective functions subject to linear constraints are proved.
Abstract: This paper considers some programming problems with absolute-value (objective) functions subject to linear constraints. Necessary and sufficient conditions for the existence of finite optimum solutions to these problems are proved.
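
A small sketch of the usual device for the class of problems described above: each absolute-value term is split into a difference of nonnegative variables so the problem becomes an ordinary linear program. The data below are assumed for illustration.

```python
# Absolute-value objective via variable splitting, solved as a linear program.
from scipy.optimize import linprog

# minimize |x1 - 2| + |x2 + 1|  subject to  x1 + x2 >= 1,  x1, x2 >= 0.
# Introduce u_i, v_i >= 0 with x1 - 2 = u1 - v1 and x2 + 1 = u2 - v2; minimize u1+v1+u2+v2.
c = [0, 0, 1, 1, 1, 1]                               # variables: x1, x2, u1, v1, u2, v2
A_eq = [[1, 0, -1, 1, 0, 0],                         # x1 - u1 + v1 = 2
        [0, 1, 0, 0, -1, 1]]                         # x2 - u2 + v2 = -1
b_eq = [2, -1]
A_ub = [[-1, -1, 0, 0, 0, 0]]                        # -(x1 + x2) <= -1
b_ub = [-1]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 6)
print(res.x[:2], res.fun)                            # optimum x = (2, 0), objective value 1
```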

Journal ArticleDOI
TL;DR: The continuous nonlinear programming problem is formulated as an unconstrained minimax problem using the Bandler-Charalambous approach, and Dakin's branch-and-bound technique is used in conjunction with Fletcher's unconStrained minimization program to discretize the continuous solution.
Abstract: The problem of designing recursive digital filters with optimized word length coefficients to meet arbitrary, prescribed magnitude characteristics in the frequency domain is numerically investigated. The continuous nonlinear programming problem is formulated as an unconstrained minimax problem using the Bandler-Charalambous approach, and Dakin's branch-and-bound technique is used in conjunction with Fletcher's unconstrained minimization program to discretize the continuous solution. The objective function to be minimized is directly concerned with the word lengths of the coefficients, which are also introduced as variables.

Journal ArticleDOI
TL;DR: In this article, a technique for preliminary structural design is developed, based upon the logic used by practicing designers, by considering only the conditions of static equilibrium and stress admissibility, a linear problem results that can be solved by ordinary linear programming.
Abstract: A technique for preliminary structural design is developed, based upon the logic used by practicing designers. By considering only the conditions of static equilibrium and stress admissibility, a linear problem results that can be solved by ordinary linear programming. The solution to this problem gives an initial design that is generally close to the final optimum solution. An exact elastic solution satisfying compatibility as well as equilibrium and stress admissibility can be found by subsequent nonlinear optimization based upon a flexibility approach. Computer results presented for steel grillage design problems with multiple local optima indicate that the proposed design approach can increase significantly the likelihood of finding the global optimum solution, as opposed to other optimization methods.