
Showing papers on "Quadratically constrained quadratic program published in 1987"


Book
01 Oct 1987
TL;DR: This book surveys methods for solving nonconvex quadratic programming problems, including enumerative, cutting plane, branch and bound, and bilinear programming methods, together with test problems for global optimization algorithms.
Abstract: Convex sets and functions.- Optimality conditions in nonlinear programming.- Combinatorial optimization problems that can be formulated as nonconvex quadratic problems.- Enumerative methods in nonconvex programming.- Cutting plane methods.- Branch and bound methods.- Bilinear programming methods for nonconvex quadratic problems.- Large scale problems.- Global minimization of indefinite quadratic problems.- Test problems for global nonconvex quadratic programming algorithms.

472 citations


Journal ArticleDOI
TL;DR: In this article, the authors extend the framework of minimum discrimination information estimation (MDI) techniques to include the quadratic and linear inequality constrained problems, and demonstrate a new solution procedure for an important problem in actuarial science.
Abstract: In this paper, we extend the framework of minimum discrimination information estimation (MDI) techniques to include the quadratic and linear inequality constrained problems. Duality theory and computational aspects of the technique are exposed, and the methodology is illustrated by a new solution procedure for an important problem in actuarial science.

21 citations


Journal ArticleDOI
TL;DR: In this article, a solution procedure for linear programs with one convex quadratic constraint is suggested; the method finds an optimal solution in finitely many iterations.
Abstract: A solution procedure for linear programs with one convex quadratic constraint is suggested. The method finds an optimal solution in finitely many iterations.

14 citations
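The problem class above — a linear objective with a single convex quadratic constraint — can be illustrated on a small instance. The sketch below solves one such instance with a general NLP solver from scipy for reference; it is not the paper's finite iterative method, and the particular numbers are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative instance of the problem class (not the paper's algorithm):
#   minimize c^T x   subject to   x^T Q x <= r,  x >= 0.
c = np.array([-1.0, -2.0])          # linear objective
Q = np.eye(2)                       # convex (PSD) constraint matrix
r = 1.0                             # x^T Q x <= 1: the unit disk

res = minimize(
    lambda x: c @ x,
    x0=np.zeros(2),
    jac=lambda x: c,
    constraints=[{"type": "ineq",
                  "fun": lambda x: r - x @ Q @ x,
                  "jac": lambda x: -2.0 * Q @ x}],
    bounds=[(0, None), (0, None)],
    method="SLSQP",
)
# The optimum lies on the disk boundary in the direction of -c,
# i.e. x* = -c/||c|| (componentwise nonnegative here).
print(res.x, res.fun)
```

Because the quadratic constraint is convex, the problem is a convex program and any KKT point is globally optimal, which is what makes a finitely terminating specialized method plausible.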


Journal ArticleDOI
TL;DR: This problem is solved by applying Lagrangian relaxation to the convex quadratic constraints to find near-optimal solutions of problems with up to 391 variables and 579 constraints.
Abstract: Minimax optimal design of sonar transducer arrays can be formulated as a nonlinear program with many convex quadratic constraints and a nonconvex quadratic efficiency constraint. The variables of this problem are a scaling and phase shift applied to the output of each sensor. This problem is solved by applying Lagrangian relaxation to the convex quadratic constraints. Extensive computational experience shows that this approach can efficiently find near-optimal solutions of problems with up to 391 variables and 579 constraints.

11 citations
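The Lagrangian relaxation idea used above can be sketched on a toy problem with one convex quadratic constraint (the paper relaxes many such constraints simultaneously). All data below are hypothetical; the inner minimization happens to have a closed form here, and the dual multiplier is updated by projected subgradient ascent.

```python
import numpy as np

# Toy Lagrangian relaxation of one convex quadratic constraint:
#   minimize ||x - a||^2   subject to   ||x||^2 <= 1.
# Dualizing with multiplier lam >= 0 gives the Lagrangian
#   ||x - a||^2 + lam * (||x||^2 - 1),
# whose inner minimizer is x(lam) = a / (1 + lam).
a = np.array([2.0, 1.0])            # hypothetical point outside the unit ball
lam = 0.0
for _ in range(500):
    x = a / (1.0 + lam)             # closed-form inner minimization
    g = x @ x - 1.0                 # dual subgradient = constraint violation
    lam = max(0.0, lam + 0.5 * g)   # projected subgradient ascent step

# With a strictly feasible convex problem there is no duality gap, so the
# dual iterates recover the projection of a onto the unit ball.
print(x, lam)
```

The same mechanism scales to many relaxed constraints: each multiplier is updated from its own constraint violation, which is what makes near-optimal solutions of large instances tractable.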



Journal ArticleDOI
TL;DR: In this paper, a new computational criterion is proposed to check whether the solution of the perturbed quadratic program provides the least 2-norm solution for the original linear program.
Abstract: By properly perturbing a linear program to a separable quadratic program it is possible to solve the latter in its dual variable space by iterative techniques such as sparsity-preserving SOR (successive overrelaxation) methods. In this way large sparse linear programs can be handled. In this paper we give a new computational criterion to check whether the solution of the perturbed quadratic program provides the least 2-norm solution of the original linear program. This criterion improves on the criterion proposed in an earlier paper. We also describe an algorithm for solving linear programs which is based on the SOR methods. The main property of this algorithm is that, under mild assumptions, it finds the least 2-norm solution of a linear program in a finite number of iterations.

9 citations
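The perturbation idea can be seen on a tiny example: an LP whose optimal face is a whole edge, where adding a small quadratic term singles out the least 2-norm optimum. The sketch below uses a generic solver rather than the dual-space SOR iteration of the paper, and the instance is hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# The LP  min x1 + x2  s.t.  x1 + x2 = 1, x >= 0  has an entire edge of
# optimal solutions; perturbing the objective by (eps/2)||x||^2 selects
# the least 2-norm optimum (0.5, 0.5) among them.
c = np.array([1.0, 1.0])
eps = 0.1                            # any sufficiently small eps > 0 works here

res = minimize(
    lambda x: c @ x + 0.5 * eps * (x @ x),
    x0=np.array([1.0, 0.0]),         # a vertex solution of the unperturbed LP
    jac=lambda x: c + eps * x,
    constraints=[{"type": "eq", "fun": lambda x: x.sum() - 1.0}],
    bounds=[(0, None), (0, None)],
    method="SLSQP",
)
print(res.x)   # close to the least 2-norm LP solution (0.5, 0.5)
```

The criterion the paper proposes answers the practical question this example glosses over: how to verify, computationally, that a given eps was indeed small enough.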


01 Jun 1987
TL;DR: A primal-dual interior point algorithm for convex quadratic programming that requires a total of O(n³L) arithmetic operations; each iteration finds an approximate Newton direction for the Kuhn-Tucker system of equations characterizing a solution of the logarithmic barrier function problem.
Abstract: The authors describe a primal-dual interior point algorithm for convex quadratic programming problems which requires a total of O(n³L) arithmetic operations. Each iteration updates a penalty parameter and finds an approximate Newton direction associated with the Kuhn-Tucker system of equations which characterizes a solution of the logarithmic barrier function problem. This direction is then used to find the next iterate. The algorithm is based on the path following idea. The total number of iterations is shown to be of the order of O(√n L). Keywords: Interior-point methods; Convex quadratic programming; Karmarkar's algorithm; Polynomial-time algorithms; Barrier function; Path following.

6 citations
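For a convex QP written in the standard form min ½xᵀQx + cᵀx subject to Ax = b, x ≥ 0 (notation assumed here, not taken from the abstract), the logarithmic barrier subproblem and the Kuhn-Tucker system that each Newton step targets can be sketched as:

```latex
\begin{aligned}
&\min_{x}\ \tfrac{1}{2}x^{\top}Qx + c^{\top}x - \mu\sum_{i=1}^{n}\log x_i
\quad\text{s.t.}\quad Ax = b,\\
&\text{first-order conditions, with } X=\operatorname{diag}(x),\;
S=\operatorname{diag}(s),\; s = \mu X^{-1}e:\\
&\qquad Qx + c - A^{\top}y - s = 0,\qquad Ax = b,\qquad XSe = \mu e,
\qquad x,s > 0.
\end{aligned}
```

Each iteration decreases μ and takes one approximate Newton step on this system; keeping the iterates near the central path XSe = μe is what yields the O(√n L) iteration bound quoted in the abstract.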


Journal ArticleDOI
TL;DR: In this article, Dantzig's method is shown to be applicable to solving any convex quadratic programming problem of the type min { ½(Dx, x) + (d, x) | Ax = b, x ≥ 0 }.
Abstract: Dantzig's method is shown to be applicable to solving any problem of convex quadratic programming of the type min { ½(Dx, x) + (d, x) | Ax = b, x ≥ 0 }. A refined algorithm, using Bland's idea, is shown to be finite.
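A small instance of the stated problem class, min ½(Dx, x) + (d, x) subject to Ax = b, x ≥ 0, is sketched below with a generic solver (not Dantzig's pivoting method itself); the data are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Convex QP instance:  min (1/2)(Dx, x) + (d, x)  s.t.  Ax = b, x >= 0.
D = np.diag([2.0, 2.0])              # positive definite => convex QP
d = np.array([-2.0, -4.0])           # unconstrained minimizer is (1, 2)
A = np.array([[1.0, 1.0]])
b = np.array([2.0])

res = minimize(
    lambda x: 0.5 * x @ D @ x + d @ x,
    x0=np.array([2.0, 0.0]),
    jac=lambda x: D @ x + d,
    constraints=[{"type": "eq", "fun": lambda x: A @ x - b}],
    bounds=[(0, None), (0, None)],
    method="SLSQP",
)
# Since D = 2I, this is the projection of (1, 2) onto x1 + x2 = 2.
print(res.x)
```

A pivoting method like Dantzig's works on the KKT system of this QP directly, exchanging basic and nonbasic variables; Bland's rule is what rules out cycling and makes the refined algorithm finite.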

Journal ArticleDOI
Shucheng Liu
TL;DR: It is demonstrated that the general quadratic programming algorithms of Fletcher and of Keller are equivalent in the sense that they generate identical sequences of points.
Abstract: In this paper, we demonstrate that the general quadratic programming algorithms of Fletcher and of Keller are equivalent in the sense that they generate identical sequences of points. This equivalence of the two algorithms extends earlier results for convex quadratic programming due to Best. Several computational results have verified this equivalence.

Journal ArticleDOI
TL;DR: A regularization of Wilson's method is proposed that guarantees global convergence by modifying the objective function of the quadratic subproblems; the method then enters a local phase characterized by the descent of a specific measure of nonoptimality.
Abstract: A regularization of Wilson's method is proposed that guarantees global convergence. During the global phase, descent of a penalty function is obtained by modifying the objective function of the quadratic subproblem. In this way the method reaches its local phase, which is characterized by the descent of a specific measure of nonoptimality. In the final phase the proposed method becomes identical to the quadratically convergent Wilson method.