
Showing papers in "Siam Journal on Optimization in 1998"


Journal ArticleDOI
TL;DR: This paper presents convergence properties of the Nelder--Mead algorithm applied to strictly convex functions in dimensions 1 and 2, and proves convergence to a minimizer for dimension 1, and various limited convergence results for dimension 2.
Abstract: The Nelder--Mead simplex algorithm, first published in 1965, is an enormously popular direct search method for multidimensional unconstrained minimization. Despite its widespread use, essentially no theoretical results have been proved explicitly for the Nelder--Mead algorithm. This paper presents convergence properties of the Nelder--Mead algorithm applied to strictly convex functions in dimensions 1 and 2. We prove convergence to a minimizer for dimension 1, and various limited convergence results for dimension 2. A counterexample of McKinnon gives a family of strictly convex functions in two dimensions and a set of initial conditions for which the Nelder--Mead algorithm converges to a nonminimizer. It is not yet known whether the Nelder--Mead method can be proved to converge to a minimizer for a more specialized class of convex functions in two dimensions.
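
The Nelder--Mead method discussed above is available in SciPy; a minimal sketch on an invented strictly convex function in dimension 2, the setting where the paper's limited convergence results apply:

```python
import numpy as np
from scipy.optimize import minimize

# A strictly convex function in dimension 2 (invented for illustration).
f = lambda x: x[0]**2 + 2.0 * x[1]**2

# Nelder--Mead is a direct search method: it uses only function values,
# no gradients, maintaining and reshaping a simplex of 3 points in 2-D.
res = minimize(f, x0=np.array([1.0, 1.0]), method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-10})
print(res.x)  # close to the minimizer (0, 0)
```

Note that, per the McKinnon counterexample cited above, such convergence to a minimizer is not guaranteed for every strictly convex function in two dimensions.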

7,141 citations


Journal ArticleDOI
TL;DR: In this paper, an alternate method for finding several Pareto optimal points for a general nonlinear multicriteria optimization problem is proposed, which can handle more than two objectives while retaining the computational efficiency of continuation-type algorithms.
Abstract: This paper proposes an alternate method for finding several Pareto optimal points for a general nonlinear multicriteria optimization problem. Such points collectively capture the trade-off among the various conflicting objectives. It is proved that this method is independent of the relative scales of the functions and is successful in producing an evenly distributed set of points in the Pareto set given an evenly distributed set of parameters, a property which the popular method of minimizing weighted combinations of objective functions lacks. Further, this method can handle more than two objectives while retaining the computational efficiency of continuation-type algorithms. This is an improvement over continuation techniques for tracing the trade-off curve since continuation strategies cannot easily be extended to handle more than two objectives.
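
The scale sensitivity of the weighted-sum approach criticized above can be seen in a tiny invented bi-objective example: with badly scaled objectives, evenly spaced weights yield a badly clustered set of Pareto points.

```python
import numpy as np

# Bi-objective problem in one variable with badly scaled objectives:
#   f1(x) = x**2,   f2(x) = c * (x - 1)**2,   c = 1000.
# Minimizing w*f1 + (1-w)*f2 has the closed-form minimizer
#   x*(w) = c*(1-w) / (w + c*(1-w)).
c = 1000.0
w = np.linspace(0.05, 0.95, 10)           # evenly spaced weights
x_star = c * (1 - w) / (w + c * (1 - w))  # resulting Pareto points

# Despite evenly spaced weights, the points all cluster near x = 1:
print(x_star.min(), x_star.max())
```

A scale-independent method of the kind the paper proposes would instead return points spread evenly along the Pareto set.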

2,094 citations


Journal ArticleDOI
TL;DR: This paper shows how to formulate sufficient conditions for a robust solution to exist as SDPs, and provides sufficient conditions which guarantee that the robust solution is unique and continuous (Hölder-stable) with respect to the unperturbed problem's data.
Abstract: In this paper we consider semidefinite programs (SDPs) whose data depend on some unknown but bounded perturbation parameters. We seek "robust" solutions to such programs, that is, solutions which minimize the (worst-case) objective while satisfying the constraints for every possible value of parameters within the given bounds. Assuming the data matrices are rational functions of the perturbation parameters, we show how to formulate sufficient conditions for a robust solution to exist as SDPs. When the perturbation is "full," our conditions are necessary and sufficient. In this case, we provide sufficient conditions which guarantee that the robust solution is unique and continuous (Hölder-stable) with respect to the unperturbed problem's data. The approach can thus be used to regularize ill-conditioned SDPs. We illustrate our results with examples taken from linear programming, maximum norm minimization, polynomial interpolation, and integer programming.

985 citations


Journal ArticleDOI
TL;DR: It is shown that in the nonconvex case, the DCA converges to the global solution of the trust-region problem, using only matrix-vector products and requiring at most 2m+2 restarts, where m is the number of distinct negative eigenvalues of the coefficient matrix that defines the problem.
Abstract: This paper is devoted to difference of convex functions (d.c.) optimization: d. c. duality, local and global optimality conditions in d. c. programming, the d. c. algorithm (DCA), and its application to solving the trust-region problem. The DCA is an iterative method that is quite different from well-known related algorithms. Thanks to the particular structure of the trust-region problem, the DCA is very simple (requiring only matrix-vector products) and, in practice, converges to the global solution. The inexpensive implicitly restarted Lanczos method of Sorensen is used to check the optimality of solutions provided by the DCA. When a nonglobal solution is found, a simple numerical procedure is introduced both to find a feasible point having a smaller objective value and to restart the DCA at this point. It is shown that in the nonconvex case, the DCA converges to the global solution of the trust-region problem, using only matrix-vector products and requiring at most 2m+2 restarts, where m is the number of distinct negative eigenvalues of the coefficient matrix that defines the problem. Numerical simulations establish the robustness and efficiency of the DCA compared to standard related methods, especially for large-scale problems.

544 citations


Journal ArticleDOI
TL;DR: This paper presents efficiency estimates for several symmetric primal-dual methods that can loosely be classified as path-following methods for convex programming problems expressed in conic form when the cone and its associated barrier are self-scaled.
Abstract: In this paper we continue the development of a theoretical foundation for efficient primal-dual interior-point algorithms for convex programming problems expressed in conic form, when the cone and its associated barrier are self-scaled (see Yu. E. Nesterov and M. J. Todd, Math. Oper. Res., 22 (1997), pp. 1--42). The class of problems under consideration includes linear programming, semidefinite programming, and convex quadratically constrained, quadratic programming problems. For such problems we introduce a new definition of affine-scaling and centering directions. We present efficiency estimates for several symmetric primal-dual methods that can loosely be classified as path-following methods. Because of the special properties of these cones and barriers, two of our algorithms can take steps that typically go a large fraction of the way to the boundary of the feasible region, rather than being confined to a ball of unit radius in the local norm defined by the Hessian of the barrier.

532 citations


Journal ArticleDOI
TL;DR: Compared with the other methods considered, the XZ+ZX method is more robust in its ability to step close to the boundary, converges more rapidly, and achieves higher accuracy; practical aspects, including Mehrotra predictor-corrector variants and issues of numerical stability, are also discussed.
Abstract: Primal-dual interior-point path-following methods for semidefinite programming are considered. Several variants are discussed, based on Newton's method applied to three equations: primal feasibility, dual feasibility, and some form of centering condition. The focus is on three such algorithms, called the XZ, XZ+ZX, and Q methods. For the XZ+ZX and Q algorithms, the Newton system is well defined and its Jacobian is nonsingular at the solution, under nondegeneracy assumptions. The associated Schur complement matrix has an unbounded condition number on the central path under the nondegeneracy assumptions and an additional rank assumption. Practical aspects are discussed, including Mehrotra predictor-corrector variants and issues of numerical stability. Compared to the other methods considered, the XZ+ZX method is more robust with respect to its ability to step close to the boundary, converges more rapidly, and achieves higher accuracy.

488 citations


Journal ArticleDOI
TL;DR: This paper analyzes the behavior of the Nelder--Mead simplex method for a family of examples which cause the method to converge to a nonstationary point and shows that this behavior cannot occur for functions with more than three continuous derivatives.
Abstract: This paper analyzes the behavior of the Nelder--Mead simplex method for a family of examples which cause the method to converge to a nonstationary point. All the examples use continuous functions of two variables. The family of functions contains strictly convex functions with up to three continuous derivatives. In all the examples the method repeatedly applies the inside contraction step with the best vertex remaining fixed. The simplices tend to a straight line which is orthogonal to the steepest descent direction. It is shown that this behavior cannot occur for functions with more than three continuous derivatives. The stability of the examples is analyzed.

473 citations


Journal ArticleDOI
TL;DR: How the search directions for the Nesterov--Todd (NT) method can be computed efficiently and how they can be viewed as Newton directions are discussed and demonstrated.
Abstract: We study different choices of search direction for primal-dual interior-point methods for semidefinite programming problems. One particular choice we consider comes from a specialization of a class of algorithms developed by Nesterov and Todd for certain convex programming problems. We discuss how the search directions for the Nesterov--Todd (NT) method can be computed efficiently and demonstrate how they can be viewed as Newton directions. This last observation also leads to convenient computation of accelerated steps, using the Mehrotra predictor-corrector approach, in the NT framework. We also provide an analytical and numerical comparison of several methods using different search directions, and suggest that the method using the NT direction is more robust than alternative methods.

279 citations


Journal ArticleDOI
TL;DR: A concise derivation of the key equalities and inequalities for complexity analysis along the exact line used in linear programming (LP) is given, producing basic relationships that have compact forms almost identical to their counterparts in LP.
Abstract: This work concerns primal--dual interior-point methods for semidefinite programming (SDP) that use a search direction originally proposed by Helmberg et al. [SIAM J. Optim., 6 (1996), pp. 342--361] and Kojima, Shindoh, and Hara [SIAM J. Optim., 7 (1997), pp. 86--125] and recently rediscovered by Monteiro [SIAM J. Optim., 7 (1997), pp. 663--678] in a more explicit form. In analyzing these methods, a number of basic equalities and inequalities were developed in [Kojima, Shindoh, and Hara] and also in [Monteiro] through different means and in different forms. In this paper, we give a concise derivation of the key equalities and inequalities for complexity analysis along the exact line used in linear programming (LP), producing basic relationships that have compact forms almost identical to their counterparts in LP. We also introduce a new formulation of the central path and variable-metric measures of centrality. These results provide convenient tools for deriving polynomiality results for primal--dual algorithms extended from LP to SDP using the aforementioned and related search directions. We present examples of such extensions, including the long-step infeasible-interior-point algorithm of Zhang [SIAM J. Optim., 4 (1994), pp. 208--227].

259 citations


Journal ArticleDOI
TL;DR: It is shown how lower bounds can be computed efficiently during the branch-and-bound process to reduce the number of quadratic programming (QP) problems that have to be solved.
Abstract: The solution of convex mixed-integer quadratic programming (MIQP) problems with a general branch-and-bound framework is considered. It is shown how lower bounds can be computed efficiently during the branch-and-bound process. Improved lower bounds such as the ones derived in this paper can reduce the number of quadratic programming (QP) problems that have to be solved. The branch-and-bound approach is also shown to be superior to other approaches in solving MIQP problems. Numerical experience is presented which supports these conclusions.
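
The generic branch-and-bound framework (not the paper's improved lower bounds) can be sketched on a toy one-variable integer quadratic, where each node's lower bound comes from a trivially solvable continuous QP relaxation:

```python
import math

def solve_relaxation(lo, hi, xc=2.6):
    """Continuous relaxation of min (x - xc)^2 over [lo, hi]:
    clip the unconstrained minimizer to the box (a trivial 1-D 'QP')."""
    x = min(max(xc, lo), hi)
    return x, (x - xc) ** 2

def branch_and_bound(lo=-10, hi=10):
    best_x, best_val = None, math.inf
    stack = [(lo, hi)]
    while stack:
        a, b = stack.pop()
        if a > b:
            continue
        x, lb = solve_relaxation(a, b)
        if lb >= best_val:               # prune: bound no better than incumbent
            continue
        if abs(x - round(x)) < 1e-9:     # relaxation is integral: new incumbent
            best_x, best_val = round(x), lb
        else:                            # branch on the fractional variable
            stack.append((a, math.floor(x)))
            stack.append((math.ceil(x), b))
    return best_x, best_val

x, v = branch_and_bound()
print(x, v)  # integer minimizer of (x - 2.6)^2 is x = 3
```

Tighter lower bounds, of the kind the paper derives, prune more nodes and hence reduce the number of relaxations that must be solved.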

216 citations


Journal ArticleDOI
TL;DR: A new technique is presented which identifies active constraints in a neighborhood of a solution and which requires neither complementary slackness nor uniqueness of the multipliers.
Abstract: We consider nonlinear programs with inequality constraints, and we focus on the problem of identifying those constraints which will be active at an isolated local solution. The correct identification of active constraints is important from both a theoretical and a practical point of view. Such an identification removes the combinatorial aspect of the problem and locally reduces the inequality constrained minimization problem to an equality constrained problem which can be more easily dealt with. We present a new technique which identifies active constraints in a neighborhood of a solution and which requires neither complementary slackness nor uniqueness of the multipliers. We also present extensions to variational inequalities and numerical examples illustrating the identification technique.

Journal ArticleDOI
TL;DR: This paper concerns large-scale general (nonconvex) nonlinear programming when first and second derivatives of the objective and constraint functions are available, and shows how a method suitable for large problems can be obtained.
Abstract: This paper concerns large-scale general (nonconvex) nonlinear programming when first and second derivatives of the objective and constraint functions are available. A method is proposed that is based on finding an approximate solution of a sequence of unconstrained subproblems parameterized by a scalar parameter. The objective function of each unconstrained subproblem is an augmented penalty-barrier function that involves both primal and dual variables. Each subproblem is solved with a modified Newton method that generates search directions from a primal-dual system similar to that proposed for interior methods. The augmented penalty-barrier function may be interpreted as a merit function for values of the primal and dual variables. An inertia-controlling symmetric indefinite factorization is used to provide descent directions and directions of negative curvature for the augmented penalty-barrier merit function. A method suitable for large problems can be obtained by providing a version of this factorization that will treat large sparse indefinite systems.

Journal ArticleDOI
TL;DR: An incremental gradient method with momentum term for minimizing the sum of continuously differentiable functions is considered, which uses a new adaptive stepsize rule that decreases the stepsize whenever sufficient progress is not made.
Abstract: We consider an incremental gradient method with momentum term for minimizing the sum of continuously differentiable functions. This method uses a new adaptive stepsize rule that decreases the stepsize whenever sufficient progress is not made. We show that if the gradients of the functions are bounded and Lipschitz continuous over a certain level set, then every cluster point of the iterates generated by the method is a stationary point. In addition, if the gradient of the functions have a certain growth property, then the method is either linearly convergent in some sense or the stepsizes are bounded away from zero. The new stepsize rule is much in the spirit of heuristic learning rules used in practice for training neural networks via backpropagation. As such, the new stepsize rule may suggest improvements on existing learning rules. Finally, extension of the method and the convergence results to constrained minimization is discussed, as are some implementation issues and numerical experience.

Journal ArticleDOI
TL;DR: The classical condition of a positive-definite Hessian in smooth problems without constraints is found to have an exact counterpart much more broadly in the positivity of a certain generalized Hessian mapping.
Abstract: The behavior of a minimizing point when an objective function is tilted by adding a small linear term is studied from the perspective of second-order conditions for local optimality. The classical condition of a positive-definite Hessian in smooth problems without constraints is found to have an exact counterpart much more broadly in the positivity of a certain generalized Hessian mapping. This fully characterizes the case where tilt perturbations cause the minimizing point to shift in a Lipschitzian manner.

Journal ArticleDOI
TL;DR: It is proved that the sequence converges (weakly) if and only if the problem has solutions, in which case the weak limit is a solution; if the problem does not have solutions, then the sequence is unbounded.
Abstract: We consider a generalized proximal point method for solving variational inequality problems with monotone operators in a Hilbert space. It differs from the classical proximal point method (as discussed by Rockafellar for the problem of finding zeroes of monotone operators) in the use of generalized distances, called Bregman distances, instead of the Euclidean one. These distances play not only a regularization role but also a penalization one, forcing the sequence generated by the method to remain in the interior of the feasible set so that the method becomes an interior point one. Under appropriate assumptions on the Bregman distance and the monotone operator we prove that the sequence converges (weakly) if and only if the problem has solutions, in which case the weak limit is a solution. If the problem does not have solutions, then the sequence is unbounded. We extend similar previous results for the proximal point method with Bregman distances which dealt only with the finite dimensional case and which applied only to convex optimization problems or to finding zeroes of monotone operators, which are particular cases of variational inequality problems.

Journal ArticleDOI
TL;DR: This paper describes a software implementation of Byrd and Omojokun's trust region algorithm for solving nonlinear equality constrained optimization problems and provides the user with a variety of linear algebra techniques for solving the subproblems occurring in the algorithm.
Abstract: This paper describes a software implementation of Byrd and Omojokun's trust region algorithm for solving nonlinear equality constrained optimization problems. The code is designed for the efficient solution of large problems and provides the user with a variety of linear algebra techniques for solving the subproblems occurring in the algorithm. Second derivative information can be used, but when it is not available, limited memory quasi-Newton approximations are made. The performance of the code is studied using a set of difficult test problems from the CUTE collection.

Journal ArticleDOI
TL;DR: A primal-dual infeasible-interior-point path-following algorithm for solving semidefinite programming (SDP) problems and a sufficient condition for the superlinear convergence of the algorithm is proposed.
Abstract: A primal-dual infeasible-interior-point path-following algorithm is proposed for solving semidefinite programming (SDP) problems. If the problem has a solution, then the algorithm is globally convergent. If the starting point is feasible or close to being feasible, the algorithm finds an optimal solution in at most $O(\sqrt{n}L)$ iterations, where n is the size of the problem and L is the logarithm of the ratio of the initial error and the tolerance. If the starting point is large enough, then the algorithm terminates in at most O(nL) steps either by finding a solution or by determining that the primal-dual problem has no solution of norm less than a given number. Moreover, we propose a sufficient condition for the superlinear convergence of the algorithm. In addition, we give two special cases of SDP for which the algorithm is quadratically convergent.

Journal ArticleDOI
TL;DR: It is shown that ill-conditioning in the exact condensed matrix closely resembles that known for the primal barrier Hessian, and the influence of cancellation in the computed constraints of a primal-dual method is examined.
Abstract: Ill-conditioning has long been regarded as a plague on interior methods, but its damaging effects have rarely been documented. In fact, implementors of interior methods who ignore warnings about the dire consequences of ill-conditioning usually manage to compute accurate solutions. We offer some insight into this seeming contradiction by analyzing ill-conditioning within a primal-dual method in which the full, usually well-conditioned primal-dual matrix is transformed to a "condensed," inherently ill-conditioned matrix Mpd. We show that ill-conditioning in the exact condensed matrix closely resembles that known for the primal barrier Hessian, and then examine the influence of cancellation in the computed constraints. Using the structure of Mpd, various bounds are obtained on the absolute accuracy of the computed primal-dual steps. Without cancellation, the portion of the computed x step in the small space of Mpd (a subspace close to the null space of the Jacobian of the active constraints) has an absolute error bound comparable to machine precision, and its large-space component has a much smaller error bound. With cancellation (the usual case), the absolute error bounds for both the small- and large-space components of the computed x step are comparable to machine precision. In either case, the absolute error bound for the computed multiplier steps associated with active constraints is comparable to machine precision; the computed multiplier steps for inactive constraints, although converging to zero, retain (approximately) full relative precision. Because of errors in forming the right-hand side, the absolute error in the computed solution of the full, well-conditioned primal-dual system is shown to be comparable to machine precision. Thus, under quite general conditions, ill-conditioning in Mpd does not noticeably impair the accuracy of the computed primal-dual steps. (A similar analysis applies to search directions obtained by direct solution of the primal Newton equations.)

Journal ArticleDOI
TL;DR: A new algorithm for large-scale nonlinear programs with box constraints that possesses global and superlinear convergence properties under standard assumptions and a new technique for generating test problems with known characteristics is introduced.
Abstract: A new algorithm for large-scale nonlinear programs with box constraints is introduced. The algorithm is based on an efficient identification technique of the active set at the solution and on a nonmonotone stabilization technique. It possesses global and superlinear convergence properties under standard assumptions. A new technique for generating test problems with known characteristics is also introduced. The implementation of the method is described along with computational results for large-scale problems.

Journal ArticleDOI
TL;DR: This paper establishes the superlinear convergence of a symmetric primal-dual path following algorithm for semidefinite programming (SDP) under the assumptions that the semidefinite program has a strictly complementary primal-dual optimal solution and that the size of the central path neighborhood tends to zero.
Abstract: This paper establishes the superlinear convergence of a symmetric primal-dual path following algorithm for semidefinite programming (SDP) under the assumptions that the semidefinite program has a strictly complementary primal-dual optimal solution and that the size of the central path neighborhood tends to zero. The interior point algorithm considered here closely resembles the Mizuno--Todd--Ye predictor-corrector method for linear programming which is known to be quadratically convergent. It is shown that when the iterates are well centered, the duality gap is reduced superlinearly after each predictor step. Indeed, if each predictor step is succeeded by r consecutive corrector steps then the predictor reduces the duality gap superlinearly with order $2/(1+2^{-r})$. The proof relies on a careful analysis of the central path for SDP. It is shown that under the strict complementarity assumption, the primal-dual central path converges to the analytic center of the primal-dual optimal solution set, and the distance from any point on the central path to this analytic center is bounded by the duality gap.

Journal ArticleDOI
TL;DR: Algorithms for solving problems to minimize F1 subject to $F_2\le K$ and to minimize F2 subject to $F_1\le K$, and for the construction of the Pareto set and the Pareto set $\epsilon$-approximation for the corresponding bicriterion problems, are presented.
Abstract: A bicriterion problem of scheduling jobs on a single machine is studied. The processing time of each job is a linear decreasing function of the amount of a common discrete resource allocated to the job. A solution is specified by a sequence of the jobs and a resource allocation. The quality of a solution is measured by two criteria, F1 and F2. The first criterion is the maximal or total (weighted) resource consumption, and the second criterion is a regular scheduling criterion depending on the job completion times. Both criteria have to be minimized. General schemes for the construction of the Pareto set and the Pareto set $\epsilon$-approximation are presented. Computational complexities of problems to minimize F1 subject to $F_2\le K$ and to minimize F2 subject to $F_1\le K$, where K is any number, are studied for various functions F1 and F2. Algorithms for solving these problems and for the construction of the Pareto set and the Pareto set $\epsilon$-approximation for the corresponding bicriterion problems are presented.

Journal ArticleDOI
TL;DR: This article presents a primal-dual predictor-corrector interior-point method for solving quadratically constrained convex optimization problems that arise from truss design problems and illustrates the surprising efficiency of the method.
Abstract: This article presents a primal-dual predictor-corrector interior-point method for solving quadratically constrained convex optimization problems that arise from truss design problems. We investigate certain special features of the problem, discuss fundamental differences of interior-point methods for linearly and nonlinearly constrained problems, extend Mehrotra's predictor-corrector strategy to nonlinear programs, and establish convergence of a long step method. Numerical experiments on large scale problems illustrate the surprising efficiency of the method.

Journal ArticleDOI
TL;DR: This work considers a class of trajectories that are similar to the central path but can be constructed to pass through any given interior feasible or infeasible point, and study their convergence.
Abstract: In this paper we study interior point trajectories in semidefinite programming (SDP) including the central path of an SDP. This work was inspired by the seminal work of Megiddo on linear programming trajectories [ Progress in Math. Programming: Interior-Point Algorithms and Related Methods, N. Megiddo, ed., Springer-Verlag, Berlin, 1989, pp. 131--158]. Under an assumption of primal and dual strict feasibility, we show that the primal and dual central paths exist and converge to the analytic centers of the optimal faces of, respectively, the primal and the dual problems. We consider a class of trajectories that are similar to the central path but can be constructed to pass through any given interior feasible or infeasible point, and study their convergence. Finally, we study the derivatives of these trajectories and their convergence.

Journal ArticleDOI
TL;DR: It is proved that the solution set of the Huber M-estimator problem is Lipschitz continuous with respect to perturbations of the tuning parameter $\gamma$ and that the Huber M-estimator problem has many solutions for small tuning parameter $\gamma$ if the linear l1 estimation problem has multiple solutions.
Abstract: Relationships between a linear l1 estimation problem and the Huber M-estimator problem can be easily established by their dual formulations. The least norm solution of a linear programming problem studied by Mangasarian and Meyer [SIAM J. Control Optim., 17 (1979), pp. 745--752] provides a key link between the dual problems. Based on the dual formulations, we establish a local linearity property of the Huber M-estimators with respect to the tuning parameter $\gamma$ and prove that the solution set of the Huber M-estimator problem is Lipschitz continuous with respect to perturbations of the tuning parameter $\gamma$. As a consequence, the set of the linear l1 estimators is the limit of the set of the Huber M-estimators as $\gamma\to 0+$. Thus, the Huber M-estimator problem has many solutions for small tuning parameter $\gamma$ if the linear l1 estimation problem has multiple solutions. A recursive version of Madsen and Nielsen's algorithm [SIAM J. Optim., 3 (1993), pp. 223--235] based on computation of the Huber M-estimator is proposed for finding a linear l1 estimator.
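
The $\gamma\to 0+$ limit described above can be illustrated numerically on an invented one-dimensional data set: the Huber M-estimate approaches the l1 estimate (the median) as the tuning parameter shrinks. The Huber convention below (quadratic on $[-\gamma,\gamma]$, linear outside, scaled so it tends to $|r|$) is one common choice:

```python
import numpy as np
from scipy.optimize import minimize_scalar

a = np.array([1.0, 2.0, 7.0])   # data; the l1 (median) estimate is 2.0

def huber(r, gamma):
    """Huber function: r^2/(2*gamma) for |r| <= gamma, |r| - gamma/2 outside.
    Continuous with continuous derivative; tends to |r| as gamma -> 0+."""
    return np.where(np.abs(r) <= gamma,
                    r**2 / (2 * gamma),
                    np.abs(r) - gamma / 2)

def m_estimate(gamma):
    obj = lambda x: huber(x - a, gamma).sum()
    return minimize_scalar(obj, bounds=(0.0, 8.0), method="bounded").x

# As gamma -> 0+, the estimate moves toward the median of the data:
for g in (2.0, 0.5, 0.05):
    print(g, m_estimate(g))
```

For this data set the estimate is 2.5 at $\gamma = 2$ and settles at the median 2.0 for small $\gamma$, matching the limit result in the abstract.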

Journal ArticleDOI
TL;DR: Convergence rate results are derived for a stochastic optimization problem where a performance measure is minimized with respect to a vector parameter t for applications including the optimization of steady-state simulation models with likelihood ratio, perturbation analysis, or finite-difference gradient estimators.
Abstract: Convergence rate results are derived for a stochastic optimization problem where a performance measure is minimized with respect to a vector parameter t. Assuming that a gradient estimator is available and that both the bias and the variance of the estimator are (known) functions of the budget devoted to its computation, the gradient estimator is employed in conjunction with a stochastic approximation (SA) algorithm. Our interest is to figure out how to allocate the total available computational budget to the successive SA iterations. The effort is devoted to solving the asymptotic version of this problem by finding the convergence rate of SA toward the optimizer, first as a function of the number of iterations and then as a function of the total computational effort. As a result the optimal rate of increase of the computational budget per iteration can be found. Explicit expressions for the case where the computational budget devoted to an iteration is a polynomial in the iteration number, and where the bias and variance of the gradient estimator are polynomials of the computational budget, are derived. Applications include the optimization of steady-state simulation models with likelihood ratio, perturbation analysis, or finite-difference gradient estimators; optimization of infinite-horizon models with discounting; optimization of functions of several expectations; and so on. Several examples are discussed. Our results readily generalize to general root-finding problems.
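
The budget-allocation idea can be sketched as follows: a stochastic approximation recursion in which the per-iteration budget grows polynomially in the iteration number (here, linearly), so the variance of the averaged gradient estimator decays with k. The model problem and all constants are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimize f(x) = E[(x - Z)^2] / 2 with Z ~ N(1, 1); the minimizer is x = 1.
# The gradient estimator averages n_k samples, so its variance is 1/n_k;
# here the computational budget per iteration grows polynomially: n_k = k.
x = 5.0
for k in range(1, 201):
    z = rng.normal(1.0, 1.0, size=k)   # budget n_k = k samples this iteration
    g = np.mean(x - z)                 # unbiased averaged gradient estimate
    x -= (1.0 / k) * g                 # Robbins--Monro stepsize a_k = 1/k
print(x)  # close to the minimizer 1.0
```

Increasing the budget per iteration trades fewer SA iterations for lower-variance gradient estimates; the paper's results characterize the rate of increase that optimizes convergence per unit of total computational effort.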

Journal ArticleDOI
TL;DR: This paper shows that various matrices from the linear programming and mixed integer programming libraries Netlib and Miplib can indeed be decomposed into so-called bordered block diagonal form by computing optimal decompositions or decomposition with proven quality.
Abstract: In this paper we investigate whether matrices arising from linear or integer programming problems can be decomposed into so-called bordered block diagonal form. More precisely, given some matrix A, we try to assign as many rows as possible to some number $\beta$ of blocks of size $\kappa$ such that no two rows assigned to different blocks intersect in a common column. Bordered block diagonal form is desirable because it can guide and speed up the solution process for linear and integer programming problems. We show that various matrices from the linear programming and mixed integer programming libraries Netlib and Miplib can indeed be decomposed into this form by computing optimal decompositions or decompositions with proven quality. These computations are done with a branch-and-cut algorithm based on polyhedral investigations of the matrix decomposition problem. In practice, however, one would use heuristics to find a good decomposition. We present several heuristic ideas and test their performance. Finally, we investigate the usefulness of optimal matrix decompositions into bordered block diagonal form for integer programming by using such decompositions to guide the branching process in a branch-and-cut code for general mixed integer programs.

Journal ArticleDOI
TL;DR: It is shown that, under suitable assumptions, the program's optimum value can be approximated by the values of finite-dimensional linear programs, and that every accumulation point of a sequence of optimal solutions for the approximating programs is an optimal solution for the original problem.
Abstract: This paper presents approximation schemes for an infinite linear program. In particular, it is shown that, under suitable assumptions, the program's optimum value can be approximated by the values of finite-dimensional linear programs, and that, in addition, every accumulation point of a sequence of optimal solutions for the approximating programs is an optimal solution for the original problem.

Journal ArticleDOI
TL;DR: The polynomial convergence of the class of primal-dual feasible interior-point algorithms for semidefinite programming (SDP) based on the Monteiro and Zhang family of search directions is established for the first time.
Abstract: This paper establishes the polynomial convergence of the class of primal-dual feasible interior-point algorithms for semidefinite programming (SDP) based on the Monteiro and Zhang family of search directions. In contrast to Monteiro and Zhang's work [Math. Programming, 81 (1998), pp. 281--299], here no condition is imposed on the scaling matrix that determines the search direction. We show that the polynomial iteration-complexity bounds of two well-known algorithms for linear programming, namely the short-step path-following algorithm of Kojima, Mizuno, and Yoshise [Math. Programming, 44 (1989), pp. 1--26] and Monteiro and Adler [Math. Programming, 44 (1989), pp. 27--41 and pp. 43--66] and the predictor-corrector algorithm of Mizuno, Todd, and Ye [Math. Oper. Res., 18 (1993), pp. 945--981] carry over into the context of SDP. Since the Monteiro and Zhang family of directions includes the Alizadeh, Haeberly, and Overton direction, we establish for the first time the polynomial convergence of algorithms based on this search direction.

Journal ArticleDOI
TL;DR: Global convergence and, under a nonsingularity assumption, local Q-superlinear (or quadratic) convergence of the algorithm are established and calculation of a generalized Jacobian is discussed and numerical results are presented.
Abstract: Based on a semismooth equation reformulation using Fischer's function, a trust region algorithm is proposed for solving the generalized complementarity problem (GCP). The algorithm uses a generalized Jacobian of the function involved in the semismooth equation and adopts the squared natural residual of the semismooth equation as a merit function. The proposed algorithm is applicable to the nonlinear complementarity problem because the latter problem is a special case of the GCP. Global convergence and, under a nonsingularity assumption, local Q-superlinear (or quadratic) convergence of the algorithm are established. Moreover, calculation of a generalized Jacobian is discussed and numerical results are presented.
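
The Fischer function mentioned above is commonly taken to be the Fischer--Burmeister function $\phi(a,b)=\sqrt{a^2+b^2}-a-b$, which vanishes exactly when $a,b\ge 0$ and $ab=0$. A minimal one-dimensional illustration on an invented NCP instance (solved here with a simple root finder rather than the paper's trust region algorithm):

```python
import math
from scipy.optimize import brentq

def fischer_burmeister(a, b):
    """phi(a, b) = sqrt(a^2 + b^2) - a - b; zero iff a >= 0, b >= 0, a*b = 0."""
    return math.sqrt(a * a + b * b) - a - b

# 1-D nonlinear complementarity problem: x >= 0, F(x) >= 0, x * F(x) = 0,
# with F(x) = x - 1. The unique solution is x = 1 (where F vanishes).
F = lambda x: x - 1.0
residual = lambda x: fischer_burmeister(x, F(x))

x_star = brentq(residual, 0.5, 2.0)   # root of the semismooth residual
print(x_star)  # approximately 1.0
```

Squaring this residual gives the natural merit function the abstract refers to; its nonsmoothness at the origin is why a generalized Jacobian is needed in the Newton-type steps.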

Journal ArticleDOI
TL;DR: An error is pointed out in the local convergence proof in the quoted paper and a correct proof is given.
Abstract: An error is pointed out in the local convergence proof in the quoted paper [J. L. Zhou and A. L. Tits, SIAM J. Optim., 6 (1996), pp. 461--487]. A correct proof is given.