
Showing papers in "Computational Optimization and Applications in 1996"


Journal ArticleDOI
TL;DR: A class of parametric smooth functions that approximate the fundamental plus function, (x)+=max{0, x}, by twice integrating a probability density function leads to classes of smooth parametric nonlinear equation approximations of nonlinear and mixed complementarity problems (NCPs and MCPs).
Abstract: We propose a class of parametric smooth functions that approximate the fundamental plus function, (x)+=max{0, x}, by twice integrating a probability density function. This leads to classes of smooth parametric nonlinear equation approximations of nonlinear and mixed complementarity problems (NCPs and MCPs). For any solvable NCP or MCP, existence of an arbitrarily accurate solution to the smooth nonlinear equations, as well as to the NCP or MCP, is established for a sufficiently large value of a smoothing parameter α. Newton-based algorithms are proposed for the smooth problem. For strongly monotone NCPs, global convergence and local quadratic convergence are established. For solvable monotone NCPs, each accumulation point of the proposed algorithms solves the smooth problem. Exact solutions of our smooth nonlinear equation for various values of the parameter α generate an interior path, which is different from the central path for interior point methods. Computational results for 52 test problems compare favorably with those for another Newton-based method. The smoothing technique is capable of efficiently solving the test problems solved by Dirkse and Ferris [6], Harker and Xiao [11] and Pang and Gabriel [28].
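For illustration, one well-known smoothing of this kind (shown here as an assumed example, not necessarily the exact family studied in the paper) integrates the sigmoid density twice, giving p(x, α) = x + (1/α)·log(1 + e^(−αx)):

```python
import math

def smooth_plus(x, alpha):
    """Smooth approximation of the plus function (x)+ = max{0, x},
    obtained by twice integrating the sigmoid density; the result is
    p(x, alpha) = x + (1/alpha) * log(1 + exp(-alpha * x))."""
    t = -alpha * x
    if t > 0:
        # Rewrite log(1 + e^t) = t + log(1 + e^-t) to avoid overflow.
        return x + (t + math.log1p(math.exp(-t))) / alpha
    return x + math.log1p(math.exp(t)) / alpha

# The maximum error, attained at x = 0, is log(2)/alpha, so the
# approximation becomes arbitrarily accurate as alpha grows.
```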

465 citations


Journal ArticleDOI
TL;DR: A modification of the (infeasible) primal-dual interior point method that uses multiple corrections to improve the centrality of the current iterate and gives on the average a 25% to 40% reduction in the number of iterations compared with the widely used second-order predictor-corrector method.
Abstract: A modification of the (infeasible) primal-dual interior point method is developed. The method uses multiple corrections to improve the centrality of the current iterate. The maximum number of corrections the algorithm is encouraged to make depends on the ratio of the efforts to solve and to factorize the KKT systems. For any LP problem, this ratio is determined right after preprocessing the KKT system and prior to the optimization process. The harder the factorization, the more advantageous the higher-order corrections might prove to be. The computational performance of the method is studied on more difficult Netlib problems as well as on tougher and larger real-life LP models arising from applications. The use of multiple centrality corrections gives on the average a 25% to 40% reduction in the number of iterations compared with the widely used second-order predictor-corrector method. This translates into 20% to 30% savings in CPU time.

254 citations


Journal ArticleDOI
TL;DR: It is shown that any stationary point of the unconstrained objective function is a solution of NCP if the mapping F involved in NCP is continuously differentiable and monotone, and that the level sets are bounded if F is continuous and strongly monotone.
Abstract: A reformulation of the nonlinear complementarity problem (NCP) as an unconstrained minimization problem is considered. It is shown that any stationary point of the unconstrained objective function is a solution of NCP if the mapping F involved in NCP is continuously differentiable and monotone, and that the level sets are bounded if F is continuous and strongly monotone. A descent algorithm is described which uses only function values of F. Some numerical results are given.
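As a concrete illustration of such a reformulation (using the Fischer-Burmeister function, a standard choice that may differ from the specific merit function used in the paper):

```python
import math

def fischer_burmeister(a, b):
    # phi(a, b) = sqrt(a^2 + b^2) - a - b equals zero exactly when
    # a >= 0, b >= 0 and a * b = 0 (the complementarity conditions).
    return math.hypot(a, b) - a - b

def merit(x, F):
    """Unconstrained merit function for the NCP: find x >= 0 with
    F(x) >= 0 and x . F(x) = 0; its global minima with value zero
    are exactly the solutions of the NCP."""
    return 0.5 * sum(fischer_burmeister(xi, fi) ** 2
                     for xi, fi in zip(x, F(x)))

# Hypothetical example NCP: F(x) = (x1 - 1, x2 + 1) has solution (1, 0).
F = lambda x: (x[0] - 1.0, x[1] + 1.0)
print(merit((1.0, 0.0), F))  # prints 0.0 at the solution
```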

185 citations


Journal ArticleDOI
TL;DR: A new heuristic algorithm is proposed for solving the linear ordering problem, making use of an insertion-sort pattern and the operation of permutation reversal; the surprisingly positive effect of the reversal operation seems to be the result of a unique property of the problem, its symmetry.
Abstract: The linear ordering problem is an NP-hard combinatorial problem with a large number of applications. Contrary to another very popular problem from the same category, the traveling salesman problem, relatively little space in the literature has been devoted to the linear ordering problem so far. This is particularly true for the question of developing good heuristic algorithms for this problem. In this paper we propose a new heuristic algorithm for solving the linear ordering problem. In this algorithm we make use of an insertion-sort pattern as well as the operation of permutation reversal. The surprisingly positive effect of the reversal operation, justified in part theoretically and confirmed in computational examples, seems to be the result of a unique property of the problem, called in the paper the symmetry of the linear ordering problem. This property consists in the fact that if a given permutation is an optimal solution of the problem with the criterion function being maximized, then the reversed permutation is a solution of the problem with the same criterion function being minimized.
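The symmetry property is easy to verify numerically; the following sketch (with a made-up 3×3 cost matrix) checks that the objective values of a permutation and its reversal always sum to the constant total of off-diagonal entries:

```python
def lop_objective(C, pi):
    """Linear ordering objective: sum of C[pi[i]][pi[j]] over all i < j."""
    n = len(pi)
    return sum(C[pi[i]][pi[j]] for i in range(n) for j in range(i + 1, n))

# Made-up 3x3 cost matrix (diagonal ignored).
C = [[0, 5, 2],
     [1, 0, 7],
     [4, 3, 0]]
pi = [2, 0, 1]
total = sum(C[i][j] for i in range(3) for j in range(3) if i != j)
# Symmetry: the objective of pi plus the objective of its reversal is
# the constant sum of all off-diagonal entries, so a maximizing
# permutation, reversed, minimizes the same criterion.
assert lop_objective(C, pi) + lop_objective(C, pi[::-1]) == total
```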

85 citations


Journal ArticleDOI
TL;DR: A fast algorithm, introduced by Brenier, which computes the Legendre-Fenchel transform of a real-valued function is investigated and the new approach of separating primal and dual spaces allows a clearer understanding of the algorithm and yields better numerical behavior.
Abstract: We investigate a fast algorithm, introduced by Brenier, which computes the Legendre-Fenchel transform of a real-valued function. We generalize his work to boxed domains and introduce a parameter in order to build an iterative algorithm. The new approach of separating primal and dual spaces allows a clearer understanding of the algorithm and yields better numerical behavior. We extend known complexity results and give new ones about the convergence of the algorithm.
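For reference, the discrete Legendre-Fenchel transform that such algorithms compute can be written directly as an O(n·m) maximization (this is only the definition, not the fast scheme analyzed in the paper):

```python
def legendre_fenchel(xs, fs, slopes):
    """Discrete Legendre-Fenchel transform: for each slope s return
    f*(s) = max over the sample points x of s*x - f(x). This direct
    O(n*m) loop is the definition; the fast algorithm obtains the
    same values far more cheaply."""
    return [max(s * x - f for x, f in zip(xs, fs)) for s in slopes]

# f(x) = x^2/2 is its own conjugate: f*(s) = s^2/2.
xs = [i / 100.0 for i in range(-200, 201)]
fs = [x * x / 2.0 for x in xs]
out = legendre_fenchel(xs, fs, [0.0, 1.0])
```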

49 citations


Journal ArticleDOI
TL;DR: An algorithm is presented for solving two-level programming problems that combines a direction finding problem with a regularization of the lower level problem; the upper level objective function is included in the regularization to yield uniqueness of the follower's solution set.
Abstract: In the paper, an algorithm is presented for solving two-level programming problems. This algorithm combines a direction finding problem with a regularization of the lower level problem. The upper level objective function is included in the regularization to yield uniqueness of the follower's solution set. This is possible if the problem functions are convex and the upper level objective function has a positive definite Hessian. The computation of a direction of descent and of the step size is discussed in more detail. Afterwards the convergence proof is given.

40 citations


Journal ArticleDOI
TL;DR: A new algorithmic framework for solving unconstrained minimization problems that incorporates a curvilinear linesearch that is a combination of an approximate Newton direction and a direction of negative curvature is presented.
Abstract: We present a new algorithmic framework for solving unconstrained minimization problems that incorporates a curvilinear linesearch. The search direction used in our framework is a combination of an approximate Newton direction and a direction of negative curvature. Global convergence to a stationary point where the Hessian matrix is positive semidefinite is exhibited for this class of algorithms by means of a nonmonotone stabilization strategy. An implementation using the Bunch-Parlett decomposition is shown to outperform several other techniques on a large class of test problems.

38 citations


Journal ArticleDOI
TL;DR: This paper formulates the problem of predicting the quasistatic planar motion of a passive rigid body in frictional contact with a set of active rigid bodies as a certain uncoupled complementarity problem, and shows that it belongs to the class of NP-complete problems.
Abstract: In this paper, we study the problem of predicting the quasistatic planar motion of a passive rigid body in frictional contact with a set of active rigid bodies. The active bodies can be thought of as the links of a mechanism or robot manipulator whose positions can be actively controlled by actuators. The passive body can be viewed as a “grasped” object, which moves only in response to contact forces and other external forces such as those due to gravity. We formulate this problem as a certain uncoupled complementarity problem, and show that it belongs to the class of NP-complete problems. Finally, numerical results of our proposed linear programming-based solution algorithm for this class of problems are presented and compared to the only other currently available solution algorithm.

38 citations


Journal ArticleDOI
TL;DR: A quasi-Newton algorithm for semi-infinite programming using an L∞ exact penalty function is described, and numerical results are presented.
Abstract: A quasi-Newton algorithm for semi-infinite programming using an L∞ exact penalty function is described, and numerical results are presented. Comparisons with three Newton algorithms and one other quasi-Newton algorithm show that the algorithm is very promising in practice.

36 citations


Journal ArticleDOI
TL;DR: A variant of the simulated annealing algorithm, based on the generalized method of Bohachevsky et al., is proposed for continuous optimization problems, which shows substantial improvements both in solution quality and efficiency.
Abstract: A variant of the simulated annealing algorithm, based on the generalized method of Bohachevsky et al., is proposed for continuous optimization problems. The algorithm automatically adjusts the step sizes to reflect the local slopes and function values, and it controls the random directions to point favorably toward potential improvements. Computational results on some well known functions show substantial improvements both in solution quality and efficiency.
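A minimal Metropolis-style annealing loop for a one-dimensional continuous objective is sketched below; the paper's automatic step-size and direction control are not reproduced, and the parameter values are illustrative assumptions:

```python
import math, random

def anneal(f, x0, step=1.0, t0=1.0, cooling=0.995, iters=5000, seed=0):
    """Minimal simulated annealing for a one-dimensional continuous
    objective: random steps, Metropolis acceptance, geometric cooling."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        y = x + step * rng.uniform(-1.0, 1.0)
        fy = f(y)
        # Always accept improvements; accept uphill moves with
        # Boltzmann probability exp(-(fy - fx) / t).
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # geometric cooling schedule
    return best, fbest

x, v = anneal(lambda z: (z - 3.0) ** 2, x0=-10.0)
```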

35 citations


Journal ArticleDOI
TL;DR: This paper shows how a genetic algorithm (GA) is used to construct an optimal arrangement of two-dimensional rectilinear blocks using a decoding technique known as circular placement.
Abstract: Most existing placement algorithms were designed to handle blocks that are rectangular in shape. In this paper, we show how a genetic algorithm (GA) is used to construct an optimal arrangement of two-dimensional rectilinear blocks. Our approach does not require the orientation of each block to be fixed. To transform the placement problem to a GA problem, we devised a decoding technique known as circular placement. The novelty of the circular placement technique is that it configures the rectilinear blocks by building up potentially good groupings of blocks starting from the corners of the placement area. To complement the circular placement approach, we present a methodology for deriving a suitable objective function. We confirm the performance of our GA-based placement algorithm by presenting simulation results of some problems on tiling with up to 128 polyominoes. The algorithm described in this paper has great potential for applications in packing, compacting and general component placement in the various disciplines of engineering.

Journal ArticleDOI
TL;DR: For the general quadratic programming problem (including an equivalent form of the linear complementarity problem), a new solution method of branch and bound type is proposed that uses a well-known simplicial subdivision and the bound estimation is performed by solving certain linear programs.
Abstract: For the general quadratic programming problem (including an equivalent form of the linear complementarity problem) a new solution method of branch and bound type is proposed. The branching procedure uses a well-known simplicial subdivision and the bound estimation is performed by solving certain linear programs.

Journal ArticleDOI
TL;DR: Oracles for the problem of minimizing a convex (possibly nondifferentiable) function subject to box constraints, and corresponding complexity estimates are discussed.
Abstract: Recently Goffin, Luo and Ye have analyzed the complexity of an analytic center algorithm for convex feasibility problems defined by a separation oracle. The oracle is called at the (possibly approximate) analytic center of the set given by the linear inequalities which are the previous answers of the oracle. We discuss oracles for the problem of minimizing a convex (possibly nondifferentiable) function subject to box constraints, and give corresponding complexity estimates.

Journal ArticleDOI
TL;DR: A heuristic for the Steiner problem in graphs (SPG) is presented based on an approach similar to Prim's algorithm for the minimum spanning tree, but in this approach, arcs are associated with preference weights which are used to break ties among alternative choices of shortest paths occurring during the course of the algorithm.
Abstract: In this paper, we present a heuristic for the Steiner problem in graphs (SPG) along with some experimental results. The heuristic is based on an approach similar to Prim's algorithm for the minimum spanning tree. However, in this approach, arcs are associated with preference weights which are used to break ties among alternative choices of shortest paths occurring during the course of the algorithm. The preference weights are calculated according to a global view which takes into consideration the effect of all the regular nodes (the nodes to be connected) on determining the choice of an arc in the solution tree.

Journal ArticleDOI
TL;DR: It is proved that the Q1-factor of the duality gap sequence is exactly 1/4, and the convergence of the Tapia indicators is also discussed.
Abstract: In the absence of strict complementarity, Monteiro and Wright [7] proved that the convergence rate for a class of Newton interior-point methods for linear complementarity problems is at best linear. They also established an upper bound of 1/4 for the Q1-factor of the duality gap sequence when the steplengths converge to one. In the current paper, we prove that the Q1-factor of the duality gap sequence is exactly 1/4. In addition, the convergence of the Tapia indicators is also discussed.

Journal ArticleDOI
TL;DR: A parallel decomposition algorithm for solving a class of convex optimization problems, which is broad enough to contain ordinary convex programming problems with a strongly convex objective function, is proposed.
Abstract: In this paper, we propose a parallel decomposition algorithm for solving a class of convex optimization problems, which is broad enough to contain ordinary convex programming problems with a strongly convex objective function. The algorithm is a variant of the trust region method applied to the Fenchel dual of the given problem. We prove global convergence of the algorithm and report some computational experience with the proposed algorithm on the Connection Machine Model CM-5.

Journal ArticleDOI
TL;DR: This work presents and proves convergence of gradient and asynchronous gradient algorithms for solving the dual problem of the single commodity strictly convex network flow problem.
Abstract: We consider the single commodity strictly convex network flow problem. The dual of this problem is unconstrained, differentiable, and well suited for solution via distributed or parallel iterative methods. We present and prove convergence of gradient and asynchronous gradient algorithms for solving the dual problem. Computational results are given and analysed.
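A sketch of dual gradient ascent for a hypothetical instance with quadratic arc costs x²/2 (the paper treats general strictly convex costs and also asynchronous variants): for this cost, the flow recovered from node prices p is x_a = p[u] − p[v] on arc (u, v), and the dual gradient is the supply vector minus the divergence of that flow.

```python
def dual_gradient_flow(arcs, b, n, step=0.2, iters=2000):
    """Gradient ascent on the dual of:  minimize sum of x_a^2 / 2
    subject to flow conservation with node supplies b. The dual is
    unconstrained and differentiable in the node prices p; its
    gradient at p is b minus the divergence of the flow x(p)."""
    p = [0.0] * n
    for _ in range(iters):
        x = [p[u] - p[v] for (u, v) in arcs]
        g = list(b)
        for (u, v), xa in zip(arcs, x):
            g[u] -= xa  # flow leaving node u
            g[v] += xa  # flow entering node v
        p = [pi + step * gi for pi, gi in zip(p, g)]
    return [p[u] - p[v] for (u, v) in arcs]

# Path 0 -> 1 -> 2 shipping one unit from node 0 to node 2;
# the unique feasible (hence optimal) flow is (1, 1).
flow = dual_gradient_flow([(0, 1), (1, 2)], b=[1.0, 0.0, -1.0], n=3)
```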

Journal ArticleDOI
TL;DR: This paper explores a hybrid approach that employs a general-purpose subset enumeration scheme together with problem-specific directives to guide an efficient search for combinatorial optimization problems.
Abstract: Algebraic languages are at the heart of many successful optimization modeling systems, yet they have been used with only limited success for combinatorial (or discrete) optimization. We show in this paper, through a series of examples, how an algebraic modeling language might be extended to help with a greater variety of combinatorial optimization problems. We consider specifically those problems that are readily expressed as the choice of a subset from a certain set of objects, rather than as the assignment of numerical values to variables. Since there is no practicable universal algorithm for problems of this kind, we explore a hybrid approach that employs a general-purpose subset enumeration scheme together with problem-specific directives to guide an efficient search.

Journal ArticleDOI
TL;DR: Tensor methods for large systems of nonlinear equations based on Krylov subspace techniques for approximately solving the linear systems that are required in each tensor iteration are described.
Abstract: In this paper, we describe tensor methods for large systems of nonlinear equations based on Krylov subspace techniques for approximately solving the linear systems that are required in each tensor iteration. We refer to a method in this class as a tensor-Krylov algorithm. We describe comparative testing for a tensor-Krylov implementation versus an analogous implementation based on a Newton-Krylov method. The test results show that tensor-Krylov methods are much more efficient and robust than Newton-Krylov methods on hard nonlinear equations problems.

Journal ArticleDOI
TL;DR: This paper characterizes the solution in the case when e and d are linear in each interval, approximates the problem by a sequence of finite-dimensional minimization problems, and proves that the sequence of solutions to the approximating problems converges in the norm of W2,2 to the solution of the original problem.
Abstract: In this paper, we study the problem of finding a real-valued function f on the interval [0, 1] with minimal L2 norm of the second derivative that interpolates the points (ti, yi) and satisfies e(t) ≤ f(t) ≤ d(t) for t ∈ [0, 1]. The functions e and d are continuous in each interval (ti, ti+1) and at t1 and tn but may be discontinuous at ti. Based on an earlier paper by the first author [7], we characterize the solution in the case when e and d are linear in each interval (ti, ti+1). We present a method for the reduction of the problem to a convex finite-dimensional unconstrained minimization problem. When e and d are arbitrary continuous functions we approximate the problem by a sequence of finite-dimensional minimization problems and prove that the sequence of solutions to the approximating problems converges in the norm of W2,2 to the solution of the original problem. Numerical examples are reported.

Journal ArticleDOI
TL;DR: The connection with the Group optimization representation of an ILPC is given together with a discussion of the difficulty of calculating the value function for a general Integer Programme.
Abstract: The value function of an Integer Programme is the optimal objective value expressed as a function of the right-hand-side coefficients. For an Integer Programme over a Cone (ILPC) this takes the form of a Chvatal Function which is built up from the operations of taking non-negative linear combinations and integer round-up. A doubly recursive procedure for calculating such a value function is given. This is illustrated by a small numerical example. It is also shown how the optimal solution of an ILPC can be obtained as a function of the right-hand-side through this recursion. The connection with the Group optimization representation of an ILPC is also given together with a discussion of the difficulty of calculating the value function for a general Integer Programme.
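As a toy illustration (a hypothetical one-variable instance, not from the paper), the value function of "minimize x subject to 2x ≥ b, x integer" is the Chvatal function ⌈b/2⌉, built from one non-negative linear combination and one integer round-up:

```python
import math

def value_function(b):
    """Value function z(b) of the toy integer programme
        minimize x  subject to  2x >= b,  x integer,
    namely the Chvatal function z(b) = ceil(b / 2): a non-negative
    linear combination (b/2) followed by an integer round-up."""
    return math.ceil(b / 2)

# Brute-force check against the programme itself for small right-hand sides.
for b in range(7):
    assert value_function(b) == min(x for x in range(-10, 11) if 2 * x >= b)
```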

Journal ArticleDOI
TL;DR: A hybrid algorithm combining these two techniques is introduced and seems to be the most robust procedure for the solution of large-scale LCPs with symmetric positive definite matrices.
Abstract: In this paper we describe a computational study of block principal pivoting (BP) and interior-point predictor-corrector (PC) algorithms for the solution of large-scale linear complementarity problems (LCP) with symmetric positive definite matrices. This study shows that these algorithms are in general quite appropriate for this type of LCPs. The BP algorithm does not seem to be sensitive to bad scaling and degeneracy of the unique solution of the LCP, while these aspects have some effect on the performance of the PC algorithm. On the other hand, the BP method has not performed well in two LCPs with ill-conditioned matrices for which the PC algorithm has behaved quite well. A hybrid algorithm combining these two techniques is also introduced and seems to be the most robust procedure for the solution of large-scale LCPs with symmetric positive definite matrices.

Journal ArticleDOI
TL;DR: A data parallel primal-dual augmenting path algorithm for the dense linear many-to-one assignment problem also known as semi-assignment is described and it is shown that the best known sequential computational complexity of O(mn2) for dense problems, is reduced to the parallel complexity ofO(mn), on a machine with n processors supporting reductions in O(1) time.
Abstract: The purpose of this study is to describe a data parallel primal-dual augmenting path algorithm for the dense linear many-to-one assignment problem also known as semi-assignment. This problem could for instance be described as assigning n persons to m(≤n) job groups.

Journal ArticleDOI
TL;DR: An algorithm, based on spherical trigonometry, for finding the minimax point is presented; the minimax point thus obtained is unique and the algorithm is O(n²) in the worst case.
Abstract: A particular continuous single facility minimax location problem on the surface of a hemisphere is discussed. We assume that all the demand points are equiweighted. An algorithm, based on spherical trigonometry, for finding the minimax point is presented. The minimax point thus obtained is unique and the algorithm is O(n²) in the worst case.

Journal ArticleDOI
TL;DR: This article deals with a method to compute bounds in algorithms for solving the generalized set packing/partitioning problems by using the dual (Lagrange) bounds instead of the linear bounds.
Abstract: This article deals with a method to compute bounds in algorithms for solving the generalized set packing/partitioning problems. The problems under investigation can be solved by the branch and bound method. Linear bounds computed by the simplex method are usually used. It is well known that this method breaks down on some occasions because the corresponding linear programming problems are degenerate. However, it is possible to use the dual (Lagrange) bounds instead of the linear bounds. A partial realization of this approach is described that uses a network relaxation of the initial problem. The possibilities for using the dual network bounds in the approximation techniques to solve the problems under investigation are described.

Journal ArticleDOI
TL;DR: Computational experience is reported with an implementation of three algorithms for the general economic equilibrium problem; the projection algorithm for variational inequalities increases the size of solvable models by a factor of 5–10 in comparison with the classical homotopy method.
Abstract: We report on computational experience with an implementation of three algorithms for the general economic equilibrium problem. We find that the projection algorithm for variational inequalities increases the size of solvable models by a factor of 5–10 in comparison with the classical homotopy method. As a third approach we implemented a simulated annealing heuristic, which may be suitable for estimating equilibria for very large models.

Journal ArticleDOI
TL;DR: It is shown that this special class of problems can be recognized within the class of all set covering problems by a polynomial algorithm of O(MN) complexity, where M and N are the numbers of constraints and variables of a given instance, respectively.
Abstract: A class of set covering problems is introduced. This class is obtained from the reformulation of a well-known combinatorial problem of Erdos on the hypercube. An algorithmic method of solution to the problem is proposed. Max-flow algorithms are the main ingredients of our method. The computational results presented here improve the best existing bound related to the combinatorial problem. This, at the same time, provides a good approximate solution to the corresponding set covering problem with more than a thousand variables and constraints. Moreover, we show that our special class of problems can be recognized within the class of all set covering problems by a polynomial algorithm with O(MN) complexity, where M and N are the numbers of constraints and variables of a given instance, respectively.