
Showing papers in "Computational Optimization and Applications in 2001"


Journal ArticleDOI
TL;DR: Smoothing methods are applied here to generate and solve an unconstrained smooth reformulation of the support vector machine for pattern classification using a completely arbitrary kernel; a fast Newton–Armijo algorithm for solving the resulting SSVM converges globally and quadratically.
Abstract: Smoothing methods, extensively used for solving important mathematical programming problems and applications, are applied here to generate and solve an unconstrained smooth reformulation of the support vector machine for pattern classification using a completely arbitrary kernel. We term such reformulation a smooth support vector machine (SSVM). A fast Newton–Armijo algorithm for solving the SSVM converges globally and quadratically. Numerical results and comparisons are given to demonstrate the effectiveness and speed of the algorithm. On six publicly available datasets, SSVM achieved the highest tenfold cross-validation correctness of the five methods compared, and was also the fastest. On larger problems, SSVM was comparable to or faster than SVMlight (T. Joachims, in Advances in Kernel Methods—Support Vector Learning, MIT Press: Cambridge, MA, 1999), SOR (O.L. Mangasarian and David R. Musicant, IEEE Transactions on Neural Networks, vol. 10, pp. 1032–1037, 1999) and SMO (J. Platt, in Advances in Kernel Methods—Support Vector Learning, MIT Press: Cambridge, MA, 1999). SSVM can also generate a highly nonlinear separating surface such as a checkerboard.

565 citations
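The key device in SSVM is a smooth approximation of the plus function (x)+ = max(x, 0) that appears in the unconstrained SVM reformulation; a standard choice of this smoothing is p(x, α) = x + (1/α)·log(1 + exp(−αx)), which approaches (x)+ as α grows. The sketch below is illustrative, not the authors' implementation:

```python
import numpy as np

def plus(x):
    # the nonsmooth plus function (x)_+ = max(x, 0)
    return np.maximum(x, 0.0)

def smooth_plus(x, alpha):
    # p(x, alpha) = x + (1/alpha) * log(1 + exp(-alpha * x)),
    # written with logaddexp for numerical stability
    return x + np.logaddexp(0.0, -alpha * x) / alpha

x = np.linspace(-2.0, 2.0, 9)
approx_err = np.max(np.abs(smooth_plus(x, 50.0) - plus(x)))
```

The gap p(x, α) − (x)+ is largest at x = 0, where it equals (log 2)/α, so α trades smoothness against fidelity; the smoothed objective is twice differentiable, which is what permits a Newton-type iteration.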


Journal ArticleDOI
TL;DR: The nonlinear solver that is considered in this paper is a Sequential Quadratic Programming solver, which is based on branch-and-bound, but does not require the NLP problem at each node to be solved to optimality.
Abstract: This paper considers the solution of Mixed Integer Nonlinear Programming (MINLP) problems. Classical methods for the solution of MINLP problems decompose the problem by separating the nonlinear part from the integer part. This approach is largely due to the existence of packaged software for solving Nonlinear Programming (NLP) and Mixed Integer Linear Programming problems. In contrast, an integrated approach to solving MINLP problems is considered here. This new algorithm is based on branch-and-bound, but does not require the NLP problem at each node to be solved to optimality. Instead, branching is allowed after each iteration of the NLP solver. In this way, the nonlinear part of the MINLP problem is solved whilst searching the tree. The nonlinear solver that is considered in this paper is a Sequential Quadratic Programming solver. A numerical comparison of the new method with nonlinear branch-and-bound is presented and a factor of up to 3 improvement over branch-and-bound is observed.

291 citations


Journal ArticleDOI
TL;DR: The daily photograph scheduling problem is formulated as a generalized knapsack model with large numbers of binary and ternary "logical" constraints, and a tabu search algorithm is developed that integrates an efficient neighborhood, a dynamic tabu tenure mechanism, techniques for constraint handling, intensification and diversification.
Abstract: The daily photograph scheduling problem of earth observation satellites such as Spot 5 consists of scheduling a subset of mono or stereo photographs from a given set of candidates to different cameras. The scheduling must maximize a profit function while satisfying a large number of constraints. In this paper, we first present a formulation of the problem as a generalized version of the well-known knapsack model, which includes large numbers of binary and ternary “logical” constraints. We then develop a tabu search algorithm which integrates some important features including an efficient neighborhood, a dynamic tabu tenure mechanism, techniques for constraint handling, intensification and diversification. Extensive experiments on a set of large and realistic benchmark instances show the effectiveness of this approach.

219 citations
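The basic tabu-search machinery described above (a move neighborhood, a tabu tenure forbidding move reversal, and an aspiration criterion) can be illustrated on a plain 0/1 knapsack. This is a hypothetical minimal sketch, not the authors' Spot 5 solver; it uses a fixed tenure and omits the logical constraints:

```python
def tabu_knapsack(values, weights, cap, tenure=5, iters=200):
    # Minimal tabu search: flip one item per move, forbid re-flipping it
    # for `tenure` iterations; aspiration accepts a tabu move that would
    # beat the best value found so far.
    n = len(values)
    x = [0] * n
    best, best_val = x[:], 0
    tabu = {}  # item -> last iteration at which flipping it is forbidden
    for it in range(iters):
        candidates = []
        for i in range(n):
            y = x[:]
            y[i] ^= 1
            w = sum(wi for wi, yi in zip(weights, y) if yi)
            if w > cap:
                continue  # infeasible neighbor
            v = sum(vi for vi, yi in zip(values, y) if yi)
            if tabu.get(i, -1) >= it and v <= best_val:
                continue  # tabu move with no aspiration
            candidates.append((v, i, y))
        if not candidates:
            continue
        v, i, x = max(candidates)  # best admissible move, even if worsening
        tabu[i] = it + tenure
        if v > best_val:
            best, best_val = x[:], v
    return best, best_val

best, best_val = tabu_knapsack([6, 5, 4, 3], [5, 4, 3, 2], cap=9)
```

Accepting the best admissible move even when it worsens the solution is what lets tabu search climb out of the local optimum {items 0, 1} (value 11) and reach the optimum {items 1, 2, 3} (value 12) on this instance.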


Journal ArticleDOI
Song Xu1
TL;DR: A smoothing method for the minimax problem, based on the exponential penalty function of Kort and Bertsekas for constrained optimization, is proposed; preliminary numerical experiments indicate that the algorithm is promising.
Abstract: In this paper, we propose a smoothing method for the minimax problem. The method is based on the exponential penalty function of Kort and Bertsekas for constrained optimization. Under suitable conditions, the method is globally convergent. Preliminary numerical experiments indicate that the algorithm is promising.

125 citations
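The exponential-penalty idea behind such smoothings replaces max_i f_i(x) by the smooth upper bound (1/p)·log Σ_i exp(p·f_i(x)), which overestimates the max by at most (log m)/p for m functions. A sketch of this log-sum-exp smoothing (the shift by the max is for numerical stability):

```python
import numpy as np

def smooth_max(vals, p):
    # (1/p) * log(sum_i exp(p * f_i)); subtract the max before
    # exponentiating so large p does not overflow
    vals = np.asarray(vals, dtype=float)
    m = vals.max()
    return m + np.log(np.sum(np.exp(p * (vals - m)))) / p

vals = [1.0, 2.5, 2.4]
exact = max(vals)
approx = smooth_max(vals, p=100.0)
```

Because the smooth surrogate is differentiable, the minimax problem can be attacked with ordinary unconstrained methods while p is driven upward.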


Journal ArticleDOI
TL;DR: This article improves the branch-and-bound algorithm of Goh, Safonov and Papavassilopoulos by applying a better convex relaxation of the BMI Eigenvalue Problem (BMIEP), and proposes new Branch- and-Bound and Branch-And-Cut Algorithms.
Abstract: The optimization problem with the Bilinear Matrix Inequality (BMI) is one of the problems which have greatly interested researchers of system and control theory in the last few years. This inequality permits various problems of robust control to be reduced in an elegant way to its form. However, in contrast to the Linear Matrix Inequality (LMI), which can be solved by interior-point methods, the BMI is a computationally difficult object in theory and in practice. This article improves the branch-and-bound algorithm of Goh, Safonov and Papavassilopoulos (Journal of Global Optimization, vol. 7, pp. 365–380, 1995) by applying a better convex relaxation of the BMI Eigenvalue Problem (BMIEP), and proposes new Branch-and-Bound and Branch-and-Cut Algorithms. Numerical experiments were conducted in a systematic way over randomly generated problems, and they show the robustness and the efficiency of the proposed algorithms.

120 citations


Journal ArticleDOI
TL;DR: A new approach for solving the airline crew scheduling problem is developed, based on enumerating hundreds of millions of random pairings; it produces solutions that are significantly better than ones found by current practice.
Abstract: The airline crew scheduling problem is the problem of assigning crew itineraries to flights. We develop a new approach for solving the problem that is based on enumerating hundreds of millions of random pairings. The linear programming relaxation is solved first and then millions of columns with best reduced cost are selected for the integer program. The number of columns is further reduced by a linear programming based heuristic. Finally an integer solution is obtained with a commercial integer programming solver. The branching rule of the solver is enhanced with a combination of strong branching and a specialized branching rule. The algorithm produces solutions that are significantly better than ones found by current practice.

113 citations
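The column-selection step relies on set-partitioning reduced costs: given duals y from the LP relaxation, a pairing with cost c covering a flight set S has reduced cost c − Σ_{i∈S} y_i, and only the columns with the smallest reduced costs are kept for the integer program. A toy illustration with entirely hypothetical data:

```python
def reduced_cost(cost, flights, duals):
    # set-partitioning reduced cost: c_j minus the duals of covered flights
    return cost - sum(duals[f] for f in flights)

duals = {0: 3.0, 1: 2.0, 2: 4.0, 3: 1.0}  # hypothetical LP duals per flight
pairings = [  # (cost, covered flights) -- toy candidate pairings
    (6.0, (0, 1)), (4.0, (2,)), (8.0, (0, 2, 3)), (3.5, (1, 3)),
]
# keep the k pairings with smallest reduced cost for the integer program
k = 2
ranked = sorted(pairings, key=lambda p: reduced_cost(p[0], p[1], duals))
selected = ranked[:k]
```

In the paper the same principle is applied at scale: millions of columns survive this filter out of hundreds of millions enumerated.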


Journal ArticleDOI
TL;DR: A specialized variant of bundle methods suitable for large-scale problems with separable objective, applied to the resolution of a stochastic unit-commitment problem solved by Lagrangian relaxation, is presented.
Abstract: A specialized variant of bundle methods suitable for large-scale problems with separable objective is presented. The method is applied to the resolution of a stochastic unit-commitment problem solved by Lagrangian relaxation. The model includes hydro- as well as thermal-powered plants. Uncertainties lie in the demand, which evolves in time according to a tree of scenarios. Dual variables are preconditioned by using probabilities associated with nodes in the tree. The approach is illustrated by numerical results, obtained on a model of the French production mix over a time horizon of 10 days and 1 month.

90 citations


Journal ArticleDOI
Alexander J. Robertson1
TL;DR: Four GRASP implementations for the multidimensional assignment problem are introduced, which are combinations of two constructive methods (randomized reduced cost greedy and randomized max regret) and two local search methods (two-assignment-exchange and variable depth exchange).
Abstract: The focal problem for centralized multisensor multitarget tracking is the data association problem of partitioning the observations into tracks and false alarms so that an accurate estimate of the true tracks can be recovered. Large classes of these association problems can be formulated as multidimensional assignment problems, which are known to be NP-hard for three dimensions or more. The assignment problems that result from tracking are large scale, sparse and noisy. Solution methods must execute in real-time. The Greedy Randomized Adaptive Local Search Procedure (GRASP) has proven highly effective for solving many classes of NP-hard optimization problems. This paper introduces four GRASP implementations for the multidimensional assignment problem, which are combinations of two constructive methods (randomized reduced cost greedy and randomized max regret) and two local search methods (two-assignment-exchange and variable depth exchange). Numerical results are shown for two random problem classes and one tracking problem class.

85 citations


Journal ArticleDOI
TL;DR: Empirical results indicate that the proposed GRASP implementation compares favorably to classical heuristics and implementations of simulated annealing and tabu search, and is found to be competitive with a genetic algorithm that is considered one of the best currently available for graph coloring.
Abstract: We first present a literature review of heuristics and metaheuristics developed for the problem of coloring graphs. We then present a Greedy Randomized Adaptive Search Procedure (GRASP) for coloring sparse graphs. The procedure is tested on graphs of known chromatic number, as well as random graphs with edge probability 0.1 having from 50 to 500 vertices. Empirical results indicate that the proposed GRASP implementation compares favorably to classical heuristics and implementations of simulated annealing and tabu search. GRASP is also found to be competitive with a genetic algorithm that is considered one of the best currently available for graph coloring.

72 citations
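A GRASP construction phase builds a solution greedily but draws each decision at random from a restricted candidate list (RCL); a full GRASP then applies local search and repeats from many randomized starts. A minimal construction-phase sketch for coloring (the RCL rule and vertex order here are illustrative assumptions, not necessarily the paper's):

```python
import random

def grasp_coloring_construct(adj, rcl_size=2, seed=0):
    # Greedy randomized construction: visit vertices in decreasing-degree
    # order and give each a color drawn at random from the rcl_size
    # smallest colors not used by its already-colored neighbors.
    rng = random.Random(seed)
    color = {}
    for v in sorted(adj, key=lambda u: -len(adj[u])):
        used = {color[u] for u in adj[v] if u in color}
        feasible = [c for c in range(len(adj)) if c not in used]
        color[v] = rng.choice(feasible[:rcl_size])
    return color

# a 5-cycle (chromatic number 3)
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
col = grasp_coloring_construct(adj)
proper = all(col[u] != col[v] for u in adj for v in adj[u])
```

By construction every coloring produced is proper; the randomization only affects how many colors are used, which is what the subsequent local-search phase tries to reduce.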


Journal ArticleDOI
Mhand Hifi1
TL;DR: Two exact algorithms are proposed for solving both two-staged and three-staged unconstrained (un)weighted cutting problems; their performance is evaluated on problem instances from the literature and on other hard randomly-generated instances.
Abstract: In this paper we propose two exact algorithms for solving both two-staged and three-staged unconstrained (un)weighted cutting problems. The two-staged problem is solved by applying a dynamic programming procedure originally developed by Gilmore and Gomory [Gilmore and Gomory, Operations Research, vol. 13, pp. 94–119, 1965]. The three-staged problem is solved by using a top-down approach combined with a dynamic programming procedure. The performance of the exact algorithms is evaluated on problem instances from the literature and other hard randomly-generated problem instances (a total of 53 problem instances). A parallel implementation is an important feature of the algorithm used for solving the three-staged version.

70 citations


Journal ArticleDOI
TL;DR: A new framework is proposed for the application of preconditioned conjugate gradients in the solution of large-scale linear equality constrained minimization problems; numerical experiments indicate computational promise.
Abstract: We propose a new framework for the application of preconditioned conjugate gradients in the solution of large-scale linear equality constrained minimization problems. This framework allows for the exploitation of structure and sparsity in the context of solving the reduced Newton system (despite the fact that the reduced system may be dense). Numerical experiments performed on a variety of test problems from the Netlib LP collection indicate computational promise.

Journal ArticleDOI
TL;DR: It is shown that every accumulation point of the sequence of iterates generated by the proposed algorithm is a well-defined approximate solution of the exact minimization problem.
Abstract: In this paper a proximal bundle method is introduced that is capable of dealing with approximate subgradients. No further knowledge of the approximation quality (like explicit knowledge or controllability of error bounds) is required for proving convergence. It is shown that every accumulation point of the sequence of iterates generated by the proposed algorithm is a well-defined approximate solution of the exact minimization problem. In the case of exact subgradients the algorithm behaves like well-established proximal bundle methods. Numerical tests emphasize the theoretical findings.

Journal ArticleDOI
TL;DR: It is shown that a well-designed branch-and-bound algorithm using Dinkelbach's parametric strategy, linear overestimating function and ω-subdivision strategy can solve problems of practical size in an efficient way.
Abstract: In this paper, we will develop an algorithm for solving a quadratic fractional programming problem which was recently introduced by Lo and MacKinlay to construct a maximal predictability portfolio, a new approach in portfolio analysis. The objective function of this problem is defined by the ratio of two convex quadratic functions, which is a typical global optimization problem with multiple local optima. We will show that a well-designed branch-and-bound algorithm using (i) Dinkelbach's parametric strategy, (ii) linear overestimating function and (iii) ω-subdivision strategy can solve problems of practical size in an efficient way. This algorithm is particularly efficient for Lo-MacKinlay's problem where the associated nonconvex quadratic programming problem has a low-rank nonconcave property.
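Dinkelbach's parametric strategy reduces maximizing a ratio f(x)/g(x) (with g > 0) to a sequence of parametric problems max_x f(x) − λ·g(x), updating λ to the current ratio until the parametric optimum reaches zero. A sketch over a finite candidate set (in the paper the parametric subproblems are nonconvex QPs handled by branch-and-bound, not a grid):

```python
def dinkelbach(f, g, candidates, tol=1e-9, max_iter=100):
    # maximize f(x)/g(x) over a finite candidate set, assuming g > 0 on it
    x = candidates[0]
    lam = f(x) / g(x)
    for _ in range(max_iter):
        # parametric subproblem: maximize f(x) - lam * g(x)
        x = max(candidates, key=lambda c: f(c) - lam * g(c))
        if f(x) - lam * g(x) < tol:
            break          # optimal value ~0: lam equals the maximal ratio
        lam = f(x) / g(x)  # otherwise raise lam to the current ratio
    return x, lam

xs = [i / 100.0 for i in range(-300, 301)]
f = lambda x: x + 2.0          # numerator
g = lambda x: x * x + 1.0      # strictly positive denominator
x_star, ratio = dinkelbach(f, g, xs)
```

For this example the true maximizer of (x + 2)/(x² + 1) is x = −2 + √5 ≈ 0.236, so the grid iteration settles on x = 0.24 with ratio ≈ 2.118.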

Journal ArticleDOI
TL;DR: A distributed optimal control problem for the time-dependent Burgers equation is analyzed and solved with an augmented Lagrangian-SQP technique relying on a second-order sufficient optimality condition.
Abstract: In this work a distributed optimal control problem for the time-dependent Burgers equation is analyzed. To solve the nonlinear control problems the augmented Lagrangian-SQP technique is used, relying on a second-order sufficient optimality condition. Numerical test examples are presented.

Journal ArticleDOI
TL;DR: Algorithms of polynomial complexity for solving the three problems are suggested and their convergence is proved and some important forms of convex functions and computational results are given.
Abstract: A minimization problem with convex and separable objective function subject to a separable convex inequality constraint “≤” and bounded variables is considered. A necessary and sufficient condition is proved for a feasible solution to be an optimal solution to this problem. Convex minimization problems subject to linear equality/linear inequality “≥” constraint, and bounds on the variables are also considered. A necessary and sufficient condition and a sufficient condition, respectively, are proved for a feasible solution to be an optimal solution to these two problems. Algorithms of polynomial complexity for solving the three problems are suggested and their convergence is proved. Some important forms of convex functions and computational results are given in the Appendix.
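For such separable convex problems the optimality conditions decouple coordinate-wise once the multiplier λ of the coupling constraint is fixed, so the problem reduces to a one-dimensional search on λ. A sketch for quadratic terms with simple bounds (the bisection on λ is illustrative; the paper's algorithms are more refined):

```python
def solve_separable(a, w, lo, hi, b, iters=100):
    # minimize sum_i 0.5*w_i*(x_i - a_i)^2  s.t.  sum_i x_i = b,
    # lo_i <= x_i <= hi_i.  KKT: x_i(lam) = clip(a_i - lam/w_i, lo_i, hi_i);
    # sum x_i(lam) is nonincreasing in lam, so bisect on lam.
    def x_of(lam):
        return [min(hi[i], max(lo[i], a[i] - lam / w[i])) for i in range(len(a))]
    lam_lo, lam_hi = -1e6, 1e6  # bracket (assumes b is attainable within bounds)
    for _ in range(iters):
        lam = 0.5 * (lam_lo + lam_hi)
        if sum(x_of(lam)) > b:
            lam_lo = lam  # total too large: raise the multiplier
        else:
            lam_hi = lam
    return x_of(0.5 * (lam_lo + lam_hi))

x = solve_separable(a=[1.0, 2.0, 3.0], w=[1.0, 1.0, 1.0],
                    lo=[0.0] * 3, hi=[10.0] * 3, b=3.0)
```

Here λ = 1 satisfies the KKT system, giving x = (0, 1, 2); breakpoint-based methods obtain the same multiplier in polynomial time without bisection.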

Journal ArticleDOI
TL;DR: In this paper, the authors explore the use of general disjunctions for branching when solving linear programs with general-integer variables and give computational results that show that the size of the enumeration tree can be greatly reduced by branching on such disjunctions rather than on single variables.
Abstract: Typical implementations of branch-and-bound for integer linear programs choose to branch on single variables. In this paper we explore the use of general disjunctions for branching when solving linear programs with general-integer variables. We give computational results that show that the size of the enumeration tree can be greatly reduced by branching on such disjunctions rather than on single variables.

Journal ArticleDOI
TL;DR: An algorithm to approximate the nondominated set of continuous and discrete bicriteria programs is proposed, which employs block norms to find an approximation and evaluate its quality.
Abstract: An algorithm to approximate the nondominated set of continuous and discrete bicriteria programs is proposed. The algorithm employs block norms to find an approximation and evaluate its quality. By automatically adapting to the problem's structure and scaling, the approximation is constructed objectively without interaction with the decision maker. Mathematical and practical examples are included.

Journal ArticleDOI
TL;DR: This research focuses on building a general-purpose combinatorial optimisation problem solver using a variety of meta-heuristic algorithms including Simulated Annealing and Tabu Search and achieves good performance in terms of solution quality and runtime.
Abstract: In recent years, there have been many studies in which tailored heuristics and meta-heuristics have been applied to specific optimisation problems. These codes can be extremely efficient, but may also lack generality. In contrast, this research focuses on building a general-purpose combinatorial optimisation problem solver using a variety of meta-heuristic algorithms including Simulated Annealing and Tabu Search. The system is novel because it uses a modelling environment in which the solution is stored in dense dynamic list structures, unlike a more conventional sparse vector notation. Because of this, it incorporates a number of neighbourhood search operators that are normally only found in tailored codes and it performs well on a range of problems. The general nature of the system allows a model developer to rapidly prototype different problems. The new solver is applied across a range of traditional combinatorial optimisation problems. The results indicate that the system achieves good performance in terms of solution quality and runtime.

Journal ArticleDOI
TL;DR: An algorithm for minimizing a product of p (≥2) affine functions over a polytope is developed, with a second-stage bounding procedure that requires O(p) additional time in each iteration but remarkably reduces the number of branching operations.
Abstract: On the basis of Soland's rectangular branch-and-bound, we develop an algorithm for minimizing a product of p (≥2) affine functions over a polytope. To tighten the lower bound on the value of each subproblem, we install a second-stage bounding procedure, which requires O(p) additional time in each iteration but remarkably reduces the number of branching operations. Computational results indicate that the algorithm is practical if p is less than 15, both in finding an exact optimal solution and an approximate solution.

Journal ArticleDOI
TL;DR: An O(kn²) time sequential algorithm is designed in this paper to solve the maximum weight k-independent set problem on weighted trapezoid graphs.
Abstract: The maximum weight k-independent set problem has applications in many practical problems like k-machines job scheduling problem, k-colourable subgraph problem, VLSI design layout and routing problem. Based on DAG (Directed Acyclic Graph) approach, an O(kn²) time sequential algorithm is designed in this paper to solve the maximum weight k-independent set problem on weighted trapezoid graphs. The weights considered here are all non-negative and associated with each of the n vertices of the graph.

Journal ArticleDOI
TL;DR: This paper considers the problem of finding a point on a general network using two objectives, maximizing the minimum weighted distance from the point to the vertices (Maximin) and maximizing the sum of weighted distances between the point and the vertice (Maxisum).
Abstract: In this paper, we consider the problem of finding a point on a general network using two objectives, maximizing the minimum weighted distance from the point to the vertices (Maximin) and maximizing the sum of weighted distances between the point and the vertices (Maxisum). This bicriterion model can be used to locate an obnoxious facility on a network. We will identify the model properties, develop a polynomial algorithm for generating the efficient set and provide a numerical example.

Journal ArticleDOI
TL;DR: This article presents the HOC algorithm and several new sufficient conditions for convergence of the algorithm to the optimum in the case of convex problems with linear constraints, including a condition on the rank of the constraint matrix that is computationally efficient to verify.
Abstract: Decomposition of multidisciplinary engineering system design problems into smaller subproblems is desirable because it enhances robustness and understanding of the numerical results. Moreover, subproblems can be solved in parallel using the optimization technique most suitable for the underlying mathematical form of the subproblem. Hierarchical overlapping coordination (HOC) is an interesting strategy for solving decomposed problems. It simultaneously uses two or more design problem decompositions, each of them associated with different partitions of the design variables and constraints. Coordination is achieved by the exchange of information between decompositions. This article presents the HOC algorithm and several new sufficient conditions for convergence of the algorithm to the optimum in the case of convex problems with linear constraints. One of these equivalent conditions involves the rank of the constraint matrix and is computationally efficient to verify. Computational results obtained by applying the HOC algorithm to quadratic programming problems of various sizes are included for illustration.

Journal ArticleDOI
TL;DR: An algorithm for determining a minimax location to service demand points that are equally weighted and distributed over a sphere that has polynomial time complexity is presented.
Abstract: This paper presents an algorithm for determining a minimax location to service demand points that are equally weighted and distributed over a sphere. The norm under consideration is geodesic. The algorithm presented here is based on enumeration and has a polynomial time complexity.

Journal ArticleDOI
TL;DR: An iterative algorithm for solving variational inequalities under the weakest monotonicity condition proposed so far is presented and relies on a new cutting plane and on analytic centers.
Abstract: We present an iterative algorithm for solving variational inequalities under the weakest monotonicity condition proposed so far. The method relies on a new cutting plane and on analytic centers.

Journal ArticleDOI
TL;DR: The algorithm produces a near-optimal primal integral solution and an optimum solution to the Lagrangian dual, which greatly reduces computation time and memory use for real-world instances derived from an air traffic control model.
Abstract: In a multiperiod dynamic network flow problem, we model uncertain arc capacities using scenario aggregation. This model is so large that it may be difficult to obtain optimal integer or even continuous solutions. We develop a Lagrangian decomposition method based on the structure recently introduced in G.D. Glockner and G.L. Nemhauser, Operations Research, vol. 48, pp. 233–242, 2000. Our algorithm produces a near-optimal primal integral solution and an optimum solution to the Lagrangian dual. The dual is initialized using marginal values from a primal heuristic. Then, primal and dual solutions are improved in alternation. The algorithm greatly reduces computation time and memory use for real-world instances derived from an air traffic control model.

Journal ArticleDOI
TL;DR: Two new diagonal global optimization algorithms are introduced unifying the power of the following three approaches: efficient univariate information global optimization methods, diagonal approach for generalizing univariate algorithms to the multidimensional case, and local tuning on the behaviour of the objective function during the global search.
Abstract: In this paper we face a classical global optimization problem—minimization of a multiextremal multidimensional Lipschitz function over a hyperinterval. We introduce two new diagonal global optimization algorithms unifying the power of the following three approaches: efficient univariate information global optimization methods, diagonal approach for generalizing univariate algorithms to the multidimensional case, and local tuning on the behaviour of the objective function (estimates of the local Lipschitz constants over different subregions) during the global search. Global convergence conditions of a new type are established for the diagonal information methods. The new algorithms demonstrate quite satisfactory performance in comparison with the diagonal methods using only global information about the Lipschitz constant.
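The univariate backbone of such information methods is the Piyavskii–Shubert saw-tooth bound: on an interval [lo, hi], a function with Lipschitz constant L satisfies f(x) ≥ max(f(lo) − L(x − lo), f(hi) − L(hi − x)), giving the interval lower bound (f(lo) + f(hi))/2 − L(hi − lo)/2. A midpoint-splitting sketch (the paper's diagonal methods operate on hyperintervals and tune L locally; this 1-D version with a global L is only for illustration):

```python
def lipschitz_minimize(f, a, b, L, iters=60):
    # Repeatedly split the interval whose saw-tooth lower bound is smallest.
    # L must be an upper bound on the true Lipschitz constant of f.
    def lower_bound(iv):
        lo, hi = iv
        return (f(lo) + f(hi)) / 2.0 - L * (hi - lo) / 2.0
    intervals = [(a, b)]
    best_x, best_f = min(((a, f(a)), (b, f(b))), key=lambda t: t[1])
    for _ in range(iters):
        lo, hi = min(intervals, key=lower_bound)
        intervals.remove((lo, hi))
        mid = 0.5 * (lo + hi)
        if f(mid) < best_f:
            best_x, best_f = mid, f(mid)
        intervals += [(lo, mid), (mid, hi)]
    return best_x, best_f

x_star, f_star = lipschitz_minimize(lambda x: abs(x - 0.3) + 0.1, 0.0, 1.0, L=1.0)
```

Because intervals with small lower bounds are split first, evaluations concentrate near the global minimizer (here x = 0.3, value 0.1) while provably no better point is discarded.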

Journal ArticleDOI
TL;DR: A decomposition approach to solve three types of realistic problems: block-angular linear programs arising in energy planning, Markov decision problems arising in production planning and multicommodity network problems arising in capacity planning for survivable telecommunication networks.
Abstract: We use a decomposition approach to solve three types of realistic problems: block-angular linear programs arising in energy planning, Markov decision problems arising in production planning and multicommodity network problems arising in capacity planning for survivable telecommunication networks. Decomposition is an algorithmic device that breaks down computations into several independent subproblems. It is thus ideally suited to parallel implementation. To achieve robustness and greater reliability in the performance of the decomposition algorithm, we use the Analytic Center Cutting Plane Method (ACCPM) to handle the master program. We run the algorithm on two different parallel computing platforms: a network of PCs running under Linux and a genuine parallel machine, the IBM SP2. The approach is well adapted for this coarse grain parallelism and the results display good speed-ups for the classes of problems we have treated.

Journal ArticleDOI
TL;DR: This work presents a definition of a basis certificate and develops a strongly polynomial algorithm which, given a Farkas-type certificate of infeasibility, computes a basis certificate of infeasibility.
Abstract: In general, if a linear program has an optimal solution, then a pair of primal and dual optimal solutions is a certificate of the solvable status. Furthermore, it is well known that in the solvable case the linear program always has an optimal basic solution. Similarly, when a linear program is primal or dual infeasible, then by Farkas's Lemma a certificate of the infeasible status exists. However, in the primal or dual infeasible case there is no uniform definition of what a suitable basis certificate of the infeasible status is. In this work we present a definition of a basis certificate and develop a strongly polynomial algorithm which, given a Farkas-type certificate of infeasibility, computes a basis certificate of infeasibility. This result is relevant for the recently developed interior-point methods because they do not compute a basis certificate of infeasibility in general. However, our result demonstrates that a basis certificate can be obtained at a moderate computational cost.
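Farkas-type certificates are cheap to verify: for a system Ax ≤ b, any vector y ≥ 0 with yᵀA = 0 and yᵀb < 0 proves infeasibility, since a feasible x would give 0 = yᵀAx ≤ yᵀb < 0. A sketch of the verification step (the paper's contribution, converting such a certificate into a basis certificate, is not shown):

```python
def is_farkas_certificate(A, b, y, tol=1e-9):
    # y >= 0, y^T A = 0 and y^T b < 0 together prove Ax <= b is infeasible
    m, n = len(A), len(A[0])
    if any(yi < -tol for yi in y):
        return False
    yA = [sum(y[i] * A[i][j] for i in range(m)) for j in range(n)]
    yb = sum(y[i] * b[i] for i in range(m))
    return all(abs(v) <= tol for v in yA) and yb < -tol

# x <= 1 together with -x <= -2 (i.e. x >= 2) is infeasible;
# y = (1, 1) certifies this: y^T A = 0 and y^T b = -1 < 0
A = [[1.0], [-1.0]]
b = [1.0, -2.0]
cert = is_farkas_certificate(A, b, [1.0, 1.0])
```

Verification is a single matrix-vector product, which is why such certificates are the natural output format for infeasibility detection.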

Journal ArticleDOI
TL;DR: An enhanced version of the primal-dual interior point algorithm is described, designed to improve convergence with minimal loss of efficiency, and designed to solve large sparse nonlinear problems which may not be convex.
Abstract: We describe an enhanced version of the primal-dual interior point algorithm in Lasdon, Plummer, and Yu (ORSA Journal on Computing, vol. 7, no. 3, pp. 321–332, 1995), designed to improve convergence with minimal loss of efficiency, and designed to solve large sparse nonlinear problems which may not be convex. New features include (a) a backtracking linesearch using an L1 exact penalty function, (b) ensuring that search directions are downhill for this function by increasing Lagrangian Hessian diagonal elements when necessary, (c) a quasi-Newton option, where the Lagrangian Hessian is replaced by a positive definite approximation, (d) inexact solution of each barrier subproblem, in order to approach the central trajectory as the barrier parameter approaches zero, and (e) solution of the symmetric indefinite linear Newton equations using a multifrontal sparse Gaussian elimination procedure, as implemented in the MA47 subroutine from the Harwell Library (Rutherford Appleton Laboratory Report RAL-95-001, Oxfordshire, UK, Jan. 1995). Second derivatives of all problem functions are required when the true Hessian option is used. A Fortran implementation is briefly described. Computational results are presented for 34 smaller models coded in Fortran, where first and second derivatives are approximated by differencing, and for 89 larger GAMS models, where analytic first derivatives are available and finite differencing is used for second partials. The GAMS results are, to our knowledge, the first to show the performance of this promising class of algorithms on large sparse NLPs. For both small and large problems, both true Hessian and quasi-Newton options are quite reliable and converge rapidly. Using the true Hessian, INTOPT is as reliable as MINOS on the GAMS models, although not as reliable as CONOPT. Computation times are considerably longer than for the other two solvers.
However, interior point methods should be considerably faster than they are here when analytic second derivatives are available, and algorithmic improvements and problem preprocessing should further narrow the gap.

Journal ArticleDOI
TL;DR: In this paper, a memoryless version of Shor and Zhurbenko's r-algorithm was proposed, motivated by the memoryless and limited memory updates for differentiable quasi-Newton methods.
Abstract: In this paper, we present variants of Shor and Zhurbenko's r-algorithm, motivated by the memoryless and limited memory updates for differentiable quasi-Newton methods. This well-known r-algorithm, which employs a space dilation strategy in the direction of the difference between two successive subgradients, is recognized as being one of the most effective procedures for solving nondifferentiable optimization problems. However, the method needs to store the space dilation matrix and update it at every iteration, resulting in a substantial computational burden for large-sized problems. To circumvent this difficulty, we first propose a memoryless update scheme, which under a suitable choice of parameters, yields a direction of motion that turns out to be a convex combination of two successive anti-subgradients. Moreover, in the space transformation sense, the new update scheme can be viewed as a combination of space dilation and reduction operations. We prove convergence of this new method, and demonstrate how it can be used in conjunction with a variable target value method that allows a practical, convergent implementation of the method. We also examine a memoryless variant that uses a fixed dilation parameter instead of varying degrees of dilation and/or reduction as in the former algorithm, as well as another variant that examines a two-step limited memory update. These variants are tested along with Shor's r-algorithm and also a modified version of a related algorithm due to Polyak that employs a projection onto a pair of Kelley's cutting planes. We use a set of standard test problems from the literature as well as randomly generated dual transportation and assignment problems in our computational experiments.
The results exhibit that the proposed space dilation and reduction method and the modification of Polyak's method are competitive, and offer a substantial advantage over the r-algorithm and over the other tested limited memory variants with respect to accuracy as well as effort.