
Showing papers in "Computational Optimization and Applications in 2005"


Journal ArticleDOI
TL;DR: It is shown that the new discretization concept for optimal control problems with control constraints is numerically implementable with only a slight increase in program management, and an optimal error estimate is proved.
Abstract: A new discretization concept for optimal control problems with control constraints is introduced which utilizes, for the discretization of the control variable, the relation between the adjoint state and the control. Its key feature is not to discretize the space of admissible controls but to implicitly utilize the first-order optimality conditions and the discretization of the state and adjoint equations for the discretization of the control. For discrete controls obtained in this way an optimal error estimate is proved. The application to control of elliptic equations is discussed. Finally, it is shown that the new concept is numerically implementable with only a slight increase in program management. A numerical test confirms the theoretical investigations.
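
For orientation, the kind of first-order relation such a concept exploits can be written as a pointwise projection of the discrete adjoint state. The following is a hedged sketch assuming a standard linear-quadratic tracking objective for distributed control, with Tikhonov parameter α and box constraints a ≤ u ≤ b; the notation is illustrative and not taken from the paper.

```latex
% Sketch under the stated assumptions (quadratic tracking objective, box constraints);
% the discrete control is recovered from the discrete adjoint state p_h by projection.
\min_{a \le u \le b} \; \tfrac12 \,\| y(u) - y_d \|_{L^2}^2 \;+\; \tfrac{\alpha}{2}\,\| u \|_{L^2}^2,
\qquad
\bar u_h \;=\; P_{[a,b]}\!\left( -\tfrac{1}{\alpha}\, \bar p_h \right).
```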

529 citations


Journal ArticleDOI
TL;DR: This work considers the approximation of nonlinear bilevel mathematical programs by solvable programs of the same type, i.e., bilevel programs involving linear approximations of the upper-level objective and all constraint-defining functions.
Abstract: We consider the approximation of nonlinear bilevel mathematical programs by solvable programs of the same type, i.e., bilevel programs involving linear approximations of the upper-level objective and all constraint-defining functions, as well as a quadratic approximation of the lower-level objective. We describe the main features of the algorithm and the resulting software. Numerical experiments tend to confirm the promising behavior of the method.

148 citations


Journal ArticleDOI
TL;DR: A new version of the tabu search algorithm is proposed for the well-known quadratic assignment problem (QAP), featuring an efficient use of mutations applied to the best solutions found so far.
Abstract: Tabu search based algorithms are among the most widely applied to various combinatorial optimization problems. In this paper, we propose a new version of the tabu search algorithm for the well-known quadratic assignment problem (QAP). One of the most important features of our tabu search implementation is an efficient use of mutations applied to the best solutions found so far. We tested this approach on a number of instances from the QAP instance library, QAPLIB. The results obtained from the experiments show that the proposed algorithm is among the most efficient heuristics for the QAP. Its high efficiency is also demonstrated by the fact that new best known solutions were found for several QAP instances.
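
As a rough illustration of the kind of scheme described above, here is a hedged Python sketch of a tabu search for the QAP with occasional mutations of the best solution found so far; the tenure, stall limit, mutation rule, and full cost re-evaluation (no delta evaluation) are illustrative simplifications, not the exact scheme of the paper.

```python
# Hedged sketch: tabu search for the QAP with mutation-based restarts.
# flow and dist are n x n matrices; perm[i] is the location of facility i.
import random

def qap_cost(perm, flow, dist):
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]] for i in range(n) for j in range(n))

def tabu_search_qap(flow, dist, iters=2000, tenure=8, stall_limit=200, seed=0):
    rng = random.Random(seed)
    n = len(flow)
    perm = list(range(n))
    rng.shuffle(perm)
    best, best_cost = perm[:], qap_cost(perm, flow, dist)
    tabu = {}          # (facility, location) -> iteration until which the move stays tabu
    stall = 0
    for it in range(iters):
        # Evaluate all pairwise swaps; keep the best non-tabu (or aspirating) one.
        # Full cost recomputation is used here for clarity; delta evaluation is omitted.
        move, move_cost = None, float("inf")
        for i in range(n - 1):
            for j in range(i + 1, n):
                cand = perm[:]
                cand[i], cand[j] = cand[j], cand[i]
                c = qap_cost(cand, flow, dist)
                is_tabu = tabu.get((i, perm[j]), -1) > it or tabu.get((j, perm[i]), -1) > it
                if (not is_tabu or c < best_cost) and c < move_cost:
                    move, move_cost = (i, j), c
        if move is None:
            break
        i, j = move
        tabu[(i, perm[i])] = it + tenure     # forbid moving facilities back for a while
        tabu[(j, perm[j])] = it + tenure
        perm[i], perm[j] = perm[j], perm[i]
        if move_cost < best_cost:
            best, best_cost, stall = perm[:], move_cost, 0
        else:
            stall += 1
        # "Mutation": restart from a randomly perturbed copy of the best solution
        # found so far when no improvement has been seen for a while.
        if stall >= stall_limit:
            perm = best[:]
            for _ in range(max(2, n // 10)):
                a, b = rng.sample(range(n), 2)
                perm[a], perm[b] = perm[b], perm[a]
            stall = 0
    return best, best_cost
```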

127 citations


Journal ArticleDOI
TL;DR: The uniform convergence of discretized controls to optimal controls is proven under natural assumptions by taking piecewise constant controls.
Abstract: We study the numerical approximation of boundary optimal control problems governed by semilinear elliptic partial differential equations with pointwise constraints on the control. The analysis of the approximate control problems is carried out. The uniform convergence of discretized controls to optimal controls is proven under natural assumptions by taking piecewise constant controls. Finally, error estimates are established and some numerical experiments, which confirm the theoretical results, are performed.

125 citations


Journal ArticleDOI
TL;DR: The modifications were exploited to find the rate of convergence in terms of the spectral condition number of the Hessian matrix, to prove its finite termination property even for problems whose solution does not satisfy the strict complementarity condition, and to avoid any backtracking at the cost of evaluating an upper bound for the spectral radius of the Hessian matrix.
Abstract: A new active set based algorithm is proposed that uses the conjugate gradient method to explore the face of the feasible region defined by the current iterate, and the reduced gradient projection with a fixed steplength to expand the active set. The precision of approximate solutions of the auxiliary unconstrained problems is controlled by the norm of violation of the Karush-Kuhn-Tucker conditions at active constraints and the scalar product of the reduced gradient with the reduced gradient projection. The modifications were exploited to find the rate of convergence in terms of the spectral condition number of the Hessian matrix, to prove its finite termination property even for problems whose solution does not satisfy the strict complementarity condition, and to avoid any backtracking at the cost of evaluating an upper bound for the spectral radius of the Hessian matrix. The performance of the algorithm is illustrated on the solution of inner obstacle problems. The result is an important ingredient in the development of scalable algorithms for the numerical solution of elliptic variational inequalities.

125 citations


Journal ArticleDOI
TL;DR: A numerical comparison between many of the Augmented Lagrangian methods for minimization with inequality constraints is performed using all the suitable problems of the CUTE collection.
Abstract: Augmented Lagrangian algorithms are very popular tools for solving nonlinear programming problems. At each outer iteration of these methods a simpler optimization problem is solved, for which efficient algorithms can be used, especially when the problems are large. The most famous Augmented Lagrangian algorithm for minimization with inequality constraints is known as the Powell-Hestenes-Rockafellar (PHR) method. The main drawback of PHR is that the objective function of the subproblems is not twice continuously differentiable. This is the main motivation for the introduction of many alternative Augmented Lagrangian methods. Most of them have interesting interpretations as proximal point methods for solving the dual problem when the original nonlinear programming problem is convex. In this paper a numerical comparison between many of these methods is performed using all the suitable problems of the CUTE collection.
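
For reference, the outer loop of the PHR method the abstract refers to can be sketched as follows, for min f(x) subject to g_i(x) ≤ 0; the inner solver (BFGS), the penalty-update rule, and the stopping test below are illustrative choices rather than those used in the paper's comparison.

```python
# Hedged sketch of the PHR augmented Lagrangian loop for
#     min f(x)  subject to  g_i(x) <= 0.
import numpy as np
from scipy.optimize import minimize

def phr_augmented_lagrangian(f, g, x0, rho=10.0, outer_iters=20, tol=1e-6):
    x = np.asarray(x0, dtype=float)
    lam = np.zeros(len(np.atleast_1d(g(x))))
    for _ in range(outer_iters):
        def L(z, lam=lam, rho=rho):
            gv = np.atleast_1d(g(z))
            # PHR term: (1/(2*rho)) * sum( max(0, lam + rho*g)^2 - lam^2 )
            return f(z) + (np.maximum(0.0, lam + rho * gv) ** 2 - lam ** 2).sum() / (2.0 * rho)
        x = minimize(L, x, method="BFGS").x        # inner, unconstrained subproblem
        gv = np.atleast_1d(g(x))
        lam = np.maximum(0.0, lam + rho * gv)      # first-order multiplier update
        if np.all(gv <= tol):                      # rough feasibility test
            break
        rho *= 2.0                                 # simple penalty increase
    return x, lam

# Toy usage: minimize (x1 - 2)^2 + (x2 - 1)^2 subject to x1 + x2 <= 2.
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
g = lambda x: np.array([x[0] + x[1] - 2.0])
x_star, lam_star = phr_augmented_lagrangian(f, g, x0=[0.0, 0.0])
```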

122 citations


Journal ArticleDOI
TL;DR: A quasi-Monte Carlo (QMC) variant of a multilevel single linkage (MLSL) algorithm for global optimization is compared with the original stochastic MLSL algorithm on a number of test problems of various complexities.
Abstract: It has been recognized through theory and practice that uniformly distributed deterministic sequences provide more accurate results than purely random sequences. A quasi-Monte Carlo (QMC) variant of a multilevel single linkage (MLSL) algorithm for global optimization is compared with the original stochastic MLSL algorithm on a number of test problems of various complexities. Emphasis is placed on high-dimensional problems. Two different low-discrepancy sequences (LDS) are used and their efficiency is analysed. It is shown that the application of LDS can significantly increase the efficiency of MLSL. The dependence of the sample size required for locating global minima on the number of variables is examined. It is found that higher confidence in the obtained solution, and possibly a reduction in computational time, can be achieved by increasing the total sample size N, which should also be increased as the dimensionality of the problem grows. For high-dimensional problems clustering methods become inefficient; for such problems a multistart method can be more computationally expedient.
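
A hedged sketch of the general idea, combining a scrambled Sobol sample (via SciPy) with a simple single-linkage rule for deciding where to start local searches, is given below; the critical-radius formula, the reduced-sample fraction, and all other parameters are illustrative and not the exact MLSL rules used in the paper.

```python
# Hedged sketch of a quasi-Monte Carlo multistart in the spirit of MLSL.
import numpy as np
from scipy.stats import qmc
from scipy.optimize import minimize

def qmc_multistart(f, bounds, n_samples=256, gamma=0.2, seed=0):
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    d = len(bounds)
    sampler = qmc.Sobol(d=d, scramble=True, seed=seed)
    pts = qmc.scale(sampler.random(n_samples), lo, hi)        # low-discrepancy sample
    vals = np.array([f(p) for p in pts])
    # Reduced sample: keep only the gamma-fraction of points with lowest f.
    keep = np.argsort(vals)[: max(1, int(gamma * n_samples))]
    vol = np.prod(hi - lo)
    r_crit = (vol * np.log(n_samples) / n_samples) ** (1.0 / d)   # heuristic critical radius
    best_x, best_f = None, np.inf
    for i in keep:
        better = keep[vals[keep] < vals[i]]
        dists = np.linalg.norm(pts[better] - pts[i], axis=1) if len(better) else np.array([np.inf])
        if dists.min() > r_crit:                                  # single-linkage start rule
            res = minimize(f, pts[i], bounds=bounds)
            if res.fun < best_f:
                best_x, best_f = res.x, res.fun
    return best_x, best_f

# Toy usage on a 2-dimensional Rastrigin function.
def rastrigin(x):
    x = np.asarray(x)
    return 10.0 * x.size + np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x))

x_star, f_star = qmc_multistart(rastrigin, bounds=[(-5.12, 5.12)] * 2)
```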

98 citations


Journal ArticleDOI
TL;DR: This paper gives an overview of methodologies that can be used to model the evolution of risk factors over a one-year horizon and performs backtesting of their expected shortfall predictions.
Abstract: The question of the measurement of strategic long-term financial risks is of considerable importance. Existing modelling instruments allow for a good measurement of the market risks of trading books over relatively short time intervals. However, these approaches may have severe deficiencies if they are routinely applied to longer time periods. In this paper we give an overview of methodologies that can be used to model the evolution of risk factors over a one-year horizon. Different models are tested on financial time series data by performing backtesting on their expected shortfall predictions.

94 citations


Journal ArticleDOI
TL;DR: The proposed GRASP algorithm has two phases: in the first phase the algorithm finds an initial solution of the problem and in the second phase a local search procedure is utilized for the improvement of the initial solution.
Abstract: In this paper, we present the application of a modified version of the well-known Greedy Randomized Adaptive Search Procedure (GRASP) to the traveling salesman problem (TSP). The proposed GRASP algorithm has two phases: in the first phase the algorithm constructs an initial solution of the problem, and in the second phase a local search procedure is used to improve the initial solution. The local search procedure employs two different local search strategies based on the 2-opt and 3-opt methods. The algorithm was tested on numerous benchmark problems from TSPLIB. The results were very satisfactory, and for the majority of the instances they were equal to the best known solution. The algorithm is also compared to the algorithms presented and tested in the DIMACS Implementation Challenge organized by David Johnson.
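
The two-phase structure can be sketched as follows in Python, assuming a symmetric distance matrix indexable as dist[i][j]; the restricted-candidate-list rule, the parameter alpha, and the use of 2-opt only (no 3-opt) are illustrative simplifications of the procedure described above.

```python
# Hedged sketch of a GRASP for the TSP: greedy randomized construction + 2-opt.
import random

def tour_length(tour, dist):
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def greedy_randomized_tour(dist, alpha, rng):
    n = len(dist)
    start = rng.randrange(n)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        last = tour[-1]
        cand = sorted(unvisited, key=lambda c: dist[last][c])
        rcl = cand[: max(1, int(alpha * len(cand)))]      # restricted candidate list
        nxt = rng.choice(rcl)
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def two_opt(tour, dist):
    improved = True
    while improved:
        improved = False
        n = len(tour)
        for i in range(n - 1):
            for j in range(i + 2, n if i > 0 else n - 1):
                a, b, c, d = tour[i], tour[i + 1], tour[j], tour[(j + 1) % n]
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    tour[i + 1 : j + 1] = reversed(tour[i + 1 : j + 1])
                    improved = True
    return tour

def grasp_tsp(dist, iters=50, alpha=0.2, seed=0):
    rng = random.Random(seed)
    best, best_len = None, float("inf")
    for _ in range(iters):
        tour = two_opt(greedy_randomized_tour(dist, alpha, rng), dist)  # phase 1 + phase 2
        length = tour_length(tour, dist)
        if length < best_len:
            best, best_len = tour, length
    return best, best_len
```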

79 citations


Journal ArticleDOI
TL;DR: A successive linearization method with a trust region-type globalization for the solution of nonlinear semidefinite programs is presented and is shown to be globally convergent under certain assumptions.
Abstract: We present a successive linearization method with a trust region-type globalization for the solution of nonlinear semidefinite programs. At each iteration, the method solves a quadratic semidefinite program, which can be converted to a linear semidefinite program with a second order cone constraint. A subproblem of this kind can be solved quite efficiently by using some recent software for semidefinite and second-order cone programs. The method is shown to be globally convergent under certain assumptions. Numerical results on some nonlinear semidefinite programs including optimization problems with bilinear matrix inequalities are reported to illustrate the behaviour of the proposed method.

75 citations


Journal ArticleDOI
TL;DR: The multilevel structure of global optimization problems is discussed; the number of levels that can be recognized for a given problem represents a more complete measure of its difficulty than the standard measure given by the total number of local minima.
Abstract: In this paper we discuss the multilevel structure of global optimization problems. Such problems can often be seen at different levels, the number of which varies from problem to problem. At each level different objects are observed, but all levels display a similar structure. The number of levels which can be recognized for a given optimization problem represents a more complete measure of its difficulty than the standard measure given by the total number of local minima. Moreover, the subdivision into levels suggests the introduction of appropriate tools, which are different for each level but, in accordance with the fact that all levels display a similar structure, are all based on a common concept, namely that of a local move. Computational experiments reveal the effectiveness of such tools.

Journal ArticleDOI
TL;DR: This paper presents an integrated production, marketing and inventory model which determines the production lot size, marketing expenditure and product’s selling price and uses Geometric Programming (GP) to locate the optimal solution.
Abstract: This paper presents an integrated production, marketing and inventory model which determines the production lot size, marketing expenditure and product's selling price. Our model is highly nonlinear and non-convex and cannot be solved directly. Therefore, Geometric Programming (GP) is used to locate the optimal solution of the proposed model. In our GP implementation, we use a transformed dual problem in order to reduce the model to an optimization of an unconstrained problem in a single variable and the resulting problem is solved using a simple line search. We analyze the solution in different cases in order to study the behaviour of the model and for each case, a numerical example is used to demonstrate the implementation of our analysis.

Journal ArticleDOI
TL;DR: The method is shown to converge for SIPs which do not satisfy regularity assumptions required by reduction-based methods, and for which certain points in the feasible set are subject to an infinite number of active constraints.
Abstract: A new approach for the numerical solution of smooth, nonlinear semi-infinite programs whose feasible set contains a nonempty interior is presented. Interval analysis methods are used to construct finite nonlinear, or mixed-integer nonlinear, reformulations of the original semi-infinite program under relatively mild assumptions on the problem structure. In certain cases the finite reformulation is exact and can be solved directly for the global minimum of the semi-infinite program (SIP). In the general case, this reformulation is over-constrained relative to the SIP, such that solving it yields a guaranteed feasible upper bound to the SIP solution. This upper bound can then be refined using a subdivision procedure which is shown to converge to the true SIP solution with finite ε-optimality. In particular, the method is shown to converge for SIPs which do not satisfy regularity assumptions required by reduction-based methods, and for which certain points in the feasible set are subject to an infinite number of active constraints. Numerical results are presented for a number of problems in the SIP literature. The solutions obtained are compared to those identified by reduction-based methods, the relative performances of the nonlinear and mixed-integer nonlinear formulations are studied, and the use of different inclusion functions in the finite reformulation is investigated.

Journal ArticleDOI
TL;DR: Numerical experiments reported in this paper on several test problems with up to 200 variables have demonstrated the applicability and efficiency of the proposed discrete filled function method.
Abstract: A discrete filled function method is developed in this paper to solve discrete global optimization problems over "strictly pathwise connected domains." Theoretical properties of the proposed discrete filled function are investigated and a solution algorithm is proposed. Numerical experiments reported in this paper on several test problems with up to 200 variables have demonstrated the applicability and efficiency of the proposed method.

Journal ArticleDOI
TL;DR: The efficient and robust computational performance of the present multigrid scheme makes it possible to investigate bang-bang control problems.
Abstract: A multigrid scheme for the solution of constrained optimal control problems discretized by finite differences is presented. This scheme is based on a new relaxation procedure that satisfies the given constraints pointwise on the computational grid. In applications, the cases of distributed and boundary control problems with box constraints are considered. The efficient and robust computational performance of the present multigrid scheme makes it possible to investigate bang-bang control problems.

Journal ArticleDOI
TL;DR: Two algorithms are developed that are particularly suitable for problems where n is large; the first is based on log-exponential aggregation of the maximum function and reduces the problem to an unconstrained convex program.
Abstract: Consider the problem of computing the smallest enclosing ball of a set of m balls in R^n. Existing algorithms are known to be inefficient when n > 30. In this paper we develop two algorithms that are particularly suitable for problems where n is large. The first algorithm is based on log-exponential aggregation of the maximum function and reduces the problem to an unconstrained convex program. The second algorithm is based on a second-order cone programming formulation, with special structures taken into consideration. Our computational experiments show that both methods are efficient for large problems, with the product mn on the order of 10^7. Using the first algorithm, we are able to solve problems with n = 100 and m = 512,000 in about one hour.
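
A hedged sketch of the log-exponential aggregation idea is shown below: the max in min_x max_i (||x - c_i|| + r_i) is replaced by a smooth soft-max surrogate that is minimized for a decreasing sequence of smoothing parameters mu. The continuation schedule and the use of BFGS are illustrative choices, not the paper's implementation.

```python
# Hedged sketch: smallest enclosing ball of balls via log-exponential smoothing.
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

def smallest_enclosing_ball(centers, radii, mus=(1.0, 0.1, 0.01, 0.001)):
    centers = np.asarray(centers, dtype=float)
    radii = np.asarray(radii, dtype=float)
    x = centers.mean(axis=0)                      # start from the centroid
    for mu in mus:                                # smoothing continuation
        def f_mu(y, mu=mu):
            vals = np.linalg.norm(centers - y, axis=1) + radii
            return mu * logsumexp(vals / mu)      # smooth upper bound on max_i vals_i
        x = minimize(f_mu, x, method="BFGS").x
    radius = (np.linalg.norm(centers - x, axis=1) + radii).max()
    return x, radius

# Toy usage: 1,000 random balls in R^50.
rng = np.random.default_rng(0)
c = rng.normal(size=(1000, 50))
r = rng.uniform(0.1, 1.0, size=1000)
center, radius = smallest_enclosing_ball(c, r)
```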

Journal ArticleDOI
TL;DR: It is shown that this algorithm can solve a problem of practical size and that the long-short strategy leads to a portfolio with a significantly better risk-return structure than a standard purchase-only portfolio, in terms of both ex-ante and ex-post performance.
Abstract: The purpose of this paper is to propose a practical branch and bound algorithm for solving a class of long-short portfolio optimization problems with concave and d.c. transaction costs and complementarity conditions on the variables. We will show that this algorithm can solve a problem of practical size and that the long-short strategy leads to a portfolio with a significantly better risk-return structure than a standard purchase-only portfolio, in terms of both ex-ante and ex-post performance.

Journal ArticleDOI
TL;DR: For this problem set the implementation of the revised simplex method which exploits hyper-sparsity is shown to be competitive with the leading commercial solver and significantly faster than the leading public-domain solver.
Abstract: The revised simplex method is often the method of choice when solving large scale sparse linear programming problems, particularly when a family of closely-related problems is to be solved. Each iteration of the revised simplex method requires the solution of two linear systems and a matrix vector product. For a significant number of practical problems the result of one or more of these operations is usually sparse, a property we call hyper-sparsity. Analysis of the commonly-used techniques for implementing each step of the revised simplex method shows them to be inefficient when hyper-sparsity is present. Techniques to exploit hyper-sparsity are developed and their performance is compared with the standard techniques. For the subset of our test problems that exhibits hyper-sparsity, the average speedup in solution time is 5.2 when these techniques are used. For this problem set our implementation of the revised simplex method which exploits hyper-sparsity is shown to be competitive with the leading commercial solver and significantly faster than the leading public-domain solver.

Journal ArticleDOI
TL;DR: A measure of risk is introduced for a sequence of random incomes adapted to some filtration, formulated as the optimal net present value of a stream of adaptively planned commitments for consumption; its properties are analyzed by exploiting the convexity and duality structure of the underlying stochastic dynamic linear problem.
Abstract: A measure of risk is introduced for a sequence of random incomes adapted to some filtration. This measure is formulated as the optimal net present value of a stream of adaptively planned commitments for consumption. The new measure is calculated by solving a stochastic dynamic linear optimization problem which, for finite filtrations, reduces to a deterministic linear programming problem. We analyze properties of the new measure by exploiting the convexity and duality structure of the stochastic dynamic linear problem. The measure depends on the full distribution of the income process (not only on its marginal distributions) as well as on the filtration, which is interpreted as the available information about the future. The features of the new approach are illustrated by a numerical example.

Journal ArticleDOI
TL;DR: The new algorithm is based on the differential evolution algorithm of Storn and Price and is tested on two different potential energy functions considered for silicon-silicon atomic interactions.
Abstract: In this paper we propose an algorithm for the minimization of potential energy functions. The new algorithm is based on the differential evolution algorithm of Storn and Price (Journal of Global Optimization, vol. 11, pp. 341--359, 1997). The algorithm is tested on two different potential energy functions. The first is the Lennard-Jones energy function and the second is the many-body potential energy function of Tersoff (Physical Review B, vol. 37, pp. 6991--7000, 1988; vol. 38, pp. 9902--9905, 1988). The first problem is a pair potential and the second is a semi-empirical many-body potential energy function considered for silicon-silicon atomic interactions. The minimum binding energies of clusters of up to 30 atoms are reported.
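
The pair-potential case is easy to reproduce in miniature. The following hedged sketch minimizes the Lennard-Jones energy of a small cluster with SciPy's generic differential evolution solver (not the authors' implementation); the bounds and solver settings are illustrative.

```python
# Hedged sketch: differential evolution on a small Lennard-Jones cluster (reduced units).
import numpy as np
from scipy.optimize import differential_evolution
from scipy.spatial.distance import pdist

def lj_energy(flat_coords):
    # 4 * sum_{i<j} (r_ij^-12 - r_ij^-6), with epsilon = sigma = 1.
    r = np.maximum(pdist(flat_coords.reshape(-1, 3)), 1e-12)  # guard against r = 0
    return 4.0 * np.sum(r ** -12 - r ** -6)

n_atoms = 5                                   # LJ_5: known global minimum is about -9.104
bounds = [(-2.0, 2.0)] * (3 * n_atoms)
result = differential_evolution(lj_energy, bounds, maxiter=2000, popsize=30,
                                tol=1e-8, polish=True, seed=1)
print(result.fun, result.x.reshape(-1, 3))
```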

Journal ArticleDOI
TL;DR: The relations of below-mean downside stochastic dominance are formally introduced and the corresponding techniques to enhance risk measures are derived; the resulting mean-risk models generate solutions that are efficient with respect to second-degree stochastic dominance, while preserving the simplicity and LP computability of the original models.
Abstract: A mathematical model of portfolio optimization is usually quantified with mean-risk models offering a lucid form of two criteria with possible trade-off analysis. In the classical Markowitz model the risk is measured by the variance, resulting in a quadratic programming model. Following Sharpe's work on linear approximation to the mean-variance model, many attempts have been made to linearize the portfolio optimization problem. Several alternative risk measures have been introduced which are computationally attractive as (for discrete random variables) they result in solving linear programming (LP) problems. Typical LP computable risk measures, like the mean absolute deviation (MAD) or Gini's mean absolute difference (GMD), are symmetric with respect to the below-mean and over-mean performances. The paper shows how these measures can be further combined to extend their modeling capabilities with respect to enhancement of the below-mean downside risk aversion. The relations of below-mean downside stochastic dominance are formally introduced and the corresponding techniques to enhance risk measures are derived. The resulting mean-risk models generate solutions that are efficient with respect to second-degree stochastic dominance, while preserving the simplicity and LP computability of the original models. The models are tested on real-life historical data.
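
To make the "LP computability" concrete, here is a hedged sketch of the basic scenario-based MAD model (minimize mean absolute deviation subject to a required mean return) using SciPy's linprog; the downside-enhanced variants discussed in the paper would modify the deviation constraints, and the data and target return below are illustrative.

```python
# Hedged sketch of the LP-computable MAD portfolio model over equally likely scenarios.
import numpy as np
from scipy.optimize import linprog

def mad_portfolio(returns, target_mean):
    T, n = returns.shape
    mu = returns.mean(axis=0)
    # Variables: [x_1..x_n, d_1..d_T]; minimize (1/T) * sum_t d_t.
    c = np.concatenate([np.zeros(n), np.ones(T) / T])
    dev = returns - mu
    # d_t >= (r_t - mu) @ x  and  d_t >= -(r_t - mu) @ x, written as A_ub z <= b_ub,
    # plus the mean-return requirement mu @ x >= target_mean.
    A_ub = np.vstack([np.hstack([ dev, -np.eye(T)]),
                      np.hstack([-dev, -np.eye(T)]),
                      np.hstack([-mu,  np.zeros(T)]).reshape(1, -1)])
    b_ub = np.concatenate([np.zeros(2 * T), [-target_mean]])
    A_eq = np.hstack([np.ones(n), np.zeros(T)]).reshape(1, -1)   # fully invested: sum x = 1
    b_eq = [1.0]
    bounds = [(0, None)] * (n + T)                               # long-only, deviations >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n], res.fun

# Toy usage with simulated return scenarios for 8 assets.
rng = np.random.default_rng(0)
scenarios = rng.normal(0.001, 0.02, size=(200, 8))     # T=200 scenarios
target = scenarios.mean(axis=0).mean()                  # attainable, e.g. by equal weights
weights, mad = mad_portfolio(scenarios, target_mean=target)
```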

Journal ArticleDOI
TL;DR: This work uses augmented Lagrangians to transform generalized semi-infinite min-max problems into ordinary semi-infinite min-max problems, with the same set of local and global solutions as well as the same stationary points.
Abstract: We present an approach for the solution of a class of generalized semi-infinite optimization problems. Our approach uses augmented Lagrangians to transform generalized semi-infinite min-max problems into ordinary semi-infinite min-max problems, with the same set of local and global solutions as well as the same stationary points. Once the transformation is effected, the generalized semi-infinite min-max problems can be solved using any available semi-infinite optimization algorithm. We illustrate our approach with two numerical examples, one of which deals with structural design subject to reliability constraints.

Journal ArticleDOI
TL;DR: A model for foreign exchange exposure management and (international) cash management taking into consideration random fluctuations of exchange rates is formulated, showing a considerable improvement over a "spot only" strategy.
Abstract: In this paper we formulate a model for foreign exchange exposure management and (international) cash management taking into consideration random fluctuations of exchange rates. A vector error correction model (VECM) is used to predict the random behaviour of the forward as well as spot rates connecting the dollar and sterling. A two-stage stochastic programming (TWOSP) decision model is formulated using these random parameter values. This model computes currency hedging strategies, which provide rolling decisions of how many forward contracts should be bought and how many should be liquidated. The model decisions are investigated through ex post simulation and backtesting in which the value at risk (VaR) for alternative decisions is computed. The investigation (a) shows that there is a considerable improvement over a "spot only" strategy, (b) provides insight into how these decisions are made, and (c) validates the performance of this model.

Journal ArticleDOI
TL;DR: A novel staged continuous Tabu search (SCTS) algorithm is proposed for solving global optimization problems involving multimodal functions of several variables; numerical results indicate that the proposed method is more efficient than a previously published improved genetic algorithm.
Abstract: A novel staged continuous Tabu search (SCTS) algorithm is proposed for solving global optimization problems involving multimodal functions of several variables. The proposed method comprises three stages that are based on the continuous Tabu search (CTS) algorithm with different neighbor-search strategies, each devoted to one task. Compared with a single CTS process, the method searches the solution space more thoroughly and efficiently for the global optimum. The effectiveness of the proposed SCTS algorithm is evaluated using a set of benchmark multimodal functions whose global and local minima are known. The numerical results obtained indicate that the proposed method is more efficient than a previously published improved genetic algorithm. The method is also applied to the optimization of fiber grating design for optical communication systems. Compared with two other well-known algorithms, namely the genetic algorithm (GA) and simulated annealing (SA), the proposed method performs better in the optimization of the fiber grating design.

Journal ArticleDOI
TL;DR: A method of generating an axial MAP of controllable size with a known unique solution is presented and certain characteristics of the generated MAPs that determine realism and difficulty are investigated.
Abstract: The multidimensional assignment problem (MAP) is a higher dimensional version of the standard linear assignment problem. Test problems of known solution are useful in exercising solution methods. A method of generating an axial MAP of controllable size with a known unique solution is presented. Certain characteristics of the generated MAPs that determine realism and difficulty are investigated.

Journal ArticleDOI
TL;DR: A simple yet efficient randomized algorithm for finding the maximum distance from a point set to an arbitrary compact set in R^d is presented and can be used for accelerating the computation of the Hausdorff distance between complex polytopes.
Abstract: In this paper, a simple yet efficient randomized algorithm (Exterior Random Covering) for finding the maximum distance from a point set to an arbitrary compact set in R^d is presented. This algorithm can be used for accelerating the computation of the Hausdorff distance between complex polytopes.
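
The acceleration comes from the fact that a point already known to lie closer to the other set than the current maximum can be discarded early. The following hedged Python sketch illustrates that early-termination idea for the directed Hausdorff distance between two finite point sets; it is a simplified illustration, not the Exterior Random Covering algorithm itself.

```python
# Hedged sketch: randomized early-break computation of the directed Hausdorff
# distance max_{a in A} min_{b in B} ||a - b|| between two finite point sets.
import numpy as np

def directed_hausdorff_random(A, B, seed=0):
    rng = np.random.default_rng(seed)
    A = np.asarray(A, dtype=float)[rng.permutation(len(A))]   # random scan order over A
    B = np.asarray(B, dtype=float)[rng.permutation(len(B))]   # and over B
    cmax_sq = 0.0                                             # best squared max-min distance so far
    for a in A:
        cmin_sq = np.inf
        for b in B:
            d_sq = np.dot(a - b, a - b)
            if d_sq < cmax_sq:      # a is already closer to B than the current maximum,
                cmin_sq = -1.0      # so it cannot raise it: stop scanning B early
                break
            cmin_sq = min(cmin_sq, d_sq)
        if cmin_sq > cmax_sq:       # only reached when no early break occurred
            cmax_sq = cmin_sq
    return float(np.sqrt(cmax_sq))

# Toy usage: two random point clouds in R^3.
rng = np.random.default_rng(1)
print(directed_hausdorff_random(rng.normal(size=(5000, 3)), rng.normal(size=(5000, 3))))
```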

Journal ArticleDOI
Jinbao Jian
TL;DR: This paper reformulates the complementarity constraints as standard nonlinear equality and inequality constraints by making use of a class of generalized smoothing complementarity functions, and then presents a new SQP algorithm for the discussed problems.
Abstract: This paper discusses a special class of mathematical programs with nonlinear complementarity constraints; its goal is to present a globally and superlinearly convergent algorithm for the discussed problems. We first reformulate the complementarity constraints as standard nonlinear equality and inequality constraints by making use of a class of generalized smoothing complementarity functions, and then present a new SQP algorithm for the discussed problems. At each iteration, with the help of a pivoting operation, a master search direction is yielded by solving a quadratic program, and a correction search direction for avoiding the Maratos effect is generated by an explicit formula. Under suitable assumptions, without the strict complementarity condition on the upper-level inequality constraints, the proposed algorithm converges globally to a B-stationary point of the problem, and its convergence rate is superlinear.

Journal ArticleDOI
TL;DR: An algorithm for finding an approximate global minimum of a funnel-shaped function with many local minima is described; it is applied to compute the minimum-energy docking position of a ligand with respect to a protein molecule.
Abstract: An algorithm for finding an approximate global minimum of a funnel-shaped function with many local minima is described. It is applied to compute the minimum-energy docking position of a ligand with respect to a protein molecule. The method is based on the iterative use of a convex, general quadratic approximation that underestimates a set of local minima, where the error in the approximation is minimized in the L1 norm. The quadratic approximation is used to generate a reduced domain, which is assumed to contain the global minimum of the funnel-shaped function. Additional local minima are computed in this reduced domain, and an improved approximation is computed. This process is iterated until a convergence tolerance is satisfied. The algorithm has been applied to find the global minimum of the energy function generated by the Docking Mesh Evaluator program. Results for three different protein docking examples are presented. Each of these energy functions has thousands of local minima. Convergence of the algorithm to an approximate global minimum is shown for all three examples.

Journal ArticleDOI
TL;DR: This paper develops trading strategies for the liquidation of a financial security which maximize the expected return; the strategies are path-dependent, i.e., the fraction of the security sold depends upon the price sample-path of the security up to the current moment.
Abstract: This paper develops trading strategies for the liquidation of a financial security which maximize the expected return. The problem is formulated as a stochastic programming problem that utilizes a scenario representation of possible returns. Two cases are considered: a case with no constraint on risk, and a case where the risk of losses associated with the trading strategy is constrained by the Conditional Value-at-Risk (CVaR) measure. In the first case, two algorithms are proposed; one is based on linear programming techniques and the other uses dynamic programming to solve the formulated stochastic program. The third proposed algorithm is obtained by adding the risk constraints to the linear program. The algorithms provide path-dependent strategies, i.e., the fraction of the security sold depends upon the price sample-path of the security up to the current moment. The performance of the considered approaches is tested using a set of historical sample-paths of prices.
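
To illustrate how a CVaR constraint enters a scenario-based linear program, the following is a hedged sketch of the Rockafellar-Uryasev linearization in a simplified static allocation setting (maximize expected return subject to a CVaR limit on losses); it is not the path-dependent liquidation model of the paper, and all data and parameters are illustrative.

```python
# Hedged sketch: expected-return maximization with a CVaR constraint,
# linearized over S equally likely scenarios (Rockafellar-Uryasev).
import numpy as np
from scipy.optimize import linprog

def max_return_cvar_lp(returns, alpha=0.95, cvar_limit=0.05):
    S, n = returns.shape
    # Variables: [x_1..x_n, zeta, z_1..z_S]; maximize mean(returns) @ x.
    c = np.concatenate([-returns.mean(axis=0), [0.0], np.zeros(S)])
    # z_s >= loss_s - zeta with loss_s = -returns[s] @ x, i.e.
    #   -returns[s] @ x - zeta - z_s <= 0.
    A_ub = np.hstack([-returns, -np.ones((S, 1)), -np.eye(S)])
    b_ub = np.zeros(S)
    # CVaR constraint: zeta + (1/((1-alpha)*S)) * sum_s z_s <= cvar_limit.
    cvar_row = np.concatenate([np.zeros(n), [1.0], np.full(S, 1.0 / ((1 - alpha) * S))])
    A_ub = np.vstack([A_ub, cvar_row])
    b_ub = np.concatenate([b_ub, [cvar_limit]])
    A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(S)]).reshape(1, -1)  # fully invested
    b_eq = [1.0]
    bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * S
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n], -res.fun

# Toy usage with simulated return scenarios for 6 assets.
rng = np.random.default_rng(0)
weights, exp_ret = max_return_cvar_lp(rng.normal(0.001, 0.02, size=(500, 6)))
```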

Journal ArticleDOI
TL;DR: The rail car management at industrial in-plant railroads is considered, and mixed integer programming formulations for the two problem levels are presented, including the NP-hard shunting minimal allocation of cars per region.
Abstract: We consider the rail car management at industrial in-plant railroads. Demands for loaded or empty cars are characterized by a track, a car type, and the desired quantity. If available, we assign cars from the stock, possibly substituting types, otherwise we rent additional cars. Transportation requests are fulfilled as a short sequence of pieces of work, the so-called blocks. Their design at a minimal total transportation cost is the planning task considered in this paper. It decomposes into the rough distribution of cars among regions, and the NP-hard shunting minimal allocation of cars per region. We present mixed integer programming formulations for the two problem levels. Our computational experience from practical data encourages an installation in practice.