
Showing papers in "Optimization Methods & Software in 1998"


Journal ArticleDOI
TL;DR: It is proved that in some cases this semidefinite relaxation of global quadratic optimization problems provides a constant relative-accuracy estimate for the exact solution.
Abstract: In this paper we study the quality of the semidefinite relaxation for a global quadratic optimization problem with diagonal quadratic constraints. We prove that such relaxation approximates the exact solution of the problem with relative accuracy μ = π/2 − 1. We consider some applications of this result.

443 citations


Journal ArticleDOI
Yin Zhang1
TL;DR: This paper describes the implementation of a primal-dual infeasible-interior-point algorithm for large-scale linear programming under the MATLAB environment, and discusses in detail a technique for overcoming numerical instability in Cholesky factorization at the end-stage of iterations in interior-point algorithms.
Abstract: In this paper, we describe our implementation of a primal-dual infeasible-interior-point algorithm for large-scale linear programming under the MATLAB environment. The resulting software is called LIPSOL — Linear-programming Interior-Point SOLvers. LIPSOL is designed to take advantage of MATLAB's sparse-matrix functions and external interface facilities, and of existing Fortran sparse Cholesky codes. Under the MATLAB environment, LIPSOL inherits a high degree of simplicity and versatility in comparison to its counterparts in Fortran or C. More importantly, our extensive computational results demonstrate that LIPSOL also attains an impressive performance comparable with that of efficient Fortran or C codes in solving large-scale problems. In addition, we discuss in detail a technique for overcoming numerical instability in Cholesky factorization at the end-stage of iterations in interior-point algorithms.

377 citations


Journal ArticleDOI
TL;DR: In this paper, a primal-dual interior point method for solving general nonlinearly constrained optimization problems is proposed, which is based on solving the Barrier Karush-Kuhn-Tucker conditions for optimality by the Newton method.
Abstract: This paper proposes a primal-dual interior point method for solving general nonlinearly constrained optimization problems. The method is based on solving the Barrier Karush-Kuhn-Tucker conditions for optimality by the Newton method. To globalize the iteration we introduce the Barrier-penalty function and the optimality condition for minimizing this function. Our basic iteration is the Newton iteration for solving the optimality conditions with respect to the Barrier-penalty function which coincides with the Newton iteration for the Barrier Karush-Kuhn-Tucker conditions if the penalty parameter is sufficiently large. It is proved that the method is globally convergent from an arbitrary initial point that strictly satisfies the bounds on the variables. Implementations of the given algorithm are done for small dense nonlinear programs. The method solves all the problems in Hock and Schittkowski's textbook efficiently. Thus it is shown that the method given in this paper possesses a good theoretical convergen...

98 citations


Journal ArticleDOI
TL;DR: It follows that the co-occurrence graph of the dual of a positive Boolean function can always be generated in time polynomial in the size of the function.
Abstract: Given a positive Boolean function f and a subset δ of its variables, we give a combinatorial condition characterizing the existence of a prime implicant D̂ of the Boolean dual f^d of f having the property that every variable in δ appears in D̂. We show that the recognition of this property is an NP-complete problem, suggesting an inherent computational difficulty of Boolean dualization, independently of the size of the dual function. Finally it is shown that if the cardinality of δ is bounded by a constant, then the above recognition problem is polynomial. In particular, it follows that the co-occurrence graph of the dual of a positive Boolean function can always be generated in time polynomial in the size of the function.

78 citations


Journal ArticleDOI
TL;DR: The main parts of the FINCLAS system and the UTADIS method are discussed, and an application of the system is presented.
Abstract: Several techniques and methods have been proposed in the past for the study of financial classification problems, including statistical analysis techniques, mathematical programming, multicriteria decision aid and artificial intelligence. The application of these methods in real world problems, where decisions have to be taken in real time, calls for a powerful and efficient tool for supporting practitioners and financial analysts in applying these techniques. This paper presents the FINCLAS (FINancial CLASsification) decision support system for financial classification problems. The classification is achieved through the use of the UTADIS (UTilites Additives DIScriminantes) multicriteria decision aid method. The main parts of the FINCLAS system and the UTADIS method are discussed, and an application of the system is presented.

70 citations


Journal ArticleDOI
TL;DR: In this paper, reverse accumulation of the first derivatives of target functions is used to obtain automatic numerical error estimates for calculated function values, including the effects of inaccurate equation solution as well as rounding error.
Abstract: We begin by introducing a simple technique for using reverse accumulation to obtain the first derivatives of target functions which include in their construction the solution of systems of linear or nonlinear equations. In the linear case, solving Ay = b for y corresponds to the adjoint operations b̄ := b̄ + v and Ā := Ā − vyᵀ, where v is the solution to the adjoint equation Aᵀv = ȳ. A more sophisticated construction applies in the nonlinear case. We apply these techniques to obtain automatic numerical error estimates for calculated function values. These error estimates include the effects of inaccurate equation solution as well as rounding error. Our basic techniques can be generalized to functions which contain several (linear or nonlinear) implicit functions in their construction, either serially or nested. In the case of scalar-valued target functions that include equation solution as part of their construction, our algorithms involve at most the same order of computational effort as the computation of the target f...

69 citations
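The adjoint rule for a linear solve can be sketched in NumPy and checked against finite differences. This is an illustrative reconstruction, not the paper's code; the target function f(A, b) = c·y with y = A⁻¹b and all variable names are assumptions made for the example.

```python
import numpy as np

def adjoint_of_solve(A, b, ybar):
    """Reverse-mode (adjoint) rule for y = solve(A, b).

    Given ybar = df/dy, solve the adjoint equation A^T v = ybar;
    then bbar = v and Abar = -v y^T.
    """
    y = np.linalg.solve(A, b)
    v = np.linalg.solve(A.T, ybar)   # adjoint equation
    bbar = v                         # contribution b_bar += v
    Abar = -np.outer(v, y)           # contribution A_bar -= v y^T
    return Abar, bbar

# Check against central finite differences for f(A, b) = c . solve(A, b)
rng = np.random.default_rng(0)
n = 4
A = rng.normal(size=(n, n)) + n * np.eye(n)   # keep A well-conditioned
b = rng.normal(size=n)
c = rng.normal(size=n)

Abar, bbar = adjoint_of_solve(A, b, c)        # here ybar = df/dy = c

h = 1e-6
fd_b = np.empty(n)
for i in range(n):
    e = np.zeros(n); e[i] = h
    fd_b[i] = (c @ np.linalg.solve(A, b + e)
               - c @ np.linalg.solve(A, b - e)) / (2 * h)
assert np.allclose(bbar, fd_b, atol=1e-5)

E = np.zeros((n, n)); E[0, 1] = h
fd_A01 = (c @ np.linalg.solve(A + E, b)
          - c @ np.linalg.solve(A - E, b)) / (2 * h)
assert np.isclose(Abar[0, 1], fd_A01, atol=1e-5)
print("adjoint rule matches finite differences")
```

The single extra transposed solve is what makes the reverse mode cheap: the cost of the gradient is of the same order as the cost of one function evaluation.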


Journal ArticleDOI
TL;DR: This paper proposes a variable depth search algorithm for the generalized assignment problem (GAP), which is one of the representative combinatorial optimization problems, and is known to be NP-hard.
Abstract: In this paper, we propose a variable depth search (VDS) algorithm for the generalized assignment problem (GAP), which is one of the representative combinatorial optimization problems, and is known to be NP-hard. The VDS is a generalization of local search. The main idea of VDS is to change the size of the neighborhood adaptively so that the algorithm can effectively traverse a larger search space within reasonable computational time. In our previous paper (M. Yagiura, T. Yamaguchi and T. Ibaraki, “A variable depth search algorithm for the generalized assignment problem,” Proc. 2nd Metaheuristics International Conference (MIC97), 1997, 129-130 (full version to appear in the post-conference book)), we proposed a simple VDS algorithm for the GAP, and obtained good results. To further improve the performance of the VDS, we examine the effectiveness of incorporating branching search processes to construct the neighborhoods. Various types of branching rules are examined, and it is observed that appropriate ...

61 citations


Journal ArticleDOI
TL;DR: This paper considers a general, typically nonconvex, quadratic programming problem; the semidefinite relaxation proposed by Shor provides bounds on the optimal solution, but it does not always provide sufficiently strong bounds if linear constraints are also involved.
Abstract: We consider a general, typically nonconvex, quadratic programming problem. The semidefinite relaxation proposed by Shor provides bounds on the optimal solution, but it does not always provide sufficiently strong bounds if linear constraints are also involved. To get rid of the linear side-constraints, another, stronger convex relaxation is derived. This relaxation uses copositive matrices. Special cases are discussed for which both relaxations are equal. At the end of the paper, the complexity and solvability of the relaxation are discussed.

60 citations


Journal ArticleDOI
TL;DR: In this article, a simple homogeneous primal-dual feasibility model is proposed for semidefinite programming (SDP) problems, and two infeasible-interior-point algorithms are applied to the homogeneous formulation.
Abstract: A simple homogeneous primal-dual feasibility model is proposed for semidefinite programming (SDP) problems. Two infeasible-interior-point algorithms are applied to the homogeneous formulation. The algorithms do not need a big-M initialization. If the original SDP problem has a solution (X*, y*, S*), then both algorithms find an ϵ-approximate solution (i.e., a solution with residual error less than or equal to ϵ) in at most ... steps, where ρ* = Tr(X* + S*) and ϵ0 is the residual error at the (normalized) starting point. A simple way of monitoring possible infeasibility of the original SDP problem is provided such that in at most ... steps either an ϵ-approximate solution is obtained, or it is determined that there is no solution (X*, y*, S*) with Tr(X* + S*) less than or equal to a given number ρ > 0. Numerical results on Mehrotra-type primal-dual predictor-corrector algorithms show that the homogeneous algorithms outperform their non-homogeneous counterparts, with improvement of more than 20% in many cases, ...

58 citations


Journal ArticleDOI
TL;DR: A new and unified methodology for computing first order derivatives of functions obtained in complex multistep processes is developed on the basis of general expressions for differentiating a composite function, and the formulas for fast automatic differentiation of elementary functions are derived.
Abstract: A new and unified methodology for computing first order derivatives of functions obtained in complex multistep processes is developed on the basis of general expressions for differentiating a composite function. From these results, we derive the formulas for fast automatic differentiation of elementary functions, for gradients arising in optimal control problems and nonlinear programming, and for gradients arising in discretizations of processes governed by partial differential equations. In the proposed approach we start with a chosen discretization scheme for the state equation and derive the exact gradient expression. Thus a unique discretization scheme is automatically generated for the adjoint equation. For optimal control problems, the proposed computational formulas correspond to the integration of the adjoint system of equations that appears in Pontryagin's maximum principle. This technique appears to be very efficient, universal, and applicable to a wide variety of distributed controlled dynamic systems an...

49 citations


Journal ArticleDOI
TL;DR: In this paper, two-sided approximations of trajectory tubes for dynamic systems by parallelotopes are considered, and families of parallelotopic estimates that ensure exact representations of the tube cross-sections through intersections and unions are introduced.
Abstract: Two-sided approximations of trajectory tubes for dynamic systems by parallelotopes are considered. The families of parallelotopic estimates that ensure exact representations of the tube cross-sections through intersections and unions are introduced ...

Journal ArticleDOI
TL;DR: In this paper, the exact penalty functions for nonsmooth constrained optimization problems are analyzed using the notion of (Dini) Hadamard directional derivative with respect to the constraint set.
Abstract: Exact penalty functions for nonsmooth constrained optimization problems are analyzed by using the notion of (Dini) Hadamard directional derivative with respect to the constraint set. Weak conditions are given guaranteeing equivalence of the sets of stationary, global minimum, and local minimum points of the constrained problem and of the penalty function.

Journal ArticleDOI
TL;DR: This paper shows that it is possible to exploit sparsity both in columns and rows by employing the forward and the reverse mode of automatic differentiation, and a graph-theoretic characterization of the problem is given.
Abstract: Efficient estimation of large sparse Jacobian matrices has been studied extensively in the last couple of years. It has been observed that the estimation of a Jacobian matrix can be posed as a graph coloring problem. Elements of the matrix are estimated by taking divided differences in several directions corresponding to a group of structurally independent columns. Another possibility is to obtain the nonzero elements by means of so-called automatic differentiation, which gives the estimates free of the truncation error that one encounters in a divided difference scheme. In this paper we show that it is possible to exploit sparsity both in columns and rows by employing the forward and the reverse mode of automatic differentiation. A graph-theoretic characterization of the problem is given.
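The column-grouping idea behind this line of work can be illustrated with a minimal sketch: columns of the Jacobian that share no nonzero row can be estimated with a single divided difference. The greedy grouping and the tridiagonal test function below are assumptions made for the example, not the paper's row-and-column algorithm.

```python
import numpy as np

def greedy_column_groups(pattern):
    """Greedily group structurally independent columns.

    pattern: boolean (m x n) sparsity pattern of the Jacobian.
    Columns sharing no nonzero row can share one difference direction.
    """
    groups = []  # each entry: (list of column indices, union-of-rows mask)
    for j in range(pattern.shape[1]):
        for cols, rows in groups:
            if not (rows & pattern[:, j]).any():
                cols.append(j)
                rows |= pattern[:, j]
                break
        else:
            groups.append(([j], pattern[:, j].copy()))
    return [cols for cols, _ in groups]

# Hypothetical tridiagonal F: F_i(x) = x_i^2 + x_{i-1} + x_{i+1}
n = 8
def F(x):
    y = x ** 2
    y[:-1] += x[1:]
    y[1:] += x[:-1]
    return y

pattern = np.zeros((n, n), dtype=bool)
for i in range(n):
    for j in (i - 1, i, i + 1):
        if 0 <= j < n:
            pattern[i, j] = True

groups = greedy_column_groups(pattern)
print(len(groups))   # 3 groups suffice for a tridiagonal pattern, vs n differences

# Recover the whole Jacobian from one forward difference per group
x0 = np.linspace(1.0, 2.0, n)
h = 1e-7
J = np.zeros((n, n))
for cols in groups:
    d = np.zeros(n)
    d[cols] = h
    diff = (F(x0 + d) - F(x0)) / h
    for j in cols:
        J[pattern[:, j], j] = diff[pattern[:, j]]

J_exact = np.diag(2 * x0) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
assert np.allclose(J, J_exact, atol=1e-5)
```

Replacing the divided differences with forward-mode automatic differentiation along the same seed directions removes the truncation error the abstract mentions; the paper's contribution is to exploit row structure as well, via the reverse mode.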

Journal ArticleDOI
TL;DR: This work considers a nonsmooth, convex program solved by a conditional subgradient optimization scheme with divergent series step lengths, and shows that the elements of the ergodic sequence of subgradients in the limit fulfill the optimality conditions at the optimal solution.
Abstract: When nonsmooth, convex minimization problems are solved by subgradient optimization methods, the subgradients used will in general not accumulate to subgradients which verify the optimality of a solution obtained in the limit. It is therefore not a straightforward task to monitor the progress of a subgradient method in terms of the approximate fulfillment of optimality conditions. Further, certain supplementary information, such as convergent estimates of Lagrange multipliers and convergent lower bounds on the optimal objective value, is not directly available in subgradient schemes. As a means of overcoming these weaknesses in subgradient methods, we introduce the computation of an ergodic (averaged) sequence of subgradients. Specifically, we consider a nonsmooth, convex program solved by a conditional subgradient optimization scheme with divergent series step lengths, and show that the elements of the ergodic sequence of subgradients in the limit fulfill the optimality conditions at the optimal solution, ...
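The phenomenon can be seen in a toy sketch: minimizing the assumed example f(x) = ||x||₁ (minimizer x* = 0), the individual subgradients stay on {-1, +1}² and never certify optimality, while a step-weighted ergodic average approaches 0. Step lengths and iteration count are arbitrary choices for the illustration.

```python
import numpy as np

def subgrad_l1(x):
    # one subgradient of f(x) = ||x||_1 (at a zero component we pick +1)
    g = np.sign(x)
    g[g == 0] = 1.0
    return g

x = np.array([3.0, -2.0])
sum_a = 0.0
erg = np.zeros_like(x)
for k in range(1, 20001):
    a = 1.0 / np.sqrt(k)      # divergent series: a_k -> 0, sum a_k = infinity
    g = subgrad_l1(x)
    erg += a * g              # step-weighted (ergodic) subgradient sum
    sum_a += a
    x = x - a * g

erg /= sum_a
# Each g_k has norm sqrt(2); the ergodic average tends toward 0,
# the optimality condition 0 in subdifferential of f at x* = 0.
print(np.linalg.norm(erg), np.linalg.norm(x))
```

The averaged sequence thus supplies exactly the certificate that the raw subgradients cannot: its limit verifies 0 ∈ ∂f(x*).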

Journal ArticleDOI
David Avis1
TL;DR: In this paper, the authors describe computational experience obtained in the development of the lrs code, which uses the reverse search technique to solve the vertex enumeration/convex hull problem for d-dimensional convex polyhedra.
Abstract: This paper describes computational experience obtained in the development of the lrs code, which uses the reverse search technique to solve the vertex enumeration/convex hull problem for d-dimensional convex polyhedra. We give empirical results showing improvements obtained by the use of lexicographic perturbation, lifting, and integer pivoting. We also give some indication of the cost of using extended precision arithmetic and illustrate the use of the estimation function of lrs. The empirical results are obtained by running various versions of the program on a set of well-known non-trivial polyhedra: cut, configuration, cyclic, Kuhn-Quandt, and metric polytopes.

Journal ArticleDOI
TL;DR: In this paper, a family of primal/primal-dual/dual search directions for the monotone LCP over the space of n × n symmetric block-diagonal matrices is considered.
Abstract: We consider a family of primal/primal-dual/dual search directions for the monotone LCP over the space of n × n symmetric block-diagonal matrices. We consider two infeasible predictor-corrector path-following methods using these search directions, with the predictor and corrector steps used either in series (similar to the Mizuno-Todd-Ye method) or in parallel (similar to Mizuno et al./McShane's method). The methods attain global linear convergence with a convergence ratio which, depending on the quality of the starting iterate, ranges from ... . Our analysis is fairly compact and parallels that for the LP and LCP cases.

Journal ArticleDOI
TL;DR: Automatic differentiation and nonmonotone spectral projected gradient techniques are used for solving optimal control problems using general Runge–Kutta integration formulas.
Abstract: Automatic differentiation and nonmonotone spectral projected gradient techniques are used for solving optimal control problems. The original problem is reduced to a nonlinear programming one using ...

Journal ArticleDOI
TL;DR: A method for efficiently computing the LCS among three or more strings of small alphabet size is proposed, its theoretical time complexity is evaluated, and its computing time is estimated by computational experiments.
Abstract: Given two or more strings (for example, DNA and amino acid sequences), the longest common subsequence (LCS for short) problem is to determine the longest common subsequence obtained by deleting zero or more symbols from each string. Algorithms for computing an LCS between two strings were given in many papers, but there are few efficient algorithms for computing an LCS among more than two strings. This paper proposes a method for efficiently computing the LCS among three or more strings of small alphabet size, evaluates its theoretical time complexity, and estimates the computing time by computational experiments. Using this method, the LCS problem for eight strings of length more than 120 can be solved in about 40 min on a slow workstation.
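For reference, the standard two-string recurrence that the multi-string algorithms generalize can be sketched as follows. This is the textbook baseline, not the paper's method; for k strings the table becomes k-dimensional, which is why exploiting a small alphabet matters.

```python
def lcs_length(s, t):
    # Classic dynamic program: L[i][j] = LCS length of s[:i] and t[:j]
    m, n = len(s), len(t)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s[i - 1] == t[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    return L[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))  # -> 4 (e.g. "BCAB")
```

The direct generalization to eight strings of length 120 would need a table of 120⁸ cells, which is why the paper's alphabet-size techniques are needed in practice.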

Journal ArticleDOI
TL;DR: In this paper, a proof of Farkas's lemma based on a new theorem pertaining to orthogonal matrices is given; this theorem is slightly more general than Tucker's theorem, which concerns skew-symmetric matrices and which may itself be derived simply from the new theorem.
Abstract: A proof is given of Farkas's lemma based on a new theorem pertaining to orthogonal matrices. It is claimed that this theorem is slightly more general than Tucker's theorem, which concerns skew-symmetric matrices and which may itself be derived simply from the new theorem. Farkas's lemma and other theorems of the alternative then follow trivially from Tucker's theorem.

Journal ArticleDOI
TL;DR: In this paper, an optimal control problem governed by the heat equation with nonlinear boundary conditions is considered, where the objective functional consists of a quadratic terminal part and a quadratic regularization term, and the convergence of the SQP method is shown by proving the strong regularity of the optimality system.
Abstract: An optimal control problem governed by the heat equation with nonlinear boundary conditions is considered. The objective functional consists of a quadratic terminal part and a quadratic regularization term. On transforming the associated optimality system to a generalized equation, an SQP method for solving the optimal control problem is related to the Newton method for the generalized equation. In this way, the convergence of the SQP method is shown by proving the strong regularity of the optimality system. After explaining the numerical implementation of the theoretical results, some high-precision test examples are presented.

Journal ArticleDOI
TL;DR: In this paper, the authors investigate the effect of regularizing the scaling matrix of interior point methods on the numerical stability of the system that defines the search direction and show that the effect can be easily monitored and corrected if necessary.
Abstract: Interior point methods, especially the algorithms for linear programming problems, are sensitive if there are unconstrained (free) variables in the problem. While replacing a free variable by two nonnegative ones may cause numerical instabilities, the implicit handling results in a semidefinite scaling matrix at each interior point iteration. In the paper we investigate the effects if the scaling matrix is regularized. Our analysis will prove that the effect of the regularization can be easily monitored and corrected if necessary. We describe the regularization scheme mainly for the efficient handling of free variables, but a similar analysis can be made for the case when the small scaling factors are raised to larger values to improve the numerical stability of the systems that define the search direction. We will show the superiority of our approach over the variable replacement method on a set of test problems arising from water management applications.

Journal ArticleDOI
TL;DR: A general theory of global convergence together with a robust algorithm including a special restarting strategy for solving large sparse systems of nonlinear equations based on various approximations of the Jacobian matrix is proposed.
Abstract: This paper is devoted to globally convergent Armijo-type descent methods for solving large sparse systems of nonlinear equations. These methods include the discrete Newton method and a broad class of Newton-like methods based on various approximations of the Jacobian matrix. We propose a general theory of global convergence together with a robust algorithm including a special restarting strategy. This algorithm is based on the preconditioned smoothed CGS method for solving nonsymmetric systems of linear equations. After reviewing 12 particular Newton-like methods, we present results of extensive computational experiments. These results demonstrate the high efficiency of the proposed algorithm.

Journal ArticleDOI
TL;DR: A class of nonlinear programming problems that arise from the discretization of optimal control problems with bounds on both the state and the control variables is considered, and an affine-scaling and two primal-dual interior-point Newton algorithms are derived.
Abstract: In this paper we consider a class of nonlinear programming problems that arise from the discretization of optimal control problems with bounds on both the state and the control variables. For this class of problems, we analyze constraint qualifications and optimality conditions in detail. We derive an affine-scaling and two primal-dual interior-point Newton algorithms by applying, in an interior-point way, Newton's method to equivalent forms of the first-order optimality conditions. Under appropriate assumptions, the interior-point Newton algorithms are shown to be locally well-defined with a q-quadratic rate of local convergence. By using the structure of the problem, the linear algebra of these algorithms can be reduced to the null space of the Jacobian of the equality constraints. The similarities between the three algorithms are pointed out, and their corresponding versions for the general nonlinear programming problem are discussed.

Journal ArticleDOI
TL;DR: The NP-hardness in general and the polynomial solvability for specially structured cases are proved and a new heuristic approach is suggested and compared to other heuristics known from the literature.
Abstract: In this paper we investigate scheduling problems which stem from real-world applications in the chemical process industry both from a theoretical and from a practical point of view. After providing a survey and a general mixed integer programming model, we present some results on the complexity of the process scheduling problem and investigate some important special cases. (We prove the NP-hardness in general and the polynomial solvability for specially structured cases.) Furthermore, we suggest a new heuristic approach and compare this to other heuristics known from the literature.

Journal ArticleDOI
TL;DR: This paper addresses important implementation aspects and describes the most efficient heuristics in the cost-scaling algorithm of Goldberg and Tarjan and highlights the effect of combining severalHeuristics.
Abstract: The cost-scaling algorithm of Goldberg and Tarjan [Mathematics of Operations Research 15 (1990), 430-466] is known to be one of the most efficient algorithms for minimum–cost flow problems. However, its efficiency in practice depends on many implementation aspects. Moreover, the inclusion of several heuristics improves its performance drastically. This paper addresses important implementation aspects and describes the most efficient heuristics. Experimental results also highlight the effect of combining several heuristics.

Journal ArticleDOI
TL;DR: An efficient branch and bound algorithm for solving a non-concave maximization problem and it is demonstrated that it can generate a globally optimal solution in an efficient manner.
Abstract: This paper discusses an efficient algorithm for solving the mean–absolute deviation–skewness (MADS) portfolio optimization model, in which the third order moment in addition to the first and second moments of the rate of return of the portfolio is taken into account. The MADS model can be considered either as an extension of the mean–absolute deviation (MAD) model or as an approximation of the mean–variance–skewness (MVS) model, which is a straightforward extension of the mean–variance (MV) model. Models which take the third moment into account play an important role when the rates of return of the assets are non-symmetrically distributed. We propose an efficient branch and bound algorithm for solving a non-concave maximization problem and demonstrate that it can generate a globally optimal solution in an efficient manner.

Journal ArticleDOI
Koichi Kubota1
TL;DR: A preprocessor is reported on that can handle any Fortran 77 program with an improved reverse mode automatic differentiation for reducing the size of the storage by means of a recursive checkpointing mechanism.
Abstract: Given a program computing the value of a function with many variables, the reverse mode of automatic differentiation (or top-down algorithm of automatic differentiation) swiftly computes the values of the partial derivatives of the function. A weak point, however, is that it requires storage whose size is proportional to the complexity of the underlying function. We report on a preprocessor that can handle any Fortran 77 program with an improved reverse mode automatic differentiation that reduces the size of the storage by means of a recursive checkpointing mechanism. Developing a library program named RCL/fork (Recursive Checkpointing Library program with fork system-call) based on the fork system call provided by the UNIX operating system, we could reduce the virtual memory requirement to less than half for a computation of partial derivatives that requires about 1.3 GB of virtual memory with the original reverse mode automatic differentiation.

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the analyticity of certain paths that arise in the context of feasible interior-point methods and showed that there exists a neighborhood surrounding a strictly complementary optimal point where the path is analytic and all its derivatives with respect to the path parameter exist, even if the linear program is degenerate.
Abstract: This paper investigates the analyticity of certain paths that arise in the context of feasible interior-point methods. It is shown that there exists a neighborhood surrounding a strictly complementary optimal point where the path is analytic and all its derivatives with respect to the path parameter exist, even if the linear program is degenerate. For this reason it is possible to extend the path through the feasible region from the positive real axis to the left complex half plane. This is done by a canonical transformation of the linear program. The analyticity provides the theoretical foundation for numerical methods following the path by higher-order approximations.

Journal ArticleDOI
TL;DR: This work describes regularization tools for training large-scale artificial feed-forward neural networks and proposes algorithms that explicitly use a sequence of Tikhonov regularized nonlinear least squares problems.
Abstract: We describe regularization tools for training large-scale artificial feed-forward neural networks. We propose algorithms that explicitly use a sequence of Tikhonov regularized nonlinear least squar ...

Journal ArticleDOI
TL;DR: The complexity of these methods is investigated and it is shown that all these methods, if started appropriately, need at most ... predictor-corrector steps to find an ϵ-solution, and only ... steps if the problem has strictly interior points.
Abstract: Recently the authors of this paper and S. Mizuno described a class of infeasible-interior-point methods for solving linear complementarity problems that are sufficient in the sense of R.W. Cottle, J.-S. Pang and V. Venkateswaran (1989), Sufficient matrices and the linear complementarity problem, Linear Algebra Appl. 114/115, 231-249. It was shown that these methods converge superlinearly with an arbitrarily high order even for degenerate problems or problems without a strictly complementary solution. In this paper the complexity of these methods is investigated. It is shown that all these methods, if started appropriately, need at most ... predictor-corrector steps to find an ϵ-solution, and only ... steps if the problem has strictly interior points. Here κ is the sufficiency parameter of the complementarity problem.