Topic
Discrete optimization
About: Discrete optimization is a research topic. Over its lifetime, 4598 publications on this topic have been published, receiving 158297 citations. The topic is also known as: discrete optimisation.
Papers
TL;DR: The Penalty Function with Memory (PFM) is developed, a new method that converts a DOvS problem with constraints into a series of unconstrained problems and proves convergence properties and discusses parameter selection for the implementation of PFM.
Abstract: We consider a discrete optimization via simulation (DOvS) problem with stochastic constraints on secondary performance measures, in which both the objective and the secondary performance measures must be estimated by stochastic simulation. To solve the problem, we develop a new method called the Penalty Function with Memory (PFM). It resembles an existing penalty-type method, which consists of a penalty parameter and a measure of constraint violation, in that it converts a DOvS problem with constraints into a series of unconstrained problems. However, PFM uses a different penalty parameter, called a penalty sequence, determined by the past history of feasibility checks on a solution. Specifically, assuming a minimization problem, a penalty sequence diverges to infinity for any infeasible solution but converges to zero for any feasible solution under certain conditions. As a result, a DOvS algorithm combined with PFM performs well even when an optimal feasible solution is a boundary solution with o...
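The penalty-sequence idea can be sketched in a few lines. The update rule below, with its growth factor, decay factor, and floor, is an illustrative assumption, not the paper's actual sequence or its convergence conditions:

```python
# Hypothetical penalty-sequence update in the spirit of PFM: each solution's
# penalty grows with repeated infeasibility and shrinks toward zero otherwise.

def update_penalty(penalty, feasible, growth=2.0, decay=0.5, floor=1e-6):
    """Update one solution's penalty after a stochastic feasibility check."""
    if feasible:
        return max(penalty * decay, floor)  # converges toward zero
    return max(penalty, 1.0) * growth       # diverges for infeasible solutions

# Simulate repeated checks on a consistently infeasible and a consistently
# feasible solution.
p_infeasible = 0.0
p_feasible = 10.0
for _ in range(20):
    p_infeasible = update_penalty(p_infeasible, feasible=False)
    p_feasible = update_penalty(p_feasible, feasible=True)
```

Under this toy rule the infeasible solution's penalized objective blows up while the feasible one's penalty vanishes, which is the behavior the abstract describes.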
31 citations
TL;DR: Inverse optimization refers to the fact that each time a Lagrangean calculation is made for a specific problem with a given resources vector, an optimal solution is obtained for a related problem with a suitably adjusted resources vector.
Abstract: Lagrangean techniques have had wide application to discrete optimization problems. Inverse optimization refers to the fact that each time a Lagrangean calculation is made for a specific problem with a given resources vector, an optimal solution is obtained for a related problem with a suitably adjusted resources vector. This property is studied in depth for the capacitated plant location problem, and new parametric methods for that problem are suggested. Computational experience is reported.
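The inverse-optimization property can be illustrated on a toy 0-1 knapsack (an assumed example, not from the paper): whatever solution the Lagrangean subproblem returns for a given multiplier is optimal for the related problem whose resource vector is adjusted to that solution's total weight.

```python
from itertools import product

# Toy 0-1 knapsack: max sum(v_i x_i) s.t. sum(w_i x_i) <= b (numbers assumed).
values = [6, 5, 4]
weights = [3, 2, 2]
lam = 2.0  # Lagrangean multiplier for the relaxed capacity constraint

# The Lagrangean subproblem max sum((v_i - lam*w_i) x_i) decomposes per item:
# take item i iff its reduced profit v_i - lam*w_i is positive.
x_star = [1 if v - lam * w > 0 else 0 for v, w in zip(values, weights)]
b_adjusted = sum(w * x for w, x in zip(weights, x_star))  # adjusted resources

def best_value(cap):
    """Brute-force optimum of the knapsack with capacity cap."""
    return max(
        sum(v * x for v, x in zip(values, xs))
        for xs in product([0, 1], repeat=len(values))
        if sum(w * x for w, x in zip(weights, xs)) <= cap
    )

# Inverse-optimization property: x_star is optimal for capacity b_adjusted.
```

The check holds because for any x feasible at `b_adjusted`, the Lagrangean term `lam * (weight(x) - b_adjusted)` is nonpositive, so the value of x cannot exceed the Lagrangean optimum attained by `x_star`.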
31 citations
TL;DR: In this article, a theoretical framework of differences of discrete convex functions (discrete DC functions) and of optimization problems for discrete DC functions is established, and a discrete DC algorithm, a discrete analogue of the continuous DC algorithm (the Concave-Convex procedure in machine learning), is proposed.
Abstract: A theoretical framework of differences of discrete convex functions (discrete DC functions) and of optimization problems for discrete DC functions is established. Standard results in continuous DC theory are carried over to the discrete DC theory using discrete convex analysis. A discrete DC algorithm, a discrete analogue of the continuous DC algorithm (the Concave-Convex procedure in machine learning), is proposed. The algorithm contains the submodular-supermodular procedure as a special case. Exploiting the polyhedral structure of discrete convex functions, algorithms tailored to specific types of discrete DC functions are also proposed.
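A one-dimensional sketch of the DC-algorithm iteration (an assumed toy instance, not the paper's general algorithm): minimize f = g - h over a grid of integers by repeatedly replacing h with a linear minorant at the current point and minimizing the resulting convex surrogate.

```python
# Toy instance: f(y) = g(y) - h(y) on integer points, with g and h convex.
X = range(-10, 11)

def g(y):
    return y * y         # convex part

def h(y):
    return 4 * abs(y)    # convex part subtracted; f has minima at y = +/-2

def subgrad_h(y):
    """A discrete subgradient of h at y (sign at 0 chosen arbitrarily)."""
    return 4 if y >= 0 else -4

x = 7
for _ in range(20):
    s = subgrad_h(x)
    # Minimize the convex surrogate g(y) - (h(x) + s * (y - x)) over the grid.
    x_next = min(X, key=lambda y: g(y) - (h(x) + s * (y - x)))
    if x_next == x:      # fixed point of the surrogate minimization
        break
    x = x_next
```

From the start point 7, linearizing h at positive points yields the surrogate y^2 - 4y, whose grid minimizer is 2; the iteration then stops at the local (here global) minimum f(2) = -4.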
31 citations
01 Jan 2007
TL;DR: This paper analyses and suggests a solution to the discrete tolerance allocation problem, and extends the problem to treat general loss functions: an arbitrary polynomial function of a certain degree.
Abstract: The tolerance allocation problem consists of choosing tolerances on dimensions of a complex assembly so that they combine into an optimal state while retaining certain requirements. This optimal state often coincides with the minimum manufacturing cost of the product. Sometimes it is balanced with an artificial cost that the deviation from target induces on the quality of the product.
This paper analyses and suggests a solution to the discrete allocation problem. It also extends the problem to treat general loss functions; general loss in this paper means an arbitrary polynomial function of a certain degree. We also briefly review the current work on solving the tolerance allocation problem.
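A brute-force sketch of the discrete allocation problem with a polynomial loss; the grade set, cost table, loss coefficient, and stack-up limit are made-up numbers, not the paper's model:

```python
from itertools import product

# Each dimension offers a few discrete tolerance grades; tighter grades cost
# more to manufacture, looser grades incur a polynomial (here quadratic)
# quality loss for deviating from target.
grades = [0.01, 0.02, 0.05]                  # candidate tolerances per dimension
mfg_cost = {0.01: 9.0, 0.02: 4.0, 0.05: 1.0}  # manufacturing cost per grade

def loss(t):
    return 2000.0 * t ** 2                   # polynomial loss function

stack_limit = 0.08                           # worst-case stack-up requirement

# Enumerate all grade assignments for a 3-dimension assembly that satisfy the
# stack-up requirement and pick the one with minimum total cost + loss.
best = min(
    (combo for combo in product(grades, repeat=3)
     if sum(combo) <= stack_limit),
    key=lambda combo: sum(mfg_cost[t] + loss(t) for t in combo),
)
```

With these numbers the middle grade balances manufacturing cost against quality loss best for every dimension; real instances replace the enumeration with a dedicated discrete optimizer.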
31 citations
TL;DR: In this paper, a unified structural optimization model is described by a parameterized level set function that applies compactly supported radial basis functions (CS-RBFs) with favorable smoothness and accuracy for interpolation.
Abstract: Recent advances in level-set-based shape and topology optimization rely on free-form implicit representations to support boundary deformations and topological changes. In practice, a continuum structure is usually designed through parametric shape optimization, which is formulated directly in terms of meaningful geometric design variables but usually does not support free-form boundary and topological changes. To overcome this drawback of traditional step-by-step structural optimization, a unified optimization method that can perform structural topology, shape, and sizing optimization at the same time is presented. The unified structural optimization model is described by a parameterized level set function that applies compactly supported radial basis functions (CS-RBFs) with favorable smoothness and accuracy for interpolation. The expansion coefficients of the interpolation function are treated as the design variables, which reflect the impact of the topology, shape, and geometric constraints on structural performance. Accordingly, the original topological shape optimization problem under geometric constraints is fully transformed into a simple parameter optimization problem; in other words, the optimization is carried out over the expansion coefficients of the interpolation function, a limited set of design variables. This parameterization turns the difficult shape and topology optimization problem with geometric constraints into a relatively straightforward parameterized problem to which many gradient-based optimization techniques can be applied. More specifically, the extended finite element method (XFEM) is adopted to improve the accuracy of boundary resolution. Finally, combined with the optimality criteria method, several numerical examples are presented to demonstrate the applicability and potential of the presented method.
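The CS-RBF parameterization can be sketched as follows. The Wendland C2 kernel is a standard compactly supported RBF, while the one-dimensional setting, knot layout, support radius, and coefficient vector here are assumptions for illustration, not the paper's formulation:

```python
import numpy as np

def wendland_c2(d):
    """Wendland C2 CS-RBF: (1 - d)^4 * (4d + 1) for d < 1, else 0."""
    return np.where(d < 1.0, (1.0 - d) ** 4 * (4.0 * d + 1.0), 0.0)

centers = np.linspace(0.0, 1.0, 11)   # RBF knots (assumed layout)
radius = 0.3                          # support radius (assumed)

def phi(x, coeffs):
    """Level set function phi(x) = sum_k coeffs[k] * W(|x - c_k| / r)."""
    d = np.abs(x[:, None] - centers[None, :]) / radius
    return wendland_c2(d) @ coeffs

# The expansion coefficients are the design variables of the optimization;
# the structural boundary is the zero level set of phi.
x = np.linspace(0.0, 1.0, 201)
coeffs = np.sin(2.0 * np.pi * centers)  # an example design vector
values = phi(x, coeffs)
```

Because each kernel vanishes outside its support radius, the interpolation matrix is sparse, which is one practical reason CS-RBFs are attractive for this parameterization.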
31 citations