
Showing papers on "Linear programming published in 2005"


Journal ArticleDOI
TL;DR: This paper facilitates the reliable use of nonlinear convex relaxations in global optimization via a polyhedral branch-and-cut approach and proves that, if the convexity of a univariate or multivariate function is apparent by decomposing it into convex subexpressions, the relaxation constructor automatically exploits this convexity in a manner that is much superior to developing polyhedral outer approximators for the original function.
Abstract: A variety of nonlinear, including semidefinite, relaxations have been developed in recent years for nonconvex optimization problems. Their potential can be realized only if they can be solved with sufficient speed and reliability. Unfortunately, state-of-the-art nonlinear programming codes are significantly slower and less numerically stable than linear programming software. In this paper, we facilitate the reliable use of nonlinear convex relaxations in global optimization via a polyhedral branch-and-cut approach. Our algorithm exploits convexity, either identified automatically or supplied through a suitable modeling language construct, in order to generate polyhedral cutting planes and relaxations for multivariate nonconvex problems. We prove that, if the convexity of a univariate or multivariate function is apparent by decomposing it into convex subexpressions, our relaxation constructor automatically exploits this convexity in a manner that is much superior to developing polyhedral outer approximators for the original function. The convexity of functional expressions that are composed to form nonconvex expressions is also automatically exploited. Root-node relaxations are computed for 87 problems from globallib and minlplib, and detailed computational results are presented for globally solving 26 of these problems with BARON 7.2, which implements the proposed techniques. The use of cutting planes for these problems reduces root-node relaxation gaps by up to 100% and expedites the solution process, often by several orders of magnitude.
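
As a rough illustration of the polyhedral relaxations the paper relies on, the sketch below (not BARON's actual cut-management scheme; the function and sample points are arbitrary) builds tangent cuts for a convex univariate function, whose pointwise maximum is an LP-friendly underestimator:

```python
import numpy as np

def tangent_cuts(f, fprime, points):
    """Return (slope, intercept) pairs of supporting lines of a convex f.

    Each tangent t >= f(x_k) + f'(x_k) * (x - x_k) is a valid linear
    underestimator precisely because f is convex.
    """
    return [(fprime(xk), f(xk) - fprime(xk) * xk) for xk in points]

f = lambda x: x * np.log(x)        # convex on (0, inf)
fp = lambda x: np.log(x) + 1.0

cuts = tangent_cuts(f, fp, points=np.linspace(0.5, 4.0, 6))
x = np.linspace(0.5, 4.0, 1000)
relax = np.max([a * x + b for a, b in cuts], axis=0)  # polyhedral underestimator

# The relaxation never exceeds f, and the gap shrinks as cuts are added.
assert np.all(relax <= f(x) + 1e-9)
print("max gap with 6 cuts:", np.max(f(x) - relax))
```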

1,205 citations


Book
01 Jan 2005
TL;DR: This book covers line search descent methods for unconstrained minimization, standard and new gradient-based methods for constrained optimization, and the simplex method for linear programming problems.
Abstract: Preface Table of Notation Chapter 1. Introduction Chapter 2. Line Search Descent Methods for Unconstrained Minimization Chapter 3. Standard Methods for Constrained Optimization Chapter 4. New Gradient-Based Trajectory and Approximation Methods Chapter 5. Example Problems Chapter 6. Some Theorems Chapter 7. The Simplex Method for Linear Programming Problems Bibliography Index
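
For readers new to the topic, a minimal linear program solved end-to-end with an off-the-shelf solver (scipy's HiGHS backend; the numbers are arbitrary) looks like this:

```python
from scipy.optimize import linprog

# maximize 3x + 2y  subject to  x + y <= 4,  x + 3y <= 6,  x, y >= 0.
# linprog minimizes, so negate the objective.
res = linprog(c=[-3, -2],
              A_ub=[[1, 1], [1, 3]],
              b_ub=[4, 6],
              bounds=[(0, None), (0, None)],
              method="highs")
print(res.x, -res.fun)   # optimal vertex (4, 0) with value 12
```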

810 citations


Journal ArticleDOI
TL;DR: This work develops and analyzes methods for computing provably optimal maximum a posteriori probability (MAP) configurations for a subclass of Markov random fields defined on graphs with cycles and establishes a connection between a certain LP relaxation of the mode-finding problem and a reweighted form of the max-product (min-sum) message-passing algorithm.
Abstract: We develop and analyze methods for computing provably optimal maximum a posteriori probability (MAP) configurations for a subclass of Markov random fields defined on graphs with cycles. By decomposing the original distribution into a convex combination of tree-structured distributions, we obtain an upper bound on the optimal value of the original problem (i.e., the log probability of the MAP assignment) in terms of the combined optimal values of the tree problems. We prove that this upper bound is tight if and only if all the tree distributions share an optimal configuration in common. An important implication is that any such shared configuration must also be a MAP configuration for the original distribution. Next we develop two approaches to attempting to obtain tight upper bounds: a) a tree-relaxed linear program (LP), which is derived from the Lagrangian dual of the upper bounds; and b) a tree-reweighted max-product message-passing algorithm that is related to but distinct from the max-product algorithm. In this way, we establish a connection between a certain LP relaxation of the mode-finding problem and a reweighted form of the max-product (min-sum) message-passing algorithm.
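
A minimal sketch of the LP relaxation in question, written over the local (marginal) polytope of a two-node, two-state model with illustrative potentials; since a single edge is a tree, the relaxation is tight here and the solver returns an integral MAP assignment:

```python
import numpy as np
from scipy.optimize import linprog

# Variables: mu1(0), mu1(1), mu2(0), mu2(1), mu12(0,0), mu12(0,1), mu12(1,0), mu12(1,1)
theta1 = np.array([0.2, 0.0])
theta2 = np.array([0.0, 0.3])
theta12 = np.array([0.5, 0.0, 0.0, 0.5])        # favours agreement

c = -np.concatenate([theta1, theta2, theta12])  # maximize  ->  minimize -objective

A_eq = np.array([
    [1, 1, 0, 0, 0, 0, 0, 0],   # mu1 sums to 1
    [0, 0, 1, 1, 0, 0, 0, 0],   # mu2 sums to 1
    [-1, 0, 0, 0, 1, 1, 0, 0],  # sum_b mu12(0,b) = mu1(0)
    [0, -1, 0, 0, 0, 0, 1, 1],  # sum_b mu12(1,b) = mu1(1)
    [0, 0, -1, 0, 1, 0, 1, 0],  # sum_a mu12(a,0) = mu2(0)
    [0, 0, 0, -1, 0, 1, 0, 1],  # sum_a mu12(a,1) = mu2(1)
])
b_eq = np.array([1, 1, 0, 0, 0, 0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 8, method="highs")
print(res.x)  # integral here: both nodes take state 1, the MAP assignment
```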

770 citations


Book
01 Jan 2005
TL;DR: It is shown that outward k-neighborliness is equivalent to the statement that, whenever y = Ax has a nonnegative solution with at most k nonzeros, it is the nonnegative solution to y = Ax having minimal sum.
Abstract: Consider an underdetermined system of linear equations y = Ax with known y and d × n matrix A. We seek the nonnegative x with the fewest nonzeros satisfying y = Ax. In general, this problem is NP-hard. However, for many matrices A there is a threshold phenomenon: if the sparsest solution is sufficiently sparse, it can be found by linear programming. We explain this by the theory of convex polytopes. Let aj denote the jth column of A, 1 ≤ j ≤ n, let a0 = 0 and P denote the convex hull of the aj. We say the polytope P is outwardly k-neighborly if every subset of k vertices not including 0 spans a face of P. We show that outward k-neighborliness is equivalent to the statement that, whenever y = Ax has a nonnegative solution with at most k nonzeros, it is the nonnegative solution to y = Ax having minimal sum. We also consider weak neighborliness, where the overwhelming majority of k-sets of ajs not containing 0 span a face of P. This implies that most nonnegative vectors x with k nonzeros are uniquely recoverable from y = Ax by linear programming. Numerous corollaries follow by invoking neighborliness results. For example, for most large n by 2n underdetermined systems having a solution with fewer nonzeros than roughly half the number of equations, the sparsest solution can be found by linear programming.
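
A toy instance of the recovery phenomenon described above, with illustrative dimensions; for a sufficiently sparse nonnegative x0, the minimal-sum LP solution typically coincides with it, as the neighborliness theory predicts:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
d, n, k = 20, 50, 3                 # underdetermined: 20 equations, 50 unknowns
A = rng.standard_normal((d, n))
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.uniform(1, 2, k)  # sparse, nonnegative
y = A @ x0

# Minimal-sum nonnegative solution: min 1'x  subject to  Ax = y, x >= 0.
res = linprog(c=np.ones(n), A_eq=A, b_eq=y, bounds=[(0, None)] * n, method="highs")
print("recovered sparsest solution:", np.allclose(res.x, x0, atol=1e-6))
```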

639 citations


Journal ArticleDOI
TL;DR: The definition of a pseudocodeword given here unifies other such notions known for iterative algorithms, including "stopping sets," "irreducible closed walks," "trellis cycles," "deviation sets," and "graph covers," and the fractional distance introduced here is a lower bound on the classical distance.
Abstract: A new method is given for performing approximate maximum-likelihood (ML) decoding of an arbitrary binary linear code based on observations received from any discrete memoryless symmetric channel. The decoding algorithm is based on a linear programming (LP) relaxation that is defined by a factor graph or parity-check representation of the code. The resulting "LP decoder" generalizes our previous work on turbo-like codes. A precise combinatorial characterization of when the LP decoder succeeds is provided, based on pseudocodewords associated with the factor graph. Our definition of a pseudocodeword unifies other such notions known for iterative algorithms, including "stopping sets," "irreducible closed walks," "trellis cycles," "deviation sets," and "graph covers." The fractional distance d_frac of a code is introduced, which is a lower bound on the classical distance. It is shown that the efficient LP decoder will correct up to ⌈d_frac/2⌉ - 1 errors and that there are codes with d_frac = Ω(n^(1-ε)). An efficient algorithm to compute the fractional distance is presented. Experimental evidence shows a similar performance on low-density parity-check (LDPC) codes between LP decoding and the min-sum and sum-product algorithms. Methods for tightening the LP relaxation to improve performance are also provided.
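
A sketch of the LP decoder on a small code, using the standard odd-subset ("forbidden set") inequalities; the parity-check matrix and channel costs are illustrative. If the optimum is integral it is provably the ML codeword; otherwise it is a fractional pseudocodeword:

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Parity-check matrix of the (7,4) Hamming code.
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])

# One inequality per check j and odd-sized subset S of its neighbourhood N(j):
#   sum_{i in S} f_i - sum_{i in N(j)\S} f_i <= |S| - 1.
rows, rhs = [], []
for check in H:
    nbrs = np.flatnonzero(check)
    for size in range(1, len(nbrs) + 1, 2):
        for S in itertools.combinations(nbrs, size):
            a = np.zeros(H.shape[1])
            a[nbrs] = -1.0
            a[list(S)] = 1.0
            rows.append(a)
            rhs.append(len(S) - 1)

# BSC: transmit all-zeros, channel flips bit 0; cost +1 for y_i = 0, -1 for y_i = 1.
y = np.array([1, 0, 0, 0, 0, 0, 0])
gamma = 1.0 - 2.0 * y

res = linprog(gamma, A_ub=np.array(rows), b_ub=rhs,
              bounds=[(0, 1)] * 7, method="highs")
f = res.x
print("decoded:", np.round(f).astype(int),
      "(integral => provably ML)" if np.allclose(f, np.round(f))
      else "(fractional pseudocodeword)")
```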

636 citations


Journal ArticleDOI
TL;DR: In this article, a stochastic security-constrained multi-period electricity market clearing problem with unit commitment is formulated, where reserve services are determined by economically penalizing the operation of the market by the expected load not served.
Abstract: The first of this two-paper series formulates a stochastic security-constrained multi-period electricity market-clearing problem with unit commitment. The stochastic security criterion accounts for a pre-selected set of random generator and line outages with known historical failure rates and involuntary load shedding as optimization variables. Unlike the classical deterministic reserve-constrained unit commitment, here the reserve services are determined by economically penalizing the operation of the market by the expected load not served. The proposed formulation is a stochastic programming problem that optimizes, concurrently with the pre-contingency social welfare, the expected operating costs associated with the deployment of the reserves following the contingencies. This stochastic programming formulation is solved in the second companion paper using mixed-integer linear programming methods. Two cases are presented: a small transmission-constrained three-bus network scheduled over a horizon of four hours and the IEEE Reliability Test System scheduled over 24 h. The impact on the resulting generation and reserve schedules of transmission constraints and generation ramp limits, of demand-side reserve, of the value of load not served, and of the constitution of the pre-selected set of contingencies are assessed.
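
A drastically simplified two-unit, one-contingency sketch of the central idea that reserves are scheduled implicitly by penalizing expected load not served (all numbers hypothetical, and the unit-commitment and network aspects are omitted):

```python
import numpy as np
from scipy.optimize import linprog

# Variables: [g1, g2, u2, L]
#   g1, g2 : pre-contingency dispatch of units 1 and 2
#   u2     : extra output deployed by unit 2 if unit 1 fails
#   L      : load shed in that contingency
D, cap1, cap2 = 120.0, 100.0, 120.0
c1, c2, voll, prob = 10.0, 30.0, 1000.0, 0.05   # prob = failure rate of unit 1

c = np.array([c1, c2, prob * c2, prob * voll])  # expected operating cost
A_eq = np.array([[1, 1, 0, 0],     # pre-contingency balance: g1 + g2 = D
                 [0, 1, 1, 1]])    # post-contingency balance: g2 + u2 + L = D
b_eq = np.array([D, D])
A_ub = np.array([[0, 1, 1, 0]])    # g2 + u2 <= cap2 (reserve must fit on unit 2)
b_ub = np.array([cap2])
bounds = [(0, cap1), (0, cap2), (0, None), (0, D)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
g1, g2, u2, L = res.x
print(f"dispatch g1={g1:.0f}, g2={g2:.0f}; reserve used u2={u2:.0f}, shed L={L:.0f}")
```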

459 citations


Journal ArticleDOI
TL;DR: A heuristic algorithm, called Smart Pairing and INtelligent Disc Search (SPINDS), is developed that effectively transforms a complex MINLP problem into a linear programming (LP) problem without losing critical points in its search space.
Abstract: Wireless sensor networks that operate on batteries have limited network lifetime. There have been extensive recent research efforts on how to design protocols and algorithms to prolong network lifetime. However, due to energy constraint, even under the most efficient protocols and algorithms, the network lifetime may still be unable to meet the mission's requirements. In this paper, we consider the energy provisioning (EP) problem for a two-tiered wireless sensor network. In addition to provisioning additional energy on the existing nodes, we also consider deploying relay nodes (RNs) into the network to mitigate network geometric deficiencies and prolong network lifetime. We formulate the joint problem of EP and RN placement (EP-RNP) into a mixed-integer nonlinear programming (MINLP) problem. Since an MINLP problem is NP-hard in general, and even state-of-the-art software and techniques are unable to offer satisfactory solutions, we develop a heuristic algorithm, called Smart Pairing and INtelligent Disc Search (SPINDS), to address this problem. We show a number of novel algorithmic design techniques in the design of SPINDS that effectively transform a complex MINLP problem into a linear programming (LP) problem without losing critical points in its search space. Through numerical results, we show that SPINDS offers a very attractive solution and some important insights to the EP-RNP problem.

420 citations


Journal ArticleDOI
TL;DR: This paper reviews the advances in mixed-integer linear programming (MILP) based approaches for the scheduling of chemical processing systems, focusing on the short-term scheduling of general network-represented processes.
Abstract: This paper reviews the advances in mixed-integer linear programming (MILP) based approaches for the scheduling of chemical processing systems. We focus on the short-term scheduling of general network-represented processes. First, the various mathematical models that have been proposed in the literature are classified mainly based on the time representation. Discrete-time and continuous-time models are presented along with their strengths and limitations. Several classes of approaches for improving the computational efficiency in the solution of MILP problems are discussed. Furthermore, a summary of computational experiences and applications is provided. The paper concludes with perspectives on future research directions for MILP-based process scheduling technologies.

320 citations


Book ChapterDOI
22 Aug 2005
TL;DR: This paper presents a heuristic algorithm, namely the Capacity Constrained Route Planner (CCRP), which produces a sub-optimal solution for the evacuation planning problem and significantly reduces the computational cost compared to the linear programming approach that produces optimal solutions.
Abstract: Evacuation planning is critical for numerous important applications, e.g. disaster emergency management and homeland defense preparation. Efficient tools are needed to produce evacuation plans that identify routes and schedules to evacuate affected populations to safety in the event of natural disasters or terrorist attacks. The existing linear programming approach uses time-expanded networks to compute the optimal evacuation plan and requires a user-provided upper bound on evacuation time. It suffers from high computational cost and may not scale up to large transportation networks in urban scenarios. In this paper we present a heuristic algorithm, namely the Capacity Constrained Route Planner (CCRP), which produces a sub-optimal solution for the evacuation planning problem. CCRP models capacity as a time series and uses a capacity constrained routing approach to incorporate route capacity constraints. It addresses the limitations of the linear programming approach by using only the original evacuation network and by not requiring prior knowledge of evacuation time. Performance evaluation on various network configurations shows that the CCRP algorithm produces high quality solutions and significantly reduces the computational cost compared to the linear programming approach that produces optimal solutions. CCRP is also scalable in the number of evacuees and the size of the network.

268 citations


Book ChapterDOI
17 Jan 2005
TL;DR: The method generalizes similar analyses in the interval, octagon, and octahedra domains without resorting to polyhedral manipulations; its performance is demonstrated on some benchmark programs.
Abstract: We present a method for generating linear invariants for large systems. The method performs forward propagation in an abstract domain consisting of arbitrary polyhedra of a predefined fixed shape. The basic operations on the domain like abstraction, intersection, join and inclusion tests are all posed as linear optimization queries, which can be solved efficiently by existing LP solvers. The number and dimensionality of the LP queries are polynomial in the program dimensionality, size and the number of target invariants. The method generalizes similar analyses in the interval, octagon, and octahedra domains, without resorting to polyhedral manipulations. We demonstrate the performance of our method on some benchmark programs.
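
A sketch of the domain operations posed as LP queries: for a fixed template of directions, the abstraction of a polyhedron is one linprog call per template row, and join is a componentwise maximum of the resulting bounds (template and polyhedra are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

def abstract(T, A, b):
    """Tightest d with T x <= d valid on {x : A x <= b}: one LP per template row."""
    return np.array([-linprog(-t, A_ub=A, b_ub=b,
                              bounds=[(None, None)] * len(t),
                              method="highs").fun for t in T])

# Template: interval plus octagon-style directions in 2D.
T = np.array([[1, 0], [-1, 0], [0, 1], [0, -1], [1, 1], [1, -1]])

# Two concrete states (bounded polyhedra) reaching a join point.
A1, b1 = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]]), np.array([2, 0, 3, 0])
A2, b2 = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]]), np.array([5, -3, 1, 1])

d_join = np.maximum(abstract(T, A1, b1), abstract(T, A2, b2))  # join = weakest bounds
print(d_join)   # the invariant T x <= d_join holds on both branches
```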

235 citations


Journal ArticleDOI
TL;DR: This analysis is the first large-scale demonstration that LP-based approaches are highly effective in finding optimal (and successive near-optimal) solutions for the side-chain positioning problem.
Abstract: Motivation: Side-chain positioning is a central component of homology modeling and protein design. In a common formulation of the problem, the backbone is fixed, side-chain conformations come from a rotamer library, and a pairwise energy function is optimized. It is NP-complete to find even a reasonable approximate solution to this problem. We seek to put this hardness result into practical context. Results: We present an integer linear programming (ILP) formulation of side-chain positioning that allows us to tackle large problem sizes. We relax the integrality constraint to give a polynomial-time linear programming (LP) heuristic. We apply LP to position side chains on native and homologous backbones and to choose side chains for protein design. Surprisingly, when positioning side chains on native and homologous backbones, optimal solutions using a simple, biologically relevant energy function can usually be found using LP. On the other hand, the design problem often cannot be solved using LP directly; however, optimal solutions for large instances can still be found using the computationally more expensive ILP procedure. While different energy functions also affect the difficulty of the problem, the LP/ILP approach is able to find optimal solutions. Our analysis is the first large-scale demonstration that LP-based approaches are highly effective in finding optimal (and successive near-optimal) solutions for the side-chain positioning problem. Availability: The source code for generating the ILP given a file of pairwise energies between rotamers is available online at http://compbio.cs.princeton.edu/scplp Contact: msingh@cs.princeton.edu

Proceedings ArticleDOI
23 Oct 2005
TL;DR: The construction yields the first truthful mechanisms with approximation guarantees for a variety of multi-parameter domains and can be seen as a way of exploiting VCG in a computationally tractable way even when the underlying social-welfare maximization problem is NP-hard.
Abstract: We give a general technique to obtain approximation mechanisms that are truthful in expectation. We show that for packing domains, any α-approximation algorithm that also bounds the integrality gap of the LP relaxation of the problem by α can be used to construct an α-approximation mechanism that is truthful in expectation. This immediately yields a variety of new and significantly improved results for various problem domains and, furthermore, yields truthful (in expectation) mechanisms with guarantees that match the best known approximation guarantees when truthfulness is not required. In particular, we obtain the first truthful mechanisms with approximation guarantees for a variety of multi-parameter domains. We obtain truthful (in expectation) mechanisms achieving approximation guarantees of O(√m) for combinatorial auctions (CAs), (1 + ε) for multiunit CAs with B = Ω(log m) copies of each item, and 2 for multiparameter knapsack problems (multiunit auctions). Our construction is based on considering an LP relaxation of the problem and using the classic VCG mechanism by W. Vickrey (1961), E. Clarke (1971), and T. Groves (1973) to obtain a truthful mechanism in this fractional domain. We argue that the (fractional) optimal solution scaled down by α, where α is the integrality gap of the problem, can be represented as a convex combination of integer solutions, and by viewing this convex combination as specifying a probability distribution over integer solutions, we get a randomized, truthful in expectation mechanism. Our construction can be seen as a way of exploiting VCG in a computationally tractable way even when the underlying social-welfare maximization problem is NP-hard.

Journal ArticleDOI
TL;DR: This paper generalizes the "terrorist threat problem" first defined by Salmerón, Wood, and Baldick by formulating it as a bilevel programming problem, and converts it into an equivalent single-level mixed-integer linear program by replacing the inner optimization by its Karush-Kuhn-Tucker optimality conditions.
Abstract: This paper generalizes the "terrorist threat problem" first defined by Salmerón, Wood, and Baldick by formulating it as a bilevel programming problem. Specifically, the bilevel model allows one to define different objective functions for the terrorist and the system operator as well as permitting the imposition of constraints on the outer optimization that are functions of both the inner and outer variables. This degree of flexibility is not possible through existing max-min models. The bilevel formulation is investigated through a problem in which the goal of the destructive agent is to minimize the number of power system components that must be destroyed in order to cause a loss of load greater than or equal to a specified level. This goal is tempered by the logical assumption that, following a deliberate outage, the system operator will implement all feasible corrective actions to minimize the level of system load shed. The resulting nonlinear mixed-integer bilevel programming formulation is transformed into an equivalent single-level mixed-integer linear program by replacing the inner optimization by its Karush-Kuhn-Tucker optimality conditions and converting a number of nonlinearities to linear equivalents using some well-known integer algebra results. The equivalent formulation has been tested on two case studies, including the 24-bus IEEE Reliability Test System, through the use of commercially available software.
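
The integer-algebra linearization mentioned above can be illustrated on its most common instance: replacing a product z = x·y of a binary x and a bounded continuous y by four linear constraints (a minimal sketch with hypothetical numbers, not the paper's actual model):

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Linearize z = x * y with x binary and 0 <= y <= M via:
#   z <= M x,   z <= y,   z >= y - M (1 - x),   z >= 0.
M = 10.0
# variables: [x, y, z]; maximize 3z - y  ->  minimize y - 3z
c = np.array([0.0, 1.0, -3.0])
A = np.array([[-M, 0, 1],    # z - M x       <= 0
              [0, -1, 1],    # z - y         <= 0
              [M, 1, -1]])   # y - z + M x   <= M
con = LinearConstraint(A, -np.inf, [0.0, 0.0, M])
res = milp(c, constraints=con, integrality=[1, 0, 0],
           bounds=Bounds([0, 0, 0], [1, 4.0, np.inf]))
x, y, z = res.x
print(x, y, z, "z equals x*y:", np.isclose(z, x * y))
```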

Proceedings ArticleDOI
07 Aug 2005
TL;DR: This paper proposes a novel inference procedure based on integer linear programming (ILP) that extends CRF models to naturally and efficiently support general constraint structures; experimental evidence is supplied in the context of an important NLP problem, semantic role labeling.
Abstract: Inference in Conditional Random Fields and Hidden Markov Models is done using the Viterbi algorithm, an efficient dynamic programming algorithm. In many cases, general (non-local and non-sequential) constraints may exist over the output sequence, but cannot be incorporated and exploited in a natural way by this inference procedure. This paper proposes a novel inference procedure based on integer linear programming (ILP) and extends CRF models to naturally and efficiently support general constraint structures. For sequential constraints, this procedure reduces to simple linear programming as the inference process. Experimental evidence is supplied in the context of an important NLP problem, semantic role labeling.

Journal ArticleDOI
TL;DR: In this article, a binary expansion (BE) solution approach is proposed to solve the problem of strategic bidding under uncertainty in short-term electricity markets, which is applicable to pure price, pure quantity, or joint price/quantity bidding models.
Abstract: This work presents a binary expansion (BE) solution approach to the problem of strategic bidding under uncertainty in short-term electricity markets. The BE scheme is used to transform the products of variables in the nonlinear bidding problem into a mixed integer linear programming formulation, which can be solved by commercially available computational systems. The BE scheme is applicable to pure price, pure quantity, or joint price/quantity bidding models. It is also possible to represent transmission networks, uncertainties (scenarios for price, quantity, plant availability, and load), financial instruments, capacity reinforcement decisions, and unit commitment. The application of the methodology is illustrated in case studies, with configurations derived from the 80-GW Brazilian system.

Book ChapterDOI
17 Jan 2005
TL;DR: This new approach exploits the recent progress in the numerical resolution of linear or bilinear matrix inequalities by semidefinite programming using efficient polynomial primal/dual interior point methods generalizing those well-known in linear programming to convex optimization.
Abstract: In order to verify semialgebraic programs, we automatize the Floyd/Naur/Hoare proof method. The main task is to automatically infer valid invariants and rank functions. First we express the program semantics in polynomial form. Then the unknown rank function and invariants are abstracted in parametric form. The implication in the Floyd/Naur/Hoare verification conditions is handled by abstraction into numerical constraints by Lagrangian relaxation. The remaining universal quantification is handled by semidefinite programming relaxation. Finally the parameters are computed using semidefinite programming solvers. This new approach exploits the recent progress in the numerical resolution of linear or bilinear matrix inequalities by semidefinite programming using efficient polynomial primal/dual interior point methods generalizing those well-known in linear programming to convex optimization. The framework is applied to invariance and termination proof of sequential, nondeterministic, concurrent, and fair parallel imperative polynomial programs and can easily be extended to other safety and liveness properties.

Journal ArticleDOI
TL;DR: This work presents improved combinatorial approximation algorithms for the uncapacitated facility location problem and also considers a variant of the capacitated facility location problem, for which improved approximation algorithms are presented.
Abstract: We present improved combinatorial approximation algorithms for the uncapacitated facility location problem. Two central ideas in most of our results are cost scaling and greedy improvement. We present a simple greedy local search algorithm which achieves an approximation ratio of $2.414+\epsilon$ in $\tilde{O}(n^2/\epsilon)$ time. This also yields a bicriteria approximation tradeoff of $(1+\gamma,1+2/\gamma)$ for facility cost versus service cost which is better than previously known tradeoffs and close to the best possible. Combining greedy improvement and cost scaling with a recent primal-dual algorithm for facility location due to Jain and Vazirani, we get an approximation ratio of $1.853$ in $\tilde{O}(n^3)$ time. This is very close to the approximation guarantee of the best known algorithm which is linear programming (LP)-based. Further, combined with the best known LP-based algorithm for facility location, we get a very slight improvement in the approximation factor for facility location, achieving $1.728$. We also consider a variant of the capacitated facility location problem and present improved approximation algorithms for this.
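
A bare-bones version of the greedy local-search component on a random instance (the cost scaling and primal-dual ingredients of the paper are omitted; instance sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n_fac, n_cli = 8, 30
f = rng.uniform(5, 15, n_fac)                 # facility opening costs
c = rng.uniform(0, 10, (n_fac, n_cli))        # client service costs

def cost(open_set):
    idx = sorted(open_set)
    return f[idx].sum() + c[idx, :].min(axis=0).sum()

open_set = {0}                                # any nonempty start is feasible
improved = True
while improved:                               # accept add/drop/swap moves while cost drops
    improved = False
    moves = [open_set | {i} for i in range(n_fac) if i not in open_set]
    moves += [open_set - {i} for i in open_set if len(open_set) > 1]
    moves += [(open_set - {i}) | {j} for i in open_set
              for j in range(n_fac) if j not in open_set]
    best = min(moves, key=cost)
    if cost(best) < cost(open_set) - 1e-12:
        open_set, improved = best, True

print("open facilities:", sorted(open_set), "cost:", round(cost(open_set), 2))
```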

Journal ArticleDOI
TL;DR: A set of correlated sources located at the nodes of a network and a set of sinks that are the destinations for some of the sources are considered; cost functions that are the product of a function of the rate and a function of the path weight are minimized, for both the data-gathering scenario relevant in sensor networks and general traffic matrices relevant for general networks.
Abstract: Consider a set of correlated sources located at the nodes of a network, and a set of sinks that are the destinations for some of the sources. The minimization of cost functions which are the product of a function of the rate and a function of the path weight is considered, for both the data-gathering scenario, which is relevant in sensor networks, and general traffic matrices, relevant for general networks. The minimization is achieved by jointly optimizing a) the transmission structure, which is shown to consist in general of a superposition of trees, and b) the rate allocation across the source nodes, which is done by Slepian-Wolf coding. The overall minimization can be achieved in two concatenated steps. First, the optimal transmission structure is found, which in general amounts to finding a Steiner tree, and second, the optimal rate allocation is obtained by solving an optimization problem with cost weights determined by the given optimal transmission structure, and with linear constraints given by the Slepian-Wolf rate region. For the case of data gathering, the optimal transmission structure is fully characterized and a closed-form solution for the optimal rate allocation is provided. For the general case of an arbitrary traffic matrix, the problem of finding the optimal transmission structure is NP-complete. For large networks, in some simplified scenarios, the total costs associated with Slepian-Wolf coding and explicit communication (conditional encoding based on explicitly communicated side information) are compared. Finally, the design of decentralized algorithms for the optimal rate allocation is analyzed.
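
Once a transmission structure fixes the cost weights, the rate-allocation step is a small LP over the Slepian-Wolf region; a two-source sketch with illustrative entropies and weights:

```python
from scipy.optimize import linprog

# Two correlated sources: minimize w1*R1 + w2*R2 over the Slepian-Wolf region
#   R1 >= H(X1|X2),  R2 >= H(X2|X1),  R1 + R2 >= H(X1,X2).
H1_given_2, H2_given_1, H12 = 0.3, 0.4, 1.2   # illustrative entropies (bits)
w = [2.0, 1.0]                                 # path-weight costs to the sink

res = linprog(w,
              A_ub=[[-1, 0], [0, -1], [-1, -1]],
              b_ub=[-H1_given_2, -H2_given_1, -H12],
              bounds=[(0, None)] * 2,
              method="highs")
print(res.x)  # the cheaper path carries the slack: R1 = 0.3, R2 = 0.9
```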

01 Jan 2005
TL;DR: A novel procedure is presented for the identification of hybrid systems in the class of piecewise ARX systems that facilitates the use of available a priori knowledge on the system to be identified, but can also be used as a black-box method.
Abstract: In this paper, we present a novel procedure for the identification of hybrid systems in the class of piecewise ARX systems. The presented method facilitates the use of available a priori knowledge on the system to be identified, but can also be used as a black-box method. We treat the unknown parameters as random variables, described by their probability density functions. The identification problem is posed as the problem of computing the a posteriori probability density function of the model parameters, and subsequently relaxed until a practically implementable method is obtained. A particle filtering method is used for a numerical implementation of the proposed procedure. A modified version of the multicategory robust linear programming classification procedure, which uses the information derived in the previous steps of the identification algorithm, is used for estimating the partition of the piecewise ARX map. The proposed procedure is applied for the identification of a component placement process in pick-and-place machines.

Book
16 Nov 2005
TL;DR: This book presents the fundamentals of convex sets and convex functions, derivatives and the subdifferential, the Fenchel conjugate, duality, linear programming and game theory, hypertopologies, continuity of operations between functions, and well-posed and generically well-posed problems.
Abstract: Convex sets and convex functions: the fundamentals.- Continuity and ?(X).- The derivatives and the subdifferential.- Minima and quasi minima.- The Fenchel conjugate.- Duality.- Linear programming and game theory.- Hypertopologies, hyperconvergences.- Continuity of some operations between functions.- Well-posed problems.- Generic well-posedness.- More exercises.

Journal ArticleDOI
TL;DR: It is shown here that the proposed neural network is stable in the sense of Lyapunov and globally convergent to an optimal solution within a finite time under the condition that the objective function is strictly convex.
Abstract: In this paper, we propose a recurrent neural network for solving nonlinear convex programming problems with linear constraints. The proposed neural network has a simpler structure and a lower complexity for implementation than the existing neural networks for solving such problems. It is shown here that the proposed neural network is stable in the sense of Lyapunov and globally convergent to an optimal solution within a finite time under the condition that the objective function is strictly convex. Compared with the existing convergence results, the present results do not require Lipschitz continuity condition on the objective function. Finally, examples are provided to show the applicability of the proposed neural network.

Journal ArticleDOI
TL;DR: Using results from linear programming theory and some basic linearization of products of binary-binary or binary-continuous variables, (ST-MIBLP) is recast into a standard (one-level) mixed-integer linear program (ST-MILP) with no more binary variables than in the original.
Abstract: This paper presents a solution procedure for the mixed-integer bilevel programming model of the electric grid security under disruptive threat problem, here concisely denoted by (ST-MIBLP), that was recently reported. Using results from linear programming theory and some basic linearization of products of binary-binary or binary-continuous variables, we recast (ST-MIBLP) into a standard (one-level) mixed-integer linear program (ST-MILP) with no more binary variables than in the original (ST-MIBLP). This transformation provides a framework for globally solving (ST-MIBLP) using available mixed-integer linear programming solvers. Some numerical results obtained by the new method are compared with those recently published, based on IEEE Reliability Test Systems.

Journal ArticleDOI
01 May 2005
TL;DR: A special nonlinear bilevel programming problem is transformed into an equivalent single objective nonlinear programming problem and a new evolutionary algorithm is proposed that can be used to handle nonlinear BLPPs with nondifferentiable leader's objective functions.
Abstract: In this paper, a special nonlinear bilevel programming problem (nonlinear BLPP) is transformed into an equivalent single objective nonlinear programming problem. To solve the equivalent problem effectively, we first construct a specific optimization problem with two objectives. By solving the specific problem, we can decrease the leader's objective value, identify the quality of any feasible solution from infeasible solutions and the quality of two feasible solutions for the equivalent single objective optimization problem, force the infeasible solutions moving toward the feasible region, and improve the feasible solutions gradually. We then propose a new constraint-handling scheme and a specific-design crossover operator. The new constraint-handling scheme can make the individuals satisfy all linear constraints exactly and the nonlinear constraints approximately. The crossover operator can generate high quality potential offspring. Based on the constraint-handling scheme and the crossover operator, we propose a new evolutionary algorithm and prove its global convergence. A distinguishing feature of the algorithm is that it can be used to handle nonlinear BLPPs with nondifferentiable leader's objective functions. Finally, simulations on 31 benchmark problems, 12 of which have nondifferentiable leader's objective functions, are made and the results demonstrate the effectiveness of the proposed algorithm.

Journal ArticleDOI
TL;DR: Although AS_i-best does not perform as well as other algorithms from the literature for the Hanoi Problem, it successfully finds the known least cost solution for the larger Doubled New York Tunnels Problem.
Abstract: Much research has been carried out on the optimization of water distribution systems (WDSs). Within the last decade, the focus has shifted from the use of traditional optimization methods, such as linear and nonlinear programming, to the use of heuristics derived from nature (HDNs), namely, genetic algorithms, simulated annealing and more recently, ant colony optimization (ACO), an optimization algorithm based on the foraging behavior of ants. HDNs have been seen to perform better than more traditional optimization methods and amongst the HDNs applied to WDS optimization, a recent study found ACO to outperform other HDNs for two well-known case studies. One of the major problems that exists with the use of HDNs, particularly ACO, is that their searching behavior and, hence, performance, is governed by a set of user-selected parameters. Consequently, a large calibration phase is required for successful application to new problems. The aim of this paper is to provide a deeper understanding of ACO parameters and to develop parametric guidelines for the application of ACO to WDS optimization. For the adopted ACO algorithm, called AS_i-best (as it uses an iteration-best pheromone updating scheme), seven parameters are used: two decision policy control parameters α and β, initial pheromone value τ0, pheromone persistence factor ρ, number of ants m, pheromone addition factor Q, and the penalty factor (PEN). Deterministic and semi-deterministic expressions for Q and PEN are developed. For the remaining parameters, a parametric study is performed, from which guidelines for appropriate parameter settings are developed. Based on the use of these heuristics, the performance of AS_i-best was assessed for two case studies from the literature (the New York Tunnels Problem, and the Hanoi Problem) and an additional larger case study (the Doubled New York Tunnels Problem). The results show that AS_i-best achieves the best performance presented in the literature, in terms of efficiency and solution quality, for the New York Tunnels Problem. Although AS_i-best does not perform as well as other algorithms from the literature for the Hanoi Problem (a notably difficult problem), it successfully finds the known least cost solution for the larger Doubled New York Tunnels Problem.

Book
07 Sep 2005
TL;DR: This book develops duality theory for linear optimization and a polynomial algorithm for the self-dual model, followed by the logarithmic barrier and target-following approaches to interior point methods.
Abstract: List of figures.- List of tables.- Preface.- Acknowledgements.- Introduction.- I. Introduction: Theory and Complexity.- Duality Theory for Linear Optimization.- A Polynomial Algorithm for the Self-dual Model.- Solving the Canonical Problem.- II. The Logarithmic Barrier Approach.- Preliminaries.- The Dual Logarithmic Barrier Method.- The Primal-Dual Logarithmic Barrier Method.- Initialization.- III. The Target-Following Approach.- Preliminaries.- The Primal-Dual Newton Method.- Applications.- The Dual Newton Method.- The Primal Newton Method.- Application to the Method of Centers.- IV. Miscellaneous Topics.- Karmarkar's Projective Method.- More Properties of the Central Path.- Partial Updating.- Higher-Order Methods.- Parametric and Sensitivity Analysis.- Implementing Interior Point Methods.- Appendices.- Bibliography.- Author Index.- Subject Index.- Symbol Index.

Journal ArticleDOI
TL;DR: This article shows that the convergence behavior of the linear programming SVM is almost the same as that of the quadratic programming SVM, and proposes an upper bound for the misclassification error for general probability distributions.
Abstract: Support vector machine (SVM) soft margin classifiers are important learning algorithms for classification problems. They can be stated as convex optimization problems and are suitable for a large data setting. Linear programming SVM classifiers are especially efficient for very large size samples. But little is known about their convergence, compared with the well-understood quadratic programming SVM classifier. In this article, we point out the difficulty and provide an error analysis. Our analysis shows that the convergence behavior of the linear programming SVM is almost the same as that of the quadratic programming SVM. This is implemented by setting a stepping-stone between the linear programming SVM and the classical 1-norm soft margin classifier. An upper bound for the misclassification error is presented for general probability distributions. Explicit learning rates are derived for deterministic and weakly separable distributions, and for distributions satisfying some Tsybakov noise condition.
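
The linear programming SVM discussed here can be written by putting a 1-norm on the weight vector and splitting it as w = u - v; a minimal sketch on synthetic data (parameters and data are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, d, C = 40, 2, 1.0
X = np.vstack([rng.normal(-1.5, 1, (n // 2, d)), rng.normal(1.5, 1, (n // 2, d))])
y = np.r_[-np.ones(n // 2), np.ones(n // 2)]

# 1-norm soft margin SVM as an LP:  min  sum(u + v) + C * sum(xi)
# s.t.  y_i ((u - v) . x_i + b) >= 1 - xi_i,  u, v, xi >= 0,  b free,
# where w = u - v, so |w| = u + v at optimality.
# variable order: [u (d), v (d), b, xi (n)]
c = np.r_[np.ones(2 * d), 0.0, C * np.ones(n)]
Yx = y[:, None] * X
A_ub = np.hstack([-Yx, Yx, -y[:, None], -np.eye(n)])   # -y_i(w.x_i + b) - xi_i <= -1
b_ub = -np.ones(n)
bounds = [(0, None)] * (2 * d) + [(None, None)] + [(0, None)] * n

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
w = res.x[:d] - res.x[d:2 * d]
b = res.x[2 * d]
print("train accuracy:", np.mean(np.sign(X @ w + b) == y))
```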

Proceedings ArticleDOI
24 Apr 2005
TL;DR: This paper focuses on analytically solving the linear program for some simple regular network topologies, and shows that the simple collection scheme of transmitting only to nearest neighbors, yields a nearly optimal lifetime in a scaling sense.
Abstract: The functional lifetime of a sensor network is defined as the maximum number of times a certain data collection function or task can be carried out without any node running out of energy. The specific task considered in this paper is that of communicating a specified quantity of information from each sensor to a collector node. The problem of finding the communication scheme which maximizes functional lifetime can be formulated as a linear program, under "fluid-like" assumptions on information bits. This paper focuses on analytically solving the linear program for some simple regular network topologies. The two topologies considered are a regular linear array, and a regular two-dimensional network. In the linear case, an upper bound on functional lifetime is derived, as a function of the initial energies and quantities of data held by the sensors. Under some assumptions on the relative amounts of the energies and data, this upper bound is shown to be achievable, and the exact form of the optimal communication strategy is derived. For the regular planar network, upper and lower bounds on functional lifetime, differing only by a constant factor, are obtained. Finally, it is shown that the simple collection scheme of transmitting only to nearest neighbors, yields a nearly optimal lifetime in a scaling sense.
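
A sketch of the lifetime linear program for the regular linear array, with total traffic volumes as flow variables and the lifetime T as the objective (the transmit-only energy model and sizes are illustrative, not the paper's exact setup):

```python
import numpy as np
from scipy.optimize import linprog

# Regular linear array: nodes 1..n at unit spacing, collector at position 0.
# q[(i, j)] = total bits node i transmits directly to j (any j < i).
n = 4
E = 100.0 * np.ones(n + 1)            # initial energy of each node (index 0 unused)
r = np.ones(n + 1)                    # data generated per unit of lifetime
energy_per_bit = lambda i, j: float((i - j) ** 2)   # illustrative transmit cost

pairs = [(i, j) for i in range(1, n + 1) for j in range(i)]
col = {p: k for k, p in enumerate(pairs)}
nv = len(pairs) + 1                   # flow variables plus the lifetime T (last)

A_eq, b_eq = np.zeros((n, nv)), np.zeros(n)
A_ub, b_ub = np.zeros((n, nv)), E[1:].copy()
for i in range(1, n + 1):
    for (src, dst), k in col.items():
        if src == i:
            A_eq[i - 1, k] += 1.0                       # bits sent out by i
            A_ub[i - 1, k] = energy_per_bit(src, dst)   # energy spent sending them
        if dst == i:
            A_eq[i - 1, k] -= 1.0                       # bits relayed through i
    A_eq[i - 1, -1] = -r[i]           # conservation: out - in = r_i * T

c = np.zeros(nv); c[-1] = -1.0        # maximize lifetime T
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * nv, method="highs")
print("maximum functional lifetime T =", round(-res.fun, 2))
```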

Journal ArticleDOI
TL;DR: The results demonstrate that the extended Kuhn-Tucker approach can solve a wider class of linear BLP problems than current capabilities permit.

Journal ArticleDOI
TL;DR: This work develops and illustrates a practical method for sizing agent pools using stochastic fluid models, which reduces the staffing problem to a multidimensional newsvendor problem that can be solved numerically by a combination of linear programming and Monte Carlo simulation.
Abstract: We consider a call center model with m input flows and r pools of agents; the m-vector of instantaneous arrival rates is allowed to be time dependent and to vary stochastically. Seeking to optimize the trade-off between personnel costs and abandonment penalties, we develop and illustrate a practical method for sizing the r agent pools. Using stochastic fluid models, this method reduces the staffing problem to a multidimensional newsvendor problem, which can be solved numerically by a combination of linear programming and Monte Carlo simulation. Numerical examples are presented, and in all cases the pool sizes derived by means of the proposed method are very close to optimal.
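
A drastically simplified single-pool version of the reduction: with a sampled arrival-rate distribution, staffing cost c per agent, and abandonment penalty p, the fluid-model staffing problem becomes a newsvendor problem whose solution is a Monte Carlo quantile (all parameters hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
c, p = 1.0, 4.0                       # cost per agent vs abandonment penalty rate
Lam = rng.lognormal(mean=3.0, sigma=0.4, size=10_000)   # sampled arrival rates

# One-pool fluid model: minimize  c*s + p*E[(Lam - s)^+]  over the staffing level s.
# This is a newsvendor problem; the optimum is the quantile  s* = F^{-1}(1 - c/p).
s_star = np.quantile(Lam, 1.0 - c / p)

# Sanity check by brute-force Monte Carlo evaluation of the objective.
grid = np.linspace(Lam.min(), Lam.max(), 200)
obj = c * grid + p * np.maximum(Lam[:, None] - grid, 0.0).mean(axis=0)
print(f"quantile solution s* = {s_star:.1f}, grid search = {grid[obj.argmin()]:.1f}")
```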

Journal ArticleDOI
TL;DR: In this article, a new approach based on the "path-to-node" concept is presented, allowing both topological and electrical constraints to be algebraically formulated before the actual radial configuration is determined.
Abstract: This paper is devoted to efficiently modeling the connectivity of distribution networks, which are structurally meshed but radially operated. A new approach, based on the "path-to-node" concept, is presented, allowing both topological and electrical constraints to be algebraically formulated before the actual radial configuration is determined. In order to illustrate the possibilities of the proposed framework, the problem of network reconfiguration for power loss reduction is considered. Two different optimization algorithms, one resorting to a genetic algorithm and the other solving a conventional mixed-integer linear problem, are fully developed. The validity and effectiveness of the path-based distribution network modeling are demonstrated on different test systems.