
Showing papers on "Linear programming published in 2011"


BookDOI
27 Jun 2011
TL;DR: This textbook provides a first course in stochastic programming suitable for students with a basic knowledge of linear programming, elementary analysis, and probability, and aims to help students develop an intuition for how to model uncertainty in mathematical problems.
Abstract: The aim of stochastic programming is to find optimal decisions in problems which involve uncertain data. This field is currently developing rapidly with contributions from many disciplines including operations research, mathematics, and probability. At the same time, it is now being applied in a wide variety of subjects ranging from agriculture to financial planning and from industrial engineering to computer networks. This textbook provides a first course in stochastic programming suitable for students with a basic knowledge of linear programming, elementary analysis, and probability. The authors aim to present a broad overview of the main themes and methods of the subject. Its prime goal is to help students develop an intuition on how to model uncertainty into mathematical problems, what uncertainty changes bring to the decision process, and what techniques help to manage uncertainty in solving the problems. In this extensively updated new edition there is more material on methods and examples including several new approaches for discrete variables, new results on risk measures in modeling and Monte Carlo sampling methods, a new chapter on relationships to other methods including approximate dynamic programming, robust optimization and online methods. The book is highly illustrated with chapter summaries and many examples and exercises. Students, researchers and practitioners in operations research and the optimization area will find it particularly of interest. Review of First Edition: "The discussion on modeling issues, the large number of examples used to illustrate the material, and the breadth of the coverage make 'Introduction to Stochastic Programming' an ideal textbook for the area." (Interfaces, 1998)
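
For orientation, the canonical two-stage stochastic linear program with recourse that such a course is built around can be written as follows (generic textbook notation, not tied to any particular example from the book):

    \min_{x}\; c^{\top}x + \mathbb{E}_{\xi}\big[\,Q(x,\xi)\,\big] \quad \text{s.t.}\quad Ax = b,\;\; x \ge 0,
    \qquad\text{where}\qquad
    Q(x,\xi) \;=\; \min_{y}\;\big\{\, q(\xi)^{\top}y \;:\; W y = h(\xi) - T(\xi)\,x,\; y \ge 0 \,\big\}.

With a finite set of scenarios \xi^{1},\dots,\xi^{K} of probabilities p_{1},\dots,p_{K}, the expectation becomes a weighted sum and the model collapses to one large "deterministic equivalent" linear program in the variables (x, y^{1},\dots,y^{K}).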

5,398 citations


BookDOI
04 Aug 2011
TL;DR: This book discusses the challenges of dynamic programming, including the three curses of dimensionality, and develops approximate dynamic programming (ADP) methods, covering stepsize formulas, value function approximation, and direct ADP for online applications.
Abstract: Preface. Acknowledgments. 1. The challenges of dynamic programming. 1.1 A dynamic programming example: a shortest path problem. 1.2 The three curses of dimensionality. 1.3 Some real applications. 1.4 Problem classes. 1.5 The many dialects of dynamic programming. 1.6 What is new in this book? 1.7 Bibliographic notes. 2. Some illustrative models. 2.1 Deterministic problems. 2.2 Stochastic problems. 2.3 Information acquisition problems. 2.4 A simple modeling framework for dynamic programs. 2.5 Bibliographic notes. Problems. 3. Introduction to Markov decision processes. 3.1 The optimality equations. 3.2 Finite horizon problems. 3.3 Infinite horizon problems. 3.4 Value iteration. 3.5 Policy iteration. 3.6 Hybrid value-policy iteration. 3.7 The linear programming method for dynamic programs. 3.8 Monotone policies. 3.9 Why does it work? 3.10 Bibliographic notes. Problems. 4. Introduction to approximate dynamic programming. 4.1 The three curses of dimensionality (revisited). 4.2 The basic idea. 4.3 Sampling random variables. 4.4 ADP using the post-decision state variable. 4.5 Low-dimensional representations of value functions. 4.6 So just what is approximate dynamic programming? 4.7 Experimental issues. 4.8 Dynamic programming with missing or incomplete models. 4.9 Relationship to reinforcement learning. 4.10 But does it work? 4.11 Bibliographic notes. Problems. 5. Modeling dynamic programs. 5.1 Notational style. 5.2 Modeling time. 5.3 Modeling resources. 5.4 The states of our system. 5.5 Modeling decisions. 5.6 The exogenous information process. 5.7 The transition function. 5.8 The contribution function. 5.9 The objective function. 5.10 A measure-theoretic view of information. 5.11 Bibliographic notes. Problems. 6. Stochastic approximation methods. 6.1 A stochastic gradient algorithm. 6.2 Some stepsize recipes. 6.3 Stochastic stepsizes. 6.4 Computing bias and variance. 6.5 Optimal stepsizes. 6.6 Some experimental comparisons of stepsize formulas. 6.7 Convergence. 6.8 Why does it work? 6.9 Bibliographic notes. Problems. 7. Approximating value functions. 7.1 Approximation using aggregation. 7.2 Approximation methods using regression models. 7.3 Recursive methods for regression models. 7.4 Neural networks. 7.5 Batch processes. 7.6 Why does it work? 7.7 Bibliographic notes. Problems. 8. ADP for finite horizon problems. 8.1 Strategies for finite horizon problems. 8.2 Q-learning. 8.3 Temporal difference learning. 8.4 Policy iteration. 8.5 Monte Carlo value and policy iteration. 8.6 The actor-critic paradigm. 8.7 Bias in value function estimation. 8.8 State sampling strategies. 8.9 Starting and stopping. 8.10 A taxonomy of approximate dynamic programming strategies. 8.11 Why does it work? 8.12 Bibliographic notes. Problems. 9. Infinite horizon problems. 9.1 From finite to infinite horizon. 9.2 Algorithmic strategies. 9.3 Stepsizes for infinite horizon problems. 9.4 Error measures. 9.5 Direct ADP for online applications. 9.6 Finite horizon models for steady state applications. 9.7 Why does it work? 9.8 Bibliographic notes. Problems. 10. Exploration vs. exploitation. 10.1 A learning exercise: the nomadic trucker. 10.2 Learning strategies. 10.3 A simple information acquisition problem. 10.4 Gittins indices and the information acquisition problem. 10.5 Variations. 10.6 The knowledge gradient algorithm. 10.7 Information acquisition in dynamic programming. 10.8 Bibliographic notes. Problems. 11. Value function approximations for special functions. 11.1 Value functions versus gradients.
11.2 Linear approximations. 11.3 Piecewise linear approximations. 11.4 The SHAPE algorithm. 11.5 Regression methods. 11.6 Cutting planes. 11.7 Why does it work? 11.8 Bibliographic notes. Problems. 12. Dynamic resource allocation. 12.1 An asset acquisition problem. 12.2 The blood management problem. 12.3 A portfolio optimization problem. 12.4 A general resource allocation problem. 12.5 A fleet management problem. 12.6 A driver management problem. 12.7 Bibliographic references. Problems. 13. Implementation challenges. 13.1 Will ADP work for your problem? 13.2 Designing an ADP algorithm for complex problems. 13.3 Debugging an ADP algorithm. 13.4 Convergence issues. 13.5 Modeling your problem. 13.6 Online vs. offline models. 13.7 If it works, patent it!

2,300 citations


Journal ArticleDOI
TL;DR: This paper shows that reformulating the linking step as a constrained flow optimization results in a convex problem, and takes advantage of its particular structure to solve it using the k-shortest paths algorithm, which is very fast.
Abstract: Multi-object tracking can be achieved by detecting objects in individual frames and then linking detections across frames. Such an approach can be made very robust to the occasional detection failure: If an object is not detected in a frame but is in previous and following ones, a correct trajectory will nevertheless be produced. By contrast, a false-positive detection in a few frames will be ignored. However, when dealing with a multiple target problem, the linking step results in a difficult optimization problem in the space of all possible families of trajectories. This is usually dealt with by sampling or greedy search based on variants of Dynamic Programming which can easily miss the global optimum. In this paper, we show that reformulating that step as a constrained flow optimization results in a convex problem. We take advantage of its particular structure to solve it using the k-shortest paths algorithm, which is very fast. This new approach is far simpler formally and algorithmically than existing techniques and lets us demonstrate excellent performance in two very different contexts.
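
A minimal sketch of this flow formulation on a made-up two-frame instance (detection confidences, costs, and graph layout below are illustrative assumptions; the paper solves the problem with a k-shortest paths algorithm, whereas a generic min-cost-flow solver from networkx stands in here):

    import math
    import networkx as nx

    # Hypothetical toy data: per-frame detection confidences.
    detections = {0: [0.9, 0.8], 1: [0.85, 0.7]}

    def cost(p):
        # Negative log-odds of a detection, scaled to an integer for network_simplex.
        return int(round(-100 * math.log(p / (1 - p))))

    G = nx.DiGraph()
    G.add_node("S", demand=-1)   # push one unit of flow = extract one trajectory
    G.add_node("T", demand=1)
    for f, confs in detections.items():
        for i, p in enumerate(confs):
            u, v = f"d{f}_{i}_in", f"d{f}_{i}_out"
            G.add_edge(u, v, weight=cost(p), capacity=1)   # cost of using this detection
            G.add_edge("S", u, weight=0, capacity=1)       # a trajectory may start here
            G.add_edge(v, "T", weight=0, capacity=1)       # ... or end here
    for i in range(len(detections[0])):                    # frame 0 -> frame 1 transitions
        for j in range(len(detections[1])):
            G.add_edge(f"d0_{i}_out", f"d1_{j}_in", weight=0, capacity=1)

    flow_cost, flow = nx.network_simplex(G)
    used = [(u, v) for u in flow for v, amount in flow[u].items() if amount > 0]
    print(flow_cost, used)

Raising the source/sink demand to k units yields up to k disjoint trajectories, which is the structure the paper's k-shortest paths algorithm exploits.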

1,076 citations


Journal ArticleDOI
TL;DR: In this article, a robust optimization model for handling the inherent uncertainty of input data in a closed-loop supply chain network design problem is proposed, and the robust counterpart of the proposed mixed-integer linear programming model is presented by using the recent extensions in robust optimization theory.

571 citations


Proceedings ArticleDOI
06 Nov 2011
TL;DR: This paper proposes simple, easy-to-compute diagonal preconditioners for the first-order primal-dual algorithm, for which convergence is guaranteed without the need to compute any step size parameters.
Abstract: In this paper we study preconditioning techniques for the first-order primal-dual algorithm proposed in [5]. In particular, we propose simple and easy to compute diagonal preconditioners for which convergence of the algorithm is guaranteed without the need to compute any step size parameters. As a by-product, we show that for a certain instance of the preconditioning, the proposed algorithm is equivalent to the old and widely unknown alternating step method for monotropic programming [7]. We show numerical results on general linear programming problems and a few standard computer vision problems. In all examples, the preconditioned algorithm significantly outperforms the algorithm of [5].
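
A small numpy sketch of the diagonally preconditioned primal-dual iteration on a toy equality-form LP (the LP data are made up; the step sizes follow the alpha = 1 diagonal preconditioning described in the paper, so no global step size parameter has to be chosen):

    import numpy as np

    # Hypothetical toy LP:  min c.x  s.t.  A x = b,  x >= 0   (optimum is x* = (0.5, 0, 0.5))
    A = np.array([[1.0, 1.0, 1.0],
                  [1.0, 2.0, 0.0]])
    b = np.array([1.0, 0.5])
    c = np.array([1.0, 2.0, 3.0])

    # Diagonal preconditioners: column sums for the primal steps, row sums for the dual steps.
    tau = 1.0 / np.abs(A).sum(axis=0)
    sigma = 1.0 / np.abs(A).sum(axis=1)

    x = np.zeros(3); y = np.zeros(2)
    for _ in range(20000):
        x_new = np.maximum(0.0, x - tau * (c + A.T @ y))   # primal prox: linear cost + projection on x >= 0
        x_bar = 2.0 * x_new - x                            # over-relaxation
        y = y + sigma * (A @ x_bar - b)                    # dual prox for the linear term <b, y>
        x = x_new

    print("x ~", x.round(3), " objective ~", round(float(c @ x), 3))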

474 citations


Journal ArticleDOI
TL;DR: A new route relaxation called ng-route is introduced and used by different dual ascent heuristics to find near-optimal dual solutions of the LP-relaxation of the SP model, together with a column-and-cut generation algorithm, strengthened by valid inequalities, that uses a new strategy for solving the pricing problem.
Abstract: In this paper, we describe an effective exact method for solving both the capacitated vehicle routing problem (cvrp) and the vehicle routing problem with time windows (vrptw) that improves the method proposed by Baldacci et al. [Baldacci, R., N. Christofides, A. Mingozzi. 2008. An exact algorithm for the vehicle routing problem based on the set partitioning formulation with additional cuts. Math. Programming 115(2) 351--385] for the cvrp. The proposed algorithm is based on the set partitioning (SP) formulation of the problem. We introduce a new route relaxation called ng-route, used by different dual ascent heuristics to find near-optimal dual solutions of the LP-relaxation of the SP model. We describe a column-and-cut generation algorithm strengthened by valid inequalities that uses a new strategy for solving the pricing problem. The new ng-route relaxation and the different dual solutions achieved allow us to generate a reduced SP problem containing all routes of any optimal solution that is finally solved by an integer programming solver. The proposed method solves four of the five open Solomon vrptw instances and significantly improves the running times of state-of-the-art algorithms for both vrptw and cvrp.

415 citations


Book
15 Jun 2011
TL;DR: This book is the first to cover geometric approximation algorithms in detail, and topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings.
Abstract: Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.

410 citations


Journal ArticleDOI
TL;DR: This paper discusses statistical properties and convergence of the Stochastic Dual Dynamic Programming method applied to multistage linear stochastic programming problems, and argues that the computational complexity of the corresponding SDDP algorithm is almost the same as in the risk neutral case.

399 citations


Journal ArticleDOI
TL;DR: The proposed approach to establishing correspondences between two sets of visual features using higher order constraints instead of the unary or pairwise ones used in classical methods is compared to state-of-the-art algorithms on both synthetic and real data.
Abstract: This paper addresses the problem of establishing correspondences between two sets of visual features using higher order constraints instead of the unary or pairwise ones used in classical methods. Concretely, the corresponding hypergraph matching problem is formulated as the maximization of a multilinear objective function over all permutations of the features. This function is defined by a tensor representing the affinity between feature tuples. It is maximized using a generalization of spectral techniques where a relaxed problem is first solved by a multidimensional power method and the solution is then projected onto the closest assignment matrix. The proposed approach has been implemented, and it is compared to state-of-the-art algorithms on both synthetic and real data.

394 citations


Proceedings ArticleDOI
10 Apr 2011
TL;DR: This work comprehensively studies the routing and spectrum allocation (RSA) problem in the SLICE network, and formulates the RSA problem using Integer Linear Programming (ILP) to optimally minimize the maximum number of sub-carriers required on any fiber of a SLICE network.
Abstract: In OFDM-based optical networks, multiple subcarriers can be allocated to accommodate various sizes of traffic demands. By using the multi-carrier modulation technique, subcarriers for the same node-pair can overlap in the spectrum domain. Compared to the traditional wavelength routed networks (WRNs), the OFDM-based Spectrum-sliced Elastic Optical Path (SLICE) network has higher spectrum efficiency due to its finer granularity and frequency-resource saving. In this work, for the first time, we comprehensively study the routing and spectrum allocation (RSA) problem in the SLICE network. After proving the NP-hardness of the static RSA problem, we formulate the RSA problem using Integer Linear Programming (ILP) formulations to optimally minimize the maximum number of sub-carriers required on any fiber of a SLICE network. We then analyze the lower/upper bounds for the sub-carrier number in a network with general or specific topology. We also propose two efficient algorithms, namely, the balanced load spectrum allocation (BLSA) algorithm and the shortest path with maximum spectrum reuse (SPSR) algorithm, to minimize the required sub-carrier number in a SLICE network. The results show that the proposed algorithms can match the analysis and approximate the optimal solutions using the ILP model.
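
A heavily simplified PuLP sketch of the spectrum-assignment side of such a formulation, with routes fixed in advance and made-up demand data (each demand gets a contiguous block of sub-carriers, and demands sharing a link must not overlap; the paper's ILP additionally chooses the routes):

    import pulp

    # Hypothetical demands: a fixed route (set of links) and a required number of sub-carriers.
    demands = {"d1": {"links": {"A-B", "B-C"}, "size": 3},
               "d2": {"links": {"B-C"}, "size": 2},
               "d3": {"links": {"A-B"}, "size": 4}}
    M = sum(d["size"] for d in demands.values())            # big-M: total demand bounds the spectrum

    prob = pulp.LpProblem("spectrum_assignment", pulp.LpMinimize)
    f = {d: pulp.LpVariable(f"f_{d}", lowBound=0, cat="Integer") for d in demands}  # first slot index
    C = pulp.LpVariable("slots_used", lowBound=0, cat="Integer")
    prob += C                                               # minimize the number of sub-carrier slots used

    names = list(demands)
    for i, d in enumerate(names):
        prob += f[d] + demands[d]["size"] <= C              # demand d occupies slots f[d] .. f[d]+size-1
        for e in names[i + 1:]:
            if demands[d]["links"] & demands[e]["links"]:   # shared link => spectrum blocks must not overlap
                before = pulp.LpVariable(f"b_{d}_{e}", cat="Binary")
                prob += f[d] + demands[d]["size"] <= f[e] + M * (1 - before)
                prob += f[e] + demands[e]["size"] <= f[d] + M * before

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print({d: int(f[d].value()) for d in demands}, "slots used:", int(C.value()))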

391 citations


Journal ArticleDOI
TL;DR: It is shown that by appropriately choosing which subproblems to use, one can design novel and very powerful MRF optimization algorithms; in this way, algorithms are derived that generalize and extend state-of-the-art message-passing methods and take full advantage of the special structure that may exist in particular MRFs.
Abstract: This paper introduces a new rigorous theoretical framework to address discrete MRF-based optimization in computer vision. Such a framework exploits the powerful technique of Dual Decomposition. It is based on a projected subgradient scheme that attempts to solve an MRF optimization problem by first decomposing it into a set of appropriately chosen subproblems, and then combining their solutions in a principled way. In order to determine the limits of this method, we analyze the conditions that these subproblems have to satisfy and demonstrate the extreme generality and flexibility of such an approach. We thus show that by appropriately choosing what subproblems to use, one can design novel and very powerful MRF optimization algorithms. For instance, in this manner we are able to derive algorithms that: 1) generalize and extend state-of-the-art message-passing methods, 2) optimize very tight LP-relaxations to MRF optimization, and 3) take full advantage of the special structure that may exist in particular MRFs, allowing the use of efficient inference techniques such as, e.g., graph-cut-based methods. Theoretical analysis on the bounds related with the different algorithms derived from our framework and experimental results/comparisons using synthetic and real data for a variety of tasks in computer vision demonstrate the extreme potentials of our approach.

Proceedings ArticleDOI
10 Apr 2011
TL;DR: This paper investigates secure outsourcing of widely applicable linear programming (LP) computations and develops a set of efficient privacy-preserving problem transformation techniques, which allow customers to transform the original LP problem into a random one while protecting sensitive input/output information.
Abstract: Cloud computing enables customers with limited computational resources to outsource large-scale computational tasks to the cloud, where massive computational power can be easily utilized in a pay-per-use manner. However, security is the major concern that prevents the wide adoption of computation outsourcing in the cloud, especially when end-user's confidential data are processed and produced during the computation. Thus, secure outsourcing mechanisms are in great need to not only protect sensitive information by enabling computations with encrypted data, but also protect customers from malicious behaviors by validating the computation result. Such a mechanism of general secure computation outsourcing was recently shown to be feasible in theory, but to design mechanisms that are practically efficient remains a very challenging problem. Focusing on engineering computing and optimization tasks, this paper investigates secure outsourcing of widely applicable linear programming (LP) computations. In order to achieve practical efficiency, our mechanism design explicitly decomposes the LP computation outsourcing into public LP solvers running on the cloud and private LP parameters owned by the customer. The resulting flexibility allows us to explore appropriate security/efficiency tradeoff via higher-level abstraction of LP computations than the general circuit representation. In particular, by formulating private data owned by the customer for LP problem as a set of matrices and vectors, we are able to develop a set of efficient privacy-preserving problem transformation techniques, which allow customers to transform original LP problem into some random one while protecting sensitive input/output information. To validate the computation result, we further explore the fundamental duality theorem of LP computation and derive the necessary and sufficient conditions that correct result must satisfy. Such result verification mechanism is extremely efficient and incurs close-to-zero additional cost on both cloud server and customers. Extensive security analysis and experiment results show the immediate practicability of our mechanism design.
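
A toy numpy/scipy sketch of one ingredient of such a transformation, namely disguising the LP with a random invertible constraint mixing Q and a positive variable scaling D (all data are made up; this is a simplified illustration rather than the paper's full scheme, which additionally hides the objective and verifies results through LP duality):

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)

    # Customer's private LP (hypothetical data):  min c.x  s.t.  A x = b,  x >= 0
    A = np.array([[1.0, 1.0, 1.0], [2.0, 1.0, 0.0]])
    b = np.array([4.0, 5.0])
    c = np.array([3.0, 1.0, 2.0])

    Q = rng.normal(size=(2, 2))                     # random invertible mixing of the constraints
    while abs(np.linalg.det(Q)) < 1e-3:
        Q = rng.normal(size=(2, 2))
    D = np.diag(rng.uniform(0.5, 2.0, size=3))      # substitution x = D y; y >= 0 iff x >= 0

    A_enc, b_enc, c_enc = Q @ A @ D, Q @ b, D @ c   # what the cloud sees

    y = linprog(c_enc, A_eq=A_enc, b_eq=b_enc).x    # the "cloud" solves the disguised LP
    x = D @ y                                       # the customer decodes the answer locally
    print("decoded x:", x.round(4), "objective:", round(float(c @ x), 4))   # expect (1, 3, 0), value 6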


Posted Content
TL;DR: In this article, the authors gave the first computationally tractable and almost optimal solution to the problem of one-bit compressed sensing, showing how to accurately recover an s-sparse vector x in R^n from the signs of O(s log^2(n/s)) random linear measurements of x. This result extends to approximately sparse vectors x.
Abstract: We give the first computationally tractable and almost optimal solution to the problem of one-bit compressed sensing, showing how to accurately recover an s-sparse vector x in R^n from the signs of O(s log^2(n/s)) random linear measurements of x. The recovery is achieved by a simple linear program. This result extends to approximately sparse vectors x. Our result is universal in the sense that with high probability, one measurement scheme will successfully recover all sparse vectors simultaneously. The argument is based on solving an equivalent geometric problem on random hyperplane tessellations.

Journal ArticleDOI
TL;DR: This letter formulates RSA as an Integer Linear Programming (ILP) problem and proposes an effective heuristic to be used if the solution of the ILP is not attainable.
Abstract: A spectrum-sliced elastic optical path network (SLICE) architecture has been recently proposed as an efficient solution for a flexible bandwidth allocation in optical networks. In SLICE, the problem of Routing and Spectrum Assignment (RSA) emerges. In this letter, we both formulate RSA as an Integer Linear Programming (ILP) problem and propose an effective heuristic to be used if the solution of ILP is not attainable.

Journal ArticleDOI
TL;DR: A new method is proposed to find the fuzzy optimal solution of fully fuzzy linear programming (FFLP) problems, and the proposed method is easier to apply than the existing method for solving FFLP problems with equality constraints occurring in real-life situations.

Journal ArticleDOI
TL;DR: The authors construct the first truthful mechanisms with approximation guarantees for a variety of multi-parameter domains; the construction can be seen as a way of exploiting VCG in a computationally tractable way even when the underlying social-welfare maximization problem is NP-hard.
Abstract: We give a general technique to obtain approximation mechanisms that are truthful in expectation. We show that for packing domains, any α-approximation algorithm that also bounds the integrality gap of the LP relaxation of the problem by α can be used to construct an α-approximation mechanism that is truthful in expectation. This immediately yields a variety of new and significantly improved results for various problem domains and furthermore, yields truthful (in expectation) mechanisms with guarantees that match the best-known approximation guarantees when truthfulness is not required. In particular, we obtain the first truthful mechanisms with approximation guarantees for a variety of multiparameter domains. We obtain truthful (in expectation) mechanisms achieving approximation guarantees of O(√m) for combinatorial auctions (CAs), (1 + ε) for multi-unit CAs with B = Ω(log m) copies of each item, and 2 for multiparameter knapsack problems (multi-unit auctions). Our construction is based on considering an LP relaxation of the problem and using the classic VCG mechanism to obtain a truthful mechanism in this fractional domain. We argue that the (fractional) optimal solution scaled down by α, where α is the integrality gap of the problem, can be represented as a convex combination of integer solutions, and by viewing this convex combination as specifying a probability distribution over integer solutions, we get a randomized, truthful in expectation mechanism. Our construction can be seen as a way of exploiting VCG in a computationally tractable way even when the underlying social-welfare maximization problem is NP-hard.

Journal ArticleDOI
TL;DR: In this article, the authors introduced the chance constrained programming (CCP) approach to OPF under uncertainty and analyzed the computational complexity of the chance-constrained OPF.
Abstract: Solution approaches to chance constrained programming (CCP) have been recently developed and applied in many areas for optimization under uncertainty. Due to the nonlinear model with multiple uncertain variables as well as multiple output constraints, CCP has not been directly applied to optimal power flow (OPF) under uncertainty. The objective of this paper is twofold. First, we introduce the CCP approach to OPF under uncertainty and analyze the computational complexity of the chance constrained OPF. Second, the effectiveness of implementing a back-mapping approach and a linear approximation of the nonlinear model equations to solve the formulated CCP problem is investigated. Load power uncertainties are considered as multivariate random variables with correlated normal distribution. Based on both the nonlinear and the linearized model, results of a five-bus system and the IEEE 30-bus test system are presented to demonstrate the scope of chance constrained OPF.
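
For a single linear constraint with jointly Gaussian uncertain parameters, the classical deterministic equivalent underlying such CCP formulations reads (standard textbook form, stated here for orientation rather than in the paper's exact notation):

    \Pr\big\{\, a(\xi)^{\top}x \le b \,\big\} \;\ge\; 1-\varepsilon, \quad a(\xi)\sim\mathcal{N}(\bar a,\Sigma)
    \quad\Longleftrightarrow\quad
    \bar a^{\top}x \;+\; \Phi^{-1}(1-\varepsilon)\,\sqrt{x^{\top}\Sigma\,x} \;\le\; b,

which is convex for \varepsilon \le 0.5; approximating the nonlinear square-root term around an operating point is one way to obtain constraints that a linear OPF model can handle.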

Journal ArticleDOI
TL;DR: In this article, a classifier for sparse linear discriminant analysis of high-dimensional data is proposed; it can be implemented efficiently using linear programming, and the resulting classifier is called the linear programming discriminant (LPD) rule.
Abstract: This article considers sparse linear discriminant analysis of high-dimensional data. In contrast to the existing methods which are based on separate estimation of the precision matrix Ω and the difference δ of the mean vectors, we introduce a simple and effective classifier by estimating the product Ωδ directly through constrained l1 minimization. The estimator can be implemented efficiently using linear programming and the resulting classifier is called the linear programming discriminant (LPD) rule. The LPD rule is shown to have desirable theoretical and numerical properties. It exploits the approximate sparsity of Ωδ and as a consequence allows cases where it can still perform well even when Ω and/or δ cannot be estimated consistently. Asymptotic properties of the LPD rule are investigated and consistency and rate of convergence results are given. The LPD classifier has superior finite sample performance and significant computational advantages over the existing methods that require separate estimation...

Proceedings ArticleDOI
01 Dec 2011
TL;DR: This paper considers the minimum electricity cost scheduling problem for smart home appliances, in which the optimal power profile signal minimizes cost while satisfying technical operation constraints and consumer preferences.
Abstract: This paper considers the minimum electricity cost scheduling problem of smart home appliances. Operation characteristics, such as expected duration and peak power consumption of the smart appliances, can be adjusted through a power profile signal. The optimal power profile signal minimizes cost, while satisfying technical operation constraints and consumer preferences. Constraints such as enforcing uninterruptible and sequential operations are modeled in the proposed framework using mixed integer linear programming (MILP). Several realistic scenarios based on actual spot price are considered, and the numerical results provide insight into tariff design. Computational issues and extensions of the proposed scheduling framework are also discussed.
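
A minimal PuLP sketch of the uninterruptible-appliance case (prices, power rating, and duration below are made-up values; the paper's MILP handles many appliances, sequential-operation couplings, and adjustable power profiles):

    import pulp

    price = [0.30, 0.28, 0.12, 0.10, 0.11, 0.25]   # hypothetical $/kWh for each time slot
    T, duration, power = len(price), 3, 2.0        # one 2 kW appliance that must run 3 consecutive slots

    prob = pulp.LpProblem("appliance_schedule", pulp.LpMinimize)
    start = [pulp.LpVariable(f"s_{t}", cat="Binary") for t in range(T)]
    # on[t] = 1 iff the appliance runs in slot t; uninterrupted operation follows from the start variables.
    on = [pulp.lpSum(start[k] for k in range(max(0, t - duration + 1), t + 1)) for t in range(T)]

    prob += pulp.lpSum(price[t] * power * on[t] for t in range(T))        # total electricity cost
    prob += pulp.lpSum(start[t] for t in range(T - duration + 1)) == 1    # start exactly once, early enough
    for t in range(T - duration + 1, T):
        prob += start[t] == 0                                             # too late to finish before the horizon

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print("start slot:", [t for t in range(T) if start[t].value() == 1],
          " cost: $%.2f" % pulp.value(prob.objective))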

Journal ArticleDOI
TL;DR: This paper presents a novel and computationally efficient method to design optimal control places, and an iterative approach that computes the reachability graph of a plant Petri net model only once in order to obtain a maximally permissive liveness-enforcing supervisor for an FMS.
Abstract: Deadlock prevention plays an important role in the modeling and control of flexible manufacturing systems (FMS). This paper presents a novel and computationally efficient method to design optimal control places, and an iteration approach that only computes the reachability graph of a plant Petri net model once in order to obtain a maximally permissive liveness-enforcing supervisor for an FMS. By using a vector covering approach, a minimal covering set of legal markings and a minimal covered set of first-met bad markings (FBM) are computed. At each iteration, an FBM from the minimal covered set is selected. By solving an integer linear programming problem, a place invariant is designed to prevent the FBM from being reached and no marking in the minimal covering set of legal markings is forbidden. This process is carried out until no FBM can be reached. In order to make the considered problem computationally tractable, binary decision diagrams (BDD) are used to compute the sets of legal markings and FBM, and solve the vector covering problem to get a minimal covering set of legal markings and a minimal covered set of FBM. Finally, a number of FMS examples are presented to illustrate the proposed approaches.

Book ChapterDOI
30 Nov 2011
TL;DR: In this paper, the authors used mixed-integer linear programming (MILP) to prove security bounds against both differential and linear cryptanalysis for Enocoro-128v2.
Abstract: Differential and linear cryptanalysis are two of the most powerful techniques to analyze symmetric-key primitives. For modern ciphers, resistance against these attacks is therefore a mandatory design criterion. In this paper, we propose a novel technique to prove security bounds against both differential and linear cryptanalysis. We use mixed-integer linear programming (MILP), a method that is frequently used in business and economics to solve optimization problems. Our technique significantly reduces the workload of designers and cryptanalysts, because it only involves writing out simple equations that are input into an MILP solver. As very little programming is required, both the time spent on cryptanalysis and the possibility of human errors are greatly reduced. Our method is used to analyze Enocoro-128v2, a stream cipher that consists of 96 rounds. We prove that 38 rounds are sufficient for security against differential cryptanalysis, and 61 rounds for security against linear cryptanalysis. We also illustrate our technique by calculating the number of active S-boxes for AES.
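
A toy PuLP version of the counting technique for a made-up four-word SPN (one S-box per word followed by an MDS-style mix with branch number 5, as in a single AES column; this only illustrates the modelling style, not the paper's Enocoro-128v2 model):

    import pulp

    ROUNDS, WORDS, BRANCH = 2, 4, 5
    prob = pulp.LpProblem("min_active_sboxes", pulp.LpMinimize)
    # x[r][i] = 1 iff the difference entering round r's i-th S-box is nonzero.
    x = [[pulp.LpVariable(f"x_{r}_{i}", cat="Binary") for i in range(WORDS)]
         for r in range(ROUNDS + 1)]

    prob += pulp.lpSum(x[r][i] for r in range(ROUNDS) for i in range(WORDS))   # number of active S-boxes
    prob += pulp.lpSum(x[0]) >= 1                                              # nonzero input difference
    for r in range(ROUNDS):
        d = pulp.LpVariable(f"d_{r}", cat="Binary")        # dummy: is this mix layer active at all?
        prob += pulp.lpSum(x[r]) + pulp.lpSum(x[r + 1]) >= BRANCH * d
        for v in x[r] + x[r + 1]:
            prob += d >= v                                 # any active word forces d = 1

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print("min active S-boxes over", ROUNDS, "rounds:", int(pulp.value(prob.objective)))

On this toy the optimum is 5, matching the classical bound that any two-round differential trail through a single AES column activates at least five S-boxes.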

Journal ArticleDOI
TL;DR: Different uncertainty sets, including those studied in the literature and new ones, are examined, their geometric relationships are discussed, and the robust counterpart optimization formulations induced by those uncertainty sets are derived.
Abstract: Robust counterpart optimization techniques for linear optimization and mixed integer linear optimization problems are studied in this paper. Different uncertainty sets, including those studied in literature (i.e., interval set; combined interval and ellipsoidal set; combined interval and polyhedral set) and new ones (i.e., adjustable box; pure ellipsoidal; pure polyhedral; combined interval, ellipsoidal, and polyhedral set) are studied in this work and their geometric relationship is discussed. For uncertainty in the left hand side, right hand side, and objective function of the optimization problems, robust counterpart optimization formulations induced by those different uncertainty sets are derived. Numerical studies are performed to compare the solutions of the robust counterpart optimization models and applications in refinery production planning and batch process scheduling problem are presented.
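
For a single constraint \sum_j \tilde a_j x_j \le b with \tilde a_j = a_j + \hat a_j \xi_j, the robust counterparts induced by the basic uncertainty sets compared in such studies take the following familiar forms (standard results, given here for orientation rather than in the paper's exact notation):

    \text{interval (box), } \|\xi\|_{\infty}\le\Psi: \qquad \sum_j a_j x_j \;+\; \Psi\sum_j \hat a_j |x_j| \;\le\; b,
    \text{ellipsoidal, } \|\xi\|_{2}\le\Omega: \qquad \sum_j a_j x_j \;+\; \Omega\sqrt{\textstyle\sum_j \hat a_j^{2} x_j^{2}} \;\le\; b,
    \text{polyhedral, } \|\xi\|_{1}\le\Gamma: \qquad \sum_j a_j x_j \;+\; \Gamma\,\max_j \hat a_j |x_j| \;\le\; b.

The box and polyhedral counterparts remain linear once auxiliary variables are introduced for the absolute values and the max, while the ellipsoidal counterpart is second-order-cone representable.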

Journal ArticleDOI
TL;DR: In this paper, a detailed mathematical formulation for the problem of designing supply chain networks comprising multiproduct production facilities with shared production resources, warehouses, distribution centers and customer zones and operating under time varying demand uncertainty is considered.
Abstract: We consider a detailed mathematical formulation for the problem of designing supply chain networks comprising multiproduct production facilities with shared production resources, warehouses, distribution centers and customer zones and operating under time varying demand uncertainty. Uncertainty is captured in terms of a number of likely scenarios possible to materialize during the lifetime of the network. The problem is formulated as a mixed-integer linear programming problem and solved to global optimality using standard branch-and-bound techniques. A case study concerned with the establishment of Europe-wide supply chain is used to illustrate the applicability and efficiency of the proposed approach. The results obtained provide a good indication of the value of having a model that takes into account the complex interactions that exist in such networks and the effect of inventory levels to the design and operation.

Journal ArticleDOI
TL;DR: A Monte-Carlo simulation-based algorithm is described that integrates a sample average approximation scheme with a Benders decomposition algorithm to solve problems having stochastic independent transportation costs.

Journal ArticleDOI
Yinyu Ye
TL;DR: It is proved that the classic policy-iteration method and the original simplex method with the most-negative-reduced-cost pivoting rule of Dantzig are strongly polynomial-time algorithms for solving the Markov decision problem (MDP) with a fixed discount rate.
Abstract: We prove that the classic policy-iteration method [Howard, R. A. 1960. Dynamic Programming and Markov Processes. MIT, Cambridge] and the original simplex method with the most-negative-reduced-cost pivoting rule of Dantzig are strongly polynomial-time algorithms for solving the Markov decision problem (MDP) with a fixed discount rate. Furthermore, the computational complexity of the policy-iteration and simplex methods is superior to that of the only known strongly polynomial-time interior-point algorithm [Ye, Y. 2005. A new complexity result on solving the Markov decision problem. Math. Oper. Res. 30(3) 733--749] for solving this problem. The result is surprising because the simplex method with the same pivoting rule was shown to be exponential for solving a general linear programming problem [Klee, V., G. J. Minty. 1972. How good is the simplex method? Technical report. O. Shisha, ed. Inequalities III. Academic Press, New York], the simplex method with the smallest index pivoting rule was shown to be exponential for solving an MDP regardless of discount rates [Melekopoglou, M., A. Condon. 1994. On the complexity of the policy improvement algorithm for Markov decision processes. INFORMS J. Comput. 6(2) 188--192], and the policy-iteration method was recently shown to be exponential for solving undiscounted MDPs under the average cost criterion. We also extend the result to solving MDPs with transient substochastic transition matrices whose spectral radii are uniformly below one.
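
A small scipy/numpy sketch of the correspondence the paper analyzes, on a made-up two-state, two-action discounted MDP: the optimal value function is computed once from the standard LP formulation and once by Howard's policy iteration, and the two results should coincide:

    import numpy as np
    from scipy.optimize import linprog

    gamma = 0.9
    # Hypothetical MDP: transition matrices P[a][s, s'] and rewards r[a][s] for actions a = 0, 1.
    P = {0: np.array([[0.9, 0.1], [0.2, 0.8]]),
         1: np.array([[0.1, 0.9], [0.7, 0.3]])}
    r = {0: np.array([1.0, 0.0]),
         1: np.array([0.0, 2.0])}

    # LP formulation:  min 1'v  s.t.  v >= r_a + gamma * P_a v  for every action a.
    A_ub = np.vstack([-(np.eye(2) - gamma * P[a]) for a in P])
    b_ub = np.hstack([-r[a] for a in P])
    v_lp = linprog(np.ones(2), A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 2).x

    # Howard's policy iteration on the same MDP.
    policy = np.zeros(2, dtype=int)
    while True:
        P_pi = np.array([P[policy[s]][s] for s in range(2)])
        r_pi = np.array([r[policy[s]][s] for s in range(2)])
        v = np.linalg.solve(np.eye(2) - gamma * P_pi, r_pi)              # policy evaluation
        q = np.array([[r[a][s] + gamma * P[a][s] @ v for a in (0, 1)] for s in range(2)])
        new_policy = q.argmax(axis=1)                                    # greedy improvement
        if np.array_equal(new_policy, policy):
            break
        policy = new_policy

    print(v_lp.round(4), v.round(4), policy)   # both value functions should agree

Policy iteration corresponds to a (block) pivoting scheme on this LP, which is the correspondence the paper uses to establish the strongly polynomial bound for a fixed discount rate.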

Proceedings ArticleDOI
12 Dec 2011
TL;DR: The presented approach to segmenting shapes in a heterogeneous shape database is evaluated on the Princeton segmentation benchmark, and it is shown that joint shape segmentation significantly outperforms single-shape segmentation techniques.
Abstract: We present an approach to segmenting shapes in a heterogeneous shape database. Our approach segments the shapes jointly, utilizing features from multiple shapes to improve the segmentation of each. The approach is entirely unsupervised and is based on an integer quadratic programming formulation of the joint segmentation problem. The program optimizes over possible segmentations of individual shapes as well as over possible correspondences between segments from multiple shapes. The integer quadratic program is solved via a linear programming relaxation, using a block coordinate descent procedure that makes the optimization feasible for large databases. We evaluate the presented approach on the Princeton segmentation benchmark and show that joint shape segmentation significantly outperforms single-shape segmentation techniques.

Posted Content
TL;DR: This work designs optimal simulations of the well-established PRAM and BSP models in MapReduce, immediately resulting in optimal MapReduce solutions for fixed-dimensional linear programming and 2-D and 3-D convex hulls.
Abstract: In this paper, we study the MapReduce framework from an algorithmic standpoint and demonstrate the usefulness of our approach by designing and analyzing efficient MapReduce algorithms for fundamental sorting, searching, and simulation problems. This study is motivated by a goal of ultimately putting the MapReduce framework on an equal theoretical footing with the well-known PRAM and BSP parallel models, which would benefit both the theory and practice of MapReduce algorithms. We describe efficient MapReduce algorithms for sorting, multi-searching, and simulations of parallel algorithms specified in the BSP and CRCW PRAM models. We also provide some applications of these results to problems in parallel computational geometry for the MapReduce framework, which result in efficient MapReduce algorithms for sorting, 2- and 3-dimensional convex hulls, and fixed-dimensional linear programming. For the case when mappers and reducers have a memory/message-I/O size of $M=\Theta(N^\epsilon)$, for a small constant $\epsilon>0$, all of our MapReduce algorithms for these applications run in a constant number of rounds.

Journal ArticleDOI
TL;DR: Using full-scenario analysis, the worst-case impact of volatile node injection on unit commitment is obtained, so that the proposed model can always provide a secure and economical unit commitment result to the operators.
Abstract: In response to the challenges brought by uncertain bus loads and volatile wind power to power system security, this paper presents a novel unit commitment formulation based on interval number optimization to improve the security as well as the economy of power system operation. By using full-scenario analysis, the worst-case impact of volatile node injection on unit commitment is obtained, so that the proposed model can always provide a secure and economical unit commitment result to the operators. Scenario generation and reduction methods based on interval linear programming theory are used to accelerate the solution procedure without loss of optimality. Benders decomposition is also implemented to reduce the complexity of this large-scale interval mixed-integer linear program, and the rationality and rigor of the proposed method are proved. The numerical results indicate better security and economy of the proposed method compared with the traditional one.

Book
12 Sep 2011
TL;DR: Branch-and-bound algorithms are developed for solving the mixed-integer linear programs, including those for synchronizing a network of signals, by solving sequences of ordinary linear programs.
Abstract: Traffic signals can be synchronized so that a car, starting at one end of a main artery and traveling at preassigned speeds, can go to the other end without stopping for a red light. The portion of a signal cycle for which this is possible is called the bandwidth for that direction. A mixed-integer linear program is formulated for the following arterial problem: Given (1) an arbitrary number of signals, (2) the red-green split at each signal, (3) upper and lower limits on signal period, (4) upper and lower limits on speed between adjacent signals, and (5) limits on change in speed, find (1) a common signal period, (2) speeds between signals, and (3) the relative phasing of the signals, in order to maximize the sum of the bandwidths for the two directions. Several variants of the problem are formulated, including the problem of synchronizing a network of signals. Branch-and-bound algorithms are developed for solving the mixed-integer linear programs by solving sequences of ordinary linear programs. A 10-signal arterial example and a 7-signal network example are worked out.