
Showing papers on "Linear programming" published in 2002


Proceedings ArticleDOI
08 May 2002
TL;DR: In this article, an approximate model of aircraft dynamics using only linear constraints is developed, enabling the MILP approach to be applied to aircraft collision avoidance. The formulation can also be extended to multiple-waypoint path-planning, in which each vehicle must visit a set of points in an order chosen within the optimization.
Abstract: Describes a method for finding optimal trajectories for multiple aircraft avoiding collisions. Developments in spacecraft path-planning have shown that trajectory optimization including collision avoidance can be written as a linear program subject to mixed integer constraints, known as a mixed-integer linear program (MILP). This can be solved using commercial software written for the operations research community. In the paper, an approximate model of aircraft dynamics using only linear constraints is developed, enabling the MILP approach to be applied to aircraft collision avoidance. The formulation can also be extended to include multiple waypoint path-planning, in which each vehicle is required to visit a set of points in an order chosen within the optimization.

791 citations
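
To make the MILP encoding concrete, here is a minimal sketch of the standard big-M separation constraints for two vehicles, written with the PuLP modeling library. The horizon, position bounds, and big-M value are illustrative assumptions, and the linearized dynamics and the objective are omitted; this is a sketch of the constraint pattern, not the paper's full model.

# Big-M collision-avoidance constraints: at every time step, at least one of
# four linear separation inequalities must hold between the two vehicles.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

T, d, M = 10, 1.0, 100.0          # horizon, safety distance, big-M constant (assumed)
prob = LpProblem("avoidance", LpMinimize)
x = {(v, t): LpVariable(f"x_{v}_{t}", -50, 50) for v in (0, 1) for t in range(T)}
y = {(v, t): LpVariable(f"y_{v}_{t}", -50, 50) for v in (0, 1) for t in range(T)}
for t in range(T):
    # Binary b[k] = 1 relaxes constraint k via the big-M term; at most three
    # of the four may be relaxed, so one separation constraint always holds.
    b = [LpVariable(f"b_{t}_{k}", cat="Binary") for k in range(4)]
    prob += x[0, t] - x[1, t] >= d - M * b[0]
    prob += x[1, t] - x[0, t] >= d - M * b[1]
    prob += y[0, t] - y[1, t] >= d - M * b[2]
    prob += y[1, t] - y[0, t] >= d - M * b[3]
    prob += lpSum(b) <= 3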


Journal ArticleDOI
TL;DR: The availability of the explicit structure of the MPC controller provides insight into the type of control action in different regions of the state space, and highlights possible conditions of degeneracy of the LP, such as multiple optima.
Abstract: We study model predictive control (MPC) schemes for discrete-time linear time-invariant systems with constraints on inputs and states that can be formulated using a linear program (LP). In particular, we focus our attention on performance criteria based on a mixed 1/∞-norm, namely, the 1-norm with respect to time and the ∞-norm with respect to space. First we provide a method to compute the terminal weight so that closed-loop stability is achieved. We then show that the optimal control profile is a piecewise affine and continuous function of the initial state and briefly describe the algorithm to compute it. The piecewise affine form makes it possible to eliminate the online LP, as the computation associated with MPC becomes a simple function evaluation. Besides practical advantages, the availability of the explicit structure of the MPC controller provides insight into the type of control action in different regions of the state space, and highlights possible conditions of degeneracy of the LP, such as multiple optima.

765 citations
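
As a sketch of what the explicit solution buys at run time: the online step reduces to point-location over a polyhedral partition followed by an affine function evaluation. The regions and gains below are hypothetical placeholders standing in for the offline computation the abstract describes.

# Evaluate a piecewise affine control law u = F_i x + g_i on region A_i x <= b_i.
import numpy as np

def explicit_mpc(x, regions):
    """regions: list of (A, b, F, g); returns u for the region containing x."""
    for A, b, F, g in regions:
        if np.all(A @ x <= b):          # point-location: first matching polyhedron
            return F @ x + g
    raise ValueError("x outside the feasible partition")

# One-region toy example: u = -0.5*x on the box |x| <= 1
regions = [(np.array([[1.0], [-1.0]]), np.array([1.0, 1.0]),
            np.array([[-0.5]]), np.array([0.0]))]
print(explicit_mpc(np.array([0.3]), regions))   # -> [-0.15]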


Journal ArticleDOI
TL;DR: A condensed, selective look at classical material and recent research about interior methods for nonlinearly constrained optimization shows how their influence has transformed both the theory and practice of constrained optimization.
Abstract: Interior methods are an omnipresent, conspicuous feature of the constrained optimization landscape today, but it was not always so. Primarily in the form of barrier methods, interior-point techniques were popular during the 1960s for solving nonlinearly constrained problems. However, their use for linear programming was not even contemplated because of the total dominance of the simplex method. Vague but continuing anxiety about barrier methods eventually led to their abandonment in favor of newly emerging, apparently more efficient alternatives such as augmented Lagrangian and sequential quadratic programming methods. By the early 1980s, barrier methods were almost without exception regarded as a closed chapter in the history of optimization. This picture changed dramatically with Karmarkar's widely publicized announcement in 1984 of a fast polynomial-time interior method for linear programming; in 1985, a formal connection was established between his method and classical barrier methods. Since then, interior methods have advanced so far, so fast, that their influence has transformed both the theory and practice of constrained optimization. This article provides a condensed, selective look at classical material and recent research about interior methods for nonlinearly constrained optimization.

693 citations


Journal ArticleDOI
Marc Bodson1
TL;DR: The major conclusion is that constrained optimization can be performed with computational requirements that fall within an order of magnitude of those of simpler methods.
Abstract: The performance and computational requirements of optimization methods for control allocation are evaluated. Two control allocation problems are formulated: a direct allocation method that preserves the directionality of the moment, and a mixed optimization method that minimizes the error between the desired and the achieved moments as well as the control effort. The constrained optimization problems are transformed into linear programs so that they can be solved using well-tried linear programming techniques such as the simplex algorithm. A variety of techniques that can be applied to the solution of the control allocation problem in order to accelerate computations are discussed. Performance and computational requirements are evaluated using aircraft models with different numbers of actuators and with different properties. In addition to the two optimization methods, three algorithms with low computational requirements are also implemented for comparison: a redistributed pseudoinverse technique, a quadratic programming algorithm, and a fixed-point method. The major conclusion is that constrained optimization can be performed with computational requirements that fall within an order of magnitude of those of simpler methods. The performance gains of optimization methods, measured in terms of the error between the desired and achieved moments, are found to be small on average but sometimes significant. A variety of issues that affect the implementation of the various algorithms in a flight-control system are discussed.

628 citations
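
The transformation of the mixed optimization into a linear program can be sketched as follows: the 1-norm moment error ||Bu - d||_1 is minimized by introducing one slack variable per moment axis. The control-effort term is omitted for brevity, and the effectiveness matrix and bounds are illustrative assumptions.

# L1-optimal control allocation as an LP: variables z = [u; s], minimize sum(s)
# subject to -s <= B u - d <= s and actuator limits on u.
import numpy as np
from scipy.optimize import linprog

def allocate(B, d, u_min, u_max):
    n, m = B.shape
    c = np.concatenate([np.zeros(m), np.ones(n)])         # minimize sum of slacks
    A_ub = np.block([[B, -np.eye(n)], [-B, -np.eye(n)]])  # Bu - d <= s, -(Bu - d) <= s
    b_ub = np.concatenate([d, -d])
    bounds = [(lo, hi) for lo, hi in zip(u_min, u_max)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:m]

B = np.array([[1.0, 0.5, -0.3], [0.0, 1.0, 0.8]])   # 2 moments, 3 actuators (toy data)
print(allocate(B, d=np.array([0.4, -0.2]), u_min=[-1] * 3, u_max=[1] * 3))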


Journal ArticleDOI
TL;DR: It is proved that for classification, minimizing the 1-norm soft margin error function directly optimizes a generalization error bound, and that the resulting LPBoost algorithm is competitive with AdaBoost in both quality and computational cost.
Abstract: We examine linear program (LP) approaches to boosting and demonstrate their efficient solution using LPBoost, a column generation based simplex method. We formulate the problem as if all possible weak hypotheses had already been generated. The labels produced by the weak hypotheses become the new feature space of the problem. The boosting task becomes to construct a learning function in the label space that minimizes misclassification error and maximizes the soft margin. We prove that for classification, minimizing the 1-norm soft margin error function directly optimizes a generalization error bound. The equivalent linear program can be efficiently solved using column generation techniques developed for large-scale optimization problems. The resulting LPBoost algorithm can be used to solve any LP boosting formulation by iteratively optimizing the dual misclassification costs in a restricted LP and dynamically generating weak hypotheses to make new LP columns. We provide algorithms for soft margin classification, confidence-rated, and regression boosting problems. Unlike gradient boosting algorithms, which may converge in the limit only, LPBoost converges in a finite number of iterations to a global solution satisfying mathematically well-defined optimality conditions. The optimal solutions of LPBoost are very sparse in contrast with gradient based methods. Computationally, LPBoost is competitive in quality and computational cost to AdaBoost.

462 citations
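
A compact sketch of the column-generation loop, with scipy's linprog solving the restricted dual over the hypotheses generated so far. For simplicity the "weak learner" just scans a precomputed prediction matrix H, which is an assumption rather than the paper's setup; the weights of the final ensemble are the duals of the edge constraints.

# LPBoost sketch: iterate between the restricted dual LP and column generation.
import numpy as np
from scipy.optimize import linprog

def lpboost(H, y, D=0.2, tol=1e-6):
    """H[i, j] = +/-1 prediction of weak hypothesis j on example i; y = labels.
    D is the misclassification-cost cap and must satisfy D >= 1/len(y)."""
    n = len(y)
    cols = []                                   # indices of generated hypotheses
    u, beta = np.full(n, 1.0 / n), 0.0          # dual costs, current edge bound
    while True:
        scores = (u * y) @ H                    # edge of every candidate column
        j = int(np.argmax(scores))
        if scores[j] <= beta + tol:
            break                               # no violated column: optimal
        cols.append(j)
        # Restricted dual: min beta s.t. sum_i u_i y_i h_j(x_i) <= beta (j in cols),
        # sum u = 1, 0 <= u_i <= D. Variables z = [u, beta].
        c = np.concatenate([np.zeros(n), [1.0]])
        A_ub = np.hstack([(y[:, None] * H[:, cols]).T, -np.ones((len(cols), 1))])
        res = linprog(c, A_ub=A_ub, b_ub=np.zeros(len(cols)),
                      A_eq=np.concatenate([np.ones(n), [0.0]])[None, :], b_eq=[1.0],
                      bounds=[(0, D)] * n + [(None, None)])
        u, beta = res.x[:n], res.x[n]
    return cols, u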


Journal ArticleDOI
TL;DR: This paper describes a new formulation, based on linear finite elements and non-linear programming, for computing rigorous lower bounds in 1, 2 and 3 dimensions; it is shown to be vastly superior to an equivalent formulation based on a linearized yield surface and linear programming.
Abstract: This paper describes a new formulation, based on linear finite elements and non-linear programming, for computing rigorous lower bounds in 1, 2 and 3 dimensions. The resulting optimization problem is typically very large and highly sparse and is solved using a fast quasi-Newton method whose iteration count is largely independent of the mesh refinement. For two-dimensional applications, the new formulation is shown to be vastly superior to an equivalent formulation that is based on a linearized yield surface and linear programming. Although it has been developed primarily for geotechnical applications, the method can be used for a wide range of plasticity problems including those with inhomogeneous materials, complex loading, and complicated geometry. Copyright © 2002 John Wiley & Sons, Ltd.

453 citations


Proceedings ArticleDOI
05 Aug 2002
TL;DR: Two methods are compared for solving the optimization that combines task assignment, subject to UAV capability constraints, and path planning, subject to dynamics, avoidance, and timing constraints.
Abstract: This paper addresses the problems of autonomous task allocation and trajectory planning for a fleet of UAVs. Two methods are compared for solving the optimization that combines task assignment, subjected to UAV capability constraints, and path planning, subjected to dynamics, avoidance and timing constraints. Both sub-problems are non-convex and the two are strongly-coupled. The first method expresses the entire problem as a single mixed-integer linear program (MILP) that can be solved using available software. This method is guaranteed to find the globally-optimal solution to the problem, but is computationally intensive. The second method employs an approximation for rapid computation of the cost of many different trajectories. This enables the assignment and trajectory problems to be decoupled and partially distributed, offering much faster computation. The paper presents several examples to compare the performance and computational results from these two algorithms.

392 citations
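
For intuition on the decoupled second method: once trajectory costs have been approximated, the assignment sub-problem on its own is a classic linear assignment. A toy version below uses straight-line distance as the cost approximation, which is an assumption standing in for the paper's trajectory-cost estimate.

# Decoupled task assignment: build an approximate cost matrix, then solve the
# linear assignment problem optimally with the Hungarian method.
import numpy as np
from scipy.optimize import linear_sum_assignment

uavs = np.array([[0.0, 0.0], [5.0, 1.0], [2.0, 4.0]])      # illustrative positions
targets = np.array([[4.0, 4.0], [1.0, 5.0], [6.0, 0.0]])
cost = np.linalg.norm(uavs[:, None, :] - targets[None, :, :], axis=2)
rows, cols = linear_sum_assignment(cost)                    # optimal one-to-one pairing
print(list(zip(rows, cols)), cost[rows, cols].sum())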


Journal ArticleDOI
TL;DR: In this paper, a new method for computing rigorous upper bounds on the limit loads for one-, two-and three-dimensional continua is described, which is based on linear finite elements.
Abstract: A new method for computing rigorous upper bounds on the limit loads for one-, two- and three-dimensional continua is described. The formulation is based on linear finite elements, permits kinematically admissible velocity discontinuities at all interelement boundaries, and furnishes a kinematically admissible velocity field by solving a non-linear programming problem. In the latter, the objective function corresponds to the dissipated power (which is minimized) and the unknowns are subject to linear equality constraints as well as linear and non-linear inequality constraints. Provided the yield surface is convex, the optimization problem generated by the upper bound method is also convex and can be solved efficiently by applying a two-stage, quasi-Newton scheme to the corresponding Kuhn–Tucker optimality conditions. A key advantage of this strategy is that its iteration count is largely independent of the mesh size. Since the formulation permits non-linear constraints on the unknowns, no linearization of the yield surface is necessary and the modelling of three-dimensional geometries presents no special difficulties. The utility of the proposed upper bound method is illustrated by applying it to a number of two- and three-dimensional boundary value problems. For a variety of two-dimensional cases, the new scheme is up to two orders of magnitude faster than an equivalent linear programming scheme which uses yield surface linearization. Copyright © 2001 John Wiley & Sons, Ltd.

387 citations


Journal ArticleDOI
Robert E. Bixby1
TL;DR: One person's perspective on the development of computational tools for linear programming is described, beginning with a short personal history, followed by historical remarks covering the roughly 40 years of linear-programming developments that predate the author's own involvement in the subject.
Abstract: This paper is an invited contribution to the 50th anniversary issue of the journalOperations Research, published by the Institute of Operations Research and Management Science (INFORMS). It describes one person's perspective on the development of computational tools for linear programming. The paper begins with a short personal history, followed by historical remarks covering the some 40 years of linear-programming developments that predate my own involvement in this subject. It concludes with a more detailed look at the evolution of computational linear programming since 1987.

381 citations


Book
31 Aug 2002
TL;DR: The key ideas of the exponential potential function are presented, along with early algorithms, recent developments, and computational experiments.
Abstract: List of Figures. List of Tables. Preface. 1. Introduction. 1. Early Algorithms. 2. The Exponential Potential Function - Key Ideas. 3. Recent Developments. 4. Computational Experiments. Appendices. Index.

376 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider the problem of solving conflicts arising among several aircraft that are assumed to move in a shared airspace and propose two different formulations of the multiaircraft conflict avoidance problem as a mixed-integer linear program.
Abstract: This paper considers the problem of resolving conflicts arising among several aircraft that are assumed to move in a shared airspace. Aircraft cannot get closer to each other than a given safety distance in order to avoid possible conflicts between different airplanes. For such a system of multiple aircraft, we consider the path planning problem among given waypoints avoiding all possible conflicts. In particular we are interested in optimal paths, i.e., we want to minimize the total flight time. We propose two different formulations of the multiaircraft conflict avoidance problem as a mixed-integer linear program: in the first case only velocity changes are admissible maneuvers, in the second only heading angle changes are allowed. Due to the linear formulation of the two problems, solutions may be obtained quickly with standard optimization software, allowing our approach to be implemented in real time.

Journal ArticleDOI
TL;DR: In the single seller (auction) version, a necessary and sufficient condition is given for the Vickrey payoff point to be implementable by a pricing equilibrium.

Proceedings ArticleDOI
12 May 2002
TL;DR: Three variants of PSO are compared with the widely used branch and bound technique on several integer programming test problems; results indicate that PSO handles such problems efficiently and in most cases outperforms the branch and bound technique.
Abstract: The investigation of the performance of the particle swarm optimization (PSO) method in integer programming problems, is the main theme of the present paper. Three variants of PSO are compared with the widely used branch and bound technique, on several integer programming test problems. Results indicate that PSO handles efficiently such problems, and in most cases it outperforms the branch and bound technique.
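
A minimal sketch of PSO applied to an integer problem, rounding particle positions at evaluation time; this mirrors only the simplest style of variant the paper compares, and all parameter values and the test function are assumptions.

# PSO for integer programming: particles move continuously, but candidates are
# rounded to the integer lattice before each objective evaluation.
import numpy as np

def pso_integer(f, lb, ub, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    dim = len(lb)
    x = rng.uniform(lb, ub, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(np.round(p)) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update
        x = np.clip(x + v, lb, ub)
        fx = np.array([f(np.round(p)) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return np.round(g), f(np.round(g))

# Toy integer test problem: min (x-3)^2 + (y+1)^2 over the integer box [-10,10]^2
print(pso_integer(lambda z: (z[0] - 3) ** 2 + (z[1] + 1) ** 2, [-10, -10], [10, 10]))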

Journal ArticleDOI
TL;DR: This paper presents a new technique for analyzing a power grid using macromodels that are created for a set of partitions of the grid, and shows that even for a 60 million-node power grid, the approach allows for an efficient analysis, whereas previous approaches have been unable to handle power grids of such size.
Abstract: Careful design and verification of the power distribution network of a chip are of critical importance to ensure its reliable performance. With the increasing number of transistors on a chip, the size of the power network has grown so large as to make the verification task very challenging. The available computational power and memory resources impose limitations on the size of networks that can be analyzed using currently known techniques. Many of today's designs have power networks that are too large to be analyzed in the traditional way as flat networks. In this paper, we propose a hierarchical analysis technique to overcome the aforesaid capacity limitation. We present a new technique for analyzing a power grid using macromodels that are created for a set of partitions of the grid. Efficient numerical techniques for the computation and sparsification of the port admittance matrices of the macromodels are presented. A novel sparsification technique using a 0-1 integer linear programming formulation is proposed to achieve superior sparsification for a specified error. The run-time and memory efficiency of the proposed method are illustrated on industrial designs. It is shown that even for a 60 million-node power grid, our approach allows for an efficient analysis, whereas previous approaches have been unable to handle power grids of such size.
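
The flavor of the 0-1 integer-programming sparsification step can be sketched as a selection problem: drop as many matrix entries as possible subject to an error budget. The knapsack-style error metric below is an illustrative assumption; the paper's error model for port admittance matrices is more refined, and the matrix here is random toy data.

# Toy 0-1 ILP: maximize the number of dropped off-diagonal entries while the
# total dropped magnitude stays under an error budget.
import numpy as np
from pulp import LpProblem, LpMaximize, LpVariable, lpSum

rng = np.random.default_rng(1)
G = rng.random((6, 6)) * 0.1                                # stand-in admittance matrix
budget = 0.15                                               # assumed error budget
idx = [(i, j) for i in range(6) for j in range(6) if i != j]
drop = {ij: LpVariable(f"d_{ij[0]}_{ij[1]}", cat="Binary") for ij in idx}
prob = LpProblem("sparsify", LpMaximize)
prob += lpSum(drop.values())                                # objective: entries dropped
prob += lpSum(abs(G[ij]) * drop[ij] for ij in idx) <= budget
prob.solve()
print(sum(1 for ij in idx if drop[ij].value() == 1), "entries dropped")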

Proceedings ArticleDOI
25 Jul 2002
TL;DR: A fuzzy-GA method for dispersed generator placement in distribution systems is presented, which solves the resulting fuzzy nonlinear goal program with a genetic algorithm rather than transforming the nonlinear problem into a linear model.
Abstract: This paper presents a fuzzy-GA method for dispersed generator placement in distribution systems. The problem formulation considers an objective of reducing the power loss costs of distribution systems, together with constraints on the number and size of dispersed generators and on the deviation of the bus voltage. The main idea is to transform the original objective function and constraints of the fuzzy nonlinear goal program into equivalent multi-objective functions with fuzzy sets that capture their imprecise nature, and to solve the problem with the proposed genetic algorithm, without transforming this nonlinear problem into a linear model or resorting to other methods. Moreover, the algorithm proposes a method for finding satisfactory solutions to the constrained multiple-objective problem. Analyzing the results and updating the expected value of each objective function allows the dispatcher to obtain a compromise or satisfactory solution efficiently.

Journal ArticleDOI
TL;DR: Several linear programming formulations for the one-dimensional cutting stock and bin packing problems are reviewed, including the models of Kantorovich and Gilmore–Gomory, one-cut models as in the Dyckhoff–Stadtler approach, position-indexed models, and a model derived from the vehicle routing literature.
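
Since the review centers on models like Gilmore–Gomory, here is a compact column-generation sketch for the cutting-stock LP relaxation: a restricted master LP priced by an unbounded-knapsack dynamic program. Integer widths are assumed, the instance data are illustrative, and scipy's HiGHS interface supplies the dual prices.

# Gilmore-Gomory column generation for the cutting-stock LP relaxation.
import numpy as np
from scipy.optimize import linprog

def solve_master(cols, demand):
    A = np.column_stack(cols)
    res = linprog(np.ones(A.shape[1]), A_ub=-A, b_ub=-np.asarray(demand, float),
                  method="highs")
    return res, -res.ineqlin.marginals          # dual prices of the covering rows

def price_column(pi, widths, W):
    # Unbounded knapsack DP: the most dual value that fits in one roll of width W.
    best, take = np.zeros(W + 1), np.full(W + 1, -1)
    for c in range(1, W + 1):
        for i, w in enumerate(widths):
            if w <= c and best[c - w] + pi[i] > best[c]:
                best[c], take[c] = best[c - w] + pi[i], i
    col, c = np.zeros(len(widths)), W
    while take[c] >= 0:                         # reconstruct the cutting pattern
        col[take[c]] += 1
        c -= widths[take[c]]
    return best[W], col

def cutting_stock(widths, demand, W, tol=1e-9):
    cols = [np.eye(len(widths))[:, i] * (W // w) for i, w in enumerate(widths)]
    while True:
        res, pi = solve_master(cols, demand)
        value, col = price_column(pi, widths, W)
        if value <= 1 + tol:                    # no column with negative reduced cost
            return res.fun, cols
        cols.append(col)

# 3 item widths cut from rolls of width 10; the LP optimum bounds the roll count
print(cutting_stock(widths=[3, 5, 7], demand=[25, 20, 18], W=10)[0])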

Journal ArticleDOI
TL;DR: In this paper, the authors present fuel/time-optimal control algorithms for a co-ordination and control architecture that was designed for a fleet of spacecraft, including low-level formation-keeping algorithms and a high-level fleet planner that creates trajectories to re-size or re-target the formation.
Abstract: Formation flying of multiple spacecraft is an enabling technology for many future space science missions. However, the co-ordination and control of these instruments poses many difficult design challenges. This paper presents fuel/time-optimal control algorithms for a co-ordination and control architecture that was designed for a fleet of spacecraft. This architecture includes low-level formation-keeping algorithms and a high-level fleet planner that creates trajectories to re-size or re-target the formation. The trajectory and formation-keeping optimization algorithms are based on the solutions of linear and integer programming problems. The result is a very flexible optimization framework that can be used off-line to analyse various aspects of the mission design and in real time as part of an onboard autonomous formation flying control system. The overall control approach is demonstrated using a nonlinear simulation environment that includes realistic measurement noises, disturbances, and actuator nonlinearities. Copyright © 2002 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: Convex and semidefinite optimization methods, duality, and complexity theory are introduced to shed new light on the relation of option and stock prices based just on the no-arbitrage assumption, and it is shown that it is NP-hard to find best possible bounds in multiple dimensions.
Abstract: The idea of investigating the relation of option and stock prices based just on the no-arbitrage assumption, but without assuming any model for the underlying price dynamics, has a long history in the financial economics literature. We introduce convex and, in particular semidefinite optimization methods, duality, and complexity theory to shed new light on this relation. For the single stock problem, given moments of the prices of the underlying assets, we show that we can find best-possible bounds on option prices with general payoff functions efficiently, either algorithmically (solving a semidefinite optimization problem) or in closed form. Conversely, given observable option prices, we provide best-possible bounds on moments of the prices of the underlying assets, as well as on the prices of other options on the same asset by solving linear optimization problems. For options that are affected by multiple stocks either directly (the payoff of the option depends on multiple stocks) or indirectly (we have information on correlations between stock prices), we find nonoptimal bounds using convex optimization methods. However, we show that it is NP-hard to find best possible bounds in multiple dimensions. We extend our results to incorporate transactions costs.
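
The "observable option prices to bounds on other options" direction becomes a plain linear program once the terminal price is discretized: optimize the target payoff's expectation over all probability measures consistent with the observed prices. The sketch below assumes zero interest rates, an illustrative price grid, and made-up observed call prices.

# No-arbitrage LP bounds on a new option from observed option prices.
import numpy as np
from scipy.optimize import linprog

S = np.linspace(0, 200, 2001)                  # terminal-price grid (assumed support)
S0, strikes, prices = 100.0, [90.0, 110.0], [14.0, 5.0]
A_eq = np.vstack([np.ones_like(S), S] +
                 [np.maximum(S - K, 0.0) for K in strikes])
b_eq = np.array([1.0, S0] + prices)            # unit mass, forward, observed calls
payoff = np.maximum(S - 100.0, 0.0)            # the option to be bounded
lo = linprog(payoff, A_eq=A_eq, b_eq=b_eq).fun         # cheapest consistent measure
hi = -linprog(-payoff, A_eq=A_eq, b_eq=b_eq).fun       # dearest consistent measure
print(lo, hi)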

Journal ArticleDOI
TL;DR: It is proved that the new large-update IPMs enjoy a polynomial $O(n^{\frac{q+1}{2q}}\log\frac{n}{\varepsilon})$ iteration bound, which approaches the best known bound $O(\sqrt{n}\log\frac{n}{\varepsilon})$ as the barrier degree $q$ increases.
Abstract: In this paper, we first introduce the notion of self-regular functions. Various appealing properties of self-regular functions are explored and we also discuss the relation between self-regular functions and the well-known self-concordant functions. Then we use such functions to define self-regular proximity measures for path-following interior point methods for solving linear optimization (LO) problems. Any self-regular proximity measure naturally defines a primal-dual search direction. In this way a new class of primal-dual search directions for solving LO problems is obtained. Using the appealing properties of self-regular functions, we prove that these new large-update path-following methods for LO enjoy a polynomial $O(n^{\frac{q+1}{2q}}\log\frac{n}{\varepsilon})$ iteration bound, where $q \ge 1$ is the so-called barrier degree of the self-regular proximity measure underlying the algorithm. When $q$ increases, this bound approaches the best known complexity bound for interior point methods, namely $O(\sqrt{n}\log\frac{n}{\varepsilon})$. Our unified analysis also provides the best known $O(\sqrt{n}\log\frac{n}{\varepsilon})$ iteration bound for small-update IPMs. At each iteration, we need only to solve one linear system. As a byproduct of our results, we remove some limitations of the algorithms presented in [24] and improve their complexity as well. An extension of these results to semidefinite optimization (SDO) is also discussed.

Journal ArticleDOI
TL;DR: This work examines the history of linear programming from computational, geometric, and complexity points of view, looking at simplex, ellipsoid, interior-point, and other methods.
Abstract: We examine the history of linear programming from computational, geometric, and complexity points of view, looking at simplex, ellipsoid, interior-point, and other methods.

Book ChapterDOI
Maxim Sviridenko1
27 May 2002
TL;DR: A new approximation algorithm for the metric uncapacitated facility location problem is designed; it is of LP rounding type and is based on a rounding technique developed in [5,6,7].
Abstract: We design a new approximation algorithm for the metric uncapacitated facility location problem. This algorithm is of LP rounding type and is based on a rounding technique developed in [5,6,7].

Proceedings ArticleDOI
16 Nov 2002
TL;DR: This work provides the first polynomial time algorithm for the linear version of a problem defined by Irving Fisher in 1891, modeled after Kuhn's primal-dual algorithm for bipartite matching.
Abstract: Although the study of market equilibria has occupied center stage within mathematical economics for over a century, polynomial time algorithms for such questions have so far evaded researchers. We provide the first such algorithm for the linear version of a problem defined by Irving Fisher in 1891. Our algorithm is modeled after Kuhn's (1955) primal-dual algorithm for bipartite matching.

Journal ArticleDOI
TL;DR: A new approach to solving the express shipment service network design problem is described, which removes flow decisions as explicit decisions and establishes that its linear programming relaxation gives stronger lower bounds than conventional approaches.
Abstract: In this paper we describe a new approach to solving the express shipment service network design problem. Conventional polyhedral methods for network design and network loading problems do not consistently solve instances of the planning problem we consider. Under a restricted version of the problem, we transform conventional formulations to a new formulation using what we term composite variables. By removing flow decisions as explicit decisions, this extended formulation is cast purely in terms of the design elements. We establish that its linear programming relaxation gives stronger lower bounds than conventional approaches. We apply this composite variable formulation approach to the UPS Next Day Air delivery network and demonstrate potential annual cost savings in the hundreds of millions of dollars.

Proceedings ArticleDOI
08 May 2002
TL;DR: In this paper, the complexity and coupling issues in cooperative decision and control of distributed autonomous unmanned aerial vehicle (UAV) teams are addressed, where team vehicles are allocated to sub-teams using the set partition theory.
Abstract: This paper addresses complexity and coupling issues in cooperative decision and control of distributed autonomous unmanned aerial vehicle (UAV) teams. In particular, recent results obtained by the in-house research team are presented. Hierarchical decomposition is implemented, where team vehicles are allocated to sub-teams using set partition theory. Results are presented for single assignment and multiple assignments using network flow and auction algorithms. Simulation results are presented for wide area search munitions, where complexity and coupling are incrementally addressed in the decision system, yielding radically improved team performance.

BookDOI
01 Nov 2002
TL;DR: This book covers the linear programming problem, the simplex method, and the computational techniques (design principles of LP systems, data structures, basis factorization, and the primal and dual algorithms) needed to solve large-scale LP problems.
Abstract: Preface. Part I: Preliminaries. 1. The linear programming problem. 2. The simplex method. 3. Large-scale LP problems. Part II: Computational Techniques. 4. Design principles of LP systems. 5. Data structures and basic operations. 6. Problem definition. 7. LP Processing. 8. Basis inverse, factorization. 9. The primal algorithm. 10. The dual algorithm. 11. Various issues. Index.

Journal ArticleDOI
TL;DR: Worst-case estimates of the number of iterations required to converge to a solution of the perturbed instance from the warm-start points are obtained, showing that these estimates depend on the size of the perturbation and on the conditioning and other properties of the problem instances.
Abstract: We study the situation in which, having solved a linear program with an interior-point method, we are presented with a new problem instance whose data is slightly perturbed from the original. We describe strategies for recovering a "warm-start" point for the perturbed problem instance from the iterates of the original problem instance. We obtain worst-case estimates of the number of iterations required to converge to a solution of the perturbed instance from the warm-start points, showing that these estimates depend on the size of the perturbation and on the conditioning and other properties of the problem instances.

Journal ArticleDOI
TL;DR: Two simple randomized approximation algorithms are described, which are guaranteed to deliver feasible schedules with expected objective function value within factors of 1.7451 and 1.6853, respectively, of the optimum of two linear programming relaxations of the problem.
Abstract: We consider the scheduling problem of minimizing the average weighted completion time of n jobs with release dates on a single machine. We first study two linear programming relaxations of the problem, one based on a time-indexed formulation, the other on a completion-time formulation. We show their equivalence by proving that an O(n log n) greedy algorithm leads to optimal solutions to both relaxations. The proof relies on the notion of mean busy times of jobs, a concept which enhances our understanding of these LP relaxations. Based on the greedy solution, we describe two simple randomized approximation algorithms, which are guaranteed to deliver feasible schedules with expected objective function value within factors of 1.7451 and 1.6853, respectively, of the optimum. They are based on the concept of common and independent $\alpha$-points, respectively. The analysis implies in particular that the worst-case relative error of the LP relaxations is at most 1.6853, and we provide instances showing that it is at least $e/(e-1) \approx 1.5819$. Both algorithms may be derandomized; their deterministic versions run in $O(n^2)$ time. The randomized algorithms also apply to the on-line setting, in which jobs arrive dynamically over time and one must decide which job to process without knowledge of jobs that will be released afterwards.
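
A sketch of the common-$\alpha$-point idea: the LP relaxation is solved by the greedy preemptive schedule (highest weight-to-processing-time ratio among released jobs), each job's $\alpha$-point is the time at which an $\alpha$ fraction of it has been processed, and jobs are then sequenced nonpreemptively in $\alpha$-point order. The instance at the end is illustrative.

# Alpha-point scheduling for 1 | r_j | sum w_j C_j.
import heapq

def alpha_point_schedule(jobs, alpha=0.5):
    """jobs: list of (release, processing, weight); returns start times."""
    events = sorted(range(len(jobs)), key=lambda j: jobs[j][0])
    t, k, left = 0.0, 0, [p for _, p, _ in jobs]
    heap, alpha_pt = [], [None] * len(jobs)
    while k < len(jobs) or heap:
        if not heap:                          # idle until the next release
            t = max(t, jobs[events[k]][0])
        while k < len(jobs) and jobs[events[k]][0] <= t:
            r, p, w = jobs[events[k]]
            heapq.heappush(heap, (-w / p, events[k]))   # preemptive w/p priority
            k += 1
        _, j = heap[0]
        run = left[j]
        if k < len(jobs):                     # run until done or the next release
            run = min(run, jobs[events[k]][0] - t)
        done_before = jobs[j][1] - left[j]
        if done_before < alpha * jobs[j][1] <= done_before + run:
            alpha_pt[j] = t + alpha * jobs[j][1] - done_before
        left[j] -= run
        t += run
        if left[j] == 0:
            heapq.heappop(heap)
    order = sorted(range(len(jobs)), key=lambda j: alpha_pt[j])
    t, starts = 0.0, {}                       # nonpreemptive pass in alpha-point order
    for j in order:
        t = max(t, jobs[j][0]); starts[j] = t; t += jobs[j][1]
    return starts

jobs = [(0, 3, 1.0), (1, 1, 2.0), (2, 4, 1.0)]   # (release, processing, weight)
print(alpha_point_schedule(jobs))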

Proceedings Article
01 Jan 2002
TL;DR: This paper gives a unifying treatment of max-min fairness, which encompasses all existing results in a simplifying framework and extends its applicability to new examples, and shows that, if the set of feasible allocations has the free disposal property, then max-min programming reduces to a simpler algorithm, called water filling, whose complexity is much lower.
Abstract: Max-min fairness is widely used in various areas of networking. In every case where it is used, there is a proof of existence and one or several algorithms for computing the max-min fair allocation; in most, but not all cases, they are based on the notion of bottlenecks. In spite of this wide applicability, there are still examples, arising in the context of mobile or peer-to-peer networks, where the existing theories do not seem to apply directly. In this paper, we give a unifying treatment of max-min fairness, which encompasses all existing results in a simplifying framework, and extends its applicability to new examples. First, we observe that the existence of max-min fairness is actually a geometric property of the set of feasible allocations (uniqueness always holds). There exist sets on which max-min fairness does not exist, and we describe a large class of sets on which a max-min fair allocation does exist. This class contains the compact, convex sets of R^N, but not only. Second, we give a general purpose, centralized algorithm, called Max-min Programming, for computing the max-min fair allocation in all cases where it exists (whether the set of feasible allocations is in our class or not). Its complexity is of the order of N linear programming steps in R^N, in the case where the feasible set is defined by linear constraints. We show that, if the set of feasible allocations has the free-disposal property, then Max-min Programming degenerates to a simpler algorithm, called Water Filling, whose complexity is much lower. Free disposal corresponds to the cases where a bottleneck argument can be made, and Water Filling is the general form of all previously known centralized algorithms for such cases. Our derivations are based on the relation between max-min fairness and leximin ordering. All our results apply mutatis mutandis to min-max fairness. Our results apply to weighted, unweighted and util-max-min and min-max fairness. Distributed algorithms for the computation of max-min fair allocations are left outside the scope of this paper.
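
For the free-disposal case with linear capacity constraints, Water Filling is the familiar progressive-filling loop: grow all rates uniformly and freeze the flows that cross a saturated link. A small sketch follows, assuming every flow crosses at least one link; the routing matrix and capacities are illustrative.

# Progressive filling (water filling) for max-min fair rate allocation.
import numpy as np

def water_filling(A, c):
    """A[l, f] = 1 if flow f crosses link l; c = link capacities."""
    n_links, n_flows = A.shape
    rate = np.zeros(n_flows)
    frozen = np.zeros(n_flows, dtype=bool)
    while not frozen.all():
        slack = c - A @ rate
        users = A @ (~frozen)                 # count of still-active flows per link
        with np.errstate(divide="ignore", invalid="ignore"):
            inc = np.where(users > 0, slack / users, np.inf)
        rate[~frozen] += inc.min()            # largest common increase
        tight = A @ rate >= c - 1e-12         # links that just saturated
        frozen |= A[tight].sum(axis=0) > 0    # freeze flows on bottleneck links
    return rate

# Classic line network: flow 0 crosses both links, flows 1 and 2 one link each
A = np.array([[1, 1, 0], [1, 0, 1]], float)
print(water_filling(A, c=np.array([1.0, 2.0])))   # -> [0.5, 0.5, 1.5]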

Posted Content
Neal E. Young1
TL;DR: In this article, the authors explore how to avoid the time bottleneck in randomized rounding algorithms for packing and covering problems (linear programs or mixed integer linear programs with no negative coefficients).
Abstract: Randomized rounding is a standard method, based on the probabilistic method, for designing combinatorial approximation algorithms. In Raghavan's seminal paper introducing the method (1988), he writes: "The time taken to solve the linear program relaxations of the integer programs dominates the net running time theoretically (and, most likely, in practice as well)." This paper explores how this bottleneck can be avoided for randomized rounding algorithms for packing and covering problems (linear programs, or mixed integer linear programs, having no negative coefficients). The resulting algorithms are greedy algorithms, and are faster and simpler to implement than standard randomized-rounding algorithms. This approach can also be used to understand Lagrangian-relaxation algorithms for packing/covering linear programs: such algorithms can be viewed as (derandomized) randomized-rounding schemes.
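
For reference, the standard randomized-rounding scheme whose LP-solve bottleneck the paper targets looks like this on a covering problem: solve the relaxation, then sample each set with its inflated fractional value. The ln(m) scaling and the retry loop are the usual textbook choices, stated here as assumptions, and the instance is a toy.

# Randomized rounding for weighted set cover: LP relaxation, then sampling.
import numpy as np
from scipy.optimize import linprog

def rounded_set_cover(A, w, seed=0):
    """A[e, s] = 1 iff set s covers element e; w = set weights."""
    m, n = A.shape
    res = linprog(w, A_ub=-A, b_ub=-np.ones(m))     # covering LP: A x >= 1, x >= 0
    rng = np.random.default_rng(seed)
    p = np.minimum(1.0, res.x * np.log(max(m, 2)))  # inflate fractional values
    while True:                                     # retry until feasible (whp few tries)
        pick = rng.random(n) < p
        if (A[:, pick].sum(axis=1) >= 1).all():
            return pick

A = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1]], float)
print(rounded_set_cover(A, w=np.ones(3)))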

Journal ArticleDOI
TL;DR: In this article, an exact and computationally efficient mixed-integer linear programming (MILP) formulation of the self-scheduling problem for a price-maker to achieve maximum profit in a pool-based electricity market is presented.
Abstract: This paper addresses the self-scheduling problem faced by a price-maker to achieve maximum profit in a pool-based electricity market. An exact and computationally efficient mixed-integer linear programming (MILP) formulation of this problem is presented. This formulation models precisely the price-maker capability of altering market-clearing prices to its own benefits, through price quota curves. No assumptions are made on the characteristics of the pool and its agents. A realistic case study is presented and the results obtained are analyzed in detail.