
Showing papers on "Linear programming" published in 2015


Book
27 Jul 2015
TL;DR: This comprehensive textbook presents a clean and coherent account of most fundamental tools and techniques in Parameterized Algorithms and is a self-contained guide to the area, providing a toolbox of algorithmic techniques.
Abstract: This comprehensive textbook presents a clean and coherent account of most fundamental tools and techniques in Parameterized Algorithms and is a self-contained guide to the area. The book covers many of the recent developments of the field, including application of important separators, branching based on linear programming, Cut & Count to obtain faster algorithms on tree decompositions, algorithms based on representative families of matroids, and use of the Strong Exponential Time Hypothesis. A number of older results are revisited and explained in a modern and didactic way. The book provides a toolbox of algorithmic techniques. Part I is an overview of basic techniques, each chapter discussing a certain algorithmic paradigm. The material covered in this part can be used for an introductory course on fixed-parameter tractability. Part II discusses more advanced and specialized algorithmic ideas, bringing the reader to the cutting edge of current research. Part III presents complexity results and lower bounds, giving negative evidence by way of W[1]-hardness, the Exponential Time Hypothesis, and kernelization lower bounds. All the results and concepts are introduced at a level accessible to graduate students and advanced undergraduate students. Every chapter is accompanied by exercises, many with hints, while the bibliographic notes point to original publications and related work.

1,544 citations


Journal ArticleDOI
TL;DR: In this article, a general numerical framework to approximate solutions to linear programs related to optimal transport is presented, where the set of linear constraints can be split in an intersection of a few simple constraints, for which the projections can be computed in closed form.
Abstract: This article details a general numerical framework to approximate solutions to linear programs related to optimal transport. The general idea is to introduce an entropic regularization of the initial linear program. This regularized problem corresponds to a Kullback-Leibler Bregman divergence projection of a vector (representing some initial joint distribution) on the polytope of constraints. We show that for many problems related to optimal transport, the set of linear constraints can be split in an intersection of a few simple constraints, for which the projections can be computed in closed form. This allows us to make use of iterative Bregman projections (when there are only equality constraints) or more generally Bregman-Dykstra iterations (when inequality constraints are involved). We illustrate the usefulness of this approach to several variational problems related to optimal transport: barycenters for the optimal transport metric, tomographic reconstruction, multi-marginal optimal transport and in particular its application to Brenier's relaxed solutions of incompressible Euler equations, partial unbalanced optimal transport and optimal transport with capacity constraints.

567 citations
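
For a single two-marginal transport problem, the entropic-regularization idea above reduces to Sinkhorn-style iterative Bregman projections onto the two marginal constraints. The sketch below illustrates this; the data, regularization strength, and iteration count are illustrative choices, not taken from the paper.

```python
import numpy as np

# Two discrete marginals and a ground cost matrix (illustrative data).
a = np.array([0.2, 0.5, 0.3])          # source distribution
b = np.array([0.4, 0.1, 0.25, 0.25])   # target distribution
C = np.abs(np.subtract.outer(np.linspace(0, 1, 3), np.linspace(0, 1, 4)))

eps = 0.05                 # entropic regularization strength
K = np.exp(-C / eps)       # Gibbs kernel of the regularized problem

# Iterative Bregman (Sinkhorn) projections onto the two marginal constraints.
u = np.ones_like(a)
v = np.ones_like(b)
for _ in range(1000):
    u = a / (K @ v)
    v = b / (K.T @ u)

P = np.diag(u) @ K @ np.diag(v)      # approximate optimal transport plan
print(P.sum(axis=1), P.sum(axis=0))  # ≈ a and b
print((P * C).sum())                 # regularized transport cost
```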


Journal ArticleDOI
TL;DR: In this article, the authors present the principles of primal-dual approaches while providing an overview of the numerical methods that have been proposed in different contexts, including convex analysis, discrete optimization, parallel processing, and nonsmooth optimization with an emphasis on sparsity issues.
Abstract: Optimization methods are at the core of many problems in signal/image processing, computer vision, and machine learning. For a long time, it has been recognized that looking at the dual of an optimization problem may drastically simplify its solution. However, deriving efficient strategies that jointly bring into play the primal and dual problems is a more recent idea that has generated many important new contributions in recent years. These novel developments are grounded in the recent advances in convex analysis, discrete optimization, parallel processing, and nonsmooth optimization with an emphasis on sparsity issues. In this article, we aim to present the principles of primal-dual approaches while providing an overview of the numerical methods that have been proposed in different contexts. Last but not least, primal-dual methods lead to algorithms that are easily parallelizable. Today, such parallel algorithms are becoming increasingly important for efficiently handling high-dimensional problems.

316 citations


Journal ArticleDOI
TL;DR: This paper proposes a two-stage two-level model for the energy pricing and dispatch problem faced by a smart grid retailer who plays the role of an intermediary agent between a wholesale energy market and end consumers and proposes a heuristic method to select the parameter in disjunctive constraints based on the interpretation of Lagrange multipliers.
Abstract: This paper proposes a two-stage two-level model for the energy pricing and dispatch problem faced by a smart grid retailer who plays the role of an intermediary agent between a wholesale energy market and end consumers. Demand response of consumers with respect to the retail price is characterized by a Stackelberg game in the first stage, thus the first stage has two levels. A risk-averse energy dispatch accounting for market price uncertainty is modeled by a linear robust optimization with objective uncertainty in the second stage. The proposed model is transformed into a mixed integer linear program (MILP) by jointly using the Karush-Kuhn-Tucker (KKT) conditions, disjunctive constraints, and duality theory. We propose a heuristic method to select the parameter in the disjunctive constraints based on the interpretation of Lagrange multipliers. Moreover, we suggest solving an additional linear program (LP) to acquire a possible enhanced bidding strategy that guarantees a Pareto improvement on the retailer's profit over the entire uncertainty set. Case studies demonstrate that the proposed model and method are valid.

309 citations
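
For readers unfamiliar with the bilevel-to-MILP step, the block below sketches the standard big-M (disjunctive) reformulation of a single KKT complementarity condition. The symbols M and z are illustrative, and the big-M constant is presumably the "parameter in disjunctive constraints" that the paper's Lagrange-multiplier-based heuristic selects; this is a generic construction, not the paper's exact formulation.

```latex
% Lower-level KKT complementarity  0 \le \lambda \perp g(x) \le 0
% replaced by disjunctive (big-M) constraints with a binary variable z:
\begin{align*}
  g(x) \le 0, \qquad \lambda \ge 0, \qquad
  \lambda \le M z, \qquad
  -\,g(x) \le M (1 - z), \qquad
  z \in \{0, 1\}.
\end{align*}
```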


Proceedings ArticleDOI
01 Dec 2015
TL;DR: A new augmented distributed gradient method (termed Aug-DGM) based on consensus theory is developed that is able to seek the exact optimum even with constant stepsizes, assuming that the global objective function has a Lipschitz gradient.
Abstract: We consider distributed optimization problems in which a number of agents are to seek the optimum of a global objective function through merely local information sharing. The problem arises in various application domains, such as resource allocation, sensor fusion and distributed learning. In particular, we are interested in scenarios where agents use uncoordinated (different) constant stepsizes for local optimization. According to most existing works, using this kind of stepsize rule for the update, which is necessary in asynchronous scenarios, leads to a gap (error) between the estimated result and the exact optimum. To deal with this issue, we develop a new augmented distributed gradient method (termed Aug-DGM) based on consensus theory. The proposed algorithm not only allows for using uncoordinated stepsizes but also, most importantly, is able to seek the exact optimum even with constant stepsizes, assuming that the global objective function has a Lipschitz gradient. A simple numerical example is provided to illustrate the effectiveness of the algorithm.

300 citations


Journal ArticleDOI
TL;DR: A proximal gradient exact first-order algorithm (PG-EXTRA) is proposed that utilizes the composite structure and has the best known convergence rate; it is a nontrivial extension of the recent algorithm EXTRA.
Abstract: This paper proposes a decentralized algorithm for solving a consensus optimization problem defined in a static networked multi-agent system, where the local objective functions have the smooth+nonsmooth composite form. Examples of such problems include decentralized constrained quadratic programming and compressed sensing problems, as well as many regularization problems arising in inverse problems, signal processing, and machine learning, which have decentralized applications. This paper addresses the need for efficient decentralized algorithms that take advantage of proximal operations for the nonsmooth terms. We propose a proximal gradient exact first-order algorithm (PG-EXTRA) that utilizes the composite structure and has the best known convergence rate. It is a nontrivial extension of the recent algorithm EXTRA. At each iteration, each agent locally computes a gradient of the smooth part of its objective and a proximal map of the nonsmooth part, as well as exchanges information with its neighbors. The algorithm is “exact” in the sense that an exact consensus minimizer can be obtained with a fixed step size, whereas most previous methods must use diminishing step sizes. When the smooth part has Lipschitz gradients, PG-EXTRA has an ergodic convergence rate of $O\left({1\over k}\right)$ in terms of the first-order optimality residual. When the smooth part vanishes, PG-EXTRA reduces to P-EXTRA, an algorithm without the gradients (so no “G” in the name), which has a slightly improved convergence rate at $o\left({1\over k}\right)$ in a standard (non-ergodic) sense. Numerical experiments demonstrate the effectiveness of PG-EXTRA and validate our convergence results.

284 citations
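
The exact PG-EXTRA update uses a correction term that the abstract does not spell out, so the sketch below is not PG-EXTRA itself. It is a plain decentralized proximal-gradient iteration (neighbor averaging, local gradient of the smooth part, prox of an l1 nonsmooth part) meant only to illustrate the per-agent ingredients described above; the mixing matrix W and the problem data are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal map of t * ||.||_1 (an example of a nonsmooth term)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Illustrative setup: n agents, each holding a private least-squares term
# f_i(x) = 0.5 * ||A_i x - b_i||^2, plus a shared nonsmooth term lam * ||x||_1.
rng = np.random.default_rng(0)
n, d = 4, 5
A = [rng.standard_normal((8, d)) for _ in range(n)]
b = [rng.standard_normal(8) for _ in range(n)]
lam, alpha = 0.1, 0.01

# Doubly stochastic mixing matrix for a ring of 4 agents (illustrative).
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])

X = np.zeros((n, d))   # row i is agent i's local copy of the decision variable
for _ in range(2000):
    grads = np.stack([A[i].T @ (A[i] @ X[i] - b[i]) for i in range(n)])
    mixed = W @ X                                           # consensus step with neighbors
    X = soft_threshold(mixed - alpha * grads, alpha * lam)  # local proximal gradient step

# Disagreement across agents (should be small for a small constant stepsize).
print(np.max(np.abs(X - X.mean(axis=0))))
```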


Journal ArticleDOI
TL;DR: In this article, the authors proposed a distribution locational marginal pricing (DLMP) method through quadratic programming (QP) designed to alleviate the congestion that might occur in a distribution network with high penetration of flexible demands.
Abstract: This paper presents the distribution locational marginal pricing (DLMP) method through quadratic programming (QP) designed to alleviate the congestion that might occur in a distribution network with high penetration of flexible demands. In the DLMP method, the distribution system operator (DSO) calculates dynamic tariffs and publishes them to the aggregators, who make the optimal energy plans for the flexible demands. The DLMP through QP, instead of the linear programming studied in the previous literature, solves the multiple-solution issue of the aggregator optimization, which may cause decentralized congestion management by DLMP to fail. It is proven in this paper, using convex optimization theory, that the aggregator's optimization problem through QP is strictly convex and has a unique solution. The Karush-Kuhn-Tucker (KKT) conditions and the unique solution of the aggregator optimization ensure that the centralized DSO optimization and the decentralized aggregator optimization converge. Case studies using a distribution network with high penetration of electric vehicles (EVs) and heat pumps (HPs) validate the equivalence of the two optimization setups, and the efficacy of the proposed DLMP through QP for congestion management.

259 citations


Book ChapterDOI
11 Apr 2015
TL;DR: Usage scenarios of νZ are described, the tool architecture that allows dispatching problems to special purpose solvers is outlined, and use cases are examined.
Abstract: νZ is a part of the SMT solver Z3. It allows users to pose and solve optimization problems modulo theories. Many SMT applications use models to provide satisfying assignments, and a growing number of these build on top of Z3 to get optimal assignments with respect to objective functions. νZ provides a portfolio of approaches for solving linear optimization problems over SMT formulas, MaxSMT, and their combinations. Objective functions are combined as either Pareto fronts, lexicographically, or each objective is optimized independently. We describe usage scenarios of νZ, outline the tool architecture that allows dispatching problems to special purpose solvers, and examine use cases.

242 citations
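
As a hedged illustration of what optimization modulo theories looks like from the user side, the snippet below uses the standard z3py Optimize interface with a hard constraint, a weighted soft constraint (MaxSMT-style), and a linear objective. It is a generic usage sketch, not an example from the paper.

```python
from z3 import Ints, Optimize, sat

x, y = Ints("x y")
opt = Optimize()

# Hard constraints (must hold in any model).
opt.add(x + y <= 10, x >= 0, y >= 0)

# A weighted soft constraint: violating it costs 3.
opt.add_soft(x >= 4, weight=3)

# A linear objective over the SMT variables.
h = opt.maximize(x + 2 * y)

if opt.check() == sat:
    print(opt.model())   # optimal assignment
    print(h.value())     # optimal objective value
```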


Journal ArticleDOI
TL;DR: In this paper, a linear programming formulation for autonomous intersection control (LPAIC) is proposed to account for traffic dynamics within a connected vehicle environment, where a lane-based bi-level optimization model is introduced to propagate traffic flows in the network, accounting for dynamic departure time, dynamic route choice, and autonomous intersection control in the context of a system-optimum network model.
Abstract: This paper develops a novel linear programming formulation for autonomous intersection control (LPAIC) accounting for traffic dynamics within a connected vehicle environment. Firstly, a lane-based bi-level optimization model is introduced to propagate traffic flows in the network, accounting for dynamic departure time, dynamic route choice, and autonomous intersection control in the context of a system-optimum network model. Then the bi-level optimization model is transformed into the linear programming formulation by relaxing the nonlinear constraints with a set of linear inequalities. One special feature of the LPAIC formulation is that the entries of the constraint matrix have only values in {−1, 0, 1}. Moreover, it is proved that the constraint matrix is totally unimodular, so an optimal solution exists and contains only integer values. It is also shown that the traffic flows from different lanes pass through the conflict points of the intersection safely and that there are no holding flows in the solution. Three numerical case studies are conducted to demonstrate the properties and effectiveness of the LPAIC formulation to solve autonomous intersection control.

216 citations


Journal ArticleDOI
TL;DR: In this article, the authors show how several self-calibration problems can be treated efficiently within the framework of biconvex compressive sensing via a new method called SparseLift.
Abstract: The design of high-precision sensing devices becomes ever more difficult and expensive. At the same time, the need for precise calibration of these devices (ranging from tiny sensors to space telescopes) manifests itself as a major roadblock in many scientific and technological endeavors. To achieve optimal performance of advanced high-performance sensors one must carefully calibrate them, which is often difficult or even impossible to do in practice. In this work we bring together three seemingly unrelated concepts, namely self-calibration, compressive sensing, and biconvex optimization. The idea behind self-calibration is to equip a hardware device with a smart algorithm that can compensate automatically for the lack of calibration. We show how several self-calibration problems can be treated efficiently within the framework of biconvex compressive sensing via a new method called SparseLift. More specifically, we consider a linear system of equations y = DAx, where both x and the diagonal matrix D (which models the calibration error) are unknown. By 'lifting' this biconvex inverse problem we arrive at a convex optimization problem. By exploiting sparsity in the signal model, we derive explicit theoretical guarantees under which both x and D can be recovered exactly, robustly, and numerically efficiently via linear programming. Applications in array calibration and wireless communications are discussed and numerical simulations are presented, confirming and complementing our theoretical analysis.

209 citations
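
The "lifting" step can be read as follows. This is a sketch in the notation of the abstract; the l1-minimization program shown is one standard convex surrogate consistent with the claim that recovery can be done via linear programming, not necessarily the paper's exact formulation.

```latex
\begin{align*}
% y = DAx with D = diag(d); a_i^T is the i-th row of A, e_i the i-th unit vector.
y_i &= d_i\, a_i^{\mathsf{T}} x
     = a_i^{\mathsf{T}} \bigl( x\, d^{\mathsf{T}} \bigr) e_i
     = \bigl\langle a_i e_i^{\mathsf{T}},\, X \bigr\rangle,
     \qquad X := x\, d^{\mathsf{T}}, \\
% With x sparse, X is sparse; a convex surrogate solvable as a linear program:
\min_{X}\; \|X\|_1
     &\quad \text{subject to} \quad
     \bigl\langle a_i e_i^{\mathsf{T}},\, X \bigr\rangle = y_i,
     \quad i = 1, \dots, m.
\end{align*}
```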


Proceedings ArticleDOI
26 May 2015
TL;DR: A new approach to the design of smooth trajectories for quadrotor unmanned aerial vehicles (UAVs), which are free of collisions with obstacles along their entire length is presented, using IRIS, a recently developed technique for greedy convex segmentation, to pre-compute convex regions of safe space.
Abstract: We present a new approach to the design of smooth trajectories for quadrotor unmanned aerial vehicles (UAVs), which are free of collisions with obstacles along their entire length. To avoid the non-convex constraints normally required for obstacle-avoidance, we perform a mixed-integer optimization in which polynomial trajectories are assigned to convex regions which are known to be obstacle-free. Prior approaches have used the faces of the obstacles themselves to define these convex regions. We instead use IRIS, a recently developed technique for greedy convex segmentation [1], to pre-compute convex regions of safe space. This results in a substantially reduced number of integer variables, which improves the speed with which the optimization can be solved to its global optimum, even for tens or hundreds of obstacle faces. In addition, prior approaches have typically enforced obstacle avoidance at a finite set of sample or knot points. We introduce a technique based on sums-of-squares (SOS) programming that allows us to ensure that the entire piecewise polynomial trajectory is free of collisions using convex constraints. We demonstrate this technique in 2D and in 3D using a dynamical model in the Drake toolbox for Matlab [2].

Journal ArticleDOI
TL;DR: In this article, the authors address the multistage expansion planning problem of a distribution system where investments in the distribution network and in distributed generation are jointly considered, and the resulting optimization problem is a mixed-integer linear program for which finite convergence to optimality is guaranteed and efficient off-the-shelf software is available.
Abstract: This paper addresses the multistage expansion planning problem of a distribution system where investments in the distribution network and in distributed generation are jointly considered. Network expansion comprises several alternatives for feeders and transformers. Analogously, the installation of distributed generation takes into account several alternatives for conventional and wind generators. Unlike what is customarily done, a set of candidate nodes for generator installation is considered. Thus, the optimal expansion plan identifies the best alternative, location, and installation time for the candidate assets. The model is driven by the minimization of the net present value of the total cost including the costs related to investment, maintenance, production, losses, and unserved energy. The costs of energy losses are modeled by a piecewise linear approximation. As another distinctive feature, radiality conditions are specifically tailored to accommodate the presence of distributed generation in order to avoid the isolation of distributed generators and the issues associated with transfer nodes. The resulting optimization problem is a mixed-integer linear program for which finite convergence to optimality is guaranteed and efficient off-the-shelf software is available. Numerical results illustrate the effective performance of the proposed approach.

Journal ArticleDOI
TL;DR: A new population-based evolutionary algorithm called biogeography-based optimization (BBO) is proposed and the performance of ten types of constraint-handling techniques is evaluated, showing the effectiveness and superiority of the proposed algorithms compared with the other optimization methods presented in the literature.
Abstract: Optimal coordination of directional overcurrent relays (DOCRs) is a highly constrained and nonlinear optimization problem. The operating time of each relay depends on two independent variables called plug setting and time multiplier setting. As the network becomes larger and more complex, the number of relays will increase and, thus, finding the optimal solution becomes very hard. In this paper, a new population-based evolutionary algorithm called biogeography-based optimization (BBO) is proposed and the performance of ten types of constraint-handling techniques is evaluated. In addition, a new hybrid BBO with linear programming (BBO-LP) is proposed to enhance the performance of the conventional BBO algorithm. The performance of the proposed BBO-based algorithms is evaluated by using five test systems. The results show the effectiveness and superiority of the proposed algorithms compared with the performance of the other optimization methods presented in the literature.

Journal ArticleDOI
TL;DR: This work solves a 20-year old problem posed by Yannakakis and proves that no polynomial-size linear program (LP) exists whose associated polytope projects to the traveling salesman polytope, even if the LP is not required to be symmetric.
Abstract: We solve a 20-year old problem posed by Yannakakis and prove that no polynomial-size linear program (LP) exists whose associated polytope projects to the traveling salesman polytope, even if the LP is not required to be symmetric. Moreover, we prove that this holds also for the cut polytope and the stable set polytope. These results were discovered through a new connection that we make between one-way quantum communication protocols and semidefinite programming reformulations of LPs.

Journal ArticleDOI
TL;DR: This paper approximates two-stage robust binary programs by their corresponding K-adaptability problems, in which the decision maker precommits to K second-stage policies here-and-now, and implements the best of these policies once the uncertain parameters are observed.
Abstract: Over the last two decades, robust optimization has emerged as a computationally attractive approach to formulate and solve single-stage decision problems affected by uncertainty. More recently, robust optimization has been successfully applied to multistage problems with continuous recourse. This paper takes a step toward extending the robust optimization methodology to problems with integer recourse, which have largely resisted solution so far. To this end, we approximate two-stage robust binary programs by their corresponding K-adaptability problems, in which the decision maker precommits to K second-stage policies here-and-now, and implements the best of these policies once the uncertain parameters are observed. We study the approximation quality and the computational complexity of the K-adaptability problem, and we propose two mixed-integer linear programming reformulations that can be solved with off-the-shelf software. We demonstrate the effectiveness of our reformulations for stylized instances o...

Proceedings Article
06 Jul 2015
TL;DR: This paper proves that the vanilla FW method converges at a rate of 1/t^2 over strongly convex sets, and shows that various balls induced by lp norms, Schatten norms, and group norms are strongly convex on the one hand, while on the other hand linear optimization over these sets is straightforward and admits a closed-form solution.
Abstract: The Frank-Wolfe method (a.k.a. conditional gradient algorithm) for smooth optimization has regained much interest in recent years in the context of large scale optimization and machine learning. A key advantage of the method is that it avoids projections - the computational bottleneck in many applications - replacing them by a linear optimization step. Despite this advantage, the known convergence rates of the FW method fall behind standard first order methods for most settings of interest. It is an active line of research to derive faster linear optimization-based algorithms for various settings of convex optimization. In this paper we consider the special case of optimization over strongly convex sets, for which we prove that the vanilla FW method converges at a rate of 1/t^2. This gives a quadratic improvement in convergence rate compared to the general case, in which convergence is of the order 1/t, and known to be tight. We show that various balls induced by lp norms, Schatten norms and group norms are strongly convex on the one hand, while on the other hand, linear optimization over these sets is straightforward and admits a closed-form solution. We further show how several previous fast-rate results for the FW method follow easily from our analysis.
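
For concreteness, here is a minimal sketch of the Frank-Wolfe iteration with the kind of closed-form linear optimization step the paper exploits, using the Euclidean ball (an l2-norm ball, which is strongly convex) as the feasible set. The objective and step-size rule are illustrative choices, not the paper's.

```python
import numpy as np

# Minimize f(x) = 0.5 * ||x - c||^2 over the ball {x : ||x|| <= r}.
rng = np.random.default_rng(1)
c = rng.standard_normal(10)
r = 0.5 * np.linalg.norm(c)   # put the unconstrained minimizer outside the ball

x = np.zeros_like(c)
for t in range(1, 200):
    grad = x - c
    # Linear optimization step: argmin_{||s|| <= r} <grad, s> has a closed form.
    s = -r * grad / np.linalg.norm(grad)
    gamma = 2.0 / (t + 2)             # standard Frank-Wolfe step size
    x = x + gamma * (s - x)

print(np.linalg.norm(x), r)                           # x ends up on the ball's boundary
print(np.linalg.norm(x - r * c / np.linalg.norm(c)))  # close to the true optimum
```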

Journal ArticleDOI
TL;DR: 0-1 linear programming formulations exploiting the stated hierarchy are proposed and used to derive a formal proof that the joint OR planning and scheduling problem is NP-hard.

Book
07 Sep 2015
TL;DR: This work considers the problem of input control, subject to a specified product mix, and priority sequencing in a two-station multiclass queueing network with general service time distributions and a general routing structure, and obtains an effective scheduling rule.
Abstract: Motivated by a factory scheduling problem, we consider the problem of input control, subject to a specified product mix, and priority sequencing in a two-station multiclass queueing network with general service time distributions and a general routing structure. The objective is to minimize the long-run expected average number of customers in the system subject to a constraint on the long-run expected average output rate. Under balanced heavy loading conditions, this scheduling problem is approximated by a control problem involving Brownian motion. A reformulation of this Brownian control problem was solved exactly in 1990 by L. M. Wein. In the present paper, this solution is interpreted in terms of the queueing network model in order to obtain an effective scheduling rule. The resulting sequencing policy dynamically prioritizes customers according to reduced costs calculated from a linear program. The input rule is a workload regulating input policy, where a customer is injected into the system whenever the expected total amount of work in the system for the two stations falls within a prescribed region. An example is presented that illustrates the procedure and demonstrates its effectiveness.

Journal ArticleDOI
TL;DR: In this article, three alternative approaches are developed to convert the nonstandard robust optimization problem into linear programming, bilinear programming, and two-stage robust optimization problems, respectively; the linear programming problem is easier to solve than the other two alternatives.
Abstract: Due to the variability of renewable resources, the ISO tries to identify do-not-exceed (DNE) limits, which are the maximum renewable generation ranges that the power system can accommodate without sacrificing system reliability. The problem is formulated as an optimization problem whose objective is to find the largest operating ranges of variable resources such that the system remains feasible under any generation realization within the range. Computing the DNE limits can be conceptually translated into finding the largest uncertainty set of a robust optimization problem. Depending on the assumptions on how the system responds to the uncertainty of renewable resources, three alternative approaches are developed to convert the nonstandard robust optimization problem into linear programming, bilinear programming and two-stage robust optimization problems, respectively. Although the linear programming problem is easier to solve than the other two alternatives, its resulting DNE limits are the most conservative. Therefore, the trade-off needs to be considered when deciding which approach is the most appropriate in real-time operation. A 5-bus system and the ISO New England power system are used to test the proposed approaches.

Proceedings ArticleDOI
07 Dec 2015
TL;DR: This work casts isometric embedding as MRF optimization and applies efficient global optimization algorithms based on linear programming relaxations to solve the challenge of nonrigid registration of 3D surfaces.
Abstract: We present an approach to nonrigid registration of 3D surfaces. We cast isometric embedding as MRF optimization and apply efficient global optimization algorithms based on linear programming relaxations. The Markov random field perspective suggests a natural connection with robust statistics and motivates robust forms of the intrinsic distortion functional. Our approach outperforms a large body of prior work by a significant margin, increasing registration precision on real data by a factor of 3.

Journal ArticleDOI
TL;DR: A novel mixed-integer linear programming (MILP) model for the electric vehicle charging coordination (EVCC) problem in unbalanced electrical distribution systems (EDSs) is presented and it is demonstrated that the model can be used in the solution of the EVCC problem in EDSs.
Abstract: This paper presents a novel mixed-integer linear programming (MILP) model for the electric vehicle charging coordination (EVCC) problem in unbalanced electrical distribution systems (EDSs). Linearization techniques are applied over a mixed-integer nonlinear programming model to obtain the proposed MILP formulation based on current injections. The expressions used to represent the steady-state operation of the EDS take into account a three-phase representation of the circuits, as well as the imbalance of the loads, leading to a more realistic model. Additionally, the proposed formulation considers the presence of distributed generators and operational constraints such as voltage and current magnitude limits. The optimal solution for the mathematical model was found using commercial MILP solvers. The proposed formulation was tested in a distribution system used in the specialized literature. The results show the efficiency and the robustness of the methodology, and also demonstrate that the model can be used in the solution of the EVCC problem in EDSs.

Journal ArticleDOI
TL;DR: A biased random-key genetic algorithm (BRKGA) for the unequal area facility layout problem (UA-FLP) where a set of rectangular facilities with given area requirements has to be placed, without overlapping, on a rectangular floor space is presented.

Journal ArticleDOI
TL;DR: Based on a new lexicographic ordering on triangular fuzzy numbers, a novel algorithm is proposed to solve the fully fuzzy linear programming (FFLP) problem by converting it to an equivalent multi-objective linear programming (MOLP) problem, which is then solved by the lexicographic method.

Journal ArticleDOI
TL;DR: In this paper, two algorithms are presented for equilibrium multi-population matching: a linear programming algorithm and an efficient nonsmooth optimization algorithm that applies in the case of Wasserstein barycenters, where the measures are approximated by discrete measures.
Abstract: Equilibrium multi-population matching (matching for teams) is a problem from mathematical economics which is related to multi-marginal optimal transport. A special but important case is the Wasserstein barycenter problem, which has applications in image processing and statistics. Two algorithms are presented: a linear programming algorithm and an efficient nonsmooth optimization algorithm, which applies in the case of Wasserstein barycenters. The measures are approximated by discrete measures: convergence of the approximation is proved. Numerical results are presented which illustrate the efficiency of the algorithms.
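
As a hedged illustration of the linear programming route for discrete measures, the sketch below solves a single two-marginal optimal transport problem as an LP with scipy; the barycenter and multi-marginal cases in the paper lead to larger LPs of the same flavor. The data are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

# Discrete measures a (n points) and b (m points) with a ground cost matrix C.
a = np.array([0.3, 0.3, 0.4])
b = np.array([0.5, 0.5])
C = np.array([[0.0, 1.0],
              [1.0, 0.0],
              [0.5, 0.5]])
n, m = C.shape

# Variables: the transport plan P (n*m entries, flattened row-major).
# Constraints: row sums of P equal a, column sums equal b, P >= 0.
A_eq = np.zeros((n + m, n * m))
for i in range(n):
    A_eq[i, i * m:(i + 1) * m] = 1.0   # sum_j P[i, j] = a[i]
for j in range(m):
    A_eq[n + j, j::m] = 1.0            # sum_i P[i, j] = b[j]
b_eq = np.concatenate([a, b])

res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
P = res.x.reshape(n, m)
print(P)            # optimal transport plan
print(res.fun)      # optimal transport cost
```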

Journal ArticleDOI
TL;DR: In this paper, the authors define a routing problem called the platooning problem and prove that this problem is NP-hard, even when the graph used to represent the road network is planar.
Abstract: We create a mathematical framework for modeling trucks traveling in road networks, and we define a routing problem called the platooning problem. We prove that this problem is NP-hard, even when the graph used to represent the road network is planar. We present integer linear programming formulations for instances of the platooning problem where deadlines are discarded, which we call the unlimited platooning problem. These allow us to calculate fuel-optimal solutions to the platooning problem for large-scale, real-world examples. The problems solved are orders of magnitude larger than problems previously solved exactly in the literature. We present several heuristics and compare their performance with the optimal solutions on the German Autobahn road network. The proposed heuristics find optimal or near-optimal solutions in most of the problem instances considered, especially when a final local search is applied. Assuming a fuel reduction factor of 10% from platooning, we find fuel savings from platooning of 1–2% for as few as 10 trucks in the road network; the percentage of savings increases with the number of trucks. If all trucks start at the same point, savings of up to 9% are obtained for only 200 trucks.

Journal ArticleDOI
TL;DR: To deal with dynamic events, such as sensor node participation and departure during SDSN operations, an efficient online algorithm using local optimization is developed.
Abstract: After a decade of extensive research on application-specific wireless sensor networks (WSNs), the recent development of information and communication technologies makes it practical to realize software-defined sensor networks (SDSNs), which are able to adapt to various application requirements and to fully explore the resources of WSNs. A sensor node in an SDSN is able to conduct multiple tasks with different sensing targets simultaneously. A given sensing task usually involves multiple sensors to achieve a certain quality-of-sensing, e.g., coverage ratio. It is important to design an energy-efficient sensor scheduling and management strategy with guaranteed quality-of-sensing for all tasks. To this end, three issues are investigated in this paper: 1) the subset of sensor nodes that shall be activated, i.e., sensor activation; 2) the task that each sensor node shall be assigned, i.e., task mapping; and 3) the sampling rate on a sensor for a target, i.e., sensing scheduling. They are jointly considered and formulated as a mixed-integer programming problem with quadratic constraints (MIQP), which is then reformulated via linearization into a mixed-integer linear programming (MILP) formulation with low computational complexity. To deal with dynamic events such as sensor node participation and departure during SDSN operations, an efficient online algorithm using local optimization is developed. Simulation results show that our proposed online algorithm approaches the globally optimized network energy efficiency with much lower rescheduling time and control overhead.
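
One standard ingredient of such MIQP-to-MILP linearizations is the exact replacement of a product between a binary variable and a bounded continuous variable by linear constraints. The block below shows this generic construction with illustrative symbols; it is not necessarily the exact linearization used in the paper.

```latex
% Product z = s * r of a binary s in {0,1} and a continuous 0 <= r <= R
% is replaced exactly by the linear constraints:
\begin{align*}
  z \le R\,s, \qquad
  z \le r, \qquad
  z \ge r - R\,(1 - s), \qquad
  z \ge 0.
\end{align*}
```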

Journal ArticleDOI
TL;DR: In this paper, a distributed SCUC (D-SCUC) algorithm is proposed to accelerate the generation scheduling of large-scale power systems, where a power system is decomposed into several scalable zones which are interconnected through tie lines.
Abstract: Independent system operators (ISOs) of electricity markets solve the security-constrained unit commitment (SCUC) problem to plan a secure and economic generation schedule. However, as the size of power systems increases, the current centralized SCUC algorithm could face critical challenges ranging from modeling accuracy to calculation complexity. This paper presents a distributed SCUC (D-SCUC) algorithm to accelerate the generation scheduling of large-scale power systems. In this algorithm, a power system is decomposed into several scalable zones which are interconnected through tie lines. Each zone solves its own SCUC problem and a parallel calculation method is proposed to coordinate individual D-SCUC problems. Several power systems are studied to show the effectiveness of the proposed algorithm.

Proceedings ArticleDOI
22 Apr 2015
TL;DR: This paper formulates the problem of network function placement and routing as a mixed integer linear programming problem, and develops heuristics to solve the problem incrementally, allowing it to support a large number of flows and to solve the problem for incoming flows without impacting existing flows.
Abstract: The integration of network function virtualization (NFV) and software defined networks (SDN) seeks to create a more flexible and dynamic software-based network environment. The line between entities involved in forwarding and those involved in more complex middle box functionality in the network is blurred by the use of high-performance virtualized platforms capable of performing these functions. A key problem is how and where network functions should be placed in the network and how traffic is routed through them. An efficient placement and appropriate routing increase system capacity while also minimizing the delay seen by flows. In this paper, we formulate the problem of network function placement and routing as a mixed integer linear programming problem. This formulation not only determines the placement of services and routing of the flows, but also seeks to minimize the resource utilization. We develop heuristics to solve the problem incrementally, allowing us to support a large number of flows and to solve the problem for incoming flows without impacting existing flows.

Journal ArticleDOI
TL;DR: A randomized derivative-free method is proposed in which random gradient-free oracles are utilized in each update instead of subgradients (SGs); in contrast to existing work, it does not require that agents be able to compute the SGs of their objective functions.
Abstract: In this brief, we consider multiagent optimization over a network where multiple agents try to minimize a sum of nonsmooth but Lipschitz continuous functions, subject to a convex state constraint set. The underlying network topology is modeled as time varying. We propose a randomized derivative-free method, where in each update random gradient-free oracles are utilized instead of subgradients (SGs). In contrast to the existing work, we do not require that agents are able to compute the SGs of their objective functions. We establish the convergence of the method to an approximate solution of the multiagent optimization problem within an error level depending on the smoothing parameter and the Lipschitz constant of each agent's objective function. Finally, a numerical example is provided to demonstrate the effectiveness of the method.
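
A common form of the random gradient-free oracle mentioned above is a two-point Gaussian-smoothing estimate, in the style of Nesterov's random gradient-free methods. The sketch below illustrates it on a single nonsmooth function with a projection onto a box constraint; it is a single-agent illustration only, and the network averaging, time-varying topology, and exact oracle of the paper are not reproduced.

```python
import numpy as np

def f(x):
    """A nonsmooth but Lipschitz objective (illustrative)."""
    return np.sum(np.abs(x - 1.0))

def gradient_free_oracle(f, x, mu, rng):
    """Two-point Gaussian-smoothing estimate of a (sub)gradient of f at x."""
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x)) / mu * u

rng = np.random.default_rng(0)
x = np.zeros(5)
lo, hi = -2.0, 2.0          # convex state constraint set: a box (illustrative)
mu = 1e-3                   # smoothing parameter (controls the error level)

for k in range(1, 5001):
    g = gradient_free_oracle(f, x, mu, rng)
    x = x - (0.5 / np.sqrt(k)) * g        # diminishing step size
    x = np.clip(x, lo, hi)                # projection onto the constraint set

print(x)        # approaches the minimizer (all ones) up to the smoothing error
print(f(x))
```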