
Showing papers on "Linear programming" published in 2020


Journal ArticleDOI
TL;DR: A novel biobjective mixed-integer linear programming (MILP) model is proposed for FSS with an outsourcing option and just-in-time delivery in order to simultaneously minimize the total cost of the production system and total energy consumption.
Abstract: The flow shop scheduling (FSS) problem constitutes a major part of production planning in every manufacturing organization. It aims at determining the optimal sequence for processing jobs on the available machines within a given customer order. In this article, a novel biobjective mixed-integer linear programming (MILP) model is proposed for FSS with an outsourcing option and just-in-time delivery, in order to simultaneously minimize the total cost of the production system and the total energy consumption. Each job is either scheduled in-house or outsourced to one of the possible subcontractors. To solve the problem efficiently, a hybrid technique is proposed based on an interactive fuzzy solution technique and a self-adaptive artificial fish swarm algorithm (SAAFSA). The proposed model is treated as a single-objective MILP using a multiobjective fuzzy mathematical programming technique based on the ϵ-constraint, and SAAFSA is then applied to provide Pareto optimal solutions. The obtained results demonstrate the usefulness of the suggested methodology and the high efficiency of the algorithm in comparison with the CPLEX solver on different problem instances. Finally, a sensitivity analysis of the main parameters is performed to study the behavior of the objectives under real-world conditions.
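A minimal sketch of the ϵ-constraint reduction used above, with PuLP and toy data: cost stays as the objective, total energy is capped by ϵ, and sweeping ϵ traces an approximate Pareto front. The job set, the cost/energy numbers, and the reduction to a pure in-house/outsource choice are hypothetical simplifications, not the paper's full scheduling model.

```python
# Hypothetical epsilon-constraint sketch: minimize total cost while capping
# total energy, then sweep the cap to trace a Pareto front.
import pulp

def solve_eps_constraint(costs, energies, eps):
    """costs[j]/energies[j]: (in-house, outsourced) figures for job j."""
    n = len(costs)
    prob = pulp.LpProblem("fss_eps", pulp.LpMinimize)
    y = [pulp.LpVariable(f"y_{j}", cat="Binary") for j in range(n)]  # 1 = outsource j
    cost = pulp.lpSum(costs[j][0] * (1 - y[j]) + costs[j][1] * y[j] for j in range(n))
    energy = pulp.lpSum(energies[j][0] * (1 - y[j]) + energies[j][1] * y[j] for j in range(n))
    prob += cost                 # primary objective: total cost
    prob += energy <= eps        # epsilon-constraint on the second objective
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.value(cost), pulp.value(energy)

costs = [(4, 6), (5, 3), (7, 8)]       # toy (in-house, outsourced) costs
energies = [(9, 2), (6, 4), (5, 5)]    # toy (in-house, outsourced) energy use
for eps in (20, 15, 11):
    print(eps, solve_eps_constraint(costs, energies, eps))
```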

123 citations


Journal ArticleDOI
TL;DR: This study presents a probabilistic transmission expansion planning model incorporating distributed series reactors, which are aimed at improving network flexibility, and utilises the Monte Carlo simulation method to take into account the uncertainty of wind generation and demand.
Abstract: This study presents a probabilistic transmission expansion planning model incorporating distributed series reactors, which are aimed at improving network flexibility. Although the whole problem is a mixed-integer non-linear programming problem, this study proposes an approximation method to linearise it within the structure of the Benders decomposition (BD) algorithm. In the first stage of the BD algorithm, the optimal numbers of new transmission lines and distributed series reactors are determined. In the second stage, the developed optimal power flow problem, as a linear sub-problem, is solved for different scenarios of uncertainties and a set of probable contingencies. Benders cuts are iteratively added to the first-stage problem to decrease the optimality gap below a given threshold. The proposed model utilises the Monte Carlo simulation method to take into account the uncertainty of wind generation and demand. Several case studies on three test systems are presented to validate the efficacy of the proposed approach.
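The BD loop described above follows the classical pattern: a master problem carries the investment variables plus an epigraph variable for the second-stage cost, and dual information from the sub-problem generates cuts. A deliberately tiny, self-contained sketch of that loop (a one-dimensional capacity/shortfall toy with a closed-form sub-problem, not the paper's planning model):

```python
import pulp

d, f, c = 10, 3, 5   # toy demand, build cost per unit, shortfall penalty per unit

# Master: choose integer capacity y and an estimate theta of the recourse cost.
master = pulp.LpProblem("master", pulp.LpMinimize)
y = pulp.LpVariable("y", lowBound=0, upBound=20, cat="Integer")
theta = pulp.LpVariable("theta", lowBound=0)
master += f * y + theta

for it in range(20):
    master.solve(pulp.PULP_CBC_CMD(msg=False))
    y_hat = y.value()
    q = c * max(0.0, d - y_hat)          # sub-problem: recourse cost at y_hat
    lam = c if d - y_hat > 0 else 0.0    # optimal dual of the shortfall constraint
    if theta.value() >= q - 1e-6:        # optimality gap closed
        break
    master += theta >= lam * (d - y)     # Benders optimality cut
print("capacity:", y_hat, "total cost:", pulp.value(master.objective))
```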

123 citations


Proceedings ArticleDOI
19 Jul 2020
TL;DR: An improved optimization algorithm is proposed that uses the benefits of multiple differential evolution operators, with more emphasis placed on the best-performing operator; its results outperform both single-operator-based and other state-of-the-art algorithms.
Abstract: In recent years, several multi-method and multi-operator-based algorithms have been proposed for solving optimization problems. Generally, their performance is better than that of algorithms based on a single operator and/or algorithm. However, they do not perform consistently well over all the problems tested in the literature. In this paper, we propose an improved optimization algorithm that uses the benefits of multiple differential evolution operators, with more emphasis placed on the best-performing operator. The performance of the proposed algorithm is tested by solving 10 problems with 5, 10, 15 and 20 dimensions taken from the CEC2020 competition on single-objective bound-constrained optimization, and its results outperform both single-operator-based and other state-of-the-art algorithms.
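The core mechanism, several DE operators sharing one population with selection probabilities tilted toward whichever operator has recently helped most, can be sketched in a few lines. The credit-assignment rule and the sphere test function below are hypothetical stand-ins, far simpler than the paper's algorithm and the CEC2020 test suite:

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    return np.sum(x * x, axis=-1)   # toy unconstrained objective

def de_trials(pop, fit, op, F=0.7, CR=0.9):
    """Generate one trial per individual with the chosen DE operator."""
    n, dim = pop.shape
    best = pop[np.argmin(fit)]
    trials = pop.copy()
    for i in range(n):
        r1, r2, r3 = rng.choice(n, 3, replace=False)
        if op == 0:   # DE/rand/1
            v = pop[r1] + F * (pop[r2] - pop[r3])
        else:         # DE/current-to-best/1
            v = pop[i] + F * (best - pop[i]) + F * (pop[r1] - pop[r2])
        cross = rng.random(dim) < CR
        trials[i] = np.where(cross, v, pop[i])
    return trials

pop = rng.uniform(-5, 5, (40, 10))
fit = sphere(pop)
probs, reward = np.array([0.5, 0.5]), np.array([1.0, 1.0])
for gen in range(200):
    op = rng.choice(2, p=probs)            # operator picked with adaptive bias
    trials = de_trials(pop, fit, op)
    tfit = sphere(trials)
    better = tfit < fit
    reward[op] = 0.7 * reward[op] + 0.3 * float(np.sum(fit[better] - tfit[better]))
    pop[better], fit = trials[better], np.minimum(fit, tfit)
    probs = 0.1 + 0.8 * reward / reward.sum()   # emphasize the better operator,
    probs /= probs.sum()                        # but never starve the other one
print("best fitness:", fit.min())
```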

119 citations


Journal ArticleDOI
TL;DR: This paper addresses a class of expensive data-driven constrained multiobjective combinatorial optimization problems, where the objectives and constraints can be calculated only on the basis of a large amount of data.
Abstract: Many real-world optimization problems can be solved by using the data-driven approach only, simply because no analytic objective functions are available for evaluating candidate solutions. In this paper, we address a class of expensive data-driven constrained multiobjective combinatorial optimization problems, where the objectives and constraints can be calculated only on the basis of a large amount of data. To solve this class of problems, we propose using random forests (RFs) and radial basis function networks as surrogates to approximate both objective and constraint functions. In addition, logistic regression models are introduced to rectify the surrogate-assisted fitness evaluations and a stochastic ranking selection is adopted to further reduce the influences of the approximated constraint functions. Three variants of the proposed algorithm are empirically evaluated on multiobjective knapsack benchmark problems and two real-world trauma system design problems. Experimental results demonstrate that the variant using RF models as the surrogates is effective and efficient in solving data-driven constrained multiobjective combinatorial optimization problems.
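Two ingredients from the abstract, RF surrogates for expensive functions and stochastic ranking to soften approximated constraints, can be sketched together. The toy objective/violation functions and all parameters below are stand-ins; the stochastic-ranking sweep follows Runarsson and Yao's bubble-sort scheme:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

def stochastic_ranking(obj, viol, pf=0.45, tol=1e-6):
    """Stochastic-ranking bubble sort over predicted objectives and violations."""
    idx = list(range(len(obj)))
    for _ in range(len(idx)):
        swapped = False
        for i in range(len(idx) - 1):
            a, b = idx[i], idx[i + 1]
            both_feasible = viol[a] <= tol and viol[b] <= tol
            by_objective = both_feasible or rng.random() < pf
            key = obj if by_objective else viol
            if key[a] > key[b]:
                idx[i], idx[i + 1] = b, a
                swapped = True
        if not swapped:
            break
    return idx

# Surrogate-assisted evaluation: fit RFs on an expensively evaluated archive,
# then rank new candidates by their *predicted* objective and violation.
X = rng.random((200, 8))
f_true = X.sum(1)                               # toy objective
g_true = np.maximum(0, 0.5 - X[:, 0])           # toy constraint violation
rf_f = RandomForestRegressor(n_estimators=50).fit(X, f_true)
rf_g = RandomForestRegressor(n_estimators=50).fit(X, g_true)
cand = rng.random((30, 8))
order = stochastic_ranking(rf_f.predict(cand), rf_g.predict(cand))
print("best predicted candidate:", cand[order[0]].round(2))
```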

109 citations


Journal ArticleDOI
TL;DR: It is concluded that the solution techniques yield high-quality solutions, with NSGA-II being the most efficient solution tool. The optimal route planning of the case-study problem in the delivery and pick-up phases is attained using the best-found Pareto solution, and the largest change in the objective function occurs in the total cost when the demand parameter is increased by 20%.

105 citations


Journal ArticleDOI
TL;DR: This work presents a new distributionally robust optimization model called robust stochastic optimization (RSO), which unifies both scenario-tree-based stochastic linear optimization and distributionally robust linear optimization in a single model.
Abstract: We present a new distributionally robust optimization model called robust stochastic optimization (RSO), which unifies both scenario-tree-based stochastic linear optimization and distributionally robust optimization in a single model. …

104 citations


Journal Article
TL;DR: Stochastic conditional gradient methods are proposed as an alternative: gradients are approximated via a simple averaging technique requiring a single stochastic gradient evaluation per iteration, and replacing the projection step of proximal methods by a linear program lowers the computational complexity of each iteration.
Abstract: This paper considers stochastic optimization problems for a large class of objective functions, including convex and continuous submodular ones. Stochastic proximal gradient methods have been widely used to solve such problems; however, their applicability remains limited when the problem dimension is large and the projection onto a convex set is costly. Instead, stochastic conditional gradient methods are proposed as an alternative solution relying on (i) approximating gradients via a simple averaging technique requiring a single stochastic gradient evaluation per iteration; (ii) solving a linear program to compute the descent/ascent direction. The averaging technique reduces the noise of gradient approximations as time progresses, and replacing the projection step in proximal methods by a linear program lowers the computational complexity of each iteration. We show that under convexity and smoothness assumptions, our proposed method converges to the optimal objective function value at a sublinear rate of $O(1/t^{1/3})$. Further, for a monotone and continuous DR-submodular function and subject to a general convex body constraint, we prove that our proposed method achieves a $((1-1/e)\mathrm{OPT}-\epsilon)$ guarantee with $O(1/\epsilon^3)$ stochastic gradient computations. This guarantee matches the known hardness results and closes the gap between deterministic and stochastic continuous submodular maximization. Additionally, we obtain a $((1/e)\mathrm{OPT}-\epsilon)$ guarantee after using $O(1/\epsilon^3)$ stochastic gradients for the case that the objective function is continuous DR-submodular but non-monotone and the constraint set is down-closed. By using stochastic continuous optimization as an interface, we provide the first $(1-1/e)$ tight approximation guarantee for maximizing a monotone but stochastic submodular set function subject to a matroid constraint and a $(1/e)$ approximation guarantee for the non-monotone case.
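The two ingredients named in (i) and (ii) fit in a short loop: average the noisy gradients so the noise decays, then call an LP oracle instead of projecting. A toy convex instance over the probability simplex (the step-size schedules are illustrative, not the paper's exact constants):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Toy instance: minimize E_z ||x - z||^2 over the probability simplex, observing
# one noisy sample z per iteration; the optimum is x* = z_star (inside the simplex).
z_star, sigma, dim = np.array([0.6, 0.3, 0.1]), 0.5, 3
x, d = np.ones(dim) / dim, np.zeros(dim)
A_eq, b_eq = np.ones((1, dim)), np.array([1.0])

for t in range(1, 501):
    z = z_star + sigma * rng.standard_normal(dim)
    grad = 2 * (x - z)                       # single stochastic gradient per iteration
    rho = t ** (-2 / 3)
    d = (1 - rho) * d + rho * grad           # averaged gradient: noise decays over time
    # LP oracle: v = argmin <d, v> over the simplex (replaces the projection step).
    v = linprog(c=d, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * dim).x
    x = x + (v - x) / (t + 1)                # conditional-gradient step toward the vertex
print("iterate:", x.round(3), "optimum:", z_star)
```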

99 citations


Posted Content
TL;DR: This work analyzes two approaches for learning in Constrained Markov Decision Processes and highlights a crucial difference between them: the linear programming approach results in stronger guarantees than the dual-formulation-based approach.
Abstract: In many sequential decision-making problems, the goal is to optimize a utility function while satisfying a set of constraints on different utilities. This learning problem is formalized through Constrained Markov Decision Processes (CMDPs). In this paper, we investigate the exploration-exploitation dilemma in CMDPs. While learning in an unknown CMDP, an agent should trade off exploration to discover new information about the MDP against exploitation of the current knowledge to maximize the reward while satisfying the constraints. While the agent will eventually learn a good or optimal policy, we do not want the agent to violate the constraints too often during the learning process. In this work, we analyze two approaches for learning in CMDPs. The first approach leverages the linear formulation of CMDPs to perform optimistic planning at each episode. The second approach leverages the dual formulation (or saddle-point formulation) of CMDPs to perform incremental, optimistic updates of the primal and dual variables. We show that both achieve sublinear regret w.r.t. the main utility while having sublinear regret on the constraint violations. That being said, we highlight a crucial difference between the two approaches: the linear programming approach results in stronger guarantees than the dual-formulation-based approach.
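The "linear formulation of CMDPs" that the first approach builds on is the classical occupancy-measure LP: maximize expected reward over discounted state-action frequencies subject to flow-conservation constraints and a cost budget. A toy sketch with made-up numbers (the paper's setting adds exploration machinery on top of this):

```python
import numpy as np
from scipy.optimize import linprog

# Tiny discounted CMDP solved as an LP over occupancy measures d[s, a].
nS, nA, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # transition kernel P[s, a, s']
R = rng.random((nS, nA))                        # reward
C = 0.5 * rng.random((nS, nA))                  # constraint cost (toy scale)
mu0 = np.ones(nS) / nS                          # initial distribution
cost_budget = 3.0                               # toy budget on discounted cost

# Flow constraints: sum_a d[s',a] = mu0[s'] + gamma * sum_{s,a} P[s,a,s'] d[s,a].
A_eq = np.zeros((nS, nS * nA))
for sp in range(nS):
    for s in range(nS):
        for a in range(nA):
            A_eq[sp, s * nA + a] -= gamma * P[s, a, sp]
    for a in range(nA):
        A_eq[sp, sp * nA + a] += 1.0

res = linprog(c=-R.ravel(),                     # maximize reward
              A_ub=C.ravel()[None, :], b_ub=[cost_budget],
              A_eq=A_eq, b_eq=mu0, bounds=(0, None))
assert res.status == 0
d = res.x.reshape(nS, nA)
policy = d / d.sum(axis=1, keepdims=True)       # optimal (possibly stochastic) policy
print(policy.round(3))
```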

94 citations


Journal ArticleDOI
TL;DR: This paper investigates an energy-efficient hybrid flowshop scheduling problem with the consideration of machines with different energy usage ratios, sequence-dependent setups, and machine-to-machine transportation operations with a three-stage multiobjective approach based on decomposition (TMOA/D).
Abstract: This paper investigates an energy-efficient hybrid flowshop scheduling problem that considers machines with different energy usage ratios, sequence-dependent setups, and machine-to-machine transportation operations. To minimize the makespan and total energy consumption simultaneously, a mixed-integer linear programming (MILP) model is developed. To solve this problem, a three-stage multiobjective approach based on decomposition (TMOA/D) is suggested, in which each solution is bound to a main weight vector and a set of its neighbors. Accordingly, a variable direction strategy is developed to ensure that each solution is thoroughly exploited along its main direction and can jump to neighboring directions using a proximity principle. To ensure an active schedule when assigning jobs to machines, a two-level solution representation is employed. In the first phase, each solution attempts to improve itself along its current weight vector through a developed neighborhood-based local search. In the second phase, the promising solutions are selected through the technique for order preference by similarity to an ideal solution (TOPSIS). Then, they attempt to update themselves with a proposed global replacement strategy by incorporating their nearby solutions. In the third phase, a solution undergoes a large perturbation once it has gone through all its assigned weight vectors. Extensive experiments are conducted to test the performance of TMOA/D, and the results demonstrate that TMOA/D has very competitive performance.
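TMOA/D's weight vectors follow the standard decomposition idea: each direction turns the biobjective problem into one scalar subproblem, and neighboring directions define neighboring subproblems. A minimal sketch of the standard Tchebycheff scalarization (notation and numbers hypothetical; the paper's variable direction strategy builds on this):

```python
import numpy as np

def tchebycheff(f, lam, z_star):
    """Scalar fitness of objective vector f along direction lam (z_star = ideal point)."""
    return float(np.max(lam * np.abs(np.asarray(f) - np.asarray(z_star))))

# A spread of main weight vectors for two objectives, e.g. (makespan, energy).
lams = np.array([[w, 1 - w] for w in np.linspace(0.05, 0.95, 10)])
f, z = [3.0, 1.5], [0.0, 0.0]
print([round(tchebycheff(f, lam, z), 2) for lam in lams])
```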

90 citations


Journal ArticleDOI
TL;DR: The results showed that the proposed methods significantly improved the efficiency and performance of certain classifiers, such as k-Nearest Neighbor, Support Vector Machine, and neural networks.

85 citations


Journal ArticleDOI
TL;DR: A data-driven two-stage robust stochastic programming model for energy hub capacity planning with a distributional robustness guarantee is proposed; it is transformed into an equivalent convex program with a nonlinear objective and linear constraints, and is solved by an outer-approximation algorithm that entails solving only linear programs.
Abstract: Cascaded utilization of natural gas, electric power, and heat could leverage synergetic effects among these energy resources, precipitating the advent of integrated energy systems. In such infrastructures, the energy hub is an interface among different energy systems, playing the role of energy production, conversion, and storage. The capacity of the energy hub largely determines how tightly these energy systems are coupled and how flexibly the whole system behaves. This paper proposes a data-driven two-stage robust stochastic programming model for energy hub capacity planning with a distributional robustness guarantee. Renewable generation and load uncertainties are modelled by a family of ambiguous probability distributions near an empirical distribution in the sense of the Kullback–Leibler (KL) divergence measure. The objective is to minimize the sum of the construction cost and the expected life-cycle operating cost under the worst-case distribution restricted to the ambiguity set. Network energy flow in normal operating conditions is considered; demand supply reliability in extreme conditions is taken into account via robust chance constraints. Through duality theory and sample average approximation, the proposed model is transformed into an equivalent convex program with a nonlinear objective and linear constraints, and is solved by an outer-approximation algorithm that entails solving only linear programs. Case studies demonstrate the effectiveness of the proposed model and method.
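The distributional robustness over a KL ball typically rests on the following standard duality, sketched here in generic notation; the paper combines it with two-stage structure, robust chance constraints, and sample average approximation:

```latex
\sup_{P:\;D_{\mathrm{KL}}(P\,\|\,\hat{P})\,\le\,\eta}\ \mathbb{E}_{P}\big[\ell(x,\xi)\big]
\;=\;\min_{\alpha>0}\ \alpha\log\mathbb{E}_{\hat{P}}\big[e^{\ell(x,\xi)/\alpha}\big]+\alpha\,\eta
```

With $\hat{P}$ an empirical distribution, the log-expectation becomes a finite log-sum-exp, convex whenever $\ell(\cdot,\xi)$ is, which is how a worst-case expectation turns into the equivalent convex program mentioned above.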

Journal ArticleDOI
TL;DR: This paper incorporates several local searches into an existing IMOEA and proposes a memetic algorithm (MA) to tackle IMOPs; experimental results demonstrate the applicability and effectiveness of the proposed MA.
Abstract: Among the most important and widely faced optimization problems in real applications are interval multiobjective optimization problems (IMOPs). The state-of-the-art evolutionary algorithms (EAs) for IMOPs (IMOEAs) need a great number of objective function evaluations to find a final Pareto front with good convergence and even distribution, and the final Pareto front carries considerable uncertainty. In this paper, we incorporate several local searches into an existing IMOEA and propose a memetic algorithm (MA) to tackle IMOPs. At the start, the existing IMOEA is utilized to explore the entire decision space; then, the increment of the hypervolume is employed to develop an activation strategy for every local search procedure; finally, each local search procedure is conducted by constituting its initial population, whose center is an individual with small uncertainty and a large contribution to the hypervolume, taking the contribution of an individual to the hypervolume as the fitness function, and performing the conventional genetic operators. The proposed MA is empirically evaluated on ten benchmark IMOPs as well as an uncertain solar desalination optimization problem, and is compared with three state-of-the-art algorithms with no local search procedure. The experimental results demonstrate the applicability and effectiveness of the proposed MA.

Journal ArticleDOI
TL;DR: To achieve the DOC for linear multiagent systems with unmeasurable states, an observer-based event-triggered control law is proposed and it is proved that no Zeno behavior is exhibited and the global asymptotic convergence is preserved.
Abstract: This note considers the distributed optimal coordination (DOC) problem for heterogeneous linear multiagent systems. The local gradients are locally Lipschitz and the local convexity constants are unknown. A control law is proposed to drive the states of all agents to the optimal coordination that minimizes a global objective function. By exploring certain features of the invariant projection of the Laplacian matrix, the global asymptotic convergence is guaranteed utilizing only local interaction. The proposed control law is then extended with event-triggered communication schemes, which removes the requirement for continuous communications. Under the event-triggered control law, it is proved that no Zeno behavior is exhibited and the global asymptotic convergence is preserved. The proposed control laws are fully distributed, in the sense that the control design only uses the information in the connected neighborhood. Furthermore, to achieve the DOC for linear multiagent systems with unmeasurable states, an observer-based event-triggered control law is proposed. A simulation example is given to validate the proposed control laws.

Journal ArticleDOI
03 Apr 2020
TL;DR: This work enables decision-focused learning for the broad class of problems that can be encoded as a Mixed Integer Linear Program (MIP), hence supporting arbitrary linear constraints over discrete and continuous variables.
Abstract: Machine learning components commonly appear in larger decision-making pipelines; however, the model training process typically focuses only on a loss that measures average accuracy between predicted values and ground truth values. Decision-focused learning explicitly integrates the downstream decision problem when training the predictive model, in order to optimize the quality of decisions induced by the predictions. It has been successfully applied to several limited combinatorial problem classes, such as those that can be expressed as linear programs (LP), and submodular optimization. However, these previous applications have uniformly focused on problems with simple constraints. Here, we enable decision-focused learning for the broad class of problems that can be encoded as a mixed integer linear program (MIP), hence supporting arbitrary linear constraints over discrete and continuous variables. We show how to differentiate through a MIP by employing a cutting planes solution approach, an algorithm that iteratively tightens the continuous relaxation by adding constraints removing fractional solutions. We evaluate our new end-to-end approach on several real world domains and show that it outperforms the standard two phase approaches that treat prediction and optimization separately, as well as a baseline approach of simply applying decision-focused learning to the LP relaxation of the MIP. Lastly, we demonstrate generalization performance in several transfer learning tasks.

Proceedings ArticleDOI
09 Jul 2020
TL;DR: This work proposes a new framework of CE for extracting an action by evaluating its reality on the empirical data distribution, based on the Mahalanobis distance and the local outlier factor, together with a mixed-integer linear optimization approach to extract an optimal action by minimizing the cost function.
Abstract: Counterfactual Explanation (CE) is one of the post-hoc explanation methods that provides a perturbation vector so as to alter the prediction result obtained from a classifier. Users can directly interpret the perturbation as an "action" for obtaining their desired decision results. However, an action extracted by existing methods often becomes unrealistic for users because these methods do not adequately account for the characteristics of the empirical data distribution, such as feature correlations and outlier risk. To suggest an executable action to users, we propose a new framework of CE that extracts an action by evaluating its reality on the empirical data distribution. The key idea of our proposed method is to define a new cost function based on the Mahalanobis distance and the local outlier factor. Then, we propose a mixed-integer linear optimization approach to extracting an optimal action by minimizing our cost function. Through experiments on real datasets, we confirm the effectiveness of our method in comparison with existing methods for CE.
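How such a cost behaves can be seen with two correlated features: the Mahalanobis term makes moves along the correlation cheap and moves against it expensive, while the LOF term penalizes landing in low-density regions. A toy sketch (the weighting lam and the data are hypothetical; the paper embeds these terms in a MILP rather than evaluating them pointwise):

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
# Two strongly correlated features.
X = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], size=500)
VI = np.linalg.inv(np.cov(X, rowvar=False))           # inverse covariance
lof = LocalOutlierFactor(n_neighbors=20, novelty=True).fit(X)

def action_cost(x, x_new, lam=1.0):
    """Mahalanobis length of the action plus an outlier penalty at the target."""
    a = x_new - x
    maha = float(np.sqrt(a @ VI @ a))
    outlier = float(-lof.score_samples(x_new[None])[0])  # higher = more outlying
    return maha + lam * outlier

x = np.array([0.0, 0.0])
print(action_cost(x, np.array([1.0, 0.9])))   # move along the correlation: cheap
print(action_cost(x, np.array([1.0, -0.9])))  # move against it: expensive
```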

Journal ArticleDOI
01 Jun 2020
TL;DR: A robust mixed-integer linear programming model is developed for LNG sales planning over a given time horizon, aiming to minimize the vendor's costs, and a novel metaheuristic, the cuckoo optimization algorithm (COA), is designed to solve the problem efficiently.
Abstract: The steady growth of gas utilization in domestic households, industry, and power plants has gradually turned natural gas into a major source of energy. Supply and transportation planning of liquefied natural gas (LNG) needs close attention from supply chain management to support the significant growth of gas trading. Therefore, this paper presents a robust mixed-integer linear programming model for LNG sales planning over a given time horizon, aiming to minimize the costs of the vendor. Since the manufacturer supply parameter is uncertain in the real world, it is regarded as interval-based uncertain. To validate the model, various illustrative examples are solved using the CPLEX solver of GAMS software under different uncertainty levels. Furthermore, a novel metaheuristic algorithm, namely the cuckoo optimization algorithm (COA), is designed to solve the problem efficiently. The obtained comparison results demonstrate that the proposed COA can generate high-quality solutions. Furthermore, the comparison results of the deterministic and robust models are evaluated, and sensitivity analyses are performed on the main parameters to provide the concluding remarks and managerial insights of the research. Finally, a comparison is made between the total vendor profit and the robustness cost to find the optimal robustness level.
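For interval-based uncertainty, the standard (Soyster-style) robust counterpart simply charges each constraint its worst case; a generic sketch in hypothetical notation, not the paper's exact formulation:

```latex
\sum_j a_j x_j \le b,\qquad a_j \in [\bar{a}_j-\hat{a}_j,\ \bar{a}_j+\hat{a}_j]
\quad\Longrightarrow\quad
\sum_j \bar{a}_j x_j + \sum_j \hat{a}_j\,|x_j| \;\le\; b
```

The absolute values are linearized with auxiliary variables, so the robust counterpart remains a MILP; tightening or relaxing the protection level is what drives the trade-off between vendor profit and robustness cost evaluated at the end of the paper.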

Journal ArticleDOI
TL;DR: Results have verified the effectiveness of the proposed method, which provides efficient bidding curves to the EHO through stochastic management, and show that the proposed strategy can balance the operational cost and service quality via the adjustment of chance constraints.
Abstract: To realize highly efficient energy conversion among multiple energy carriers in a market environment, the hybrid ac/dc microgrid is embedded as the electrical hub for energy hubs (EHs), and a stochastic day-ahead bidding strategy is proposed for the energy hub operators (EHOs). The electricity, heating, and cooling are managed to balance the operational cost and thermal service quality while participating in the day-ahead market and real-time market. The uncertainties of prices, electrical loads, and ambient temperature are depicted by scenario trees, and are managed by a two-stage stochastic optimization scheme to minimize the expectation and conditional value-at-risk of the operational cost. This stochastic optimization problem is reformulated as a linear programming (LP) problem under given conditions. In addition, a chance constraint is proposed to relax the quality of thermal services, and a two-stage chance-constrained stochastic program is formulated accordingly. It is further reformulated as a mixed-integer LP problem. Simulations have been carried out on an EH with multiple types of energy generation, conversion, and storage systems. Results have verified the effectiveness of the proposed method, providing efficient bidding curves to the EHO through the stochastic management. Sensitivity analysis shows that the proposed strategy can balance the operational cost and service quality via the adjustment of chance constraints.
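The risk measure in such two-stage models is usually kept linear via the Rockafellar–Uryasev reformulation; a sketch in hypothetical notation, with scenario probabilities $p_s$ and scenario cost $L_s(x)$:

```latex
\mathrm{CVaR}_\beta\big(L(x)\big)\;=\;\min_{\eta,\,u}\ \ \eta+\frac{1}{1-\beta}\sum_s p_s\,u_s
\qquad\text{s.t.}\quad u_s \ge L_s(x)-\eta,\quad u_s \ge 0
```

Minimizing a weighted sum of the expected cost and this CVaR therefore adds only the variables $(\eta, u_s)$ and linear constraints, which is why the bidding problem stays an LP (and a mixed-integer LP once the chance constraints are reformulated).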

Journal ArticleDOI
TL;DR: A mixed-integer linear programming model, which aims to minimize the total cost of the “factory-in-a-box” supply chain, is presented in this study and it is demonstrated that the Evolutionary Algorithm outperforms the other metaheuristic algorithms developed for the model.
Abstract: The “factory-in-a-box” concept involves assembling production modules (i.e., factories) in containers and transporting the containers to different customer locations. Such a concept could be highly effective during emergencies, when there is an urgent demand for products (e.g., the COVID-19 pandemic). The “factory-in-a-box” planning problem can be divided into two sub-problems. The first sub-problem deals with the assignment of raw materials to suppliers, sub-assembly decomposition, assignment of sub-assembly modules to manufacturers, and assignment of tasks to manufacturers. The second sub-problem focuses on the transport of sub-assembly modules between suppliers and manufacturers by assigning vehicles to locations, deciding the order of visits for suppliers, manufacturers, and customers, and selecting the appropriate routes within the transportation network. This study addresses the second sub-problem, which resembles the vehicle routing problem, by developing an optimization model and solution algorithms in order to optimize the “factory-in-a-box” supply chain. A mixed-integer linear programming model, which aims to minimize the total cost of the “factory-in-a-box” supply chain, is presented in this study. CPLEX is used to solve the model to global optimality, while four metaheuristic algorithms, including the Evolutionary Algorithm, Variable Neighborhood Search, Tabu Search, and Simulated Annealing, are employed to solve the model for large-scale problem instances. A set of numerical experiments, conducted for a case study of “factory-in-a-box”, demonstrates that the Evolutionary Algorithm outperforms the other metaheuristic algorithms developed for the model. Some managerial insights are outlined in the numerical experiments as well.

Journal ArticleDOI
TL;DR: This paper studies a class of distributed convex optimization problems over a set of agents, in which each agent only has access to its own local convex objective function and each agent's estimate is restricted by both a coupling linear constraint and individual box constraints.
Abstract: This paper studies a class of distributed convex optimization problems over a set of agents, in which each agent only has access to its own local convex objective function and each agent's estimate is restricted by both a coupling linear constraint and individual box constraints. Our focus is to devise a distributed primal-dual gradient algorithm for solving the problem over a sequence of time-varying general directed graphs. The communications among agents are assumed to be uniformly strongly connected. A column-stochastic mixing matrix and a fixed step-size are applied in the algorithm, which exactly steers all the agents to asymptotically converge to a global optimal solution. Under the standard strong convexity and smoothness assumptions on the objective functions, we show that the distributed algorithm drives the whole network to converge geometrically to an optimal solution of the convex optimization problem, provided that the step-size does not exceed an explicit upper bound. We also give an explicit analysis of the convergence rate of the proposed optimization algorithm. Simulations on economic dispatch problems and demand response problems in power systems are performed to illustrate the effectiveness of the proposed optimization algorithm.
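The role of the column-stochastic mixing matrix is easiest to see in a stripped-down subgradient-push sketch on a directed cycle, where push-sum weights de-bias the mixing. This toy omits the paper's coupling and box constraints and its primal-dual machinery:

```python
import numpy as np

# Agent i privately holds f_i(x) = (x - b_i)^2; the network should agree on the
# minimizer of sum_i f_i, i.e. the mean of b, using only directed communication.
n, T, step = 5, 3000, 0.02
b = np.array([1.0, 3.0, 5.0, 7.0, 9.0])
C = np.zeros((n, n))
for i in range(n):
    C[i, i] = 0.5            # keep half the mass...
    C[(i + 1) % n, i] = 0.5  # ...send half along the directed edge i -> i+1
x, w = np.zeros(n), np.ones(n)
for t in range(T):
    x, w = C @ x, C @ w      # mix along the directed graph (column-stochastic C)
    z = x / w                # push-sum ratio de-biases the directed mixing
    x = x - step * 2 * (z - b)   # local (sub)gradient step on each f_i
print("estimates:", (x / w).round(2), "target:", b.mean())
```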

Journal ArticleDOI
TL;DR: In this paper, the authors consider the problem of collecting fresh data from power-constrained sensors in the industrial Internet of Things (IIoT) network and design scheduling algorithms to approach it.
Abstract: This work is motivated by the need to collect fresh data from power-constrained sensors in an industrial Internet of Things (IIoT) network. A recently proposed metric, the Age of Information (AoI), is adopted to measure data freshness from the perspective of the central controller in the IIoT network. We ask what minimum average AoI the network can achieve and how to design scheduling algorithms that approach it. To answer these questions when the channel states of the network are time-varying and scheduling decisions are restricted by both bandwidth and power consumption constraints, we first decouple the multi-sensor scheduling problem into single-sensor constrained Markov decision processes (CMDPs) by relaxing the hard bandwidth constraint. Next, we exploit the threshold structure of the optimal policy for the decoupled single-sensor CMDP and obtain the optimal solution through linear programming (LP). Finally, an asymptotically optimal truncated policy that satisfies the hard bandwidth constraint is built upon the optimal solutions to the decoupled single-sensor problems. Our investigation shows that to obtain a small average AoI over the network: (1) the scheduler exploits good channels to schedule sensors with limited power; (2) sensors equipped with enough transmission power are updated in a timely manner so that the bandwidth constraint can be satisfied.

Journal ArticleDOI
TL;DR: The comparative results show that the average cost increases only slightly while the deviation cost decreases sharply, which yields robust scheduling of the hub energy system in an uncertain environment.

Journal ArticleDOI
TL;DR: A new TI algorithm is proposed based on measurements from a few line current sensors, together with available pseudo-measurements for nodal power injections; it is able to identify all possible topologies, including radial, loop, and island configurations, which extends the application of TI to identifying switch malfunctions and detecting outages.
Abstract: This study is motivated by the recent advancements in developing non-contact line sensor technologies that come at a low cost, but have limited measurement capabilities. While they are intended to measure current, they cannot measure voltage and power. This poses a challenge to certain distribution system applications, such as topology identification (TI), because they commonly use voltage and power measurements. To address this open problem, a new TI algorithm is proposed based on measurements from a few line current sensors, together with available pseudo-measurements for nodal power injections. A TI problem formulation is first developed in the form of a mixed integer nonlinear program (MINLP). Several reformulation steps are then adopted to tackle the nonlinearities to express the TI problem in the form of a mixed integer linear program (MILP). The proposed method is able to identify all possible topologies, including radial, loop, and island configurations, which extends the application of TI to identify switch malfunctions and to detect outages. In addition, recommendations are made with respect to the number and location of the line current sensors to ensure performance accuracy of the TI method. A novel multi-period TI algorithm is also proposed to use multiple measurement snapshots to improve the TI accuracy and robustness against errors in pseudo-measurements. The effectiveness of the proposed TI algorithms is examined on the IEEE 33-bus test case as well as a test case based on a real-world feeder in Riverside, CA.

Journal ArticleDOI
TL;DR: The proposed solution outperforms a baseline strategy leveraged from an existing work, especially under favorable channel conditions and with sufficient frequency resources; moreover, the downlink sum rate cannot always be enhanced by deploying more UAVs, due to a non-negligible tradeoff between energy/communication resources and co-channel interference.
Abstract: In this paper, we focus on a downlink cellular network, where multiple UAVs serve as aerial basestations to provide wireless connectivity to ground users through a frequency division multiple access (FDMA) scheme. The UAVs are exclusively powered by a wireless charging station located on the ground, following a save-then-transmit protocol. In such a UAV-assisted cellular network, joint optimization of user association, resource allocation, and basestation placement is investigated to maximize the downlink sum rate. The problem is formulated as a mixed-integer optimization problem and is thus challenging to solve. We propose an efficient solution based on alternating optimization, iteratively solving one of the three subproblems (i.e., user association, resource allocation, and basestation placement) with the other two fixed. Specifically, user association is solved as a standard linear programming problem by relaxing the binary association indicators into continuous variables. For basestation placement and resource allocation, we resort to the successive convex optimization technique, which iteratively solves a lower-bound problem. After iteratively solving the three subproblems, we further propose an algorithm based on the penalty method and successive convex optimization to make the association indicators feasibly binary. We conduct comprehensive experiments for the optimal solution to the three subproblems with insightful results. We also show that the optimal downlink sum rate cannot always be enhanced by deploying more UAVs, due to a non-negligible tradeoff between energy/communication resources and co-channel interference. Moreover, the proposed solution outperforms a baseline strategy leveraged from an existing work, especially under favorable channel conditions and with sufficient frequency resources.

Journal ArticleDOI
02 Oct 2020
TL;DR: An efficient quantum procedure for solving the Newton linear systems arising in the classical IPMs, an efficient pure state tomography algorithm, and an analysis of the IPM where the linear systems are solved approximately are presented.
Abstract: We present a quantum interior point method (IPM) for semi-definite programs that has a worst-case running time of $\tilde{O}(n^{2.5}\,\xi^{-2}\,\mu\kappa^3 \log(1/\epsilon))$. The algorithm outputs a pair of matrices $(S, Y)$ that have objective value within $\epsilon$ of the optimal and satisfy the constraints approximately to error $\xi$. The parameter $\mu$ is at most $\sqrt{2}\,n$, while $\kappa$ is an upper bound on the condition number of the intermediate solution matrices arising in the classical IPM. For the case where $\kappa \ll n^{5/6}$, our method provides a significant polynomial speedup over the best-known classical semi-definite program solvers, which have a worst-case running time of $O(n^6)$. For linear programs, our algorithm has a running time of $\tilde{O}(n^{1.5}\,\xi^{-2}\,\mu\kappa^3 \log(1/\epsilon))$ with the same guarantees and with parameter $\mu \le \sqrt{2n}$. Our technical contributions include an efficient quantum procedure for solving the Newton linear systems arising in the classical IPMs, an efficient pure state tomography algorithm, and an analysis of the IPM where the linear systems are solved approximately. Our results pave the way for the development of quantum algorithms with significant polynomial speedups for applications in optimization and machine learning.

Journal ArticleDOI
TL;DR: To model the uncertain parameters in the proposed problem including forecasted active and reactive loads, energy and charging/discharging prices and the output power of vRES, the bounded uncertainty-based robust optimization (BURO) framework is proposed in the next step.
Abstract: This paper presents a robust planning model for distributed battery energy storage systems (DBESSs) from the viewpoint of the distribution system operator (DSO) to increase network flexibility. Initially, the deterministic model of the proposed problem is expressed by minimizing the difference between the DBESS planning, degradation, and operation (charging) costs and the revenue of the DBESS from selling its stored energy, subject to the constraints of the AC power flow equations in the presence of RESs and DBESSs, and the technical limits of the network indexes, variable renewable energy sources (vRESs), and DBESSs. This problem is modeled as a non-linear program (NLP); then, an equivalent linear programming (LP) model is proposed, using a first-order Taylor expansion to linearize the power flow equations and a polygon to linearize the circular inequalities. Also, to model the uncertain parameters in the proposed problem, including the forecasted active and reactive loads, energy and charging/discharging prices, and the output power of vRESs, the bounded uncertainty-based robust optimization (BURO) framework is adopted in the next step. Finally, the proposed scheme is applied to the 19-bus MV CIGRE benchmark grid in GAMS software to investigate the capability and efficiency of the model.
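The "polygon for linearization of circular inequalities" replaces a quadratic limit such as $P^2+Q^2 \le S_{\max}^2$ with a set of half-planes. A small sketch (the number of sides K and the inner-approximation choice are illustrative):

```python
import numpy as np

# Inner polygonal approximation of the disc P^2 + Q^2 <= s_max^2 by K half-planes,
# so the apparent-power limit can enter an LP.
def polygon_cuts(s_max, K=12):
    """Returns (A, b) with A @ [P, Q] <= b approximating the disc of radius s_max."""
    ang = 2 * np.pi * np.arange(K) / K
    A = np.stack([np.cos(ang), np.sin(ang)], axis=1)  # unit outward normals
    b = np.full(K, s_max * np.cos(np.pi / K))         # inscribed polygon's facet offset
    return A, b

A, b = polygon_cuts(s_max=1.0, K=12)
pt = np.array([0.7, 0.6])                 # |pt| ~ 0.92 < 1
print("feasible:", bool(np.all(A @ pt <= b)))
```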

Journal ArticleDOI
TL;DR: This paper is the first attempt to utilize the correlation between constraints and the objective function to keep the balance between them; based on this correlation, a novel constrained optimization evolutionary algorithm is presented.
Abstract: When solving constrained optimization problems by evolutionary algorithms, the core issue is to balance constraints and objective function. This paper is the first attempt to utilize the correlation between constraints and objective function to keep this balance. First of all, the correlation between constraints and objective function is mined and represented by a correlation index. Afterward, a weighted sum updating approach and an archiving and replacement mechanism are proposed to make use of this correlation index to guide the evolution. By the above process, a novel constrained optimization evolutionary algorithm is presented. Experiments on a broad range of benchmark test functions indicate that the proposed method shows better or at least competitive performance against other state-of-the-art methods. Moreover, the proposed method is applied to the gait optimization of humanoid robots.

Journal ArticleDOI
TL;DR: This paper describes how linear programming, second-order cone programming, and semidefinite programming can be used to address a central problem in power systems, the optimal power flow problem, and how convex relaxations of this highly challenging non-convex optimization problem are designed.
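As a flavor of these relaxations, the branch-flow (DistFlow) SOCP relaxes the single nonconvex equality to an inequality; in the usual notation with line flows $P_{ij}, Q_{ij}$, squared current magnitude $\ell_{ij}$, and squared voltage $v_i$:

```latex
\ell_{ij}\;=\;\frac{P_{ij}^2+Q_{ij}^2}{v_i}
\qquad\longrightarrow\qquad
\ell_{ij}\;\ge\;\frac{P_{ij}^2+Q_{ij}^2}{v_i}
```

The relaxed constraint is a rotated second-order cone, and under known sufficient conditions (e.g., on radial networks) the relaxation is exact, so solving the convex program recovers an OPF optimum.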

Journal ArticleDOI
TL;DR: Four iterated greedy algorithm-based methods, which use different neighborhood structures and integrate a variable neighborhood descent method, are developed to solve the scheduling problem of a batch production process, i.e., wire rod and bar rolling, which is modeled by a Petri net (PN).
Abstract: Wire rod and bar rolling is an important batch production process in steel production systems. A scheduling problem originating from this process is studied in this work by considering constraints on sequence-dependent family setup times and release times. Each serial batch to be scheduled contains several jobs, and the number of late jobs within it varies with its start time. First, we model the rolling process using a Petri net (PN), where a so-called rolling transition describes a rolling operation of a batch. The objective of the concerned problem is to determine a firing sequence of all rolling transitions such that the total number of late jobs is minimal. Next, a mixed-integer linear program is formulated based on the PN model. Due to the NP-hardness of the concerned problem, iterated greedy algorithm (IGA)-based methods, which use different neighborhood structures and integrate a variable neighborhood descent method, are developed to obtain near-optimal solutions. To test the accuracy, speed, and stability of the proposed algorithms, we compare their solutions on different-size instances with those of CPLEX (a commercial solver) and four heuristic peers. The results indicate that the proposed algorithms outperform their peers and have great potential to be applied to industrial production process scheduling.
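The iterated greedy skeleton behind such methods is compact: destroy part of the sequence, rebuild it by greedy best insertion, and accept improving candidates. A self-contained toy (sum-of-completion-times objective instead of the paper's late-job count, and no PN model or setup/release constraints):

```python
import random

random.seed(0)
proc = [3, 7, 2, 9, 4, 6]                      # toy processing times

def cost(seq):
    t, total = 0, 0
    for j in seq:
        t += proc[j]
        total += t                             # sum of completion times
    return total

def construct(partial, removed):
    for j in removed:                          # best-insertion of each removed job
        best = min(range(len(partial) + 1),
                   key=lambda p: cost(partial[:p] + [j] + partial[p:]))
        partial.insert(best, j)
    return partial

seq = list(range(len(proc)))
best = seq[:]
for it in range(200):
    removed = random.sample(seq, 2)            # destruction phase
    partial = [j for j in seq if j not in removed]
    cand = construct(partial, removed)         # greedy reconstruction phase
    if cost(cand) <= cost(seq):                # simple acceptance rule
        seq = cand
        if cost(seq) < cost(best):
            best = seq[:]
print(best, cost(best))
```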

Book ChapterDOI
21 Jul 2020
TL;DR: In this paper, a new set representation called ImageStar is proposed to detect and prove the absence of bounded adversarial attacks, which can then be used to evaluate the effectiveness of neural network training methodology.
Abstract: Convolutional Neural Networks (CNN) have redefined state-of-the-art in many real-world applications, such as facial recognition, image classification, human pose estimation, and semantic segmentation. Despite their success, CNNs are vulnerable to adversarial attacks, where slight changes to their inputs may lead to sharp changes in their output in even well-trained networks. Set-based analysis methods can detect or prove the absence of bounded adversarial attacks, which can then be used to evaluate the effectiveness of neural network training methodology. Unfortunately, existing verification approaches have limited scalability in terms of the size of networks that can be analyzed. In this paper, we describe a set-based framework that successfully deals with real-world CNNs, such as VGG16 and VGG19, that have high accuracy on ImageNet. Our approach is based on a new set representation called the ImageStar, which enables efficient exact and over-approximative analysis of CNNs. ImageStars perform efficient set-based analysis by combining operations on concrete images with linear programming (LP). Our approach is implemented in a tool called NNV, and can verify the robustness of VGG networks with respect to a small set of input states, derived from adversarial attacks, such as the DeepFool attack. The experimental results show that our approach is less conservative and faster than existing zonotope and polytope methods.
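The LP core of such star-set analysis fits in a few lines: a set $\{x = c + V\alpha : A\alpha \le b\}$ pushed through an affine layer stays a star with the same constraints on $\alpha$, and checking a linear output property is one LP. A toy sketch with a single affine layer (real ImageStars add ReLU splitting and over-approximation):

```python
import numpy as np
from scipy.optimize import linprog

# Star set {x = c + V @ alpha : A @ alpha <= b}: here an L-inf ball of radius 0.1.
c = np.array([1.0, 2.0])                  # center (e.g., a flattened image patch)
V = np.eye(2)                             # basis of the perturbation directions
A = np.vstack([np.eye(2), -np.eye(2)])    # |alpha_i| <= 0.1 (attack budget)
b = np.full(4, 0.1)

W, bias = np.array([[1.0, -1.0]]), np.array([0.5])   # affine layer y = W x + bias
c2, V2 = W @ c + bias, W @ V                          # image star after the layer

# Property "y >= 0 everywhere on the set" fails iff the minimum is negative:
res = linprog(c=V2.ravel(), A_ub=A, b_ub=b, bounds=(None, None))
y_min = c2[0] + res.fun
print("verified" if y_min >= 0 else "counterexample exists", y_min)
```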

Proceedings Article
05 Jan 2020
TL;DR: This paper shows that one can also settle the deterministic setting by derandomizing Cohen et al.'s $\tilde{O}(n^\omega \log(n/\delta))$ time algorithm, and proposes a new data-structure that can maintain the solution to a linear system in subquadratic time.
Abstract: Interior point algorithms for solving linear programs have been studied extensively for a long time [e.g. Karmarkar 1984; Lee, Sidford FOCS'14; Cohen, Lee, Song STOC'19]. For linear programs of the form $\min_{Ax=b,\,x\ge 0} c^\top x$ with $n$ variables and $d$ constraints, the generic case $d = \Omega(n)$ has recently been settled by Cohen, Lee and Song [STOC'19]. Their algorithm can solve linear programs in $O(n^\omega \log(n/\Delta))$ expected time, where $\Delta$ is the relative accuracy. This is essentially optimal, as all known linear system solvers require up to $O(n^\omega)$ time for solving $Ax = b$. However, for the case of deterministic solvers, the best upper bound is Vaidya's 30-year-old $O(n^{2.5} \log(n/\Delta))$ bound [FOCS'89]. In this paper we show that one can also settle the deterministic setting by derandomizing Cohen et al.'s $O(n^\omega \log(n/\Delta))$ time algorithm. This allows for a strict $O(n^\omega \log(n/\Delta))$ time bound, instead of an expected one, and a simplified analysis, reducing the length of their proof of the central path method by roughly half. Derandomizing this algorithm was also an open question asked in Song's PhD thesis. The main tool to achieve our result is a new data-structure that can maintain the solution to a linear system in subquadratic time. More accurately, we are able to maintain $\sqrt{U}A^\top(AUA^\top)^{-1}A\sqrt{U}\,v$ in subquadratic time under $\ell_2$ multiplicative changes to the diagonal matrix $U$ and the vector $v$. This type of change is common for interior point algorithms. Previous algorithms [e.g. Vaidya STOC'89; Lee, Sidford FOCS'15; Cohen, Lee, Song STOC'19] required $\Omega(n^2)$ time for this task. In [Cohen, Lee, Song STOC'19] they managed to maintain the matrix $\sqrt{U}A^\top(AUA^\top)^{-1}A\sqrt{U}$ in subquadratic time, but multiplying it with a dense vector to solve the linear system still required $\Omega(n^2)$ time. To improve the complexity of their linear program solver, they restricted the solver to only multiply sparse vectors via a random sampling argument. In comparison, our data-structure maintains the entire product $\sqrt{U}A^\top(AUA^\top)^{-1}A\sqrt{U}\,v$ in addition to just the matrix. Interestingly, this can be viewed as a simple modification of Cohen et al.'s data-structure, but it significantly simplifies their analysis of the central path method and makes the whole algorithm deterministic.