
Showing papers on "Heuristic (computer science) published in 2017"


Proceedings Article
04 Dec 2017
TL;DR: This paper proposes a unique combination of reinforcement learning and graph embedding in which the learned greedy policy behaves like a meta-algorithm that incrementally constructs a solution, with each action determined by the output of a graph embedding network capturing the current state of the solution.
Abstract: The design of good heuristics or approximation algorithms for NP-hard combinatorial optimization problems often requires significant specialized knowledge and trial-and-error. Can we automate this challenging, tedious process, and learn the algorithms instead? In many real-world applications, it is typically the case that the same optimization problem is solved again and again on a regular basis, maintaining the same problem structure but differing in the data. This provides an opportunity for learning heuristic algorithms that exploit the structure of such recurring problems. In this paper, we propose a unique combination of reinforcement learning and graph embedding to address this challenge. The learned greedy policy behaves like a meta-algorithm that incrementally constructs a solution, and the action is determined by the output of a graph embedding network capturing the current state of the solution. We show that our framework can be applied to a diverse range of optimization problems over graphs, and learns effective algorithms for the Minimum Vertex Cover, Maximum Cut and Traveling Salesman problems.
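As a rough illustration of the greedy meta-algorithm described above, the sketch below builds a solution one node at a time, with a stand-in scoring function in place of the learned graph-embedding network; the function names and the random scorer are illustrative assumptions, not taken from the paper.

```python
import random

def greedy_construct(nodes, edges, score_nodes, is_complete):
    """Incrementally build a solution: at each step, add the node that the
    scoring function (a stand-in for the learned embedding network) ranks highest."""
    solution, remaining = [], set(nodes)
    while remaining and not is_complete(solution, edges):
        scores = score_nodes(solution, remaining, edges)
        best = max(remaining, key=lambda v: scores[v])
        solution.append(best)
        remaining.remove(best)
    return solution

# Toy usage: Minimum Vertex Cover on a small graph with a random scorer.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
covered = lambda sol, es: all(u in sol or v in sol for u, v in es)
rand_score = lambda sol, rem, es: {v: random.random() for v in rem}
print(greedy_construct(range(4), edges, rand_score, covered))
```

In the paper, the scoring function is the output of a graph embedding network trained with reinforcement learning; the constructive loop above is the part that stays fixed across problems.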

717 citations


Posted Content
TL;DR: In this paper, a combination of reinforcement learning and graph embedding is proposed to learn heuristics for combinatorial optimization problems over graphs, such as Minimum Vertex Cover, Maximum Cut and Traveling Salesman problems.
Abstract: The design of good heuristics or approximation algorithms for NP-hard combinatorial optimization problems often requires significant specialized knowledge and trial-and-error. Can we automate this challenging, tedious process, and learn the algorithms instead? In many real-world applications, it is typically the case that the same optimization problem is solved again and again on a regular basis, maintaining the same problem structure but differing in the data. This provides an opportunity for learning heuristic algorithms that exploit the structure of such recurring problems. In this paper, we propose a unique combination of reinforcement learning and graph embedding to address this challenge. The learned greedy policy behaves like a meta-algorithm that incrementally constructs a solution, and the action is determined by the output of a graph embedding network capturing the current state of the solution. We show that our framework can be applied to a diverse range of optimization problems over graphs, and learns effective algorithms for the Minimum Vertex Cover, Maximum Cut and Traveling Salesman problems.

455 citations


Journal ArticleDOI
TL;DR: The results of the proposed algorithm on the test functions show that this algorithm benefits from high convergence and coverage, and demonstrate its applicability in solving challenging real-world problems as well.
Abstract: This paper proposes a multi-objective version of the recently proposed Ant Lion Optimizer (ALO) called Multi-Objective Ant Lion Optimizer (MOALO). A repository is first employed to store non-dominated Pareto optimal solutions obtained so far. Solutions are then chosen from this repository using a roulette wheel mechanism based on the coverage of solutions as antlions to guide ants towards promising regions of multi-objective search spaces. To prove the effectiveness of the proposed algorithm, a set of standard unconstrained and constrained test functions is employed. Also, the algorithm is applied to a variety of multi-objective engineering design problems: cantilever beam design, brushless DC wheel motor design, disk brake design, 4-bar truss design, safety isolating transformer design, speed reducer design, and welded beam design. The results are verified by comparing MOALO against NSGA-II and MOPSO. The results of the proposed algorithm on the test functions show that this algorithm benefits from high convergence and coverage. The results of the algorithm on the engineering design problems demonstrate its applicability in solving challenging real-world problems as well.
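The selection mechanism described in the abstract (roulette-wheel choice of antlions from the archive based on coverage) can be sketched as follows; the inverse-niche-count weighting and all variable names are illustrative assumptions rather than the paper's exact formulation.

```python
import random

def roulette_select(archive, niche_counts):
    """Pick an archive member with probability inversely proportional to how
    crowded its region of objective space is, steering ants toward sparsely
    covered areas of the Pareto front."""
    weights = [1.0 / (1 + c) for c in niche_counts]
    r, acc = random.uniform(0, sum(weights)), 0.0
    for sol, w in zip(archive, weights):
        acc += w
        if r <= acc:
            return sol
    return archive[-1]

# Toy usage: three archived solutions, the middle one sits in a crowded region.
archive = ["A", "B", "C"]
niche_counts = [1, 5, 1]   # hypothetical neighbour counts per hypercube
print(roulette_select(archive, niche_counts))
```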

446 citations


Journal ArticleDOI
TL;DR: This work proposes a new set of 100 instances ranging from 100 to 1000 customers, designed in order to provide a more comprehensive and balanced experimental setting, and reports an analysis on state-of-the-art exact and heuristic methods.

314 citations


Journal ArticleDOI
TL;DR: In this article, an ordered multi-material SIMP interpolation method is proposed to solve multi-material topology optimization problems without introducing any new variables, where power functions with scaling and translation coefficients are introduced to interpolate the elastic modulus and cost properties for multiple materials with respect to the normalized density variables.
Abstract: In this paper, an ordered multi-material SIMP (solid isotropic material with penalization) interpolation is proposed to solve multi-material topology optimization problems without introducing any new variables. Power functions with scaling and translation coefficients are introduced to interpolate the elastic modulus and the cost properties for multiple materials with respect to the normalized density variables. Besides a mass constraint, a cost constraint is also considered in compliance minimization problems. A heuristic updating scheme of the design variables is derived from the Kuhn-Tucker optimality condition (OC). Since the proposed method does not rely on additional variables to represent material selection, the computational cost is independent of the number of materials considered. The iteration scheme is designed to jump across the discontinuous points of the interpolation derivatives to make a stable transition from one material phase to another. Numerical examples are included to demonstrate the proposed method. Because of its conceptual simplicity, the proposed ordered multi-material SIMP interpolation can be easily embedded into any existing single-material SIMP topology optimization codes.
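A minimal sketch of the interval-wise power-law interpolation idea follows; the exact scaling and translation coefficients used in the paper may differ, and the material values in the example are hypothetical.

```python
def ordered_simp_modulus(x, rho_norm, E, p=3.0):
    """Piecewise power-law interpolation of the elastic modulus over the
    normalized density variable x. rho_norm and E hold the sorted normalized
    densities and moduli of the candidate materials; in each interval the
    scaling (A) and translation (B) coefficients are chosen so the curve
    passes through both end points."""
    for i in range(len(rho_norm) - 1):
        x0, x1 = rho_norm[i], rho_norm[i + 1]
        if x0 <= x <= x1:
            A = (E[i + 1] - E[i]) / (x1 ** p - x0 ** p)
            B = E[i] - A * x0 ** p
            return A * x ** p + B
    raise ValueError("x outside the normalized density range")

# Toy usage: void plus two materials (hypothetical property values, in Pa).
print(ordered_simp_modulus(0.7, [0.0, 0.4, 1.0], [1e-9, 70e9, 200e9]))
```

The same kind of interpolation would be applied to the cost property, so the single density variable alone encodes which material is active.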

282 citations


Journal ArticleDOI
01 Dec 2017
TL;DR: This work models the service placement problem for IoT applications over fog resources as an optimization problem, which explicitly considers the heterogeneity of applications and resources in terms of Quality of Service attributes, and proposes a genetic algorithm as a problem resolution heuristic.
Abstract: The Internet of Things (IoT) leads to an ever-growing presence of ubiquitous networked computing devices in public, business, and private spaces. These devices do not simply act as sensors, but feature computational, storage, and networking resources. Being located at the edge of the network, these resources can be exploited to execute IoT applications in a distributed manner. This concept is known as fog computing. While the theoretical foundations of fog computing are already established, there is a lack of resource provisioning approaches to enable the exploitation of fog-based computational resources. To resolve this shortcoming, we present a conceptual fog computing framework. Then, we model the service placement problem for IoT applications over fog resources as an optimization problem, which explicitly considers the heterogeneity of applications and resources in terms of Quality of Service attributes. Finally, we propose a genetic algorithm as a problem resolution heuristic and show, through experiments, that the service execution can achieve a reduction of network communication delays when the genetic algorithm is used, and a better utilization of fog resources when the exact optimization method is applied.

275 citations


Journal ArticleDOI
TL;DR: Compared with most existing methods for 3D path following, the proposed robust fuzzy control scheme reduces the design and implementation costs of a complicated dynamics controller and relaxes the requirement for accurate dynamics modelling and knowledge of environmental disturbances.

234 citations


Proceedings Article
01 Jan 2017
TL;DR: LDAMP as discussed by the authors mimics the behavior of the denoising-based approximate message passing (D-AMP) algorithm and can be applied to a variety of different measurement matrices.
Abstract: Compressive image recovery is a challenging problem that requires fast and accurate algorithms. Recently, neural networks have been applied to this problem with promising results. By exploiting massively parallel GPU processing architectures and oodles of training data, they can run orders of magnitude faster than existing techniques. However, these methods are largely unprincipled black boxes that are difficult to train and often-times specific to a single measurement matrix. It was recently demonstrated that iterative sparse-signal-recovery algorithms can be "unrolled" to form interpretable deep networks. Taking inspiration from this work, we develop a novel neural network architecture that mimics the behavior of the denoising-based approximate message passing (D-AMP) algorithm. We call this new network Learned D-AMP (LDAMP). The LDAMP network is easy to train, can be applied to a variety of different measurement matrices, and comes with a state-evolution heuristic that accurately predicts its performance. Most importantly, it outperforms the state-of-the-art BM3D-AMP and NLR-CS algorithms in terms of both accuracy and run time. At high resolutions, and when used with sensing matrices that have fast implementations, LDAMP runs over 50× faster than BM3D-AMP and hundreds of times faster than NLR-CS.
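To make the unrolling idea concrete, here is a simplified sketch of a D-AMP-style recursion in which each layer's denoiser would be a trained network; the plain soft-threshold "denoiser", the Monte Carlo divergence estimate, and all parameter choices are illustrative assumptions, not the LDAMP architecture itself.

```python
import numpy as np

def unrolled_damp(y, A, denoise, n_layers=10, eps=1e-3):
    """Each 'layer' applies a pseudo-data update, an Onsager correction, and a
    (potentially learnable) denoiser, mirroring one D-AMP iteration."""
    m, n = A.shape
    x, z = np.zeros(n), y.copy()
    for _ in range(n_layers):
        r = x + A.T @ z                       # pseudo-data seen by the denoiser
        x_new = denoise(r)
        probe = np.random.randn(n)            # Monte Carlo divergence estimate
        div = probe @ (denoise(r + eps * probe) - x_new) / eps
        z = y - A @ x_new + (div / m) * z     # Onsager-corrected residual
        x = x_new
    return x

# Toy usage with a soft-threshold "denoiser" on a sparse recovery problem.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100)) / np.sqrt(50)
x_true = np.zeros(100); x_true[:5] = 1.0
y = A @ x_true
soft = lambda r, t=0.1: np.sign(r) * np.maximum(np.abs(r) - t, 0)
print(np.linalg.norm(unrolled_damp(y, A, soft) - x_true))
```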

208 citations


Journal ArticleDOI
TL;DR: This work formulates the problem of minimizing inter-cloud traffic and response time in a multi-cloud scenario as an ILP optimization problem with important constraints such as total deployment costs and service level agreements (SLAs), and considers link delays and computational delays in the model.

197 citations


Journal ArticleDOI
17 Apr 2017-Energies
TL;DR: In this paper, the authors proposed an optimized home energy management system (OHEMS) that not only facilitates the integration of renewable energy source (RES) and energy storage system (ESS) but also incorporates the residential sector into DSM activities.
Abstract: The traditional power grid and its demand-side management (DSM) techniques are centralized and mainly focus on industrial consumers. Neglecting the residential and commercial sectors in DSM activities degrades the overall performance of a conventional grid. Therefore, the concept of DSM and demand response (DR) via the residential sector makes the smart grid (SG) superior to the traditional grid. In this context, this paper proposes an optimized home energy management system (OHEMS) that not only facilitates the integration of renewable energy sources (RES) and energy storage systems (ESS) but also incorporates the residential sector into DSM activities. The proposed OHEMS minimizes the electricity bill by scheduling the household appliances and ESS in response to the dynamic pricing of the electricity market. First, the constrained optimization problem is mathematically formulated as multiple knapsack problems, and then solved using heuristic algorithms: the genetic algorithm (GA), binary particle swarm optimization (BPSO), wind-driven optimization (WDO), bacterial foraging optimization (BFO), and a hybrid GA-PSO (HGPO) algorithm. The performance of the proposed scheme and heuristic algorithms is evaluated via MATLAB simulations. Results illustrate that the integration of RES and ESS reduces the electricity bill and peak-to-average ratio (PAR) by 19.94% and 21.55%, respectively. Moreover, the HGPO-based home energy management system outperforms the other heuristic algorithms, and further reduces the bill by 25.12% and the PAR by 24.88%.
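A toy sketch of the knapsack-style scheduling idea follows: shiftable appliances are pushed toward the cheapest hours subject to a per-hour power cap. The greedy assignment, appliance list, and price vector are illustrative assumptions; the paper solves the formulation with GA, BPSO, WDO, BFO, and HGPO instead.

```python
def schedule_appliances(appliances, prices, hour_capacity):
    """Greedy sketch: each shiftable appliance (name, power_kW, run_hours) is
    placed in the cheapest hours that still have capacity, lowering the bill
    and flattening the peak load."""
    load = [0.0] * len(prices)
    schedule = {}
    for name, power, run_hours in appliances:
        cheap_hours = sorted(range(len(prices)), key=lambda h: prices[h])
        chosen = [h for h in cheap_hours if load[h] + power <= hour_capacity][:run_hours]
        for h in chosen:
            load[h] += power
        schedule[name] = sorted(chosen)
    bill = sum(load[h] * prices[h] for h in range(len(prices)))
    return schedule, bill

# Toy usage with hypothetical appliances and a 24-hour price vector ($/kWh).
prices = [0.10] * 8 + [0.30] * 8 + [0.15] * 8
apps = [("washer", 1.5, 2), ("dishwasher", 1.0, 1), ("EV", 3.0, 4)]
print(schedule_appliances(apps, prices, hour_capacity=5.0))
```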

197 citations


Journal ArticleDOI
TL;DR: This paper proposes a holistic optimization framework for identifying a stream of vehicle trajectories that yield the optimum traffic performance measures on mobility, environment and safety and lays a solid foundation for developing holistic cooperative control strategies on a general transportation network with emerging technologies.
Abstract: Advanced connected and automated vehicle technologies enable us to modify driving behavior and control vehicle trajectories, which have been greatly constrained by human limits in existing manually-driven highway traffic. In order to maximize benefits from these technologies on highway traffic management, vehicle trajectories need to be not only controlled at the individual level but also coordinated collectively for a stream of traffic. As one of the pioneering attempts at highway traffic trajectory control, Part I of this study (Zhou et al., 2016) proposed a parsimonious shooting heuristic (SH) algorithm for constructing feasible trajectories for a stream of vehicles considering realistic constraints including vehicle kinematic limits, traffic arrival patterns, car-following safety, and signal operations. Based on the algorithmic and theoretical developments in the preceding paper, this paper proposes a holistic optimization framework for identifying a stream of vehicle trajectories that yield the optimum traffic performance measures on mobility, environment and safety. The computational complexity and mobility optimality of SH are theoretically analyzed, verifying the superior computational performance and high solution quality of SH. A numerical sub-gradient-based algorithm with SH as a subroutine (NG-SH) is proposed to simultaneously optimize travel time, a surrogate safety measure, and fuel consumption for a stream of vehicles on a signalized highway section. Numerical examples are conducted to illustrate computational and theoretical findings. They show that vehicle trajectories generated from NG-SH significantly outperform the benchmark case with all human drivers at all measures for all experimental scenarios. This study reveals a great potential of transformative trajectory optimization approaches in transportation engineering applications. It lays a solid foundation for developing holistic cooperative control strategies on a general transportation network with emerging technologies.

Journal ArticleDOI
TL;DR: The results are compared quantitatively and qualitatively with other algorithms using a variety of performance indicators, which show the merits of this new MOMVO algorithm in solving a wide range of problems with different characteristics.
Abstract: This work proposes the multi-objective version of the recently proposed Multi-Verse Optimizer (MVO) called Multi-Objective Multi-Verse Optimizer (MOMVO). The same concepts of MVO are used for converging towards the best solutions in a multi-objective search space. For maintaining and improving the coverage of Pareto optimal solutions obtained, however, an archive with an updating mechanism is employed. To test the performance of MOMVO, 80 case studies are employed including 49 unconstrained multi-objective test functions, 10 constrained multi-objective test functions, and 21 engineering design multi-objective problems. The results are compared quantitatively and qualitatively with other algorithms using a variety of performance indicators, which show the merits of this new MOMVO algorithm in solving a wide range of problems with different characteristics.

Journal ArticleDOI
TL;DR: This paper exploits a heuristic bootstrap sampling approach combined with the ensemble learning algorithm on the large-scale insurance business data mining, and proposes an ensemble random forest algorithm that uses the parallel computing capability and memory-cache mechanism optimized by Spark.
Abstract: Due to the imbalanced distribution of business data, missing user features, and many other reasons, directly using big data techniques on realistic business data tends to deviate from the business goals. It is difficult to model the insurance business data by classification algorithms, such as logistic regression and support vector machine (SVM). In this paper, we exploit a heuristic bootstrap sampling approach combined with an ensemble learning algorithm for large-scale insurance business data mining, and propose an ensemble random forest algorithm that uses the parallel computing capability and memory-cache mechanism optimized by Spark. We collected the insurance business data from China Life Insurance Company to analyze the potential customers using the proposed algorithm. We use F-Measure and G-mean to evaluate the performance of the algorithm. Experimental results show that the ensemble random forest algorithm outperformed SVM and other classification algorithms in both performance and accuracy on the imbalanced data, and it is useful for improving the accuracy of product marketing compared to the traditional artificial approach.
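The balanced-bootstrap idea behind the sampling approach can be sketched on a single machine as below; the class labels, tree count, and use of scikit-learn trees are illustrative assumptions, and the Spark-based parallelism and memory caching of the paper are not reproduced here.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def balanced_bootstrap_forest(X, y, n_trees=25, seed=0):
    """Train each tree on a bootstrap sample drawn equally from the minority
    (label 1) and majority (label 0) classes, so the ensemble is not dominated
    by the imbalanced class distribution."""
    rng = np.random.default_rng(seed)
    minority, majority = np.where(y == 1)[0], np.where(y == 0)[0]
    k = len(minority)
    trees = []
    for _ in range(n_trees):
        idx = np.concatenate([rng.choice(minority, k, replace=True),
                              rng.choice(majority, k, replace=True)])
        trees.append(DecisionTreeClassifier(max_features="sqrt").fit(X[idx], y[idx]))
    return trees

def forest_predict(trees, X):
    """Majority vote over the ensemble."""
    votes = np.mean([t.predict(X) for t in trees], axis=0)
    return (votes >= 0.5).astype(int)

# Toy usage on a synthetic dataset with roughly 5% positives.
rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 5)); y = (rng.random(1000) < 0.05).astype(int)
print(forest_predict(balanced_bootstrap_forest(X, y), X[:10]))
```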

Journal ArticleDOI
TL;DR: The experiments show that the LAHC approach is simple, easy to implement, and yet an effective search procedure, and that, in contrast to cooling-schedule-based methods, it has the additional advantage of scale independence.
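Since the TL;DR references the Late Acceptance Hill-Climbing (LAHC) procedure, a minimal sketch of the standard acceptance rule is included below, as commonly described in the literature rather than reproduced from the paper; the history length, iteration budget, and toy objective are assumptions.

```python
import random

def lahc(initial, cost, neighbour, history_len=50, n_iters=10000):
    """Late Acceptance Hill-Climbing: a candidate is accepted if it is no worse
    than the current solution or no worse than the cost recorded `history_len`
    iterations earlier, so no cooling schedule needs to be tuned."""
    current, current_cost = initial, cost(initial)
    history = [current_cost] * history_len
    best, best_cost = current, current_cost
    for it in range(n_iters):
        cand = neighbour(current)
        cand_cost = cost(cand)
        v = it % history_len
        if cand_cost <= current_cost or cand_cost <= history[v]:
            current, current_cost = cand, cand_cost
            if cand_cost < best_cost:
                best, best_cost = cand, cand_cost
        history[v] = current_cost
    return best, best_cost

# Toy usage: minimise a 1-D quadratic with random-walk neighbours.
print(lahc(10.0, lambda x: (x - 3) ** 2, lambda x: x + random.uniform(-1, 1)))
```

Unlike a cooling schedule, the acceptance rule compares costs directly, which is why the method does not depend on the scale of the objective.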

Journal ArticleDOI
TL;DR: Simulation results illustrate that the proposed methodologies can outperform some counterparts, providing sequences with good autocorrelation features, especially in the discrete-phase/binary case.
Abstract: This paper is focused on the design of phase sequences with good (aperiodic) autocorrelation properties in terms of peak sidelobe level and integrated sidelobe level. The problem is formulated as a biobjective Pareto optimization forcing either a continuous or a discrete phase constraint at the design stage. An iterative procedure based on the coordinate descent method is introduced to deal with the resulting optimization problems that are nonconvex and NP-hard in general. Each iteration of the devised method requires the solution of a nonconvex min–max problem. It is handled either through a novel bisection or an FFT-based method respectively for the continuous and the discrete phase constraint. Additionally, a heuristic approach to initialize the procedures employing the lp-norm minimization technique is proposed. Simulation results illustrate that the proposed methodologies can outperform some counterparts providing sequences with good autocorrelation features especially in the discrete phase/binary case.
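To make the design objectives concrete, the sketch below evaluates the aperiodic autocorrelation sidelobes and the ISL of a phase sequence and runs a naive coordinate-descent sweep over a discrete phase alphabet; the paper instead solves each coordinate's min-max subproblem via bisection or an FFT-based method, so this exhaustive per-coordinate search is only an illustrative stand-in.

```python
import numpy as np

def sidelobes(phases):
    """Aperiodic autocorrelation sidelobe magnitudes of exp(1j * phases)."""
    s = np.exp(1j * phases)
    corr = np.correlate(s, s, mode="full")      # lags -(n-1) .. n-1
    return np.abs(np.delete(corr, len(s) - 1))  # drop the zero-lag peak

def isl(phases):
    """Integrated sidelobe level."""
    return float(np.sum(sidelobes(phases) ** 2))

def coordinate_descent_isl(phases, alphabet, sweeps=20):
    """Cycle through the entries, setting each phase to the alphabet value that
    most reduces the ISL while the other phases are held fixed."""
    phases = np.asarray(phases, dtype=float).copy()
    for _ in range(sweeps):
        for i in range(len(phases)):
            phases[i] = min(alphabet, key=lambda p: isl(
                np.concatenate([phases[:i], [p], phases[i + 1:]])))
    return phases

# Toy usage: length-16 binary (0 / pi) sequence.
rng = np.random.default_rng(0)
p0 = rng.choice([0.0, np.pi], size=16)
p1 = coordinate_descent_isl(p0, [0.0, np.pi])
print(isl(p0), "->", isl(p1))
```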

Journal ArticleDOI
TL;DR: In this paper, a parsimonious shooting heuristic algorithm is proposed to construct vehicle trajectories on a signalized highway segment that comply with boundary conditions for vehicle arrivals, vehicle mechanical limits, traffic lights and vehicle following safety.
Abstract: This paper studies a problem of designing trajectories of a platoon of vehicles on a highway segment with advanced connected and automated vehicle technologies. This problem is very complex because each vehicle trajectory is essentially an infinite-dimensional object and neighboring trajectories have complex interactions (e.g., car-following behavior). A parsimonious shooting heuristic algorithm is proposed to construct vehicle trajectories on a signalized highway segment that comply with boundary conditions for vehicle arrivals, vehicle mechanical limits, traffic lights and vehicle following safety. This algorithm breaks each vehicle trajectory into a few sections that are analytically solvable. This decomposes the originally hard trajectory design problem into a simple constructive heuristic. Then we slightly adapt this shooting heuristic algorithm to efficiently solve a leading vehicle problem on an uninterrupted freeway. To study theoretical properties of the proposed algorithms, the time geography theory is generalized by considering finite accelerations. With this generalized theory, it is found that under mild conditions, these algorithms can always obtain a feasible solution to the original complex trajectory design problem. Further, we discover that the shooting heuristic solution is a generalization of the solution to the classic kinematic wave theory by incorporating finite accelerations. We identify the theoretical bounds on the difference between the shooting heuristic solution and the kinematic wave solution. Numerical experiments are conducted to verify the theoretical results and to draw additional managerial insights into the potential of trajectory design in improving traffic performance. In summary, this paper provides a methodological and theoretical foundation for advanced traffic control by optimizing the trajectories of connected and automated vehicles. Building upon this foundation, an optimization framework will be presented in a following paper as Part II of this study.

Journal ArticleDOI
01 Apr 2017
TL;DR: A simulated annealing (SA) heuristic is proposed to solve the hybrid vehicle routing problem (HVRP), which is an extension of the Green Vehicle Routing Problem (G-VRP); results show that the proposed SA effectively solves the HVRP.
Abstract: This research proposes the hybrid vehicle routing problem (HVRP), which is an extension of the green vehicle routing problem. A simulated annealing (SA) heuristic is proposed to solve the HVRP. Computational results show that the proposed SA effectively solves the HVRP. Sensitivity analysis has been conducted to understand the effect of hybrid vehicles and charging stations on the travel cost. This study proposes the Hybrid Vehicle Routing Problem (HVRP), which is an extension of the Green Vehicle Routing Problem (G-VRP). We focus on vehicles that use a hybrid power source, known as the Plug-in Hybrid Electric Vehicle (PHEV), and generate a mathematical model to minimize the total cost of travel by driving PHEVs. Moreover, the model considers the utilization of electric and fuel power depending on the availability of either electric charging or fuel stations. We develop simulated annealing with a restart strategy (SA_RS) to solve this problem, and it consists of two versions. The first version determines the acceptance probability of a worse solution using the Boltzmann function, denoted as SA_RSBF. The second version employs the Cauchy function to determine the acceptance probability of a worse solution, denoted as SA_RSCF. The proposed SA algorithm is first verified with benchmark data of the capacitated vehicle routing problem (CVRP), with the result showing that it performs well and confirms its efficiency in solving the CVRP. Further analysis shows that SA_RSCF is preferable compared to SA_RSBF and that SA with a restart strategy performs better than without a restart strategy. We next utilize the SA_RSCF method to solve the HVRP. The numerical experiments show that vehicle type and the number of electric charging stations have an impact on the total travel cost.
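The two acceptance rules named in the abstract (Boltzmann vs. Cauchy) and the restart idea can be sketched generically as below; the cooling rate, restart trigger, and toy objective are assumptions, and the routing-specific neighbourhood moves of the paper are not modelled.

```python
import math, random

def accept_boltzmann(delta, temp):
    """SA_RSBF-style rule: accept a worse move with Boltzmann probability."""
    return delta <= 0 or random.random() < math.exp(-delta / temp)

def accept_cauchy(delta, temp):
    """SA_RSCF-style rule: accept a worse move with a Cauchy-shaped probability."""
    return delta <= 0 or random.random() < temp / (temp + delta)

def sa_with_restart(x0, cost, neighbour, accept, temp=100.0, alpha=0.95,
                    n_iters=5000, restart_after=200):
    """Simulated annealing with a restart strategy: if the best solution has not
    improved for `restart_after` moves, the search restarts from that best."""
    x, fx = x0, cost(x0)
    best, fbest, stall = x, fx, 0
    for _ in range(n_iters):
        cand = neighbour(x)
        fc = cost(cand)
        if accept(fc - fx, temp):
            x, fx = cand, fc
        if fx < fbest:
            best, fbest, stall = x, fx, 0
        else:
            stall += 1
            if stall >= restart_after:
                x, fx, stall = best, fbest, 0   # restart from the incumbent best
        temp *= alpha
    return best, fbest

# Toy usage: Cauchy acceptance on a 1-D quadratic.
print(sa_with_restart(10.0, lambda v: (v - 3) ** 2,
                      lambda v: v + random.uniform(-1, 1), accept_cauchy))
```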

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors studied the strategies that select seed users in an adaptive manner and showed that a simple greedy adaptive seeding strategy finds an effective solution with a provable performance guarantee.
Abstract: For the purpose of propagating information and ideas through a social network, a seeding strategy aims to find a small set of seed users that are able to maximize the spread of the influence, which is termed the influence maximization problem. Although a large number of works have studied this problem, the existing seeding strategies are limited to models that cannot fully capture the characteristics of real-world social networks. In fact, due to high-speed data transmission and the large population of participants, the diffusion processes in real-world social networks exhibit many aspects of uncertainty. As shown in the experiments, when taking such uncertainty into account, the state-of-the-art seeding strategies are pessimistic as they fail to trace the influence diffusion. In this paper, we study strategies that select seed users in an adaptive manner. We first formally define the dynamic Independent Cascade model and introduce the concept of an adaptive seeding strategy. Then, based on the proposed model, we show that a simple greedy adaptive seeding strategy finds an effective solution with a provable performance guarantee. Besides the greedy algorithm, an efficient heuristic algorithm is provided for better scalability. Extensive experiments have been performed on both real-world networks and synthetic power-law networks. The results demonstrate the superiority of the adaptive seeding strategies over other baseline methods.
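For contrast with the adaptive strategies studied in the paper, here is a sketch of the basic non-adaptive greedy seeding baseline under Monte Carlo Independent Cascade simulation; the propagation probability, trial count, and toy graph are illustrative assumptions.

```python
import random

def simulate_ic(graph, seeds, p=0.1, trials=200):
    """Monte Carlo estimate of the expected spread under the Independent Cascade model."""
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in graph.get(u, []):
                    if v not in active and random.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / trials

def greedy_seeds(graph, k, p=0.1):
    """Non-adaptive greedy: repeatedly add the node with the largest estimated
    marginal gain. An adaptive strategy would instead observe the realised
    diffusion of the seeds chosen so far before picking the next one."""
    seeds = []
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    for _ in range(k):
        gains = {v: simulate_ic(graph, seeds + [v], p) for v in nodes - set(seeds)}
        seeds.append(max(gains, key=gains.get))
    return seeds

# Toy usage on a tiny directed graph given as an adjacency list.
g = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5]}
print(greedy_seeds(g, 2))
```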

Journal ArticleDOI
TL;DR: In this paper, the authors review some of the more recent methods for distribution network reconfiguration, DG placement, and sizing that are intended to minimize power losses and improve the voltage profile.
Abstract: The Network Reconfiguration technique is a method which helps mitigate power losses from distribution systems. However, the reconfiguration technique can only do this up to a certain point. Further power loss reduction may be realized via the application of Distributed Generation (DG). However, the integration of DG into the distribution system at a non-optimal location may result in increased power losses and voltage fluctuations. Therefore, a strategy for the selection of optimal placement and sizing of the DG needs to be developed and at the same time ensure optimal configuration. Many heuristic and artificial intelligence methods have been proposed in the literature for optimal distribution network reconfiguration, DGs sizing, and location. This paper reviews some of the more recent methods for distribution network reconfiguration, DG placement, and sizing that are intended to minimize power losses and improve the voltage profile.

Proceedings ArticleDOI
01 Oct 2017
TL;DR: A new primal-dual approach is presented that makes it possible to exploit the geometric structure of k-means and to satisfy the hard constraint that at most k clusters are selected without deteriorating the approximation guarantee.
Abstract: Clustering is a classic topic in optimization with k-means being one of the most fundamental such problems. In the absence of any restrictions on the input, the best known algorithm for k-means with a provable guarantee is a simple local search heuristic yielding an approximation guarantee of 9+ε, a ratio that is known to be tight with respect to such methods. We overcome this barrier by presenting a new primal-dual approach that allows us to (1) exploit the geometric structure of k-means and (2) satisfy the hard constraint that at most k clusters are selected without deteriorating the approximation guarantee. Our main result is a 6.357-approximation algorithm with respect to the standard LP relaxation. Our techniques are quite general and we also show improved guarantees for the general version of k-means where the underlying metric is not required to be Euclidean and for k-median in Euclidean metrics.

Journal ArticleDOI
07 Mar 2017-Energies
TL;DR: In this paper, a heuristic algorithms-based energy management controller is designed for a residential area in a smart grid and the proposed hybrid genetic wind-driven (GWD) algorithm is evaluated.
Abstract: In recent years, demand side management (DSM) techniques have been designed for residential, industrial and commercial sectors. These techniques are very effective in flattening the load profile of customers in grid area networks. In this paper, a heuristic algorithms-based energy management controller is designed for a residential area in a smart grid. In essence, five heuristic algorithms (the genetic algorithm (GA), the binary particle swarm optimization (BPSO) algorithm, the bacterial foraging optimization algorithm (BFOA), the wind-driven optimization (WDO) algorithm and our proposed hybrid genetic wind-driven (GWD) algorithm) are evaluated. These algorithms are used for scheduling residential loads between peak hours (PHs) and off-peak hours (OPHs) in a real-time pricing (RTP) environment while maximizing user comfort (UC) and minimizing both electricity cost and the peak to average ratio (PAR). Moreover, these algorithms are tested in two scenarios: (i) scheduling the load of a single home and (ii) scheduling the load of multiple homes. Simulation results show that our proposed hybrid GWD algorithm performs better than the other heuristic algorithms in terms of the selected performance metrics.

Journal ArticleDOI
TL;DR: This paper considers the problem of placing electric vehicle (EV) charging stations at selected bus stops, to minimize the total installation cost of charging stations, and designs a linear programming relaxation algorithm to get a suboptimal solution and derives an approximation ratio of the algorithm.
Abstract: Due to the low pollution and sustainable properties, using electric buses for public transportation systems has attracted considerable attention, whereas how to recharge the electric buses with long continuous service hours remains an open problem. In this paper, we consider the problem of placing electric vehicle (EV) charging stations at selected bus stops, to minimize the total installation cost of charging stations. Specifically, we study two EV charging station placement cases, with and without considering the limited battery size, which are called ECSP_LB and ECSP problems, respectively. The solution of the ECSP problem achieves the lower bound compared with the solution of the ECSP_LB problem, and the larger the battery size of the EV, the lower the overall cost of the charging station installation. For both cases, we prove that the placement problems under consideration are NP-hard and formulate them into integer linear programming. Specifically, for the ECSP problem we design a linear programming relaxation algorithm to get a suboptimal solution and derive an approximation ratio of the algorithm. Moreover, we derive the condition of the battery size when the ECSP problem can be applied. For the ECSP_LB problem, we show that, for a single bus route, the problem can be optimally solved with a backtracking algorithm, whereas for multiple bus routes we propose two heuristic algorithms, namely, multiple backtracking and greedy algorithms. Finally, simulation results show the effectiveness of the proposed schemes.

Journal ArticleDOI
TL;DR: CoFIM, a community-based framework for influence maximization on large-scale networks, is proposed; it derives a simple evaluation form of the total influence spread, which is submodular and can be efficiently computed, along with a fast algorithm to select the seed nodes.
Abstract: Influence maximization is a classic optimization problem studied in the area of social network analysis and viral marketing. Given a network, it is defined as the problem of finding k seed nodes so that the influence spread of the network can be optimized. Kempe et al. have proved that this problem is NP hard and the objective function is submodular, based on which a greedy algorithm was proposed to give a near-optimal solution. However, this simple greedy algorithm is time consuming, which limits its application on large-scale networks. Heuristic algorithms generally cannot provide any performance guarantee. To solve this problem, in this paper we propose CoFIM, a community-based framework for influence maximization on large-scale networks. In our framework the influence propagation process is divided into two phases: (i) seeds expansion; and (ii) intra-community propagation. The first phase is the expansion of seed nodes among different communities at the beginning of diffusion. The second phase is the influence propagation within communities which are independent of each other. Based on the framework, we derive a simple evaluation form of the total influence spread which is submodular and can be efficiently computed. Then we further propose a fast algorithm to select the seed nodes. Experimental results on synthetic and nine real-world large datasets including networks with millions of nodes and hundreds of millions of edges show that our algorithm achieves competitive results in influence spread as compared with state-of-the-art algorithms and it is much more efficient in terms of both time and memory usage.

Journal ArticleDOI
TL;DR: A heuristic strategy based on voltage sensitivity analysis is proposed to select the most effective locations in the network at which to install a given number of ESSs, while circumventing the combinatorial nature of the problem.
Abstract: This paper addresses the problem of finding the optimal configuration (number, locations, and sizes) of energy storage systems (ESSs) in a radial low voltage distribution network with the aim of preventing over- and undervoltages. A heuristic strategy based on voltage sensitivity analysis is proposed to select the most effective locations in the network at which to install a given number of ESSs, while circumventing the combinatorial nature of the problem. For fixed ESS locations, the multi-period optimal power flow framework is adopted to formulate the sizing problem, for whose solution convex relaxations based on semidefinite programming are exploited. Uncertainties in the storage sizing decision problem due to stochastic generation and demand are accounted for by carrying out the optimal sizing over different realizations of the demand and generation profiles, and then taking a worst-case approach to select the ESS sizes. The final choice of the most suitable ESS configuration is done by minimizing a total cost, which takes into account the number of storage devices, their total installed capacity and average network losses. The proposed algorithm is extensively tested on 200 randomly generated radial networks, and successfully applied to a real Italian low voltage network and a modified version of the IEEE 34-bus test feeder.

Journal ArticleDOI
TL;DR: In this paper, a two-stage stochastic optimization problem suitable to solve strategic optimization problems of car-sharing systems that utilize electric cars is introduced and studied, and a time-dependent integer linear program and a heuristic algorithm for solving the considered optimization problem are developed and tested on real world instances from the city of Vienna, as well as on grid-graph-based instances.
Abstract: In this article, we introduce and study a two-stage stochastic optimization problem suitable to solve strategic optimization problems of car-sharing systems that utilize electric cars. By combining the individual advantages of car-sharing and electric vehicles, such electric car-sharing systems may help to overcome future challenges related to pollution, congestion, or shortage of fossil fuels. A time-dependent integer linear program and a heuristic algorithm for solving the considered optimization problem are developed and tested on real world instances from the city of Vienna, as well as on grid-graph-based instances. An analysis of the influence of different parameters on the overall performance and managerial insights are given. Results show that the developed exact approach is suitable for medium sized instances such as the ones obtained from the inner districts of Vienna. They also show that the heuristic can be used to tackle very-large-scale instances that cannot be approached successfully by the integer-programming-based method.

Journal ArticleDOI
TL;DR: Numerical results are provided to validate the proposed algorithm (including its accuracy and computational efficiency) and demonstrate that the optimal MDs' cooperative offloading can significantly reduce the system cost compared to some heuristic schemes.
Abstract: In this paper, we investigate cooperative traffic offloading among mobile devices (MDs) which are interested in receiving a common content from a cellular base station (BS). For offloading traffic, the BS first sends the content to some selected MDs which then broadcast the received data to the other MDs, such that each MD can receive the entire content simultaneously. Due to each MD's limited transmit-power and energy budget, the transmission rate of the content should be properly designed, since it strongly influences whether and how long each MD can perform relaying. Therefore, different from most existing MD cooperative schemes, we focus on a novel joint optimization of the content transmission rate and each MD's relay-duration, with the objective of minimizing the system cost accounting for the energy consumption and the cellular-link usage. To tackle the technical challenge due to the coupling effect between the content transmission rate and each MD's relay-duration, we exploit the decomposable property of the joint optimization problem, based on which we characterize different possible cases for achieving the optimal solution. We then derive the optimal solution for each case analytically, and further propose an efficient algorithm for finding the globally optimal solution of the original joint optimization problem. Numerical results are provided to validate the proposed algorithm (including its accuracy and computational efficiency) and demonstrate that the optimal MDs' cooperative offloading can significantly reduce the system cost compared to some heuristic schemes. Several interesting insights about the cooperative offloading are also obtained.

Journal ArticleDOI
TL;DR: A new a posteriori multi-objective optimization algorithm, named the multi-objective Jaya (MO-Jaya) algorithm, is proposed that can provide multiple optimal solutions in a single simulation run; the results show the better performance of the proposed algorithm.

Journal ArticleDOI
TL;DR: A metaheuristic is proposed for the Time-Dependent Pollution-Routing Problem, which consists of routing a number of vehicles to serve a set of customers and determining their speed on each route segment, with the objective of minimizing the cost of the driver's wage and greenhouse gas emissions.

Journal ArticleDOI
TL;DR: A hybrid large neighborhood search is proposed for solving the multi-vehicle bike-repositioning problem, a pick-up and delivery vehicle routing problem that arises in connection with bike-sharing systems; computational results indicate that the heuristic outperforms both CPLEX and the math heuristic proposed by Forma et al.
Abstract: This paper addresses the multi-vehicle bike-repositioning problem, a pick-up and delivery vehicle routing problem that arises in connection with bike-sharing systems. Bike-sharing is a green transportation mode that makes it possible for people to use shared bikes for travel. Bikes are retrieved and parked at any of the stations within the bike-sharing network. One major challenge is that the demand for and supply of bikes are not always matched. Hence, vehicles are used to pick up bikes from surplus stations and transport them to deficit stations to satisfy a particular service level. This operation is called a bike-repositioning problem. In this paper, we propose a hybrid large neighborhood search for solving the problem. Several removal and insertion operators are proposed to diversify and intensify the search. A simple tabu search is further applied to the most promising solutions. The heuristic is evaluated on three sets of instances with up to 518 stations and five vehicles. The results of computational experiments indicate that the heuristic outperforms both CPLEX and the math heuristic proposed by Forma et al. (2015) [Transportation Research Part B 71: 230–247]. The average improvement of our heuristic over the math heuristic is 1.06%, and it requires only a small fraction of the computation time.
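The destroy-and-repair loop at the core of a large neighbourhood search can be sketched generically as follows; the acceptance rule (keep only improvements), the toy tour objective, and the operator definitions are illustrative assumptions, and the tabu component and problem-specific operators of the paper are omitted.

```python
import random

def lns(initial, cost, destroy_ops, repair_ops, n_iters=1000):
    """Skeleton large neighbourhood search: each iteration picks a removal
    (destroy) operator and an insertion (repair) operator, rebuilds the
    incumbent, and keeps improving solutions."""
    best = current = initial
    best_cost = current_cost = cost(initial)
    for _ in range(n_iters):
        partial, removed = random.choice(destroy_ops)(current)
        cand = random.choice(repair_ops)(partial, removed)
        cand_cost = cost(cand)
        if cand_cost < current_cost:
            current, current_cost = cand, cand_cost
            if cand_cost < best_cost:
                best, best_cost = cand, cand_cost
    return best, best_cost

# Toy usage: a cyclic tour over points on a line, with random removal and
# greedy cheapest insertion as the operators.
points = [0, 7, 3, 9, 1, 5]
tour_cost = lambda t: sum(abs(t[i] - t[i - 1]) for i in range(len(t)))
def destroy(t, n_remove=2):
    removed = random.sample(t, n_remove)
    return [x for x in t if x not in removed], removed
def repair(partial, removed):
    t = partial[:]
    for x in removed:
        pos = min(range(len(t) + 1), key=lambda i: tour_cost(t[:i] + [x] + t[i:]))
        t.insert(pos, x)
    return t
print(lns(points, tour_cost, [destroy], [repair]))
```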

Journal ArticleDOI
TL;DR: A comprehensive comparison of the proposed heuristic against several state-of-the-art algorithms is performed over a challenging set of benchmarks from the CEC2014 real-parameter single-objective competition, and the results affirm the robustness of the proposed approach.
Abstract: Developing efficient evolutionary algorithms attracts many researchers due to the existence of optimization problems in numerous real-world applications. A new differential evolution algorithm, sTDE-dR, is proposed to improve the search quality and avoid premature convergence and stagnation. The population is clustered in multiple tribes and utilizes an ensemble of different mutation and crossover strategies. In this algorithm, a competitive success-based scheme is introduced to determine the life cycle of each tribe and its participation ratio for the next generation. In each tribe, a different adaptive scheme is used to control the scaling factor and crossover rate. The mean success of each subgroup is used to calculate the ratio of its participation for the next generation. This guarantees that successful tribes with the best adaptive schemes are the only ones that guide the search toward the optimal solution. The population size is dynamically reduced using a dynamic reduction method. A comprehensive comparison of the proposed heuristic against several state-of-the-art algorithms is performed over a challenging set of benchmarks from the CEC2014 real-parameter single-objective competition. The results affirm the robustness of the proposed approach compared to other state-of-the-art algorithms.