
Showing papers on "Local search (optimization)" published in 2017


Journal ArticleDOI
TL;DR: An improved JAYA (IJAYA) optimization algorithm is proposed in the paper, which obtains highly competitive performance compared with other state-of-the-art algorithms, especially in terms of accuracy and reliability.

353 citations


Journal ArticleDOI
01 Aug 2017
TL;DR: The experimental results show that the proposed MGACACO algorithm can avoid falling into local extrema, and achieves better search precision and faster convergence speed.
Abstract: To overcome the deficiencies of weak local search ability in genetic algorithms (GA) and slow global convergence speed in the ant colony optimization (ACO) algorithm when solving complex optimization problems, the chaotic optimization method, a multi-population collaborative strategy and adaptive control parameters are introduced into the GA and ACO algorithm to propose a genetic and ant colony adaptive collaborative optimization (MGACACO) algorithm for solving complex optimization problems. The proposed MGACACO algorithm makes use of the exploration capability of GA and the stochastic capability of the ACO algorithm. In the proposed MGACACO algorithm, the multi-population strategy is used to realize information exchange and cooperation among the various populations. The chaotic optimization method is used to overcome long search times, avoid falling into local extrema and improve the search accuracy. The adaptive control parameters are used to make the pheromone distribution relatively uniform and to effectively resolve the contradiction between expanding the search and finding the optimal solution. The collaborative strategy is used to dynamically balance the global search ability and local search ability, and to improve the convergence speed. Finally, TSP instances of various scales are selected to verify the effectiveness of the proposed MGACACO algorithm. The experimental results show that the proposed MGACACO algorithm can avoid falling into local extrema, and achieves better search precision and faster convergence speed.

343 citations
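
As a rough illustration of the chaotic local refinement idea in the MGACACO paper above, the following Python sketch perturbs a candidate solution with a logistic chaotic map and keeps the best perturbed point. The logistic map, the step-scaling factor and the Rosenbrock test function are assumptions chosen for the example, not details taken from the paper.

```python
import numpy as np

def rosenbrock(x):
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2))

def chaotic_local_search(f, x, bounds, n_steps=30, mu=4.0, step=0.1, seed=0):
    """Perturb a candidate solution with a logistic chaotic map and keep the
    best perturbed point (one common way to realize 'chaotic optimization')."""
    rng = np.random.default_rng(seed)
    lo, hi = (np.asarray(b, dtype=float) for b in bounds)
    z = rng.uniform(0.01, 0.99, size=x.shape)      # chaotic state, away from fixed points
    best_x, best_f = x.copy(), f(x)
    for _ in range(n_steps):
        z = mu * z * (1.0 - z)                     # logistic map iteration
        cand = np.clip(x + (z - 0.5) * step * (hi - lo), lo, hi)
        fc = f(cand)
        if fc < best_f:
            best_x, best_f = cand, fc
    return best_x, best_f

x0 = np.full(5, 0.5)
bounds = (np.full(5, -2.0), np.full(5, 2.0))
print(chaotic_local_search(rosenbrock, x0, bounds))
```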


Journal ArticleDOI
TL;DR: A novel surrogate-assisted particle swarm optimization (PSO) inspired by committee-based active learning (CAL) is proposed, and experimental results demonstrate that the proposed algorithm is able to achieve better or competitive solutions with a limited budget of hundreds of exact FEs.
Abstract: Function evaluations (FEs) of many real-world optimization problems are time or resource consuming, posing a serious challenge to the application of evolutionary algorithms (EAs) to solve these problems. To address this challenge, the research on surrogate-assisted EAs has attracted increasing attention from both academia and industry over the past decades. However, most existing surrogate-assisted EAs (SAEAs) either still require thousands of expensive FEs to obtain acceptable solutions, or are only applied to very low-dimensional problems. In this paper, a novel surrogate-assisted particle swarm optimization (PSO) inspired by committee-based active learning (CAL) is proposed. In the proposed algorithm, a global model management strategy inspired by CAL is developed, which searches for the best and most uncertain solutions according to a surrogate ensemble using a PSO algorithm and evaluates these solutions using the expensive objective function. In addition, a local surrogate model is built around the best solution obtained so far. Then, a PSO algorithm searches on the local surrogate to find its optimum and evaluates it. The evolutionary search using the global model management strategy switches to the local search once no further improvement can be observed, and vice versa. This iterative search process continues until the computational budget is exhausted. Experimental results comparing the proposed algorithm with a few state-of-the-art SAEAs on benchmark problems with up to 30 decision variables as well as an airfoil design problem demonstrate that the proposed algorithm is able to achieve better or competitive solutions with a limited budget of hundreds of exact FEs.

270 citations
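
The committee-based model management described above can be imitated with any ensemble of cheap regressors: the committee mean points to the most promising candidate, and the committee disagreement points to the most uncertain one. The sketch below uses bootstrapped linear least-squares surrogates and a random candidate pool purely as stand-ins; the paper's actual surrogate models and the PSO search are not reproduced.

```python
import numpy as np

def fit_committee(X, y, n_models=5, seed=0):
    """Fit a committee of linear surrogates on bootstrap resamples
    (assumption: any cheap regressor could play this role)."""
    rng = np.random.default_rng(seed)
    A = np.hstack([X, np.ones((len(X), 1))])          # design matrix with bias column
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), len(X))         # bootstrap sample
        w, *_ = np.linalg.lstsq(A[idx], y[idx], rcond=None)
        models.append(w)
    return models

def committee_predict(models, X):
    A = np.hstack([X, np.ones((len(X), 1))])
    preds = np.stack([A @ w for w in models])         # (n_models, n_points)
    return preds.mean(axis=0), preds.std(axis=0)      # mean = promise, std = uncertainty

# Toy "expensive" function and a handful of already-evaluated samples.
f = lambda X: np.sum((X - 0.3) ** 2, axis=1)
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(20, 4)); y = f(X)
models = fit_committee(X, y)
cand = rng.uniform(-1, 1, size=(100, 4))
mean, unc = committee_predict(models, cand)
print("most promising:", cand[mean.argmin()], "most uncertain:", cand[unc.argmax()])
```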


Journal ArticleDOI
TL;DR: An adaptive multimodal continuous ACO algorithm is introduced and an adaptive parameter adjustment is developed, which takes the difference among niches into consideration, which affords a good balance between exploration and exploitation.
Abstract: Seeking multiple optima simultaneously, which multimodal optimization aims at, has attracted increasing attention but remains challenging. Taking advantage of ant colony optimization (ACO) algorithms in preserving high diversity, this paper intends to extend ACO algorithms to deal with multimodal optimization. First, combined with current niching methods, an adaptive multimodal continuous ACO algorithm is introduced. In this algorithm, an adaptive parameter adjustment is developed, which takes the difference among niches into consideration. Second, to accelerate convergence, a differential evolution mutation operator is alternatively utilized to build base vectors for ants to construct new solutions. Then, to enhance the exploitation, a local search scheme based on Gaussian distribution is self-adaptively performed around the seeds of niches. Together, the proposed algorithm affords a good balance between exploration and exploitation. Extensive experiments on 20 widely used benchmark multimodal functions are conducted to investigate the influence of each algorithmic component and results are compared with several state-of-the-art multimodal algorithms and winners of competitions on multimodal optimization. These comparisons demonstrate the competitive efficiency and effectiveness of the proposed algorithm, especially in dealing with complex problems with high numbers of local optima.

244 citations
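
A stripped-down version of the Gaussian local search performed around niche seeds might look like the sketch below: sample a few Gaussian neighbours of each seed and keep any improvement. The fixed standard deviation and the Rastrigin test function are assumptions for the example; the paper adapts these quantities self-adaptively per niche.

```python
import numpy as np

def gaussian_local_search(seed_x, f, sigma, n_samples=10, rng=None):
    """Sample around a niche seed with a Gaussian and keep the best point."""
    rng = rng or np.random.default_rng(0)
    best_x, best_f = seed_x, f(seed_x)
    for _ in range(n_samples):
        cand = seed_x + rng.normal(0.0, sigma, size=seed_x.shape)
        fc = f(cand)
        if fc < best_f:
            best_x, best_f = cand, fc
    return best_x, best_f

# Multimodal test function (many local optima); refine each niche seed.
rastrigin = lambda x: float(np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10))
seeds = [np.array([1.1, -0.9]), np.array([2.05, 3.1])]
for s in seeds:
    print(gaussian_local_search(s, rastrigin, sigma=0.1))
```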


Journal ArticleDOI
01 Oct 2017
TL;DR: To address the slow convergence of the ant colony algorithm, an improved ant colony optimization algorithm is proposed for path planning of mobile robots in an environment represented using the grid method.
Abstract: To address the slow convergence of the ant colony algorithm, an improved ant colony optimization algorithm is proposed for path planning of mobile robots in an environment represented using the grid method. Pheromone diffusion and geometric local optimization are combined in the process of searching for the globally optimal path. The current path pheromone diffuses in the direction of the potential field force during the ant searching process, so ants tend to search in a higher-fitness subspace, and the search space of the test pattern becomes smaller. The path that is first optimized using the ant colony algorithm is then further optimized using the geometric algorithm. The pheromones of the first optimal path and the second optimal path are updated simultaneously. The simulation results show that the improved ant colony optimization algorithm is notably effective.

242 citations


Journal ArticleDOI
TL;DR: In this paper, a location-inventory-routing model for perishable products is proposed to determine the number and location of required warehouses, the inventory level at each retailer, and the routes traveled by each vehicle.

215 citations


Journal ArticleDOI
Jin Deng, Ling Wang
TL;DR: A competitive memetic algorithm (CMA) is proposed to solve the multi-objective distributed permutation flow-shop scheduling problem (MODPFSP) with the makespan and total tardiness criteria.
Abstract: In this paper, a competitive memetic algorithm (CMA) is proposed to solve the multi-objective distributed permutation flow-shop scheduling problem (MODPFSP) with the makespan and total tardiness criteria. Two populations corresponding to two different objectives are employed in the CMA. Some objective-specific operators are designed for each population, and a special interaction mechanism between two populations is designed. Moreover, a competition mechanism is proposed to adaptively adjust the selection rates of the operators, and some knowledge-based local search operators are developed to enhance the exploitation ability of the CMA. In addition, the influence of the parameters on the performance of the CMA is investigated by using the Taguchi method of design-of-experiment. Finally, extensive computational tests and comparisons are carried out to demonstrate the effectiveness of the CMA in solving the MODPFSP.

179 citations
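
The competition mechanism that adapts operator selection rates can be viewed as a credit-assignment rule: operators that recently produced improvements receive a larger share of future applications, with a floor so that none is abandoned. The update rule below is a hypothetical sketch in that spirit, not the paper's exact formula.

```python
import numpy as np

def competitive_selection(success, usage, p_min=0.1):
    """Adapt operator selection rates from recent success counts.

    Hypothetical rule: each operator's rate is proportional to its success
    ratio, with a floor p_min shared across operators so no operator dies out.
    """
    ratio = success / np.maximum(usage, 1)
    if ratio.sum() == 0:
        ratio = np.ones_like(ratio, dtype=float)      # no information yet: uniform rates
    p = ratio / ratio.sum()
    p = p_min / len(p) + (1 - p_min) * p              # keep a minimum selection rate
    return p / p.sum()

success = np.array([12, 3, 7])      # improvements produced by operators 1..3
usage = np.array([40, 35, 25])      # times each operator was applied
print(competitive_selection(success, usage))
```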


Journal ArticleDOI
TL;DR: The main application is a nearly optimal lower bound on the complexity of any statistical query algorithm for detecting planted bipartite clique distributions when the planted clique has size O(n^(1/2−δ)) for any constant δ > 0.
Abstract: We introduce a framework for proving lower bounds on computational problems over distributions against algorithms that can be implemented using access to a statistical query oracle. For such algorithms, access to the input distribution is limited to obtaining an estimate of the expectation of any given function on a sample drawn randomly from the input distribution rather than directly accessing samples. Most natural algorithms of interest in theory and in practice, for example, moments-based methods, local search, standard iterative methods for convex optimization, MCMC, and simulated annealing, can be implemented in this framework. Our framework is based on, and generalizes, the statistical query model in learning theory [Kearns 1998]. Our main application is a nearly optimal lower bound on the complexity of any statistical query algorithm for detecting planted bipartite clique distributions (or planted dense subgraph distributions) when the planted clique has size O(n^(1/2−δ)) for any constant δ > 0. The assumed hardness of variants of these problems has been used to prove hardness of several other problems and as a guarantee for security in cryptographic applications. Our lower bounds provide concrete evidence of hardness, thus supporting these assumptions.

178 citations
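
A statistical query algorithm never sees samples directly; it only asks an oracle for an estimate of the expectation of a chosen function, accurate to some tolerance. The toy oracle below illustrates that interface; the Hoeffding-style sample size and the uniform toy distribution are assumptions for the demonstration, unrelated to the planted clique construction.

```python
import numpy as np

def statistical_query(query_fn, sample_fn, tau, rng=None):
    """Simulate a statistical query (SQ) oracle.

    The algorithm supplies query_fn with values in [0, 1]; the oracle answers
    with an estimate of E[query_fn(x)] within tolerance tau, here by averaging
    enough i.i.d. samples drawn internally (Hoeffding-style sample size).
    """
    rng = rng or np.random.default_rng(0)
    n = int(np.ceil(4.0 / tau ** 2))
    xs = sample_fn(n, rng)
    return float(np.mean([query_fn(x) for x in xs]))

# Example query: estimate the mean of the first coordinate of the input distribution.
sample_fn = lambda n, rng: rng.uniform(0, 1, size=(n, 2))
query_fn = lambda x: x[0]
print(statistical_query(query_fn, sample_fn, tau=0.05))
```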


Journal ArticleDOI
TL;DR: A memetic ACO algorithm, where a local search operator (called unstring and string) is integrated into ACO, is proposed to address DTSPs, where the best solution from ACO is passed to the local search operator, which removes and inserts cities in such a way that improves the solution quality.
Abstract: For a dynamic traveling salesman problem (DTSP), the weights (or traveling times) between two cities (or nodes) may be subject to changes. Ant colony optimization (ACO) algorithms have proved to be powerful methods to tackle such problems due to their adaptation capabilities. It has been shown that the integration of local search operators can significantly improve the performance of ACO. In this paper, a memetic ACO algorithm, where a local search operator (called unstring and string) is integrated into ACO, is proposed to address DTSPs. The best solution from ACO is passed to the local search operator, which removes and inserts cities in such a way that improves the solution quality. The proposed memetic ACO algorithm is designed to address both symmetric and asymmetric DTSPs. The experimental results show the efficiency of the proposed memetic algorithm for addressing DTSPs in comparison with other state-of-the-art algorithms.

169 citations
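
A minimal version of a remove-and-reinsert move in the spirit of the 'unstring and string' operator: take one city out of the tour and put it back at its cheapest position, repeating while the tour improves. This sketch ignores the dynamic weight changes and the ACO component of the full algorithm, and the paper's unstringing/stringing moves are more elaborate.

```python
import numpy as np

def tour_length(tour, D):
    return sum(D[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def remove_and_reinsert(tour, D):
    """Remove each city in turn and re-insert it at its cheapest position,
    accepting the move only if it shortens the tour."""
    tour = list(tour)
    improved = True
    while improved:
        improved = False
        for city in list(tour):
            rest = [c for c in tour if c != city]
            best_pos, best_len = None, float("inf")
            for pos in range(len(rest)):                 # cheapest insertion position
                cand = rest[:pos] + [city] + rest[pos:]
                L = tour_length(cand, D)
                if L < best_len:
                    best_pos, best_len = pos, L
            if best_len + 1e-12 < tour_length(tour, D):
                tour = rest[:best_pos] + [city] + rest[best_pos:]
                improved = True
    return tour

rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, size=(8, 2))
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
print(remove_and_reinsert(list(range(8)), D))
```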


Proceedings ArticleDOI
01 Oct 2017
TL;DR: A new primal-dual approach is presented that makes it possible to exploit the geometric structure of k-means and to satisfy the hard constraint that at most k clusters are selected without deteriorating the approximation guarantee.
Abstract: Clustering is a classic topic in optimization, with k-means being one of the most fundamental such problems. In the absence of any restrictions on the input, the best known algorithm for k-means with a provable guarantee is a simple local search heuristic yielding an approximation guarantee of 9+ε, a ratio that is known to be tight with respect to such methods. We overcome this barrier by presenting a new primal-dual approach that allows us to (1) exploit the geometric structure of k-means and (2) satisfy the hard constraint that at most k clusters are selected without deteriorating the approximation guarantee. Our main result is a 6.357-approximation algorithm with respect to the standard LP relaxation. Our techniques are quite general and we also show improved guarantees for the general version of k-means where the underlying metric is not required to be Euclidean and for k-median in Euclidean metrics.

153 citations
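
The local search heuristic referred to above repeatedly swaps an open center for a candidate center while the clustering cost drops. The single-swap toy below, restricted to centers chosen among the input points, illustrates the move; reaching the 9+ε guarantee actually requires multi-swaps and a carefully chosen candidate set, which this sketch omits.

```python
import numpy as np
from itertools import product

def kmeans_cost(points, centers):
    d = np.linalg.norm(points[:, None] - centers[None, :], axis=-1)
    return float(np.sum(d.min(axis=1) ** 2))

def single_swap_local_search(points, k, seed=0):
    """Start from k random input points as centers and apply any improving
    single swap (drop one open center, add one candidate) until none exists."""
    rng = np.random.default_rng(seed)
    centers = list(rng.choice(len(points), size=k, replace=False))
    cost = kmeans_cost(points, points[centers])
    improved = True
    while improved:
        improved = False
        for out_i, in_c in product(range(k), range(len(points))):
            if in_c in centers:
                continue
            cand = centers.copy(); cand[out_i] = in_c
            c = kmeans_cost(points, points[cand])
            if c < cost - 1e-12:
                centers, cost, improved = cand, c, True
                break                                    # restart scan after an improvement
    return points[centers], cost

rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(m, 0.2, size=(30, 2)) for m in (0, 3, 6)])
print(single_swap_local_search(pts, k=3)[1])
```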


Proceedings ArticleDOI
01 Jun 2017
TL;DR: A new retreat phase called Covariance Matrix Adapted Retreat Phase (CMAR) is proposed, which uses a covariance matrix to generate new solutions and thus improves the local search capability of EBO; the resulting algorithm is competitive with the compared algorithms.
Abstract: The Effective Butterfly Optimizer (EBO) is a self-adaptive Butterfly Optimizer which incorporates a crossover operator in Perching and Patrolling to increase the diversity of the population. This paper proposes a new retreat phase called Covariance Matrix Adapted Retreat Phase (CMAR), which uses a covariance matrix to generate new solutions and thus improves the local search capability of EBO. This version of EBO is called EBOwithCMAR. We evaluated the performance of EBOwithCMAR on the CEC-2017 benchmark problems and compared it with the results of the winners of a special session of CEC-2016 for bound-constrained problems. The experimental results show that EBOwithCMAR is competitive with the compared algorithms.

Journal ArticleDOI
TL;DR: Two versions of this multimodal EDA, integrated with clustering strategies for crowding and speciation, are developed, which operate at the niche level and are very promising for complex problems with many local optima.
Abstract: Taking advantage of the ability of estimation of distribution algorithms (EDAs) to preserve high diversity, this paper proposes a multimodal EDA. Integrated with clustering strategies for crowding and speciation, two versions of this algorithm are developed, which operate at the niche level. Then these two algorithms are equipped with three distinctive techniques: 1) a dynamic cluster sizing strategy; 2) an alternative utilization of Gaussian and Cauchy distributions to generate offspring; and 3) an adaptive local search. The dynamic cluster sizing affords a potential balance between exploration and exploitation and reduces the sensitivity to the cluster size in the niching methods. Taking advantage of Gaussian and Cauchy distributions, we generate the offspring at the niche level by alternately using these two distributions. Such utilization can also potentially offer a balance between exploration and exploitation. Further, solution accuracy is enhanced through a new local search scheme probabilistically conducted around seeds of niches, with probabilities determined self-adaptively according to the fitness values of these seeds. Extensive experiments conducted on 20 benchmark multimodal problems confirm that both algorithms can achieve competitive performance compared with several state-of-the-art multimodal algorithms, which is supported by nonparametric tests. Especially, the proposed algorithms are very promising for complex problems with many local optima.
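
The alternating use of Gaussian and Cauchy distributions for offspring generation can be illustrated directly: Gaussian steps stay close to the niche, while heavy-tailed Cauchy steps occasionally make long exploratory jumps. Scaling the steps by the niche's per-dimension standard deviation is an assumption made for this sketch.

```python
import numpy as np

def sample_offspring(niche, n_offspring, use_cauchy, rng):
    """Generate offspring around a niche's mean: Gaussian steps exploit,
    heavy-tailed Cauchy steps occasionally make long exploratory jumps."""
    mean = niche.mean(axis=0)
    scale = niche.std(axis=0) + 1e-12
    if use_cauchy:
        steps = rng.standard_cauchy(size=(n_offspring, niche.shape[1]))
    else:
        steps = rng.standard_normal(size=(n_offspring, niche.shape[1]))
    return mean + scale * steps

rng = np.random.default_rng(0)
niche = rng.normal([1.0, -2.0], 0.3, size=(15, 2))     # one cluster of the population
for gen in range(4):
    kids = sample_offspring(niche, 5, use_cauchy=(gen % 2 == 1), rng=rng)
    print("generation", gen, "offspring spread:", float(kids.std()))
```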

Journal ArticleDOI
28 Sep 2017 - Symmetry
TL;DR: A nature-inspired optimization algorithm is presented, based on a novel mathematical model of the way polar bears move in the search for food and hunt, to improve global and local search within the solution space.
Abstract: In this article, we present a nature-inspired optimization algorithm, which we call the Polar Bear Optimization Algorithm (PBO). The inspiration for the algorithm comes from the way polar bears hunt to survive in harsh arctic conditions. These carnivorous mammals are active all year round. The frosty climate, unfavorable to other animals, has made polar bears adapt to a specific mode of exploration and hunting over large areas, not only over ice but also in water. The proposed novel mathematical model of the way polar bears move in the search for food and hunt can be a valuable optimization method for various theoretical and practical problems. Optimization is very similar to processes in nature: just as we search for optimal solutions to mathematical models, animals search for optimal conditions in which to develop in their natural environments. In this method, we use a model of polar bear behaviors as a search engine for optimal solutions. The proposed simulated adaptation to harsh winter conditions is an advantage for local and global search, while a birth and death mechanism controls the population. The proposed PBO was evaluated and compared to other meta-heuristic algorithms using sample test functions and some classical engineering problems. The experimental results were compared to other algorithms and analyzed using various parameters. The analysis allowed us to identify the leading advantages, which are rapid recognition of the area by the relevant population and an efficient birth and death mechanism that improves global and local search within the solution space.

Posted Content
TL;DR: This paper presents an efficient range estimation algorithm that uses a combination of local search and linear programming problems to efficiently find the maximum and minimum values taken by the outputs of the NN over the given input set and demonstrates the effectiveness of the proposed approach for verification of NNs used in automated control as well as those used in classification.
Abstract: Deep neural networks (NNs) are extensively used for machine learning tasks such as image classification, perception and control of autonomous systems. Increasingly, these deep NNs are also being deployed in high-assurance applications. Thus, there is a pressing need for developing techniques to verify neural networks to check whether certain user-expected properties are satisfied. In this paper, we study a specific verification problem of computing a guaranteed range for the output of a deep neural network given a set of inputs represented as a convex polyhedron. Range estimation is a key primitive for verifying deep NNs. We present an efficient range estimation algorithm that uses a combination of local search and linear programming problems to efficiently find the maximum and minimum values taken by the outputs of the NN over the given input set. In contrast to recently proposed "monolithic" optimization approaches, we use local gradient descent to repeatedly find and eliminate local minima of the function. The final global optimum is certified using a mixed integer programming instance. We implement our approach and compare it with Reluplex, a recently proposed solver for deep neural networks. We demonstrate the effectiveness of the proposed approach for verification of NNs used in automated control as well as those used in classification.
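
The local search half of the range estimation procedure amounts to projected gradient descent over the input set (simplified here to a box) from many random starts, collecting candidate minima of the network output; in the paper the best candidate is then certified with a mixed-integer program, which is not shown. The tiny ReLU network and its weights below are invented for illustration.

```python
import numpy as np

# A tiny fixed ReLU network y = w2 . relu(W1 x + b1) + b2 (weights are assumed).
W1 = np.array([[1.0, -2.0], [0.5, 1.5], [-1.0, 1.0]])
b1 = np.array([0.1, -0.2, 0.0])
w2 = np.array([1.0, -1.0, 0.5]); b2 = 0.3

def net(x):
    return w2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def grad(x):
    pre = W1 @ x + b1
    return (w2 * (pre > 0)) @ W1                     # subgradient of the ReLU network

def estimate_min(lo, hi, starts=20, steps=200, lr=0.05, seed=0):
    """Projected gradient descent from random starts inside the input box;
    the smallest value found is a candidate for the lower end of the output
    range (to be certified exactly by a MIP in the full approach)."""
    rng = np.random.default_rng(seed)
    best = np.inf
    for _ in range(starts):
        x = rng.uniform(lo, hi)
        for _ in range(steps):
            x = np.clip(x - lr * grad(x), lo, hi)    # project back into the box
        best = min(best, float(net(x)))
    return best

print("estimated min output:", estimate_min(np.array([-1.0, -1.0]), np.array([1.0, 1.0])))
```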

Journal ArticleDOI
01 Mar 2017 - Energy
TL;DR: In this article, an improved strength Pareto evolutionary algorithm is proposed to solve the multi-objective optimal power flow problem, where fuel cost and emission are considered as two objective functions for the optimal flow problem.

Journal ArticleDOI
TL;DR: A new hybridized version of Particle Swarm Optimization algorithm with Variable Neighborhood Search is proposed for solving this significant combinatorial optimization problem, the Constrained Shortest Path problem.

Journal ArticleDOI
TL;DR: The proposed extension of the hill climbing method, called β-hill climbing, is a very efficient enhancement to hill climbing, providing powerful results when compared with other advanced methods on the same global optimization functions.
Abstract: The hill climbing method is an optimization technique that is able to build a search trajectory in the search space until reaching a local optimum. It only accepts uphill movements, which leads it to easily get stuck in local optima. Several extensions to hill climbing have been proposed to overcome this problem, such as Simulated Annealing and Tabu Search. In this paper, an extended version of the hill climbing method, called β-hill climbing, is proposed. A stochastic operator called the β-operator is utilized in hill climbing to control the balance between exploration and exploitation during the search. The proposed method has been evaluated using the IEEE CEC2005 global optimization functions. The results show that the proposed method is a very efficient enhancement to hill climbing, providing powerful results when compared with other advanced methods on the same global optimization functions.
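
A minimal sketch of the β-hill climbing idea described above: a standard neighborhood move followed by a β-operator that resets each decision variable to a random value with probability β, while only non-worsening moves are accepted. The parameter values and the sphere test function are assumptions for the demo.

```python
import numpy as np

def beta_hill_climbing(f, x0, bounds, beta=0.05, bw=0.05, iters=2000, seed=0):
    """Minimal sketch of β-hill climbing for minimization."""
    rng = np.random.default_rng(seed)
    lo, hi = (np.asarray(b, dtype=float) for b in bounds)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(iters):
        cand = x + rng.uniform(-bw, bw, size=x.shape) * (hi - lo)    # neighborhood move
        mask = rng.random(x.shape) < beta
        cand = np.where(mask, rng.uniform(lo, hi), cand)             # β-operator: random reset
        cand = np.clip(cand, lo, hi)
        fc = f(cand)
        if fc <= fx:                                                 # greedy, non-worsening acceptance
            x, fx = cand, fc
    return x, fx

sphere = lambda x: float(np.sum(np.asarray(x) ** 2))
print(beta_hill_climbing(sphere, np.full(5, 3.0), (np.full(5, -5.0), np.full(5, 5.0))))
```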

Journal ArticleDOI
TL;DR: A metaheuristic called adaptive large neighborhood search (ALNS) is developed to create a conflict-free observational timeline, and time slacks are introduced to confine the propagation of the time-dependent transition-time constraint.

Journal ArticleDOI
TL;DR: A single-machine scheduling problem with a power-down mechanism is studied to minimize both total energy consumption and maximum tardiness, and a basic ε-constraint method is proposed to obtain the complete Pareto front of the problem.

Journal ArticleDOI
01 Feb 2017
TL;DR: This work proposes an adaptive hybrid population management strategy using memory, local search and random strategies to effectively handle environment dynamicity in the multi-objective case, where objective functions change over time.
Abstract: In addition to the need to simultaneously optimize several competing objectives, many real-world problems are also dynamic in nature. These problems are called dynamic multi-objective optimization problems. Applying evolutionary algorithms to solve dynamic optimization problems has received great attention among many researchers. However, most works are restricted to the single-objective case. In this work, we propose an adaptive hybrid population management strategy using memory, local search and random strategies to effectively handle environment dynamicity in the multi-objective case, where objective functions change over time. Moreover, the proposed strategy is based on a new technique that detects the change severity, according to which it adjusts the number of memory and random solutions to be used. This ensures, on the one hand, a high level of convergence and, on the other hand, the required diversity. We propose a dynamic version of the Non-dominated Sorting Genetic Algorithm II (NSGA-II), within which we integrate the above-mentioned strategies. Empirical results show that our proposal, based on the use of the adaptive strategy, is able to handle dynamic environments and to track the Pareto front as it changes over time. Moreover, when compared with several recently proposed dynamic algorithms, it presents competitive or better results on most problems.
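
The adaptive population management can be caricatured as follows: once a change is detected, part of the population is replaced by a mix of memory solutions and random immigrants, and the mix depends on the measured change severity. The fifty percent replacement fraction and the linear split used below are hypothetical choices; the paper derives these counts from its own severity-detection technique.

```python
import numpy as np

def reseed_population(pop, memory, severity, bounds, rng):
    """After an environment change, replace half the population: small changes
    favor memory solutions (convergence), large changes favor random
    immigrants (diversity). 'severity' is assumed to lie in [0, 1]."""
    lo, hi = bounds
    n_replace = len(pop) // 2
    n_random = int(round(severity * n_replace))
    n_memory = n_replace - n_random
    new_pop = pop.copy()
    idx = rng.choice(len(pop), size=n_replace, replace=False)
    mem_picks = memory[rng.choice(len(memory), size=n_memory)]
    rand_picks = rng.uniform(lo, hi, size=(n_random, pop.shape[1]))
    new_pop[idx] = np.vstack([mem_picks, rand_picks])
    return new_pop

rng = np.random.default_rng(0)
pop = rng.uniform(-1, 1, size=(20, 3))
memory = rng.uniform(-1, 1, size=(5, 3))         # archived past Pareto solutions
print(reseed_population(pop, memory, severity=0.7, bounds=(-1.0, 1.0), rng=rng).shape)
```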

Proceedings Article
01 Jan 2017
TL;DR: This paper establishes that using bisecting k-means divisive clustering has a very poor lower bound on its approximation ratio for the same objective and shows that there are divisive algorithms that perform well with respect to this objective by giving two constant approximation algorithms.
Abstract: Hierarchical clustering is a data analysis method that has been used for decades. Despite its widespread use, the method has an underdeveloped analytical foundation. Having a well understood foundation would both support the currently used methods and help guide future improvements. The goal of this paper is to give an analytic framework to better understand observations seen in practice. This paper considers the dual of a problem framework for hierarchical clustering introduced by Dasgupta. The main result is that one of the most popular algorithms used in practice, average linkage agglomerative clustering, has a small constant approximation ratio for this objective. Furthermore, this paper establishes that bisecting k-means divisive clustering has a very poor lower bound on its approximation ratio for the same objective. However, we show that there are divisive algorithms that perform well with respect to this objective by giving two constant approximation algorithms. This paper is among the first works to establish guarantees on widely used hierarchical algorithms for a natural objective function. This objective and analysis give insight into what these popular algorithms are optimizing and when they will perform well.

Journal ArticleDOI
01 Nov 2017
TL;DR: The proposed hybrid PSO-SA algorithm demonstrates improved performance in solving these problems compared to other evolutionary methods and can reliably and effectively be used for various optimization problems.
Abstract: Highlights: Development of a new hybrid PSO-SA optimization method. Numerical validation of the proposed method using a number of benchmark functions. Use of three criteria for comparative work. Finding near-optimum parameters of the proposed method. Application of the proposed algorithm to two engineering problems. A novel hybrid particle swarm and simulated annealing stochastic optimization method is proposed. The proposed hybrid method uses both PSO and SA in sequence and integrates the merits of the good exploration capability of PSO and the good local search properties of SA. Numerical simulation has been performed for the selection of near-optimum parameters of the method. The performance of this hybrid optimization technique was evaluated by comparing optimization results of thirty benchmark functions of different dimensions with those obtained by other numerical methods, considering three criteria. These criteria were stability, average trial function evaluations for successful runs, and the total average trial function evaluations considering both successful and failed runs. The design of laminated composite materials with required effective stiffness properties and the minimum-weight design of a three-bar truss are addressed as typical applications of the proposed algorithm in various types of optimization problems. In general, the proposed hybrid PSO-SA algorithm demonstrates improved performance in solving these problems compared to other evolutionary methods. The results of this research show that the proposed algorithm can reliably and effectively be used for various optimization problems.
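
Because the hybridization is sequential, the SA stage simply refines the best particle returned by PSO. Only that refinement stage is sketched below, with a stand-in for the PSO result; the cooling schedule and step size are illustrative, not the near-optimum parameters reported in the paper.

```python
import numpy as np

def simulated_annealing_refine(f, x0, step=0.1, T0=1.0, alpha=0.95, iters=500, seed=0):
    """SA refinement applied to the best solution returned by PSO, reflecting
    the sequence PSO (exploration) then SA (local search)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx, T = f(x), T0
    for _ in range(iters):
        cand = x + rng.normal(0, step, size=x.shape)
        fc = f(cand)
        if fc < fx or rng.random() < np.exp(-(fc - fx) / T):   # Metropolis acceptance
            x, fx = cand, fc
        T *= alpha                                             # geometric cooling
    return x, fx

# Stand-in for a PSO result: pretend PSO stopped near, but not at, the optimum.
f = lambda x: float(np.sum((np.asarray(x) - 1.0) ** 2))
pso_best = np.array([1.3, 0.6, 1.1])
print(simulated_annealing_refine(f, pso_best))
```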

Journal ArticleDOI
TL;DR: This paper improves and extends the Discrete Symbiotic Organisms Search by using three mutation-based local search operators to reconstruct its population, improve its exploration and exploitation capability, and accelerate the convergence speed.
Abstract: A Discrete Symbiotic Organisms Search (DSOS) algorithm for finding a near optimal solution for the Travelling Salesman Problem (TSP) is proposed. The SOS is a metaheuristic search optimization algorithm, inspired by the symbiotic interaction strategies often adopted by organisms in the ecosystem for survival and propagation. This new optimization algorithm has been proven to be very effective and robust in solving numerical optimization and engineering design problems. In this paper, the SOS is improved and extended by using three mutation-based local search operators to reconstruct its population, improve its exploration and exploitation capability, and accelerate the convergence speed. To prove that the proposed solution approach of the DSOS is a promising technique for solving combinatorial problems like the TSPs, a set of benchmarks of symmetric TSP instances selected from the TSPLIB library are used to evaluate its performance against other heuristic algorithms. Numerical results obtained show that the proposed optimization method can achieve results close to the theoretical best known solutions within a reasonable time frame.
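
Mutation operators on a tour, applied greedily, act as simple local search moves. The sketch below uses swap, insertion and inversion moves on a toy instance; choosing exactly these three operators is an assumption consistent with the description above rather than a verbatim reproduction of the paper's operators.

```python
import random

def swap_mutation(tour, i, j):
    t = tour[:]; t[i], t[j] = t[j], t[i]; return t

def insertion_mutation(tour, i, j):
    t = tour[:]; city = t.pop(i); t.insert(j, city); return t

def inversion_mutation(tour, i, j):
    i, j = sorted((i, j)); return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def mutation_local_search(tour, length_fn, tries=200, seed=0):
    """Apply the three mutation operators as local search moves, keeping a
    move only when it shortens the tour."""
    rng = random.Random(seed)
    best, best_len = tour[:], length_fn(tour)
    for _ in range(tries):
        i, j = rng.sample(range(len(tour)), 2)
        op = rng.choice([swap_mutation, insertion_mutation, inversion_mutation])
        cand = op(best, i, j)
        if length_fn(cand) < best_len:
            best, best_len = cand, length_fn(cand)
    return best, best_len

# Toy instance: cities on a line; the shortest tour visits them in coordinate order.
coords = [0, 5, 1, 4, 2, 3]
dist = lambda a, b: abs(coords[a] - coords[b])
length_fn = lambda t: sum(dist(t[k], t[(k + 1) % len(t)]) for k in range(len(t)))
print(mutation_local_search(list(range(6)), length_fn))
```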

Journal ArticleDOI
TL;DR: New variants of FPA employing new mutation operators, dynamic switching and improved local search are proposed; the best variant among these is the adaptive-Lévy flower pollination algorithm (ALFPA), which has been further compared with well-known algorithms like the artificial bee colony, differential evolution, the firefly algorithm, the bat algorithm and the grey wolf optimizer.
Abstract: Highlights: A new concept based on mutation operators is applied to the flower pollination algorithm (FPA). Based on mutation, five new variants of FPA are proposed. Dynamic switch probability is used in all the proposed variants. Benchmarking of the variants with respect to standard FPA. Benchmarking and statistical testing of the best variant with respect to state-of-the-art algorithms. The flower pollination algorithm (FPA) is a recent addition to the field of nature-inspired computing. The algorithm has been inspired by the pollination process in flowers and has been applied to a large spectrum of optimization problems. But it has certain drawbacks which prevent its application as a standard algorithm. This paper proposes new variants of FPA employing new mutation operators, dynamic switching and improved local search. A comprehensive comparison of the proposed algorithms has been done for different population sizes on seventeen benchmark problems. The best variant among these is the adaptive-Lévy flower pollination algorithm (ALFPA), which has been further compared with well-known algorithms like artificial bee colony (ABC), differential evolution (DE), the firefly algorithm (FA), the bat algorithm (BA) and the grey wolf optimizer (GWO). Numerical results show that ALFPA gives superior performance on standard benchmark functions. The algorithm has also been subjected to statistical tests, and again the performance is better than that of the other algorithms.
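
FPA's global pollination step moves a flower toward the current best along a Lévy flight, and the improved variants keep this ingredient. The Mantegna sampling scheme below is the one commonly used in the FPA literature to draw Lévy steps with exponent β = 1.5; the paper's new mutation operators and dynamic switching are not reproduced.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=None):
    """Draw a Lévy-distributed step via Mantegna's algorithm."""
    rng = rng or np.random.default_rng(0)
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma_u, size=dim)
    v = rng.normal(0, 1, size=dim)
    return u / np.abs(v) ** (1 / beta)

def global_pollination(x, best, rng):
    """Move a flower toward the current best solution along a Lévy flight."""
    return x + levy_step(len(x), rng=rng) * (best - x)

rng = np.random.default_rng(1)
x, best = np.array([2.0, -1.0]), np.array([0.0, 0.0])
print(global_pollination(x, best, rng))
```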

Journal ArticleDOI
TL;DR: It is proved in the framework of stochastic optimization that the proposed collective neurodynamic approach is capable of computing the global optimal solutions with probability one provided that a sufficiently large number of neural networks are utilized.
Abstract: Global optimization is a long-lasting research topic in the field of optimization, posing many challenging theoretical and computational issues. This paper presents a novel collective neurodynamic method for solving constrained global optimization problems. First, a one-layer recurrent neural network (RNN) is presented for searching the Karush–Kuhn–Tucker points of the optimization problem under study. Next, a collective neurodynamic optimization approach is developed by emulating the paradigm of brainstorming. Multiple RNNs are exploited cooperatively to search for the global optimal solutions in a framework of particle swarm optimization. Each RNN carries out a precise local search and converges to a candidate solution according to its own neurodynamics. The neuronal state of each neural network is repetitively reset by exchanging historical information of each individual network and the entire group. Wavelet mutation is performed to avoid prematurity, add diversity, and promote global convergence. It is proved in the framework of stochastic optimization that the proposed collective neurodynamic approach is capable of computing the global optimal solutions with probability one, provided that a sufficiently large number of neural networks are utilized. The essence of the collective neurodynamic optimization approach lies in its potential to solve constrained global optimization problems in real time. The effectiveness and characteristics of the proposed approach are illustrated using benchmark optimization problems.

Proceedings ArticleDOI
01 Jul 2017
TL;DR: A combinatorial optimization problem whose feasible solutions define both a decomposition and a node labeling of a given graph, which offers a common mathematical abstraction of seemingly unrelated computer vision tasks, including instance-separating semantic segmentation, articulated human body pose estimation and multiple object tracking.
Abstract: We state a combinatorial optimization problem whose feasible solutions define both a decomposition and a node labeling of a given graph. This problem offers a common mathematical abstraction of seemingly unrelated computer vision tasks, including instance-separating semantic segmentation, articulated human body pose estimation and multiple object tracking. Conceptually, it generalizes the unconstrained integer quadratic program and the minimum cost lifted multicut problem, both of which are NP-hard. In order to find feasible solutions efficiently, we define two local search algorithms that converge monotonously to a local optimum, offering a feasible solution at any time. To demonstrate the effectiveness of these algorithms in tackling computer vision tasks, we apply them to instances of the problem that we construct from published data, using published algorithms. We report state-of-the-art application-specific accuracy in the three above-mentioned applications.

Journal ArticleDOI
01 Mar 2017
TL;DR: This paper develops six versions of SOS, a simple and powerful metaheuristic that simulates the symbiotic interaction strategies adopted by an organism for surviving in an ecosystem, for solving the capacitated vehicle routing problem (CVRP).
Abstract: Highlights: A new metaheuristic, symbiotic organisms search (SOS), is applied to the capacitated vehicle routing problem (CVRP). Two solution representations, SR-1 and SR-2, are implemented and compared. Two new interaction strategies, competition and amensalism, are proposed to improve SOS. The performance of SOS is evaluated using two sets of classical benchmark problems. The results indicate that the proposed SOS performs well in solving CVRP. This paper presents the symbiotic organisms search (SOS) heuristic for solving the capacitated vehicle routing problem (CVRP), which is a well-known discrete optimization problem. The objective of CVRP is to decide the routes for a set of vehicles to serve a set of demand points while minimizing the total routing cost. SOS is a simple and powerful metaheuristic that simulates the symbiotic interaction strategies adopted by an organism for surviving in an ecosystem. As SOS was originally developed for solving continuous optimization problems, we apply two solution representations, SR-1 and SR-2, to transform SOS into an applicable solution approach for CVRP and then apply a local search strategy to improve the solution quality of SOS. The original SOS uses three interaction strategies, mutualism, commensalism, and parasitism, to improve a candidate solution. In this improved version, we propose two new interaction strategies, namely competition and amensalism. We develop six versions of SOS for solving CVRP. The first version, SOS-Canonical, utilizes a commonly used continuous-to-discrete solution representation transformation procedure. The second version is an improvement of canonical SOS with a local search strategy, denoted as SOS-Basic. The third and fourth versions use SR-1 and SR-2 with a local search strategy, denoted as SOS-SR-1 and SOS-SR-2. The fifth and sixth versions, denoted as ISOS-SR-1 and ISOS-SR-2, improve the implementation of SOS-SR-1 and SOS-SR-2 by adding the newly proposed competition and amensalism interaction strategies. The performances of SOS-Canonical, SOS-Basic, SOS-SR-1, and SOS-SR-2 are evaluated on two sets of benchmark problems. First, the results of the four versions of SOS are compared, showing that the preferable results are obtained by SOS-SR-1 and SOS-SR-2. The performances of SOS-SR-1, SOS-SR-2, ISOS-SR-1, and ISOS-SR-2 are then compared, showing that ISOS-SR-1 and ISOS-SR-2 offer better performance. Next, the ISOS-SR-1 and ISOS-SR-2 results are compared to the best-known solutions. The results show that ISOS-SR-1 and ISOS-SR-2 produce good CVRP solutions within a reasonable computational time, indicating that each of them is a good alternative algorithm for solving the capacitated vehicle routing problem.

Journal ArticleDOI
TL;DR: In this article, a computationally economical algorithm for evolving unsupervised deep neural networks is proposed to efficiently learn meaningful representations, which is very suitable in the current Big Data era where sufficient labeled data for training is often expensive to acquire.
Abstract: Deep Learning (DL) aims at learning meaningful representations. A meaningful representation refers to one that gives rise to significant performance improvement of associated Machine Learning (ML) tasks by replacing the raw data as the input. However, optimal architecture design and model parameter estimation in DL algorithms are widely considered to be intractable. Evolutionary algorithms are much preferable for complex and non-convex problems due to their inherent characteristics of being gradient-free and insensitive to local optima. In this paper, we propose a computationally economical algorithm for evolving unsupervised deep neural networks to efficiently learn meaningful representations, which is very suitable in the current Big Data era where sufficient labeled data for training are often expensive to acquire. In the proposed algorithm, finding an appropriate architecture and the initialized parameter values for an ML task at hand is modeled by one computationally efficient gene encoding approach, which is employed to effectively model the task with a large number of parameters. In addition, a local search strategy is incorporated to facilitate the exploitation search for further improving the performance. Furthermore, a small proportion of labeled data is utilized during the evolutionary search to guarantee that the learnt representations are meaningful. The performance of the proposed algorithm has been thoroughly investigated over classification tasks. Specifically, a classification error rate of 1.15% on MNIST is reached by the proposed algorithm consistently, which is a very promising result against state-of-the-art unsupervised DL algorithms.

Journal ArticleDOI
01 Jul 2017
TL;DR: The proposed cooperation and profit allocation approaches provide an effective paradigm for logistics companies to share benefits, achieve win-win situations through horizontal cooperation, and improve their negotiating power in logistics network optimization.
Abstract: Highlights: A two-echelon logistics joint distribution network optimization model is developed. The model minimizes the total cost of the TELJDN. A hybrid algorithm combining ACO and GA operations is proposed. A cooperative mechanism strategy for sequential coalitions in the TELJDN is studied. An empirical study demonstrates the applicability of the proposed approach. A collaborative two-echelon logistics joint distribution network can be organized through a negotiation process via logistics service providers or participants existing in the logistics system, which can effectively reduce crisscross transportation and improve the efficiency of the urban freight transportation system. This study establishes a linear optimization model to minimize the total cost of the two-echelon logistics joint distribution network. An improved ant colony optimization algorithm integrated with a genetic algorithm is presented to serve customer clustering units and resolve the model formulation by assigning logistics facilities. A two-dimensional colony encoding method is adopted to generate the initial ant colonies. The improved ant colony optimization combines the merits of the ant colony optimization algorithm and the genetic algorithm with both global and local search capabilities. Finally, an improved Shapley value model based on cooperative game theory and a cooperative mechanism strategy are presented to obtain the optimal profit allocation scheme and the sequential coalitions, respectively, in the two-echelon logistics joint distribution network. An empirical study in Guiyang City, China, reveals that the improved ant colony optimization algorithm is superior to the other three methods in terms of total cost. The improved Shapley value model and a monotonic path selection strategy are applied to calculate the best sequential coalition selection strategy. The proposed cooperation and profit allocation approaches provide an effective paradigm for logistics companies to share benefits, achieve win-win situations through horizontal cooperation, and improve their negotiating power in logistics network optimization.
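
Profit allocation with the Shapley value averages each partner's marginal contribution over every order in which the coalition could form. Exact enumeration, as sketched below, is feasible for the handful of providers in a logistics alliance; the coalition savings numbers are purely illustrative, and the paper's improved Shapley model adds weighting not shown here.

```python
from itertools import permutations

def shapley_values(players, coalition_value):
    """Exact Shapley values by averaging marginal contributions over all
    join orders (fine for small coalitions of logistics providers)."""
    values = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        prev = coalition_value(coalition)
        for p in order:
            coalition.add(p)
            cur = coalition_value(coalition)
            values[p] += cur - prev
            prev = cur
    return {p: v / len(orders) for p, v in values.items()}

# Illustrative cost savings achieved by coalitions of three providers A, B, C.
savings = {frozenset(): 0, frozenset("A"): 10, frozenset("B"): 12, frozenset("C"): 6,
           frozenset("AB"): 30, frozenset("AC"): 20, frozenset("BC"): 22,
           frozenset("ABC"): 42}
print(shapley_values("ABC", lambda s: savings[frozenset(s)]))
```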

Journal ArticleDOI
TL;DR: A linear programming relaxation with constant integrality gap for capacitated facility location is presented and it is demonstrated that the fundamental theories of multi-commodity flows and matchings provide key insights that lead to the strong relaxation.
Abstract: Linear programming (LP) has played a key role in the study of algorithms for combinatorial optimization problems. In the field of approximation algorithms, this is well illustrated by the uncapacitated facility location problem. A variety of algorithmic methodologies, such as LP-rounding and the primal-dual method, have been applied to and evolved from algorithms for this problem. Unfortunately, this collection of powerful algorithmic techniques had not yet been applicable to the more general capacitated facility location problem. In fact, all of the known algorithms with good performance guarantees were based on a single technique, local search, and no LP relaxation was known to efficiently approximate the problem. In this paper, we present an LP relaxation with a constant integrality gap for the capacitated facility location problem. We demonstrate that the fundamental theories of multicommodity flows and matchings provide key insights that lead to the strong relaxation. Our algorithmic proof of integrality gap...