
Showing papers in "Journal of Heuristics in 2015"


Journal ArticleDOI
TL;DR: This paper proposes a simple but efficient heuristic that combines construction and improvement heuristic ideas to solve multi-level lot-sizing problems, and shows that it is very efficient and competitive, outperforming benchmark methods for most of the test problems.
Abstract: In this paper, we propose a simple but efficient heuristic that combines construction and improvement heuristic ideas to solve multi-level lot-sizing problems. A relax-and-fix heuristic is first used to build an initial solution, which is further improved by applying a fix-and-optimize heuristic. We also introduce a novel way to define the mixed-integer subproblems solved by both heuristics. The efficiency of the approach is evaluated by solving two different classes of multi-level lot-sizing problems: the multi-level capacitated lot-sizing problem with backlogging and the two-stage glass container production scheduling problem (TGCPSP). We present extensive computational results including four test sets of the Multi-item Lot-Sizing with Backlogging library, and real-world test problems defined for the TGCPSP, where we benchmark against state-of-the-art methods from the recent literature. The computational results show that our combined heuristic approach is very efficient and competitive, outperforming the benchmark methods for most of the test problems.
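The two-phase scheme the abstract describes can be sketched as follows. This is an illustrative skeleton, not the authors' implementation; `solve_mip` is a hypothetical hook standing in for a call to a MIP solver, and the window-based subproblem definition is simplified.

```python
# Sketch: relax-and-fix builds an initial solution window by window,
# then fix-and-optimize re-optimizes one window at a time.

def make_windows(n_periods, size):
    """Split periods 0..n_periods-1 into consecutive windows."""
    return [list(range(s, min(s + size, n_periods)))
            for s in range(0, n_periods, size)]

def relax_and_fix(n_periods, size, solve_mip):
    """Binary setup decisions are integer inside the active window,
    relaxed after it, and fixed once the window has been solved."""
    fixed = {}
    for window in make_windows(n_periods, size):
        # solve a subproblem where only `window` carries integrality
        sub = solve_mip(integer=window, fixed=dict(fixed))
        for t in window:
            fixed[t] = sub[t]          # freeze this window's decisions
    return fixed

def fix_and_optimize(solution, n_periods, size, solve_mip):
    """Improvement pass: free one window at a time, keep the rest fixed."""
    best = dict(solution)
    for window in make_windows(n_periods, size):
        frozen = {t: v for t, v in best.items() if t not in window}
        sub = solve_mip(integer=window, fixed=frozen)
        for t in window:
            best[t] = sub[t]
    return best
```

In practice `solve_mip` would build and solve the lot-sizing MIP with the given integrality and fixing pattern; here it is only a placeholder.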

67 citations


Journal ArticleDOI
TL;DR: An experimental assessment of the impact of infeasible solutions on heuristic searches, through various empirical studies on local improvement procedures, iterated local searches, and hybrid genetic algorithms for the VRP with time windows and other related variants with fleet mix, backhauls, and multiple periods.
Abstract: The contribution of infeasible solutions in heuristic searches for vehicle routing problems (VRP) is not a subject of consensus in the metaheuristics community. Infeasible solutions may allow transitioning between structurally different feasible solutions, thus enhancing the search, but they also lead to more complex move-evaluation procedures and wider search spaces. This paper introduces an experimental assessment of the impact of infeasible solutions on heuristic searches, through various empirical studies on local improvement procedures, iterated local searches, and hybrid genetic algorithms for the VRP with time windows and other related variants with fleet mix, backhauls, and multiple periods. Four relaxation schemes are considered, allowing penalized late arrivals to customers, early and late arrivals, returns in time, or a flexible travel time relaxation. For all considered problems and methods, our experiments demonstrate the significant positive impact of penalized infeasible solutions. Differences can also be observed between individual relaxation schemes. The "returns in time" and "flexible travel time" relaxations appear as the best options in terms of solution quality, CPU time, and scalability.
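As a concrete illustration of the "penalized late arrivals" relaxation mentioned above, a minimal route evaluation might charge lateness linearly. The penalty weight `alpha` and all data structures here are illustrative assumptions, not the paper's implementation:

```python
# Sketch of a penalized evaluation for one VRP route under a
# time-window relaxation that charges late arrivals linearly.

def route_cost(route, dist, service, windows, alpha=10.0):
    """Travel cost plus a linear penalty for each late arrival.
    `route` is a sequence of node ids starting/ending at depot 0."""
    cost, time, lateness = 0.0, 0.0, 0.0
    for a, b in zip(route, route[1:]):
        cost += dist[a][b]
        time += dist[a][b]                 # travel to the next node
        early, late = windows[b]
        time = max(time, early)            # wait if arriving too early
        lateness += max(0.0, time - late)  # amount of infeasibility
        time += service[b]
    return cost + alpha * lateness
```

A search operating on this relaxed evaluation can traverse time-window-infeasible routes while still being steered back toward feasibility by the penalty term.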

58 citations


Journal ArticleDOI
TL;DR: The inverted PBI scalarizing approach, an extension of the conventional PBI and WS, is proposed; it achieves higher search performance than other algorithms on many-objective problems and on problems where a widely spread Pareto front is difficult to approximate in the objective space.
Abstract: MOEA/D is one of the promising evolutionary approaches for solving multi- and many-objective optimization problems. MOEA/D decomposes a multi-objective optimization problem into a number of single-objective optimization problems. Each single-objective optimization problem is defined by a scalarizing function using a weight vector. In MOEA/D, there are several scalarizing approaches such as weighted Tchebycheff, reciprocal weighted Tchebycheff, weighted sum (WS) and penalty-based boundary intersection (PBI). Each scalarizing function has a characteristic effect on the search performance of MOEA/D and provides a scenario of multi-objective solution search. To improve the availability of the MOEA/D framework for solving various kinds of problems, it is important to provide a new scalarizing function with characteristics different from the conventional scalarizing functions. In particular, the conventional scalarizing approaches have difficulty approximating a widely spread Pareto front in some problems. To approximate the entire Pareto front by improving the spread of solutions in the objective space and to enhance the search performance of MOEA/D in multi- and many-objective optimization problems, in this work we propose the inverted PBI scalarizing approach, which is an extension of the conventional PBI and WS. We analyze differences between the inverted PBI and other scalarizing functions, and compare the search performances of NSGA-III and five MOEA/Ds using weighted Tchebycheff, reciprocal weighted Tchebycheff, WS, PBI and inverted PBI on many-objective knapsack problems and WFG4 problems with 2-8 objectives. As a result, we show that the inverted PBI based MOEA/D achieves higher search performance than the other algorithms on many-objective problems and on problems whose widely spread Pareto fronts are difficult to approximate.
We also show the robustness of the inverted PBI to Pareto front geometry using problems with four representative Pareto front shapes: concave, linear, convex and discontinuous.
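A hedged sketch of the two scalarizing functions compared above. Conventional PBI measures distances from the ideal point, while the proposed inverted PBI works from the nadir point and subtracts the perpendicular term, which the authors report spreads solutions wider; the exact normalizations and default theta values here are illustrative assumptions.

```python
import math

def _proj(v, w):
    """Length of v's projection onto direction w, plus the
    perpendicular distance from v to that projection."""
    norm_w = math.sqrt(sum(x * x for x in w))
    d1 = abs(sum(a * b for a, b in zip(v, w))) / norm_w
    perp = [a - d1 * b / norm_w for a, b in zip(v, w)]
    d2 = math.sqrt(sum(x * x for x in perp))
    return d1, d2

def pbi(f, z_ideal, weight, theta=5.0):
    """Penalty-based boundary intersection: minimize d1 + theta * d2."""
    v = [fi - zi for fi, zi in zip(f, z_ideal)]
    d1, d2 = _proj(v, weight)
    return d1 + theta * d2

def ipbi(f, z_nadir, weight, theta=0.1):
    """Inverted PBI: maximize the distance from the nadir along the
    weight direction, minus a penalty on the perpendicular deviation."""
    v = [zi - fi for zi, fi in zip(z_nadir, f)]
    d1, d2 = _proj(v, weight)
    return d1 - theta * d2
```

Note the opposite search directions: PBI values are minimized toward the ideal point, IPBI values are maximized away from the nadir point.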

44 citations


Journal ArticleDOI
Yangyang Li1, Yang Wang1, Jing Chen1, Licheng Jiao1, Ronghua Shang1 
TL;DR: An improved multi-objective quantum-behaved particle swarm optimization (IMOQPSO) based on spectral-clustering is proposed to detect the overlapping community structure in complex networks.
Abstract: Community detection has been one of the most important problems in the field of complex networks in recent years. The majority of existing algorithms only find disjoint communities; however, communities often overlap to some extent in many real-world networks. In this paper, an improved multi-objective quantum-behaved particle swarm optimization (IMOQPSO) based on spectral clustering is proposed to detect the overlapping community structure in complex networks. Firstly, the line graph of the graph modeling the network is formed, and a spectral method is employed to extract the spectral information of the line graph. Secondly, IMOQPSO is employed to solve the multi-objective optimization problem so as to resolve the separated community structure in the line graph, which corresponds to the overlapping community structure in the graph representing the network. Finally, a fine-tuning strategy is adopted to improve the accuracy of community detection. Experiments on both synthetic and real-world networks demonstrate that our method achieves cover results that fit the real community structure better.

41 citations


Journal ArticleDOI
Wayne Pullan1
TL;DR: Extensive computational experiments, using a range of sparse real-world graphs, and a comparison with previous exact results demonstrate the effectiveness of the proposed algorithms.
Abstract: Given a graph, the critical node detection problem can be broadly defined as identifying the minimum subset of nodes such that, if these nodes were removed, some metric of graph connectivity is minimised. In this paper, two variants of the critical node detection problem are addressed. Firstly, the basic critical node detection problem where, given the maximum number of nodes that can be removed, the objective is to minimise the total number of connected nodes in the graph. Secondly, the cardinality constrained critical node detection problem where, given the maximum allowed connected graph component size, the objective is to minimise the number of nodes required to be removed to achieve this. Extensive computational experiments, using a range of sparse real-world graphs, and a comparison with previous exact results demonstrate the effectiveness of the proposed algorithms.

38 citations


Journal ArticleDOI
TL;DR: A multi-objective evolutionary algorithm (MOEA), coined Memetic NSGA-II, has been designed to solve the mixed capacitated general routing problem; experiments show the synergistic effects of using DBLSP, CMP and the X-set together while finding the set of potentially Pareto optimal solutions.
Abstract: The mixed capacitated general routing problem (MCGRP) is concerned with the determination of optimal vehicle routes to service a set of customers located at nodes and along edges or arcs of a mixed weighted graph representing a complete transportation network. Although the MCGRP generalizes many other routing problems and yields better models for several practical problems such as newspaper delivery and urban waste collection, it is still an under-investigated problem. Furthermore, most studies have focused on the optimization of just one objective, namely cost minimization. Keeping in mind present-day industrial requirements, the MCGRP is addressed in this paper to concurrently optimize two crucial objectives: minimization of routing cost and of route imbalance. To solve this bi-objective form of the MCGRP, a multi-objective evolutionary algorithm (MOEA), coined Memetic NSGA-II, has been designed. It is a hybrid of the non-dominated sorting genetic algorithm-II (NSGA-II), a dominance-based local search procedure (DBLSP), and a clone management principle (CMP). The DBLSP and CMP have been incorporated into the framework of NSGA-II to strengthen its capability to converge at or near the true Pareto front and to boost diversity among the trade-off solutions, respectively. In addition, the algorithm contains a set of three well-known crossover operators (X-set) that are employed to explore different parts of the search space. The algorithm was tested on a standard benchmark of twenty-three MCGRP instances of varying complexity. The computational experiments verify the effectiveness of Memetic NSGA-II and also show the synergistic effects of using DBLSP, CMP and the X-set together while finding the set of potentially Pareto optimal solutions.
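A dominance-based local search of the kind described above rests on the standard Pareto dominance test. A minimal version for minimized objectives (such as routing cost and route imbalance), generic over the number of objectives:

```python
def dominates(a, b):
    """True iff objective vector `a` Pareto-dominates `b` under
    minimization: no worse in every objective and strictly better
    in at least one."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))
```

A neighbor is then accepted only if it is not dominated by the current solution (or, in stricter variants, only if it dominates it).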

34 citations


Journal ArticleDOI
TL;DR: Two heuristic strategies are proposed, what-if analysis and robustness checking, which guide designers towards optimal WSN deployment solutions in terms of connection and data-delivery resiliency, exploiting a formal approach based on the event calculus language.
Abstract: Wireless sensor networks (WSNs) are increasingly being adopted in critical applications. In these networks undesired events may undermine the reliability level; thus their effects need to be properly assessed from the early stages of the development process onwards to minimize the chances of unexpected problems during use. In this paper we propose two heuristic strategies: what-if analysis and robustness checking. They guide designers towards optimal WSN deployment solutions, in terms of connection and data-delivery resiliency, exploiting a formal approach based on the event calculus language. The heuristics are backed up by a support tool aimed at simplifying their adoption by system designers. The tool allows the target WSN to be specified in a user-friendly way and carries out the two heuristic strategies by means of automatically generated event calculus specifications. The WSN reliability is assessed by computing a set of specific metrics. The effectiveness of the strategies is shown in the context of three case studies.

27 citations


Journal ArticleDOI
TL;DR: Greedy-MSRP and GRASP-MSRP deploy sinks and relays to minimise the deployment cost and to guarantee that all sensor nodes in the network are double-covered and noncritical.
Abstract: Wireless sensor networks are subject to failures. Deployment planning should ensure that when a data sink or sensor node fails, the remaining network can still be connected, and so may require placing multiple sinks and relay nodes in addition to sensor nodes. For network performance requirements, there may also be path-length constraints for each sensor node. We propose four algorithms: Greedy-MSP and GRASP-MSP to solve the problem of multiple sink placement, and Greedy-MSRP and GRASP-MSRP for the problem of multiple sink and relay placement. Greedy-MSP and GRASP-MSP minimise the deployment cost, while ensuring that each sensor node in the network is double-covered, i.e. it has two length-constrained paths to two sinks. Greedy-MSRP and GRASP-MSRP deploy sinks and relays to minimise the deployment cost and to guarantee that all sensor nodes in the network are double-covered and noncritical. A sensor node is noncritical if, upon its removal, all remaining sensor nodes still have length-constrained paths to sinks. We evaluate the algorithms empirically and show that they outperform closely related algorithms from the literature, achieving the lowest total deployment cost.
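The double-coverage condition defined above can be checked with plain breadth-first search when path length is counted in hops; this is an illustrative test such algorithms must maintain, not the paper's code, and all names are assumptions.

```python
from collections import deque

def hops_from(graph, source):
    """Hop distance from `source` to every reachable node (BFS)."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def double_covered(graph, sensors, sinks, max_len):
    """True iff every sensor reaches at least two sinks within
    max_len hops (the length-constrained double-coverage test)."""
    dist = {s: hops_from(graph, s) for s in sinks}
    return all(
        sum(1 for s in sinks
            if dist[s].get(node, max_len + 1) <= max_len) >= 2
        for node in sensors
    )
```

Running one BFS per sink keeps the check at O(sinks * edges), which is cheap enough to call inside a greedy or GRASP placement loop.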

26 citations


Journal ArticleDOI
TL;DR: A tabu search algorithm is presented for a new problem class called cohesive clustering, which arises in a variety of business applications and introduces an objective function to produce clusters as "pure" as possible, i.e. to maximize the similarity of the elements in each given cluster.
Abstract: Clustering problems can be found in a wide range of applications including data mining/analytics, logistics, healthcare, biotechnology, economic analysis and many other areas. Solving a clustering problem from the real world often poses significant challenges in spite of the fact that extensive research has been devoted to this topic. In this paper we present a tabu search algorithm for a new problem class called cohesive clustering which arises in a variety of business applications. The class introduces an objective function to produce clusters as "pure" as possible, i.e. to maximize the similarity of the elements in each given cluster. Tabu search intensification and diversification strategies are employed in order to produce enhanced outcomes. The computational results demonstrate the effectiveness of the proposed algorithm.
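For readers unfamiliar with the method the paper builds on, a bare-bones tabu search loop looks like the following; the move representation, tenure and neighborhood are illustrative assumptions, not the paper's algorithm.

```python
def tabu_search(start, neighbors, cost, tenure=5, iters=100):
    """Generic tabu search: forbid recently applied moves for
    `tenure` iterations; accept the best non-tabu neighbor even if
    it worsens the objective; aspiration overrides the tabu status
    when a move improves on the best solution found so far.
    `neighbors(sol)` yields (move_id, candidate) pairs."""
    current, best = start, start
    tabu = {}                                  # move_id -> expiry iteration
    for it in range(iters):
        candidates = [(m, s) for m, s in neighbors(current)
                      if tabu.get(m, -1) < it or cost(s) < cost(best)]
        if not candidates:
            break
        move, current = min(candidates, key=lambda ms: cost(ms[1]))
        tabu[move] = it + tenure               # forbid this move for a while
        if cost(current) < cost(best):
            best = current
    return best
```

Intensification and diversification strategies, as used in the paper, would sit on top of this loop (e.g. restarting from elite solutions or penalizing frequently used moves).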

24 citations


Journal ArticleDOI
TL;DR: Simulation and experiment results demonstrate that applying the SSW DV-hop algorithm in O-WSNs could significantly improve the localization accuracy.
Abstract: Automatic localization is one of the major issues in Wireless Sensor Networks (WSN). The DV-hop algorithm is a well-known localization algorithm in WSN but with limited localization accuracy. In this paper, an improved DV-hop localization algorithm in hybrid optical wireless sensor networks is proposed based on the optimization of the parameters in WSN. Various factors that affect the localization accuracy of the DV-hop algorithm in WSN are investigated, including the communication radius of the node, the number of beacon nodes and the number of total nodes. As the DV-hop algorithm is applied to hybrid optical and wireless sensor networks (O-WSNs) with rectangular topology, different parameters have to be optimized accordingly. Simulation results show that the square topology outperforms the rectangle topology by more than 45 % under the same network parameters using the improved DV-hop algorithm. Therefore, another improved DV-hop algorithm, called Sub-Square Weighted DV-hop (SSW DV-hop), is proposed for the rectangle topology. Both simulation and experiment results demonstrate that applying the SSW DV-hop algorithm in O-WSNs could significantly improve the localization accuracy.
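The textbook DV-hop mechanism that the paper improves can be summarized in two steps: beacons flood hop counts through the network, then each beacon converts known beacon-to-beacon distances into an average per-hop distance that unknown nodes use to estimate ranges. A simplified 2-D sketch (data layout and names are assumptions):

```python
import math

def avg_hop_distance(beacons, hops):
    """Per-hop distance for each beacon: straight-line distances to
    the other beacons divided by the corresponding hop counts.
    `beacons` maps id -> (x, y); `hops[i][j]` is the hop count."""
    out = {}
    for i, (xi, yi) in beacons.items():
        num = sum(math.hypot(xi - xj, yi - yj)
                  for j, (xj, yj) in beacons.items() if j != i)
        den = sum(hops[i][j] for j in beacons if j != i)
        out[i] = num / den
    return out

def estimated_distances(beacons, hops, node_hops):
    """Estimated distance from an unknown node to each beacon:
    the beacon's per-hop distance times the node's hop count."""
    hop_size = avg_hop_distance(beacons, hops)
    return {i: hop_size[i] * node_hops[i] for i in beacons}
```

The final position estimate (omitted here) is obtained from these distances by trilateration or least squares; the SSW DV-hop variant additionally weights the per-hop distances by sub-square, per the abstract.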

24 citations


Journal ArticleDOI
TL;DR: A multi-objective optimization algorithm is developed to cater for the characteristics of WSNs, and the resulting topologies are more comprehensively optimized than those of other state-of-the-art algorithms.
Abstract: Wireless sensor networks (WSN) have shown their potential in various applications, which bring many benefits to users from different working areas. However, due to the diversity of the deployed environments and resource constraints, it is difficult to predict the performance of a topology. Besides connectivity, the coverage, cost, network longevity and service quality should all be considered during the planning procedure. Therefore, efficiently planning a reliable WSN is a challenging task, which requires designers to cope with comprehensive and interdisciplinary knowledge. A WSN planning method is proposed in this work to tackle the above-mentioned challenges and efficiently deploy reliable WSNs. First of all, the above-mentioned metrics are modeled more comprehensively and practically than in other works. In particular, a 3D ray-tracing method is used to model the radio link and sensing signal, which are sensitive to obstruction by obstacles; network routing is constructed using the AODV protocol; and the network longevity, packet delay and packet drop rate are obtained by simulating practical events in the WSNet simulator, which, to the best of our knowledge, is the first time that a network simulator has been involved in a planning algorithm. Moreover, a multi-objective optimization algorithm is developed to cater for the characteristics of WSNs. Network size is changeable during evolution, while crossovers and mutations are limited by certain constraints to eliminate invalid modifications and improve computational efficiency. The capability of providing multiple optimized solutions simultaneously allows users to make their own decisions, and the results are more comprehensively optimized than those of other state-of-the-art algorithms.
Practical WSN deployments are also realized for both indoor and outdoor environments, and the measurements coincide well with the generated optimized topologies, proving the efficiency and reliability of the proposed algorithm.

Journal ArticleDOI
TL;DR: A heuristic approach is employed based on a genetic algorithm with tabu search as the local improvement procedure and a complete solution archive; the archive is used to store and convert already visited solutions in order to avoid costly unnecessary re-evaluations.
Abstract: In this article we propose a hybrid genetic algorithm for the discrete (r|p)-centroid problem. We consider the competitive facility location problem where two non-cooperating companies enter a market sequentially and compete for market share. The first decision maker, called the leader, wants to maximize his market share knowing that a follower will enter the same market. Thus, for evaluating a leader's candidate solution, a corresponding follower's subproblem needs to be solved, and the overall problem therefore is a bi-level optimization problem. This problem is Σ₂ᵖ-hard, i.e., harder than any problem in NP (if P ≠ NP). A heuristic approach is employed which is based on a genetic algorithm with tabu search as the local improvement procedure and a complete solution archive. The archive is used to store and convert already visited solutions in order to avoid costly unnecessary re-evaluations. Different solution evaluation methods are combined into an effective multi-level evaluation scheme. The algorithm is tested on well-known benchmark sets of both Euclidean and non-Euclidean instances as well as on larger newly created instances. Especially on the Euclidean instances, our algorithm is able to outperform previous state-of-the-art heuristic approaches in solution quality and running time in most cases.

Journal ArticleDOI
TL;DR: It is shown that supervised machine learning techniques can be used to construct a passive autofocus heuristic for these problems that outperforms an existing hand-crafted heuristic and other baseline methods.
Abstract: Digital cameras are equipped with passive autofocus mechanisms where a lens is focused using only the camera's optical system and an algorithm for controlling the lens. The speed and accuracy of the autofocus algorithm are crucial to user satisfaction. In this paper, we address the problems of identifying the global optimum and significant local optima (or peaks) when focusing an image. We show that supervised machine learning techniques can be used to construct a passive autofocus heuristic for these problems that outperforms an existing hand-crafted heuristic and other baseline methods. In our approach, training and test data were produced using an offline simulation on a suite of 25 benchmarks and correctly labeled in a semi-automated manner. A decision tree learning algorithm was then used to induce an autofocus heuristic from the data. The automatically constructed machine-learning-based (ml-based) heuristic was compared against a previously proposed hand-crafted heuristic for autofocusing and other baseline methods. In our experiments, the ml-based heuristic was faster, reducing the number of iterations needed to focus by 37.9 % on average in common photography settings and by 22.9 % on average in a more difficult focus-stacking setting, while maintaining accuracy.

Journal ArticleDOI
TL;DR: This paper proposes a general approach that allows for aggregating preferences when the expressed CP-nets are not required to be acyclic, and presents results of experiments that demonstrate the efficiency and scalability of this approach.
Abstract: We develop a framework for preference aggregation in multi-attribute, multi-valued domains, where agents' preferences are represented by Conditional Preference Networks (CP-nets). Most existing work either does not consider computational requirements, or depends on the strong assumption that the agents can express their preferences by acyclic CP-nets that are compatible with a common order on the variables. In this paper, we focus on majoritarian aggregation of CP-nets. We propose a general approach that allows for aggregating preferences when the expressed CP-nets are not required to be acyclic. Moreover, there is no requirement for any common structure among the agents' CP-nets. The proposed approach computes a set of locally winning alternatives through the reduction to a constraint satisfaction problem. We present results of experiments that demonstrate the efficiency and scalability of our approach. Through comprehensive experiments we also investigate the distributions of the numbers of locally winning alternatives with different CP-net structures, with varying domain sizes and varying numbers of variables and agents.

Journal ArticleDOI
TL;DR: This paper introduces a pre-root primal heuristic, named Shift-and-Propagate, that applies domain propagation techniques to quickly drive a variable assignment towards feasibility and is a powerful supplement to existing rounding and propagation heuristics.
Abstract: In recent years, there has been a growing interest in the design of general purpose primal heuristics for use inside complete mixed integer programming solvers. Many of these heuristics rely on an optimal LP solution, which may take a significant amount of time to find. In this paper, we address this issue by introducing a pre-root primal heuristic that does not require a previously found LP solution. This heuristic, named Shift-and-Propagate, applies domain propagation techniques to quickly drive a variable assignment towards feasibility. Computational experiments indicate that this heuristic is a powerful supplement to existing rounding and propagation heuristics.
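To make the shift-then-propagate idea concrete: fix one variable at a time, then use simple activity-based domain propagation to shrink the remaining domains. This is a conceptual sketch only, not the actual Shift-and-Propagate implementation; it assumes "less-or-equal" constraints with nonnegative coefficients and always shifts to the lower bound, both simplifying assumptions.

```python
def propagate(domains, constraints):
    """Tighten upper bounds using the minimum activity of each
    constraint sum(c_v * x_v) <= rhs (coefficients nonnegative)."""
    changed = True
    while changed:
        changed = False
        for coeffs, rhs in constraints:
            min_act = sum(c * domains[v][0] for v, c in coeffs.items())
            for v, c in coeffs.items():
                slack = rhs - (min_act - c * domains[v][0])
                new_hi = slack // c
                if new_hi < domains[v][1]:
                    domains[v] = (domains[v][0], new_hi)
                    changed = True
    return domains

def shift_and_propagate(domains, constraints, order):
    """Fix variables one by one, propagating after each fix."""
    for v in order:
        lo, hi = domains[v]
        if lo > hi:
            return None                     # propagation proved infeasibility
        domains[v] = (lo, lo)               # "shift": fix at a bound
        domains = propagate(domains, constraints)
    return {v: d[0] for v, d in domains.items()}
```

The real heuristic chooses the shift value and variable order from the propagation information itself; the point of the sketch is only the alternation of fixing and propagation.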

Journal ArticleDOI
TL;DR: A new variable selection heuristic, called CCTriplex, is proposed for Max-SAT local search algorithms, which works particularly well for weighted Max-2-SAT instances.
Abstract: Stochastic local search (SLS) is an appealing method for solving the maximum satisfiability (Max-SAT) problem. This paper proposes a new variable selection heuristic for Max-SAT local search algorithms, which works particularly well for weighted Max-2-SAT instances. Evolving from the recent configuration checking strategy, this new heuristic works in three levels and is called CCTriplex. According to the CCTriplex heuristic, a variable that is both decreasing and configuration changed has higher priority to be flipped than a decreasing variable, which in turn has higher priority than a configuration changed variable. The CCTriplex heuristic is used to develop a new SLS algorithm for weighted Max-2-SAT called CCMaxSAT. We evaluate CCMaxSAT on random benchmarks with different densities and the hand-crafted Frb benchmark, as well as on weighted Max-2-SAT instances encoded from MaxCut, MaxClique and sports scheduling problems. Compared with ITS, the state-of-the-art SLS solver for weighted Max-2-SAT, ubcsat-IRoTS, the best SLS solver in Max-SAT Evaluation 2012, and the well-known complete solver wMaxSATz, our algorithm CCMaxSAT shows rather good performance on all the benchmarks.
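The three-level priority described in the abstract can be captured in a few lines; the score and configuration bookkeeping of a real Max-SAT solver is abstracted away here, and the tie-breaking rule is an assumption.

```python
def pick_variable(candidates):
    """CCTriplex-style selection. `candidates` is a list of tuples
    (var, decreasing, conf_changed, score). Level 0: decreasing and
    configuration-changed; level 1: decreasing; level 2:
    configuration-changed; ties broken by higher score."""
    def level(decreasing, changed):
        if decreasing and changed:
            return 0
        if decreasing:
            return 1
        if changed:
            return 2
        return 3
    return min(candidates,
               key=lambda c: (level(c[1], c[2]), -c[3]))[0]
```

In a full solver, "decreasing" means flipping the variable reduces the total weight of unsatisfied clauses, and "configuration changed" means a neighbor has flipped since the variable's last flip.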

Journal ArticleDOI
TL;DR: An algorithm which makes use of a mathematical programming solver in order to find near-optimal solutions to the combinatorial optimization problem from the family of minimum weight rooted arborescence problems, both in acyclic directed graphs and in directed graphs possibly containing directed circuits.
Abstract: The combinatorial optimization problem tackled in this work is from the family of minimum weight rooted arborescence problems. The problem is NP-hard and has applications, for example, in computer vision and in multistage production planning. We describe an algorithm which makes use of a mathematical programming solver in order to find near-optimal solutions to the problem, both in acyclic directed graphs and in directed graphs possibly containing directed circuits. It is shown that the proposed technique compares favorably to competing approaches published in the related literature. Moreover, the experimental evaluation demonstrates that, although mathematical programming solvers are very powerful for this problem, with growing graph size and density they become impractical due to excessive memory requirements.

Journal ArticleDOI
TL;DR: This paper considers the problem of efficient data gathering in sensor networks for arbitrary sensor node deployments, and shows that in many cases the output-sensitive approximation solution performs better than the currently known best results for sensor networks.
Abstract: In this paper we consider the problem of efficient data gathering in sensor networks for arbitrary sensor node deployments. The efficiency of the solution is measured by a number of criteria: total energy consumption, total transport capacity, latency and quality of the transmissions. We present a number of different constructions with various tradeoffs between the aforementioned parameters. We provide theoretical performance analysis for our approaches, present their distributed implementation and discuss the different aspects of using each. We show that in many cases our output-sensitive approximation solution performs better than the currently known best results for sensor networks. We also consider our problem in a mobile sensor node environment, where the sensors have no information about each other. The only information a single sensor holds is its current location and future mobility plan. Our simulation results validate the theoretical findings.

Journal ArticleDOI
TL;DR: This paper presents a non-hierarchical evolutionary multi-objective tree learner (NHEMOtree) based on genetic programming using a binary decision tree representation to handle multi-objective optimization problems with equitable optimization criteria, and introduces a novel crossover operator based on a multi-objective variable importance measure.
Abstract: Selecting reliable predictors has always been crucial in classification. Decision trees in particular are very popular for solving supervised variable selection and classification problems. When variable selection has to be performed with regard to acquisition costs, which have to be paid whenever the respective variable is extracted for a new observation, the problem of balancing the predictive power of the model against its costs describes a multi-objective optimization problem which can be solved with meta-heuristics such as evolutionary multi-objective algorithms. In this paper, we present a non-hierarchical evolutionary multi-objective tree learner (NHEMOtree) based on genetic programming using a binary decision tree representation to handle multi-objective optimization problems with equitable optimization criteria. This tree learner is applied to a multi-objective classification problem from medicine as well as to simulated data to evaluate its performance relative to two wrapper approaches based on either NSGA-II or SMS-EMOA with bitstring representation and CART as the enclosed classification algorithm. Moreover, a novel crossover operator based on a multi-objective variable importance measure is introduced, which further improves NHEMOtree.

Journal ArticleDOI
TL;DR: A new perturbation operator is proposed for the TSPPDL that achieves better results on average than the existing best approach, and the resultant VNS that employs the best perturbation operator outperforms the best existing TSPPDF approach on benchmark test data.
Abstract: This paper investigates perturbation operators for variable neighborhood search (VNS) approaches for two related problems, namely the pickup and delivery traveling salesman problem with LIFO loading (TSPPDL) and FIFO loading (TSPPDF). Our study is motivated by the fact that previously published results on VNS approaches on the TSPPDL suggest that the perturbation operation has the most significant effect on solution quality. We propose a new perturbation operator for the TSPPDL that achieves better results on average than the existing best approach. We also devise new perturbation operators for the TSPPDF that combine request removal and request insertion operations, and investigate which combination of request removal and request insertion operations produces the best results. Our resultant VNS that employs our best perturbation operator outperforms the best existing TSPPDF approach on benchmark test data.
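The variable neighborhood search framework in which these perturbation operators sit can be sketched generically; the `perturb` and `local_search` hooks stand in for the operators the paper studies, and the acceptance rule and parameter defaults are illustrative assumptions.

```python
import random

def vns(initial, cost, perturb, local_search, k_max=3, iters=100, seed=0):
    """Generic VNS skeleton: shake with increasing strength k,
    improve locally, accept on improvement and reset k."""
    rng = random.Random(seed)
    best = local_search(initial)
    for _ in range(iters):
        k = 1
        while k <= k_max:
            candidate = local_search(perturb(best, k, rng))
            if cost(candidate) < cost(best):
                best, k = candidate, 1      # restart at the weakest shake
            else:
                k += 1                      # escalate perturbation strength
    return best
```

In the paper's setting, `perturb` would be a request removal/insertion move respecting the LIFO or FIFO loading constraint, and `local_search` a constrained tour improvement procedure.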

Journal ArticleDOI
TL;DR: A multiobjective evolutionary algorithm, hybridized with a local search heuristic specially designed for the problem, is able to obtain very promising DNA sequences suitable for computation which outperform in reliability the results generated with existing sequence design techniques published in the literature.
Abstract: DNA-based technologies, such as nanotechnology, DNA sequencing or DNA computing, have grown significantly in recent years. The inherent properties of DNA molecules (storage density, parallelism possibilities, energy efficiency, etc.) make these particles unique computational elements. However, DNA libraries used for computation have to fulfill strict biochemical properties to avoid undesirable reactions, because those reactions usually lead to incorrect calculations. DNA sequence design is an NP-hard problem which involves several heterogeneous and conflicting biochemical design criteria, which may cause difficulties for traditional optimization methods. In this paper, we propose a multiobjective evolutionary algorithm hybridized with a local search heuristic specially designed for the problem. The proposal is a multiobjective variant of the teaching-learning-based optimization algorithm. To assess the performance of our proposal, we have made comparisons with the well-known fast non-dominated sorting genetic algorithm (NSGA-II) and the strength Pareto evolutionary algorithm 2 (SPEA2), as well as other approaches published in the literature. After performing diverse comparisons, we can conclude that our hybrid approach is able to obtain very promising DNA sequences suitable for computation, which outperform in reliability the results generated with other existing sequence design techniques published in the literature.

Journal ArticleDOI
TL;DR: This work presents a hybrid evolutionary algorithm for graph b-coloring that has been tested on instances of regular graphs whose b-chromatic number is theoretically known in advance, as well as by comparison with a brute-force algorithm on regular graphs with up to 12 vertices.
Abstract: The b-chromatic number of a graph G is the largest integer φ(G) for which there exists a proper φ(G)-coloring with the additional property that each color class contains a vertex adjacent to at least one vertex in every other color class. In contrast to the many theoretical results obtained over the last decade and a half, no computational experiments on φ(G) have been reported in the literature. This work presents a hybrid evolutionary algorithm for graph b-coloring. Its performance has been tested on instances of regular graphs whose b-chromatic number is theoretically known in advance, as well as by comparison with a brute-force algorithm on regular graphs with up to 12 vertices. In addition, the algorithm has been tested on larger graphs taken from a DIMACS challenge benchmark that have also proved challenging for algorithms searching for the classical chromatic number χ(G).
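The b-coloring property described above is easy to verify for a candidate coloring. A minimal checker, as a sketch under our own encoding (graph as an adjacency dict, coloring as a vertex-to-color dict; names illustrative):

```python
def is_b_coloring(adj, color):
    """Check that `color` is a proper coloring of the graph `adj`
    (vertex -> set of neighbours) in which every color class contains
    a b-vertex: one adjacent to at least one vertex of every other class."""
    # Proper coloring: no edge joins two vertices of the same color.
    if any(color[u] == color[v] for u in adj for v in adj[u]):
        return False
    classes = set(color.values())
    for c in classes:
        others = classes - {c}
        # Look for a b-vertex of class c: its neighbours' colors
        # must cover every other class.
        if not any(color[u] == c and others <= {color[v] for v in adj[u]}
                   for u in color):
            return False
    return True
```

The b-chromatic number φ(G) is then the largest k for which some proper k-coloring passes this check.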

Journal ArticleDOI
TL;DR: This paper proposes a general framework for integration, which is viewed as a problem in itself with well-defined objectives and constraints and four different mechanisms are proposed, based on well-known concepts from the literature such as constraining or giving incentives to particular characteristics of partial solutions.
Abstract: Problem decomposition requires the ability to recombine partial solutions. This recombination task, which we call integration, is a fundamental feature of many methods, both those based on mathematical formulations such as Dantzig–Wolfe or Benders and those based on heuristics. Integration may be implicit in mathematical decompositions, but in heuristics this critical task is usually managed by ad-hoc operators, e.g., operators that combine decisions and heuristic adjustments to manage incompatibilities. In this paper, we propose a general framework for integration, which is viewed as a problem in itself with well-defined objectives and constraints. Four different mechanisms are proposed, based on well-known concepts from the literature such as constraining or giving incentives to particular characteristics of partial solutions. We perform computational experiments on the multi-depot periodic vehicle routing problem to compare the various integration approaches. The strategy that places incentives on selected solution characteristics rather than imposing constraints seems to yield the best results in the context of a cooperative search for this problem.

Journal ArticleDOI
TL;DR: A heuristic for scheduling so-called ‘modular’ projects that can be used even for large instances, or when instances are particularly difficult because of their characteristics, such as a low network density.
Abstract: This article describes a heuristic for scheduling so-called 'modular' projects. Exact solutions to this NP-hard problem can be obtained with existing branch-and-bound and dynamic-programming algorithms, but only for small to medium-size instances. The proposed heuristic, by contrast, can be used even for large instances, or when instances are particularly difficult because of their characteristics, such as a low network density. The proposed heuristic draws from existing results in the literature on sequential testing, which will be briefly reviewed. The performance of the heuristic is assessed over a dataset of 360 instances.

Journal ArticleDOI
TL;DR: This paper addresses the local job scheduling problem with processing time constraints in a computational grid, formulating it as a rectangular packing problem and developing several heuristic approaches for its solution.
Abstract: In this paper, we address the local job scheduling problem with processing time constraints (i.e., deadlines, earliest start times, reservations) in a computational grid. The problem under investigation is formulated as a rectangular packing problem, and several heuristic approaches are developed for its solution. The performance of the proposed algorithms is evaluated under different scenarios. Extensive experimental tests demonstrate that the proposed solution strategies outperform state-of-the-art methods.

Journal ArticleDOI
TL;DR: It is empirically demonstrated that VarClust is at least as accurate as, and requires less node-to-node communication than, a state-of-the-art aggregation approach, affinity propagation, for both the clustering and aggregation phases of inference.
Abstract: We describe VarClust, a gossip-based decentralized clustering algorithm designed to support multi-mean decentralized aggregation in energy-constrained wireless sensor networks. We empirically demonstrate that VarClust is at least as accurate as, and requires less node-to-node communication (and hence consumes less energy) than, a state-of-the-art aggregation approach, affinity propagation. This superiority holds for both the clustering and aggregation phases of inference, and is demonstrated over a range of noise levels and for a range of random and small-world graph topologies.

Journal ArticleDOI
TL;DR: The user interface, the interaction mechanisms and the heuristic developed to support the cooperation between the computer and the user are presented and comparison shows advantages for using the proposed interactive approach over a pure manual or pure automated approach.
Abstract: This paper presents an interactive planning system for forest road location. This decision support system is based on an interactive heuristic approach referred to as interactive large neighborhood search, within which the user contributes in a cooperative manner to the optimization process. The objective of this cooperative optimization process is to exploit the problem-domain expertise of the user in order to, on the one hand, guide the search for a solution towards intuitively interesting parts of the solution space, and, on the other hand, generate more practical solutions that integrate aspects of the decision problem that are not captured by the heuristic objective function. This paper more specifically presents the user interface, the interaction mechanisms and the heuristic developed to support the cooperation between the computer and the user. We also present experimental results based on real problem instances, with an expert user. A comparison shows advantages for using the proposed interactive approach over a pure manual or pure automated approach.

Journal ArticleDOI
TL;DR: A novel tree-based Iterated Local Search method that takes advantage of the tractability of tree-structures embedded within MRFs to derive strong local search in an ILS framework and is competitive against state-of-the-art rivals with significant computational gain.
Abstract: The maximum a posteriori assignment for general-structure Markov random fields (MRFs) is computationally intractable. In this paper, we exploit tree-based methods to efficiently address this problem. Our novel method, named Tree-based Iterated Local Search (T-ILS), takes advantage of the tractability of tree structures embedded within MRFs to derive a strong local search within an ILS framework. The method efficiently explores exponentially large neighborhoods using limited memory and without any requirement on the cost functions. We evaluate T-ILS on a simulated Ising model and two real-world vision problems: stereo matching and image denoising. Experimental results demonstrate that our method is competitive with state-of-the-art rivals at a significant computational gain.
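The ILS framework the abstract builds on alternates local search and perturbation, keeping the better of the incumbent and the re-optimized candidate. A generic, problem-agnostic sketch (all names ours; the paper's tree-based local search is not shown):

```python
import random

def iterated_local_search(init, local_search, perturb, cost, iters=100, seed=0):
    """Generic ILS skeleton: descend to a local optimum, perturb it,
    re-optimize, and accept the candidate only if it improves the best cost."""
    rng = random.Random(seed)
    best = local_search(init)
    for _ in range(iters):
        candidate = local_search(perturb(best, rng))
        if cost(candidate) < cost(best):
            best = candidate
    return best
```

In T-ILS, `local_search` would correspond to exact inference over a tractable tree substructure of the MRF, which is what allows exponentially large neighborhoods to be explored efficiently.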

Journal ArticleDOI
TL;DR: This work proposes a multiple neighborhood search hybridized with a Tabu Search and enhanced by complex ejection chains that outperforms all previously developed methods devised for the dynamic memory allocation problem.
Abstract: Memory allocation has a significant impact on power consumption in embedded systems. We address the dynamic memory allocation problem, in which memory requirements may change at each time interval. This problem has previously been addressed using integer linear programming and iterative approaches that build a solution interval by interval, taking into account the requirements of partial time intervals. A GRASP that builds a solution for all time intervals has been proposed as a global approach. Due to the complexity of this problem, the solution quality of the GRASP decreases for larger instances. To overcome this drawback, we propose a multiple neighborhood search hybridized with tabu search and enhanced by complex ejection chains. The proposed approach outperforms all previously developed methods devised for the dynamic memory allocation problem.
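The tabu search component referenced above can be illustrated with a minimal generic skeleton. This is a textbook sketch under our own naming, with a standard aspiration rule, not the paper's implementation:

```python
from collections import deque

def tabu_search(init, neighbours, cost, tenure=7, iters=200):
    """Minimal tabu-search skeleton: move to the best non-tabu neighbour,
    recording recently visited solutions in a fixed-length tabu list.
    Aspiration: a tabu move is allowed if it beats the best known cost."""
    current = best = init
    tabu = deque(maxlen=tenure)  # oldest entries expire automatically
    for _ in range(iters):
        candidates = [n for n in neighbours(current)
                      if n not in tabu or cost(n) < cost(best)]
        if not candidates:
            break
        current = min(candidates, key=cost)  # best admissible move, even if worsening
        tabu.append(current)
        if cost(current) < cost(best):
            best = current
    return best
```

The paper's hybrid additionally switches between multiple neighborhood structures and applies ejection chains, i.e., compound moves that relocate several allocation decisions at once.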