
Showing papers on "Heuristic (computer science) published in 2015"


Journal ArticleDOI
TL;DR: This survey presented a comprehensive investigation of PSO, including its modifications, extensions, and applications to the following eight fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, and chemistry and biology.
Abstract: Particle swarm optimization (PSO) is a heuristic global optimization method, proposed originally by Kennedy and Eberhart in 1995. It is now one of the most commonly used optimization techniques. This survey presented a comprehensive investigation of PSO. On one hand, we provided advances with PSO, including its modifications (including quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topology (as fully connected, von Neumann, ring, star, random, etc.), hybridization (with genetic algorithm, simulated annealing, Tabu search, artificial immune system, ant colony algorithm, artificial bee colony, differential evolution, harmony search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementation (in multicore, multiprocessor, GPU, and cloud computing forms). On the other hand, we offered a survey on applications of PSO to the following eight fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, and chemistry and biology. It is hoped that this survey will be beneficial for researchers studying PSO algorithms.
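As a concrete reference point for the canonical algorithm the survey covers, here is a minimal global-best PSO sketch (the parameter values w, c1, c2 and the test function are illustrative defaults, not recommendations from the survey):

```python
# Minimal global-best PSO: minimize the sphere function in 2-D.
import random

def pso(f, dim=2, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # inertia + cognitive pull to pbest + social pull to gbest
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbest_val[i]:                 # update personal best
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:                # update global best
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

sphere = lambda x: sum(c * c for c in x)
best, val = pso(sphere)
print(val)
```

The inertia weight w and acceleration coefficients c1, c2 are exactly the parameters the survey's "parameter selection and tuning" section is about.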

836 citations


Journal ArticleDOI
01 Nov 2015
TL;DR: This paper introduces the Join Order Benchmark (JOB) and experimentally revisit the main components in the classic query optimizer architecture using a complex, real-world data set and realistic multi-join queries.
Abstract: Finding a good join order is crucial for query performance. In this paper, we introduce the Join Order Benchmark (JOB) and experimentally revisit the main components in the classic query optimizer architecture using a complex, real-world data set and realistic multi-join queries. We investigate the quality of industrial-strength cardinality estimators and find that all estimators routinely produce large errors. We further show that while estimates are essential for finding a good join order, query performance is unsatisfactory if the query engine relies too heavily on these estimates. Using another set of experiments that measure the impact of the cost model, we find that it has much less influence on query performance than the cardinality estimates. Finally, we investigate plan enumeration techniques comparing exhaustive dynamic programming with heuristic algorithms and find that exhaustive enumeration improves performance despite the sub-optimal cardinality estimates.
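To illustrate the plan enumeration component discussed above, the following toy sketch exhaustively enumerates left-deep join orders under a simple independence-based cost model. The tables, cardinalities, and selectivities are invented, and real optimizers use dynamic programming over connected subgraphs rather than raw permutations:

```python
# Exhaustive enumeration of left-deep join orders with a toy cost model:
# cost = sum of estimated intermediate result sizes.
from itertools import permutations

card = {"A": 1000, "B": 100, "C": 10}            # toy base-table cardinalities
sel = {frozenset("AB"): 0.01, frozenset("BC"): 0.1, frozenset("AC"): 1.0}

def join_card(tables):
    """Estimated cardinality of joining a set of tables (independence assumption)."""
    c = 1.0
    for t in tables:
        c *= card[t]
    for pair, s in sel.items():
        if pair <= set(tables):
            c *= s
    return c

def best_left_deep(tables):
    best_cost, best_order = float("inf"), None
    for order in permutations(tables):
        cost, joined = 0.0, set()
        for t in order:
            joined.add(t)
            if len(joined) > 1:
                cost += join_card(joined)        # pay for each intermediate
        if cost < best_cost:
            best_cost, best_order = cost, order
    return best_order, best_cost

order, cost = best_left_deep("ABC")
print(order, cost)
```

Even on this toy instance the cheapest order starts with the selective B-C join, which is the kind of decision that degrades when cardinality estimates are wrong.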

449 citations


Proceedings ArticleDOI
11 May 2015
TL;DR: This paper formalizes the network function placement and chaining problem and proposes an Integer Linear Programming (ILP) model to solve it and proposes a heuristic procedure for efficiently guiding the ILP solver towards feasible, near-optimal solutions.
Abstract: Network Function Virtualization (NFV) is a promising network architecture concept, in which virtualization technologies are employed to manage networking functions via software as opposed to having to rely on hardware to handle these functions. By shifting dedicated, hardware-based network function processing to software running on commoditized hardware, NFV has the potential to make the provisioning of network functions more flexible and cost-effective, to mention just a few anticipated benefits. Despite consistent initial efforts to make NFV a reality, little has been done towards efficiently placing virtual network functions and deploying service function chains (SFC). With respect to this particular research problem, it is important to make sure resource allocation is carefully performed and orchestrated, preventing over- or under-provisioning of resources and keeping end-to-end delays comparable to those observed in traditional middlebox-based networks. In this paper, we formalize the network function placement and chaining problem and propose an Integer Linear Programming (ILP) model to solve it. Additionally, in order to cope with large infrastructures, we propose a heuristic procedure for efficiently guiding the ILP solver towards feasible, near-optimal solutions. Results show that the proposed model leads to a reduction of up to 25% in end-to-end delays (in comparison to chainings observed in traditional infrastructures) and an acceptable resource over-provisioning limited to 4%. Further, we demonstrate that our heuristic approach is able to find solutions that are very close to optimality while delivering results in a timely manner.
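The ILP itself is beyond a snippet, but the placement intuition can be sketched as a greedy heuristic: place each function of a chain on the feasible node that adds the least hop delay, respecting node capacity. The node names, capacities, and delay values below are hypothetical, not the paper's model:

```python
# Greedy sketch of service function chain placement.
def place_chain(chain_demands, nodes, delay):
    """nodes: {name: capacity}; delay: {(prev_node, node): hop delay}."""
    placement, prev, total_delay = [], None, 0.0
    cap = dict(nodes)
    for demand in chain_demands:
        feasible = [n for n, c in cap.items() if c >= demand]
        if not feasible:
            return None, float("inf")            # no node can host this function
        n = min(feasible, key=lambda n: delay.get((prev, n), 0.0))
        cap[n] -= demand                         # consume node resources
        total_delay += delay.get((prev, n), 0.0)
        placement.append(n)
        prev = n
    return placement, total_delay

nodes = {"a": 2, "b": 1}
delay = {(None, "a"): 1.0, (None, "b"): 2.0, ("a", "a"): 0.0,
         ("a", "b"): 1.0, ("b", "a"): 1.0, ("b", "b"): 0.0}
placement, d = place_chain([1, 1, 1], nodes, delay)
print(placement, d)
```

A greedy pass like this is the kind of procedure that can seed or guide an ILP solver toward feasible solutions, as the paper's heuristic does in a more principled way.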

389 citations


Journal ArticleDOI
TL;DR: This work develops a data collection protocol called EDAL, which stands for Energy-efficient Delay-aware Lifetime-balancing data collection, and proposes both a centralized heuristic to reduce its computational overhead and a distributed heuristic to make the algorithm scalable for large-scale network operations.
Abstract: Our work in this paper stems from our insight that recent research efforts on open vehicle routing (OVR) problems, an active area in operations research, are based on similar assumptions and constraints compared to sensor networks. Therefore, it may be feasible that we could adapt these techniques in such a way that they will provide valuable solutions to certain tricky problems in the wireless sensor network (WSN) domain. To demonstrate that this approach is feasible, we develop one data collection protocol called EDAL, which stands for Energy-efficient Delay-aware Lifetime-balancing data collection. The algorithm design of EDAL leverages one result from OVR to prove that the problem formulation is inherently NP-hard. Therefore, we proposed both a centralized heuristic to reduce its computational overhead and a distributed heuristic to make the algorithm scalable for large-scale network operations. We also develop EDAL to be closely integrated with compressive sensing, an emerging technique that promises considerable reduction in total traffic cost for collecting sensor readings under loose delay bounds. Finally, we systematically evaluate EDAL to compare its performance to related protocols in both simulations and a hardware testbed.

332 citations


Journal ArticleDOI
01 Feb 2015
TL;DR: A survey of genetic algorithms designed for solving the multi-depot vehicle routing problem is presented, together with a detailed experimental evaluation of the efficiency of different existing genetic methods on standard benchmark problems.
Abstract: We reviewed the use of genetic algorithms on the MDVRP (multi-depot vehicle routing problem). We surveyed every operator and setting of genetic algorithms for this problem. We tested different genetic operators and compared the results. We compared genetic algorithms to other metaheuristic algorithms on the MDVRP based on results on standard benchmarks. This article presents a survey of genetic algorithms designed for solving the multi-depot vehicle routing problem. In this context, most of the articles focus on different genetic approaches, methods, and operators commonly used in practical applications to solve this well-known and well-researched problem. Besides providing an up-to-date overview of research in the field, the results of a thorough experiment are presented and discussed, which evaluated in detail the efficiency of different existing genetic methods on standard benchmark problems. In this manner, insights into the strengths and weaknesses of specific methods, operators, and settings are presented, which should help researchers and practitioners optimize their solutions in further studies of similar problems. Finally, genetic-algorithm-based solutions are compared with other existing approaches, both exact and heuristic, for solving this same problem.

239 citations


Journal ArticleDOI
TL;DR: In this paper, the D-Wave 2X quantum annealer achieves significant runtime advantages relative to Simulated Annealing (SA), achieving a time-to-99%-success-probability that is $\sim 10^8$ times faster than SA running on a single processor core.
Abstract: Quantum annealing (QA) has been proposed as a quantum enhanced optimization heuristic exploiting tunneling. Here, we demonstrate how finite range tunneling can provide considerable computational advantage. For a crafted problem designed to have tall and narrow energy barriers separating local minima, the D-Wave 2X quantum annealer achieves significant runtime advantages relative to Simulated Annealing (SA). For instances with 945 variables, this results in a time-to-99%-success-probability that is $\sim 10^8$ times faster than SA running on a single processor core. We also compared physical QA with Quantum Monte Carlo (QMC), an algorithm that emulates quantum tunneling on classical processors. We observe a substantial constant overhead against physical QA: D-Wave 2X again runs up to $\sim 10^8$ times faster than an optimized implementation of QMC on a single core. We note that there exist heuristic classical algorithms that can solve most instances of Chimera structured problems in a timescale comparable to the D-Wave 2X. However, we believe that such solvers will become ineffective for the next generation of annealers currently being designed. To investigate whether finite range tunneling will also confer an advantage for problems of practical interest, we conduct numerical studies on binary optimization problems that cannot yet be represented on quantum hardware. For random instances of the number partitioning problem, we find numerically that QMC, as well as other algorithms designed to simulate QA, scale better than SA. We discuss the implications of these findings for the design of next generation quantum annealers.
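For readers unfamiliar with the classical baseline, a generic simulated annealing loop with Metropolis acceptance looks like the sketch below, on a toy one-dimensional double-well energy rather than the Chimera-structured instances from the paper. The temperature schedule and step size are illustrative choices:

```python
# Simulated annealing with Metropolis acceptance and geometric cooling.
import math, random

def simulated_annealing(energy, neighbor, x0, t0=2.0, cooling=0.995,
                        steps=4000, seed=0):
    rng = random.Random(seed)
    x, e, t = x0, energy(x0), t0
    best_x, best_e = x, e
    for _ in range(steps):
        y = neighbor(x, rng)
        ey = energy(y)
        # always accept improvements; accept uphill moves with prob e^(-dE/T)
        if ey - e <= 0 or rng.random() < math.exp(-(ey - e) / t):
            x, e = y, ey
            if e < best_e:
                best_x, best_e = x, e
        t *= cooling                              # geometric cooling schedule
    return best_x, best_e

# Double-well landscape: two minima separated by a barrier at x = 0
energy = lambda x: (x * x - 1.0) ** 2 + 0.2 * x
neighbor = lambda x, rng: x + rng.gauss(0.0, 0.3)
x, e = simulated_annealing(energy, neighbor, x0=1.0)
print(x, e)
```

SA must thermally climb over the barrier at x = 0, whereas quantum annealing exploits tunneling through it; that difference is exactly what the crafted tall-and-narrow-barrier instances in the paper amplify.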

234 citations


Posted Content
TL;DR: A simple heuristic based on an estimate of the Lipschitz constant is investigated that captures the most important aspect of this interaction at negligible computational overhead and compares well, in running time, with much more elaborate alternatives.
Abstract: The popularity of Bayesian optimization methods for efficient exploration of parameter spaces has led to a series of papers applying Gaussian processes as surrogates in the optimization of functions. However, most proposed approaches only allow the exploration of the parameter space to occur sequentially. Often, it is desirable to propose batches of parameter values to explore simultaneously. This is particularly the case when large parallel processing facilities are available. These facilities could be computational or physical facets of the process being optimized. For example, in biological experiments, many experimental setups allow several samples to be processed simultaneously. Batch methods, however, require modeling of the interaction between the evaluations in the batch, which can be expensive in complex scenarios. We investigate a simple heuristic based on an estimate of the Lipschitz constant that captures the most important aspect of this interaction (i.e., local repulsion) at negligible computational overhead. The resulting algorithm compares well, in running time, with much more elaborate alternatives. The approach assumes that the function of interest, $f$, is Lipschitz continuous. A wrap-loop around the acquisition function is used to collect batches of points of a certain size, minimizing the non-parallelizable computational effort. The speed-up of our method with respect to previous approaches is significant in a set of computationally expensive experiments.
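The local-repulsion idea can be sketched as follows: once a batch point x is chosen, candidates inside the ball in which x could still hide the minimum (radius (f(x) − m)/L under the Lipschitz assumption, where m is the best observed value) are excluded from the rest of the batch. All names and values below are illustrative, not the paper's API:

```python
# Greedy batch selection with Lipschitz-based exclusion (local repulsion).
def pick_batch(candidates, acq, f_vals, L, m, batch_size):
    """candidates: 1-D points; acq: acquisition value per point (higher = better);
    f_vals: surrogate mean per point; m: best (lowest) observed value; L: Lipschitz
    constant estimate."""
    scores = dict(zip(candidates, acq))
    batch = []
    for _ in range(batch_size):
        x = max(scores, key=scores.get)          # best remaining candidate
        batch.append(x)
        r = max(f_vals[x] - m, 0.0) / L          # exclusion radius around x
        for y in scores:
            if abs(y - x) < r:
                scores[y] = float("-inf")        # repel the rest of the batch
    return batch

cands = [0, 1, 2, 3, 4]
acq = [5.0, 4.0, 3.0, 2.0, 1.0]
f_vals = {c: 2.0 for c in cands}
batch = pick_batch(cands, acq, f_vals, L=1.0, m=0.0, batch_size=3)
print(batch)  # [0, 2, 4]
```

Without the exclusion step, a greedy batch would pile up on the acquisition maximum; the repulsion spreads the batch across the space at essentially no extra cost.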

221 citations


Journal ArticleDOI
TL;DR: A novel meta-heuristic algorithm called Grey Wolf Optimizer (GWO), inspired by the living and hunting behavior of wolves, is proposed, and it is shown that the GWO outperforms the other employed well-known meta-heuristic algorithms.

217 citations


Journal ArticleDOI
TL;DR: The grey wolf optimizer and differential evolution algorithms are applied to solve the optimal power flow (OPF) problem and show the effectiveness of the proposed algorithms in comparison with the other recent heuristic algorithms in the literature.
Abstract: This article applies the grey wolf optimizer and differential evolution (DE) algorithms to solve the optimal power flow (OPF) problem. Both algorithms are used to optimize single objective functions sequentially under the system constraints. Then, the DE algorithm is utilized to solve multi-objective OPF problems. The indicator of the static line stability index is incorporated into the OPF problem. The fuzzy-based Pareto front method is tested to find the best compromise point of multi-objective functions. The proposed algorithms are used to determine the optimal values of the continuous and discrete control variables. These algorithms are applied to the standard IEEE 30-bus and 118-bus systems with different scenarios. The simulation results are investigated and analyzed. The achieved results show the effectiveness of the proposed algorithms in comparison with the other recent heuristic algorithms in the literature.

217 citations


Journal ArticleDOI
TL;DR: This paper proposes a 3-step mathematical programming based heuristic for the static repositioning problem of bike-sharing systems, and was shown to outperform a previous method suggested in the literature for the same problem.
Abstract: Over the last few years, bike-sharing systems have emerged as a new mode of transportation in a large number of big cities worldwide. This new type of mobility mode is still developing, and many challenges associated with its operation are not well addressed yet. One such major challenge of bike-sharing systems is the need to respond to fluctuating demands for bicycles and for vacant lockers at each station, which directly influences the service level provided to its users. This is done using dedicated repositioning vehicles (light trucks) that are routed through the stations, loading and unloading bicycles to/from them. Performing this operation during the night when the demand in the system is negligible is referred to as the static repositioning problem. In this paper, we propose a 3-step mathematical programming based heuristic for the static repositioning problem. In the first step, stations are clustered according to geographic as well as inventory (of bicycles) considerations. In the second step the repositioning vehicles are routed through the clusters while tentative inventory decisions are made for each individual station. Finally, the original repositioning problem is solved with the restriction that traversal of the repositioning vehicles is allowed only between stations that belong to consecutive clusters according to the routes determined in the previous step, or between stations of the same cluster. In the first step the clusters are formed using a specialized saving heuristic. The last two steps are formulated as Mixed Integer Linear Programs and solved by a commercial solver. The method was tested on instances of up to 200 stations and three repositioning vehicles, and was shown to outperform a previous method suggested in the literature for the same problem.

216 citations


Journal ArticleDOI
TL;DR: A novel roadside unit (RSU) cloud, a vehicular cloud, as the operational backbone of the vehicle grid in the Internet of Vehicles (IoV), and an efficient heuristic approach to minimize the reconfiguration costs is proposed.
Abstract: We propose a novel roadside unit (RSU) cloud, a vehicular cloud, as the operational backbone of the vehicle grid in the Internet of Vehicles (IoV). The architecture of the proposed RSU cloud consists of traditional and specialized RSUs employing software-defined networking (SDN) to dynamically instantiate, replicate, and/or migrate services. We leverage the deep programmability of SDN to dynamically reconfigure the services hosted in the network and their data forwarding information to efficiently serve the underlying demand from the vehicle grid. We then present a detailed reconfiguration overhead analysis to reduce reconfigurations, which are costly for service providers. We use the reconfiguration cost analysis to design and formulate an integer linear programming (ILP) problem to model our novel RSU cloud resource management (CRM). We begin by solving for the Pareto optimal frontier (POF) of nondominated solutions, such that each solution is a configuration that minimizes either the number of service instances or the RSU cloud infrastructure delay, for a given average demand. Then, we design an efficient heuristic to minimize the reconfiguration costs. A fundamental contribution of our heuristic approach is the use of reinforcement learning to select configurations that minimize reconfiguration costs in the network over the long term. We perform reconfiguration cost analysis and compare the results of our CRM formulation and heuristic. We also show the reduction in reconfiguration costs when using reinforcement learning in comparison to a myopic approach. We show significant improvement in the reconfiguration costs and infrastructure delay when compared to purist service installations.

Journal ArticleDOI
01 Mar 2015
TL;DR: The method proposed in this study is compared with recently published studies on real-world problems and shown to be more effective than previously published methods on this class of problems.
Abstract: Optimization can be defined as the effort of generating solutions to a problem under bounded circumstances. Optimization methods have arisen from a desire to utilize existing resources in the best possible way. An important class of optimization methods is heuristic algorithms, which have generally been inspired by nature. For instance, Particle Swarm Optimization was inspired by the social behavior patterns of fish schooling and bird flocking. The bat algorithm is a heuristic algorithm proposed by Yang in 2010, inspired by echolocation, the property that guides bats' movements during flight and hunting even in complete darkness. In this work, the local and global search characteristics of the bat algorithm are enhanced through three different methods. To validate the performance of the Enhanced Bat Algorithm (EBA), standard test functions and constrained real-world problems are employed. The results obtained on these test sets prove the EBA superior to the standard algorithm. Furthermore, the proposed method is compared with recently published studies on real-world problems and shown to be more effective on this class of problems.

Journal ArticleDOI
TL;DR: A parallel simulated annealing algorithm that includes a Residual Capacity and Radial Surcharge insertion-based heuristic is developed and applied to solve a variant of the vehicle routing problem in which customers require simultaneous pickup and delivery of goods during specific individual time windows.

Journal ArticleDOI
TL;DR: The combination of the solutions to TCOV and NCON offers a promising solution to the original MSD problem that balances the load of different sensors and prolongs the network lifetime consequently.
Abstract: Coverage of interest points and network connectivity are two main challenging and practically important issues of Wireless Sensor Networks (WSNs). Although many studies have exploited the mobility of sensors to improve the quality of coverage and connectivity, little attention has been paid to the minimization of sensors' movement, which often consumes the majority of the limited energy of sensors and thus shortens the network lifetime significantly. To fill in this gap, this paper addresses the challenges of the Mobile Sensor Deployment (MSD) problem and investigates how to deploy mobile sensors with minimum movement to form a WSN that provides both target coverage and network connectivity. To this end, the MSD problem is decomposed into two sub-problems: the Target COVerage (TCOV) problem and the Network CONnectivity (NCON) problem. We then solve TCOV and NCON one by one and combine their solutions to address the MSD problem. The NP-hardness of TCOV is proved. For a special case of TCOV where targets disperse from each other farther than double of the coverage radius, an exact algorithm based on the Hungarian method is proposed to find the optimal solution. For general cases of TCOV, two heuristic algorithms, i.e., the Basic algorithm based on clique partition and the TV-Greedy algorithm based on Voronoi partition of the deployment region, are proposed to reduce the total movement distance of sensors. For NCON, an efficient solution based on the Steiner minimum tree with constrained edge length is proposed. The combination of the solutions to TCOV and NCON, as demonstrated by extensive simulation experiments, offers a promising solution to the original MSD problem that balances the load of different sensors and prolongs the network lifetime consequently.
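For the dispersed-targets special case, TCOV reduces to an assignment problem: move one sensor to each target while minimizing total movement. The paper solves this with the Hungarian method; the sketch below brute-forces the same optimum on a toy instance (acceptable only for tiny inputs, and the coordinates are invented):

```python
# Brute-force optimal sensor-to-target assignment minimizing total movement.
from itertools import permutations
import math

def min_total_movement(sensors, targets):
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    best = (float("inf"), None)
    # try every way of assigning a distinct sensor to each target
    for perm in permutations(range(len(sensors)), len(targets)):
        total = sum(dist(sensors[s], targets[t]) for t, s in enumerate(perm))
        if total < best[0]:
            best = (total, perm)
    return best

sensors = [(0, 0), (4, 0), (0, 3)]
targets = [(1, 0), (0, 4)]
cost, assignment = min_total_movement(sensors, targets)
print(cost, assignment)  # 2.0 (0, 2)
```

The Hungarian method computes the same optimum in polynomial time, which is what makes the exact algorithm practical for large dispersed instances.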

Journal ArticleDOI
TL;DR: Simulation results reveal that the PV/WT/battery system is the most cost-effective one and adaptive inertia weight-based PSO algorithm yields more promising results than the other PSO variants.

Journal ArticleDOI
TL;DR: An optimal power dispatch problem on a 24-h basis for distribution systems with distributed energy resources also including directly controlled shiftable loads is presented, using a novel nature-inspired multiobjective optimization algorithm based on an original extension of a glowworm swarm particles optimization algorithm.
Abstract: In this paper, an optimal power dispatch problem on a 24-h basis for distribution systems with distributed energy resources (DER) also including directly controlled shiftable loads is presented. In the literature, the optimal energy management problems in smart grids (SGs) where such types of loads exist are formulated using integer or mixed integer variables. In this paper, a new formulation of shiftable loads is employed. Such formulation allows reduction in the number of optimization variables and the adoption of real valued optimization methods such as the one proposed in this paper. The method applied is a novel nature-inspired multiobjective optimization algorithm based on an original extension of a glowworm swarm particles optimization algorithm, with algorithmic enhancements to treat multiple objective formulations. The performance of the algorithm is compared to the NSGA-II on the considered power systems application.

Journal ArticleDOI
19 Nov 2015
TL;DR: This work analyses how task replication reduces latency, and proposes a heuristic algorithm to search for the best replication strategies when it is difficult to model the empirical behavior of task execution time and uses the proposed analysis techniques.
Abstract: In cloud computing jobs consisting of many tasks run in parallel, the tasks on the slowest machines (straggling tasks) become the bottleneck in the completion of the job. One way to combat the variability in machine response time is to add replicas of straggling tasks and wait for the earliest copy to finish. Using the theory of extreme order statistics, we analyze how task replication reduces latency, and its impact on the cost of computing resources. We also propose a heuristic algorithm to search for the best replication strategies when it is difficult to model the empirical behavior of task execution time and use the proposed analysis techniques. Evaluation of the heuristic policies on Google Trace data shows a significant latency reduction compared to the replication strategy used in MapReduce.
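The core order-statistics observation is easy to reproduce: with r i.i.d. replicas, latency is the minimum execution time, so it shrinks with r while resource usage grows. Below is a Monte Carlo sketch with a toy execution-time model (a constant plus an exponential tail, not the Google-trace distribution):

```python
# Monte Carlo estimate of latency vs. cost for r task replicas.
import random

def latency_and_cost(r, trials=20000, seed=42):
    rng = random.Random(seed)
    total_latency = 0.0
    for _ in range(trials):
        # toy model: 1 unit of fixed work plus an Exp(1) straggler component
        times = [1.0 + rng.expovariate(1.0) for _ in range(r)]
        total_latency += min(times)              # earliest replica wins
    mean_latency = total_latency / trials
    # rough cost proxy if all replicas are cancelled when the first finishes
    return mean_latency, r * mean_latency

l1, c1 = latency_and_cost(1)
l3, c3 = latency_and_cost(3)
print(l1, l3)
```

Under this model the expected latency is 1 + 1/r (the minimum of r exponentials is Exp(r)), so three replicas cut the straggler component to a third, at roughly double the compute cost.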

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a framework to estimate the electrical equivalent circuit parameters of photovoltaic arrays by use of an efficient heuristic technique inspired by the mating process of different bird species.

Proceedings ArticleDOI
07 Jun 2015
TL;DR: In this article, pairwise costs are added to the min-cost network flow framework for multi-object tracking, and a convex relaxation solution with an efficient rounding heuristic is proposed to give certificates of small suboptimality.
Abstract: Multi-object tracking has been recently approached with the min-cost network flow optimization techniques. Such methods simultaneously resolve multiple object tracks in a video and enable modeling of dependencies among tracks. Min-cost network flow methods also fit well within the “tracking-by-detection” paradigm where object trajectories are obtained by connecting per-frame outputs of an object detector. Object detectors, however, often fail due to occlusions and clutter in the video. To cope with such situations, we propose to add pairwise costs to the min-cost network flow framework. While integer solutions to such a problem become NP-hard, we design a convex relaxation solution with an efficient rounding heuristic which empirically gives certificates of small suboptimality. We evaluate two particular types of pairwise costs and demonstrate improvements over recent tracking methods in real-world video sequences.

Journal ArticleDOI
TL;DR: A new heuristic procedure called the Prescreened Heuristic Sampling Method (PHSM) is proposed and tested on seven WDS case studies of varying size; PHSM clearly performs best overall, both in terms of computational efficiency and the ability to find near-optimal solutions.
Abstract: Over the last two decades, evolutionary algorithms (EAs) have become a popular approach for solving water resources optimization problems. However, the issue of low computational efficiency limits their application to large, realistic problems. This paper uses the optimal design of water distribution systems (WDSs) as an example to illustrate how the efficiency of genetic algorithms (GAs) can be improved by using heuristic domain knowledge in the sampling of the initial population. A new heuristic procedure called the Prescreened Heuristic Sampling Method (PHSM) is proposed and tested on seven WDS case studies of varying size. The EPANet input files for these case studies are provided as supplementary material. The performance of the PHSM is compared with that of another heuristic sampling method and two non-heuristic sampling methods. The results show that PHSM clearly performs best overall, both in terms of computational efficiency and the ability to find near-optimal solutions. In addition, the relative advantage of using the PHSM increases with network size. A new heuristic sampling method is introduced for the optimization of WDSs using GAs. The proposed PHSM performs better than three other sampling methods. The advantages are both in efficiency and in the ability to find near-optimal solutions. The relative advantage of using the PHSM increases with network size and complexity.
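The PHSM itself is WDS-specific, but the underlying seeding pattern is generic: part of the initial GA population is derived from a domain heuristic (lightly mutated to avoid duplicates) and the rest stays random to preserve diversity. Everything below is an illustrative sketch, not the paper's procedure:

```python
# Seeding a GA initial population from a heuristic solution.
import random

def seeded_population(pop_size, n_genes, heuristic_individual, frac=0.3, seed=7):
    rng = random.Random(seed)
    n_seeded = int(pop_size * frac)
    pop = []
    for _ in range(n_seeded):
        # mutate the heuristic solution slightly so the seeds are not identical
        ind = [g if rng.random() > 0.1 else rng.randint(0, 9)
               for g in heuristic_individual]
        pop.append(ind)
    while len(pop) < pop_size:
        # the remainder is fully random to preserve diversity
        pop.append([rng.randint(0, 9) for _ in range(n_genes)])
    return pop

pop = seeded_population(10, 5, heuristic_individual=[3, 3, 3, 3, 3])
print(len(pop))  # 10
```

The fraction of seeded individuals trades off a head start against the risk of premature convergence, which is why such methods keep a random component.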

Proceedings ArticleDOI
09 Nov 2015
TL;DR: This paper adapt two widely-used sampling strategies for performance prediction to the domain of configurable systems and evaluate them in terms of sampling cost, which considers prediction accuracy and measurement effort simultaneously.
Abstract: A key challenge of the development and maintenance of configurable systems is to predict the performance of individual system variants based on the features selected. It is usually infeasible to measure the performance of all possible variants, due to feature combinatorics. Previous approaches predict performance based on small samples of measured variants, but it is still open how to dynamically determine an ideal sample that balances prediction accuracy and measurement effort. In this paper, we adapt two widely-used sampling strategies for performance prediction to the domain of configurable systems and evaluate them in terms of sampling cost, which considers prediction accuracy and measurement effort simultaneously. To generate an initial sample, we introduce a new heuristic based on feature frequencies and compare it to a traditional method based on t-way feature coverage. We conduct experiments on six real-world systems and provide guidelines for stakeholders to predict performance by sampling.

Journal ArticleDOI
TL;DR: In this paper, an integrated approach to optimize electrical, natural gas, and district heating networks simultaneously is studied, where several interdependencies between these infrastructures are considered in details including a nonlinear part-load performance for boilers and CHPs besides the valve-point effect for generators.

Journal ArticleDOI
TL;DR: A generalized heuristic approach is proposed to solve the optimal power flow problem in multicarrier energy systems using the modified teaching-learning-based optimization method, which can successfully reach the global optimal solution of the problem.
Abstract: In this paper, a generalized heuristic approach is proposed to solve the optimal power flow problem in multicarrier energy systems. This technique omits the use of any extra variables, such as dispatch factors or dummy variables required by conventional techniques. The unified proposed approach can be utilized with all evolutionary algorithms. Modeling hub devices with constant efficiency may produce a considerable error in finding the actual optimal operating point of the whole network. Using a variable-efficiency model adds complexity to conventional methods and increases their computational demand, but it can be simply implemented by the proposed scheme. A multicarrier energy system consisting of an electrical, a natural gas, and a district heating network is analyzed by the proposed algorithm using the modified teaching–learning-based optimization method. Results validate the proposed approach and show that it can successfully reach the global optimal solution of the problem.

Journal ArticleDOI
TL;DR: Different constraint handling strategies used in heuristic optimisation algorithms and especially particle swarm optimisation (PSO) are reviewed to provide a broad view to researchers in related field and help them to identify the appropriate constraint handling strategy for their own optimisation problem.
Abstract: Almost all real-world optimisation problems are constrained. Solving constrained problems is difficult for optimisation techniques. In this paper, different constraint handling strategies used in heuristic optimisation algorithms and especially particle swarm optimisation (PSO) are reviewed. Since PSO is a very common optimisation algorithm, this paper can provide a broad view to researchers in related field and help them to identify the appropriate constraint handling strategy for their own optimisation problem.
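One of the simplest strategies such reviews cover is the static penalty function, which folds constraint violations into the objective so an unmodified swarm (or any heuristic) can evaluate candidate solutions directly. The penalty weight below is an illustrative choice, not a recommendation from the paper:

```python
# Static penalty constraint handling: turn a constrained problem into an
# unconstrained one by penalizing squared constraint violations.
def penalized(f, constraints, weight=1e3):
    """constraints: list of callables g, with feasibility defined as g(x) <= 0."""
    def fitness(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return f(x) + weight * violation
    return fitness

f = lambda x: (x[0] - 2.0) ** 2          # unconstrained optimum at x0 = 2
g = lambda x: x[0] - 1.0                 # feasible region: x0 <= 1
fit = penalized(f, [g])
print(fit([0.5]), fit([1.0]), fit([2.0]))  # 2.25 1.0 1000.0
```

The penalized landscape pushes the constrained optimum to the boundary x0 = 1; choosing the weight well is exactly the difficulty that motivates the more elaborate strategies the review compares.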

Journal ArticleDOI
TL;DR: This paper investigates the integrated optimization of production, distribution, and inventory decisions related to supplying multiple retailers from a central production facility through a two-phase iterative method that iteratively focuses on lot-sizing and distribution decisions.
Abstract: This paper investigates the integrated optimization of production, distribution, and inventory decisions related to supplying multiple retailers from a central production facility. A single-item capacitated lot-sizing problem is defined for optimizing production decisions and inventory management. The optimization of daily distribution is modeled as a traveling salesman problem or a vehicle routing problem depending on the number of vehicles. A two-phase iterative method, from which several heuristics are derived, is proposed that iteratively focuses on lot-sizing and distribution decisions. Computational results show that our best heuristic outperforms existing methods.
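When a single vehicle serves all retailers, the daily distribution subproblem is a TSP; a nearest-neighbor construction heuristic is a common quick start for such routing components (illustrative, not the paper's method):

```python
# Nearest-neighbor construction heuristic for a TSP tour.
import math

def nearest_neighbor_tour(points, start=0):
    unvisited = set(range(len(points))) - {start}
    tour, cur = [start], start
    dist = lambda i, j: math.dist(points[i], points[j])
    while unvisited:
        nxt = min(unvisited, key=lambda j: dist(cur, j))  # closest remaining stop
        unvisited.remove(nxt)
        tour.append(nxt)
        cur = nxt
    return tour

pts = [(0, 0), (0, 1), (2, 0), (2, 1)]
print(nearest_neighbor_tour(pts))  # [0, 1, 3, 2]
```

Constructive tours like this typically feed a local-search improvement step; in an iterative production-distribution scheme they provide cheap routing estimates inside the lot-sizing loop.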

Journal ArticleDOI
TL;DR: A heuristic seeding mechanism is introduced to CGP that not only improves the quality of evolved circuits but also reduces the time of evolution; the efficiency of the proposed method is evaluated.
Abstract: In approximate computing, the requirement of perfect functional behavior can be relaxed because some applications are inherently error resilient. Approximate circuits, which fall into the approximate computing paradigm, are designed in such a way that they do not fully implement the logic behavior given by the specification; hence, their accuracy can be exchanged for lower area, delay, or power consumption. In order to automate the design process, we propose to evolve approximate digital circuits that show a minimal error for a supplied amount of resources. The design process, which is based on Cartesian genetic programming (CGP), can be repeated many times in order to obtain various tradeoffs between accuracy and area. A heuristic seeding mechanism is introduced to CGP which not only improves the quality of the evolved circuits but also reduces the time of evolution. The efficiency of the proposed method is evaluated for gate-level as well as functional-level evolution. In particular, approximate multipliers and median circuits that show very good parameters in comparison with other available implementations were constructed by means of the proposed method.
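The error metric that serves as a fitness component when evolving approximate circuits can be illustrated with a toy approximate multiplier. The bit-truncation rule below is a stand-in assumption, since the paper's circuits are found by CGP rather than by any fixed rule; only the exhaustive error evaluation mirrors common practice:

```python
def approx_multiply(a, b, drop_bits=2):
    """Toy approximate multiplier: zero the lowest `drop_bits` bits of each
    operand before multiplying (a stand-in for an evolved circuit)."""
    mask = ~((1 << drop_bits) - 1)
    return (a & mask) * (b & mask)

def mean_error_distance(width=4, drop_bits=2):
    """Exhaustive mean absolute error over all input pairs of a given bit
    width -- the kind of accuracy metric traded against area/power."""
    n = 1 << width
    total = sum(abs(a * b - approx_multiply(a, b, drop_bits))
                for a in range(n) for b in range(n))
    return total / (n * n)
```

For small operand widths the error can be evaluated exhaustively as above; for wide multipliers, evolutionary design flows typically resort to sampling or formal error-analysis techniques instead.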

Journal ArticleDOI
TL;DR: In this article, an effective heuristic approach is proposed to achieve a near-optimal solution at low computational cost; it can be implemented in an embedded device with severe limitations on memory size and computational power, and can obtain an optimal value in real time.

Journal ArticleDOI
TL;DR: The Non-Centralized Model Predictive Control framework is proposed, together with suitable on-line methods to decide which information is shared among the different local predictive controllers operating in a decentralized, distributed, and/or hierarchical way, and how this information is used.
Abstract: The Non-Centralized Model Predictive Control (NC-MPC) framework refers in this paper to any distributed, hierarchical, or decentralized model predictive controller (or a combination of them) whose structure can change over time and whose control actions are not obtained by a centralized computation. Within this framework, we propose suitable on-line methods to decide which information is shared and how this information is used between the different local predictive controllers operating in a decentralized, distributed, and/or hierarchical way. Evaluating all the possible structures of the NC-MPC controller leads to a combinatorial optimization problem; therefore, we also propose heuristic reduction methods to keep the number of NC-MPC problems to be solved tractable. To show the benefits of the proposed framework, a case study of a set of coupled water tanks is presented.
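A heuristic reduction step of the kind described above can be sketched as follows. The pairwise coupling scores and the keep-the-k-strongest-links rule are illustrative assumptions, not the paper's method; the point is only that pruning weak couplings shrinks the combinatorial set of controller structures to evaluate:

```python
from itertools import combinations

def reduced_structures(coupling, k=3):
    """Instead of enumerating every communication structure among local
    controllers, keep only the k strongest coupling links and enumerate
    subsets of those. coupling: {(i, j): strength} for controller pairs."""
    links = sorted(coupling, key=coupling.get, reverse=True)[:k]
    structs = []
    for r in range(len(links) + 1):
        structs.extend(combinations(links, r))  # all subsets of kept links
    return structs

coupling = {(0, 1): 5.0, (1, 2): 1.0, (0, 2): 3.0}
structs = reduced_structures(coupling, k=2)  # 2**2 = 4 candidate structures
```

This reduces the candidate count from 2^(number of links) to at most 2^k, at the cost of never considering structures that use a pruned link.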

Journal ArticleDOI
01 Jan 2015
TL;DR: The proposed heuristic algorithms produce close-to-optimal solutions and scale to large cloud gaming services with 20,000 servers and 40,000 gamers, and outperform the state-of-the-art placement heuristic, e.g., by up to 3.5 times in terms of net profits.
Abstract: Optimizing cloud gaming experience is no easy task due to the complex tradeoff between gamer quality of experience (QoE) and provider net profit. We tackle the challenge and study an optimization problem to maximize the cloud gaming provider’s total profit while achieving just-good-enough QoE. We conduct measurement studies to derive the QoE and performance models. We formulate and optimally solve the problem. Since solving the problem optimally takes exponential time, we also develop an efficient heuristic algorithm. We also present an alternative formulation and algorithms for closed cloud gaming services with dedicated infrastructures, where profit is not a concern and overall gaming QoE needs to be maximized. We present a prototype system and testbed using off-the-shelf virtualization software to demonstrate the practicality and efficiency of our algorithms. Our experience in realizing the testbed sheds some light on how cloud gaming providers may build up their own profitable services. Last, we conduct extensive trace-driven simulations to evaluate our proposed algorithms. The simulation results show that the proposed heuristic algorithms: (i) produce close-to-optimal solutions, (ii) scale to large cloud gaming services with 20,000 servers and 40,000 gamers, and (iii) outperform the state-of-the-art placement heuristic, e.g., by up to 3.5 times in terms of net profits.
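A greedy placement heuristic in the spirit of (but not identical to) the one evaluated above can be sketched as follows; the per-gamer revenue, per-server opening cost, and the latency bound standing in for "just-good-enough QoE" are all assumed for illustration:

```python
def greedy_placement(gamers, servers, latency, max_latency, revenue, server_cost):
    """Assign each gamer to a server greedily to grow net profit.

    gamers: number of gamers; servers: list of server capacities;
    latency[g][s]: latency of gamer g on server s; max_latency: QoE bound.
    Prefers already-open servers (no extra opening cost), then lowest latency.
    Returns (assignment dict gamer -> server, net profit).
    """
    open_srv = set()
    load = [0] * len(servers)
    assign = {}
    profit = 0.0
    for g in range(gamers):
        # feasible servers: capacity left and latency within the QoE bound
        feas = [s for s in range(len(servers))
                if load[s] < servers[s] and latency[g][s] <= max_latency]
        if not feas:
            continue  # gamer rejected: serving them would violate QoE
        feas.sort(key=lambda s: (s not in open_srv, latency[g][s]))
        s = feas[0]
        if s not in open_srv:
            open_srv.add(s)
            profit -= server_cost  # pay to open a new server
        load[s] += 1
        assign[g] = s
        profit += revenue
    return assign, profit

assign, profit = greedy_placement(
    gamers=3, servers=[2, 2],
    latency=[[10, 50], [10, 50], [60, 40]],
    max_latency=45, revenue=5, server_cost=3)
```

A heuristic of this shape runs in roughly O(gamers × servers log servers) time, which is what makes the scales reported in the abstract (tens of thousands of servers and gamers) tractable.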

Journal ArticleDOI
TL;DR: In this article, a new modeling approach for integrating speed optimization into the planning of shipping routes, together with a rolling horizon heuristic for solving the combined problem, is proposed; it yields good solutions to the integrated problem within reasonable time.