scispace - formally typeset

Showing papers on "Simulated annealing published in 2015"


Proceedings Article
21 Feb 2015
TL;DR: In this paper, the authors study the connection between the loss function of a simple model of the fully-connected feed-forward neural network and the Hamiltonian of the spherical spin-glass model under the assumptions of variable independence, redundancy in network parametrization, and uniformity.
Abstract: We study the connection between the highly non-convex loss function of a simple model of the fully-connected feed-forward neural network and the Hamiltonian of the spherical spin-glass model under the assumptions of: i) variable independence, ii) redundancy in network parametrization, and iii) uniformity. These assumptions enable us to explain the complexity of the fully decoupled neural network through the prism of the results from random matrix theory. We show that for large-size decoupled networks the lowest critical values of the random loss function form a layered structure and they are located in a well-defined band lower-bounded by the global minimum. The number of local minima outside that band diminishes exponentially with the size of the network. We empirically verify that the mathematical model exhibits behavior similar to the computer simulations, despite the presence of high dependencies in real networks. We conjecture that both simulated annealing and SGD converge to the band of low critical points, and that all critical points found there are local minima of high quality measured by the test error. This emphasizes a major difference between large- and small-size networks, where for the latter poor-quality local minima have a nonzero probability of being recovered. Finally, we prove that recovering the global minimum becomes harder as the network size increases and that it is in practice irrelevant, as the global minimum often leads to overfitting.

970 citations


Journal ArticleDOI
TL;DR: This survey presented a comprehensive investigation of PSO, including its modifications, extensions, and applications to the following eight fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, chemistry, and biology.
Abstract: Particle swarm optimization (PSO) is a heuristic global optimization method, proposed originally by Kennedy and Eberhart in 1995. It is now one of the most commonly used optimization techniques. This survey presented a comprehensive investigation of PSO. On one hand, we provided advances with PSO, including its modifications (including quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topology (as fully connected, von Neumann, ring, star, random, etc.), hybridization (with genetic algorithm, simulated annealing, Tabu search, artificial immune system, ant colony algorithm, artificial bee colony, differential evolution, harmony search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementation (in multicore, multiprocessor, GPU, and cloud computing forms). On the other hand, we offered a survey on applications of PSO to the following eight fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, chemistry, and biology. It is hoped that this survey will be beneficial for researchers studying PSO algorithms.

836 citations


Journal ArticleDOI
TL;DR: An overview of these sampling methods is presented in an attempt to shed light on which should be selected depending on the type of system property studied; metadynamics and replica-exchange molecular dynamics are among the most widely adopted sampling methods for studying biomolecular dynamics.

528 citations


Journal ArticleDOI
TL;DR: The results indicate that the proposed Vortex Search algorithm outperforms the SA, PS and ABC algorithms while being competitive with the PSO2011 algorithm.

269 citations


Journal ArticleDOI
TL;DR: In this paper, the D-Wave 2X quantum annealer achieves significant runtime advantages relative to Simulated Annealing (SA), with a time-to-99%-success-probability that is ~10^8 times faster than SA running on a single processor core.
Abstract: Quantum annealing (QA) has been proposed as a quantum enhanced optimization heuristic exploiting tunneling. Here, we demonstrate how finite range tunneling can provide considerable computational advantage. For a crafted problem designed to have tall and narrow energy barriers separating local minima, the D-Wave 2X quantum annealer achieves significant runtime advantages relative to Simulated Annealing (SA). For instances with 945 variables, this results in a time-to-99%-success-probability that is $\sim 10^8$ times faster than SA running on a single processor core. We also compared physical QA with Quantum Monte Carlo (QMC), an algorithm that emulates quantum tunneling on classical processors. We observe a substantial constant overhead against physical QA: D-Wave 2X again runs up to $\sim 10^8$ times faster than an optimized implementation of QMC on a single core. We note that there exist heuristic classical algorithms that can solve most instances of Chimera structured problems in a timescale comparable to the D-Wave 2X. However, we believe that such solvers will become ineffective for the next generation of annealers currently being designed. To investigate whether finite range tunneling will also confer an advantage for problems of practical interest, we conduct numerical studies on binary optimization problems that cannot yet be represented on quantum hardware. For random instances of the number partitioning problem, we find numerically that QMC, as well as other algorithms designed to simulate QA, scale better than SA. We discuss the implications of these findings for the design of next generation quantum annealers.
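Many entries on this page benchmark new methods against classical simulated annealing. For reference, the textbook SA baseline such comparisons assume — Metropolis acceptance with a geometric cooling schedule — can be sketched generically as follows (an illustrative sketch on a toy 1-D objective, not the tuned SA implementation benchmarked in the paper above):

```python
import math
import random

def simulated_annealing(energy, neighbor, state,
                        t0=10.0, cooling=0.95, steps_per_temp=100, t_min=1e-3):
    """Generic SA: Metropolis acceptance with geometric cooling."""
    e = energy(state)
    best, best_e = state, e
    t = t0
    while t > t_min:
        for _ in range(steps_per_temp):
            cand = neighbor(state)
            ce = energy(cand)
            # Always accept downhill moves; accept uphill moves with
            # probability exp(-dE / T), which shrinks as T cools.
            if ce <= e or random.random() < math.exp(-(ce - e) / t):
                state, e = cand, ce
                if e < best_e:
                    best, best_e = state, e
        t *= cooling
    return best, best_e

# Toy objective with two basins; the global minimum is near x = -1.05.
random.seed(0)
f = lambda x: (x**2 - 1)**2 + 0.3 * x
step = lambda x: x + random.uniform(-0.5, 0.5)
x_best, e_best = simulated_annealing(f, step, state=2.0)
```

Starting from x = 2.0 in the wrong basin, the high-temperature phase lets the walk cross the barrier near x = 0, and the cooling phase then settles into the global basin.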

234 citations


Journal ArticleDOI
TL;DR: In this article, the authors adopt two methods: the first is sensitivity analysis and the second is the Gravitational Search Algorithm (GSA), a methodical technique used to reduce the search space and arrive at an accurate solution for identifying the locations of capacitors.

225 citations


Journal ArticleDOI
TL;DR: Not only are the average results produced by ABSO more promising than those of the other algorithms, but ABSO is also the most robust; moreover, for the other LPSP_max values, the PV/WT/battery configuration is the most cost-effective system.

211 citations


Journal ArticleDOI
TL;DR: A parallel simulated annealing algorithm that includes a Residual Capacity and Radial Surcharge insertion-based heuristic is developed and applied to solve a variant of the vehicle routing problem in which customers require simultaneous pickup and delivery of goods during specific individual time windows.

181 citations


Journal ArticleDOI
TL;DR: This work presents a general framework for solving a real-world multimodal home-healthcare scheduling (MHS) problem from a major Austrian home- healthcare provider and develops four metaheuristics: variable neighborhood search, a memetic algorithm, scatter search and a simulated annealing hyper-heuristic.
Abstract: We present a general framework for solving a real-world multimodal home-healthcare scheduling (MHS) problem from a major Austrian home-healthcare provider. The goal of MHS is to assign home-care staff to customers and determine efficient multimodal tours while considering staff and customer satisfaction. Our approach is designed to be as problem-independent as possible, such that the resulting methods can be easily adapted to MHS setups of other home-healthcare providers. We chose a two-stage approach: in the first stage, we generate initial solutions either via constraint programming techniques or by a random procedure. During the second stage, the initial solutions are (iteratively) improved by applying one of four metaheuristics: variable neighborhood search, a memetic algorithm, scatter search and a simulated annealing hyper-heuristic. An extensive computational comparison shows that the approach is capable of solving real-world instances in reasonable time and produces valid solutions within only a few seconds.

162 citations


Journal ArticleDOI
TL;DR: Simulation results indicate that PSO-CF produces more promising results than the other variants, tabu search (TS), simulated annealing (SA) and harmony search (HS) algorithms in terms of the mean, standard deviation, worst, and best total annual cost (TAC).

148 citations


Journal ArticleDOI
01 Apr 2015
TL;DR: An evolutionary hybrid algorithm of invasive weed optimization merged with opposition-based learning is presented to solve large-scale economic load dispatch (ELD) problems.
Abstract: Application of invasive weed optimization to economic dispatch problems. Opposition-based learning is implemented in the IWO algorithm. The merits of the proposed methodology are high accuracy and low execution time. The results of the OIWO algorithm show its superiority to other tested techniques. This paper presents an evolutionary hybrid algorithm of invasive weed optimization (IWO) merged with opposition-based learning to solve large-scale economic load dispatch (ELD) problems. The oppositional invasive weed optimization (OIWO) is based on the colonizing behavior of weed plants and empowered by quasi-opposite numbers. The proposed OIWO methodology has been developed to minimize the total generation cost while satisfying several constraints such as generation limits, load demand, valve-point loading effects, multi-fuel options and transmission losses. The proposed algorithm is tested and validated using five different test systems. The most important merits of the proposed methodology are its high accuracy, good convergence characteristics and robustness in solving ELD problems. The simulation results of the proposed OIWO algorithm show its applicability and superiority when compared with the results of other tested algorithms such as oppositional real-coded chemical reaction, shuffled differential evolution, biogeography-based optimization, improved coordinated aggregation-based PSO, quantum-inspired particle swarm optimization, hybrid quantum mechanics inspired particle swarm optimization, modified shuffled frog leaping algorithm with genetic algorithm, simulated annealing based optimization, and estimation of distribution and differential evolution algorithm.

Journal ArticleDOI
TL;DR: In this article, the Brainstorm Optimization Algorithm (BSOA) is employed to find the optimal locations and settings of flexible AC transmission system (FACTS) devices in the IEEE 57-bus system.

Journal ArticleDOI
TL;DR: This work proposes to complement head-to-head scaling studies that compare quantum annealing machines to state-of-the-art classical codes with an approach that compares the performance of different algorithms and/or computing architectures on different classes of computationally hard tunable spin-glass instances.
Abstract: While manufacturing limitations are imposing constraints on Moore's law, researchers are searching for novel computing architectures based on quantum-mechanical effects. However, it remains to be shown that quantum annealing techniques consistently outperform classical simulated annealing on optimization problems.

Journal ArticleDOI
TL;DR: In this article, a hybrid glowworm swarm optimization algorithm (SAGSO) is proposed to combine the synergy of local search using simulated annealing (SA) with global search using glowworm swarm optimization (GSO).

Journal ArticleDOI
TL;DR: Simulated Annealing (SA) is proposed as an alternative approach for optimal DL, using a modern metaheuristic optimization technique to improve the performance of a Convolutional Neural Network (CNN).

Journal ArticleDOI
TL;DR: In this paper, an energy management strategy for a series plug-in hybrid electric vehicle is proposed by using quadratic programming and simulated annealing method together to find the optimal battery power commands and the engine-on power.

Journal ArticleDOI
TL;DR: An iterative optimization approach integrating a backpropagation neural network (BPNN) with a genetic algorithm (GA), using modified Levenberg-Marquardt training, is proposed to optimize the thickness of blow-molded polypropylene bellows used in cars.
Abstract: A hybrid optimization approach integrating both BPNN and GA is proposed. The standard Levenberg-Marquardt training algorithm is modified to accelerate BPNN convergence. A simulated annealing algorithm is embedded into the GA to enhance its local searching ability. The effectiveness of the proposed approach is demonstrated via its application in an engineering field. Results show that the desired thickness in blow-molded parts can be obtained with only a few experimental trials. An iterative optimization approach integrating a backpropagation neural network (BPNN) with a genetic algorithm (GA) is proposed. The main idea of the approach is that a BPNN model is first developed and trained using fewer learning samples, then the trained BPNN model is solved using GA in the feasible region to search for the model optimum. The result of verification conducted based on this optimum is added as a new sample into the training pattern set to retrain the BPNN model. Four strategies are proposed in the approach to deal with the possible deficiency of prediction accuracy due to the small number of training patterns used. Specifically, in training the BPNN model, the Bayesian regularization and modified Levenberg-Marquardt algorithms are applied to improve its generalization ability and convergence, respectively; an elitist strategy is adopted and a simulated annealing algorithm is embedded into the GA to improve its local searching ability. The proposed approach is then applied to optimize the thickness of blow-molded polypropylene bellows used in cars. The results show that the optimal die gap profile can be obtained after three iterations. The thicknesses at the nine teeth peaks of the bellow molded using the optimal gap profile fall into the desired range (0.7±0.05 mm) and the usage of materials is reduced by 22%. More importantly, this optimal gap profile is obtained via only 23 experiments, far fewer than needed in the practical molding process. The effectiveness of the proposed approach is thus demonstrated.

Journal ArticleDOI
TL;DR: A genetic algorithm (GA) with dual-chromosome coding for CTSP is presented; the results suggest that SAGA achieves the best solution quality, while HCGA offers a good tradeoff between solution quality and computing time.
Abstract: The multiple traveling salesman problem (MTSP) is an important combinatorial optimization problem. It has been widely and successfully applied to the practical cases in which multiple traveling individuals (salesmen) share the common workspace (city set). However, it cannot represent some application problems where multiple traveling individuals not only have their own exclusive tasks but also share a group of tasks with each other. This work proposes a new MTSP called colored traveling salesman problem (CTSP) for handling such cases. Two types of city groups are defined, i.e., each group of exclusive cities of a single color for a salesman to visit and a group of shared cities of multiple colors allowing all salesmen to visit. Evidence shows that CTSP is NP-hard and that a multidepot MTSP and multiple single traveling salesman problems are its special cases. We present a genetic algorithm (GA) with dual-chromosome coding for CTSP and analyze the corresponding solution space. Then, GA is improved by incorporating greedy, hill-climbing (HC), and simulated annealing (SA) operations to achieve better performance. By experiments, the limitation of the exact solution method is revealed and the performance of the presented GAs is compared. The results suggest that SAGA achieves the best solution quality, while HCGA offers a good tradeoff between solution quality and computing time.

Proceedings ArticleDOI
23 Sep 2015
TL;DR: This work presents CLTune, an auto-tuner for OpenCL kernels that evaluates and tunes kernel performance of a generic, user-defined search space of possible parameter-value combinations, and supports multiple search strategies including simulated annealing and particle swarm optimisation.
Abstract: This work presents CLTune, an auto-tuner for OpenCL kernels. It evaluates and tunes kernel performance of a generic, user-defined search space of possible parameter-value combinations. Example parameters include the OpenCL workgroup size, vector data-types, tile sizes, and loop unrolling factors. CLTune can be used in the following scenarios: 1) when there are too many tunable parameters to explore manually, 2) when performance portability across OpenCL devices is desired, or 3) when the optimal parameters change based on input argument values (e.g. matrix dimensions). The auto-tuner is generic, easy to use, open-source, and supports multiple search strategies including simulated annealing and particle swarm optimisation. CLTune is evaluated on two GPU case-studies inspired by the recent successes in deep learning: 2D convolution and matrix-multiplication (GEMM). For 2D convolution, we demonstrate the need for auto-tuning by optimizing for different filter sizes, achieving performance on-par or better than the state-of-the-art. For matrix-multiplication, we use CLTune to explore a parameter space of more than two-hundred thousand configurations, we show the need for device-specific tuning, and outperform the clBLAS library on NVIDIA, AMD and Intel GPUs.
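The simulated-annealing search strategy in an auto-tuner of this kind walks over a discrete grid of parameter values rather than a continuous space. A minimal sketch of the idea — with made-up parameter names and a synthetic cost function standing in for an actual kernel benchmark, not CLTune's real API:

```python
import math
import random

# Hypothetical tunable parameters (illustrative names, not CLTune's).
SPACE = {
    "workgroup_size": [8, 16, 32, 64, 128],
    "vector_width":   [1, 2, 4, 8],
    "unroll":         [1, 2, 4],
}

def synthetic_runtime(cfg):
    # Stand-in for timing a real kernel: fastest at (64, 4, 2).
    return (abs(cfg["workgroup_size"] - 64) / 8
            + abs(cfg["vector_width"] - 4)
            + abs(cfg["unroll"] - 2))

def neighbor(cfg):
    # Move one parameter to an adjacent value in its list.
    new = dict(cfg)
    k = random.choice(list(SPACE))
    vals = SPACE[k]
    i = vals.index(new[k]) + random.choice((-1, 1))
    new[k] = vals[max(0, min(len(vals) - 1, i))]
    return new

def anneal(t=5.0, cooling=0.9, iters=500):
    cfg = {k: random.choice(v) for k, v in SPACE.items()}
    cost = synthetic_runtime(cfg)
    best, best_cost = dict(cfg), cost
    for _ in range(iters):
        cand = neighbor(cfg)
        c = synthetic_runtime(cand)
        if c <= cost or random.random() < math.exp(-(c - cost) / t):
            cfg, cost = cand, c
            if cost < best_cost:
                best, best_cost = dict(cfg), cost
        t *= cooling
    return best, best_cost

random.seed(1)
best_cfg, best_cost = anneal()
```

In a real tuner the cost function would launch and time the compiled kernel for each candidate configuration; everything else stays the same.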

Journal ArticleDOI
TL;DR: The newly developed algorithm (MOSPD) is applied to the OTC reservoir release problem during the snow-melting seasons of 1998, 2000 and 2001, in which the better-spread and better-converged non-dominated solutions of MOSPD provide decision makers with better operational alternatives.
Abstract: This study demonstrates the application of an improved Evolutionary optimization Algorithm (EA), titled Multi-Objective Complex Evolution Global Optimization Method with Principal Component Analysis and Crowding Distance Operator (MOSPD), for the hydropower reservoir operation of the Oroville-Thermalito Complex (OTC) - a crucial head-water resource for the California State Water Project (SWP). In the OTC's water-hydropower joint management study, the nonlinearity of hydropower generation and the reservoir's water elevation-storage relationship are explicitly formulated by polynomial function in order to closely match realistic situations and reduce linearization approximation errors. Comparison among different curve-fitting methods is conducted to understand the impact of the simplification of reservoir topography. In the optimization algorithm development, techniques of crowding distance and principal component analysis are implemented to improve the diversity and convergence of the optimal solutions towards and along the Pareto optimal set in the objective space. A comparative evaluation among the new algorithm MOSPD, the original Multi-Objective Complex Evolution Global Optimization Method (MOCOM), the Multi-Objective Differential Evolution method (MODE), the Multi-Objective Genetic Algorithm (MOGA), the Multi-Objective Simulated Annealing approach (MOSA), and the Multi-Objective Particle Swarm Optimization scheme (MOPSO) is conducted using the benchmark functions. The results show that the MOSPD algorithm demonstrated the best and most consistent performance when compared with the other algorithms on the test problems.
The newly developed algorithm (MOSPD) is further applied to the OTC reservoir release problem during the snow-melting season in 1998 (wet year), 2000 (normal year) and 2001 (dry year), in which the better-spread and better-converged non-dominated solutions of MOSPD provide decision makers with better operational alternatives for effectively and efficiently managing the OTC reservoirs in response to the different climates, especially drought, which has become increasingly severe and frequent in California. A new multi-objective optimization algorithm, entitled MOSPD, is developed. A comparison study is carried out over eight complex test functions. MOSPD is effective and efficient in searching for the global Pareto optimum. A reservoir system model is built for the Oroville-Thermalito Complex in California. MOSPD provides flexible reservoir release strategies to support decision making.

Journal ArticleDOI
TL;DR: In this paper, a robust stop-skipping approach is proposed to reach the optimum stop schedule patterns in urban railway lines under uncertainty, where trains are allowed to skip any intermediate stations to increase the commercial speed and to save energy consumption.
Abstract: In this paper an operation mode based on the stop-skipping approach is studied in urban railway lines under uncertainty. In this mode, each train follows a specific stop schedule. Trains are allowed to skip any intermediate stations to increase the commercial speed and to save energy consumption. As the commercial speed increases, the number of trains required in operation is reduced, eliminating unnecessary costs. To that end, a new mathematical model is proposed to reach the optimum stop schedule patterns. In the planning step, based on traffic studies, the headway distributions are computed for different weekdays and holidays. However, in practice, because of many unexpected events, the traffic may differ from what is planned. Therefore, under this condition, a robust plan is required that is optimized and immunized against uncertainty. In this paper, a new robust mathematical model, as well as two heuristic algorithms, (1) a decomposition-based algorithm and (2) a Simulated Annealing (SA) based algorithm, are proposed. Finally, an Iranian metro line is studied and the optimum patterns are presented and analyzed.

Journal ArticleDOI
01 Sep 2015
TL;DR: The proposed MOHDE-SAT integrates the orthogonal initialization method into the differential evolution, which enlarges the population diversity at the beginning of population evolution and can properly avoid the premature convergence problem.
Abstract: The orthogonal initialization method is integrated into differential evolution. A modified mutation operator is used to control the convergence rate with a simulated annealing technique. An entropy diversity method is utilized to adaptively monitor the population diversity. The results show the quality and efficiency of the proposed algorithm on the DEED problem. This paper proposes an improved multi-objective differential evolutionary algorithm named multi-objective hybrid differential evolution with simulated annealing technique (MOHDE-SAT) to solve the dynamic economic emission dispatch (DEED) problem. The proposed MOHDE-SAT integrates the orthogonal initialization method into the differential evolution, which enlarges the population diversity at the beginning of population evolution. In addition, a modified mutation operator and archive retention mechanisms are used to control the convergence rate, and the simulated annealing technique and entropy diversity method are utilized to adaptively monitor the population diversity as the evolution proceeds, which properly avoids the premature convergence problem. Furthermore, MOHDE-SAT is applied to the thermal system with a heuristic constraint-handling method and obtains more desirable results in comparison with alternatives established recently. The obtained results also reveal that the proposed MOHDE-SAT provides a viable way of solving DEED problems.

Journal ArticleDOI
TL;DR: In this paper, two probability models are proposed to generate candidate structures by random perturbations in the upper level; the minimum total annualized cost of each candidate structure is solved in the lower level and then sent to the upper level, where the different structures are evaluated by a simulated annealing mechanism.

Journal ArticleDOI
TL;DR: A simulated annealing (SA)-based heuristic for solving open location-routing problem (OLRP) is proposed and computational results indicate that the proposed heuristic efficiently solves OLRP.

Journal ArticleDOI
TL;DR: In this article, a very fast simulated annealing (VFSA) global optimization is used to interpret residual gravity anomalies in a very large model space, and the nature of uncertainty in the interpretation is also examined in the present study.
Abstract: A very fast simulated annealing (VFSA) global optimization is used to interpret residual gravity anomalies. Since VFSA optimization yields a large number of best-fitted models in a vast model space, the nature of uncertainty in the interpretation is also examined in the present study. The results of VFSA optimization reveal that various parameters show a number of equivalent solutions when the shape of the target body is not known and the shape factor q is optimized together with the other model parameters. The study reveals that the amplitude coefficient k is strongly dependent on the shape factor. This shows that there is a multi-model type of uncertainty between these two model parameters, derived from the analysis of cross-plots. However, the appraised values of the shape factor from various VFSA runs clearly indicate whether the subsurface structure is a sphere, a horizontal cylinder or a vertical cylinder. Accordingly, the exact shape factor (1.5 for a sphere, 1.0 for a horizontal cylinder and 0.5 for a vertical cylinder) is fixed and the optimization process is repeated. After fixing the shape factor, analysis of uncertainty and cross-plots shows a well-defined uni-modal characteristic. The mean model computed after fixing the shape factor gives the most consistent results. Inversion of noise-free and noisy synthetic data as well as field data demonstrates the efficacy of the approach.
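VFSA differs from standard SA mainly in its move-generating distribution and its cooling schedule. A sketch of the commonly stated VFSA ingredients — Ingber's generating function and the exponential temperature decay — follows; parameter names are illustrative, and this is not the authors' implementation:

```python
import math
import random

def vfsa_move(x, lo, hi, t):
    """Draw one VFSA perturbation of a bounded model parameter.

    Uses Ingber's generating function: at high T moves span nearly the
    whole range; as T -> 0 they concentrate sharply around x.
    """
    while True:
        u = random.random()
        y = math.copysign(t * ((1.0 + 1.0 / t) ** abs(2.0 * u - 1.0) - 1.0),
                          u - 0.5)
        xn = x + y * (hi - lo)
        if lo <= xn <= hi:      # re-draw until the move stays in bounds
            return xn

def vfsa_temperature(t0, c, k, ndim):
    """VFSA schedule: T(k) = T0 * exp(-c * k**(1/ndim))."""
    return t0 * math.exp(-c * k ** (1.0 / ndim))
```

In an inversion like the one above, each model parameter (e.g. the shape factor q or the amplitude coefficient k) would be perturbed this way and the candidate model accepted or rejected with the usual Metropolis rule on the data misfit.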

Journal ArticleDOI
TL;DR: In this paper, an updated model of light and charge yields from nuclear recoils in liquid xenon with a simultaneously constrained parameter set is presented, where the model is constrained by constructing global cost functions and using a simulated annealing algorithm and a Markov Chain Monte Carlo approach to optimize and find confidence intervals on all free parameters in the model.
Abstract: We present an updated model of light and charge yields from nuclear recoils in liquid xenon with a simultaneously constrained parameter set. A global analysis is performed using measurements of electron and photon yields compiled from all available historical data, as well as measurements of the ratio of the two. These data sweep over energies from 1–300 keV and external applied electric fields from 0–4060 V/cm. The model is constrained by constructing global cost functions and using a simulated annealing algorithm and a Markov Chain Monte Carlo approach to optimize and find confidence intervals on all free parameters in the model. This analysis contrasts with previous work in that we do not unnecessarily exclude data sets nor impose artificially conservative assumptions, do not use spline functions, and reduce the number of parameters used in NEST v0.98. We report our results and the calculated best-fit charge and light yields. These quantities are crucial to understanding the response of liquid xenon detectors in the energy regime important for rare event searches such as the direct detection of dark matter particles.

Journal ArticleDOI
TL;DR: The results show that GA is competitive only for pairwise testing on subjects with a small number of constraints; the results for the greedy algorithm are actually slightly superior. However, the results are critically dependent on the approach adopted to constraint handling.
Abstract: Combinatorial interaction testing (CIT) is important because it tests the interactions between the many features and parameters that make up the configuration space of software systems. Simulated Annealing (SA) and Greedy Algorithms have been widely used to find CIT test suites. From the literature, there is a widely-held belief that SA is slower, but produces more effective test suites than Greedy and that SA cannot scale to higher strength coverage. We evaluated both algorithms on seven real-world subjects for the well-studied two-way up to the rarely-studied six-way interaction strengths. Our findings present evidence to challenge this current orthodoxy: real-world constraints allow SA to achieve higher strengths. Furthermore, there was no evidence that Greedy was less effective (in terms of time to fault revelation) compared to SA; the results for the greedy algorithm are actually slightly superior. However, the results are critically dependent on the approach adopted to constraint handling. Moreover, we have also evaluated a genetic algorithm for constrained CIT test suite generation. This is the first time strengths higher than 3 and constraint handling have been used to evaluate GA. Our results show that GA is competitive only for pairwise testing on subjects with a small number of constraints.

Journal ArticleDOI
01 Oct 2015
TL;DR: The objective of the work presented in this paper is to develop an effective method for classification problems that can find high-quality solutions at a high convergence speed; to achieve this objective, a method that hybridizes the firefly algorithm with simulated annealing (denoted SFA) is proposed.
Abstract: Hybridizes the firefly algorithm with simulated annealing, where simulated annealing is applied to control the randomness step inside the firefly algorithm. A Levy flight is embedded within the firefly algorithm to better explore the search space. A combination of firefly, Levy flight and simulated annealing is investigated to further improve the solution. Classification is one of the important tasks in data mining. The probabilistic neural network (PNN) is a well-known and efficient approach for classification. The objective of the work presented in this paper is to build on this approach to develop an effective method for classification problems that can find high-quality solutions (with respect to classification accuracy) at a high convergence speed. To achieve this objective, we propose a method that hybridizes the firefly algorithm with simulated annealing (denoted as SFA), where simulated annealing is applied to control the randomness step inside the firefly algorithm while optimizing the weights of the standard PNN model. We also extend our work by investigating the effectiveness of using Levy flight within the firefly algorithm (denoted as LFA) to better explore the search space and by integrating SFA with Levy flight (denoted as LSFA) in order to improve the performance of the PNN. The algorithms were tested on 11 standard benchmark datasets. Experimental results indicate that the LSFA shows better performance than the SFA and LFA. Moreover, when compared with other algorithms in the literature, the LSFA is able to obtain better results in terms of classification accuracy.

Journal ArticleDOI
TL;DR: The imperialist competitive algorithm (ICA) is implemented as a newly developed, powerful optimization tool, and an innovative method of imposing Grashof's law for generating an acceptable initial population is presented to improve optimization performance.

Journal ArticleDOI
TL;DR: Simulation results indicate that the use of simulated annealing and thermodynamic simulated annealing in the scheduling of a dynamic multi-cloud system with virtual machines of heterogeneous performance serving Bag-of-Tasks applications can have a significant impact on performance while maintaining a good cost-performance trade-off.