
Showing papers on "Simulated annealing published in 2013"


Journal ArticleDOI
TL;DR: This experiment uses groups of eight superconducting flux qubits with programmable spin-spin couplings, embedded on a commercially available chip with >100 functional qubits, and suggests that programmable quantum devices, scalable with current superconducting technology, implement quantum annealing with a surprising robustness against noise and imperfections.
Abstract: Quantum annealing is the quantum computational equivalent of the classical approach to solving optimization problems known as simulated annealing. Boixo et al. report experimental evidence for the realization of quantum annealing processes that are unexpectedly robust against noise and imperfections.

314 citations


Journal ArticleDOI
TL;DR: Comparison between the results obtained by the proposed algorithm and those obtained by other optimization algorithms shows the superior performance of the proposed algorithm.

289 citations


Journal ArticleDOI
TL;DR: In this article, an integrated rule-based meta-heuristic optimization approach is used for a multi-level energy management system in a multi-source electric vehicle, sharing energy and power between two sources with different characteristics.

270 citations


Journal ArticleDOI
TL;DR: The proposed IADE algorithm provides better performance for estimation of the solar cell and module parameter values than other popular optimization methods such as particle swarm optimization, genetic algorithm, conventional DE, simulated annealing (SA), and a recently proposed analytical method.

237 citations


Journal ArticleDOI
TL;DR: GenSA as discussed by the authors is a package for generalized simulated annealing to process complicated nonlinear objective functions with a large number of local minima, which can serve as a complementary tool to other widely used R packages for optimization.
Abstract: Many problems in statistics, finance, biology, pharmacology, physics, mathematics, economics, and chemistry involve determination of the global minimum of multidimensional functions. R packages for different stochastic methods such as genetic algorithms and differential evolution have been developed and successfully used in the R community. Based on Tsallis statistics, the R package GenSA was developed for generalized simulated annealing to process complicated non-linear objective functions with a large number of local minima. In this paper we provide a brief introduction to the R package and demonstrate its utility by solving a non-convex portfolio optimization problem in finance and the Thomson problem in physics. GenSA is useful and can serve as a complementary tool to, rather than a replacement for, other widely used R packages for optimization.

234 citations


01 Jan 2013
TL;DR: A brief introduction to the GenSA R package is provided and its utility is demonstrated by solving a non-convex portfolio optimization problem in finance and the Thomson problem in physics.
Abstract: Many problems in statistics, finance, biology, pharmacology, physics, mathematics, economics, and chemistry involve determination of the global minimum of multidimensional functions. R packages for different stochastic methods such as genetic algorithms and differential evolution have been developed and successfully used in the R community. Based on Tsallis statistics, the R package GenSA was developed for generalized simulated annealing to process complicated non-linear objective functions with a large number of local minima. In this paper we provide a brief introduction to the R package and demonstrate its utility by solving a non-convex portfolio optimization problem in finance and the Thomson problem in physics. GenSA is useful and can serve as a complementary tool to, rather than a replacement for, other widely used R packages for optimization. In metallurgy, annealing a molten metal causes it to reach its crystalline state, which is the global minimum in terms of thermodynamic energy. The simulated annealing algorithm was developed to simulate the annealing process to find a global minimum of the objective function (Kirkpatrick et al., 1983). In the simulated annealing algorithm, the objective function is treated as the energy function of a molten metal and one or more artificial temperatures are introduced and gradually cooled, analogous to the annealing technique. This artificial temperature (or set of temperatures) acts as a source of stochasticity, which allows the system to eventually escape from local minima. Near the end of the annealing process, the system is hopefully inside the attractive basin of the global minimum (or in one of the global minima if more than one global minimum exists). In contrast to the simulation of the annealing process of molten metal, genetic algorithms (Holland, 1975) were developed by mimicking the process of natural evolution. A population of strings which encode candidate solutions for an optimization problem evolves over many iterations toward better solutions. In general the solutions are represented by bitstrings, but other encodings such as floating-point numbers are also widely used. The evolution usually starts from a population of randomly generated individuals. In each generation, the fitness of each individual in the population is evaluated. New members of the population in the next generation are generated by cross-over, mutation, and selection (based on their fitness). Differential evolution belongs to a class of genetic algorithms. The basic idea behind the taboo search method (Glover et al., 1993) is to forbid the search to return to points already visited in the (usually discrete) search space, at least for the upcoming few steps. Similar to simulated annealing, taboo search can temporarily accept new solutions which are worse than earlier solutions, in order to avoid paths already investigated. Taboo search has traditionally been applied to combinatorial optimization problems and it has been extended to be applicable to continuous global optimization problems by a discrete approximation (encoding) of the problem (Cvijovic and Klinowski, 2002, 1995).

217 citations
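The annealing loop sketched in the abstract above is compact enough to show directly. Below is a minimal illustration in Python, not GenSA's actual implementation: the continuous search space, Gaussian proposal moves, and geometric cooling schedule are all illustrative choices.

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995, n_iter=20000):
    """Minimize f by simulated annealing with a geometric cooling schedule."""
    x, fx = list(x0), f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(n_iter):
        # Propose a random neighbour of the current point.
        cand = [xi + random.gauss(0.0, step) for xi in x]
        fc = f(cand)
        # Always accept downhill moves; accept uphill moves with Boltzmann
        # probability, which is what lets the search escape local minima.
        if fc < fx or random.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling  # gradually cool the artificial temperature
    return best, fbest

# Rastrigin: a standard multimodal test function with many local minima.
def rastrigin(x):
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

print(simulated_annealing(rastrigin, [3.0, -2.5]))
```

Generalized simulated annealing of the Tsallis kind used by GenSA replaces the Gaussian proposal and Boltzmann acceptance above with heavier-tailed visiting and acceptance distributions, which tends to help on rugged landscapes.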


Journal ArticleDOI
TL;DR: CATMIP as discussed by the authors combines the Metropolis algorithm with elements of simulated annealing and genetic algorithms to dynamically optimize the algorithm's efficiency as it runs, and it works independently of the model design, a priori constraints and data under consideration, and can be used for a wide variety of scientific problems.
Abstract: The estimation of finite fault earthquake source models is an inherently underdetermined problem: there is no unique solution to the inverse problem of determining the rupture history at depth as a function of time and space when our data are limited to observations at the Earth’s surface. Bayesian methods allow us to determine the set of all plausible source model parameters that are consistent with the observations, our a priori assumptions about the physics of the earthquake source and wave propagation, and models for the observation errors and the errors due to the limitations in our forward model. Because our inversion approach does not require inverting any matrices other than covariance matrices, we can restrict our ensemble of solutions to only those models that are physically defensible while avoiding the need to restrict our class of models based on considerations of numerical invertibility. We only use prior information that is consistent with the physics of the problem rather than some artifice (such as smoothing) needed to produce a unique optimal model estimate. Bayesian inference can also be used to estimate model-dependent and internally consistent effective errors due to shortcomings in the forward model or data interpretation, such as poor Green’s functions or extraneous signals recorded by our instruments. Until recently, Bayesian techniques have been of limited utility for earthquake source inversions because they are computationally intractable for problems with as many free parameters as typically used in kinematic finite fault models. Our algorithm, called cascading adaptive transitional metropolis in parallel (CATMIP), allows sampling of high-dimensional problems in a parallel computing framework. CATMIP combines the Metropolis algorithm with elements of simulated annealing and genetic algorithms to dynamically optimize the algorithm’s efficiency as it runs. The algorithm is a generic Bayesian Markov Chain Monte Carlo sampler; it works independently of the model design, a priori constraints and data under consideration, and so can be used for a wide variety of scientific problems. We compare CATMIP’s efficiency relative to several existing sampling algorithms and then present synthetic performance tests of finite fault earthquake rupture models computed using CATMIP.

197 citations
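CATMIP layers parallel chains, resampling, and genetic-style exchanges on top of a tempered Metropolis kernel. The kernel alone is easy to sketch; the Python fragment below illustrates only that core idea, with names and proposal choices that are assumptions rather than the published algorithm.

```python
import math
import random

def tempered_metropolis_step(x, log_prior, log_like, beta, step=0.1):
    """One Metropolis update targeting p(x) proportional to
    prior(x) * likelihood(x)**beta.

    Transitional / annealed samplers raise beta from 0 (sampling the prior)
    to 1 (sampling the full posterior) in stages, so early stages explore
    freely and later stages concentrate on high-probability models.
    """
    cand = [xi + random.gauss(0.0, step) for xi in x]
    log_ratio = (log_prior(cand) - log_prior(x)
                 + beta * (log_like(cand) - log_like(x)))
    # Accept with probability min(1, exp(log_ratio)).
    if log_ratio >= 0 or random.random() < math.exp(log_ratio):
        return cand
    return x

# Toy example: flat prior, Gaussian log-likelihood centred at 2.
flat_prior = lambda x: 0.0
gauss_like = lambda x: -0.5 * (x[0] - 2.0) ** 2
x = [0.0]
for _ in range(1000):
    x = tempered_metropolis_step(x, flat_prior, gauss_like, beta=1.0)
print(x)
```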


Journal ArticleDOI
01 Mar 2013
TL;DR: A novel hybrid optimization algorithm entitled hybrid robust differential evolution (HRDE) is developed by adding positive properties of the Taguchi's method to the differential evolution algorithm for minimizing the production cost associated with multi-pass turning problems.
Abstract: Hybridizing optimization algorithms provides scope to improve the searching abilities of the resulting method. The purpose of this paper is to develop a novel hybrid optimization algorithm entitled hybrid robust differential evolution (HRDE) by adding positive properties of Taguchi's method to the differential evolution algorithm for minimizing the production cost associated with multi-pass turning problems. The proposed optimization approach is applied to two case studies for multi-pass turning operations to illustrate the effectiveness and robustness of the proposed algorithm in machining operations. The results reveal that the proposed hybrid algorithm is more effective than particle swarm optimization algorithm, immune algorithm, hybrid harmony search algorithm, hybrid genetic algorithm, scatter search algorithm, genetic algorithm and integration of simulated annealing and Hooke-Jeeves pattern search.

196 citations


Journal ArticleDOI
TL;DR: A community detection method based on modularity and an improved genetic algorithm (MIGA) is put forward, which takes the modularity Q as the objective function, and uses prior information, which makes the algorithm more targeted and improves the stability and accuracy of community detection.
Abstract: Complex networks are widely applied in every aspect of human society, and community detection is a research hotspot in complex networks. Many algorithms use modularity as the objective function, which can simplify the algorithm. In this paper, a community detection method based on modularity and an improved genetic algorithm (MIGA) is put forward. MIGA takes the modularity Q as the objective function, which can simplify the algorithm, and uses prior information (the number of community structures), which makes the algorithm more targeted and improves the stability and accuracy of community detection. Meanwhile, MIGA takes the simulated annealing method as the local search method, which can improve the ability of local search by adjusting the parameters. Compared with state-of-the-art algorithms, simulation results on computer-generated and four real-world networks reflect the effectiveness of MIGA.

180 citations
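The modularity Q that MIGA maximizes has a standard closed form, Newman's Q = (1/2m) Σ_ij [A_ij − k_i k_j / 2m] δ(c_i, c_j). A small self-contained sketch, written for clarity rather than speed (it is quadratic in the number of nodes) and not taken from the paper's implementation:

```python
def modularity(adj, communities):
    """Newman's modularity: Q = (1/2m) * sum over node pairs (i, j) in the
    same community of [A_ij - k_i * k_j / (2m)].

    adj: dict mapping node -> set of neighbours (undirected, no self-loops).
    communities: dict mapping node -> community label.
    """
    two_m = sum(len(nbrs) for nbrs in adj.values())  # 2m = sum of degrees
    q = 0.0
    for i in adj:
        for j in adj:
            if communities[i] != communities[j]:
                continue
            a_ij = 1.0 if j in adj[i] else 0.0
            q += a_ij - len(adj[i]) * len(adj[j]) / two_m
    return q / two_m

# Two triangles joined by a single edge: a clear two-community structure.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
print(modularity(adj, {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}))  # about 0.357
```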


Journal ArticleDOI
TL;DR: The ant colony optimization (ACO) algorithm is used for the optimization in this paper, and the computing results show that the ACO algorithm performs better in process planning optimization than the other three algorithms.
Abstract: One objective of process planning optimization is to cut down the total cost of the machining process, and the ant colony optimization (ACO) algorithm is used for the optimization in this paper. Firstly, the process planning problem, considering the selection of machining resources, operations sequence optimization and the manufacturing constraints, is mapped to a weighted graph and is converted to a constraint-based traveling salesman problem. The operation sets for each manufacturing feature are mapped to city groups, the costs for machining processes (including machine cost and tool cost) are converted to the weights of the cities, and the costs for preparing processes (including machine changing, tool changing and set-up changing) are converted to the 'distance' between cities. Then, the mathematical model for the process planning problem is constructed by considering the machining constraints and the optimization goal. The ACO algorithm has been employed to solve the proposed mathematical model. In order to ensure the feasibility of the process plans, the Constraint Matrix and State Matrix are used in this algorithm to show the state of the operations and the searching range of the candidate operations. Two prismatic parts are used to compare the ACO algorithm with tabu search, simulated annealing and genetic algorithm. The computing results show that the ACO algorithm performs better in process planning optimization than the other three algorithms.

147 citations
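The heart of any ACO variant is the pheromone-biased rule for extending a partial tour. The sketch below shows the conventional transition rule; in the paper the candidate set would additionally be filtered through the Constraint and State matrices, which are not reproduced here, and the parameter names are the textbook ones rather than the paper's.

```python
import random

def choose_next(current, candidates, tau, eta, alpha=1.0, beta=2.0):
    """Standard ACO transition rule: choose the next operation/city with
    probability proportional to pheromone**alpha * heuristic**beta.

    tau[(i, j)]: pheromone on edge i -> j.
    eta[(i, j)]: heuristic desirability, e.g. 1 / cost of doing j after i.
    """
    weights = [tau[(current, j)] ** alpha * eta[(current, j)] ** beta
               for j in candidates]
    r = random.uniform(0.0, sum(weights))
    for j, w in zip(candidates, weights):  # roulette-wheel selection
        r -= w
        if r <= 0:
            return j
    return candidates[-1]

# Equal pheromone, but operation 2 is cheaper to perform after 0 than 1 is.
tau = {(0, 1): 1.0, (0, 2): 1.0}
eta = {(0, 1): 1 / 4.0, (0, 2): 1 / 2.0}
print(choose_next(0, [1, 2], tau, eta))  # returns 2 most of the time
```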


Journal ArticleDOI
TL;DR: A novel discrete chaotic harmony search-based simulated annealing algorithm, named DCHSSA, is developed and the proposed methodology is used to find the optimum design of a PV/wind hybrid system.

Journal ArticleDOI
TL;DR: MORPGEASA, a Pareto-based hybrid algorithm that combines evolutionary computation and simulated annealing, is proposed and analyzed for solving multi-objective formulations of the VRPTW, and the results obtained show the good performance of this hybrid approach.

Journal ArticleDOI
TL;DR: In this paper, a two-stage mixed-integer programming (MIP) model is presented for locating cross-docking centers and for vehicle routing and scheduling with cross-docking, motivated by potential applications in distribution networks.

Journal ArticleDOI
TL;DR: The procedure MT-PSA outperforms SPEA2 in the benchmarks considered here with respect to solution quality and execution time, and computational results obtained on Solomon's benchmark problems show that the island-based parallelization produces Pareto fronts of higher quality than those obtained by the sequential versions without increasing the computational cost.
Abstract: The Capacitated Vehicle Routing Problem with Time Windows (VRPTW) consists of determining the routes of a given number of vehicles with identical capacity stationed at a central depot which are used to supply the demands of a set of customers within certain time windows. This is a complex multi-constrained problem with industrial, economic, and environmental implications that has been widely analyzed in the past. This paper deals with a multi-objective variant of the VRPTW that simultaneously minimizes the travelled distance and the imbalance of the routes. This imbalance is analyzed from two perspectives: the imbalance in the distances travelled by the vehicles, and the imbalance in the loads delivered by them. A multi-objective procedure based on Simulated Annealing, the Multiple Temperature Pareto Simulated Annealing (MT-PSA), is proposed in this paper to cope with these multi-objective formulations of the VRPTW. The procedure MT-PSA and an island-based parallel version of MT-PSA have been evaluated and compared with, respectively, sequential and island-based parallel implementations of SPEA2. Computational results obtained on Solomon's benchmark problems show that the island-based parallelization produces Pareto fronts of higher quality than those obtained by the sequential versions without increasing the computational cost, while also producing significant reductions in runtime without sacrificing solution quality. More specifically, for the most part, our procedure MT-PSA outperforms SPEA2 in the benchmarks considered here with respect to solution quality and execution time.
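The three objectives of this VRPTW variant are easy to state concretely. In the sketch below, the range (max minus min) is one common way to quantify imbalance; the paper may define it differently, and the data structures (distance matrix, per-customer demand) are assumptions.

```python
def route_objectives(routes, dist, demand):
    """Objectives for the multi-objective VRPTW variant described above:
    total travelled distance, plus two imbalance measures (range = max - min)
    over per-route distances and per-route delivered loads.

    routes: node sequences, each starting and ending at the depot.
    dist[a][b]: travel distance; demand[node]: demand (0 at the depot).
    """
    lengths = [sum(dist[a][b] for a, b in zip(r, r[1:])) for r in routes]
    loads = [sum(demand[c] for c in r) for r in routes]
    return sum(lengths), max(lengths) - min(lengths), max(loads) - min(loads)

# Depot 0 and two customers served by two vehicles.
dist = [[0, 4, 6], [4, 0, 3], [6, 3, 0]]
demand = [0, 2, 5]
print(route_objectives([[0, 1, 0], [0, 2, 0]], dist, demand))  # (20, 4, 3)
```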

Journal ArticleDOI
15 Jun 2013-Energy
TL;DR: Experimental results show that, in terms of model accuracy and training time, ELM with the logarithmic transformation is better than LS-SVM and RBFNN with/without the logarithmic transformation, and PSO outperforms SA in terms of fitness and standard deviation.

Journal ArticleDOI
TL;DR: A joint optimization framework is presented, which combines the objective of control as well as other relevant system objectives and constraints such as communication errors, delays and the limited capabilities of devices.
Abstract: Networked cyber-physical systems (NCPS), where control and communication are closely integrated, have been envisioned to have a large number of high-impact applications. In this paper, a joint optimization framework is presented, which combines the objective of control as well as other relevant system objectives and constraints such as communication errors, delays and the limited capabilities (e.g., energy capacities) of devices. The problem is solved by an online optimization approach, which consists of a communication protocol and a simulated annealing based control algorithm. Meanwhile, by taking into account the communication cost, we optimize the control intervals by integrating two kinds of acceptances, i.e., cyber and physical acceptances, into the control algorithm. Numerical results show the effectiveness of the proposed approach.

Journal ArticleDOI
TL;DR: This work presents and compares formulations and procedures for the optimization of the task allocation, the signal to message mapping, and the assignment of priorities to tasks and messages in order to meet end-to-end deadline constraints and minimize latencies.
Abstract: The complexity and physical distribution of modern active safety, chassis, and powertrain automotive applications requires the use of distributed architectures. Complex functions designed as networks of function blocks exchanging signal information are deployed onto the physical HW and implemented in a SW architecture consisting of a set of tasks and messages. The typical configuration features priority-based scheduling of tasks and messages and imposes end-to-end deadlines. In this work, we present and compare formulations and procedures for the optimization of the task allocation, the signal to message mapping, and the assignment of priorities to tasks and messages in order to meet end-to-end deadline constraints and minimize latencies. Our formulations leverage worst-case response time analysis within a mixed integer linear optimization framework and are compared for performance against a simulated annealing implementation. The methods are applied for evaluation to an automotive case study of complexity comparable to industrial design problems.
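The worst-case response time analysis that the optimization leverages is, in its simplest single-resource form, the classic fixed-point recurrence R = C + Σ_{j ∈ hp} ⌈R/T_j⌉ C_j for preemptive fixed-priority scheduling. The paper's analysis is end-to-end over chains of tasks and messages; the sketch below shows only this basic building block.

```python
import math

def response_time(c, higher_prio, deadline):
    """Worst-case response time under preemptive fixed-priority scheduling:
    the smallest fixed point of R = C + sum(ceil(R / T_j) * C_j).

    c: execution time of the task under analysis.
    higher_prio: list of (C_j, T_j) for all higher-priority tasks.
    Returns None if the iteration exceeds the deadline (unschedulable).
    """
    r = c
    while True:
        r_next = c + sum(math.ceil(r / t_j) * c_j for c_j, t_j in higher_prio)
        if r_next > deadline:
            return None  # iteration diverging past the deadline
        if r_next == r:
            return r
        r = r_next

# Task with C=3 and deadline 20, preempted by (C=1, T=4) and (C=2, T=10).
print(response_time(3, [(1, 4), (2, 10)], deadline=20))  # -> 7
```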

Journal ArticleDOI
TL;DR: In this paper, the authors presented the optimization aspects of process parameters of three machining processes including an advanced machining process known as abrasive water jet machining, grinding and milling.
Abstract: The optimum selection of process parameters plays a significant role to ensure quality of product, to reduce the machining cost and to increase the productivity of any machining process. This paper presents the optimization aspects of process parameters of three machining processes including an advanced machining process known as abrasive water jet machining process and two important conventional machining processes namely grinding and milling. A recently developed advanced optimization algorithm, teaching–learning-based optimization (TLBO), is presented to find the optimal combination of process parameters of the considered machining processes. The results obtained by using TLBO algorithm are compared with those obtained by using other advanced optimization techniques such as genetic algorithm, simulated annealing, particle swarm optimization, harmony search, and artificial bee colony algorithm. The results show better performance of the TLBO algorithm.
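TLBO's distinguishing move is its 'teacher phase', in which every candidate is pulled toward the current best solution and away from the population mean, with no algorithm-specific tuning parameters. A minimal sketch of that phase (the full algorithm adds a 'learner phase' of pairwise interactions, omitted here):

```python
import random

def tlbo_teacher_phase(pop, f):
    """One TLBO 'teacher phase': each learner moves toward the best solution
    (the teacher) and away from the class mean; a move is kept only if it
    improves the objective f."""
    dim = len(pop[0])
    teacher = min(pop, key=f)
    mean = [sum(x[d] for x in pop) / len(pop) for d in range(dim)]
    new_pop = []
    for x in pop:
        tf = random.choice([1, 2])  # teaching factor
        cand = [x[d] + random.random() * (teacher[d] - tf * mean[d])
                for d in range(dim)]
        new_pop.append(cand if f(cand) < f(x) else x)  # greedy acceptance
    return new_pop

sphere = lambda x: sum(v * v for v in x)
pop = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(20)]
for _ in range(100):
    pop = tlbo_teacher_phase(pop, sphere)
print(min(sphere(x) for x in pop))
```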

Journal ArticleDOI
01 Jan 2013
TL;DR: The proposed GenClustMOO is able to detect the appropriate number of clusters and the appropriate partitioning from data sets having either well-separated clusters of any shape or symmetrical clusters with or without overlaps.
Abstract: In this paper a new multiobjective (MO) clustering technique (GenClustMOO) is proposed which can automatically partition the data into an appropriate number of clusters. Each cluster is divided into several small hyperspherical subclusters and the centers of all these small subclusters are encoded in a string to represent the whole clustering. For assigning points to different clusters, these local subclusters are considered individually. For the purpose of objective function evaluation, these subclusters are merged appropriately to form a variable number of global clusters. Three objective functions, one reflecting the total compactness of the partitioning based on the Euclidean distance, another reflecting the total symmetry of the clusters, and the last reflecting the cluster connectedness, are considered here. These are optimized simultaneously using AMOSA, a newly developed simulated annealing based multiobjective optimization method, in order to detect the appropriate number of clusters as well as the appropriate partitioning. The symmetry present in a partitioning is measured using a newly developed point symmetry based distance. Connectedness present in a partitioning is measured using the relative neighborhood graph concept. Since AMOSA, as well as any other MO optimization technique, provides a set of Pareto-optimal solutions, a new method is also developed to determine a single solution from this set. Thus the proposed GenClustMOO is able to detect the appropriate number of clusters and the appropriate partitioning from data sets having either well-separated clusters of any shape or symmetrical clusters with or without overlaps. The effectiveness of the proposed GenClustMOO in comparison with another recent multiobjective clustering technique (MOCK), a single objective genetic algorithm based automatic clustering technique (VGAPS-clustering), K-means and single linkage clustering techniques is comprehensively demonstrated for nineteen artificial and seven real-life data sets of varying complexities. In part of the experiment the effectiveness of AMOSA as the underlying optimization technique in GenClustMOO is also demonstrated in comparison to another evolutionary MO algorithm, PESA2.

Journal ArticleDOI
TL;DR: GSA is a new cooperative-agents approach, inspired by the gravitational interaction among the masses present in the universe, and is applied here to solve the thermal unit commitment (UC) problem.

Proceedings ArticleDOI
09 Sep 2013
TL;DR: A thorough experimental analysis shows that the minimal edge-cut value achieved by JA-BE-JA is comparable to state-of-the-art centralized algorithms such as METIS, which makes JA-BE-JA, a bottom-up, self-organizing algorithm, a highly competitive practical solution for graph partitioning.
Abstract: Balanced graph partitioning is a well known NP-complete problem with a wide range of applications. These applications include many large-scale distributed problems, including the optimal storage of large sets of graph-structured data over several hosts, a key problem in today's Cloud infrastructure. However, in very large-scale distributed scenarios, state-of-the-art algorithms are not directly applicable, because they typically involve frequent global operations over the entire graph. In this paper, we propose a fully distributed algorithm, called JA-BE-JA, that uses local search and simulated annealing techniques for graph partitioning. The algorithm is massively parallel: there is no central coordination, each node is processed independently, and only the direct neighbors of the node, and a small subset of random nodes in the graph need to be known locally. Strict synchronization is not required. These features allow JA-BE-JA to be easily adapted to any distributed graph-processing system, from data centers to fully distributed networks. We perform a thorough experimental analysis, which shows that the minimal edge-cut value achieved by JA-BE-JA is comparable to state-of-the-art centralized algorithms such as METIS. In particular, on large social networks JA-BE-JA outperforms METIS, which makes JA-BE-JA, a bottom-up, self-organizing algorithm, a highly competitive practical solution for graph partitioning.
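The local operation at the heart of JA-BE-JA is a pairwise colour swap, which preserves the size of every partition by construction; that is how balance is maintained without central coordination. The acceptance test below is a simplified rendering of the published rule (which also raises the utilities to a power alpha while the temperature decays toward 1):

```python
def utility(node, color, colors, adj):
    """Number of neighbours of `node` that have the given colour;
    higher utility means fewer cut edges around this node."""
    return sum(1 for n in adj[node] if colors[n] == color)

def try_swap(p, q, colors, adj, temperature):
    """Attempt a JA-BE-JA-style colour swap between nodes p and q.

    A swap never changes how many nodes hold each colour, so partition
    balance is preserved by construction. A temperature > 1 relaxes the
    greedy test, simulated-annealing style, and is decayed toward 1.
    """
    if colors[p] == colors[q]:
        return False  # nothing to exchange
    before = utility(p, colors[p], colors, adj) + utility(q, colors[q], colors, adj)
    after = utility(p, colors[q], colors, adj) + utility(q, colors[p], colors, adj)
    if after * temperature > before:
        colors[p], colors[q] = colors[q], colors[p]
        return True
    return False
```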

Journal ArticleDOI
TL;DR: In this paper, the adaptive neuro-fuzzy inference system (ANFIS) has been used to generate mapping relationship between process factors and main response using experimental observations, and the developed models were applied as objective function to select optimal parameters, in which the process reaches to its desirable mechanical properties by using the simulated annealing algorithm.
Abstract: Current work deals with experimental investigation, modeling, and optimization of friction stir welding process (FSW) to reach desirable mechanical properties of aluminum 7075 plates. Main factors of process were tool pin profile, tool rotary speed, welding speed, and welding axial force. Also, main responses were tensile strength, yield strength, and hardness of welded zone. Four factors and five levels of central composite design have been utilized to minimize the number of experimental observations. Then, adaptive neuro-fuzzy inference systems (ANFIS) have been used to generate mapping relationship between process factors and main response using experimental observations. Afterward, the developed models were applied as objective function to select optimal parameters, in which the process reaches to its desirable mechanical properties by using the simulated annealing algorithm. Results indicated that the tool with square pin profile, rotary speed of 1,400 RPM, welding speed of 1.75 mm/s, and axial force of 7.5 KN resulted in desirable mechanical properties in both cases of single response and multi-response optimization. Also, these solutions have been verified by confirmation tests and FSW process physical behavior. These verifications indicated that both ANFIS model and simulated annealing algorithm are appropriate tools for modeling and optimization of process.

Journal ArticleDOI
TL;DR: Computational results indicate that the proposed algorithm is superior in terms of the number and quality of non-dominated solutions compared to other algorithms in the literature.
Abstract: The flexible job-shop problem has been widely addressed in the literature. Due to its complexity, it is still under consideration for research. This paper addresses the flexible job-shop scheduling problem (FJSP) with three objectives to be minimized simultaneously: makespan, maximal machine workload, and total workload. Due to the discrete nature of the FJSP, conventional particle swarm optimization (PSO) fails to address this problem, and therefore a variant of PSO for discrete problems is presented. A hybrid discrete particle swarm optimization (DPSO) and simulated annealing (SA) algorithm is proposed to identify an approximation of the Pareto front for FJSP. In the proposed hybrid algorithm, DPSO is used for global search and SA for local search. Furthermore, Pareto ranking and the crowding distance method are incorporated to evaluate the fitness of particles in the proposed algorithm. The displacement of particles is redefined and a new strategy is presented to retain all non-dominated solutions during iterations. In the presented algorithm, the pbest of particles is used to store a fixed number of non-dominated solutions instead of an external archive. Experiments are performed to assess the performance of the proposed algorithm compared to some well-known algorithms in the literature. Two benchmark sets are presented to study the efficiency of the proposed algorithm. Computational results indicate that the proposed algorithm is superior in terms of the number and quality of non-dominated solutions compared to other algorithms in the literature.
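Pareto ranking and crowding distance are standard machinery, originally from NSGA-II; the abstract does not spell out the exact variant used, so the sketch below shows the usual definitions for minimization problems.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all <=, at least one <)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def crowding_distance(front):
    """NSGA-II-style crowding distance over a list of objective vectors;
    boundary points get infinite distance so they are always retained."""
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        lo, hi = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float('inf')
        if hi == lo:
            continue
        for rank in range(1, n - 1):
            i = order[rank]
            dist[i] += (front[order[rank + 1]][k] - front[order[rank - 1]][k]) / (hi - lo)
    return dist

front = [(1.0, 9.0), (3.0, 5.0), (4.0, 4.0), (8.0, 1.0)]
print(dominates((1.0, 9.0), (3.0, 5.0)))  # False: a trade-off, neither dominates
print(crowding_distance(front))
```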

Journal ArticleDOI
TL;DR: Simulation results demonstrate that AR-ICA is an efficient optimization technique, since it obtained promising solutions for four different benchmarks of the reliability-redundancy allocation problem when compared with the previously best-known results presented in the literature.
Abstract: System reliability analysis and optimization are important to efficiently utilize available resources and to develop an optimal system design architecture. System reliability optimization has been solved by using optimization techniques including meta-heuristics. Meanwhile, the development of meta-heuristics has been an active research field of reliability optimization, wherein the redundancy, the component reliability, or both are to be determined. In recent years, a broad class of stochastic meta-heuristics, such as simulated annealing, genetic algorithm, tabu search, ant colony, and particle swarm optimization paradigms, has been developed for reliability-redundancy optimization of systems. Recently, a new kind of evolutionary algorithm called the Imperialist Competitive Algorithm (ICA) was proposed. The ICA is based on imperialistic competition, where the populations are represented by countries, which are classified as imperialists or colonies. However, the trade-off between exploration (i.e. the global search) and exploitation (i.e. the local search) of the search space is critical to the success of the classical ICA approach. An improvement to the ICA that implements an attraction and repulsion concept during the search for better solutions, the AR-ICA approach, is proposed in this paper. Simulation results demonstrate that AR-ICA is an efficient optimization technique, since it obtained promising solutions for four different benchmarks of the reliability-redundancy allocation problem when compared with the previously best-known results presented in the literature.

Journal ArticleDOI
TL;DR: A novel probabilistic sensing model for sensors with line-of-sight-based coverage is proposed to tackle the sensor placement problem for these sensors; the model consists of membership functions for sensing range and sensing angle and takes into consideration sensing capacity probability as well as critical environmental factors such as terrain topography.
Abstract: This paper proposes a probabilistic sensor model for the optimization of sensor placement. Traditional schemes rely on simplified models of sensor behaviour and environmental factors. The consequences of these oversimplifications are unrealistic simulation of sensor performance and, thus, suboptimal sensor placement. In this paper, we develop a novel probabilistic sensing model for sensors with line-of-sight-based coverage (e.g., cameras) to tackle the sensor placement problem for these sensors. The probabilistic sensing model consists of membership functions for sensing range and sensing angle, which take into consideration sensing capacity probability as well as critical environmental factors such as terrain topography. We then implement several optimization schemes for sensor placement, including simulated annealing, the limited-memory Broyden-Fletcher-Goldfarb-Shanno method, and the covariance matrix adaptation evolution strategy.

Journal ArticleDOI
TL;DR: In this paper, a robust optimization model is formulated for the proposed problem, which aims to minimize the sum of the expected value of the operator cost and its variability multiplied by a weighting value.
Abstract: The design of urban bus transit systems aims to determine a network configuration with a set of bus lines and associated frequencies that achieve the targeted objective. This paper presents a methodology framework to formulate and solve the bus transit network design problem (TNDP). It first proposes a TNDP taking into account the travel time stochasticity. A robust optimization model is formulated for the proposed problem, which aims to minimize the sum of the expected value of the operator cost and its variability multiplied by a weighting value. A heuristic solution approach, based on k-shortest path algorithm, simulated annealing algorithm, Monte Carlo simulation, and probit-type discrete choice model, is subsequently developed to solve the robust optimization model. Finally, the proposed methodology is applied to a numerical example.
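The robust objective, expected operator cost plus weighted variability, can be estimated by straightforward Monte Carlo. In this sketch the scenario generator `evaluate` and the use of standard deviation as the variability measure are illustrative assumptions; the paper's model couples k-shortest paths, a probit-type choice model, and Monte Carlo simulation.

```python
import random
import statistics

def robust_cost(evaluate, weight=1.0, samples=500):
    """Mean-plus-variability objective of the kind the abstract describes:
    E[cost] + weight * std(cost), estimated by Monte Carlo over random
    travel-time scenarios. `evaluate` draws one scenario and returns its cost.
    """
    costs = [evaluate() for _ in range(samples)]
    return statistics.mean(costs) + weight * statistics.pstdev(costs)

# Toy example: a network design whose cost is noisy around 100.
print(robust_cost(lambda: random.gauss(100.0, 15.0), weight=0.5))
```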

Journal ArticleDOI
TL;DR: This paper studies the simultaneous dock assignment and sequencing of inbound trucks for a multi-door cross-docking operation, with the objective of minimizing total weighted tardiness under a fixed outbound truck departure schedule.

Journal ArticleDOI
TL;DR: Different types of normalizing functions are analyzed and the analyses show that the one that is usually employed in the literature has several flaws, and the paper presents a different normalizing function that is very simple and does not suffer from these limitations.
Abstract: The use of search algorithms for test data generation has seen many successful results. For structural criteria like branch coverage, heuristics have been designed to help the search. The most common heuristic is the use of approach level (usually represented with an integer) to reward test cases whose executions get close (in the control flow graph) to the target branch. To solve the constraints of the predicates in the control flow graph, the branch distance is commonly employed. These two measures are linearly combined. Since the approach level is more important, the branch distance is normalized, often in the range [0, 1]. In this paper, different types of normalizing functions are analyzed. The analyses show that the one that is usually employed in the literature has several flaws. The paper presents a different normalizing function that is very simple and does not suffer from these limitations. Empirical and analytical analyses are carried out to compare these two functions. In particular, their effect is studied on commonly used search algorithms, such as Hill Climbing, Simulated Annealing and Genetic Algorithms.
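Both normalizing functions at issue are one-liners. The alpha-based form below is the one commonly employed in the literature; the rational form x / (x + 1) is the kind of simple alternative the paper advocates, though the paper should be consulted for its exact definition.

```python
def norm_alpha(x, alpha=1.001):
    """Normalization commonly used in the literature: 1 - alpha**(-x)."""
    return 1.0 - alpha ** (-x)

def norm_simple(x):
    """A simple rational alternative: x / (x + 1).

    Both map [0, inf) into [0, 1) monotonically, but the alpha-based form
    saturates quickly and depends on the choice of alpha.
    """
    return x / (x + 1.0)

for x in (0, 1, 10, 1000, 1e6):
    print(x, norm_alpha(x), norm_simple(x))
```

Already at moderate distances the alpha-based function is numerically indistinguishable from 1, which illustrates the saturation flaw: large branch distances all look the same to the search, so the guidance toward the target is lost.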

Journal ArticleDOI
TL;DR: This study proposed a simulated annealing with heuristic local search (SA_HLS) to solve the two-dimensional loading heterogeneous fleet vehicle routing problem; the search was then extended with a collection of packing heuristics to solve the loading constraints in 2L-HFVRP.

Journal ArticleDOI
TL;DR: The explicit finite-difference operator is greatly improved by the optimized scheme, which allows for a tighter error limit; this is shown to be necessary to avoid rapid error accumulation in simulations on large-scale models with long travel times.