
Showing papers on "Simulated annealing published in 2016"


Journal ArticleDOI
04 Nov 2016-Science
TL;DR: It is shown that an optical processing approach based on a network of coupled optical pulses in a ring fiber can be used to model and optimize large-scale Ising systems, and a coherent Ising machine outperformed simulated annealing in terms of accuracy and computation time for a 2000-node complete graph.
Abstract: The analysis and optimization of complex systems can be reduced to mathematical problems collectively known as combinatorial optimization. Many such problems can be mapped onto ground-state search problems of the Ising model, and various artificial spin systems are now emerging as promising approaches. However, physical Ising machines have suffered from limited numbers of spin-spin couplings because of implementations based on localized spins, resulting in severe scalability problems. We report a 2000-spin network with all-to-all spin-spin couplings. Using a measurement and feedback scheme, we coupled time-multiplexed degenerate optical parametric oscillators to implement maximum cut problems on arbitrary graph topologies with up to 2000 nodes. Our coherent Ising machine outperformed simulated annealing in terms of accuracy and computation time for a 2000-node complete graph.
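The reduction from maximum cut to an Ising ground-state search that the abstract relies on is compact enough to show directly. A minimal sketch, assuming a toy graph and a brute-force search purely for illustration (the machine itself explores the spin space through its measurement-and-feedback optics):

```python
import itertools

def maxcut_via_ising(weights):
    """Max cut as an Ising ground-state search.

    For spins s_i in {-1, +1}, the cut value over edges i<j is
        C(s) = 1/2 * sum_{i<j} w_ij * (1 - s_i * s_j),
    so maximizing the cut is the same as minimizing the Ising energy
        E(s) = sum_{i<j} w_ij * s_i * s_j.
    """
    n = len(weights)
    best_cut, best_spins = -1, None
    # Brute force over spin configurations; an Ising machine searches this space physically.
    for spins in itertools.product([-1, 1], repeat=n):
        cut = sum(weights[i][j] * (1 - spins[i] * spins[j]) / 2
                  for i in range(n) for j in range(i + 1, n))
        if cut > best_cut:
            best_cut, best_spins = cut, spins
    return best_cut, best_spins

# toy 4-node complete graph with unit edge weights: the best cut splits the nodes 2-2
w = [[0, 1, 1, 1],
     [1, 0, 1, 1],
     [1, 1, 0, 1],
     [1, 1, 1, 0]]
print(maxcut_via_ising(w))
```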

555 citations


Journal ArticleDOI
TL;DR: It is demonstrated how finite-range tunneling can provide a considerable computational advantage over classical processors: for a crafted problem designed to have tall and narrow energy barriers separating local minima, the D-Wave 2X quantum annealer achieves significant runtime advantages relative to simulated annealing.
Abstract: Quantum annealing is a quantum enhanced heuristic optimization algorithm that exploits quantum tunneling. New work shows that it can significantly outperform its classical analog (simulated annealing) as well as the most popular classical algorithm for simulating quantum annealing (quantum Monte Carlo).

351 citations


Journal ArticleDOI
01 Jun 2016
TL;DR: The proposed hybrid feature selection algorithm, called HPSO-LS, embeds a local search technique in particle swarm optimization to select a reduced-size, salient feature subset and to enhance the search process near global optima.
Abstract: The proposed method uses a local search technique which is embedded in particle swarm optimization (PSO) to select the reduced-size and salient feature subset. The goal of the local search technique is to guide the PSO search process to select distinct features by using their correlation information. Therefore, the proposed method selects the subset of features with reduced redundancy. A hybrid feature selection method based on particle swarm optimization is proposed. Our method uses a novel local search to enhance the search process near global optima. The method efficiently finds the discriminative features with reduced correlations. The size of the final feature set is determined using a subset size detection scheme. Our method is compared with well-known and state-of-the-art feature selection methods. Feature selection has been widely used in data mining and machine learning tasks to build a model with a small number of features which improves the classifier's accuracy. In this paper, a novel hybrid feature selection algorithm based on particle swarm optimization is proposed. The proposed method, called HPSO-LS, uses a local search strategy which is embedded in the particle swarm optimization to select the less correlated and salient feature subset. The goal of the local search technique is to guide the search process of the particle swarm optimization to select distinct features by considering their correlation information. Moreover, the proposed method utilizes a subset size determination scheme to select a subset of features with reduced size. The performance of the proposed method has been evaluated on 13 benchmark classification problems and compared with five state-of-the-art feature selection methods. Moreover, HPSO-LS has been compared with four well-known filter-based methods, including information gain, term variance, Fisher score and mRMR, and five well-known wrapper-based methods, including genetic algorithm, particle swarm optimization, simulated annealing and ant colony optimization. The results demonstrated that the proposed method improves the classification accuracy compared with those of the filter-based and wrapper-based feature selection methods. Furthermore, several statistical tests show that the proposed method's superiority over the other methods is statistically significant.

301 citations


Journal ArticleDOI
TL;DR: A discrete version of the bat algorithm to solve the well-known symmetric and asymmetric Traveling Salesman Problems and an improvement in the basic structure of the classic bat algorithm are proposed.

267 citations


Journal ArticleDOI
TL;DR: In this article, a simulated annealing (SA)-based global maximum power point tracking (GMPPT) technique is proposed for photovoltaic (PV) systems which experience partial shading conditions (PSC).
Abstract: This paper proposes a simulated annealing (SA)-based global maximum power point tracking (GMPPT) technique designed for photovoltaic (PV) systems which experience partial shading conditions (PSC). The proposed technique is compared with the common perturb and observe MPPT technique and the particle swarm optimization method for GMPPT. The performance is assessed by considering the time taken to converge and the number of sample cases where the technique converges to the GMPP. Simulation results indicate the improved performance of the SA-based GMPPT algorithm, with arbitrarily selected parameters, in tracking to the global maxima in a multiple module PV system which experiences PSC. Experimental validation of the technique is presented based on PV modules that experience nonuniform environmental conditions. Additionally, studies regarding the influence of the key parameters of the SA-based algorithm are described. Simulation and experimental results verify the effectiveness of the proposed GMPPT method.
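As a rough illustration of how an SA-based global MPPT loop of this kind can operate, here is a minimal sketch; the pv_power stand-in, perturbation width, and cooling rate are assumptions for demonstration, not the parameters studied in the paper:

```python
import math
import random

def sa_gmppt(pv_power, v_min, v_max, t0=10.0, alpha=0.8, steps=50, seed=0):
    """Simulated-annealing search for the global maximum power point.

    pv_power(v) stands in for a power measurement at operating voltage v;
    under partial shading it may have several local maxima.
    """
    rng = random.Random(seed)
    v = rng.uniform(v_min, v_max)
    p = pv_power(v)
    best_v, best_p = v, p
    t = t0
    for _ in range(steps):
        # Perturb the operating point and clamp it to the valid voltage range.
        v_new = min(max(v + rng.uniform(-0.1, 0.1) * (v_max - v_min), v_min), v_max)
        p_new = pv_power(v_new)
        # Metropolis rule for maximization: always keep improvements,
        # occasionally keep worse points while the temperature is high.
        if p_new >= p or rng.random() < math.exp((p_new - p) / t):
            v, p = v_new, p_new
            if p > best_p:
                best_v, best_p = v, p
        t *= alpha  # geometric cooling
    return best_v, best_p

# toy two-peak P-V curve mimicking partial shading (global peak near v = 30)
curve = lambda v: 80 * math.exp(-((v - 12) / 5) ** 2) + 100 * math.exp(-((v - 30) / 6) ** 2)
print(sa_gmppt(curve, 0.0, 40.0))
```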

210 citations


Journal ArticleDOI
Emad Nabil
TL;DR: An enhanced version of the Flower Pollination Algorithm (FPA) is introduced; compared with five well-known optimization algorithms, it is able to find more accurate solutions than the standard FPA and the other four techniques.
Abstract: An enhanced version of the Flower Pollination Algorithm (FPA) is proposed. Testing is performed using 23 optimization benchmark problems. The proposed algorithm is compared with five well-known optimization algorithms. Experimental results show the superiority of the proposed algorithm. Expert and intelligent systems try to simulate intelligent human experts in solving complex real-world problems. The domain of problems varies from engineering and industry to medicine and education. In most situations, the system is required to take decisions based on multiple inputs, but the search space is usually so huge that it is very hard to use traditional algorithms to take a decision; at this point, metaheuristic algorithms can be used as an alternative tool to find near-optimal solutions. Thus, inventing new metaheuristic techniques and enhancing the current algorithms is necessary. In this paper, we introduce an enhanced variant of the Flower Pollination Algorithm (FPA). We hybridize the standard FPA with the Clonal Selection Algorithm (CSA) and test the new algorithm by applying it to 23 optimization benchmark problems. The proposed algorithm is compared with five famous optimization algorithms, namely, Simulated Annealing, Genetic Algorithm, Flower Pollination Algorithm, Bat Algorithm, and Firefly Algorithm. The results show that the proposed algorithm is able to find more accurate solutions than the standard FPA and the other four techniques. The superiority of the proposed algorithm nominates it for being a part of intelligent and expert systems.

191 citations


Proceedings Article
19 Jun 2016
TL;DR: In this article, the authors propose to exploit the injection of appropriate noise so that the gradients may flow easily, even if the noiseless application of the activation function would yield zero gradients.
Abstract: Common nonlinear activation functions used in neural networks can cause training difficulties due to the saturation behavior of the activation function, which may hide dependencies that are not visible to vanilla-SGD (using first order gradients only). Gating mechanisms that use softly saturating activation functions to emulate the discrete switching of digital logic circuits are good examples of this. We propose to exploit the injection of appropriate noise so that the gradients may flow easily, even if the noiseless application of the activation function would yield zero gradients. Large noise will dominate the noise-free gradient and allow stochastic gradient descent to explore more. By adding noise only to the problematic parts of the activation function, we allow the optimization procedure to explore the boundary between the degenerate (saturating) and the well-behaved parts of the activation function. We also establish connections to simulated annealing, when the amount of noise is annealed down, making it easier to optimize hard objective functions. We find experimentally that replacing such saturating activation functions by noisy variants helps optimization in many contexts, yielding state-of-the-art or competitive results on different datasets and tasks, especially when training seems to be the most difficult, e.g., when curriculum learning is necessary to obtain good results.
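A minimal sketch of the core idea, assuming a hard-sigmoid nonlinearity and simple Gaussian noise whose scale is annealed over training (the paper's actual noise construction is more elaborate):

```python
import numpy as np

def hard_sigmoid(x):
    """Piecewise-linear saturating activation clipped to [0, 1]."""
    return np.clip(0.25 * x + 0.5, 0.0, 1.0)

def noisy_hard_sigmoid(x, noise_scale, rng):
    """Add noise only where the activation saturates so that gradients can keep flowing.

    In the flat regions the noiseless derivative is exactly zero; injecting noise
    there lets SGD keep exploring.  Annealing noise_scale toward zero over training
    recovers the deterministic activation, which is the simulated-annealing analogy.
    """
    y = hard_sigmoid(x)
    saturated = (y <= 0.0) | (y >= 1.0)          # the "problematic" flat parts
    noise = rng.normal(0.0, noise_scale, size=np.shape(x))
    return np.where(saturated, y + noise, y)

rng = np.random.default_rng(0)
x = np.linspace(-8.0, 8.0, 9)
for epoch, scale in enumerate([0.3, 0.1, 0.0]):  # annealed noise schedule
    print(epoch, noisy_hard_sigmoid(x, scale, rng))
```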

177 citations


Journal ArticleDOI
01 Feb 2016
TL;DR: A simulated annealing heuristic-based exact solution approach is developed for the green vehicle routing problem (G-VRP), which extends the classical vehicle routing problem by considering a limited driving range of vehicles in conjunction with limited refueling infrastructure.
Abstract: We develop a solution approach to solve the green vehicle routing problem. We propose a simulated annealing heuristic to improve the quality of solutions. We present a new formulation having fewer variables and constraints. We evaluate the algorithm in terms of several performance criteria. Our algorithm is able to optimally solve 22 of 40 benchmark instances. This paper develops a simulated annealing heuristic based exact solution approach to solve the green vehicle routing problem (G-VRP), which extends the classical vehicle routing problem by considering a limited driving range of vehicles in conjunction with limited refueling infrastructure. The problem particularly arises for companies and agencies that employ a fleet of alternative energy powered vehicles on transportation systems for urban areas or for goods distribution. The exact algorithm is based on the branch-and-cut algorithm, which combines several valid inequalities derived from the literature to improve lower bounds and introduces a heuristic algorithm based on simulated annealing to obtain upper bounds. The solution approach is evaluated in terms of the number of test instances solved to optimality, bound quality and computation time to reach the best solution of the various test problems. Computational results show that 22 of 40 instances with 20 customers can be solved optimally within reasonable computation time.

168 citations


Journal ArticleDOI
01 Mar 2016
TL;DR: A new two-stage stochastic global optimization model for the production scheduling of open pit mining complexes with uncertainty is proposed; it is capable of generating designs that reduce the risk of not meeting production targets and have a 6.6% higher expected net present value than the deterministic-equivalent design.
Abstract: A stochastic global optimization framework for open pit mining complexes is proposed. The method simultaneously optimizes production schedules and downstream processes. The modeling is flexible and may be applied to numerous types of mining complexes. Three combinations of metaheuristics are tested. Results from an example show a substantial economic benefit when using this approach. Global optimization for mining complexes aims to generate a production schedule for the various mines and processing streams that maximizes the economic value of the enterprise as a whole. Aside from the large scale of the optimization models, one of the major challenges associated with optimizing mining complexes is related to the blending and non-linear geo-metallurgical interactions in the processing streams as materials are transformed from bulk material to refined products. This work proposes a new two-stage stochastic global optimization model for the production scheduling of open pit mining complexes with uncertainty. Three combinations of metaheuristics, including simulated annealing, particle swarm optimization and differential evolution, are tested to assess the performance of the solver. Experimental results for a copper-gold mining complex demonstrate that the optimizer is capable of generating designs that reduce the risk of not meeting production targets, have 6.6% higher expected net present value than the deterministic-equivalent design and 22.6% higher net present value than industry-standard deterministic mine planning software.

146 citations


Journal ArticleDOI
01 Mar 2016
TL;DR: A new and efficient krill herd algorithm (KHA) is developed for solving the optimal DG allocation problem of distribution networks; simulation results indicate that installing DG in the optimal location can significantly reduce the power loss of the distributed power system.
Abstract: This paper presents a KH algorithm to solve the optimal placement of distributed generator (ODG) problem. The ODG problem is studied with the objective of reducing power loss and energy cost. Three illustrative examples of radial distribution networks are presented. The proposed method shows better results when compared with other techniques in terms of the quality of solution. Distributed generator (DG) is recognized as a viable solution for controlling line losses, bus voltage, voltage stability, etc., and represents a new era for distribution systems. This paper focuses on developing an approach for placement of DG in order to minimize the active power loss and energy loss of distribution lines while maintaining bus voltage and voltage stability index within specified limits of a given power system. The optimization is carried out on the basis of optimal location and optimal size of DG. This paper develops a new and efficient krill herd algorithm (KHA) for solving the optimal DG allocation problem of distribution networks. To test the feasibility and effectiveness, the proposed KH algorithm is tested on standard 33-bus, 69-bus and 118-bus radial distribution networks. The simulation results indicate that installing DG in the optimal location can significantly reduce the power loss of the distributed power system. Moreover, the numerical results, compared with other stochastic search algorithms like genetic algorithm (GA), particle swarm optimization (PSO), combined GA and PSO (GA/PSO) and loss sensitivity factor simulated annealing (LSFSA), show that KHA could find better quality solutions.

139 citations


Posted Content
TL;DR: This work proposes to exploit the injection of appropriate noise so that the gradients may flow easily, even if the noiseless application of the activation function would yield zero gradient, and establishes connections to simulated annealing, making it easier to optimize hard objective functions.
Abstract: Common nonlinear activation functions used in neural networks can cause training difficulties due to the saturation behavior of the activation function, which may hide dependencies that are not visible to vanilla-SGD (using first order gradients only). Gating mechanisms that use softly saturating activation functions to emulate the discrete switching of digital logic circuits are good examples of this. We propose to exploit the injection of appropriate noise so that the gradients may flow easily, even if the noiseless application of the activation function would yield zero gradient. Large noise will dominate the noise-free gradient and allow stochastic gradient descent to explore more. By adding noise only to the problematic parts of the activation function, we allow the optimization procedure to explore the boundary between the degenerate (saturating) and the well-behaved parts of the activation function. We also establish connections to simulated annealing, when the amount of noise is annealed down, making it easier to optimize hard objective functions. We find experimentally that replacing such saturating activation functions by noisy variants helps training in many contexts, yielding state-of-the-art or competitive results on different datasets and tasks, especially when training seems to be the most difficult, e.g., when curriculum learning is necessary to obtain good results.

Journal ArticleDOI
15 Nov 2016-Energy
TL;DR: This study mathematically formulates the effects of socio-economic indicators on Iran's electric energy consumption and assesses the applicability and accuracy of the proposed artificial cooperative search algorithm in electric power consumption forecasting as compared with other optimization methods.

Journal ArticleDOI
TL;DR: In this article, an overview of different sequential, nontailored, as well as specialized tailored algorithms on the Google instances is given, and the typical complexity of the benchmark problems is studied using insights from the study of spin glasses.
Abstract: To date, a conclusive detection of quantum speedup remains elusive. Recently, a team by Google Inc. [V. S. Denchev et al., Phys. Rev. X 6, 031015 (2016)] proposed a weak-strong cluster model tailored to have tall and narrow energy barriers separating local minima, with the aim to highlight the value of finite-range tunneling. More precisely, results from quantum Monte Carlo simulations as well as the D-Wave 2X quantum annealer scale considerably better than state-of-the-art simulated annealing simulations. Moreover, the D-Wave 2X quantum annealer is approximately 10^8 times faster than simulated annealing on conventional computer hardware for problems with approximately 10^3 variables. Here, an overview of different sequential, nontailored, as well as specialized tailored algorithms on the Google instances is given. We show that the quantum speedup is limited to sequential approaches and study the typical complexity of the benchmark problems using insights from the study of spin glasses.

Journal ArticleDOI
TL;DR: In this article, the problem of robustly tuning of PI based LFC design is formulated as an optimization problem according to time domain objective function that is solved by BAT algorithm to find the most optimistic results.

Journal ArticleDOI
TL;DR: In this paper, the authors propose optimum sizing of a battery energy storage system (BESS) using particle swarm optimization (PSO) incorporating dynamic demand response (DR) to achieve fast, smooth and secure system stability and performance, keeping a microgrid from instability and system collapse during an emergency situation.

Journal ArticleDOI
TL;DR: The proposed evolutionary firefly algorithm exploits the spiral search behaviour of moths and the attractiveness search actions of fireflies to mitigate premature convergence of the Levy-flight firefly algorithm (LFA) and the moth-flame optimization (MFO) algorithm.
Abstract: A descriptor combining LBP, LGBP and LBPV is proposed for feature extraction. Moth-firefly optimization is proposed for feature selection. It mitigates premature convergence of the FA and MFO algorithms. Simulated Annealing is also used to further improve the most promising solution. It outperforms other optimization and facial expression recognition methods. In this research, we propose a facial expression recognition system with a variant of the evolutionary firefly algorithm for feature optimization. First of all, a modified Local Binary Pattern descriptor is proposed to produce an initial discriminative face representation. A variant of the firefly algorithm is proposed to perform feature optimization. The proposed evolutionary firefly algorithm exploits the spiral search behaviour of moths and attractiveness search actions of fireflies to mitigate premature convergence of the Levy-flight firefly algorithm (LFA) and the moth-flame optimization (MFO) algorithm. Specifically, it employs the logarithmic spiral search capability of the moths to increase local exploitation of the fireflies, whereas in comparison with the flames in MFO, the fireflies not only represent the best solutions identified by the moths but also act as the search agents guided by the attractiveness function to increase global exploration. Simulated Annealing embedded with Levy flights is also used to increase exploitation of the most promising solution. Diverse single and ensemble classifiers are implemented for the recognition of seven expressions. Evaluated with frontal-view images extracted from CK+, JAFFE, and MMI, and 45-degree multi-view and 90-degree side-view images from BU-3DFE and MMI, respectively, our system achieves a superior performance, and outperforms other state-of-the-art feature optimization methods and related facial expression recognition models by a significant margin.

Journal ArticleDOI
01 Mar 2016-Energy
TL;DR: In this article, an optimization model is developed to determine the most advantageous size of autonomous hybrid photovoltaic/wind turbine/fuel cell, wind turbine/fuel cell and photovoltaic/fuel cell systems for electrification of a remote area involving five homes (1 block) located in Namin, Ardabil, Iran.

Journal ArticleDOI
TL;DR: In this paper, the authors demonstrate that ILP outperforms SA with respect to both solution quality (how close it is to optimality) and processing time over a range of problem sizes.

Journal ArticleDOI
TL;DR: A list-based simulated annealing (LBSA) algorithm for the traveling salesman problem (TSP), whose performance is robust over a wide range of parameter values, shows competitive performance compared with other state-of-the-art algorithms.
Abstract: The simulated annealing (SA) algorithm is a popular intelligent optimization algorithm which has been successfully applied in many fields. Parameter setting is a key factor for its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in the list is used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP problems. The LBSA algorithm, whose performance is robust over a wide range of parameter values, shows competitive performance compared with other state-of-the-art algorithms.
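A compact sketch of the list-based idea applied to the TSP; the list initialization and update rule below are one plausible reading of the description above, not a faithful reimplementation of the paper's algorithm:

```python
import math
import random

def lbsa_tsp(dist, n_list=20, outer=200, inner=100, seed=1):
    """List-based SA for the TSP: the maximum temperature in a list drives the
    Metropolis test, and the list is adapted from the temperatures at which worse
    moves were actually accepted (an illustrative reading, not the paper's exact rule)."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)

    def length(t):
        return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))

    cur = length(tour)
    temps = sorted((rng.random() * cur * 0.1 for _ in range(n_list)), reverse=True)
    best, best_tour = cur, tour[:]
    for _ in range(outer):
        t_max = temps[0]                       # only the largest temperature in the list is used
        accepted = []
        for _ in range(inner):
            i, j = sorted(rng.sample(range(n), 2))
            cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # 2-opt reversal
            delta = length(cand) - cur
            r = rng.random()
            if delta <= 0:
                tour, cur = cand, length(cand)
            elif 0.0 < r < math.exp(-delta / t_max):
                tour, cur = cand, length(cand)
                accepted.append(-delta / math.log(r))   # temperature implied by this acceptance
            if cur < best:
                best, best_tour = cur, tour[:]
        if accepted:                           # replace the max temperature with the mean accepted one
            temps[0] = sum(accepted) / len(accepted)
            temps.sort(reverse=True)
    return best, best_tour

d = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
print(lbsa_tsp(d))
```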

Journal ArticleDOI
TL;DR: In this article, the authors propose an optimal secondary controller for combined Load Frequency Control (LFC) and Automatic Voltage Regulation (AVR) of a multi-source, multi-area system using the simulated annealing technique.

Journal ArticleDOI
TL;DR: A simulated annealing (SA) algorithm that incorporates several neighborhood structures is proposed to improve performance in solving the OVRPCD, and it outperforms existing approaches for the vehicle routing problem with cross-docking.

Proceedings ArticleDOI
12 Jan 2016
TL;DR: This work considers a classical algorithm known as Simulated Quantum Annealing (SQA) which relates certain quantum systems to classical Markov chains and proves that these chains mix rapidly, showing that SQA runs in polynomial time on the Hamming weight with spike problem in much of the parameter regime.
Abstract: Can quantum computers solve optimization problems much more quickly than classical computers? One major piece of evidence for this proposition has been the fact that Quantum Annealing (QA) finds the minimum of some cost functions exponentially more quickly than classical Simulated Annealing (SA). One such cost function is the simple "Hamming weight with a spike" function in which the input is an n-bit string and the objective function is simply the Hamming weight, plus a tall thin barrier centered around Hamming weight n/4. While the global minimum of this cost function can be found by inspection, it is also a plausible toy model of the sort of local minima that arise in real-world optimization problems. It was shown by Farhi, Goldstone and Gutmann that for this example SA takes exponential time and QA takes polynomial time, and the same result was generalized by Reichardt to include barriers with width and height scaling as positive powers of n such that the total area under the barrier is at most the square root of n. This advantage could be explained in terms of quantum-mechanical "tunneling." Our work considers a classical algorithm known as Simulated Quantum Annealing (SQA) which relates certain quantum systems to classical Markov chains. By proving that these chains mix rapidly, we show that SQA runs in polynomial time on the Hamming weight with spike problem in much of the parameter regime where QA achieves an exponential advantage over SA. While our analysis only covers this toy model, it can be seen as evidence against the prospect of exponential quantum speedup using tunneling. Our technical contributions include extending the canonical path method for analyzing Markov chains to cover the case when not all vertices can be connected by low-congestion paths. We also develop methods for taking advantage of warm starts and for relating the quantum state in QA to the probability distribution in SQA. These techniques may be of use in future studies of SQA or of rapidly mixing Markov chains in general.
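The "Hamming weight with a spike" cost function can be stated in a few lines; a minimal sketch in which the barrier height and width are simple placeholders for the n-dependent scalings analyzed in the paper:

```python
def hamming_weight_with_spike(x, n, barrier_height=None, barrier_width=1):
    """Cost of an n-bit string: its Hamming weight plus a tall, thin barrier
    centered at weight n/4.  The all-zeros string is the global minimum, but a
    local searcher that only lowers the weight has to cross the barrier.
    barrier_height and barrier_width are placeholders for the n-dependent
    scalings studied in the paper."""
    w = bin(x).count("1") if isinstance(x, int) else sum(x)
    if barrier_height is None:
        barrier_height = n            # "tall" for illustration only
    spike = barrier_height if abs(w - n // 4) < barrier_width else 0
    return w + spike

# cost as a function of Hamming weight for n = 16: monotone except for the spike at weight 4
n = 16
print([hamming_weight_with_spike(w * [1] + (n - w) * [0], n) for w in range(n + 1)])
```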

Journal ArticleDOI
TL;DR: A restarted simulated annealing algorithm is developed to deal with the complexity of the model; it utilizes a new local search with three neighborhood structures and a restart phase based on the crowding distance assignment procedure to obtain a well-spread Pareto-optimal set.

Journal ArticleDOI
TL;DR: The first contribution of this paper is the dynamic optimization of the FCM structure, i.e., the entire FCM model is learned in a completely new manner and is continuously adapted to the current local characteristics of the forecasted time series.
Abstract: In this paper we propose a new approach to learning fuzzy cognitive maps (FCMs) as a predictive model for time series forecasting. The first contribution of this paper is the dynamic optimization of the FCM structure, i.e., we propose to select concepts involved in the FCM model before every prediction is made. In addition, the FCM transformation function together with the corresponding parameters are proposed to be optimized dynamically. Finally, the FCM weights are learned. In this way, the entire FCM model is learned in a completely new manner, i.e., it is continuously adapted to the current local characteristics of the forecasted time series. To optimize all of the aforementioned elements, we apply and compare 5 different population-based algorithms: genetic, particle swarm optimization, simulated annealing, artificial bee colony and differential evolution. For the evaluation of the proposed approach we use 11 publicly available data sets. The results of comparative experiments provide evidence that our approach offers a competitive forecasting method that outperforms many state-of-the-art forecasting models. We recommend to use our FCM-based approach for the forecasting of time series that are linear and tend to be trend stationary.

Journal ArticleDOI
01 Dec 2016
TL;DR: A dynamic neighborhood structure is developed for the hybrid algorithm to improve search efficiency by reducing the randomness of the conventional 2-opt neighborhood, and adaptive parameters that can be automatically adjusted by the algorithm for specific examples are proposed, which negates the need to frequently readjust algorithm parameters.
Abstract: AHSATS-d-CM: Adaptive Hybrid Simulated Annealing Tabu Search Algorithm with Dynamic Neighborhood Based on Circle-directed Mutation. AHSATS-2-opt: Adaptive Hybrid Simulated Annealing Tabu Search Algorithm with 2-opt. Conclusions: This paper develops an adaptive hybrid meta-heuristic algorithm that combines simulated annealing and tabu search algorithms with a dynamic neighborhood structure to solve the Traveling Salesman Problem (TSP). The experimental results demonstrated that the proposed algorithm can obtain satisfactory solutions within a reasonable time. Moreover, the proposed algorithm can overcome the individual disadvantages of simulated annealing and tabu search. The new hybrid algorithm provides a fast decrease rate and a clear convergence process, and is not dependent on initial solutions. Furthermore, compared with the classical 2-opt neighborhood, the dynamic neighborhood can significantly reduce computational time by decreasing the number of calculations and simultaneously improve solution quality. Finally, the adaptive parameters are appropriate for solving almost all benchmark test examples. A hybrid simulated annealing tabu search algorithm is applied to solve the traveling salesman problem. A dynamic neighborhood based on circle-directed mutation is developed for the hybrid algorithm. Adaptive parameters are proposed for the hybrid algorithm. Experiments are conducted to verify the accuracy and efficiency of the proposed hybrid algorithm. Experimental results are compared with those of other algorithms from the literature. This paper applies a hybrid simulated annealing tabu search algorithm to solve the Traveling Salesman Problem (TSP). Fully considering the characteristics of the hybrid algorithm, we develop a dynamic neighborhood structure for the hybrid algorithm to improve search efficiency by reducing the randomness of the conventional 2-opt neighborhood. A circle-directed mutation is developed to achieve this dynamic neighborhood structure. Furthermore, we propose adaptive parameters that can be automatically adjusted by the algorithm based on context-specific examples. This negates the need to frequently readjust algorithm parameters. We employ benchmarks obtained from TSPLIB (a library of sample instances for the TSP) to test our algorithm, and find that the proposed algorithm can obtain satisfactory solutions within a reasonable amount of time. The experimental results demonstrate that the proposed hybrid algorithm can overcome the disadvantages of traditional simulated annealing and tabu search methods. The results also show that the dynamic neighborhood structure is more efficient and accurate than the classical 2-opt. Also, adaptive parameters are appropriate for almost all of the numerical examples tested in this paper. Finally, the experimental results are compared with those of other algorithms, to demonstrate the improved accuracy and efficiency of the proposed algorithm.
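As a generic illustration of how a tabu memory and a Metropolis acceptance rule can be combined in one loop, here is a minimal sketch; the neighbor move and the tabu attribute are simple stand-ins, not the circle-directed mutation or the adaptive parameters developed in the paper:

```python
import math
import random
from collections import deque

def hybrid_sa_tabu(cost, neighbor, x0, t0=1.0, alpha=0.95, tabu_len=20, iters=500, seed=0):
    """Generic hybrid loop: a short-term tabu memory blocks recently visited
    solutions, the Metropolis rule decides acceptance, and the temperature
    cools geometrically."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    tabu = deque([x0], maxlen=tabu_len)          # recently visited solutions
    t = t0
    for _ in range(iters):
        x_new = neighbor(x, rng)
        f_new = cost(x_new)
        # Aspiration criterion: a tabu solution is only allowed if it beats the best so far.
        blocked = x_new in tabu and f_new >= fbest
        if not blocked and (f_new <= fx or rng.random() < math.exp((fx - f_new) / t)):
            x, fx = x_new, f_new
            tabu.append(x_new)
            if fx < fbest:
                best, fbest = x, fx
        t *= alpha                               # geometric cooling
    return best, fbest

# toy usage: minimize a multimodal function over the integers
f = lambda k: (k - 7) ** 2 + 10 * math.cos(k)
step = lambda k, rng: k + rng.choice([-2, -1, 1, 2])
print(hybrid_sa_tabu(f, step, x0=0))
```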

Journal ArticleDOI
TL;DR: The proposed method provides an efficient way to design a diffusion metasurface with a simple structure, which has been proved by both simulations and measurements.
Abstract: We propose a new strategy to design broadband and wide-angle diffusion metasurfaces. An anisotropic structure which has opposite phases under x- and y-polarized incidence is employed as the "0" and "1" elements based on the concept of coding metamaterials. To obtain uniform backward scattering under normal incidence, a Simulated Annealing algorithm is utilized in this paper to calculate the optimal layout. The proposed method provides an efficient way to design a diffusion metasurface with a simple structure, which has been proved by both simulations and measurements.

Journal ArticleDOI
TL;DR: This paper proposes an implementation strategy for three popular metaheuristic approaches, that is, simulated annealing, differential evolution, and harmony search, to optimize CNNs; although the proposed methods show an increase in computation time, their accuracy is also improved.
Abstract: A typical modern optimization technique is usually either heuristic or metaheuristic. Such techniques have managed to solve some optimization problems in the research areas of science, engineering, and industry. However, the implementation strategy of metaheuristics for accuracy improvement of convolutional neural networks (CNN), a famous deep learning method, is still rarely investigated. Deep learning relates to a type of machine learning technique whose aim is to move closer to the goal of artificial intelligence of creating a machine that could successfully perform any intellectual task that can be carried out by a human. In this paper, we propose the implementation strategy of three popular metaheuristic approaches, that is, simulated annealing, differential evolution, and harmony search, to optimize CNN. The performances of these metaheuristic methods in optimizing CNN on classifying the MNIST and CIFAR datasets were evaluated and compared. Furthermore, the proposed methods are also compared with the original CNN. Although the proposed methods show an increase in the computation time, their accuracy has also been improved (up to 7.14 percent).

Journal ArticleDOI
TL;DR: A review of research on simulated annealing (SA) is presented, in which different cooling/annealing schedules are summarized and recent applications of SA in engineering are reviewed.
Abstract: This paper presents a review of research reported on simulated annealing (SA). Different cooling/annealing schedules are summarized. Variants of SA are delineated. Recent applications of SA in engineering are reviewed.
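The cooling schedules such a review typically covers can be written in a few lines; a minimal sketch of three common choices with illustrative parameter values:

```python
import math

def geometric(t0, k, alpha=0.95):
    """T_k = alpha**k * T_0 -- the most widely used schedule."""
    return t0 * alpha ** k

def linear(t0, k, eta=0.01):
    """T_k = T_0 - eta * k, floored at zero."""
    return max(t0 - eta * k, 0.0)

def logarithmic(t0, k):
    """T_k = T_0 / log(k + 2) -- the slow schedule tied to classical convergence results."""
    return t0 / math.log(k + 2)

# compare how quickly each schedule cools from the same starting temperature
for k in (0, 10, 100, 1000):
    print(k, geometric(1.0, k), linear(1.0, k), logarithmic(1.0, k))
```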

Journal ArticleDOI
TL;DR: In this article, a simulated annealing process was developed to optimize the location of the turbines by taking into account the hourly variation of flows throughout an average year and the consequent impact of this variation on the turbine efficiency.
Abstract: In water supply systems, the potential exists for micro-hydropower that uses the pressure excess in the networks to produce electricity. However, because urban drinking water networks are complex systems in which flows and pressure vary constantly, identification of the ideal locations for turbines is not straightforward, and assessment implies the need for simulation. In this paper, an optimization algorithm is proposed to provide a selection of optimal locations for the installation of a given number of turbines in a distribution network. A simulated annealing process was developed to optimize the location of the turbines by taking into account the hourly variation of flows throughout an average year and the consequent impact of this variation on the turbine efficiency. The optimization is achieved by considering the characteristic and efficiency curves of a turbine model for different impeller diameters as well as simulations of the annual energy production in a coupled hydraulic model. The developed algorithm was applied to the water supply system of the city Lausanne (Switzerland). This work focuses on the definition of the neighborhood of the simulated annealing process and the analysis of convergence towards the optimal solution for different restrictions and numbers of installed turbines.

Journal ArticleDOI
TL;DR: The proposed cPSO algorithm is implemented in the Matlab environment and verified extensively in five experimental studies, which show that it outperforms a genetic algorithm (GA), a simulated annealing (SA) based approach, and a hybrid algorithm.
Abstract: A chaotic PSO algorithm is proposed to solve the NP-hard IPPS problem. Ten chaotic maps are implemented to avoid premature convergence to a local optimum. Makespan, balanced level of machine utilization and mean flow time are observed. Five experimental studies show that cPSO outperforms GA, SA, and a hybrid algorithm. Scheduling plans are tested by a mobile robot within a laboratory environment. Process planning and scheduling are two of the most important manufacturing functions, traditionally performed separately and sequentially. These functions being complementary and interrelated, their integration is essential for the optimal utilization of manufacturing resources. Such integration is also significant for improving the performance of the modern manufacturing system. A variety of alternative manufacturing resources (machine tools, cutting tools, tool access directions, etc.) causes the integrated process planning and scheduling (IPPS) problem to be strongly NP-hard (non-deterministic polynomial) in terms of combinatorial optimization. Therefore, an optimal solution for the problem is searched in a vast search space. In order to explore the search space comprehensively and avoid being trapped in local optima, this paper focuses on using a method based on the particle swarm optimization algorithm and chaos theory (cPSO). The initial solutions for the IPPS problem are presented in the form of the particles of the cPSO algorithm. The particle encoding/decoding scheme is also proposed in this paper. Flexible process and scheduling plans are presented using an AND/OR network and five flexibility types: machine, tool, tool access direction (TAD), process, and sequence flexibility. Optimal process plans are obtained by multi-objective optimization of production time and production cost. On the other hand, optimal scheduling plans are generated based on three objective functions: makespan, balanced level of machine utilization, and mean flow time. The proposed cPSO algorithm is implemented in the Matlab environment and verified extensively using five experimental studies. The experimental results show that the proposed algorithm outperforms a genetic algorithm (GA), a simulated annealing (SA) based approach, and a hybrid algorithm. Moreover, the scheduling plans obtained by the proposed methodology are additionally tested by a Khepera II mobile robot using a laboratory model of a manufacturing environment.