scispace - formally typeset

Showing papers on "Population-based incremental learning published in 2009"


Journal ArticleDOI
TL;DR: The proposed MI-LXPM is a suitably modified and extended version of the real-coded genetic algorithm LXPM of Deep and Thakur; it incorporates a special truncation procedure to handle integer restrictions on decision variables along with a parameter-free penalty approach for handling constraints.

595 citations


Book ChapterDOI
27 Aug 2009
TL;DR: This work proposes a new graph-based label propagation algorithm for transductive learning that can be extended to incorporate additional prior information, and demonstrates it with classifying data where the labels are not mutually exclusive.
Abstract: We propose a new graph-based label propagation algorithm for transductive learning. Each example is associated with a vertex in an undirected graph, and a weighted edge between two vertices represents the similarity between the two corresponding examples. We build on Adsorption, a recently proposed algorithm, and analyze its properties. We then state our learning algorithm as a convex optimization problem over multi-label assignments and derive an efficient algorithm to solve this problem. We state the conditions under which our algorithm is guaranteed to converge. We provide experimental evidence on various real-world datasets demonstrating the effectiveness of our algorithm over other algorithms for such problems. We also show that our algorithm can be extended to incorporate additional prior information, and demonstrate it by classifying data where the labels are not mutually exclusive.
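For illustration, the clamped-propagation idea behind such graph-based transductive learners can be sketched in a few lines. This is the classic label-spreading iteration, not the Adsorption variant the paper builds on, and the toy graph, seed labels, and alpha parameter are all invented for the example:

```python
import numpy as np

def label_propagation(W, Y, labeled, alpha=0.9, iters=100):
    """Minimal graph label propagation (illustrative, not Adsorption itself).

    W: (n, n) symmetric similarity matrix.
    Y: (n, k) one-hot seed labels (zero rows for unlabeled vertices).
    labeled: boolean mask of seed vertices whose labels are clamped.
    """
    D = W.sum(axis=1)
    S = W / np.sqrt(np.outer(D, D))          # symmetrically normalized similarities
    F = Y.copy().astype(float)
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y  # propagate, then pull back to seeds
        F[labeled] = Y[labeled]              # clamp the known labels
    return F.argmax(axis=1)

# Toy graph: two chains of three vertices, one seed per class.
W = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (3, 4), (4, 5)]:
    W[i, j] = W[j, i] = 1.0
Y = np.zeros((6, 2))
Y[0, 0] = 1.0   # vertex 0 seeded as class 0
Y[5, 1] = 1.0   # vertex 5 seeded as class 1
labeled = np.array([True, False, False, False, False, True])
print(label_propagation(W, Y, labeled))  # -> [0 0 0 1 1 1]
```

On this toy graph each class spreads only within its own connected component, so every unlabeled vertex inherits the label of the seed it is connected to.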

248 citations


Journal ArticleDOI
TL;DR: This paper presents a genetic algorithm for the Resource Constrained Project Scheduling Problem (RCPSP) using a heuristic priority rule in which the priorities of the activities are defined by the genetic algorithm.

238 citations


Proceedings ArticleDOI
Chenhong Zhao1, Shanshan Zhang1, Qingfeng Liu1, Jian Xie1, Jicheng Hu1 
24 Sep 2009
TL;DR: Though the GA is designed to solve combinatorial optimization problems, it is inefficient for global optimization, so the paper concludes with directions for further research on the optimized genetic algorithm.
Abstract: Task scheduling, which is an NP-complete problem, plays a key role in cloud computing systems. In this paper, we propose an optimized algorithm based on the genetic algorithm to schedule independent and divisible tasks, adapting to different computation and memory requirements. We apply the algorithm in heterogeneous systems, where resources (including CPUs) are heterogeneous in computation and communication. Dynamic scheduling is also considered. Though the GA is designed to solve combinatorial optimization problems, it is inefficient for global optimization, so we conclude with directions for further research on the optimized genetic algorithm.
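As a rough illustration of the core ingredient, the sketch below runs a bare-bones GA over task-to-machine assignments to minimize makespan on heterogeneous machines. The encoding (one gene per task naming its machine), the operators, and the runtime matrix are invented for the example and are not the paper's actual algorithm:

```python
import random

def makespan(assign, times):
    """Completion time of the busiest machine; times[t][m] = runtime of task t on machine m."""
    loads = [0.0] * len(times[0])
    for t, m in enumerate(assign):
        loads[m] += times[t][m]
    return max(loads)

def ga_schedule(times, pop_size=40, gens=200, pm=0.1, seed=0):
    rng = random.Random(seed)
    n_tasks, n_mach = len(times), len(times[0])
    pop = [[rng.randrange(n_mach) for _ in range(n_tasks)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda a: makespan(a, times))
        elite = pop[: pop_size // 2]               # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            cut = rng.randrange(1, n_tasks)        # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(n_tasks):               # per-gene mutation
                if rng.random() < pm:
                    child[i] = rng.randrange(n_mach)
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda a: makespan(a, times))

# Three heterogeneous machines, six tasks with machine-dependent runtimes.
times = [[3, 5, 7], [7, 2, 4], [4, 6, 2], [5, 3, 8], [6, 4, 3], [2, 7, 5]]
best = ga_schedule(times)
print(makespan(best, times))
```

For this instance a balanced assignment exists with makespan 5, which the GA typically recovers.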

196 citations


Proceedings ArticleDOI
01 Jan 2009
TL;DR: The developed software is the result of a two-year project focused on a robust implementation of a computer-aided optimization tool to deal with realistic well placement problems with arbitrary well trajectories, complex model grids and linear and nonlinear constraints.
Abstract: Well placement optimization is a very challenging problem due to the large number of decision variables involved and the nonlinearity of the reservoir response as well as of the well placement constraints. Over the years, a lot of research has been done on this problem, most of which using optimization routines coupled to reservoir simulation models. Despite all this research, there is still a lack of robust computer-aided optimization tools ready to be applied by asset teams in real field development projects. This paper describes the implementation of a tool, based on a Genetic Algorithm, for the simultaneous optimization of number, location and trajectory of producer and injector wells. The developed software is the result of a two-year project focused on a robust implementation of a computer-aided optimization tool to deal with realistic well placement problems with arbitrary well trajectories, complex model grids and linear and nonlinear constraints. The developed optimization tool uses a commercial reservoir simulator as the evaluation function without using proxies to substitute the full numerical model. Due to the large size of the problem, in some cases involving more than 100 decision variables, the optimization process may require thousands of reservoir simulations. Such a task has become feasible through a distributed computing environment running multiple simulations at the same time. The implementation uses a technique called Genocop III – Genetic Algorithm for Numerical Optimization of Constrained Problems – to deal with well placement constraints. Such constraints include grid size, maximum length of wells, minimum distance between wells, inactive grid cells and user-defined regions of the model, with non-uniform shape, where the optimization routine is not supposed to place wells. The optimization process was applied to three full-field reservoir models based on real cases. 
It increased the net present values and the oil recovery factors obtained by well placement scenarios previously proposed by reservoir engineers. The process was also applied to a synthetic case, based on outcrop data, to analyze the impact of using reservoir quality maps to generate an initial well placement scenario for the optimization routine without using an engineer-defined configuration.

Introduction

The definition of a well placement is a key aspect with major impact on a field development project. In this sense, the use of reservoir simulation allows the engineer to evaluate different placement scenarios. However, the current industry practice is still, in most cases, a manual procedure of trial and error that requires a lot of experience and knowledge from the engineers involved in the project. Considering that, the development of well placement optimization tools which can automate this process is a highly desirable goal. Well placement optimization is a very challenging problem due to the large number of decision variables involved and the nonlinearity of the reservoir response as well as of the well placement constraints. Over the years, a lot of research has been done on this problem, most of it using optimization routines coupled to reservoir simulation and economic models. In 1995, Beckner and Song applied a Simulated Annealing algorithm to optimize the location and scheduling of 12 wells with fixed orientation and length. In 1997, Bittencourt and Horner applied a Genetic Algorithm (GA) hybridized with Polytope and Tabu Search methods to optimize the location of 33 vertical and horizontal wells, including both producers and injectors. In 1998, Pan and Horner investigated the use of multivariate interpolation algorithms, Least Squares and Kriging, as proxies to reservoir simulations for optimization problems including well placement. In 1999, Cruz et al. introduced the

173 citations


Journal ArticleDOI
TL;DR: A new clustering algorithm based on a genetic algorithm (GA) with gene rearrangement (GAGR) is proposed, which in application may effectively remove degeneracy, enabling a more efficient search.

161 citations


Journal ArticleDOI
TL;DR: A statistical analysis shows that the generalization error afforded agents by the collaborative training algorithm can be bounded in terms of the relationship between the network topology and the representational capacity of the relevant reproducing kernel Hilbert space.
Abstract: In this paper, an algorithm is developed for collaboratively training networks of kernel-linear least-squares regression estimators. The algorithm is shown to distributively solve a relaxation of the classical centralized least-squares regression problem. A statistical analysis shows that the generalization error afforded agents by the collaborative training algorithm can be bounded in terms of the relationship between the network topology and the representational capacity of the relevant reproducing kernel Hilbert space. Numerical experiments suggest that the algorithm is effective at reducing noise. The algorithm is relevant to the problem of distributed learning in wireless sensor networks by virtue of its exploitation of local communication. Several new questions for statistical learning theory are proposed.

155 citations


Journal ArticleDOI
TL;DR: A novel non-revisiting genetic algorithm is reported: it remembers every position it has searched before, and its archive in itself constitutes a parameter-free adaptive mutation operator that shows and maintains stable, good performance.
Abstract: A novel genetic algorithm is reported that is non-revisiting: it remembers every position that it has searched before. An archive is used to store all the solutions that have been explored before. Different from other memory schemes in the literature, a novel binary space partitioning tree archive design is advocated. Not only is the design an efficient method to check for revisits, if any, it in itself constitutes a novel adaptive mutation operator that has no parameter. To demonstrate the power of the method, the algorithm is evaluated using 19 famous benchmark functions. The results are as follows. (1) Though it only uses finite resolution grids, when compared with a canonical genetic algorithm, a generic real-coded genetic algorithm, a canonical genetic algorithm with a simple diversity mechanism, and three particle swarm optimization algorithms, it shows a significant improvement. (2) The new algorithm also shows superior performance compared to covariance matrix adaptation evolution strategy (CMA-ES), a state-of-the-art method for adaptive mutation. (3) It can work with problems that have large search spaces with dimensions as high as 40. (4) The corresponding CPU overhead of the binary space partitioning tree design is insignificant for applications with expensive or time-consuming fitness evaluations, and for such applications, the memory usage due to the archive is acceptable. (5) Though the adaptive mutation is parameter-less, it shows and maintains stable, good performance. However, for the other algorithms compared, the performance is highly dependent on suitable parameter settings.
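The non-revisiting idea can be illustrated with a much cruder archive than the paper's: a plain set over a fixed-resolution grid, used only to veto re-evaluations. The BSP tree and the archive-derived adaptive mutation that make the published method effective are not reproduced here, and the search itself is just random sampling:

```python
import random

def non_revisiting_search(f, bounds, resolution=64, evals=500, seed=0):
    """Random search that never re-evaluates a visited grid cell.

    Illustrative sketch only: the paper's archive is a BSP tree that also
    drives adaptive mutation; here a plain set over a fixed grid stands in.
    """
    rng = random.Random(seed)
    visited = set()
    best_x, best_y = None, float("inf")
    while evals > 0:
        x = tuple(rng.uniform(lo, hi) for lo, hi in bounds)
        cell = tuple(int((xi - lo) / (hi - lo) * resolution)
                     for xi, (lo, hi) in zip(x, bounds))
        if cell in visited:
            continue              # revisit detected: skip, no fitness call
        visited.add(cell)
        y = f(x)
        evals -= 1
        if y < best_y:
            best_x, best_y = x, y
    return best_x, best_y

# Sphere function on [-5, 5]^2.
x, y = non_revisiting_search(lambda v: sum(t * t for t in v), [(-5, 5)] * 2)
print(round(y, 3))
```

In this sketch the archive only saves fitness calls; in the paper it additionally adapts the mutation step as the tree partitions the space more finely.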

154 citations


Journal ArticleDOI
TL;DR: From the reliability point of view, it seems that the real-encoded differential algorithm, improved by the technology described in this paper, is a universal and reliable method capable of solving all the proposed test problems.
Abstract: This paper presents several types of evolutionary algorithms (EAs) used for global optimization on real domains. The interest is focused on multimodal problems, where the difficulty of premature convergence usually occurs. First, the standard genetic algorithm (SGA) using binary encoding of real values and its unsatisfactory behavior on multimodal problems is briefly reviewed, together with some improvements for fighting premature convergence. Two types of real-encoded methods based on differential operators are examined in detail: differential evolution (DE), a modern and effective method first published by R. Storn and K. Price, and the simplified real-coded differential genetic algorithm SADE proposed by the authors. In addition, an improvement of the SADE method, called the CERAF technology, enabling the population of solutions to escape from local extremes, is examined. All methods are tested on an identical set of objective functions and a systematic comparison based on a reliable methodology is presented. It is confirmed that real-coded methods generally exhibit better behavior on real domains than the binary algorithms, even when extended by several improvements. Furthermore, the positive influence of the differential operators, due to their capacity for self-adaptation, is demonstrated. From the reliability point of view, it seems that the real-encoded differential algorithm, improved by the technology described in this paper, is a universal and reliable method capable of solving all the proposed test problems.
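For readers unfamiliar with DE, the baseline DE/rand/1/bin scheme of Storn and Price that the compared methods build on can be sketched as follows. Parameter values are typical textbook defaults, not those used in the paper, and neither SADE nor CERAF is shown:

```python
import math, random

def differential_evolution(f, bounds, np_=20, F=0.8, CR=0.9, gens=100, seed=1):
    """Minimal DE/rand/1/bin sketch in the spirit of Storn and Price."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = rng.sample([j for j in range(np_) if j != i], 3)
            jrand = rng.randrange(dim)            # guarantees one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)       # clip to the box
                else:
                    v = pop[i][j]
                trial.append(v)
            ft = f(trial)
            if ft <= fit[i]:                      # greedy one-to-one replacement
                pop[i], fit[i] = trial, ft
    best = min(range(np_), key=lambda i: fit[i])
    return pop[best], fit[best]

# Multimodal Rastrigin function in 2-D; global minimum 0 at the origin.
def rastrigin(x):
    return 10 * len(x) + sum(t * t - 10 * math.cos(2 * math.pi * t) for t in x)

x, y = differential_evolution(rastrigin, [(-5.12, 5.12)] * 2)
print(round(y, 4))
```

The differential mutation step F * (pop[b] - pop[c]) is what gives DE the self-adaptive step sizes the abstract refers to: as the population contracts, the perturbations shrink with it.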

145 citations


01 Jan 2009
TL;DR: The results of the experiments show that the Importance Score method is more efficient when dealing with little noise and a small number of interacting features, while the genetic algorithms can provide a more robust solution at the expense of increased computational effort.
Abstract: This paper presents a comparison between two feature selection methods, the Importance Score (IS) method, which is based on a greedy-like search, and a genetic algorithm-based (GA) method, in order to better understand their strengths, limitations, and areas of application. The results of our experiments show a very strong relation between the nature of the data and the behavior of both systems. The Importance Score method is more efficient when dealing with little noise and a small number of interacting features, while the genetic algorithms can provide a more robust solution at the expense of increased computational effort.

144 citations


Journal ArticleDOI
12 Aug 2009-Sensors
TL;DR: A comparison of training algorithms for radial basis function (RBF) neural networks for classification purposes; results show that the ABC algorithm yields better learning than the other algorithms.
Abstract: This paper introduces a comparison of training algorithms for radial basis function (RBF) neural networks for classification purposes. RBF networks provide effective solutions in many science and engineering fields. They are especially popular in the pattern classification and signal processing areas. Several algorithms have been proposed for training RBF networks. The Artificial Bee Colony (ABC) algorithm is a new, very simple and robust population-based optimization algorithm inspired by the intelligent behavior of honey bee swarms. The training performance of the ABC algorithm is compared with the genetic algorithm, the Kalman filtering algorithm and the gradient descent algorithm. In the experiments, well-known classification problems from the UCI repository, such as the Iris, Wine and Glass datasets, were used, and an experimental setup was also designed in which inertial-sensor-based terrain classification for autonomous ground vehicles was performed. Experimental results show that the use of the ABC algorithm yields better learning than the other algorithms.
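To separate the pieces involved: once the centers and widths of an RBF network are fixed, the output weights have a closed-form least-squares solution; population-based trainers such as ABC or a GA instead search centers, widths, and weights jointly. The sketch below shows only the fixed-centers case on an invented regression toy, not the paper's classification experiments:

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian RBF activations for each sample/center pair."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

# With centers and width fixed, the output weights are a linear
# least-squares problem; ABC/GA-style training would instead optimize
# centers, widths, and weights together (not shown here).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0])
centers = np.linspace(-3, 3, 10)[:, None]
H = rbf_design(X, centers, width=0.8)
w, *_ = np.linalg.lstsq(H, y, rcond=None)
pred = H @ w
print(round(float(np.abs(pred - y).max()), 3))
```

The maximum absolute error of this 10-center fit of sin(x) is small, which is why the hard part of RBF training, and the part the compared algorithms differ on, is placing the centers and widths rather than solving for the weights.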

Proceedings Article
07 Dec 2009
TL;DR: An extension of the incremental decremental algorithm that works efficiently for simultaneous updates of multiple data points, useful for online SVM learning in which old data points must be removed and new data points added in a short time.
Abstract: We propose a multiple incremental decremental algorithm of Support Vector Machine (SVM). Conventional single incremental decremental SVM can update the trained model efficiently when single data point is added to or removed from the training set. When we add and/or remove multiple data points, this algorithm is time-consuming because we need to repeatedly apply it to each data point. The proposed algorithm is computationally more efficient when multiple data points are added and/or removed simultaneously. The single incremental decremental algorithm is built on an optimization technique called parametric programming. We extend the idea and introduce multi-parametric programming for developing the proposed algorithm. Experimental results on synthetic and real data sets indicate that the proposed algorithm can significantly reduce the computational cost of multiple incremental decremental operation. Our approach is especially useful for online SVM learning in which we need to remove old data points and add new data points in a short amount of time.

Journal ArticleDOI
TL;DR: This paper formulates a crop-planning problem as a multi-objective optimization model and solves two different versions of the problem using three different optimization approaches: the ε-constrained method, NSGA-II and the proposed multi-objective constrained algorithm (MCA).

Journal ArticleDOI
TL;DR: The hybrid discrete dynamically dimensioned search (HD-DDS) algorithm as mentioned in this paper combines two local search heuristics with a discrete DDS search strategy adapted from the continuous DDS algorithm.
Abstract: The dynamically dimensioned search (DDS) continuous global optimization algorithm by Tolson and Shoemaker (2007) is modified to solve discrete, single-objective, constrained water distribution system (WDS) design problems. The new global optimization algorithm for WDS optimization is called hybrid discrete dynamically dimensioned search (HD-DDS) and combines two local search heuristics with a discrete DDS search strategy adapted from the continuous DDS algorithm. The main advantage of the HD-DDS algorithm compared with other heuristic global optimization algorithms, such as genetic and ant colony algorithms, is that its searching capability (i.e., the ability to find near globally optimal solutions) is as good, if not better, while being significantly more computationally efficient. The algorithm's computational efficiency is due to a number of factors, including the fact that it is not a population-based algorithm and only requires computationally expensive hydraulic simulations to be conducted for a fraction of the solutions evaluated. This paper introduces and evaluates the algorithm by comparing its performance with that of three other algorithms (specific versions of the genetic algorithm, ant colony optimization, and particle swarm optimization) on four WDS case studies (21- to 454-dimensional optimization problems) on which these algorithms have been found to perform well. The results obtained indicate that the HD-DDS algorithm outperforms the state-of-the-art existing algorithms in terms of searching ability and computational efficiency. In addition, the algorithm is easier to use, as it does not require any parameter tuning and automatically adjusts its search to find good solutions given the available computational budget.
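The distinctive DDS move, perturbing a randomly chosen subset of variables whose expected size shrinks as the evaluation budget is spent, can be sketched for a discrete search space as follows. This illustrates only that one idea on an invented toy problem; it is not HD-DDS, which adds two local search heuristics:

```python
import math, random

def discrete_dds(f, choices, evals=500, seed=0):
    """Sketch of the discrete DDS idea: perturb a dynamically shrinking,
    randomly chosen subset of decision variables around the current best.

    choices[i] is the list of allowed discrete values for variable i.
    """
    rng = random.Random(seed)
    n = len(choices)
    best = [rng.choice(c) for c in choices]
    best_f = f(best)
    for k in range(1, evals):
        # Probability of perturbing each variable decays with the eval count,
        # so the search narrows from global to local automatically.
        p = 1.0 - math.log(k) / math.log(evals)
        cand = best[:]
        dims = [i for i in range(n) if rng.random() < p]
        if not dims:                       # always perturb at least one variable
            dims = [rng.randrange(n)]
        for i in dims:
            cand[i] = rng.choice(choices[i])
        cf = f(cand)
        if cf <= best_f:                   # greedy acceptance around the best
            best, best_f = cand, cf
    return best, best_f

# Toy "pipe sizing": pick one of 5 diameters per pipe, minimize a separable cost.
choices = [[1, 2, 3, 4, 5]] * 6
target = [3, 1, 4, 2, 5, 2]
x, y = discrete_dds(lambda v: sum((a - b) ** 2 for a, b in zip(v, target)), choices)
print(y)
```

The decaying perturbation probability is also why DDS needs no parameter tuning: the single schedule above adapts the search to whatever evaluation budget is given.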

Journal Article
TL;DR: This paper presents a new algorithm, in which the instances are not discarded, but are instead projected onto the space spanned by the previous online hypothesis, and it is proved that its solution is guaranteed to be bounded.
Abstract: A common problem of kernel-based online algorithms, such as the kernel-based Perceptron algorithm, is the amount of memory required to store the online hypothesis, which may increase without bound as the algorithm progresses. Furthermore, the computational load of such algorithms grows linearly with the amount of memory used to store the hypothesis. To attack these problems, most previous work has focused on discarding some of the instances, in order to keep the memory bounded. In this paper we present a new algorithm, in which the instances are not discarded, but are instead projected onto the space spanned by the previous online hypothesis. We call this algorithm Projectron. While the memory size of the Projectron solution cannot be predicted before training, we prove that its solution is guaranteed to be bounded. We derive a relative mistake bound for the proposed algorithm, and deduce from it a slightly different algorithm which outperforms the Perceptron. We call this second algorithm Projectron++. We show that this algorithm can be extended to handle the multiclass and the structured output settings, resulting, as far as we know, in the first online bounded algorithm that can learn complex classification tasks. The method of bounding the hypothesis representation can be applied to any conservative online algorithm and to other online algorithms, as it is demonstrated for ALMA2. Experimental results on various data sets show the empirical advantage of our technique compared to various bounded online algorithms, both in terms of memory and accuracy.

Journal ArticleDOI
TL;DR: The proposed hybrid learning algorithm combines the genetic algorithm and the nonlinear Levenberg-Marquardt algorithm to identify parameters for the aggregate load model (ZIP augmented with induction motor).
Abstract: Parameter identification is the key technology in measurement-based load modeling. A hybrid learning algorithm is proposed to identify parameters for the aggregate load model (ZIP augmented with induction motor). The hybrid learning algorithm combines the genetic algorithm (GA) and the nonlinear Levenberg-Marquardt (L-M) algorithm. It takes advantage of the global search ability of the GA and the local search ability of the L-M algorithm, yielding a more powerful search technique. The proposed algorithm is tested for load parameter identification using both simulation data and field measurement data. Numerical results illustrate that the hybrid learning algorithm can improve the accuracy and reduce the computation time of load model parameter identification.
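The global-then-local pattern can be illustrated on a toy fit of a static quadratic (ZIP-style) load curve P(V) = a + bV + cV². Everything below is invented for the example: a tiny mutation-only evolutionary search stands in for the GA, and a simple coordinate search stands in for the Levenberg-Marquardt refinement:

```python
import random

def sse(params, data):
    """Sum of squared errors of a static ZIP-style curve P(V) = a + b*V + c*V^2."""
    a, b, c = params
    return sum((a + b * v + c * v * v - p) ** 2 for v, p in data)

def hybrid_fit(data, gens=60, pop_size=30, seed=0):
    rng = random.Random(seed)
    # --- global phase: mutation-only evolutionary search (GA stand-in) ---
    pop = [[rng.uniform(-2, 2) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: sse(p, data))
        survivors = pop[: pop_size // 2]
        pop = survivors + [
            [x + rng.gauss(0, 0.1) for x in rng.choice(survivors)]
            for _ in range(pop_size - len(survivors))
        ]
    best = min(pop, key=lambda p: sse(p, data))
    # --- local phase: coordinate search (Levenberg-Marquardt stand-in) ---
    step = 0.05
    while step > 1e-6:
        improved = False
        for i in range(3):
            for d in (step, -step):
                cand = best[:]
                cand[i] += d
                if sse(cand, data) < sse(best, data):
                    best, improved = cand, True
        if not improved:
            step /= 2
    return best

# Synthetic noise-free measurements from P(V) = 0.4 + 0.3*V + 0.3*V^2.
data = [(v / 10, 0.4 + 0.3 * (v / 10) + 0.3 * (v / 10) ** 2) for v in range(5, 16, 2)]
a, b, c = hybrid_fit(data)
print(round(a, 2), round(b, 2), round(c, 2))
```

The global phase only needs to land in the right basin; the cheap local polish then drives the residual down, which is the division of labor the abstract describes.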

Proceedings ArticleDOI
14 Jun 2009
TL;DR: An algorithm, known as the Adaptive k-Meteorologists Algorithm, is used to create a new reinforcement-learning algorithm for factored-state problems that enjoys significant improvement over the previous state-of-the-art algorithm.
Abstract: The purpose of this paper is three-fold. First, we formalize and study a problem of learning probabilistic concepts in the recently proposed KWIK framework. We give details of an algorithm, known as the Adaptive k-Meteorologists Algorithm, analyze its sample-complexity upper bound, and give a matching lower bound. Second, this algorithm is used to create a new reinforcement-learning algorithm for factored-state problems that enjoys significant improvement over the previous state-of-the-art algorithm. Finally, we apply the Adaptive k-Meteorologists Algorithm to remove a limiting assumption in an existing reinforcement-learning algorithm. The effectiveness of our approaches is demonstrated empirically in a couple of benchmark domains as well as a robotics navigation problem.

Journal ArticleDOI
TL;DR: The objective of this research was the development of a method that integrated an activity analysis model of profits from production with a biophysical model, and included the capacity for optimization over multiple objectives.

Book ChapterDOI
17 Nov 2009
TL;DR: A hybrid algorithm is presented that combines the EM methodology and genetic operators to obtain the best/optimal schedule for the single machine scheduling problem, aiming to balance convergence and diversity as it iteratively solves the problem.
Abstract: The electromagnetism-like algorithm (EM) is a population-based meta-heuristic which has been proposed to solve continuous problems effectively. In this paper, we present a new meta-heuristic that uses the EM methodology to solve the single machine scheduling problem. Single machine scheduling is a combinatorial optimization problem. Schedule representation for our problem is based on random keys. Because there is little research on solving combinatorial optimization problems (COPs) by EM, this paper employs the random-key concept to enable EM to solve a COP, the single machine scheduling problem. We present a hybrid algorithm that combines the EM methodology and genetic operators to obtain the best/optimal schedule for this problem, aiming to balance convergence and diversity as it iteratively solves the problem. The objective in our problem is minimization of the sum of earliness and tardiness. This hybrid algorithm was tested on a set of standard test problems available in the literature. The computational results show that this hybrid algorithm performs better than the standard genetic algorithm.
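The random-key trick that lets a continuous meta-heuristic handle a permutation problem is simple to show: each job gets one continuous key, and sorting jobs by key value yields the schedule. The instance below is invented for the example:

```python
def decode_random_keys(keys):
    """Random-key decoding: sort job indices by their key values to get a sequence."""
    return sorted(range(len(keys)), key=lambda j: keys[j])

def earliness_tardiness(sequence, proc, due):
    """Sum of earliness and tardiness on one machine, jobs run back-to-back."""
    t, total = 0, 0
    for j in sequence:
        t += proc[j]
        total += abs(due[j] - t)     # earliness if early, tardiness if late
    return total

proc = [4, 2, 3]           # processing times
due = [9, 2, 5]            # due dates
keys = [0.83, 0.11, 0.42]  # one continuous key per job, as EM would evolve
seq = decode_random_keys(keys)
print(seq, earliness_tardiness(seq, proc, due))  # -> [1, 2, 0] 0
```

Any continuous move applied to the key vector (an EM force step, a Gaussian mutation) always decodes to a valid job sequence, which is exactly why random keys make EM applicable to this COP.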

Journal ArticleDOI
TL;DR: A multi-objective meta-heuristic is proposed to obtain diverse locally non-dominated solutions for the project selection problem, and the computational results show the superiority of the proposed algorithm in comparison with NSGA-II.

Proceedings ArticleDOI
06 Jun 2009
TL;DR: Simulation results show the number of iterations is significantly less than that of the traditional batch BP learning algorithm with a constant learning rate, and a formula for the self-adaptive learning rate is given.
Abstract: This paper addresses the question of improving the convergence performance of back propagation (BP) neural networks. In the traditional BP neural network algorithm, the learning rate selection depends on experience and trial. In this paper, based on the Taylor formula, the functional relationship between the change in total quadratic training error and the changes in connection weights and biases is obtained, and, combined with the weight and bias changes in the batch BP learning algorithm, a formula for a self-adaptive learning rate is given. Unlike existing algorithms, the self-adaptive learning rate depends only on the neural network topology, the training samples, the average quadratic error and the error surface gradient, not on artificial selection. Simulation results show the number of iterations is significantly less than that of the traditional batch BP learning algorithm with a constant learning rate.
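A self-adaptive learning rate can be illustrated with the classic bold-driver heuristic, which stands in here for the paper's Taylor-series-derived formula; the one-neuron batch regression task is likewise invented for the example:

```python
# Bold-driver adaptation (illustrative stand-in for the paper's derived rate):
# grow the step after an error decrease, shrink it and retry after an increase.
def train_batch(data, epochs=200):
    w, b, lr = 0.0, 0.0, 0.1
    def err(w, b):
        return sum((w * x + b - y) ** 2 for x, y in data) / len(data)
    prev = err(w, b)
    for _ in range(epochs):
        gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
        gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
        nw, nb = w - lr * gw, b - lr * gb
        e = err(nw, nb)
        if e < prev:
            w, b, prev, lr = nw, nb, e, lr * 1.05  # accept the step and speed up
        else:
            lr *= 0.5                              # reject the step and slow down
    return w, b

data = [(0, 1), (1, 3), (2, 5)]          # samples from y = 2x + 1
w, b = train_batch(data)
print(round(w, 2), round(b, 2))          # -> 2.0 1.0
```

As in the paper's scheme, no learning rate is hand-picked: the rate is driven entirely by the observed training error, here through accept/reject feedback rather than a closed-form expression.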

Proceedings ArticleDOI
06 Jul 2009
TL;DR: This paper presents a new hybrid algorithm, which is based on the concepts of the Artificial Bee Colony (ABC) and Greedy Randomized Adaptive Search Procedure (GRASP), for optimally clustering N objects into K clusters.
Abstract: This paper presents a new hybrid algorithm, based on the concepts of the Artificial Bee Colony (ABC) and the Greedy Randomized Adaptive Search Procedure (GRASP), for optimally clustering N objects into K clusters. The proposed algorithm is a two-phase algorithm which combines an Artificial Bee Colony optimization algorithm for the solution of the feature selection problem and a GRASP algorithm for the solution of the clustering problem. As the feature selection problem is a discrete problem, a modification of the initially proposed Artificial Bee Colony optimization algorithm, a Discrete Artificial Bee Colony optimization algorithm, is proposed in this study. The performance of the algorithm is compared with other popular metaheuristic methods such as classic genetic algorithms, tabu search, GRASP, ant colony optimization, particle swarm optimization and the honey bees mating optimization algorithm. To assess the efficacy of the proposed algorithm, it is evaluated on datasets from the UCI Machine Learning Repository. The proposed algorithm gives very good results, and in some instances the percentage of correctly clustered samples is larger than 98%.

Journal ArticleDOI
TL;DR: In this article, a genetic algorithm is used to solve job shop problems with sequence-dependent setup times under minimization of the maximum completion time or makespan.
Abstract: In this work we consider job shop problems where the setup times are sequence dependent under minimisation of the maximum completion time or makespan. We present a genetic algorithm to solve the problem. The genetic algorithm is hybridised with a diversification mechanism, namely the restart phase, and a simple form of local search to enrich the algorithm. Various operators and parameters of the genetic algorithm are reviewed to calibrate the algorithm by means of the Taguchi method. For the evaluation of the proposed hybrid algorithm, it is compared against existing algorithms through a benchmark. All the results demonstrate that our hybrid genetic algorithm is very effective for the problem.

Journal ArticleDOI
TL;DR: A hybrid evolutionary algorithm which synergistically exploits differential evolution, genetic algorithms and particle swarm optimization, has been developed and applied to spacecraft trajectory optimization and is successfully employed to determine mission opportunities in a large search space.
Abstract: A hybrid evolutionary algorithm which synergistically exploits differential evolution, genetic algorithms and particle swarm optimization, has been developed and applied to spacecraft trajectory optimization. The cooperative procedure runs the three basic algorithms in parallel, while letting the best individuals migrate to the other populations at prescribed intervals. Rendezvous problems and round-trip Earth–Mars missions have been considered. The results show that the hybrid algorithm has better performance compared to the basic algorithms that are employed. In particular, for the rendezvous problem, a 100% efficiency can be obtained both by differential evolution and the genetic algorithm only when particular strategies and parameter settings are adopted. On the other hand, the hybrid algorithm always attains the global optimum, even though nonoptimal strategies and parameter settings are adopted. Also the number of function evaluations, which must be performed to attain the optimum, is reduced when the hybrid algorithm is used. In the case of Earth–Mars missions, the hybrid algorithm is successfully employed to determine mission opportunities in a large search space.

Journal ArticleDOI
TL;DR: The experimental results show that MPAGAFS can be used not only for serial feature selection but also for parallel feature selection, with satisfying precision and number of features.
Abstract: The search algorithm is an essential part of a feature selection algorithm. In this paper, by constructing a double chain-like agent structure and using improved genetic operators, the authors propose a novel agent genetic algorithm, the multi-population agent genetic algorithm for feature selection (MPAGAFS). The double chain-like agent structure better resembles local environments in the real world, and its introduction helps maintain the diversity of the population. Moreover, the structure helps construct a multi-population agent GA, thereby realizing parallel searching for the optimal feature subset. To evaluate the performance of MPAGAFS, several groups of experiments are conducted. The experimental results show that MPAGAFS can be used not only for serial feature selection but also for parallel feature selection, with satisfying precision and number of features.

Journal ArticleDOI
TL;DR: This paper extends the notion of cellularity to memetic algorithms (MA), a configuration termed cellular memetic algorithm (CMA), and proposes adaptive mechanisms that tailor the amount of exploration versus exploitation of local solutions carried out by the CMA.
Abstract: A cellular genetic algorithm (CGA) is a decentralized form of GA where individuals in a population are usually arranged in a 2D grid and interactions among individuals are restricted to a set neighborhood. In this paper, we extend the notion of cellularity to memetic algorithms (MA), a configuration termed cellular memetic algorithm (CMA). In addition, we propose adaptive mechanisms that tailor the amount of exploration versus exploitation of local solutions carried out by the CMA. We systematically benchmark this adaptive mechanism and provide evidence that the resulting adaptive CMA outperforms other methods both in the quality of solutions obtained and the number of function evaluations for a range of continuous optimization problems.
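A minimal sketch of the cellular layout, assuming a toroidal grid, von Neumann neighborhoods, and blend crossover (all illustrative choices; the paper's adaptive memetic local search is omitted):

```python
import random

def cellular_ga(f, dim, lo, hi, grid=6, gens=80, seed=0):
    """Minimal cellular GA sketch: individuals live on a toroidal 2D grid and
    each one recombines only with its best von Neumann neighbor."""
    rng = random.Random(seed)
    pop = [[[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(grid)]
           for _ in range(grid)]
    for _ in range(gens):
        new = [[None] * grid for _ in range(grid)]
        for r in range(grid):
            for c in range(grid):
                # von Neumann neighborhood on a torus
                nbrs = [pop[(r - 1) % grid][c], pop[(r + 1) % grid][c],
                        pop[r][(c - 1) % grid], pop[r][(c + 1) % grid]]
                mate = min(nbrs, key=f)
                child = [(a + b) / 2 + rng.gauss(0, 0.05)   # blend + mutation
                         for a, b in zip(pop[r][c], mate)]
                child = [min(max(v, lo), hi) for v in child]
                # synchronous replacement: keep the better of parent and child
                new[r][c] = min(pop[r][c], child, key=f)
        pop = new
    return min((ind for row in pop for ind in row), key=f)

# Sphere function in 3-D.
best = cellular_ga(lambda x: sum(t * t for t in x), dim=3, lo=-5, hi=5)
print([round(t, 2) for t in best])
```

Because each cell mates only within its neighborhood, good solutions spread slowly across the grid, which is the diffusion effect that lets cellular schemes hold on to diversity longer than panmictic GAs.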

Journal ArticleDOI
TL;DR: This paper aims at developing an approach that combines a local improvement strategy with a genetic algorithm; it not only converges to the best solution very quickly but also produces solutions that are as accurate as any results reported so far in the literature.

Journal ArticleDOI
TL;DR: A spatial correlation hybrid genetic algorithm based on the characteristics of fractals and the partitioned iterated function system (PIFS), which adopts a dyadic mutation operator in place of the traditional one to avoid premature convergence.

Journal ArticleDOI
TL;DR: A novel genetic algorithm for constrained optimization is described in this paper; it incorporates modified genetic operators that preserve the feasibility of the trial solutions encoded in the chromosomes, stochastic application of a local search procedure, and a stopping rule based on asymptotic considerations.

Journal ArticleDOI
TL;DR: The results showed that the hybrid HS–Solver algorithm requires fewer iterations and gives more effective results than other deterministic and stochastic solution algorithms.
Abstract: In this article, a hybrid global–local optimization algorithm is proposed to solve continuous engineering optimization problems. In the proposed algorithm, the harmony search (HS) algorithm is used as the global-search method and hybridized with a spreadsheet 'Solver' to improve its results. With this purpose, the hybrid HS–Solver algorithm has been proposed. To test the performance of the proposed hybrid HS–Solver algorithm, several unconstrained, constrained, and structural-engineering optimization problems have been solved and their results compared with those of other deterministic and stochastic solution methods. An empirical study has also been carried out to test the performance of the proposed hybrid HS–Solver algorithm for different sets of HS solution parameters. The results showed that the hybrid HS–Solver algorithm requires fewer iterations and gives more effective results than other deterministic and stochastic solution algorithms.
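For reference, the global phase alone, a bare-bones harmony search, can be sketched as below on an invented penalty-based toy problem; the spreadsheet-Solver local polish that the article hybridizes it with is not reproduced:

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.1, iters=2000, seed=0):
    """Bare-bones harmony search with typical textbook parameter values."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for j, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                    # memory consideration
                v = rng.choice(memory)[j]
                if rng.random() < par:                 # pitch adjustment
                    v += rng.uniform(-bw, bw)
            else:                                      # random consideration
                v = rng.uniform(lo, hi)
            new.append(min(max(v, lo), hi))
        worst = max(range(hms), key=lambda i: f(memory[i]))
        if f(new) < f(memory[worst]):
            memory[worst] = new                        # replace the worst harmony
    return min(memory, key=f)

# Constrained toy problem via a penalty: minimize x + y s.t. x*y >= 1, 0 <= x, y <= 5.
def objective(v):
    x, y = v
    return x + y + 1000 * max(0.0, 1 - x * y)          # penalized constraint

best = harmony_search(objective, [(0, 5), (0, 5)])
print(round(best[0] * best[1], 2), round(best[0] + best[1], 2))
```

The optimum of this toy problem is x = y = 1 with objective 2; a local solver started from the HS result would close the remaining gap, which is the division of labor the article proposes.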