Journal ArticleDOI

Metaheuristic design of feedforward neural networks

TL;DR: A broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches, is summarized, and interesting challenges for future research in the present information-processing era are identified.
About: This article was published in Engineering Applications of Artificial Intelligence on 2017-04-01 and is currently open access. It has received 398 citations to date. The article focuses on the topics: Metaheuristic & Extreme learning machine.
Citations
Journal ArticleDOI
TL;DR: The statistical simulation results revealed that the LFD algorithm provides better results, with superior performance in most tests, compared to several well-known metaheuristic algorithms such as simulated annealing (SA), differential evolution (DE), particle swarm optimization (PSO), elephant herding optimization (EHO), the genetic algorithm (GA), the moth-flame optimization algorithm (MFO), and the whale optimization algorithm (WOA).

248 citations

Journal ArticleDOI
01 Sep 2019
TL;DR: It is shown that the proposed stochastic training algorithm GOAMLP substantially improves the classification rate of MLPs.
Abstract: This paper proposes a new hybrid stochastic training algorithm using the recently proposed grasshopper optimization algorithm (GOA) for multilayer perceptron (MLP) neural networks. The GOA is an emerging technique with high potential for tackling optimization problems, based on its flexible and adaptive searching mechanisms. It can demonstrate satisfactory performance by escaping from local optima and balancing the exploration and exploitation trends. The proposed GOAMLP model is applied to five important datasets: breast cancer, Parkinson's disease, diabetes, coronary heart disease, and orthopedic patients. The results are thoroughly validated, qualitatively and quantitatively, against eight recent and well-regarded algorithms. It is shown that the proposed stochastic training algorithm GOAMLP substantially improves the classification rate of MLPs.
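As an illustration of the general idea (a minimal sketch, not the authors' GOAMLP implementation), the snippet below treats the flattened weights of a one-hidden-layer MLP as a candidate solution and improves a population of candidates with a simple stochastic search. The toy dataset, network sizes, and update rule are all illustrative assumptions; replacing the population-update line with GOA's social-interaction equations yields the kind of trainer the paper describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class dataset (an illustrative stand-in for e.g. the diabetes data).
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

n_in, n_hid = 4, 6
n_w = n_in * n_hid + n_hid               # weights of a 1-hidden-layer MLP, no biases

def forward(w, X):
    W1 = w[:n_in * n_hid].reshape(n_in, n_hid)
    w2 = w[n_in * n_hid:]
    h = np.tanh(X @ W1)
    return 1.0 / (1.0 + np.exp(-(h @ w2)))   # sigmoid output

def loss(w):
    return np.mean((forward(w, X) - y) ** 2)  # MSE fitness, as in most FNN trainers

# Population-based stochastic search (a generic stand-in for GOA's update rule).
pop = rng.normal(scale=0.5, size=(30, n_w))
for it in range(200):
    fitness = np.array([loss(w) for w in pop])
    best = pop[fitness.argmin()]
    # Move each candidate toward the best solution, plus random exploration.
    pop = pop + 0.5 * (best - pop) + rng.normal(scale=0.1, size=pop.shape)
    pop[0] = best                             # elitism: keep the best-so-far

acc = np.mean((forward(best, X) > 0.5) == y)
print(f"training accuracy: {acc:.2f}")
```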

224 citations

Journal ArticleDOI
TL;DR: The approach proposed in this study demonstrates the feasibility of creating tools that help in regional waste planning by sourcing, pre-processing, integrating, and modeling publicly available data from various sources.

187 citations

Journal ArticleDOI
TL;DR: A new parameter learning strategy based on an improved grey wolf optimization (IGWO) strategy, in which a new hierarchical mechanism was established to improve the stochastic behavior, and exploration capability of grey wolves, is proposed.
Abstract: Since its introduction, kernel extreme learning machine (KELM) has been widely used in a number of areas. The parameters in the model have an important influence on the performance of KELM. Therefore, model parameters must be properly adjusted before they can be put into practical use. This study proposes a new parameter learning strategy based on an improved grey wolf optimization (IGWO) strategy, in which a new hierarchical mechanism was established to improve the stochastic behavior and exploration capability of grey wolves. In the proposed mechanism, random local search around the optimal grey wolf was introduced for Beta grey wolves, and random global search was introduced for Omega grey wolves. The effectiveness of the IGWO strategy is first validated on 10 commonly used benchmark functions. Results have shown that the proposed IGWO can find a good balance between exploration and exploitation. In addition, when IGWO is applied to solve the parameter adjustment problem of the KELM model, it also provides better performance than seven other meta-heuristic algorithms in three practical applications, including students' second major selection, thyroid cancer diagnosis, and financial stress prediction. Therefore, the method proposed in this paper can serve as a good candidate tool for tuning the parameters of KELM, thus enabling the KELM model to achieve more promising results in practical applications.
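The improved hierarchy is specific to the paper, but the baseline it builds on can be sketched: standard grey wolf optimization moves each wolf toward the consensus of the three best wolves (alpha, beta, delta). The benchmark function and all parameters below are illustrative assumptions, not the paper's IGWO.

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):                           # common benchmark objective (illustrative)
    return np.sum(x ** 2)

dim, n_wolves, iters = 10, 20, 200
wolves = rng.uniform(-5, 5, size=(n_wolves, dim))

for t in range(iters):
    fit = np.array([sphere(w) for w in wolves])
    order = fit.argsort()
    alpha, beta, delta = wolves[order[0]], wolves[order[1]], wolves[order[2]]
    a = 2 * (1 - t / iters)              # exploration coefficient decays from 2 to 0
    new = np.empty_like(wolves)
    for i, x in enumerate(wolves):
        cand = []
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2 * a * r1 - a, 2 * r2
            cand.append(leader - A * np.abs(C * leader - x))
        new[i] = np.mean(cand, axis=0)   # move toward the leaders' consensus
    wolves = new

print("best value:", min(sphere(w) for w in wolves))
```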

151 citations

Journal ArticleDOI
TL;DR: It can be observed that combined methods are viable and accurate approaches for time series forecasting, and that the parallel–series hybrid structure can obtain more accurate and promising results than the other hybrid structures.

110 citations

References
Journal ArticleDOI
28 May 2015-Nature
TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
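A minimal sketch of the backpropagation step the abstract describes, for a two-layer network on a toy regression task; the architecture, data, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: learn y = sin(x) on [-pi, pi].
X = rng.uniform(-np.pi, np.pi, size=(256, 1))
y = np.sin(X)

W1 = rng.normal(scale=0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.05

for step in range(2000):
    # Forward pass: each layer computes its representation from the previous one.
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y                       # dL/dpred for a 0.5*MSE loss

    # Backward pass: backpropagate the error signal layer by layer.
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)     # tanh'(z) = 1 - tanh(z)^2
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)

    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print("final MSE:", float(np.mean(err ** 2)))
```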

46,982 citations

Journal ArticleDOI
13 May 1983-Science
TL;DR: There is a deep and useful connection between statistical mechanics and multivariate or combinatorial optimization (finding the minimum of a given function depending on many parameters), and a detailed analogy with annealing in solids provides a framework for optimization of very large and complex systems.
Abstract: There is a deep and useful connection between statistical mechanics (the behavior of systems with many degrees of freedom in thermal equilibrium at a finite temperature) and multivariate or combinatorial optimization (finding the minimum of a given function depending on many parameters). A detailed analogy with annealing in solids provides a framework for optimization of the properties of very large and complex systems. This connection to statistical mechanics exposes new information and provides an unfamiliar perspective on traditional optimization problems and methods.
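A minimal sketch of simulated annealing on a one-dimensional toy objective, showing the Metropolis acceptance rule and a geometric cooling schedule; both the objective and the schedule are illustrative assumptions.

```python
import math
import random

random.seed(0)

def f(x):                             # toy objective with many local minima
    return x ** 2 + 10 * math.sin(3 * x)

x = 4.0                               # current solution
T, cooling = 5.0, 0.995               # initial temperature and geometric cooling

for step in range(5000):
    x_new = x + random.gauss(0, 0.5)  # random neighbor of the current solution
    delta = f(x_new) - f(x)
    # Metropolis criterion: always accept improvements; accept uphill
    # moves with probability exp(-delta / T), which shrinks as T cools.
    if delta < 0 or random.random() < math.exp(-delta / T):
        x = x_new
    T *= cooling

print(f"x = {x:.3f}, f(x) = {f(x):.3f}")
```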

41,772 citations

Journal ArticleDOI
TL;DR: High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated, and the performance of the support-vector network is compared to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
Abstract: The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensure high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data. High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
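A minimal sketch of the idea using scikit-learn's SVC with a degree-3 polynomial kernel; the toy digits dataset stands in for the paper's OCR benchmark and is an assumption, not the original experimental setup.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A degree-3 polynomial kernel implicitly maps inputs to a high-dimensional
# feature space, where a linear decision surface is fitted.
clf = SVC(kernel="poly", degree=3, C=1.0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```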

37,861 citations


"Metaheuristic design of feedforward..." refers background in this paper

  • ...Many other ANN models, like radial basis function [23] and support vector machine [24], are a special class of three-layer FNNs....

    [...]

Journal ArticleDOI
TL;DR: In this article, a modified Monte Carlo integration over configuration space is used to investigate the properties of a two-dimensional rigid-sphere system of interacting individual molecules, and the results are compared to the free volume equation of state and a four-term virial coefficient expansion.
Abstract: A general method, suitable for fast computing machines, for investigating such properties as equations of state for substances consisting of interacting individual molecules is described. The method consists of a modified Monte Carlo integration over configuration space. Results for the two‐dimensional rigid‐sphere system have been obtained on the Los Alamos MANIAC and are presented here. These results are compared to the free volume equation of state and to a four‐term virial coefficient expansion.
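A minimal sketch of a Metropolis move for a 2D hard-disk system like the one studied here: because hard spheres have an all-or-nothing Boltzmann factor, a trial displacement is accepted exactly when it creates no overlap. Box size, disk count, and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal Metropolis Monte Carlo for a 2D hard-disk gas in a periodic box.
L, sigma, n = 10.0, 1.0, 20            # box side, disk diameter, disk count
pos = rng.uniform(0, L, size=(n, 2))   # may start with overlaps; they relax out

def overlaps(i, trial):
    d = pos - trial
    d -= L * np.round(d / L)           # nearest image under periodic boundaries
    r2 = np.sum(d ** 2, axis=1)
    r2[i] = np.inf                     # ignore the moved disk itself
    return np.any(r2 < sigma ** 2)

accepted = 0
for sweep in range(2000):
    i = rng.integers(n)
    trial = (pos[i] + rng.uniform(-0.3, 0.3, size=2)) % L
    # Hard-sphere Metropolis rule: the Boltzmann factor is 1 with no overlap
    # and 0 otherwise, so exactly the non-overlapping moves are accepted.
    if not overlaps(i, trial):
        pos[i] = trial
        accepted += 1

print("acceptance rate:", accepted / 2000)
```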

35,161 citations


"Metaheuristic design of feedforward..." refers methods in this paper

  • ...Similarly, a PSO and SA based hybrid algorithm for optimizing FNNs was proposed in [191], where, in each iteration, each PSO particle was governed by the SA Metropolis criterion [100], which determined the global best particle for the PSO algorithm....

    [...]

  • ...It uses the Monte Carlo method [100] to determine the acceptance probability of a newly generated solution in the neighborhood of the current solution....

    [...]

Journal ArticleDOI

28,888 citations


"Metaheuristic design of feedforward..." refers background or methods in this paper

  • ...Many researchers suggested that the Levenberg-Marquardt (LM) method outperforms BP, CG, and Quasi-Newton methods [79, 80]....

    [...]

  • ...Similar to the CG, many other variants of derivative-based conventional methods are used for weights optimization: Quasi-Newton [65], Gauss-Newton [76], or Levenberg-Marquardt [77]....

    [...]

  • ...problem, which suggests using the sum of squared error (4) [77]....

    [...]
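The excerpts above reference the Levenberg-Marquardt method and the sum-of-squared-error objective; below is a minimal sketch of the LM update w ← w − (JᵀJ + μI)⁻¹Jᵀr on an illustrative curve-fitting problem. The exponential model and finite-difference Jacobian are assumptions, not the paper's equation (4).

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic data for an illustrative model y = a * exp(-b * x).
x = np.linspace(0, 2, 50)
y = 2.0 * np.exp(-1.5 * x) + rng.normal(scale=0.02, size=x.size)

def residuals(w):
    a, b = w
    return a * np.exp(-b * x) - y

def jacobian(w, eps=1e-6):
    # Central finite differences; an analytic Jacobian would normally be used.
    J = np.empty((x.size, w.size))
    for j in range(w.size):
        dw = np.zeros_like(w); dw[j] = eps
        J[:, j] = (residuals(w + dw) - residuals(w - dw)) / (2 * eps)
    return J

w, mu = np.array([1.0, 1.0]), 1e-2
for it in range(20):
    r, J = residuals(w), jacobian(w)
    # Damped Gauss-Newton step: solve (J^T J + mu I) step = J^T r.
    step = np.linalg.solve(J.T @ J + mu * np.eye(w.size), J.T @ r)
    w_new = w - step
    if np.sum(residuals(w_new) ** 2) < np.sum(r ** 2):
        w, mu = w_new, mu * 0.5        # success: trust the Gauss-Newton direction
    else:
        mu *= 2.0                      # failure: fall back toward gradient descent

print("fitted parameters:", w)
```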