Journal ArticleDOI

A new parallel galactic swarm optimization algorithm for training artificial neural networks

01 Jan 2020-Journal of Intelligent and Fuzzy Systems (IOS Press)-Vol. 38, Iss: 5, pp 6691-6701
About: This article was published in the Journal of Intelligent and Fuzzy Systems on 2020-01-01 and has received 2 citations to date. The article focuses on the topics: Swarm behaviour & Artificial neural network.
Citations
Journal ArticleDOI
TL;DR: As described in this paper, Wang et al. proposed a feature map-based pruning strategy that removes redundant parameters and reduces the time and space complexity of parallelized deep convolutional neural network (DCNN) training.
Abstract: In a high-paced and efficient life and work environment, fatigue is one of the major causes of accidents such as traffic and medical accidents. This study designs a feature map-based pruning strategy (PFM) that effectively removes redundant parameters and reduces the time and space complexity of parallelized deep convolutional neural network (DCNN) training. In the Map stage, a corrected secant conjugate gradient method (CGMSE) is proposed to achieve fast convergence of the conjugate gradient method and improve the convergence speed of the network; in the Reduce stage, a load balancing strategy that controls the load rate (LBRLA) is proposed to achieve fast and uniform data grouping and ensure the parallel performance of the system. Finally, research and simulation of the related eye-based fatigue algorithm are carried out on a PC. The face and eye regions are detected in video images collected with a USB camera, and the frame difference method, together with the position of the eyes on the face, is used to track the eye region and extract the relevant eye-fatigue features; blink frequency, eye-closure duration, PERCLOS, and other fatigue determination mechanisms are combined to determine the fatigue state, and the designed platform and algorithm are tested and verified through experiments. The system is designed to let people who doze off, such as drivers, discover their state in time and reduce the possibility of accidents due to fatigue.
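The PERCLOS-style fatigue criteria mentioned in this abstract can be illustrated with a short sketch. The following Python snippet is a minimal, hypothetical example rather than the authors' implementation: it computes PERCLOS, blink count, and the longest eye-closure duration from a per-frame eye-state sequence, with the frame rate and decision thresholds chosen purely as illustrative assumptions.

```python
# Minimal sketch of a PERCLOS-style fatigue check over per-frame eye states.
# Assumptions (not from the paper): 1 = eye closed, 0 = eye open, a fixed frame
# rate, and illustrative thresholds for PERCLOS and closure duration.

def fatigue_metrics(closed_flags, fps=25):
    """Return PERCLOS, blink count, and longest closure (seconds)."""
    n = len(closed_flags)
    perclos = sum(closed_flags) / n if n else 0.0

    blinks, longest_run, run, prev = 0, 0, 0, 0
    for c in closed_flags:
        if c:
            run += 1
            longest_run = max(longest_run, run)
        else:
            if prev:          # a closed -> open transition ends one blink
                blinks += 1
            run = 0
        prev = c
    if prev:                  # closure still in progress at the end
        blinks += 1

    return perclos, blinks, longest_run / fps

def is_fatigued(closed_flags, fps=25, perclos_thr=0.4, closure_thr=1.5):
    perclos, _, longest_closure = fatigue_metrics(closed_flags, fps)
    return perclos > perclos_thr or longest_closure > closure_thr

# Example: 10 s of frames ending with a long eye closure.
frames = [0] * 200 + [1] * 50
print(fatigue_metrics(frames), is_fatigued(frames))
```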

5 citations

Journal ArticleDOI
TL;DR: The research shows that the wavelet packet principal component analysis model performance is significantly better than the traditional algorithm, and the recognition rate for some subtle motions is also high.
Abstract: EMG signal acquisition is mostly used in medical research; it has rarely been applied to recognizing athletes' sports state or detecting body state, and there are few related studies at present. In order to promote the application of EMG signal acquisition in sports, this study constructs an EMG signal acquisition system, tailored to athletes' actual needs, that can collect athletes' motion status. At the same time, to improve the effect of EMG signal acquisition, a wavelet packet principal component analysis model is proposed. In addition, to ensure efficient recognition of athletes' motion state, this paper uses the linear discriminant analysis method as the auxiliary motion recognition algorithm. Finally, the performance of the model is judged through comparative experiments. The research shows that the wavelet packet principal component analysis model performs significantly better than the traditional algorithm, and the recognition rate for some subtle motions is also high. In addition, this study provides a theoretical reference for the application of EMG signals in the sports industry.
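As a rough illustration of the wavelet packet plus principal component analysis plus linear discriminant analysis pipeline described above, the sketch below uses PyWavelets and scikit-learn on synthetic signals. The wavelet ('db4'), the decomposition depth, the sub-band energy features, and the number of principal components are assumptions chosen for illustration, not the authors' configuration.

```python
# Sketch of a wavelet-packet + PCA feature extractor with an LDA classifier,
# in the spirit of the pipeline described above. Library choices (PyWavelets,
# scikit-learn), wavelet 'db4', 3 decomposition levels, and the energy feature
# are illustrative assumptions.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def wavelet_packet_energy(signal, wavelet="db4", level=3):
    """Energy of each terminal wavelet-packet node as a feature vector."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    return np.array([np.sum(np.square(n.data)) for n in nodes])

def extract_features(signals):
    return np.vstack([wavelet_packet_energy(s) for s in signals])

# Toy data: 40 synthetic 1-second EMG-like signals from two "motion" classes.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 512)
signals = [np.sin(2 * np.pi * (50 + 30 * (i % 2)) * t)
           + 0.3 * rng.standard_normal(t.size) for i in range(40)]
labels = np.array([i % 2 for i in range(40)])

X = extract_features(signals)
X_reduced = PCA(n_components=4).fit_transform(X)     # principal-component step
clf = LinearDiscriminantAnalysis().fit(X_reduced, labels)
print("training accuracy:", clf.score(X_reduced, labels))
```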

2 citations

References
Proceedings ArticleDOI
06 Aug 2002
TL;DR: A concept for the optimization of nonlinear functions using particle swarm methodology is introduced; the evolution of several paradigms is outlined, and an implementation of one of the paradigms is discussed.
Abstract: A concept for the optimization of nonlinear functions using particle swarm methodology is introduced. The evolution of several paradigms is outlined, and an implementation of one of the paradigms is discussed. Benchmark testing of the paradigm is described, and applications, including nonlinear function optimization and neural network training, are proposed. The relationships between particle swarm optimization and both artificial life and genetic algorithms are described.
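A minimal sketch of the particle swarm paradigm summarized above, applied to a simple nonlinear test function, is given below; the inertia weight and acceleration coefficients are common textbook values rather than those of the cited paper.

```python
# Minimal particle swarm optimization sketch for a nonlinear test function.
# The inertia weight and acceleration coefficients are typical illustrative
# values, not taken from the cited paper.
import numpy as np

def pso(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))           # positions
    v = np.zeros_like(x)                                  # velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()                # global best

    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, f(g)

sphere = lambda z: float(np.sum(z ** 2))
best, best_val = pso(sphere, dim=5)
print(best_val)  # should be close to 0
```

Replacing the sphere function with a network's training loss, evaluated over its flattened weight vector, gives the neural network training application mentioned in the abstract.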

35,104 citations

Journal ArticleDOI
TL;DR: There is no a priori reason why machine learning must borrow from nature, but many machine learning systems now borrow heavily from current thinking in cognitive science, and rekindled interest in neural networks and connectionism is evidence of serious mechanistic and philosophical currents running through the field.
Abstract: There is no a priori reason why machine learning must borrow from nature. A field could exist, complete with well-defined algorithms, data structures, and theories of learning, without once referring to organisms, cognitive or genetic structures, and psychological or evolutionary theories. Yet at the end of the day, with the position papers written, the computers plugged in, and the programs debugged, a learning edifice devoid of natural metaphor would lack something. It would ignore the fact that all these creations have become possible only after three billion years of evolution on this planet. It would miss the point that the very ideas of adaptation and learning are concepts invented by the most recent representatives of the species Homo sapiens from the careful observation of themselves and life around them. It would miss the point that natural examples of learning and adaptation are treasure troves of robust procedures and structures. Fortunately, the field of machine learning does rely upon nature's bounty for both inspiration and mechanism. Many machine learning systems now borrow heavily from current thinking in cognitive science, and rekindled interest in neural networks and connectionism is evidence of serious mechanistic and philosophical currents running through the field. Another area where natural example has been tapped is in work on genetic algorithms (GAs) and genetics-based machine learning. Rooted in the early cybernetics movement (Holland, 1962), progress has been made in both theory (Holland, 1975; Holland, Holyoak, Nisbett, & Thagard, 1986) and application (Goldberg, 1989; Grefenstette, 1985, 1987) to the point where genetics-based systems are finding their way into everyday commercial use (Davis & Coombs, 1987; Fourman, 1985).

3,019 citations

Journal ArticleDOI
TL;DR: In this article, the application of conventional and especially evolutionary computation, the combination of simulation and optimization, and multi-objective optimization in reservoir operation is discussed and investigated, and new optimization algorithms from other applications are presented, focusing on the Artificial Bee Colony (ABC) and Gravitational Search Algorithm (GSA) as alternative methods that can be explored by researchers in the water resources field.
Abstract: This paper reviews current optimization techniques developed to solve reservoir operation problems in water resources. The application of conventional and especially evolutionary computation, the combination of simulation and optimization, and multi-objective optimization in reservoir operation is discussed and investigated. Furthermore, new optimization algorithms from other applications are presented, focusing on the Artificial Bee Colony (ABC) and Gravitational Search Algorithm (GSA) as alternative methods that can be explored by researchers in the water resources field. Finally, the paper looks into the challenges and issues of climate change in reservoir optimization.

209 citations

Journal ArticleDOI
01 Jan 2016
TL;DR: Extensive simulation results show that the GSO algorithm proposed in this paper converges faster to a significantly more accurate solution on a wide variety of high dimensional and multimodal benchmark optimization problems.
Abstract: Highlights: (1) A new global optimization metaheuristic inspired by galactic motion is proposed. (2) The proposed algorithm employs alternating phases of exploration and exploitation. (3) Performance on rotated and shifted versions of benchmark problems is also considered. (4) The proposed GSO algorithm outperforms 8 state-of-the-art PSO algorithms. This paper proposes a new global optimization metaheuristic called Galactic Swarm Optimization (GSO) inspired by the motion of stars, galaxies, and superclusters of galaxies under the influence of gravity. GSO employs multiple cycles of exploration and exploitation phases to strike an optimal trade-off between exploration of new solutions and exploitation of existing solutions. In the explorative phase, different subpopulations independently explore the search space, and in the exploitative phase the best solutions of the different subpopulations are considered as a superswarm and moved towards the best solutions found by the superswarm. In this paper, subpopulations as well as the superswarm are updated using the PSO algorithm; however, the GSO approach is quite general and any population-based optimization algorithm can be used instead of PSO. Statistical test results indicate that the proposed GSO algorithm significantly outperforms 4 state-of-the-art PSO algorithms and 4 multiswarm PSO algorithms on an overwhelming majority of 15 benchmark optimization problems over 50 independent trials in up to 50 dimensions. Extensive simulation results show that the proposed GSO algorithm converges faster to a significantly more accurate solution on a wide variety of high-dimensional and multimodal benchmark optimization problems.
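The exploration/exploitation structure described in this abstract can be sketched as follows: each subswarm is first updated independently with standard PSO (exploration), and the subswarm bests then form a superswarm that is itself updated with PSO (exploitation). This is a simplified, illustrative reading of GSO; the epoch structure, parameter values, and test function below are assumptions rather than the published algorithm's exact settings.

```python
# Simplified sketch of the Galactic Swarm Optimization scheme summarized above.
# All parameter values and the epoch structure are illustrative assumptions.
import numpy as np

def pso_step(f, x, v, pbest, pbest_val, gbest, rng, w=0.7, c1=1.5, c2=1.5,
             bounds=(-5.0, 5.0)):
    """One standard PSO velocity/position update, returning the new state."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, *bounds)
    vals = np.apply_along_axis(f, 1, x)
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()
    return x, v, pbest, pbest_val, gbest

def gso(f, dim, n_subswarms=5, swarm_size=10, epochs=20,
        sub_iters=30, super_iters=30, bounds=(-5.0, 5.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    # Independent subswarm states.
    xs = [rng.uniform(lo, hi, (swarm_size, dim)) for _ in range(n_subswarms)]
    vs = [np.zeros((swarm_size, dim)) for _ in range(n_subswarms)]
    pb = [x.copy() for x in xs]
    pbv = [np.apply_along_axis(f, 1, x) for x in xs]
    gb = [p[np.argmin(v)].copy() for p, v in zip(pb, pbv)]

    best, best_val = None, np.inf
    for _ in range(epochs):
        # Phase 1: exploration -- each subswarm runs PSO on its own.
        for i in range(n_subswarms):
            for _ in range(sub_iters):
                xs[i], vs[i], pb[i], pbv[i], gb[i] = pso_step(
                    f, xs[i], vs[i], pb[i], pbv[i], gb[i], rng, bounds=bounds)
        # Phase 2: exploitation -- subswarm bests form the superswarm.
        sx = np.vstack(gb)
        sv = np.zeros_like(sx)
        spb, spbv = sx.copy(), np.apply_along_axis(f, 1, sx)
        sgb = spb[np.argmin(spbv)].copy()
        for _ in range(super_iters):
            sx, sv, spb, spbv, sgb = pso_step(
                f, sx, sv, spb, spbv, sgb, rng, bounds=bounds)
        if f(sgb) < best_val:
            best, best_val = sgb.copy(), float(f(sgb))
    return best, best_val

rastrigin = lambda z: float(10 * z.size + np.sum(z ** 2 - 10 * np.cos(2 * np.pi * z)))
print(gso(rastrigin, dim=10)[1])
```

As the abstract notes, any population-based optimizer could replace the PSO update in either phase; this sketch uses the same PSO step for both.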

138 citations

Journal ArticleDOI
TL;DR: Some of their important and emerging studies are discussed, their applications in several fields are investigated, and the differences between both algorithms are clarified so as to remove confusion between them.
Abstract: Nature-inspired metaheuristic algorithms are considered among the most effective techniques for solving various optimization problems. This paper provides a brief review of the key features of the cuckoo-inspired metaheuristics: cuckoo search (CS) and the cuckoo optimization algorithm (COA). In addition, it discusses some of their important and emerging studies, investigates their applications in several fields, and finally clarifies the differences between the two algorithms so as to remove confusion between them.

50 citations