Journal ArticleDOI

Parameter identification of PEMFC based on Convolutional neural network optimized by balanced deer hunting optimization algorithm

01 Nov 2020-Energy Reports (Elsevier)-Vol. 6, pp 1572-1580
TL;DR: A new improved version of the deer hunting optimization algorithm (DHOA) is applied to a convolutional neural network for PEMFC parameter identification, and the results indicate that the proposed method predicts the parameters of the PEMFC model with higher accuracy.
About: This article was published in Energy Reports on 2020-11-01 and is currently open access. It has received 31 citations to date.
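As a rough, hypothetical illustration of the pattern described in the TL;DR (a metaheuristic tuning a predictive model against measured data), the sketch below runs a generic population-based search over a toy model's parameters. It is not the paper's DHOA or CNN; every name and value in it is a placeholder.

```python
import numpy as np

# Hypothetical sketch: a generic population-based search (stand-in for DHOA)
# tuning a small set of model parameters against a prediction-error fitness.
# None of the names or values below come from the paper.

rng = np.random.default_rng(0)

def fitness(params, x, y_measured):
    """Sum of squared errors of a toy quadratic predictor (placeholder model)."""
    a, b, c = params
    y_pred = a * x**2 + b * x + c
    return np.sum((y_measured - y_pred) ** 2)

# synthetic "measured" data for illustration only
x = np.linspace(0.0, 1.0, 50)
y_measured = 0.8 * x**2 - 1.5 * x + 1.0 + rng.normal(0.0, 0.01, x.size)

lower, upper = np.array([-5.0, -5.0, -5.0]), np.array([5.0, 5.0, 5.0])
pop = rng.uniform(lower, upper, size=(30, 3))           # candidate parameter sets

for _ in range(200):                                    # search iterations
    scores = np.array([fitness(p, x, y_measured) for p in pop])
    leader = pop[np.argmin(scores)]                     # best candidate so far
    # move every candidate toward the leader with a random perturbation, then re-bound
    pop = np.clip(pop + rng.uniform(0, 1, pop.shape) * (leader - pop)
                  + rng.normal(0, 0.05, pop.shape), lower, upper)

print("best parameters:", leader, "SSE:", fitness(leader, x, y_measured))
```

In the paper's setting, the toy predictor would be replaced by the convolutional network and the generic leader-following move by the balanced DHOA update rules.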
Citations
Journal ArticleDOI
TL;DR: Simulation results show that the proposed dAOA provides accurate parameters of the PEMFC stack system.

62 citations

Journal ArticleDOI
TL;DR: A novel swarm-based algorithm, the coyote optimization algorithm (COA), is proposed for finding the optimal parameters of a PEM fuel cell as well as a PEM stack, and the final estimated results and statistical analysis show the significant accuracy of the proposed method.
Abstract: In recent years, the penetration of fuel cells in distribution systems has increased significantly worldwide. The fuel cell is considered an electrochemical energy conversion component: it converts chemical energy into electrical energy as well as heat. The proton exchange membrane (PEM) fuel cell uses hydrogen and oxygen as fuel. It is a low-temperature type that uses a noble metal catalyst, such as platinum, at the reaction sites. Optimal modeling of PEM fuel cells improves cell performance in different applications of the smart microgrid. Extracting the optimal parameters of the model can be achieved using an efficient optimization technique. In this line, this paper proposes a novel swarm-based algorithm called the coyote optimization algorithm (COA) for finding the optimal parameters of a PEM fuel cell as well as a PEM stack. The sum of squared deviations between the measured voltages and the optimal estimated voltages obtained from the COA algorithm is minimized. Two practical PEM fuel cells, a 250 W stack and the NedStack PS6, are modeled to validate the capability of the proposed algorithm under different operating conditions. The effectiveness of the proposed COA is demonstrated through comparison with four optimizers under the same conditions. The final estimated results and statistical analysis show the significant accuracy of the proposed method. These results emphasize the ability of the COA to estimate the parameters of the PEM fuel cell model more precisely.

45 citations
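The objective described in the abstract above, minimizing the sum of squared deviations between measured and estimated voltages, can be written compactly. A minimal sketch follows, with `estimate_voltage` standing in for whatever PEMFC voltage model is being fitted:

```python
import numpy as np

# Minimal sketch of the objective described above: the sum of squared
# deviations between measured stack voltages and voltages estimated from a
# candidate parameter vector.  `estimate_voltage` is a placeholder for any
# PEMFC voltage model evaluated at a given load current.

def sse_objective(theta, currents, v_measured, estimate_voltage):
    v_est = np.array([estimate_voltage(theta, i) for i in currents])
    return np.sum((v_measured - v_est) ** 2)
```

An optimizer such as COA then searches over `theta` to drive this value toward zero on the measured polarization data.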

Journal ArticleDOI
15 Apr 2021-Energy
TL;DR: In this article, a jellyfish search algorithm (JSA) was employed to solve the parameter identification problem of the polymer exchange membrane fuel cell (PEMFC) model; the fitness function to be minimized by the JSA, subject to a set of self-constrained inequality bounds, was defined as the sum of squared errors (SSE) between measured and estimated voltage data points.

40 citations
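The "self-constrained inequality bounds" in the TL;DR above refer to keeping every unknown parameter inside its lower and upper limits while the search runs. A minimal sketch of that bound handling, with arbitrary placeholder limits rather than the paper's actual ranges:

```python
import numpy as np

# Hedged sketch of bound handling: every unknown parameter is kept inside its
# [lower, upper] limits during the search.  The limits below are arbitrary
# placeholders, not values taken from the paper.

lower = np.array([0.0, 0.0, 0.0])
upper = np.array([1.0, 1.0, 1.0])

def repair(candidate):
    """Clip an out-of-range candidate back onto the feasible box."""
    return np.clip(candidate, lower, upper)
```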

Journal ArticleDOI
TL;DR: A Levenberg-Marquardt backpropagation (LMBP) algorithm based on artificial neural networks (ANNs) is proposed for PEMFC parameter identification, and simulation results indicate that LMBP achieves higher accuracy and faster speed for parameter identification.

27 citations
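For reference, the Levenberg-Marquardt update that LMBP applies is the damped Gauss-Newton step on the residuals. A minimal numerical sketch follows, with `model` and `theta` as placeholders and a finite-difference Jacobian standing in for backpropagated derivatives:

```python
import numpy as np

# Illustrative single Levenberg-Marquardt step: the damped Gauss-Newton update
# that LMBP applies to the free parameters.  `model(theta, x)` is a placeholder
# for any differentiable predictor.

def lm_step(model, theta, x, y, lam=1e-2, eps=1e-6):
    r = y - model(theta, x)                        # residuals
    # finite-difference Jacobian of the prediction w.r.t. the parameters
    J = np.empty((r.size, theta.size))
    for j in range(theta.size):
        t = theta.copy()
        t[j] += eps
        J[:, j] = (r - (y - model(t, x))) / eps
    # damped normal equations: (J^T J + lam*I) dtheta = J^T r
    dtheta = np.linalg.solve(J.T @ J + lam * np.eye(theta.size), J.T @ r)
    return theta + dtheta
```

In practice the damping factor `lam` is decreased when a step reduces the error and increased when it does not, which is what gives the method its blend of Gauss-Newton speed and gradient-descent robustness.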

Journal ArticleDOI
TL;DR: In this article, a genetic algorithm was adopted to optimize an original three-dimensional simulation model by obtaining the optimal design of the channel configuration, setting the widths of the top and bottom edges of the anode/cathode flow channels as independent variables within a constrained range to optimize HT-PEMFC performance.

22 citations

References
Book
01 Jan 2002

17,039 citations

Journal ArticleDOI
TL;DR: Optimization results prove that the WOA algorithm is very competitive compared to state-of-the-art meta-heuristic algorithms as well as conventional methods.

7,090 citations

Proceedings ArticleDOI
05 Jul 1995
TL;DR: Culling is near optimal for this problem, highly noise tolerant, and the best known approach in some regimes, and new large deviation bounds on this submartingale enable the running time of the algorithm to be determined.
Abstract: We analyze the performance of a genetic-type algorithm we call Culling and a variety of other algorithms on a problem we refer to as ASP. Culling is near optimal for this problem, highly noise tolerant, and the best known approach in some regimes. We show that the problem of learning the Ising perceptron is reducible to noisy ASP. These results provide an example of a rigorous analysis of GAs and give insight into when and how GAs can beat competing methods. To analyze the genetic algorithm, we view it as a special type of submartingale. We prove some new large deviation bounds on this submartingale which enable us to determine the running time of the algorithm.

4,520 citations

Proceedings Article
04 Dec 2006
TL;DR: These experiments confirm the hypothesis that the greedy layer-wise unsupervised training strategy mostly helps the optimization, by initializing weights in a region near a good local minimum, giving rise to internal distributed representations that are high-level abstractions of the input, bringing better generalization.
Abstract: Complexity theory of circuits strongly suggests that deep architectures can be much more efficient (sometimes exponentially) than shallow architectures, in terms of computational elements required to represent some functions. Deep multi-layer neural networks have many levels of non-linearities allowing them to compactly represent highly non-linear and highly-varying functions. However, until recently it was not clear how to train such deep networks, since gradient-based optimization starting from random initialization appears to often get stuck in poor solutions. Hinton et al. recently introduced a greedy layer-wise unsupervised learning algorithm for Deep Belief Networks (DBN), a generative model with many layers of hidden causal variables. In the context of the above optimization problem, we study this algorithm empirically and explore variants to better understand its success and extend it to cases where the inputs are continuous or where the structure of the input distribution is not revealing enough about the variable to be predicted in a supervised task. Our experiments also confirm the hypothesis that the greedy layer-wise unsupervised training strategy mostly helps the optimization, by initializing weights in a region near a good local minimum, giving rise to internal distributed representations that are high-level abstractions of the input, bringing better generalization.

4,385 citations
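A hedged sketch of the greedy layer-wise idea from the abstract above, using simple autoencoders as a stand-in for the RBM layers of a Deep Belief Network: each layer is trained unsupervised to reconstruct the output of the layers below it, and the resulting weights initialize the deep network before supervised fine-tuning.

```python
import torch
import torch.nn as nn

# Hedged sketch of greedy layer-wise pretraining.  The paper studies RBM-based
# Deep Belief Networks and variants; here a simpler autoencoder stand-in is
# used for each layer, purely to illustrate "train one layer at a time on the
# previous layer's output, then fine-tune the stack".  Sizes and data are
# placeholders.

sizes = [784, 256, 64]                      # input dim and two hidden layers
layers = [nn.Linear(sizes[i], sizes[i + 1]) for i in range(len(sizes) - 1)]
data = torch.rand(512, sizes[0])            # placeholder unlabeled inputs

h = data
for enc in layers:                          # pretrain the layers one at a time
    dec = nn.Linear(enc.out_features, enc.in_features)
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
    for _ in range(100):                    # reconstruct this layer's input
        opt.zero_grad()
        loss = nn.functional.mse_loss(dec(torch.sigmoid(enc(h))), h)
        loss.backward()
        opt.step()
    h = torch.sigmoid(enc(h)).detach()      # feed the learned code to the next layer

# the pretrained encoders now initialize a deep network that is fine-tuned
# with ordinary supervised backpropagation on labeled data
pretrained = nn.Sequential(*[nn.Sequential(l, nn.Sigmoid()) for l in layers])
```

The design point the abstract emphasizes is that this pretraining mainly helps the optimization: supervised training starts near a good local minimum instead of from a random initialization.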

Journal ArticleDOI
TL;DR: In this paper, a mathematical model has been formulated for the performance and operation of a single polymer electrolyte membrane fuel cell, which incorporates all the essential fundamental physical and electrochemical processes occurring in the membrane electrolyte, cathode catalyst layer, electrode backing and flow channel.

401 citations
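Many of the parameter-identification papers listed on this page fit the same kind of semi-empirical single-cell polarization model to measured data. A hedged sketch of that widely used Amphlett-style formulation follows; exact coefficients, symbols, and the set of identified unknowns vary between papers, and the limiting current density used here is an assumed placeholder.

```python
import numpy as np

# Hedged sketch of the semi-empirical single-cell polarization model commonly
# fitted in the PEMFC parameter-identification literature:
#   V_cell = E_Nernst - eta_act - eta_ohmic - eta_conc
# The unknowns typically identified are xi1..xi4, R_c, lambda and B.
# Coefficients follow the widely cited Amphlett-style form; details differ
# between papers.

def cell_voltage(i, T, p_h2, p_o2, xi, r_c, lam, b, area=50.0, ell=0.0178):
    """i [A], T [K], partial pressures [atm], area [cm^2], membrane thickness [cm]."""
    e_nernst = (1.229 - 0.85e-3 * (T - 298.15)
                + 4.3085e-5 * T * (np.log(p_h2) + 0.5 * np.log(p_o2)))
    c_o2 = p_o2 / (5.08e6 * np.exp(-498.0 / T))             # dissolved O2 concentration
    eta_act = -(xi[0] + xi[1] * T + xi[2] * T * np.log(c_o2) + xi[3] * T * np.log(i))
    j = i / area                                             # current density [A/cm^2]
    rho_m = (181.6 * (1 + 0.03 * j + 0.062 * (T / 303.0) ** 2 * j ** 2.5)
             / ((lam - 0.634 - 3.0 * j) * np.exp(4.18 * (T - 303.0) / T)))
    eta_ohmic = i * (rho_m * ell / area + r_c)               # membrane + contact resistance
    eta_conc = -b * np.log(1.0 - j / 1.5)                    # 1.5 A/cm^2 as an assumed j_max
    return e_nernst - eta_act - eta_ohmic - eta_conc
```

Parameter identification then amounts to choosing the unknowns so that this voltage matches the measured polarization curve, typically by minimizing the sum of squared errors as in the objectives above.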