Journal ArticleDOI
Learning representations by back-propagating errors
TL;DR: Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector, which leads internal hidden units to represent important features of the task domain.

Abstract:
We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure [1].
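The procedure the abstract describes can be sketched in plain Python. This is an illustrative toy, not the paper's implementation: the network size, sigmoid units, learning rate, and the XOR task are all assumptions chosen because XOR is not linearly separable, so the hidden units must invent useful features for the error to fall.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class TinyNet:
    """One hidden layer, trained by back-propagating the squared error."""

    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = random.Random(seed)
        # +1 weight per unit acts as a bias (input is extended with 1.0)
        self.w1 = [[rng.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_hidden)]
        self.w2 = [[rng.uniform(-1, 1) for _ in range(n_hidden + 1)] for _ in range(n_out)]

    def forward(self, x):
        self.h = [sigmoid(sum(w * v for w, v in zip(row, x + [1.0]))) for row in self.w1]
        self.y = [sigmoid(sum(w * v for w, v in zip(row, self.h + [1.0]))) for row in self.w2]
        return self.y

    def backward(self, x, target, lr=0.5):
        # Output deltas: dE/dnet = (y - t) * y * (1 - y) for squared error + sigmoid
        d_out = [(y - t) * y * (1 - y) for y, t in zip(self.y, target)]
        # Hidden deltas: propagate the output error backwards through w2
        d_hid = [self.h[j] * (1 - self.h[j]) *
                 sum(d_out[k] * self.w2[k][j] for k in range(len(d_out)))
                 for j in range(len(self.h))]
        # Gradient-descent weight adjustments
        for k, dk in enumerate(d_out):
            for j, hj in enumerate(self.h + [1.0]):
                self.w2[k][j] -= lr * dk * hj
        for j, dj in enumerate(d_hid):
            for i, xi in enumerate(x + [1.0]):
                self.w1[j][i] -= lr * dj * xi

def mean_sq_error(net, data):
    return sum((net.forward(x)[0] - t[0]) ** 2 for x, t in data) / len(data)

# XOR: the desired output is 1 exactly when the two inputs differ
data = [([0.0, 0.0], [0.0]), ([0.0, 1.0], [1.0]),
        ([1.0, 0.0], [1.0]), ([1.0, 1.0], [0.0])]
net = TinyNet(2, 3, 1)
before = mean_sq_error(net, data)
for _ in range(5000):
    for x, t in data:
        net.forward(x)
        net.backward(x, t)
after = mean_sq_error(net, data)
```

Repeatedly adjusting the weights drives the error measure down, which is the whole content of the procedure; the hidden units end up encoding features (such as "both inputs on") that no single input carries.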
Citations
Journal ArticleDOI
A novel deep learning method based on attention mechanism for bearing remaining useful life prediction
TL;DR: A recurrent neural network based on an encoder–decoder framework with an attention mechanism is proposed to predict health-indicator (HI) values, which are designed to be closely related to the remaining-useful-life (RUL) values in this paper.
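The encoder–decoder architecture and HI design are specific to the paper, but the attention computation such models build on can be sketched generically. A minimal scaled dot-product attention in plain Python; the query, key, and value vectors below are purely illustrative assumptions:

```python
import math

def softmax(xs):
    m = max(xs)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention(query, keys, values):
    """Scaled dot-product attention: weight each value by how well its key
    matches the query, then return the weighted average of the values."""
    d = len(query)
    scores = [dot(query, k) / math.sqrt(d) for k in keys]
    weights = softmax(scores)
    context = [sum(w * v[i] for w, v in zip(weights, values))
               for i in range(len(values[0]))]
    return context, weights

# Toy example: the query aligns most with the second key,
# so the context vector leans towards the second value.
keys = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
values = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
context, weights = attention([0.0, 1.0], keys, values)
```

In an RUL setting the values would be encoder hidden states over past sensor readings, and attention lets the decoder focus on the degradation-relevant time steps rather than weighting the whole history equally.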
Journal ArticleDOI
Fundamentals, materials, and machine learning of polymer electrolyte membrane fuel cell technology
TL;DR: In this article, the authors present the most recent status of polymer electrolyte membrane (PEM) fuel cell applications in the portable, stationary, and transportation sectors and describe the important fundamentals for the further advancement of fuel cell technology in terms of design and control optimization, cost reduction, and durability improvement.
Journal ArticleDOI
The Future of Sensitivity Analysis: An essential discipline for systems modeling and policy support
Saman Razavi, Anthony Jakeman, Andrea Saltelli, Clémentine Prieur, Bertrand Iooss, Emanuele Borgonovo, Elmar Plischke, Samuele Lo Piano, Takuya Iwanaga, William E. Becker, Stefano Tarantola, Joseph H. A. Guillaume, John D. Jakeman, Hoshin V. Gupta, Nicola Melillo, Giovanni Rabitti, Vincent Chabridon, Qingyun Duan, Xifu Sun, Stefan Smith, R. Sheikholeslami, Nasim Hosseini, Masoud Asadzadeh, Arnald Puy, Sergei Kucherenko, Holger R. Maier +27 more
TL;DR: A multidisciplinary group of researchers and practitioners revisits the current status of sensitivity analysis and outlines research challenges with regard to both theoretical frameworks and their application to solving real-world problems.
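As a concrete taste of variance-based sensitivity analysis, one of the main frameworks such surveys cover, here is a brute-force Monte Carlo estimate of first-order Sobol' indices. The two-input linear model and the sample sizes are assumptions for illustration only, chosen so the indices are known analytically:

```python
import random

def model(x1, x2):
    # Toy model: the output depends far more strongly on x1 than on x2.
    return 4.0 * x1 + x2

def variance(vs):
    m = sum(vs) / len(vs)
    return sum((v - m) ** 2 for v in vs) / len(vs)

def first_order_index(f, which, n_outer=400, n_inner=400, seed=1):
    """Brute-force estimate of the first-order Sobol' index
    S_i = Var(E[Y | X_i]) / Var(Y), inputs independent uniform on [0, 1]."""
    rng = random.Random(seed)
    cond_means, all_y = [], []
    for _ in range(n_outer):
        fixed = rng.random()            # freeze X_i at this value
        ys = []
        for _ in range(n_inner):
            other = rng.random()        # resample the other input
            ys.append(f(fixed, other) if which == 0 else f(other, fixed))
        cond_means.append(sum(ys) / n_inner)   # estimate of E[Y | X_i = fixed]
        all_y.extend(ys)
    return variance(cond_means) / variance(all_y)

s1 = first_order_index(model, 0)
s2 = first_order_index(model, 1)
# Analytically S1 = 16/17 ≈ 0.94 and S2 = 1/17 ≈ 0.06 for this model.
```

The estimate confirms that x1 accounts for almost all of the output variance; real applications replace the double loop with more efficient estimators, but the quantity being estimated is the same.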
Journal ArticleDOI
Dimension Reduction With Extreme Learning Machine
TL;DR: This paper introduces a dimension-reduction framework which to some extent represents data as parts, has fast learning speed, and learns the between-class scatter subspace; experimental results show the efficacy of linear and non-linear ELM-AE and SELM-AE in terms of discriminative capability, sparsity, training time, and normalized mean squared error.
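A rough sketch of the extreme-learning-machine autoencoder idea underlying such work, not the paper's actual method: the input-to-hidden weights stay random and untrained, and only the hidden-to-output weights are solved in closed form, which is where ELM gets its training speed. The dimensions, toy data, and ridge term below are illustrative assumptions:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def solve(A, R):
    """Gauss-Jordan elimination solving A @ B = R (A square, R a matrix)."""
    n = len(A)
    M = [A[i][:] + R[i][:] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
        M[col] = [v / M[col][col] for v in M[col]]
    return [row[n:] for row in M]

def elm_ae(X, n_hidden, ridge=1e-3, seed=0):
    """ELM autoencoder sketch: random hidden layer, analytic readout."""
    rng = random.Random(seed)
    n_in = len(X[0])
    W = [[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
    b = [rng.uniform(-1, 1) for _ in range(n_hidden)]
    # Low-dimensional code: random projection through a sigmoid,
    # plus a constant 1.0 acting as a bias for the linear readout.
    H = [[sigmoid(sum(w * x for w, x in zip(W[j], row)) + b[j])
          for j in range(n_hidden)] + [1.0] for row in X]
    m = n_hidden + 1
    # Normal equations (H^T H + ridge * I) B = H^T X for the readout B
    A = [[sum(h[i] * h[j] for h in H) + (ridge if i == j else 0.0)
          for j in range(m)] for i in range(m)]
    R = [[sum(H[s][i] * X[s][k] for s in range(len(X))) for k in range(n_in)]
         for i in range(m)]
    return H, solve(A, R)

# Toy 4-D data that really lives on a 2-D sheet, so 2 hidden units suffice
rng = random.Random(1)
X = []
for _ in range(40):
    a, b = rng.random(), rng.random()
    X.append([a, b, 0.5 * (a + b), 0.5 * (a - b) + 0.5])
H, B = elm_ae(X, n_hidden=2)
Xhat = [[sum(h[j] * B[j][k] for j in range(len(B))) for k in range(4)] for h in H]
```

Because no gradient descent is involved, training reduces to one random projection and one small linear solve; the 2-D code in `H` is the reduced representation, and `Xhat` is its reconstruction.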
Book ChapterDOI
Intrinsic Motivation and Reinforcement Learning
TL;DR: This chapter argues that the answer to both questions is assuredly “yes” and that the machine learning framework of reinforcement learning is particularly appropriate for bringing learning together with what in animals one would call motivation.
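One common way to make intrinsic motivation concrete inside the reinforcement-learning framework is a count-based novelty bonus; the sketch below is a generic illustration of that idea, not the chapter's own formalization, and the 1/sqrt(count) schedule is an assumption:

```python
import math
from collections import defaultdict

class CuriosityBonus:
    """Count-based intrinsic reward: novel states pay a bonus that fades
    as they become familiar, so the agent is 'motivated' to explore even
    when the environment pays no extrinsic reward."""

    def __init__(self, scale=1.0):
        self.counts = defaultdict(int)
        self.scale = scale

    def reward(self, state):
        self.counts[state] += 1
        return self.scale / math.sqrt(self.counts[state])

# The agent learns from extrinsic reward plus the bonus; repeated visits
# to the same state earn progressively less intrinsic reward.
bonus = CuriosityBonus()
r_new = bonus.reward("s0")    # first visit: full bonus
r_seen = bonus.reward("s0")   # second visit: smaller bonus
```

In a tabular Q-learning loop one would simply use `r + bonus.reward(next_state)` as the learning signal, keeping the rest of the algorithm unchanged, which is what makes the RL framework a natural host for motivation-like terms.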