Journal Article

Learning representations by back-propagating errors

TL;DR: Back-propagation repeatedly adjusts the weights of the connections in the network to minimize a measure of the difference between the actual and desired output vectors; as a result, internal 'hidden' units come to represent important features of the task domain.
Abstract
We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure [1].
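
To make the procedure concrete, here is a minimal numpy sketch of the idea the abstract describes: a network with one hidden layer is trained by repeatedly adjusting its weights to reduce the squared difference between actual and desired output vectors. The XOR task, layer sizes, learning rate, and sigmoid units are illustrative assumptions, not details taken from the paper.

```python
# Minimal back-propagation sketch on XOR (illustrative choices throughout).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)    # XOR: not linearly separable

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)     # input -> hidden
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)     # hidden -> output
lr = 1.0

for epoch in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the error derivative through each layer.
    d_out = (out - y) * out * (1 - out)             # dE/d(pre-activation), output layer
    d_h = (d_out @ W2.T) * h * (1 - h)              # dE/d(pre-activation), hidden layer
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # approaches [0, 1, 1, 0]
```

A network with no hidden layer cannot solve this task, which illustrates the abstract's point: the hidden units must come to represent the intermediate features the task requires.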


Citations
Journal Article

A survey on Deep Learning based bearing fault diagnosis

TL;DR: Three popular deep learning algorithms for bearing fault diagnosis, the autoencoder, the restricted Boltzmann machine, and the convolutional neural network, are briefly introduced, and their applications are reviewed through published research in the area.
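
One common pattern in the autoencoder strand of this literature is anomaly scoring: train an autoencoder on healthy vibration windows and flag windows whose reconstruction error is high. The sketch below, with synthetic signals and invented sizes, is an assumption about typical usage rather than a method taken from the survey itself.

```python
# Autoencoder anomaly scoring on synthetic "vibration" windows (illustrative).
import torch
import torch.nn as nn

torch.manual_seed(0)
healthy = torch.sin(torch.linspace(0, 100, 6400)).reshape(100, 64)  # 100 windows
faulty = healthy[:10] + 0.5 * torch.randn(10, 64)                   # noisy "fault" windows

ae = nn.Sequential(nn.Linear(64, 16), nn.ReLU(), nn.Linear(16, 64))
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(ae(healthy), healthy)  # reconstruct healthy data
    loss.backward()
    opt.step()

err = lambda x: ((ae(x) - x) ** 2).mean(dim=1)  # per-window reconstruction error
print(err(healthy).mean().item(), err(faulty).mean().item())  # faulty should score higher
```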
Posted Content

Named Entity Recognition with Bidirectional LSTM-CNNs

TL;DR: A novel neural network architecture is presented that automatically detects word- and character-level features using a hybrid bidirectional LSTM and CNN architecture, eliminating the need for most feature engineering.
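
A minimal PyTorch sketch of that hybrid architecture follows: a character-level CNN produces per-word features that are concatenated with word embeddings and fed to a bidirectional LSTM, yielding per-token tag scores. Vocabulary sizes and hyperparameters are illustrative assumptions, and the paper's lexicon features and decoding layer are omitted.

```python
# Hybrid BiLSTM-CNN tagger sketch (sizes and hyperparameters are assumptions).
import torch
import torch.nn as nn

class BiLSTMCNN(nn.Module):
    def __init__(self, n_words=10000, n_chars=100, n_tags=9,
                 word_dim=100, char_dim=25, char_filters=30, hidden=200):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, word_dim)
        self.char_emb = nn.Embedding(n_chars, char_dim)
        # Character-level CNN: convolve over each word's character sequence.
        self.char_cnn = nn.Conv1d(char_dim, char_filters, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(word_dim + char_filters, hidden,
                            batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_tags)

    def forward(self, words, chars):
        # words: (batch, seq_len); chars: (batch, seq_len, word_len)
        b, s, w = chars.shape
        c = self.char_emb(chars.view(b * s, w)).transpose(1, 2)  # (b*s, char_dim, w)
        c = torch.relu(self.char_cnn(c)).max(dim=2).values       # max-pool over characters
        x = torch.cat([self.word_emb(words), c.view(b, s, -1)], dim=2)
        h, _ = self.lstm(x)
        return self.out(h)                                       # per-token tag scores

model = BiLSTMCNN()
scores = model(torch.randint(0, 10000, (2, 7)), torch.randint(0, 100, (2, 7, 12)))
print(scores.shape)  # torch.Size([2, 7, 9])
```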
Patent

System and method for detecting fraudulent transactions

TL;DR: In this patent, a system for detecting fraudulent transactions is disclosed in which a first score is computed from the propensity of the transacted commodity to be involved in fraud, and a second score is computed as a function of the authentication of the remaining parameters of the transaction.
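
As a toy illustration of that two-score scheme, the sketch below combines a commodity-propensity score with an authentication score. The lookup table, equal weighting, and numbers are invented purely for illustration and are not taken from the patent.

```python
# Toy two-score fraud risk (all values and weights are invented assumptions).
def fraud_score(commodity, params_verified, params_total,
                propensity={"gift_card": 0.9, "electronics": 0.6, "groceries": 0.1}):
    s1 = propensity.get(commodity, 0.5)        # first score: commodity fraud propensity
    s2 = 1.0 - params_verified / params_total  # second score: weak authentication -> risk
    return 0.5 * s1 + 0.5 * s2                 # combined risk (equal weights assumed)

print(fraud_score("gift_card", params_verified=2, params_total=5))  # 0.75
```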
Journal Article

Learning in spiking neural networks by reinforcement of stochastic synaptic transmission.

TL;DR: The paper considers the hypothesis that the brain harnesses the randomness of synaptic transmission for learning, in analogy to the way Darwinian evolution exploits genetic mutation.
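
A minimal numpy sketch in the spirit of that hypothesis: each synapse transmits stochastically, and a global reward signal nudges release probabilities toward patterns that preceded reward (a REINFORCE-style rule in the spirit of Seung's "hedonistic synapse"). The task, learning rate, and reward definition are illustrative assumptions.

```python
# Reward-modulated stochastic release, toy version (all specifics assumed).
import numpy as np

rng = np.random.default_rng(0)
p = np.full(4, 0.5)               # release probabilities of 4 synapses
target = np.array([1, 0, 1, 1])   # release pattern that yields reward
lr = 0.1

for step in range(2000):
    release = (rng.random(4) < p).astype(float)            # stochastic transmission
    reward = 1.0 if np.array_equal(release, target) else 0.0
    p += lr * reward * (release - p)                       # reinforce rewarded releases
    p = np.clip(p, 0.01, 0.99)

print(np.round(p, 2))  # probabilities drift toward the rewarded pattern
```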
Journal Article

Stochastic Configuration Networks: Fundamentals and Algorithms

TL;DR: In this paper, the authors propose stochastic configuration networks (SCNs), in which the input weights and biases of hidden nodes are randomly assigned under a supervisory mechanism and the output weights are evaluated analytically, in either a constructive or a selective manner.
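
A simplified numpy sketch of the constructive variant described above: candidate hidden nodes get random input weights and biases, the supervisory mechanism is approximated here by keeping the candidate that explains the most residual energy, and the output weights are refit analytically by least squares after each addition. The candidate count, node budget, and acceptance rule are simplified assumptions, not the paper's exact inequality condition.

```python
# Simplified constructive SCN-style regression (acceptance rule is an assumption).
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 1))
y = np.sin(3 * X[:, 0])                      # toy regression target

H = np.empty((200, 0))                       # hidden-layer outputs so far
e = y.copy()                                 # current residual
for _ in range(25):                          # add up to 25 hidden nodes
    best, best_val = None, 0.0
    for _ in range(50):                      # draw random candidate nodes
        w, b = rng.uniform(-1, 1, 1), rng.uniform(-1, 1)
        h = np.tanh(X @ w + b)
        val = (h @ e) ** 2 / (h @ h)         # residual energy this node can explain
        if val > best_val:
            best, best_val = h, val
    H = np.column_stack([H, best])           # accept the best candidate
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # analytic output weights
    e = y - H @ beta                         # update residual

print(f"RMSE: {np.sqrt(np.mean(e**2)):.4f}")
```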