Learning representations by back-propagating errors
Citations
7,767 citations
Cites background or methods from "Learning representations by back-pr..."
...The idea of distributed representation is an old idea in machine learning and neural networks research (Hinton, 1986; Rumelhart et al., 1986a; Miikkulainen & Dyer, 1991; Bengio, Ducharme, & Vincent, 2001; Schwenk & Gauvain, 2002), and it may be of help in dealing with the curse of dimensionality and...
[...]
...Stacked Auto-Encoders) exploit as component or monitoring device a particular type of neural network: the auto-encoder, also called auto-associator, or Diabolo network (Rumelhart et al., 1986b; Bourlard & Kamp, 1988; Hinton & Zemel, 1994; Schwenk & Milgram, 1995; Japkowicz, Hanson, & Gluck, 2000)...
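The excerpt above refers to the auto-encoder: a network trained to reconstruct its own input through a narrower hidden layer. A minimal sketch of the idea, assuming a single tanh hidden layer with tied weights (W to encode, W.T to decode) and illustrative layer sizes; the finite-difference gradient step stands in for back-propagation purely to keep the sketch short:

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden = 8, 3          # bottleneck narrower than the input (the "Diabolo" shape)
W = rng.normal(scale=0.1, size=(n_hidden, n_in))
b_enc = np.zeros(n_hidden)
b_dec = np.zeros(n_in)

def encode(x):
    # Affine map followed by a non-linearity.
    return np.tanh(W @ x + b_enc)

def decode(h):
    # Linear decoder with tied weights, for real-valued inputs.
    return W.T @ h + b_dec

def reconstruction_error(x):
    return float(np.sum((decode(encode(x)) - x) ** 2))

def train_step(x, lr=0.05, eps=1e-5):
    # One gradient step on the squared reconstruction error, estimated by
    # central finite differences (back-propagation would compute the same
    # gradient analytically and far more cheaply).
    global W
    grad = np.zeros_like(W)
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            W[i, j] += eps
            e_plus = reconstruction_error(x)
            W[i, j] -= 2 * eps
            e_minus = reconstruction_error(x)
            W[i, j] += eps
            grad[i, j] = (e_plus - e_minus) / (2 * eps)
    W -= lr * grad

x = rng.normal(size=n_in)
before = reconstruction_error(x)
for _ in range(50):
    train_step(x)
after = reconstruction_error(x)
```

Training drives the reconstruction error down, so `after` should be smaller than `before`; with the bottleneck narrower than the input, the error generally cannot reach zero, which is what forces a compressed representation.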
[...]
...• When we put artificial neurons (affine transformation followed by a non-linearity) in our set of elements, we obtain ordinary multi-layer neural networks (Rumelhart et al., 1986b)....
[...]
...4.1 Multi-Layer Neural Networks A typical set of equations for multi-layer neural networks (Rumelhart et al., 1986b) is the following....
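The excerpt cuts off before the equations it announces. The usual form, consistent with the neuron definition quoted earlier (an affine transformation followed by a non-linearity), is h_k = tanh(b_k + W_k h_{k-1}); a minimal sketch with illustrative layer sizes, not taken from the excerpt:

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer widths: input, one hidden layer, output (illustrative values).
sizes = [4, 5, 3]
weights = [rng.normal(scale=0.5, size=(m, n))
           for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]

def forward(x):
    # Repeatedly apply an affine transformation followed by tanh:
    # h_k = tanh(b_k + W_k h_{k-1}), with h_0 = x.
    h = x
    for W, b in zip(weights, biases):
        h = np.tanh(b + W @ h)
    return h

x = rng.normal(size=sizes[0])
y = forward(x)
```

Back-propagation, the subject of the cited paper, computes the gradient of a loss on `y` with respect to every `W` and `b` by applying the chain rule backwards through this same composition of layers.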
[...]
...In the realm of supervised learning, multi-layer neural networks (Rumelhart et al., 1986a, 1986b) and in the realm of unsupervised learning, Boltzmann machines (Ackley, Hinton, & Sejnowski, 1985) have been introduced with the goal of learning distributed internal representations in the hidden...
[...]