
Learning representations by back-propagating errors

TLDR
Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector; as a result, internal 'hidden' units come to represent important features of the task domain.
Abstract
We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure¹.
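The procedure the abstract describes can be illustrated with a short sketch. The network size, sigmoid units, squared-error measure, learning rate, and XOR task below are illustrative assumptions rather than details taken from the paper; the point is only the forward pass, the backward propagation of the error through the hidden layer, and the repeated weight adjustment.

```python
import numpy as np

# Minimal sketch of back-propagation with one hidden layer (illustrative only).
# E = 0.5 * sum((Y - D)^2) is the error measure; weights move down its gradient.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy task (assumed for illustration): XOR, which cannot be solved without hidden units.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # input vectors
D = np.array([[0], [1], [1], [0]], dtype=float)                # desired output vectors

W1 = rng.normal(scale=1.0, size=(2, 4)); b1 = np.zeros(4)      # input -> hidden weights
W2 = rng.normal(scale=1.0, size=(4, 1)); b2 = np.zeros(1)      # hidden -> output weights
lr = 0.5

for epoch in range(20000):
    # Forward pass: hidden units form an internal representation of the input.
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)

    # Backward pass: propagate the output error back through the layers.
    err_out = (Y - D) * Y * (1 - Y)            # dE/d(net input) at the output units
    err_hid = (err_out @ W2.T) * H * (1 - H)   # dE/d(net input) at the hidden units

    # Repeatedly adjust the weights so as to reduce the error measure.
    W2 -= lr * H.T @ err_out
    b2 -= lr * err_out.sum(axis=0)
    W1 -= lr * X.T @ err_hid
    b1 -= lr * err_hid.sum(axis=0)

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```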


Citations

Bayesian Methods for Adaptive Models

TL;DR: The Bayesian framework for model comparison and regularisation is demonstrated by studying interpolation and classification problems modelled with both linear and non-linear models, and it is shown that the careful incorporation of error bar information into a classifier’s predictions yields improved performance.
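One idea this TL;DR highlights, folding error-bar information into a classifier's predictions, can be sketched with a Laplace approximation to the weight posterior and a moderated output. The toy data, Gaussian prior precision, and plain gradient-descent optimiser below are assumptions made for illustration, not the paper's own experiments.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Toy 2-D data (assumed): two overlapping Gaussian classes, plus a bias column.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
X = np.hstack([X, np.ones((100, 1))])
t = np.concatenate([np.zeros(50), np.ones(50)])

alpha = 1.0                                    # precision of the Gaussian weight prior
w = np.zeros(3)

# Find the most probable weights by gradient descent on the regularised log loss.
for _ in range(500):
    y = sigmoid(X @ w)
    grad = X.T @ (y - t) + alpha * w
    w -= 0.1 * grad / len(t)

# Laplace approximation: Hessian of the negative log posterior at the optimum.
y = sigmoid(X @ w)
A = (X * (y * (1 - y))[:, None]).T @ X + alpha * np.eye(3)
A_inv = np.linalg.inv(A)

def moderated_prediction(x):
    """Fold the 'error bar' on the activation into the predicted probability."""
    a = x @ w                                  # mean activation
    s2 = x @ A_inv @ x                         # activation variance under the posterior
    kappa = 1.0 / np.sqrt(1.0 + np.pi * s2 / 8.0)
    return sigmoid(kappa * a)                  # moderated, less over-confident output

x_new = np.array([2.0, 2.0, 1.0])
print(sigmoid(x_new @ w), moderated_prediction(x_new))
```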

Monte Carlo convolution for learning on non-uniformly sampled point clouds

TL;DR: MCCNN, as presented in this paper, represents the convolution kernel itself as a multilayer perceptron and phrases convolution as a Monte Carlo integration problem; this formulation is used to combine information from multiple samplings at different levels, with Poisson disk sampling as a scalable means of hierarchical point cloud learning.
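A rough sketch of that idea follows, assuming a single scalar feature channel, a small ReLU kernel MLP, and precomputed per-point density estimates; these are simplifications of the actual MCCNN pipeline, which also uses Poisson disk hierarchies and learned feature channels.

```python
import numpy as np

rng = np.random.default_rng(2)

# Tiny MLP kernel: maps a 3-D relative offset to a scalar convolution weight.
W1 = rng.normal(scale=0.5, size=(3, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

def kernel(offsets):
    h = np.maximum(0.0, offsets @ W1 + b1)     # ReLU hidden layer
    return (h @ W2 + b2).squeeze(-1)

def monte_carlo_conv(points, features, center, radius, density):
    """Estimate the convolution at `center` as a Monte Carlo average over
    neighbouring samples, each weighted by 1/density to correct for the
    non-uniform sampling of the underlying surface."""
    d = np.linalg.norm(points - center, axis=1)
    nbr = d < radius
    offsets = (points[nbr] - center) / radius  # normalised relative positions
    w = kernel(offsets)                        # learned kernel values
    contrib = w * features[nbr] / density[nbr]
    return contrib.mean() if nbr.any() else 0.0

# Toy non-uniformly sampled point cloud (assumed), one scalar feature per point.
points = rng.random((200, 3))
features = rng.random(200)
density = np.ones(200)                         # stand-in per-point density estimates
print(monte_carlo_conv(points, features, np.array([0.5, 0.5, 0.5]), 0.3, density))
```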

Deep Learning for Acoustic Modeling in Parametric Speech Generation: A systematic review of existing techniques and future trends

TL;DR: In this article, Hidden Markov Models (HMMs) and Gaussian Mixture Models (GMMs) are used for generating low-level speech waveforms from high-level symbolic inputs via intermediate acoustic feature sequences.

Monthly Rainfall Prediction Using Wavelet Neural Network Analysis

TL;DR: In this paper, an attempt has been made to find an alternative method for rainfall prediction by combining the wavelet technique with an Artificial Neural Network (ANN); the combined model is applied to monthly rainfall data from the Darjeeling rain gauge station.
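The general recipe, decomposing the series with a wavelet transform and feeding the components to a small ANN, might look roughly as follows. The synthetic rainfall series, one-level Haar decomposition, 12-month lag, and network sizes are assumptions for illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic monthly rainfall series (assumed): seasonal cycle plus noise, in mm.
months = np.arange(240)
rain = 100 + 80 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 15, 240)

def haar_components(x):
    """One-level Haar decomposition, upsampled back to the original length,
    giving a smooth approximation and a detail component."""
    x = x[: len(x) // 2 * 2]
    approx = (x[0::2] + x[1::2]) / 2.0
    detail = (x[0::2] - x[1::2]) / 2.0
    return np.repeat(approx, 2), np.repeat(detail, 2)

approx, detail = haar_components(rain)

# Inputs: the last 12 months of each component; target: next month's rainfall.
lag = 12
Xs, ys = [], []
for t in range(lag, len(approx) - 1):
    Xs.append(np.concatenate([approx[t - lag:t], detail[t - lag:t]]))
    ys.append(rain[t + 1])
X = np.array(Xs); y = np.array(ys)
X = (X - X.mean(0)) / X.std(0)                 # standardise the inputs
y = (y - y.mean()) / y.std()                   # standardise the target as well

# Small one-hidden-layer ANN trained with plain full-batch gradient descent.
W1 = rng.normal(scale=0.1, size=(X.shape[1], 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.1, size=(8, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)
    pred = (H @ W2 + b2).ravel()
    err = pred - y
    gW2 = H.T @ err[:, None] / len(y)
    gb2 = err.mean(keepdims=True)
    gH = err[:, None] @ W2.T * (1 - H ** 2)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * X.T @ gH / len(y); b1 -= lr * gH.mean(0)

print("final RMSE (standardised units):", np.sqrt(np.mean((pred - y) ** 2)))
```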

An intelligent-agent-based fuzzy group decision making model for financial multicriteria decision support: The case of credit scoring

TL;DR: A novel intelligent-agent-based fuzzy group decision making (GDM) model is proposed as an effective multicriteria decision analysis (MCDA) tool for credit risk evaluation.