Journal ArticleDOI

Representational learning with ELMs for big data

TLDR
In this paper, Huang et al. propose ELM-AE, a special case of ELM in which the input is equal to the output and the randomly generated weights are chosen to be orthogonal.
Abstract
Geoffrey Hinton and Pascal Vincent showed that a restricted Boltzmann machine (RBM) and auto-encoders (AE) could be used for feature engineering. These engineered features could then be used to train multiple-layer neural networks, or deep networks. Two types of deep networks based on the RBM exist: the deep belief network (DBN) and the deep Boltzmann machine (DBM). Guang-Bin Huang and colleagues introduced the extreme learning machine (ELM) as a single-layer feed-forward neural network (SLFN) with fast learning speed and good generalization capability. ELM theory for SLFNs shows that the hidden nodes can be randomly generated. ELM-AE output weights can be determined analytically, unlike those of RBMs and traditional auto-encoders, which require iterative algorithms. ELM-AE can be seen as a special case of ELM, where the input is equal to the output and the randomly generated weights are chosen to be orthogonal.
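To make the closed-form solution concrete, here is a minimal NumPy sketch of an ELM-AE under the assumptions stated in the abstract: orthogonal random hidden parameters, the input used as its own reconstruction target, and output weights solved analytically. Names such as elm_autoencoder and reg are illustrative, not from the paper.

```python
import numpy as np

def elm_autoencoder(X, n_hidden, reg=1e-3, seed=0):
    """Minimal ELM-AE sketch: random orthogonal hidden parameters,
    output weights solved in closed form with the input as the target."""
    rng = np.random.default_rng(seed)
    # Random hidden weights, orthogonalized via QR (assumes n_hidden <= n_features).
    W = rng.standard_normal((X.shape[1], n_hidden))
    Q, _ = np.linalg.qr(W)                 # orthonormal columns
    b = rng.standard_normal(n_hidden)
    b /= np.linalg.norm(b)                 # normalized random bias
    H = np.tanh(X @ Q + b)                 # random feature mapping
    # Ridge-regularized closed-form output weights; no iterative training.
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ X)
    return Q, b, beta
```

Reconstruction is then np.tanh(X @ Q + b) @ beta; in a stacked setting, the transpose of beta can serve as the feature-extraction weights of one layer, in the spirit of the paper's use of ELM-AE as a building block for deep networks.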


Citations
Journal ArticleDOI

Rapid Decoding of Hand Gestures in Electrocorticography Using Recurrent Neural Networks

TL;DR: This study proposes using recurrent neural networks (RNNs) to exploit the temporal information in ECoG signals for robust hand gesture decoding, and indicates that the temporal dynamics are especially informative for effective and rapid decoding of hand gestures.
Journal ArticleDOI

Discriminative manifold extreme learning machine and applications to image and EEG signal classification

TL;DR: The results show that DMELM consistently achieves better performance than the original ELM and yields promising results in comparison with several state-of-the-art algorithms, suggesting that both discriminative and manifold information are beneficial to classification.
Journal ArticleDOI

Towards a more efficient and cost-sensitive extreme learning machine: A state-of-the-art review of recent trend

TL;DR: This review discusses the major drawbacks of ELM, which include difficulty in determining the hidden-layer structure, prediction instability, imbalanced data distributions, poor sample structure preserving (SSP) capability, and difficulty in accommodating lateral inhibition by direct random feature mapping.
Journal ArticleDOI

A deep stacked random vector functional link network autoencoder for diagnosis of brain abnormalities and breast cancer

TL;DR: A deep neural network termed stacked random vector functional link (RVFL) based autoencoder (SRVFL-AE) is proposed to detect multiclass brain abnormalities; the rectified linear unit (ReLU) activation function is incorporated in the proposed deep network to provide a fast and better hidden representation of the input features.
Proceedings ArticleDOI

A Structured Committee for Food Recognition

TL;DR: A committee-based recognition system that chooses the optimal features out of the plethora of available ones, and outperforms state-of-the-art works on the three most used publicly available benchmark datasets.
References
Journal ArticleDOI

Extreme learning machine: Theory and applications

TL;DR: A new learning algorithm called ELM is proposed for single-hidden-layer feedforward neural networks (SLFNs); it randomly chooses hidden nodes and analytically determines the output weights, and tends to provide good generalization performance at extremely fast learning speed.
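As a companion to the ELM-AE sketch above, here is a minimal sketch of the basic ELM this reference describes: a random hidden layer followed by an analytic solution for the output weights. Function names such as elm_fit and elm_predict, and the ridge term reg, are illustrative assumptions.

```python
import numpy as np

def elm_fit(X, y, n_hidden, reg=1e-3, seed=0):
    """Basic ELM sketch: random hidden nodes, closed-form output weights."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights
    b = rng.standard_normal(n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                            # hidden-layer output matrix
    # Output weights via regularized least squares; no backpropagation.
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

For regression, y is a real-valued target; for classification, a one-hot target matrix with an argmax over the prediction gives the class label.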
Journal ArticleDOI

Extreme Learning Machine for Regression and Multiclass Classification

TL;DR: ELM provides a unified learning platform with a widespread type of feature mappings and can be applied directly in regression and multiclass classification applications; in theory, ELM can approximate any target continuous function and classify any disjoint regions.
Journal ArticleDOI

Universal approximation using incremental constructive feedforward networks with random hidden nodes

TL;DR: This paper proves, via an incremental constructive method, that for SLFNs to work as universal approximators, one may simply choose hidden nodes at random and then only needs to adjust the output weights linking the hidden layer and the output layer.
Journal ArticleDOI

Optimization method based extreme learning machine for classification

TL;DR: Under the ELM learning framework, SVM's maximal margin property and the minimal-norm-of-weights theory of feedforward neural networks are actually consistent, and ELM for classification tends to achieve better generalization performance than traditional SVM.