Journal ArticleDOI

Representational learning with ELMs for big data

TLDR
This paper proposes ELM-AE, a special case of the extreme learning machine (ELM) in which the input is equal to the output and the randomly generated weights are chosen to be orthogonal.
Abstract
Geoffrey Hinton and Pascal Vincent showed that restricted Boltzmann machines (RBMs) and auto-encoders (AEs) can be used for feature engineering; the engineered features can then be used to train multi-layer neural networks, or deep networks. Two types of deep networks based on the RBM exist: the deep belief network (DBN) and the deep Boltzmann machine (DBM). Guang-Bin Huang and colleagues introduced the extreme learning machine (ELM) as a single-hidden-layer feed-forward neural network (SLFN) with a fast learning speed and good generalization capability. ELM theory shows that the hidden nodes of an SLFN can be randomly generated. ELM-AE output weights can be determined analytically, unlike those of RBMs and traditional auto-encoders, which require iterative algorithms. ELM-AE can be seen as a special case of ELM in which the input is equal to the output and the randomly generated weights are chosen to be orthogonal.
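The ELM-AE recipe the abstract describes — random but orthogonalized hidden-layer weights, the input reused as its own target, and output weights obtained in closed form — can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the authors' implementation: the function name `elm_ae`, the sigmoid activation, and the ridge term `reg` are choices made for the sketch.

```python
import numpy as np

def elm_ae(X, n_hidden, reg=1e-3, rng=None):
    """ELM auto-encoder sketch: random orthogonal hidden weights,
    output weights solved analytically with the input as the target."""
    rng = np.random.default_rng(rng)
    n_features = X.shape[1]
    # Random input weights, orthogonalized via QR
    # (orthonormal columns require n_hidden <= n_features)
    W, _ = np.linalg.qr(rng.standard_normal((n_features, n_hidden)))
    b = rng.standard_normal(n_hidden)
    b /= np.linalg.norm(b)                 # normalized random bias
    # Hidden activations (sigmoid is an assumption of this sketch)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    # Analytic output weights: beta = (H^T H + reg I)^-1 H^T X
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ X)
    return W, b, beta
```

Reconstruction is `sigmoid(X @ W + b) @ beta`; the single closed-form solve replaces the iterative training an ordinary auto-encoder would need.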


Citations
Journal ArticleDOI

Application of extreme learning machine to gas flow measurement with multipath acoustic transducers

TL;DR: This paper proposes applying the extreme learning machine to multipath ultrasonic flowmeters. Because the ELM determines the network output weights analytically, rather than through the error backpropagation algorithm and iterative parameter tuning, it provides high metering accuracy at extremely fast learning speed while requiring minimal human intervention.
Journal ArticleDOI

Deep and Wide Feature based Extreme Learning Machine for Image Classification

TL;DR: An extensive experimental study showing that, when combined with an ELM serving as the classifier, wide ResNets (WRNs) used for feature extraction produce a performance leap on all benchmark datasets compared with a plain end-to-end trained network, over a wide range of architecture choices and ELM designs, whereas standard ResNets used as feature extractors provide no performance gain.
Journal ArticleDOI

Extreme learning machine with multi-scale local receptive fields for texture classification

TL;DR: This paper proposes extreme learning machine with multi-scale local receptive fields (ELM-MSLRF), a method that performs feature learning and classification simultaneously for texture classification; it is fast and requires few computations.
Journal ArticleDOI

Adaptive backstepping control for magnetic bearing system via feedforward networks with random hidden nodes

TL;DR: To relax the online computation burden of the ABNC, a simplified ABNC with fewer parameters to be adjusted online is proposed to improve the control performance; the simulation results demonstrate that the proposed ABNC and Simpl_ABNC achieve better tracking performance compared with other controllers.
Proceedings ArticleDOI

Deep Random Vector Functional Link Network for handwritten character recognition

TL;DR: This paper evaluates the performance of multi-layer Random Vector Functional Link networks (RVFL) / extreme learning machines (ELM) on four databases of handwritten characters, along with the impact of the architecture (number of neurons per hidden layer) and the robustness of the results (their distribution across different runs).
References
Journal ArticleDOI

Extreme learning machine: Theory and applications

TL;DR: A new learning algorithm called the extreme learning machine (ELM) is proposed for single-hidden-layer feedforward neural networks (SLFNs); it randomly chooses hidden nodes and analytically determines the output weights, and tends to provide good generalization performance at extremely fast learning speed.
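The training procedure this summary describes — random, fixed hidden nodes and output weights from a single linear solve — is short enough to sketch directly. A minimal NumPy version follows; the tanh activation, the use of the Moore-Penrose pseudoinverse, and the function names are assumptions of the sketch, not the paper's exact implementation.

```python
import numpy as np

def elm_train(X, T, n_hidden, rng=None):
    """Basic ELM: hidden-layer parameters are random and fixed;
    only the output weights are learned, by least squares."""
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                     # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

No iteration over the training set occurs: the only "training" step is the pseudoinverse solve, which is why learning is fast relative to backpropagation.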
Journal ArticleDOI

Extreme Learning Machine for Regression and Multiclass Classification

TL;DR: ELM provides a unified learning platform with a wide range of feature mappings and can be applied directly to regression and multiclass classification; in theory, ELM can approximate any target continuous function and classify any disjoint regions.
Journal ArticleDOI

Universal approximation using incremental constructive feedforward networks with random hidden nodes

TL;DR: This paper proves, by an incremental constructive method, that for SLFNs to work as universal approximators one may simply choose hidden nodes at random and then need only adjust the output weights linking the hidden layer to the output layer.
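The incremental constructive scheme this summary refers to can be sketched as: add hidden nodes one at a time with random parameters, and compute only the new node's output weight from the current residual error. A hedged NumPy sketch of this I-ELM-style procedure for a single output; the function names and the tanh activation are assumptions made here.

```python
import numpy as np

def ielm_fit(X, t, max_nodes, rng=None):
    """Incremental constructive ELM sketch: each new random hidden node
    gets the output weight that best fits the current residual."""
    rng = np.random.default_rng(rng)
    e = t.astype(float).copy()       # residual error, starts at the target
    nodes = []
    for _ in range(max_nodes):
        w = rng.standard_normal(X.shape[1])  # random node parameters
        b = rng.standard_normal()
        h = np.tanh(X @ w + b)               # new node's activations
        beta = (e @ h) / (h @ h)             # least-squares weight for this node alone
        e = e - beta * h                     # residual can only shrink
        nodes.append((w, b, beta))
    return nodes

def ielm_predict(X, nodes):
    return sum(beta * np.tanh(X @ w + b) for w, b, beta in nodes)
```

Because each weight minimizes the residual along the new node's direction, the training error is non-increasing as nodes are added — the mechanism behind the universal-approximation result.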
Journal ArticleDOI

Optimization method based extreme learning machine for classification

TL;DR: Under the ELM learning framework, the SVM's maximal-margin property and the minimal-norm-of-weights theory of feedforward neural networks are actually consistent, and ELM for classification tends to achieve better generalization performance than the traditional SVM.