Journal ArticleDOI

Representational learning with ELMs for big data

TLDR
Huang et al. propose ELM-AE, a special case of ELM in which the input is equal to the output and the randomly generated weights are chosen to be orthogonal.
Abstract
Geoffrey Hinton and Pascal Vincent showed that a restricted Boltzmann machine (RBM) and auto-encoders (AE) can be used for feature engineering. These engineered features can then be used to train multiple-layer neural networks, or deep networks. Two types of deep networks based on RBMs exist: the deep belief network (DBN) and the deep Boltzmann machine (DBM). Guang-Bin Huang and colleagues introduced the extreme learning machine (ELM) as a single-layer feed-forward neural network (SLFN) with fast learning speed and good generalization capability. ELM theory for SLFNs shows that the hidden nodes can be randomly generated. This article builds on that result with the ELM auto-encoder (ELM-AE), whose output weights can be determined analytically, unlike RBMs and traditional auto-encoders, which require iterative algorithms. ELM-AE can be seen as a special case of ELM in which the input is equal to the output and the randomly generated weights are chosen to be orthogonal.
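To make the last point concrete, below is a minimal NumPy sketch of an ELM auto-encoder under the abstract's description: orthogonal random hidden weights, the input reused as the target, and output weights obtained in closed form. The function name, activation, and regularization value are illustrative assumptions, not details taken from the paper.

```python
# Minimal ELM-AE sketch (assumed interface, not the authors' code):
# random orthogonal hidden weights, target equal to the input,
# output weights solved analytically rather than by iterative training.
import numpy as np

def elm_autoencoder(X, n_hidden, reg=1e-3, rng=np.random.default_rng(0)):
    """Return analytically determined output weights beta (n_hidden x n_features)."""
    n_samples, n_features = X.shape
    # Random weights, made orthogonal (via QR) as the abstract specifies.
    W = rng.standard_normal((n_features, n_hidden))
    Q, _ = np.linalg.qr(W)              # orthonormal columns when n_hidden <= n_features
    b = rng.standard_normal(n_hidden)
    b = b / np.linalg.norm(b)
    H = np.tanh(X @ Q + b)              # hidden-layer activations
    # Closed-form ridge solution with the input X itself as the target.
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ X)
    return beta

# Usage: project data onto the learned representation,
# e.g. to feed a subsequent (deeper) ELM layer.
X = np.random.default_rng(1).standard_normal((200, 50))
beta = elm_autoencoder(X, n_hidden=30)
features = X @ beta.T
```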


Citations
Journal ArticleDOI

Deep extreme learning machines

TL;DR: It is found that the method can correctly classify up to 99.19% of MNIST test images, which corresponds to a lower error rate than the best reported for standard 3-layer ELMs or previous deep ELM approaches applied to MNIST.
Journal ArticleDOI

Data-driven ship digital twin for estimating the speed loss caused by the marine fouling

TL;DR: In this paper, a data-driven digital twin of the ship is built, leveraging the large amount of information collected from on-board sensors, and is used to estimate the speed loss caused by marine fouling.
Journal ArticleDOI

Statistical Learning Theory and ELM for Big Social Data Analysis

TL;DR: This paper shows how to exploit the most recent technological tools and advances in Statistical Learning Theory (SLT) in order to efficiently build an Extreme Learning Machine (ELM) and assess the resultant model's performance when applied to big social data analysis.
Journal ArticleDOI

Learning word dependencies in text by means of a deep recurrent belief network

TL;DR: In this paper, a deep recurrent belief network with distributed time delays is proposed for learning multivariate Gaussians, in which dynamic Gaussian Bayesian networks over training samples are evolved using Markov chain Monte Carlo to determine the initial weights of each hidden layer of neurons.
Journal ArticleDOI

Prediction of Air Pollutants Concentration Based on an Extreme Learning Machine: The Case of Hong Kong.

TL;DR: This work proposes predicting the concentration of air pollutants with trained extreme learning machines, using six years of data on eight air quality parameters from two monitoring stations in Hong Kong, Sham Shui Po and Tap Mun.
References
Journal ArticleDOI

Extreme learning machine: Theory and applications

TL;DR: A new learning algorithm called ELM is proposed for single-hidden-layer feedforward neural networks (SLFNs); it randomly chooses hidden nodes, analytically determines the output weights, and tends to provide good generalization performance at extremely fast learning speed.
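The following is a minimal NumPy sketch of the basic ELM training procedure this summary describes: random hidden-node parameters plus a pseudo-inverse solution for the output weights. The function names, sigmoid activation, and uniform initialization range are assumptions for illustration, not the authors' reference code.

```python
# Basic ELM sketch: random hidden weights, output weights in one analytic step.
import numpy as np

def train_elm(X, T, n_hidden, rng=np.random.default_rng(0)):
    """Single-hidden-layer feedforward network trained without iteration."""
    W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))   # random input weights
    b = rng.uniform(-1.0, 1.0, size=n_hidden)                  # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))                     # sigmoid hidden layer
    beta = np.linalg.pinv(H) @ T                               # output weights via pseudo-inverse
    return W, b, beta

def predict_elm(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```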
Journal ArticleDOI

Extreme Learning Machine for Regression and Multiclass Classification

TL;DR: ELM provides a unified learning platform with a wide range of feature mappings and can be applied directly to regression and multiclass classification; in theory, ELM can approximate any target continuous function and classify any disjoint regions.
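For the unified regression/multiclass setting, the output weights are commonly written in the regularized closed form beta = (I/C + H^T H)^{-1} H^T T, with C the regularization trade-off parameter. The snippet below is a hedged sketch of that solution; the one-hot target encoding and argmax decision rule are assumptions for the multiclass case.

```python
# Regularized ELM output weights: beta = (I/C + H^T H)^{-1} H^T T.
import numpy as np

def elm_output_weights(H, T, C=10.0):
    """H: hidden-layer outputs (N x L); T: targets (N x m), one-hot for classification."""
    L = H.shape[1]
    return np.linalg.solve(np.eye(L) / C + H.T @ H, H.T @ T)

# Multiclass use: predictions take the argmax over columns of H @ beta.
```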
Journal ArticleDOI

Universal approximation using incremental constructive feedforward networks with random hidden nodes

TL;DR: This paper proves, via an incremental constructive method, that for SLFNs to work as universal approximators one may simply choose hidden nodes at random and then only adjust the output weights linking the hidden layer and the output layer.
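As a rough illustration of the incremental constructive idea, the sketch below adds one random hidden node at a time and fits only that node's output weight against the current residual error. The interface, activation, and stopping rule are assumptions, not the authors' reference implementation.

```python
# Incremental constructive ELM sketch (single-output regression).
import numpy as np

def incremental_elm(X, t, max_nodes=50, rng=np.random.default_rng(0)):
    residual = t.astype(float).copy()       # e_0 = target
    nodes = []
    for _ in range(max_nodes):
        w = rng.uniform(-1, 1, size=X.shape[1])
        b = rng.uniform(-1, 1)
        h = np.tanh(X @ w + b)               # output of the new random hidden node
        beta = (residual @ h) / (h @ h)      # output weight minimizing the residual
        residual -= beta * h                 # e_n = e_{n-1} - beta_n * h_n
        nodes.append((w, b, beta))
    return nodes
```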
Journal ArticleDOI

Optimization method based extreme learning machine for classification

TL;DR: Under the ELM learning framework, the SVM's maximal-margin property and the minimal-norm-of-weights theory of feedforward neural networks are shown to be consistent, and ELM for classification tends to achieve better generalization performance than traditional SVM.