Journal ArticleDOI
Representational learning with ELMs for big data
TL;DR
Huang et al. propose ELM-AE, a special case of the ELM in which the input is equal to the output and the randomly generated weights are chosen to be orthogonal.
Abstract
Geoffrey Hinton and Pascal Vincent showed that a restricted Boltzmann machine (RBM) and auto-encoders (AE) could be used for feature engineering. These engineered features can then be used to train multiple-layer neural networks, or deep networks. Two types of deep networks based on RBMs exist: the deep belief network (DBN) and the deep Boltzmann machine (DBM). Guang-Bin Huang and colleagues introduced the extreme learning machine (ELM) as a single-layer feed-forward neural network (SLFN) with fast learning speed and good generalization capability. ELM theory for SLFNs shows that hidden nodes can be randomly generated. ELM-AE output weights can be determined analytically, unlike those of RBMs and traditional auto-encoders, which require iterative algorithms. ELM-AE can be seen as a special case of ELM in which the input is equal to the output, and the randomly generated weights are chosen to be orthogonal.
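The abstract's description of ELM-AE can be sketched in a few lines of NumPy: orthogonal random input weights, a sigmoid hidden layer, and output weights solved analytically with the input itself as the reconstruction target. This is a minimal illustration, not the paper's implementation; the parameter names (`n_hidden`, the ridge term `C`) and the sigmoid activation are assumptions for the sketch.

```python
# Minimal ELM-AE sketch: random orthogonal hidden weights, analytic output
# weights. Names (n_hidden, C) and the sigmoid activation are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def elm_ae(X, n_hidden, C=1e3):
    n_features = X.shape[1]
    # Random input weights, orthogonalized as in ELM-AE (n_hidden <= n_features).
    W = rng.standard_normal((n_features, n_hidden))
    W, _ = np.linalg.qr(W)                      # orthonormal columns
    b = rng.standard_normal(n_hidden)
    b /= np.linalg.norm(b)                      # unit-norm random bias
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))     # sigmoid hidden-layer output
    # Output weights solved analytically (ridge least squares), with the
    # target equal to the input X -- the auto-encoding step.
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ X)
    return beta

X = rng.standard_normal((200, 10))
beta = elm_ae(X, n_hidden=5)
features = X @ beta.T                           # learned 5-dim representation
print(features.shape)                           # (200, 5)
```

Because `beta` is obtained in closed form, there is no iterative training loop at all, which is the contrast the abstract draws with RBMs and gradient-trained auto-encoders.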
Citations
High-Performance Extreme Learning Machines: A Complete Toolbox for Big Data Applications
TL;DR: This paper presents a complete approach to the successful utilization of a high-performance extreme learning machine (ELM) toolbox for big data: it summarizes recent advances in algorithmic performance, gives a fresh view of the ELM solution in relation to traditional linear algebraic approaches, and reaps the latest software and hardware performance achievements.
Dimension Reduction With Extreme Learning Machine
TL;DR: This paper introduces a dimension reduction framework which to some extent represents data as parts, has fast learning speed, and learns the between-class scatter subspace; experimental results show the efficacy of linear and non-linear ELM-AE and SELM-AE in terms of discriminative capability, sparsity, training time, and normalized mean square error.
Extreme Learning Machines: A new approach for prediction of reference evapotranspiration
TL;DR: In this article, the performance of the ELM model in predicting Penman-Monteith (P-M) reference evapotranspiration (ET0) is compared with the empirical P-M equation and with a feedforward backpropagation (FFBP) model at the Mosul, Baghdad, and Basrah meteorological stations, located in the northern, central, and southern parts of Iraq.
Short-Term Wind Speed Forecasting via Stacked Extreme Learning Machine With Generalized Correntropy
Xiong Luo,Jiankun Sun,Long Wang,Weiping Wang,Wenbing Zhao,Jinsong Wu,Jenq-Haur Wang,Zijun Zhang +7 more
TL;DR: An enhanced stacked ELM (SELM) is developed by replacing the Euclidean norm of the mean square error (MSE) criterion in ELM with the generalized correntropy criterion to further improve forecasting performance.
Extreme Learning Machine With Composite Kernels for Hyperspectral Image Classification
TL;DR: Two spatial-spectral composite kernel (CK) ELM classification methods are proposed that outperform the general ELM, SVM, and SVM-with-CK methods on hyperspectral images.
References
Extreme learning machine: Theory and applications
TL;DR: A new learning algorithm called ELM is proposed for single-hidden-layer feedforward neural networks (SLFNs); it randomly chooses hidden nodes and analytically determines the output weights of SLFNs, and tends to provide good generalization performance at extremely fast learning speed.
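The basic ELM training procedure this reference describes (random hidden nodes, analytic output weights) can be sketched as follows. This is an illustrative regression example, not the reference's code; the sigmoid activation, `n_hidden`, and the fit/predict split are assumptions.

```python
# Minimal basic-ELM sketch for regression: hidden-layer parameters are
# random and never trained; only the output weights are solved.
import numpy as np

rng = np.random.default_rng(1)

def elm_fit(X, y, n_hidden):
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # sigmoid hidden outputs
    beta = np.linalg.pinv(H) @ y                     # analytic least squares
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

X = rng.standard_normal((300, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
W, b, beta = elm_fit(X, y, n_hidden=50)
y_hat = elm_predict(X, W, b, beta)
print(y_hat.shape)                                   # (300,)
```

The single pseudoinverse solve replaces the iterative weight updates of backpropagation, which is the source of the "extremely fast learning speed" claim.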
Extreme Learning Machine for Regression and Multiclass Classification
TL;DR: ELM provides a unified learning platform with a wide range of feature mappings and can be applied directly in regression and multiclass classification applications; in theory, ELM can approximate any continuous target function and classify any disjoint regions.
Universal approximation using incremental constructive feedforward networks with random hidden nodes
TL;DR: This paper proves, by an incremental constructive method, that for SLFNs to work as universal approximators one may simply choose hidden nodes randomly and then need only adjust the output weights linking the hidden layer and the output layer.
Optimization method based extreme learning machine for classification
TL;DR: Under the ELM learning framework, the SVM's maximal-margin property and the minimal-norm-of-weights theory of feedforward neural networks are actually consistent, and ELM for classification tends to achieve better generalization performance than the traditional SVM.