Journal ArticleDOI

Representational learning with ELMs for big data

TLDR
Huang et al. propose ELM-AE, a special case of the ELM in which the input is equal to the output and the randomly generated weights are chosen to be orthogonal.
Abstract
Geoffrey Hinton and Pascal Vincent showed that a restricted Boltzmann machine (RBM) and auto-encoders (AE) could be used for feature engineering. These engineered features could then be used to train multiple-layer neural networks, or deep networks. Two types of deep networks based on the RBM exist: the deep belief network (DBN) and the deep Boltzmann machine (DBM). Guang-Bin Huang and colleagues introduced the extreme learning machine (ELM) as a single-layer feed-forward neural network (SLFN) with a fast learning speed and good generalization capability. The ELM theory for SLFNs shows that hidden nodes can be randomly generated. ELM-AE output weights can be determined analytically, unlike those of RBMs and traditional auto-encoders, which require iterative algorithms. ELM-AE can be seen as a special case of ELM, where the input is equal to the output and the randomly generated weights are chosen to be orthogonal.
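A minimal NumPy sketch of the ELM-AE idea described above (the function name, tanh activation, and ridge term are illustrative assumptions, not the authors' exact formulation; it assumes n_hidden <= n_features so the random weights can be made orthonormal):

import numpy as np

def elm_autoencoder(X, n_hidden, reg=1e-3, rng=None):
    # ELM-AE sketch: random orthogonal hidden weights, target = input,
    # output weights computed analytically (here with a ridge term).
    rng = np.random.default_rng(rng)
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))
    Q, _ = np.linalg.qr(W)             # columns of Q are orthonormal
    b = rng.standard_normal(n_hidden)
    b /= np.linalg.norm(b)             # unit-norm random bias
    H = np.tanh(X @ Q + b)             # hidden-layer activations
    # Closed-form solution of min ||H @ beta - X||^2 + reg * ||beta||^2.
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ X)
    return Q, b, beta

Here H @ beta reconstructs X; in the multilayer setting, the learned output weights beta provide the feature mapping that feeds the next layer.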


Citations
Book ChapterDOI

Study the Significance of ML-ELM Using Combined PageRank and Content-Based Feature Selection

Abstract: Scalable big data analysis frameworks are of paramount importance in the modern web society, which is characterized by a huge number of resources, including electronic text documents. Hence, choosing an adequate subset of features that provides a complete representation of a document while discarding the irrelevant ones is of utmost importance. Aiming in this direction, this paper studies the suitability and importance of a deep learning classifier called Multilayer ELM (ML-ELM) by proposing a combined PageRank and content-based feature selection (CPRCFS) technique applied to all the terms present in a given corpus. The top k% of terms are selected to generate a reduced feature vector, which is then used to train different classifiers, including ML-ELM (a sketch of the selection pattern follows below). Experimental results show that the proposed feature selection technique is better than or comparable with the baseline techniques, and that Multilayer ELM can outperform state-of-the-art machine learning and deep learning classifiers.
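The paper's exact scoring function is not reproduced here; a hedged sketch of the general pattern (a hypothetical linear blend of per-term PageRank and content-based scores with mixing weight alpha, followed by a top-k% cut) might look like:

import numpy as np

def select_top_k_percent(terms, pr_scores, content_scores, k=10.0, alpha=0.5):
    # Hedged sketch of a combined PageRank + content-based cut:
    # alpha is a hypothetical mixing weight, not taken from the paper.
    combined = alpha * np.asarray(pr_scores) + (1 - alpha) * np.asarray(content_scores)
    n_keep = max(1, int(len(terms) * k / 100))    # keep top k% of terms
    order = np.argsort(combined)[::-1][:n_keep]   # highest combined score first
    return [terms[i] for i in order]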
Posted Content

The RNN-ELM Classifier

TL;DR: This paper examines learning methods that combine the Random Neural Network (a biologically inspired neural network) and the Extreme Learning Machine, achieving state-of-the-art classification performance while requiring much shorter training time.
Journal ArticleDOI

RETRACTED: A novel nonlinear VSG integrating ELM with noise injection for enhancing energy modelling and analysis on small data

TL;DR: Simulation results indicate that good virtual samples can be generated using the proposed method, and the accuracy of the energy analysis model is much improved with the aid of the newly generated virtual samples.
Journal ArticleDOI

Decision-Refillable-Based Two-Material-View Fuzzy Classification for Personal Thermal Comfort

TL;DR: A two-view thermal comfort fuzzy classification model is constructed using the interpretable zero-order Takagi-Sugeno-Kang (TSK) fuzzy classifier as the basic training sub-block.
References
Journal ArticleDOI

Extreme learning machine: Theory and applications

TL;DR: A new learning algorithm called ELM is proposed for single-hidden-layer feedforward neural networks (SLFNs); it randomly chooses hidden nodes and analytically determines the output weights, and tends to provide good generalization performance at extremely fast learning speed.
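A minimal sketch of this training scheme (the function names and tanh activation are illustrative assumptions):

import numpy as np

def elm_train(X, T, n_hidden, rng=None):
    # Basic ELM sketch: random hidden layer, output weights solved
    # in one step via the Moore-Penrose pseudoinverse.
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                     # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta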
Journal ArticleDOI

Extreme Learning Machine for Regression and Multiclass Classification

TL;DR: ELM provides a unified learning platform with a wide range of feature mappings and can be applied directly in regression and multiclass classification; in theory, ELM can approximate any target continuous function and classify any disjoint regions.
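The cited paper's regularized closed form for the output weights is beta = (I/C + H^T H)^(-1) H^T T; a short sketch (one-hot target matrix T assumed for multiclass):

import numpy as np

def elm_ridge_train(H, T, C=1.0):
    # Unified (regularized) ELM sketch: H is the hidden-layer output
    # matrix, T the targets (one-hot rows for multiclass), C the
    # regularization trade-off parameter.
    n_hidden = H.shape[1]
    return np.linalg.solve(np.eye(n_hidden) / C + H.T @ H, H.T @ T)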
Journal ArticleDOI

Universal approximation using incremental constructive feedforward networks with random hidden nodes

TL;DR: This paper proves, by an incremental constructive method, that for SLFNs to work as universal approximators one may simply choose hidden nodes at random and then adjust only the output weights linking the hidden layer and the output layer.
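A sketch of the incremental construction for a scalar target, using the analytic output weight beta_n = <e, h> / <h, h> computed from the current residual e (the tanh node type and names are illustrative assumptions):

import numpy as np

def ielm_fit(X, t, max_nodes=50, rng=None):
    # Incremental ELM (I-ELM) sketch: add one random hidden node at a
    # time; only that node's output weight is computed analytically.
    rng = np.random.default_rng(rng)
    e = t.astype(float).copy()       # residual starts at the target
    nodes = []
    for _ in range(max_nodes):
        w = rng.standard_normal(X.shape[1])
        b = rng.standard_normal()
        h = np.tanh(X @ w + b)       # new hidden node's output vector
        beta = (e @ h) / (h @ h)     # analytic output weight
        e -= beta * h                # residual norm is non-increasing
        nodes.append((w, b, beta))
    return nodes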
Journal ArticleDOI

Optimization method based extreme learning machine for classification

TL;DR: Under the ELM learning framework, SVM's maximal-margin property and the minimal-norm-of-weights theory of feedforward neural networks are actually consistent, and ELM for classification tends to achieve better generalization performance than traditional SVM.