Journal ArticleDOI
Representational learning with ELMs for big data
TL;DR: Huang et al. proposed ELM-AE, a special case of the ELM in which the input is equal to the output and the randomly generated weights are chosen to be orthogonal.
Abstract:
Geoffrey Hinton and Pascal Vincent showed that a restricted Boltzmann machine (RBM) and auto-encoders (AE) could be used for feature engineering. These engineered features could then be used to train multilayer neural networks, or deep networks. Two types of deep networks based on the RBM exist: the deep belief network (DBN) and the deep Boltzmann machine (DBM). Guang-Bin Huang and colleagues introduced the extreme learning machine (ELM) as a single-layer feed-forward neural network (SLFN) with fast learning speed and good generalization capability. The ELM theory for SLFNs shows that hidden nodes can be randomly generated. ELM-AE output weights can be determined analytically, unlike those of RBMs and traditional auto-encoders, which require iterative algorithms. ELM-AE can be seen as a special case of ELM in which the input is equal to the output and the randomly generated weights are chosen to be orthogonal.
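The ELM-AE described in the abstract can be sketched in a few lines of NumPy: random hidden-layer weights are orthogonalized, and the output weights reconstructing the input are solved analytically rather than iteratively. This is a minimal illustrative sketch, assuming a tanh activation, a toy random dataset, and a ridge parameter `C` — all choices not specified by the paper itself:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))   # toy data: 100 samples, 20 features

L = 10  # hidden nodes (compressed representation, L < 20)

# Randomly generated weights, then orthogonalized as ELM-AE prescribes.
A = rng.normal(size=(20, L))
W, _ = np.linalg.qr(A)           # orthonormal columns: W.T @ W = I
b = rng.normal(size=L)
b /= np.linalg.norm(b)           # unit-norm random bias

H = np.tanh(X @ W + b)           # hidden-layer activations

# Output weights determined analytically (ridge-regularized least squares);
# the target is the input itself, since ELM-AE sets output = input.
C = 1e3
beta = np.linalg.solve(H.T @ H + np.eye(L) / C, H.T @ X)

X_rec = H @ beta                 # reconstruction of the input
```

In the multilayer setting (ML-ELM), the learned `beta` of each ELM-AE is then reused to transform the data for the next layer; the sketch above only shows a single autoencoding step.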
Citations
Journal ArticleDOI
A Survey of Handwritten Character Recognition with MNIST and EMNIST
TL;DR: This paper summarizes the top state-of-the-art contributions reported on the MNIST dataset for handwritten digit recognition, and makes a distinction between works using some kind of data augmentation and works using the original dataset out of the box.
Journal ArticleDOI
A review on animal–robot interaction: from bio-hybrid organisms to mixed societies
Donato Romano, Elisa Donati, Giovanni Benelli, Cesare Stefanini +5 more
TL;DR: In this review, a comprehensive definition of animal–robot interactive technologies is given and an overview of the current state of the art and the recent trends in this novel context is provided.
Journal ArticleDOI
Multilayer Extreme Learning Machine With Subnetwork Nodes for Representation Learning
Yimin Yang, Q. M. Jonathan Wu +1 more
TL;DR: This paper studies the general architecture of multilayer ELM (ML-ELM) with subnetwork nodes, showing that the proposed method provides a representation learning platform with unsupervised/supervised and compressed/sparse representation learning.
Journal ArticleDOI
Non-iterative and Fast Deep Learning: Multilayer Extreme Learning Machines
TL;DR: A thorough review of the development of ML-ELMs, including the stacked ELM autoencoder, residual ELM, and local-receptive-field-based ELM (ELM-LRF), that also addresses their applications and the connection between random neural networks and conventional deep learning.
Journal ArticleDOI
Classification and diagnosis of cervical cancer with stacked autoencoder and softmax classification
TL;DR: New methods for the diagnosis of cervical cancer are presented in this study as patient diagnostic support systems, using softmax classification with a stacked autoencoder alongside other machine learning methods.
References
Journal ArticleDOI
Extreme learning machine: Theory and applications
TL;DR: A new learning algorithm called ELM is proposed for single-hidden-layer feedforward neural networks (SLFNs); it randomly chooses hidden nodes and analytically determines the output weights, and tends to provide good generalization performance at extremely fast learning speed.
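The training procedure this TL;DR summarizes — random hidden nodes plus an analytic least-squares solve for the output weights — can be sketched as follows. This is a minimal illustrative example on a toy regression task, with the tanh activation and node count chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: learn y = sin(x) on [-3, 3].
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X)

L = 50                                 # number of random hidden nodes
W = rng.normal(size=(X.shape[1], L))   # random input weights (never trained)
b = rng.normal(size=L)                 # random biases

H = np.tanh(X @ W + b)                 # hidden-layer output matrix
beta = np.linalg.pinv(H) @ y           # output weights via Moore-Penrose pseudo-inverse

y_hat = np.tanh(X @ W + b) @ beta      # predictions; training MSE is small
```

The only learned parameters are `beta`; everything before the output layer stays at its random initialization, which is what makes training a single linear solve.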
Journal ArticleDOI
Extreme Learning Machine for Regression and Multiclass Classification
TL;DR: ELM provides a unified learning platform with a wide range of feature mappings and can be applied directly to regression and multiclass classification; in theory, ELM can approximate any continuous target function and classify any disjoint regions.
Journal ArticleDOI
Universal approximation using incremental constructive feedforward networks with random hidden nodes
TL;DR: This paper proves, via an incremental constructive method, that for SLFNs to work as universal approximators one may simply choose hidden nodes at random and then adjust only the output weights linking the hidden layer to the output layer.
Journal ArticleDOI
Optimization method based extreme learning machine for classification
TL;DR: Under the ELM learning framework, the SVM's maximal-margin property and the minimal-norm-of-weights theory of feedforward neural networks are actually consistent, and ELM for classification tends to achieve better generalization performance than the traditional SVM.