Journal Article (DOI)

Voting based extreme learning machine

01 Feb 2012 · Information Sciences (Elsevier Science Inc.) · Vol. 185, Iss. 1, pp. 66-77
TL;DR: The proposed method incorporates majority voting into the popular extreme learning machine (ELM) for classification applications and generally outperforms the original ELM algorithm as well as several recent classification algorithms.
About: This article was published in Information Sciences on 2012-02-01 and has received 329 citations to date. The article focuses on the topics: Computational learning theory & Multiclass classification.
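The voting scheme summarized in the TL;DR — training several ELMs independently and classifying by majority vote — can be sketched as follows. This is a minimal NumPy illustration under assumed settings (tanh hidden nodes, one-hot targets, placeholder ensemble size and hidden-layer width), not the authors' implementation:

```python
import numpy as np

def train_elm(X, T, n_hidden, rng):
    """Train one ELM: random hidden layer, closed-form output weights."""
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                     # Moore-Penrose solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Predicted class = argmax of the network output."""
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

def velm_predict(X_train, y_train, X_test, n_classes, k=7, n_hidden=20, seed=0):
    """Majority vote over k independently initialized ELMs."""
    rng = np.random.default_rng(seed)
    T = np.eye(n_classes)[y_train]                   # one-hot targets
    votes = np.zeros((X_test.shape[0], n_classes))
    for _ in range(k):
        W, b, beta = train_elm(X_train, T, n_hidden, rng)
        votes[np.arange(X_test.shape[0]), elm_predict(X_test, W, b, beta)] += 1
    return np.argmax(votes, axis=1)                  # class with the most votes
```

Because each ELM draws its hidden-layer weights at random, individual networks may misclassify borderline samples differently; the vote averages out this randomness, which is the effect the paper exploits.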
Citations
Journal Article (DOI)
TL;DR: The authors survey the current state of theoretical research and practical advances in ELM, providing a comprehensive view of these advances together with future perspectives.

1,289 citations

Book
01 Jan 1994

607 citations

Journal Article (DOI)
TL;DR: A novel hierarchical PCA-EELM (principal component analysis-ensemble extreme learning machine) model to predict protein-protein interactions only using the information of protein sequences is presented.
Abstract: Protein-protein interactions (PPIs) play crucial roles in the execution of various cellular processes and form the basis of biological mechanisms. Although large amounts of PPI data for different species have been generated by high-throughput experimental techniques, the PPI pairs obtained with experimental methods cover only a fraction of the complete PPI networks, and the experimental methods for identifying PPIs are both time-consuming and expensive. Hence, it is urgent and challenging to develop automated computational methods to efficiently and accurately predict PPIs. We present here a novel hierarchical PCA-EELM (principal component analysis-ensemble extreme learning machine) model to predict protein-protein interactions using only the information of protein sequences. In the proposed method, 11188 protein pairs retrieved from the DIP database were encoded into feature vectors by using four kinds of protein sequence information. Focusing on dimension reduction, an effective feature extraction method, PCA, was then employed to construct the most discriminative new feature set. Finally, multiple extreme learning machines were trained and then aggregated into a consensus classifier by majority voting. Ensembling the extreme learning machines removes the dependence of the results on the initial random weights and improves the prediction performance. When applied to the PPI data of Saccharomyces cerevisiae, the proposed method achieved 87.00% prediction accuracy with 86.15% sensitivity at a precision of 87.59%. Extensive experiments were performed to compare our method with the state-of-the-art Support Vector Machine (SVM) technique. Experimental results demonstrate that the proposed PCA-EELM outperforms the SVM method under 5-fold cross-validation; moreover, PCA-EELM runs faster than the PCA-SVM based method.
Consequently, the proposed approach can be considered a promising and powerful new tool for predicting PPIs with excellent performance and less computation time.
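The PCA step in the pipeline above — projecting the encoded feature vectors onto their most discriminative directions before training the ELM ensemble — can be sketched with an SVD-based reduction. This is a minimal illustration; the data, dimensions, and component count are placeholders, not the paper's settings:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project X onto its top principal components via SVD."""
    Xc = X - X.mean(axis=0)                    # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T            # scores in the reduced space

# Illustrative use: 1000 feature vectors of dimension 400 reduced to 50
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 400))
Z = pca_reduce(X, 50)
```

Each ELM in the ensemble would then be trained on the reduced matrix `Z`, and test samples assigned to the class receiving the most votes.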

275 citations


Cites background from "Voting based extreme learning machi..."

  • ...Recently, Huang et al. proposed a new learning algorithm called extreme learning machine (ELM), which randomly assigns all the hidden node parameters of generalized single-hidden layer feed-forward networks (SLFNs) and analytically determines the output weights of SLFNs[18-21]....


Journal Article (DOI)
TL;DR: Simulations have shown that SaE-ELM not only performs better than E-ELM with manually chosen generation strategies and control parameters but also obtains better generalization performance than several related methods.
Abstract: In this paper, we propose an improved learning algorithm named self-adaptive evolutionary extreme learning machine (SaE-ELM) for single-hidden-layer feedforward networks (SLFNs). In SaE-ELM, the network hidden node parameters are optimized by the self-adaptive differential evolution algorithm, whose trial vector generation strategies and their associated control parameters are self-adapted in a strategy pool by learning from their previous experiences in generating promising solutions, while the network output weights are calculated using the Moore–Penrose generalized inverse. SaE-ELM outperforms the evolutionary extreme learning machine (E-ELM) and the different evolutionary Levenberg–Marquardt method in general, as it can self-adaptively determine the suitable control parameters and generation strategies involved in DE. Simulations have shown that SaE-ELM not only performs better than E-ELM with manually chosen generation strategies and control parameters but also obtains better generalization performance than several related methods.
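The trial-vector generation and greedy selection the abstract refers to are the core of differential evolution. Below is a minimal sketch of one generation of the classic DE/rand/1/bin strategy (one of the strategies in SaE-ELM's pool); fixed F and CR values are used here for illustration, whereas SaE-ELM self-adapts them:

```python
import numpy as np

def de_step(pop, fitness, F=0.5, CR=0.9, rng=None):
    """One generation of DE/rand/1/bin: mutate, cross over, greedily select."""
    if rng is None:
        rng = np.random.default_rng()
    NP, D = pop.shape
    new_pop = pop.copy()
    for i in range(NP):
        # Mutation: base vector plus a scaled difference of two others
        r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])
        # Binomial crossover: mix mutant and target genes
        mask = rng.random(D) < CR
        mask[rng.integers(D)] = True           # guarantee at least one mutant gene
        trial = np.where(mask, mutant, pop[i])
        # Greedy selection: keep whichever vector has better fitness
        if fitness(trial) <= fitness(pop[i]):
            new_pop[i] = trial
    return new_pop

# Illustrative use: minimize the sphere function over a small population
rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(20, 4))
sphere = lambda v: float(np.sum(v ** 2))
for _ in range(100):
    pop = de_step(pop, sphere, rng=rng)
```

In SaE-ELM, each population member would encode a full set of hidden-node parameters, and the fitness function would be the validation error of the ELM built from those parameters.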

244 citations

Journal Article (DOI)
TL;DR: Extensive experiments on handwritten digit classification, landmark recognition and face recognition demonstrate that the proposed hybrid classifier outperforms ELM and SRC in classification accuracy with outstanding computational efficiency.

190 citations

References
Book
16 Jul 1998
TL;DR: Thorough, well-organized, and completely up to date, this book examines all the important aspects of this emerging technology, including the learning process, back-propagation learning, radial-basis function networks, self-organizing systems, modular networks, temporal processing and neurodynamics, and VLSI implementation of neural networks.
Abstract: From the Publisher: This book represents the most comprehensive treatment available of neural networks from an engineering perspective. Thorough, well-organized, and completely up to date, it examines all the important aspects of this emerging technology, including the learning process, back-propagation learning, radial-basis function networks, self-organizing systems, modular networks, temporal processing and neurodynamics, and VLSI implementation of neural networks. Written in a concise and fluid manner, by a foremost engineering textbook author, to make the material more accessible, this book is ideal for professional engineers and graduate students entering this exciting field. Computer experiments, problems, worked examples, a bibliography, photographs, and illustrations reinforce key concepts.

29,130 citations

Journal Article (DOI)

28,888 citations


"Voting based extreme learning machi..." refers methods in this paper

  • ...In this subsection, simulations using V-ELM, SVM [12], OP-ELM [28], BP [9,24,27], and KNN [7,2] are conducted on all the above 19 datasets....


  • ...For BP, the most frequently adopted Levenberg–Marquardt algorithm [24,27] is used to train the neural network....


  • ...It is a tuning free algorithm and learns much faster than traditional gradient-based approaches, such as Back-Propagation [9] algorithm (BP) and Levenberg–Marquardt [24,27] algorithm....


  • ...Simulations on many real world classification datasets demonstrate that V-ELM outperforms several recent methods in general, including the original ELM [14], support vector machine (SVM) [12], optimally pruned extreme learning machine (OP-ELM) [28], Back-Propagation algorithm (BP) [9,24,27], K nearest neighbors algorithm (KNN) [2,7], robust fuzzy relational classifier (RFRC) [5], radial basis function neural network (RBFNN) [33] and multiobjective simultaneous learning framework (MSCC) [6]....


  • ...In this section, the performance of the proposed V-ELM is compared with the original ELM [14], SVM [12] and several other recent classification methods, including OP-ELM [28], Back-Propagation algorithm (BP) [9,24,27], K nearest neighbors algorithm (KNN) [2,7], RFRC [5], RBFNN [33] and MSCC [6]....


Journal Article (DOI)
TL;DR: It is rigorously established that standard multilayer feedforward networks with as few as one hidden layer using arbitrary squashing functions are capable of approximating any Borel measurable function from one finite dimensional space to another to any desired degree of accuracy, provided sufficiently many hidden units are available.

18,794 citations


"Voting based extreme learning machi..." refers background in this paper

  • ...Second, neural networks have the universal approximation characteristic [10,11]....


Journal Article (DOI)
TL;DR: Fundamentals of Statistical Signal Processing: Estimation Theory is a seminal work in statistical signal processing and has been used extensively in many applications.
Abstract: (1995). Fundamentals of Statistical Signal Processing: Estimation Theory. Technometrics: Vol. 37, No. 4, pp. 465-466.

14,342 citations