SciSpace (formerly Typeset)
Topic

Extreme learning machine

About: Extreme learning machine is a research topic. Over its lifetime, 7,010 publications have been published within this topic, receiving 130,180 citations. The topic is also known as ELM.


Papers

Journal ArticleDOI: 10.1016/J.NEUCOM.2005.12.126
01 Dec 2006 - Neurocomputing
Abstract: It is clear that the learning speed of feedforward neural networks is in general far slower than required, and this has been a major bottleneck in their applications for the past decades. Two key reasons behind this may be: (1) slow gradient-based learning algorithms are extensively used to train neural networks, and (2) all the parameters of the networks are tuned iteratively by such learning algorithms. Unlike these conventional implementations, this paper proposes a new learning algorithm called extreme learning machine (ELM) for single-hidden-layer feedforward neural networks (SLFNs), which randomly chooses hidden nodes and analytically determines the output weights of SLFNs. In theory, this algorithm tends to provide good generalization performance at extremely fast learning speed. Experimental results based on a few artificial and real benchmark function-approximation and classification problems, including very large complex applications, show that the new algorithm can produce good generalization performance in most cases and can learn thousands of times faster than conventional popular learning algorithms for feedforward neural networks.


Topics: Extreme learning machine (65%), Wake-sleep algorithm (63%), Competitive learning (63%)

8,861 Citations
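
The two-step procedure this abstract describes is compact enough to sketch directly. Below is a minimal, hedged illustration in Python/NumPy, not the authors' reference code: hidden-layer parameters are drawn at random and never tuned, and the output weights are obtained in closed form via the Moore-Penrose pseudoinverse. The class name `ELM`, the sigmoid activation, and the parameter `n_hidden` are illustrative choices, not from the paper.

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine sketch: random hidden layer,
    output weights solved analytically via the pseudoinverse."""

    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        # Hidden-layer output H = g(XW + b), here with a sigmoid g.
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit(self, X, T):
        n_features = X.shape[1]
        # Step 1: randomly assign input weights and biases (never tuned).
        self.W = self.rng.standard_normal((n_features, self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        # Step 2: solve the output weights in one shot: beta = pinv(H) @ T.
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ T
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta
```

With `T` one-hot encoded for classification (or real-valued for regression), training is a single linear solve; there is no iterative gradient loop, which is where the reported speedup over gradient-based training comes from.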


Journal ArticleDOI: 10.1109/TSMCB.2011.2168604
01 Apr 2012
Abstract: Due to the simplicity of their implementations, the least squares support vector machine (LS-SVM) and the proximal support vector machine (PSVM) have been widely used in binary classification applications. The conventional LS-SVM and PSVM cannot be used directly in regression and multiclass classification applications, although variants of both have been proposed to handle such cases. This paper shows that both LS-SVM and PSVM can be simplified further and that a unified learning framework of LS-SVM, PSVM, and other regularization algorithms, referred to as extreme learning machine (ELM), can be built. ELM works for "generalized" single-hidden-layer feedforward networks (SLFNs), but the hidden layer (also called the feature mapping) in ELM need not be tuned. Such SLFNs include, but are not limited to, SVMs, polynomial networks, and conventional feedforward neural networks. This paper shows the following: 1) ELM provides a unified learning platform with a wide range of feature mappings and can be applied directly in regression and multiclass classification applications; 2) from the optimization point of view, ELM has milder optimization constraints than LS-SVM and PSVM; 3) in theory, compared to ELM, LS-SVM and PSVM achieve suboptimal solutions and require higher computational complexity; and 4) in theory, ELM can approximate any target continuous function and classify any disjoint regions. As verified by the simulation results, ELM tends to have better scalability and to achieve similar (for regression and binary-class cases) or much better (for multiclass cases) generalization performance at much faster learning speed (up to thousands of times) than traditional SVM and LS-SVM.


Topics: Multiclass classification (62%), Extreme learning machine (59%), Support vector machine (55%)

4,130 Citations
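
In the equality-constrained optimization view this paper develops, the plain pseudoinverse solve is replaced by a ridge-regularized one. A minimal sketch, assuming the commonly cited closed form beta = (I/C + H^T H)^(-1) H^T T, where C is a user-chosen regularization parameter and H is the (untuned) random feature mapping:

```python
import numpy as np

def elm_ridge_output_weights(H, T, C=1.0):
    """Regularized ELM output weights, assuming the closed form
    beta = (I/C + H^T H)^{-1} H^T T.

    H : (n_samples, n_hidden) hidden-layer output (random feature mapping).
    T : (n_samples, n_outputs) targets, e.g. one-hot labels for
        multiclass classification, or real values for regression.
    C : regularization parameter; larger C means weaker regularization.
    """
    n_hidden = H.shape[1]
    A = np.eye(n_hidden) / C + H.T @ H
    return np.linalg.solve(A, H.T @ T)

def predict_multiclass(H, beta):
    # For multiclass problems, the predicted class is the index of the
    # output node with the largest value.
    return np.argmax(H @ beta, axis=1)
```

This is how ELM handles regression and multiclass classification in one framework: the same solve is used for both, and only the encoding of T changes.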


Proceedings ArticleDOI: 10.1109/IJCNN.2004.1380068
25 Jul 2004
Abstract: It is clear that the learning speed of feedforward neural networks is in general far slower than required, and this has been a major bottleneck in their applications for the past decades. Two key reasons behind this may be: 1) slow gradient-based learning algorithms are extensively used to train neural networks, and 2) all the parameters of the networks are tuned iteratively by such learning algorithms. Unlike these traditional implementations, this paper proposes a new learning algorithm called extreme learning machine (ELM) for single-hidden-layer feedforward neural networks (SLFNs), which randomly chooses the input weights and analytically determines the output weights of SLFNs. In theory, this algorithm tends to provide the best generalization performance at extremely fast learning speed. Experimental results based on real-world benchmark function-approximation and classification problems, including large complex applications, show that the new algorithm can produce the best generalization performance in some cases and can learn much faster than traditional popular learning algorithms for feedforward neural networks.


Topics: Extreme learning machine (70%), Artificial neural network (66%), Wake-sleep algorithm (66%)

3,217 Citations
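
Since this conference version makes the same speed claim as the journal paper above, the point is easy to demonstrate: training reduces to one closed-form solve. A hedged usage sketch, reusing the hypothetical `ELM` class from the first example (the data here is synthetic, purely for illustration; the paper's experiments use real-world benchmarks):

```python
import time
import numpy as np

# Toy data: 1,000 samples, 10 features, 3 one-hot-encoded classes.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 10))
T = np.eye(3)[rng.integers(0, 3, size=1000)]

t0 = time.perf_counter()
model = ELM(n_hidden=100).fit(X, T)   # one linear solve, no epochs
print(f"training time: {time.perf_counter() - t0:.4f} s")

pred = np.argmax(model.predict(X), axis=1)  # predicted class labels
```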


Open access - Journal ArticleDOI: 10.1117/1.JRS.11.015020
Abstract: We evaluate 179 classifiers arising from 17 families (discriminant analysis, Bayesian, neural networks, support vector machines, decision trees, rule-based classifiers, boosting, bagging, stacking, random forests and other ensembles, generalized linear models, nearest neighbors, partial least squares and principal component regression, logistic and multinomial regression, multivariate adaptive regression splines, and other methods), implemented in Weka, R (with and without the caret package), C, and Matlab, including all the relevant classifiers available today. We use 121 data sets, which represent the whole UCI database (excluding the large-scale problems) and other real-world problems of our own, in order to reach significant conclusions about classifier behavior that do not depend on the data set collection. The classifiers most likely to be the best are the random forest (RF) versions, the best of which (implemented in R and accessed via caret) achieves 94.1% of the maximum accuracy, exceeding 90% in 84.3% of the data sets. However, the difference is not statistically significant with respect to the second best, the SVM with Gaussian kernel implemented in C using LibSVM, which achieves 92.3% of the maximum accuracy. A few models are clearly better than the rest: random forest, SVM with Gaussian and polynomial kernels, extreme learning machine with Gaussian kernel, C5.0, and avNNet (a committee of multilayer perceptrons implemented in R with the caret package). Random forest is clearly the best family of classifiers (3 of the 5 best classifiers are RF), followed by SVM (4 classifiers in the top 10), neural networks, and boosting ensembles (5 and 3 members in the top 20, respectively).


Topics: Random subspace method (61%), Probabilistic classification (61%), Random forest (59%)

2,226 Citations
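
The evaluation protocol the paper applies at scale (identical folds, per-classifier accuracy averaged over many data sets) can be sketched in miniature. This is a hedged illustration using scikit-learn, which is an assumption on my part; the paper itself ran its classifiers in Weka, R, C, and Matlab, and one data set stands in here for its 121:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# One data set stands in for the paper's 121; the protocol is the point:
# identical cross-validation folds, accuracy averaged per classifier.
X, y = load_iris(return_X_y=True)

classifiers = {
    "random forest": RandomForestClassifier(n_estimators=500, random_state=0),
    "SVM (Gaussian kernel)": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```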


Journal ArticleDOI: 10.1109/TNN.2006.875977
Abstract: According to conventional neural network theories, single-hidden-layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes are universal approximators when all the parameters of the networks are allowed to be adjustable. However, as observed in most neural network implementations, tuning all the parameters of the networks may make learning complicated and inefficient, and it may be difficult to train networks with nondifferentiable activation functions such as threshold networks. Unlike conventional neural network theories, this paper proves, via an incremental constructive method, that in order to let SLFNs work as universal approximators, one may simply choose hidden nodes at random and then only needs to adjust the output weights linking the hidden layer and the output layer. In such SLFN implementations, the activation function for additive nodes can be any bounded nonconstant piecewise continuous function g: R → R, and the activation function for RBF nodes can be any integrable piecewise continuous function g: R → R with ∫_R g(x) dx ≠ 0. The proposed incremental method is efficient not only for SLFNs with continuous (including nondifferentiable) activation functions but also for SLFNs with piecewise continuous (such as threshold) activation functions. Compared to other popular methods, such a network is fully automatic, and users need not intervene in the learning process by manually tuning control parameters.


Topics: Extreme learning machine (61%), Activation function (58%), Feedforward neural network (57%)

2,172 Citations
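
The incremental construction this abstract describes can also be sketched: hidden nodes are added one at a time with random parameters, and only the output weight of the newest node is computed, by projecting the current residual error onto that node's activations. A minimal sketch, assuming a sigmoid additive node and a scalar regression target; the function name `ielm_fit` and these defaults are illustrative, not from the paper:

```python
import numpy as np

def ielm_fit(X, y, n_nodes=200, seed=0):
    """Incremental constructive ELM sketch: grow the network one
    random hidden node at a time, fitting only that node's output
    weight to the current residual (a closed-form least-squares step)."""
    rng = np.random.default_rng(seed)
    n_samples, n_features = X.shape
    W, b, beta = [], [], []
    residual = y.astype(float).copy()
    for _ in range(n_nodes):
        # Randomly generate one additive hidden node (never re-tuned).
        w_i = rng.standard_normal(n_features)
        b_i = rng.standard_normal()
        h = 1.0 / (1.0 + np.exp(-(X @ w_i + b_i)))  # sigmoid activations
        # Output weight for the new node: project residual onto h.
        beta_i = (residual @ h) / (h @ h)
        residual -= beta_i * h   # network error shrinks monotonically
        W.append(w_i); b.append(b_i); beta.append(beta_i)
    return np.array(W), np.array(b), np.array(beta)
```

Because each step adjusts only one output weight, the construction needs no manual control parameters, which is the "fully automatic" property the abstract claims.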


Performance Metrics
No. of papers in the topic in previous years
Year    Papers
2022        37
2021       787
2020       897
2019       968
2018       935
2017       835

Top Attributes


Topic's top 5 most impactful authors

Guang-Bin Huang - 106 papers, 36.7K citations
Amaury Lendasse - 52 papers, 2.6K citations
Jiuwen Cao - 30 papers, 1.2K citations
Chi-Man Vong - 29 papers, 1.8K citations
Zhiping Lin - 26 papers, 1.3K citations

Network Information
Related Topics (5)
Support vector machine - 73.6K papers, 1.7M citations - 92% related
Genetic algorithm - 67.5K papers, 1.2M citations - 91% related
Artificial neural network - 207K papers, 4.5M citations - 90% related
Particle swarm optimization - 56K papers, 952.6K citations - 89% related
Feature extraction - 111.8K papers, 2.1M citations - 89% related