Journal ArticleDOI
OP-ELM: Optimally Pruned Extreme Learning Machine
TLDR
The proposed OP-ELM methodology performs several orders of magnitude faster than the other algorithms used in this brief, except the original ELM, and is still able to maintain an accuracy that is comparable to the performance of the SVM.
Abstract
In this brief, the optimally pruned extreme learning machine (OP-ELM) methodology is presented. It is based on the original extreme learning machine (ELM) algorithm with additional steps to make it more robust and generic. The whole methodology is presented in detail and then applied to several regression and classification problems. Results for both computational time and accuracy (mean square error) are compared to the original ELM and to three other widely used methodologies: multilayer perceptron (MLP), support vector machine (SVM), and Gaussian process (GP). As the experiments for both regression and classification illustrate, the proposed OP-ELM methodology performs several orders of magnitude faster than the other algorithms used in this brief, except the original ELM. Despite its simplicity and fast performance, the OP-ELM is still able to maintain an accuracy that is comparable to the performance of the SVM. A toolbox for the OP-ELM is publicly available online.
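To make the underlying mechanism concrete, here is a minimal sketch of the basic ELM step on which OP-ELM builds: hidden-layer weights are drawn at random and only the output weights are solved by least squares. This is an illustrative reconstruction, not the authors' toolbox; the actual OP-ELM additionally ranks the random neurons (via multiresponse sparse regression) and prunes them using leave-one-out error, which is omitted here.

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, seed=None):
    """Fit a basic ELM: random sigmoid hidden layer + least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # hidden-layer activation matrix
    beta = np.linalg.pinv(H) @ y                     # output weights via pseudo-inverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Usage: regress a noisy sine wave
X = np.linspace(0.0, 2.0 * np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel() + 0.05 * np.random.default_rng(0).standard_normal(200)
W, b, beta = elm_fit(X, y, n_hidden=30, seed=0)
mse = np.mean((elm_predict(X, W, b, beta) - y) ** 2)
```

Because only `beta` is solved (a single linear least-squares problem), training is extremely fast, which is the source of the speed advantage the abstract reports.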
Citations
Journal ArticleDOI
Extreme Learning Machine for Regression and Multiclass Classification
TL;DR: ELM provides a unified learning platform with a wide range of feature mappings and can be applied directly to regression and multiclass classification; in theory, ELM can approximate any continuous target function and classify any disjoint regions.
Journal ArticleDOI
Extreme learning machines: a survey
TL;DR: A survey on the extreme learning machine (ELM) and its variants, covering (1) the batch learning mode of ELM, (2) fully complex ELM, (3) online sequential ELM, (4) incremental ELM, and (5) ensembles of ELMs.
Journal ArticleDOI
Trends in extreme learning machines
TL;DR: In this paper, the authors report the current state of the theoretical research and practical advances on this subject and provide a comprehensive view of these advances in ELM together with its future perspectives.
Journal ArticleDOI
An Insight into Extreme Learning Machines: Random Neurons, Random Features and Kernels
TL;DR: Provides an insight into ELMs in three aspects, viz. random neurons, random features, and kernels, and shows that, in theory, ELMs (with the same kernels) tend to outperform support vector machines and their variants in both regression and classification applications, with much easier implementation.
Book
Extreme Learning Machine
Erik Cambria, Guang-Bin Huang, Liyanaarachchi Lekamalage Chamara Kasun, Hongming Zhou, Chi-Man Vong, Jiarun Lin, Jianping Yin, Zhiping Cai, Qiang Liu, Kuan Li, Victor C. M. Leung, Liang Feng, Yew-Soon Ong, Meng-Hiot Lim, Anton Akusok, Amaury Lendasse, Francesco Corona, Rui Nian, Yoan Miche, Paolo Gastaldo, Rodolfo Zunino, Sergio Decherchi, Xuefeng Yang, Kezhi Mao, Beom-Seok Oh, Jehyoung Jeon, Kar-Ann Toh, Andrew Beng Jin Teoh, Jaihie Kim, Hanchao Yu, Yiqiang Chen, Junfa Liu +31 more
TL;DR: This special issue includes eight original works that detail the further developments of ELMs in theories, applications, and hardware implementation.
References
Journal ArticleDOI
LIBSVM: A library for support vector machines
Chih-Chung Chang, Chih-Jen Lin +1 more
TL;DR: Issues such as solving SVM optimization problems, theoretical convergence, multiclass classification, probability estimates, and parameter selection are discussed in detail.
Book
Neural Networks: A Comprehensive Foundation
TL;DR: Thorough, well-organized, and completely up to date, this book examines all the important aspects of this emerging technology, including the learning process, back-propagation learning, radial-basis function networks, self-organizing systems, modular networks, temporal processing and neurodynamics, and VLSI implementation of neural networks.
Book
Neural networks for pattern recognition
TL;DR: This is the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition, and is designed as a text, with over 100 exercises, to benefit anyone involved in the fields of neural computation and pattern recognition.
Journal ArticleDOI
Multilayer feedforward networks are universal approximators
TL;DR: It is rigorously established that standard multilayer feedforward networks with as few as one hidden layer using arbitrary squashing functions are capable of approximating any Borel measurable function from one finite dimensional space to another to any desired degree of accuracy, provided sufficiently many hidden units are available.