
Showing papers by "Amaury Lendasse published in 2013"


Book
30 May 2013
TL;DR: This special issue includes eight original works that detail the further developments of ELMs in theories, applications, and hardware implementation.
Abstract: This special issue includes eight original works that detail the further developments of ELMs in theories, applications, and hardware implementation. In "Representational Learning with ELMs for Big Data," Liyanaarachchi Lekamalage Chamara Kasun, Hongming Zhou, Guang-Bin Huang, and Chi Man Vong propose using the ELM as an auto-encoder for learning feature representations using singular values. In "A Secure and Practical Mechanism for Outsourcing ELMs in Cloud Computing," Jiarun Lin, Jianping Yin, Zhiping Cai, Qiang Liu, Kuan Li, and Victor C.M. Leung propose a method for handling large data applications by outsourcing to the cloud that would dramatically reduce ELM training time. In "ELM-Guided Memetic Computation for Vehicle Routing," Liang Feng, Yew-Soon Ong, and Meng-Hiot Lim consider the ELM as an engine for automating the encapsulation of knowledge memes from past problem-solving experiences. In "ELMVIS: A Nonlinear Visualization Technique Using Random Permutations and ELMs," Anton Akusok, Amaury Lendasse, Rui Nian, and Yoan Miche propose an ELM method for data visualization based on random permutations to map original data and their corresponding visualization points. In "Combining ELMs with Random Projections," Paolo Gastaldo, Rodolfo Zunino, Erik Cambria, and Sergio Decherchi analyze the relationships between ELM feature-mapping schemas and the paradigm of random projections. In "Reduced ELMs for Causal Relation Extraction from Unstructured Text," Xuefeng Yang and Kezhi Mao propose combining ELMs with neuron selection to optimize the neural network architecture and improve the ELM ensemble's computational efficiency. In "A System for Signature Verification Based on Horizontal and Vertical Components in Hand Gestures," Beom-Seok Oh, Jehyoung Jeon, Kar-Ann Toh, Andrew Beng Jin Teoh, and Jaihie Kim propose a novel paradigm for hand signature biometry for touchless applications without the need for handheld devices. 
Finally, in "An Adaptive and Iterative Online Sequential ELM-Based Multi-Degree-of-Freedom Gesture Recognition System," Hanchao Yu, Yiqiang Chen, Junfa Liu, and Guang-Bin Huang propose an online sequential ELM-based efficient gesture recognition algorithm for touchless human-machine interaction.

705 citations
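
As background for the works collected in this issue, the basic ELM that they all build on can be sketched in a few lines: a hidden layer with random, untrained weights followed by a least-squares fit of the output weights only. The sketch below is a minimal illustration with toy data; function names and sizes are our own, not from any of the papers:

```python
import numpy as np

def elm_train(X, Y, n_hidden=50, seed=0):
    """Basic ELM: random hidden layer, least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.standard_normal(n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                            # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)      # only the output layer is fitted
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# toy regression problem: learn y = sin(x) on [0, pi]
X = np.linspace(0.0, np.pi, 200).reshape(-1, 1)
Y = np.sin(X).ravel()
W, b, beta = elm_train(X, Y)
Y_hat = elm_predict(X, W, b, beta)
```

Because only the linear output layer is fitted, training reduces to a single least-squares solve, which is what makes ELM training so fast compared to backpropagation.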


Journal ArticleDOI
TL;DR: A cascade of an L1 penalty (LARS) and an L2 penalty (Tikhonov regularization) on the ELM (TROP-ELM) regularizes the matrix computations and hence makes the MSE computation more reliable; in addition, the expected pairwise distances between samples are estimated directly on incomplete data, offering the ELM a solution to missing-data issues.

112 citations
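
The L2 (Tikhonov) stage of such a cascade amounts to ridge-regularising the ELM output weights, which stabilises the matrix inversion that plain least squares relies on. The sketch below shows only that stage on toy data of our own; the LARS step and the actual TROP-ELM pruning are omitted:

```python
import numpy as np

def tikhonov_output_weights(H, y, lam):
    """Ridge-regularised output weights: beta = (H'H + lam*I)^(-1) H'y."""
    return np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T @ y)

# toy setup: random tanh features (an ELM hidden layer) of noisy linear data
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(100)
H = np.tanh(X @ rng.standard_normal((3, 40)))   # hidden-layer outputs
beta = tikhonov_output_weights(H, y, lam=1e-2)
mse = np.mean((H @ beta - y) ** 2)
```

The added `lam * I` term keeps `H'H` well conditioned even when hidden neurons are nearly collinear, which is the situation that makes unregularised MSE estimates unreliable.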


Journal ArticleDOI
TL;DR: An algorithm is introduced, which adds an additional layer to standard extreme learning machines in order to optimise the subset of selected features.

84 citations


Journal ArticleDOI
TL;DR: A novel scheme for fast face recognition via extreme learning machine (ELM) and sparse coding is presented, achieving accuracy comparable to state-of-the-art techniques at a much higher speed.
Abstract: Most face recognition approaches developed so far rely on sparse coding as an essential component, yet sparse coding models are hampered by extremely expensive computational costs in their implementation. In this paper, a novel scheme for fast face recognition is presented via extreme learning machine (ELM) and sparse coding. The common feature hypothesis is first introduced to extract the basis function from local universal images, and then a single hidden layer feedforward network (SLFN) is established to simulate the sparse coding process for face images with the ELM algorithm. Several refinements are made to preserve the efficient inherent information embedding of ELM learning. The resulting local sparse coding coefficients are then grouped into a global representation and fed into an ELM ensemble, composed of a number of SLFNs, for face recognition. Simulation results show that the proposed approach performs comparably to state-of-the-art techniques at a much higher speed.

57 citations


Journal ArticleDOI
TL;DR: It is shown that directly estimating distances tends to yield more accurate results than calculating distances from an imputed data set, and an algorithm to compute the estimated distances is presented.

46 citations
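
The idea of estimating distances directly on incomplete data can be illustrated with a simplified per-column Gaussian model: a missing coordinate contributes its column mean to the difference and its column variance to the expected squared distance. This sketch is our own simplification for illustration, not the algorithm from the paper:

```python
import numpy as np

def expected_sq_distances(X):
    """Expected pairwise squared distances for data with NaNs (simplified).

    Each missing entry is replaced by its column mean, and that column's
    variance is added to the expected squared distance, reflecting the
    uncertainty of the imputed value.
    """
    mu = np.nanmean(X, axis=0)
    var = np.nanvar(X, axis=0)
    miss = np.isnan(X)
    Xf = np.where(miss, mu, X)
    n = X.shape[0]
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            D[i, j] = (np.sum((Xf[i] - Xf[j]) ** 2)
                       + np.sum(var[miss[i]]) + np.sum(var[miss[j]]))
    np.fill_diagonal(D, 0.0)
    return D

# on complete rows the estimate reduces to the ordinary squared distance
X = np.array([[0.0, 0.0], [3.0, 4.0], [np.nan, 4.0]])
D = expected_sq_distances(X)
```

Note how the estimate between the complete rows equals the exact squared distance (25 here), while pairs involving missing entries receive an extra variance term instead of silently trusting the imputed mean.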


Journal ArticleDOI
TL;DR: It is shown that the proposed approach can identify the inherent distribution and the dependence structure for each 3D object along multiple view angles by evaluating the local topological segments with a dipole topology model and developing the relevant mathematical criterion with ELM algorithm.
Abstract: In this paper, a geometrical topology hypothesis is presented based on the optimal cognition principle, and a single-hidden-layer feedforward neural network with extreme learning machine (ELM) is used for 3D object recognition. It is shown that the proposed approach can identify the inherent distribution and the dependence structure of each 3D object along multiple view angles by evaluating local topological segments with a dipole topology model and developing the relevant mathematical criterion with the ELM algorithm. An ELM ensemble is then used to combine the individual single-hidden-layer feedforward neural networks of each 3D object for performance improvements. Simulation results demonstrate the effectiveness of the developed scheme.

30 citations


Book ChapterDOI
12 Jun 2013
TL;DR: The Minimal Learning Machine is able to achieve accuracies that are comparable to many de facto standard methods for regression and it offers a computationally valid alternative to such approaches.
Abstract: In this work, a novel supervised learning method, the Minimal Learning Machine (MLM), is proposed. Learning an MLM consists in reconstructing the mapping that exists between input and output distance matrices and then estimating the response from the geometrical configuration of the output points. Given its general formulation, the Minimal Learning Machine is inherently capable of operating on nonlinear regression problems as well as on multidimensional response spaces. In addition, an intuitive extension of the MLM is proposed to deal with classification problems. On the basis of our experiments, the Minimal Learning Machine achieves accuracies comparable to many de facto standard methods for regression while offering a computationally valid alternative to such approaches.

21 citations
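
The two MLM steps, fitting a linear map between distance matrices and recovering the output from estimated distances, can be sketched for a one-dimensional response. The grid-based multilateration below is a crude stand-in for the optimisation used in practice, and all names and toy data are our own:

```python
import numpy as np

def mlm_fit(X, Y):
    """Fit a linear map B between input and output distance matrices."""
    Dx = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # input distances
    Dy = np.abs(Y[:, None] - Y[None, :])                        # output distances (1-D case)
    B, *_ = np.linalg.lstsq(Dx, Dy, rcond=None)
    return B

def mlm_predict(x, X, Y, B, grid):
    dx = np.linalg.norm(X - x, axis=1)   # distances from the query to reference inputs
    dy_hat = dx @ B                      # estimated distances to reference outputs
    # locate the response by 1-D multilateration over a candidate grid
    cost = [np.sum((np.abs(g - Y) - dy_hat) ** 2) for g in grid]
    return grid[int(np.argmin(cost))]

# toy regression: y = 2x, so output distances are exactly twice the input distances
X = np.linspace(0.0, 1.0, 20).reshape(-1, 1)
Y = 2.0 * X[:, 0]
B = mlm_fit(X, Y)
grid = np.linspace(0.0, 2.0, 401)
y_hat = mlm_predict(np.array([0.5]), X, Y, B, grid)
```

On this toy problem the learned map is (up to numerics) twice the identity, and the multilateration step recovers y = 1.0 for the query x = 0.5.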


Book ChapterDOI
17 Oct 2013
TL;DR: Experiments on time series forecasting show that including the constraints in the training phase particularly reduces the risk of overfitting in challenging situations with missing values or a large number of Gaussian components.
Abstract: Gaussian mixture models provide an appealing tool for time series modelling. By embedding the time series to a higher-dimensional space, the density of the points can be estimated by a mixture model. The model can directly be used for short-to-medium term forecasting and missing value imputation. The modelling setup introduces some restrictions on the mixture model, which when appropriately taken into account result in a more accurate model. Experiments on time series forecasting show that including the constraints in the training phase particularly reduces the risk of overfitting in challenging situations with missing values or a large number of Gaussian components.

21 citations
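
The embedding step, turning a scalar series into points whose density can be modelled, is easy to sketch. For brevity the mixture is collapsed to a single Gaussian here, whose conditional mean gives the one-step forecast; a real application would fit a full mixture by EM. Names and toy data are our own:

```python
import numpy as np

def embed(series, dim):
    """Delay-embed a 1-D series into overlapping dim-length vectors."""
    n = len(series) - dim + 1
    return np.stack([series[i:i + dim] for i in range(n)])

def forecast_one_step(series, dim=3):
    """One-step forecast from the density of embedded points.

    Simplified to a single Gaussian component: the forecast is the
    conditional mean of the last coordinate given the first dim-1.
    """
    Z = embed(series, dim)
    mu = Z.mean(axis=0)
    S = np.cov(Z, rowvar=False)
    a, b = slice(0, dim - 1), dim - 1
    w = np.linalg.solve(S[a, a], S[a, b])   # regression weights of last coord on the rest
    past = series[-(dim - 1):]
    return mu[b] + (past - mu[a]) @ w

# toy series: a sine wave, whose next value is an exact linear function of the past two
t = np.linspace(0.0, 20.0, 400)
series = np.sin(t)
y_next = forecast_one_step(series)
```

The same conditional-mean machinery extends to a mixture by weighting each component's conditional mean with its posterior responsibility, which is what makes the mixture model usable for both forecasting and missing-value imputation.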


Book ChapterDOI
12 Jun 2013
TL;DR: The original (basic) Extreme Learning Machine (ELM) is described, and several extensions are presented and compared, including the Tikhonov-Regularized Optimally-Pruned Extreme Learning Machine and a methodology to linearly ensemble ELMs.
Abstract: This paper describes the original (basic) Extreme Learning Machine (ELM). Properties such as robustness and sensitivity to variable selection are studied. Several extensions of the original ELM are then presented and compared. First, the Tikhonov-Regularized Optimally-Pruned Extreme Learning Machine (TROP-ELM) is summarized as an improvement of the Optimally-Pruned Extreme Learning Machine (OP-ELM), adding an L2 regularization penalty within the OP-ELM. Second, a methodology to linearly ensemble ELMs (ELM-ELM) is presented to improve the performance of the original ELM. These methodologies (TROP-ELM and ELM-ELM) are tested against state-of-the-art methods such as Support Vector Machines and Gaussian Processes, as well as against the original ELM and OP-ELM, on ten different data sets. A specific experiment testing the sensitivity of these methodologies to variable selection is also presented.

18 citations



Proceedings Article
01 Jan 2013
TL;DR: This article proposes a method to identify market states by integrating two classification algorithms, a Robust Kohonen Self-Organising Map and CART: the former is used to study the separation of market states, and the latter to compute the conditional probabilities of the related market states.
Abstract: The financial market dynamics can be characterized by macro-economic, micro-financial, and market risk indicators, used as leading indicators by market professionals. In this article, we propose a method to identify market states by integrating two classification algorithms: a Robust Kohonen Self-Organising Map and CART. After studying the separation of market states using the former, we use the latter to characterize the economic conditions over time and to compute the conditional probabilities of the related market states.

Book ChapterDOI
12 Jun 2013
TL;DR: The Extreme Learning Machine is considered for accurate regression estimation and the related problem of selecting the appropriate number of neurons for the model is considered, where Jackknife Model Averaging is a combination method based on leave-one-out residuals of linear models.
Abstract: We consider the Extreme Learning Machine model for accurate regression estimation and the related problem of selecting the appropriate number of neurons for the model. Selection strategies that choose "the best" model from a set of candidate network structures neglect the issue of model selection uncertainty. To alleviate the problem, we propose to replace this selection phase with a combination layer that takes into account all considered models. The proposed method is the Extreme Learning Machine (Jackknife Model Averaging), where Jackknife Model Averaging is a combination method based on leave-one-out residuals of linear models. The combination approach is shown to have better predictive performance on several real-world data sets.
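
The leave-one-out residuals that drive the combination can be computed in closed form for any linear model via the hat matrix, with no refitting. The sketch below stops at simple inverse-MSE weights rather than the quadratic program that Jackknife Model Averaging actually solves; data, sizes, and names are illustrative:

```python
import numpy as np

def loo_residuals(H, y):
    """Leave-one-out residuals of a linear least-squares fit, via the hat matrix."""
    P = H @ np.linalg.pinv(H)            # hat (projection) matrix of the model
    e = y - P @ y                        # ordinary residuals
    return e / (1.0 - np.diag(P))        # closed-form LOO residuals

# candidate ELMs with different hidden-layer sizes on the same toy data
rng = np.random.default_rng(2)
X = rng.standard_normal((120, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(120)
models = []
for n_h in (5, 15, 40):
    H = np.tanh(X @ rng.standard_normal((2, n_h)))
    models.append(loo_residuals(H, y))

# simplified combination weights: inverse LOO-MSE, normalised onto the simplex
mse = np.array([np.mean(e ** 2) for e in models])
weights = (1.0 / mse) / np.sum(1.0 / mse)
```

The closed-form LOO residuals are what make the approach cheap: assessing every candidate network size costs one fit each, not one fit per left-out sample.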

Journal ArticleDOI
TL;DR: This special issue not only considers the application of models to improve classification or prediction accuracy but also presents papers where the data are analysed properly; it is hoped that reading this special issue will help medical researchers become aware of new machine-learning methods and techniques and see how they could be applied.
Abstract: Machine-learning disciplines, including model design and data preprocessing, are crucial to obtaining good performance in terms of accurate results and interpretability. However, they are not usually treated simultaneously, and when a model is evaluated, the origin and preprocessing of the data are ignored. Medicine and biomedical research provide a wide variety of problems where machine learning can be very helpful in decision support, telemedicine, and the discovery of interactions. These facts motivated the elaboration of this special issue; it is therefore focused on methods and applications where machine learning can be applied holistically, encompassing all stages of solving a problem. The papers included in the special issue cover the intersection between the medical field of application and theoretical models. For example, generalized estimating equations, a common approach, are compared against quadratic inference functions when applied to a lipid and glucose study. It is common in the field of medicine to be suspicious of predictions made by models, so it is also interesting to read another paper presenting machine-learning techniques as a decision support tool that does not replace expert judgment. This special issue not only considers the application of models to improve classification or prediction accuracy but also presents papers where the data are analysed properly. In medical problems, it is quite common to have both continuous and discrete variables; to show how to deal with these situations, the paper entitled "Let continuous outcome variables remain continuous" demonstrates how to apply a popular regression method without dichotomising the variables, as dichotomisation could result in a loss of information. We hope that reading this special issue will help medical researchers become aware of new machine-learning methods and techniques and see how they could be applied.
We also hope that the machine-learning community will see here a wide variety of problems where the models and algorithms they create could be applied, providing useful results. Alberto Guillen, Amaury Lendasse, Guilherme Barreto


Proceedings ArticleDOI
08 Sep 2013
TL;DR: An extension of the Minimal Learning Machine to classification tasks, thus providing a unified framework for multiresponse regression and classification problems and achieves results that are comparable to many de facto standard methods for classification with the advantage of offering a computationally lighter alternative to such approaches.
Abstract: The Minimal Learning Machine (MLM) has been recently proposed as a novel supervised learning method for regression problems, aiming at reconstructing the mapping between input and output distance matrices. Estimation of the response is then achieved from the geometrical configuration of the output points. Thanks to its comprehensive formulation, the MLM is inherently capable of dealing with nonlinear problems and multidimensional output spaces. In this paper, we introduce an extension of the MLM to classification tasks, thus providing a unified framework for multiresponse regression and classification problems. On the basis of our experiments, the MLM achieves results that are comparable to many de facto standard methods for classification, with the advantage of offering a computationally lighter alternative to such approaches.



Proceedings Article
01 Jan 2013
TL;DR: This paper proposes to assess feature correlations of spectral data by an overlay of prior dependencies due to the functional nature and its similarity as measured by mutual information, enabling a quick overall assessment of the relationships between features.
Abstract: The curse of dimensionality leads to problems in machine learning when dealing with high-dimensional data. This aspect is particularly pronounced when intrinsically infinite dimensionality is faced, as is the case for spectral or functional data. Feature selection constitutes one possibility to deal with this problem. Often, it relies on mutual information as an evaluation tool for feature importance; however, this can be overlaid by intrinsic biases such as the high correlation of neighbouring function values in functional data. In this paper, we propose to assess feature correlations of spectral data by an overlay of the prior dependencies due to the functional nature and the similarity as measured by mutual information, enabling a quick overall assessment of the relationships between features. By integrating the Nyström approximation technique, the usually time-consuming step of computing all pairwise mutual information values can be reduced to only linear complexity in the number of features.
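
The mutual-information evaluation at the core of this approach can be illustrated with a plain histogram estimator; neighbouring spectral channels should show much higher MI than unrelated ones, which is exactly the functional bias the paper discusses. The Nyström acceleration is not reproduced here, and all names and the toy "spectrum" are our own:

```python
import numpy as np

def mutual_information(x, y, bins=10):
    """Histogram estimate of mutual information between two features (in nats)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)    # marginal of x
    py = pxy.sum(axis=0, keepdims=True)    # marginal of y
    nz = pxy > 0                           # skip empty cells to avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# toy "spectrum": neighbouring channels are highly correlated, distant ones are not
rng = np.random.default_rng(3)
base = rng.standard_normal(1000)
f0 = base
f1 = base + 0.1 * rng.standard_normal(1000)   # a close neighbouring channel
f2 = rng.standard_normal(1000)                # an unrelated channel
mi_near = mutual_information(f0, f1)
mi_far = mutual_information(f0, f2)
```

Computing this for all feature pairs is quadratic in the number of features, which is the cost the Nyström approximation in the paper reduces to linear.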