Journal ArticleDOI

A comprehensive evaluation of random vector functional link networks

TLDR
Surprisingly, it is found that the direct link plays an important performance-enhancing role in RVFL, while the bias term in the output neuron has no significant effect, and the ridge-regression-based closed-form solution outperforms solutions based on the Moore-Penrose pseudoinverse.
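A minimal sketch of the RVFL design the TL;DR describes: random (fixed) hidden-layer weights, direct input-to-output links, and output weights solved in closed form with ridge regression. The function name and hyperparameters below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rvfl_fit_predict(X_train, y_train, X_test, n_hidden=100,
                     ridge_lambda=1e-3, seed=0):
    """Sketch of an RVFL network: random hidden features plus a direct
    input-output link, output weights via closed-form ridge regression."""
    rng = np.random.default_rng(seed)
    d = X_train.shape[1]
    # Input-to-hidden weights and biases are random and never trained
    W = rng.standard_normal((d, n_hidden))
    b = rng.standard_normal(n_hidden)

    def features(X):
        H = np.tanh(X @ W + b)    # random nonlinear hidden features
        return np.hstack([X, H])  # direct link: concatenate raw inputs

    D = features(X_train)
    # Ridge-regression closed-form solution for the output weights:
    # beta = (D^T D + lambda I)^{-1} D^T y
    A = D.T @ D + ridge_lambda * np.eye(D.shape[1])
    beta = np.linalg.solve(A, D.T @ y_train)
    return features(X_test) @ beta
```

Because of the direct link, a purely linear target can be recovered almost exactly even when the random hidden features contribute little.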
Citations
Journal ArticleDOI

A review on neural networks with random weights

TL;DR: This paper objectively reviews the advantages and disadvantages of the NNRW model, tries to reveal the essence of NNRW, and provides some useful guidelines for users choosing a mechanism to train a feed-forward neural network.
Journal ArticleDOI

A survey of randomized algorithms for training neural networks

TL;DR: A comprehensive survey of the earliest work and recent advances on network training is presented as well as some suggestions for future research.
Journal ArticleDOI

Randomness in neural networks: an overview

TL;DR: An overview of the different ways in which randomization can be applied to the design of neural networks and kernel functions is provided to clarify innovative lines of research, open problems, and foster the exchange of well-known results throughout different communities.
Journal ArticleDOI

Random vector functional link network for short-term electricity load demand forecasting

TL;DR: The RVFL network overall outperforms the non-ensemble methods, namely the persistence method, seasonal autoregressive integrated moving average (sARIMA), and the artificial neural network (ANN).
Journal ArticleDOI

Prediction of surface roughness in extrusion-based additive manufacturing with machine learning

TL;DR: Experimental results have shown that the proposed predictive modeling approach is capable of predicting the surface roughness of 3D printed components with high accuracy.
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieved state-of-the-art performance on ImageNet classification.
Journal ArticleDOI

Multilayer feedforward networks are universal approximators

TL;DR: It is rigorously established that standard multilayer feedforward networks with as few as one hidden layer using arbitrary squashing functions are capable of approximating any Borel measurable function from one finite dimensional space to another to any desired degree of accuracy, provided sufficiently many hidden units are available.
Journal Article

Statistical Comparisons of Classifiers over Multiple Data Sets

TL;DR: A set of simple, yet safe and robust non-parametric tests for statistical comparisons of classifiers is recommended: the Wilcoxon signed ranks test for comparison of two classifiers and the Friedman test with the corresponding post-hoc tests for comparisons of more classifiers over multiple data sets.
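The recommended procedure can be sketched with SciPy: the Wilcoxon signed-ranks test for two classifiers and the Friedman test for three or more, each compared over the same collection of data sets. The accuracy scores below are made-up illustrative numbers, not results from any of the papers listed here.

```python
from scipy.stats import wilcoxon, friedmanchisquare

# Hypothetical accuracies of three classifiers on the same eight data sets
acc_a = [0.81, 0.79, 0.90, 0.85, 0.77, 0.88, 0.92, 0.83]
acc_b = [0.78, 0.75, 0.86, 0.84, 0.72, 0.85, 0.90, 0.80]
acc_c = [0.70, 0.68, 0.80, 0.76, 0.65, 0.79, 0.84, 0.73]

# Wilcoxon signed-ranks test: pairwise comparison of two classifiers
stat_w, p_w = wilcoxon(acc_a, acc_b)

# Friedman test: omnibus comparison of all three classifiers;
# post-hoc tests would follow only if this rejects the null
stat_f, p_f = friedmanchisquare(acc_a, acc_b, acc_c)
```

Both tests are non-parametric: they rank the scores per data set rather than assuming the accuracy differences are normally distributed.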