Showing papers in "Neural Networks in 2018"


Journal ArticleDOI
TL;DR: The effect of class imbalance on classification performance is detrimental; oversampling emerged as the dominant method for addressing class imbalance in almost all analyzed scenarios; and thresholding should be applied to compensate for prior class probabilities when the overall number of properly classified cases is of interest.

1,777 citations
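
For the class-imbalance entry above, here is a minimal NumPy sketch of the thresholding step, assuming a classifier that outputs class probabilities: predictions from a model trained on an oversampled (roughly balanced) set are corrected by the ratio of the true class priors to the training priors before taking the argmax. The function name and the 50/50 vs 95/5 priors are illustrative, not from the paper.

    # Rescale predicted probabilities by the true class priors so that a model
    # trained on an oversampled (balanced) set is corrected at test time.
    import numpy as np

    def prior_corrected_predictions(scores, train_priors, true_priors):
        """scores: (n_samples, n_classes) predicted class probabilities.
        train_priors / true_priors: class frequencies in the (oversampled)
        training set and in the real data, respectively."""
        train_priors = np.asarray(train_priors, dtype=float)
        true_priors = np.asarray(true_priors, dtype=float)
        corrected = scores * (true_priors / train_priors)   # reweight by prior ratio
        corrected /= corrected.sum(axis=1, keepdims=True)   # renormalize rows
        return corrected.argmax(axis=1)

    # Example: a balanced training set (50/50) but a 95/5 real-world prior.
    scores = np.array([[0.6, 0.4], [0.45, 0.55]])
    print(prior_corrected_predictions(scores, [0.5, 0.5], [0.95, 0.05]))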


Journal ArticleDOI
TL;DR: This study proposes two activation functions for neural network function approximation in reinforcement learning: the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU), and suggests that the more traditional approach of on-policy learning with eligibility traces and softmax action selection, rather than experience replay, can be competitive with DQN without the need for a separate target network.

696 citations
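
The two units in the entry above have closed forms: SiLU(x) = x·σ(x) and dSiLU(x) = σ(x)(1 + x(1 − σ(x))), the latter being the derivative of the former. A small NumPy sketch:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def silu(x):
        # SiLU(x) = x * sigmoid(x)
        return x * sigmoid(x)

    def dsilu(x):
        # dSiLU(x) = d/dx SiLU(x) = sigmoid(x) * (1 + x * (1 - sigmoid(x)))
        s = sigmoid(x)
        return s * (1.0 + x * (1.0 - s))

    x = np.linspace(-6, 6, 5)
    print(silu(x), dsilu(x))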


Journal ArticleDOI
TL;DR: The results suggest that the combination of STDP with latency coding may be key to understanding how the primate visual system learns, as well as its remarkable processing speed and low energy consumption.

510 citations
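
As an illustration of the STDP side of the entry above, the following is a generic pair-based exponential STDP update, not the paper's exact learning rule: a presynaptic spike that precedes the postsynaptic spike strengthens the synapse, and one that follows it weakens the synapse, so earlier (lower-latency) inputs come to dominate.

    import numpy as np

    def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                    tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
        """w: current weight; t_pre, t_post: spike times in ms."""
        dt = t_post - t_pre
        if dt >= 0:   # pre before post -> potentiation (LTP)
            w += a_plus * np.exp(-dt / tau_plus)
        else:         # post before pre -> depression (LTD)
            w -= a_minus * np.exp(dt / tau_minus)
        return float(np.clip(w, w_min, w_max))

    print(stdp_update(0.5, t_pre=10.0, t_post=15.0))  # earlier input -> stronger synapse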


Journal ArticleDOI
TL;DR: In this paper, a convolutional neural network (CNN) was applied to different EEG datasets to develop a generalized, retrospective, patient-specific seizure prediction method that automatically generates optimized features for each patient to best classify preictal and interictal segments.

362 citations
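
A hypothetical PyTorch sketch for the seizure-prediction entry above (the layer sizes, channel count, and sampling rate are assumptions, not the paper's architecture): a small 1-D CNN maps a multi-channel EEG segment to a preictal-vs-interictal decision, so the convolutional layers play the role of the automatically generated features.

    import torch
    import torch.nn as nn

    class SeizurePredictorCNN(nn.Module):
        def __init__(self, n_channels=23, n_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(n_channels, 32, kernel_size=7, stride=2), nn.ReLU(),
                nn.Conv1d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),           # pool over time
            )
            self.classifier = nn.Linear(64, n_classes)

        def forward(self, x):                       # x: (batch, channels, samples)
            h = self.features(x).squeeze(-1)
            return self.classifier(h)

    model = SeizurePredictorCNN()
    segment = torch.randn(4, 23, 1280)              # e.g. 5 s of 23-channel EEG at 256 Hz
    print(model(segment).shape)                     # -> torch.Size([4, 2])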


Journal ArticleDOI
TL;DR: It is proved that one cannot approximate a general function f ∈ E^β(R^d) using neural networks that are less complex than those produced by the construction, which partly explains the benefits of depth for ReLU networks by showing that deep networks are necessary to achieve efficient approximation of (piecewise) smooth functions.

307 citations


Journal ArticleDOI
TL;DR: This survey aims to cover the state of the art in state representation learning (SRL) in recent years by reviewing different SRL methods that involve interaction with the environment, their implementations, and their applications in robotics control tasks (simulated or real).

274 citations


Journal ArticleDOI
TL;DR: This work proposes a novel method to predict the future motion of a pedestrian given a short history of their own and their neighbours' past behaviour, using a combined attention model that utilises both "soft attention" and "hard-wired" attention to map the trajectory information from the local neighbourhood to the future positions of the pedestrian of interest.

242 citations
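
A toy NumPy sketch of the attention combination in the entry above (illustrative only; the scoring and weighting functions are assumptions): "soft" attention weights are computed from learned relevance scores, while "hard-wired" weights come directly from the neighbours' distances to the pedestrian of interest.

    import numpy as np

    def softmax(z):
        z = z - z.max()
        e = np.exp(z)
        return e / e.sum()

    def combined_neighbour_context(neigh_feats, scores, distances):
        """neigh_feats: (n, d) neighbour trajectory encodings;
        scores: (n,) learned relevance scores; distances: (n,) metres."""
        soft_w = softmax(scores)                  # learned, data-dependent weights
        hard_w = 1.0 / (distances + 1e-6)
        hard_w /= hard_w.sum()                    # fixed weights: closer -> more weight
        return soft_w @ neigh_feats, hard_w @ neigh_feats

    feats = np.random.randn(3, 8)
    soft_ctx, hard_ctx = combined_neighbour_context(
        feats, np.array([0.2, 1.5, -0.3]), np.array([1.0, 4.0, 0.5]))
    print(soft_ctx.shape, hard_ctx.shape)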


Journal ArticleDOI
TL;DR: The proposed convolutional neural network achieves a recognition accuracy of 99.71% on the German Traffic Sign Recognition Benchmark, outperforming previous state-of-the-art methods while also being more efficient in terms of memory requirements.

182 citations


Journal ArticleDOI
TL;DR: A novel method for multi-class classification, inter-class sparsity based discriminative least squares regression (ICS_DLSR), is proposed; it enforces a common sparsity structure on the transformed samples of each class and achieves the best performance in comparison with other methods.

163 citations


Journal ArticleDOI
TL;DR: This paper addresses a fundamental open issue in deep learning, namely how to establish the number of layers in recurrent architectures in the form of deep echo state networks (DeepESNs), and provides a novel approach to the architectural design of deep recurrent neural networks based on signal frequency analysis.

162 citations
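
For the DeepESN entry above, a minimal NumPy sketch of a stacked echo state network, assuming standard leaky-integrator reservoirs with the state of each layer fed as input to the next; the spectral-radius scaling, leak rate, and layer sizes are illustrative, and only a linear readout on the collected states would be trained.

    import numpy as np

    rng = np.random.default_rng(0)

    def make_reservoir(n_in, n_units, spectral_radius=0.9):
        w_in = rng.uniform(-0.1, 0.1, (n_units, n_in))
        w = rng.standard_normal((n_units, n_units))
        w *= spectral_radius / max(abs(np.linalg.eigvals(w)))   # scale toward echo state property
        return w_in, w

    def deep_esn_states(u_seq, layers, leak=0.3):
        """u_seq: (T, n_in) input sequence; layers: list of (w_in, w) pairs."""
        states = [np.zeros(w.shape[0]) for _, w in layers]
        history = []
        for u in u_seq:
            inp = u
            for i, (w_in, w) in enumerate(layers):
                states[i] = (1 - leak) * states[i] + leak * np.tanh(w_in @ inp + w @ states[i])
                inp = states[i]                                  # feed state to the next layer
            history.append(np.concatenate(states))
        return np.array(history)

    layers = [make_reservoir(1, 50), make_reservoir(50, 50)]
    print(deep_esn_states(rng.standard_normal((20, 1)), layers).shape)  # (20, 100)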


Journal ArticleDOI
TL;DR: Compared with existing recurrent neural networks, the two proposed nonlinear recurrent networks have a better convergence property (i.e., a lower upper bound), so accurate solutions of general time-varying linear matrix equations (LMEs) can be obtained in less time.
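
As a rough illustration of the entry above (not the paper's two networks), the following gradient-type recurrent dynamics solve a time-varying linear matrix equation A(t)X(t) = B(t) by driving a nonlinearly activated residual to zero; the power-sum activation, gain, and step size are assumptions.

    import numpy as np

    def phi(e, p=3):
        # sign-preserving power-sum activation applied elementwise
        return e + e ** p

    def solve_time_varying_lme(A_fn, B_fn, T=2.0, dt=1e-4, gamma=50.0):
        X = np.zeros_like(B_fn(0.0))
        t = 0.0
        while t < T:
            A, B = A_fn(t), B_fn(t)
            residual = A @ X - B
            X = X - dt * gamma * A.T @ phi(residual)   # Euler step of the recurrent dynamics
            t += dt
        return X

    A_fn = lambda t: np.array([[2.0 + np.sin(t), 0.5], [0.5, 2.0 + np.cos(t)]])
    B_fn = lambda t: np.array([[1.0], [np.cos(t)]])
    X = solve_time_varying_lme(A_fn, B_fn)
    print(np.linalg.norm(A_fn(2.0) @ X - B_fn(2.0)))   # residual should be small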


Journal ArticleDOI
TL;DR: Deep matrix factorization (DMF) is compared with state-of-the-art linear and nonlinear matrix completion methods on toy matrix completion, image inpainting, and collaborative filtering tasks, and is shown to be applicable to large matrices.
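
A hypothetical PyTorch sketch in the spirit of the DMF entry above (the architecture and optimizer settings are assumptions, not the paper's model): the matrix is parameterized by a learnable latent factor passed through a small nonlinear network, and the reconstruction loss is taken only over observed entries.

    import torch
    import torch.nn as nn

    def deep_matrix_completion(M, mask, rank=5, steps=2000, lr=1e-2):
        """M: (m, n) matrix; mask: (m, n) float tensor, 1 where M is observed."""
        m, n = M.shape
        U = nn.Parameter(torch.randn(m, rank) * 0.1)            # latent row factors
        net = nn.Sequential(nn.Linear(rank, rank), nn.Tanh(),   # nonlinear "deep" part
                            nn.Linear(rank, n))
        opt = torch.optim.Adam([U, *net.parameters()], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            X = net(U)                                          # reconstructed matrix
            loss = ((X - M) ** 2 * mask).sum() / mask.sum()     # masked reconstruction error
            loss.backward()
            opt.step()
        return net(U).detach()

    M = torch.randn(30, 3) @ torch.randn(3, 20)                 # a low-rank ground truth
    mask = (torch.rand_like(M) < 0.5).float()                   # observe ~50% of entries
    X_hat = deep_matrix_completion(M, mask)
    print(((X_hat - M) ** 2 * (1 - mask)).mean())               # error on held-out entries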

Journal ArticleDOI
TL;DR: This work brings together a variety of inspiring ideas that define the field of Evolved Plastic Artificial Neural Networks, which may include many different neuron types and dynamics, network architectures, plasticity rules, and other factors.

Journal ArticleDOI
TL;DR: The results demonstrate that the proposed CTWS algorithm significantly improves system performance compared with directly applying the feature extraction approaches, and suggest that it holds promise as a general feature extraction approach for MI-based BCIs.

Journal ArticleDOI
TL;DR: This paper decomposes low-rank representation (LRR) into a latent clustered orthogonal representation via low-rank matrix factorization, encoding more flexible cluster structures than LRR over the primal data objects, and converts the LRR problem into simultaneously learning an orthogonal clustered representation and an optimized local graph structure for each view.

Journal ArticleDOI
TL;DR: This paper addresses the problem of finite-time synchronization of stochastic coupled neural networks subject to Markovian switching, mixed time delays, and actuator saturation by utilizing a simple linear transformation; the derived conditions help to optimize the estimate of the domain of attraction by enlarging the contractively invariant set.

Journal ArticleDOI
TL;DR: The proposed SNN, which is trained on precise spike timing information, outperforms an equivalent non-spiking artificial neural network (ANN) trained using backpropagation, especially at low bit precision, showing the potential for realizing efficient neuromorphic systems that use spike-based information encoding and learning for real-world applications.

Journal ArticleDOI
TL;DR: By constructing appropriate Lyapunov-Krasovskii functionals, some new sufficient conditions ensuring global exponential dissipativity of memristive inertial neural networks are derived, and the globally exponentially attractive sets and positive invariant sets are presented.

Journal ArticleDOI
TL;DR: Gated XNOR networks, as presented in this paper, subsume binary and ternary networks as special cases, and a heuristic algorithm is provided under this framework, with code available at https://github.com/AcrossV/Gated-XNOR.
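
For context on the entry above, a generic weight-ternarization sketch (the standard threshold-based scheme, not the paper's GXNOR training rule): full-precision weights are mapped into the discrete space {-1, 0, +1} that gated XNOR networks compute in.

    import numpy as np

    def ternarize(w, delta_ratio=0.7):
        """Map weights to {-1, 0, +1}; delta_ratio sets the dead-zone threshold."""
        delta = delta_ratio * np.mean(np.abs(w))   # threshold proportional to mean |w|
        t = np.zeros_like(w)
        t[w > delta] = 1.0
        t[w < -delta] = -1.0
        return t

    w = np.random.randn(4, 4)
    print(ternarize(w))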

Journal ArticleDOI
TL;DR: Two new fractional-order inequalities are established by using the theory of complex functions, the Laplace transform, and Mittag-Leffler functions; they generalize traditional inequalities involving the first-order derivative in the real domain.

Journal ArticleDOI
TL;DR: This paper studies the stability and synchronization problems of fractional-order quaternion-valued neural networks (FQVNNs) with linear threshold neurons and derives several sufficient criteria ensuring global Mittag-Leffler stability of the unique equilibrium point of the FQVNNs by applying the Lyapunov direct method.

Journal ArticleDOI
TL;DR: This paper primarily concentrates on the issue of bifurcation for high-order fractional bidirectional associative memory (BAM) neural networks involving leakage delay, and finds that the stability performance of the proposed high-order fractional neural networks is critically weakened by leakage delay.

Journal ArticleDOI
TL;DR: In this paper, an end-to-end Fully Convolutional Deep Neural Network (FCDNN) was proposed to perform the iris segmentation task for lower-quality iris images.

Journal ArticleDOI
TL;DR: A novel rank constraint is introduced into the model, encouraging the learned graph to have very clear clustering structure, and experimental results show that the proposed graph learning method can significantly improve clustering performance.

Journal ArticleDOI
TL;DR: Using the homeomorphic mapping theorem, the Lyapunov method, and inequality techniques, sufficient conditions guaranteeing the boundedness of the networks and the existence, uniqueness, and global robust stability of the equilibrium point are derived for the considered uncertain neural networks.

Journal ArticleDOI
TL;DR: This work introduces Biased Dropout and Crossmap Dropout, two novel extensions of dropout based on the behavior of hidden units in CNN models, which provide better generalization than regular dropout in convolutional layers.
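
A rough NumPy sketch of the biased-dropout idea for the entry above, assuming the scheme drops strongly activated units with a lower probability than weakly activated ones (the grouping rule, rates, and scaling here are assumptions; the paper's exact formulation may differ):

    import numpy as np

    def biased_dropout(activations, p_high=0.3, p_low=0.7, training=True):
        if not training:
            return activations
        median = np.median(activations)
        # assumed grouping: units above the median activation get the lower drop rate
        p_drop = np.where(activations >= median, p_high, p_low)
        keep = (np.random.rand(*activations.shape) >= p_drop).astype(float)
        return activations * keep / (1.0 - p_drop)   # inverted-dropout scaling

    h = np.abs(np.random.randn(8))
    print(biased_dropout(h))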

Journal ArticleDOI
TL;DR: It is constructively proved that SLFNs with the fixed weight 1 and two neurons in the hidden layer can approximate any continuous function on a compact subset of the real line, and it is shown that SLFNs with fixed weights cannot approximate all continuous multivariate functions.

Journal ArticleDOI
TL;DR: A new generative adversarial network (GAN) based model is proposed to calculate, for each large transfer, a probability that it is fraudulent, so that the bank can take appropriate measures to prevent potential fraudsters from taking the money if the probability exceeds a threshold.

Journal ArticleDOI
TL;DR: A method is proposed that adaptively increases or decreases the learning rate so that the training loss decreases as much as possible, providing a wider search range for solutions and thus a lower test error rate.
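
A simple sketch of the adaptive rule in the entry above (illustrative; the update factors and bounds are assumptions, not the paper's algorithm): after each epoch the learning rate is increased if the training loss decreased and reduced otherwise.

    def adapt_learning_rate(lr, prev_loss, curr_loss, up=1.1, down=0.5,
                            lr_min=1e-6, lr_max=1.0):
        if curr_loss < prev_loss:
            lr *= up      # loss improved: search more aggressively
        else:
            lr *= down    # loss got worse: back off
        return min(max(lr, lr_min), lr_max)

    lr = 0.01
    losses = [1.0, 0.8, 0.85, 0.6]
    for prev, curr in zip(losses, losses[1:]):
        lr = adapt_learning_rate(lr, prev, curr)
        print(round(lr, 5))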