Topic
Activation function
About: Activation function is a research topic. Over its lifetime, 3,971 publications have been published within this topic, receiving 92,011 citations.
Papers published on a yearly basis
Papers
TL;DR: This paper rigorously proves that standard single-hidden layer feedforward networks with at most N hidden neurons and with any bounded nonlinear activation function which has a limit at one infinity can learn N distinct samples with zero error.
Abstract: It is well known that standard single-hidden layer feedforward networks (SLFNs) with at most N hidden neurons (including biases) can learn N distinct samples (x_i, t_i) with zero error, and the weights connecting the input neurons and the hidden neurons can be chosen "almost" arbitrarily. However, these results have been obtained for the case when the activation function for the hidden neurons is the signum function. This paper rigorously proves that standard single-hidden layer feedforward networks (SLFNs) with at most N hidden neurons and with any bounded nonlinear activation function which has a limit at one infinity can learn N distinct samples (x_i, t_i) with zero error. The previous method of arbitrarily choosing weights is not feasible for any SLFN. The proof of our result is constructive and thus gives a method to directly find the weights of the standard SLFNs with any such bounded nonlinear activation function as opposed to iterative training algorithms in the literature.
515 citations
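A minimal numerical sketch of the zero-error claim (not the paper's constructive weight choice; the sigmoid is used here as an arbitrary bounded nonlinear activation): with N hidden neurons and randomly drawn input weights, the N x N hidden-layer output matrix is generically invertible, so the output weights can be solved for directly and all N samples are fitted exactly.

```python
# Sketch: N hidden neurons, bounded nonlinear activation, zero training error on N samples.
import numpy as np

rng = np.random.default_rng(0)
N, d = 20, 3                             # N distinct samples, d input features
X = rng.normal(size=(N, d))              # inputs x_i
T = rng.normal(size=(N, 1))              # targets t_i

W = rng.normal(size=(d, N))              # input-to-hidden weights, chosen randomly
b = rng.normal(size=(1, N))              # hidden biases
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # N x N hidden output matrix (sigmoid activation)

beta = np.linalg.solve(H, T)             # output weights fitting all N samples
print(np.max(np.abs(H @ beta - T)))      # tiny residual: zero error up to round-off
```

Note that the nonlinearity matters: with a linear activation the hidden matrix has rank at most d + 1, so it cannot be inverted for N > d + 1.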
01 Jan 2014
TL;DR: It is found that it is always best to train using the dropout algorithm, which is consistently best at adapting to the new task, remembering the old task, and has the best tradeoff curve between these two extremes.
Abstract: Catastrophic forgetting is a problem faced by many machine learning models and algorithms. When trained on one task, then trained on a second task, many machine learning models "forget" how to perform the first task. This is widely believed to be a serious problem for neural networks. Here, we investigate the extent to which the catastrophic forgetting problem occurs for modern neural networks, comparing both established and recent gradient-based training algorithms and activation functions. We also examine the effect of the relationship between the first task and the second task on catastrophic forgetting. We find that it is always best to train using the dropout algorithm: it is consistently best at adapting to the new task, remembering the old task, and has the best tradeoff curve between these two extremes. We find that different tasks and relationships between tasks result in very different rankings of activation function performance. This suggests the choice of activation function should always be cross-validated.
507 citations
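The sequential-training protocol the abstract describes (train on one task, then a second, and check how much of the first is forgotten) can be sketched in a few lines of PyTorch. The data, architecture, and hyperparameters below are hypothetical stand-ins rather than the paper's setup; the point is only to show where dropout enters the comparison.

```python
# Sketch: measure forgetting of task A after training on task B, with or without dropout.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(n=2000, d=50):
    X = torch.randn(n, d)
    w = torch.randn(d)
    y = (X @ w > 0).long()               # one random linearly separable rule per task
    return X, y

def train(model, X, y, steps=200):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()

def accuracy(model, X, y):
    model.eval()
    with torch.no_grad():
        return (model(X).argmax(1) == y).float().mean().item()

p_drop = 0.5                              # set to 0.0 to compare against no dropout
model = nn.Sequential(
    nn.Linear(50, 256), nn.ReLU(), nn.Dropout(p_drop),
    nn.Linear(256, 2),
)

(Xa, ya), (Xb, yb) = make_task(), make_task()
train(model, Xa, ya)
acc_before = accuracy(model, Xa, ya)      # task A right after learning it
train(model, Xb, yb)
acc_after = accuracy(model, Xa, ya)       # task A after also learning task B
print(f"task A accuracy: {acc_before:.2f} -> {acc_after:.2f}")
```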
02 Oct 2019
TL;DR: A novel neural activation function called Mish is proposed; its similarity to Swish, its boost in performance, and its simplicity of implementation make it easy for researchers and developers to use Mish in their neural network models.
Abstract: The concept of non-linearity in a neural network is introduced by an activation function, which serves an integral role in the training and performance evaluation of the network. Over the years of theoretical research, many activation functions have been proposed; however, only a few are widely used across most applications, including ReLU (Rectified Linear Unit), TanH (Tan Hyperbolic), Sigmoid, Leaky ReLU and Swish. In this work, a novel neural activation function called Mish is proposed. The experiments show that Mish tends to work better than both ReLU and Swish, along with other standard activation functions, in many deep networks across challenging datasets. For instance, in Squeeze Excite Net-18 for CIFAR-100 classification, the network with Mish had an increase in Top-1 test accuracy of 0.494% and 1.671% as compared to the same network with Swish and ReLU respectively. The similarity to Swish, the boost in performance, and the simplicity of implementation make it easier for researchers and developers to use Mish in their neural network models.
485 citations
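The abstract does not spell out the formula, but the paper defines Mish as Mish(x) = x * tanh(softplus(x)) = x * tanh(ln(1 + e^x)). A small PyTorch sketch of that definition, used as a drop-in replacement for ReLU (recent PyTorch versions also ship an equivalent nn.Mish module):

```python
# Sketch: the Mish activation, Mish(x) = x * tanh(softplus(x)).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Mish(nn.Module):
    def forward(self, x):
        # softplus(x) = ln(1 + e^x); the tanh keeps the output smooth and non-monotonic
        return x * torch.tanh(F.softplus(x))

# Example: swapping ReLU for Mish in a small classifier head.
model = nn.Sequential(nn.Linear(128, 64), Mish(), nn.Linear(64, 10))
print(model(torch.randn(4, 128)).shape)   # torch.Size([4, 10])
```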
03 Oct 2017
TL;DR: An approach is presented for the verification of feed-forward neural networks in which all nodes have a piece-wise linear activation function; it infers additional node phases for the non-linear nodes in the network from partial node phase assignments, similar to unit propagation in classical SAT solving.
Abstract: We present an approach for the verification of feed-forward neural networks in which all nodes have a piece-wise linear activation function. Such networks are often used in deep learning and have been shown to be hard to verify for modern satisfiability modulo theory (SMT) and integer linear programming (ILP) solvers.
474 citations
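The phase of a piece-wise linear node (for a ReLU, whether it is active or inactive) is what turns the node into a purely linear constraint once fixed. The sketch below is not the paper's SMT-based procedure; it only illustrates phase inference in the simplest possible way, using interval bound propagation over a hypothetical input box to classify each ReLU in one layer as active, inactive, or unresolved.

```python
# Sketch: classify ReLU phases over an input box via interval bound propagation.
import numpy as np

def relu_phases(W, b, lo, hi):
    """Bound each pre-activation W x + b over the box [lo, hi] and report its phase."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    pre_lo = W_pos @ lo + W_neg @ hi + b          # lower bound of W x + b over the box
    pre_hi = W_pos @ hi + W_neg @ lo + b          # upper bound of W x + b over the box
    phases = np.where(pre_lo >= 0, "active",
              np.where(pre_hi <= 0, "inactive", "unresolved"))
    return phases, pre_lo, pre_hi

rng = np.random.default_rng(1)
W, b = rng.normal(size=(4, 3)), rng.normal(size=4)
lo, hi = -0.1 * np.ones(3), 0.1 * np.ones(3)      # small input region to verify
print(relu_phases(W, b, lo, hi)[0])
```

Nodes whose phase is resolved contribute only a linear equation to the verification query; only the unresolved ones require case splitting by the solver.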
TL;DR: Simulation results substantiate the theoretical analysis and demonstrate the efficacy of the neural model on time-varying matrix inversion, especially when using a power-sigmoid activation function.
Abstract: Following the idea of using first-order time derivatives, this paper presents a general recurrent neural network (RNN) model for online inversion of time-varying matrices. Different kinds of activation functions are investigated to guarantee the global exponential convergence of the neural model to the exact inverse of a given time-varying matrix. The robustness of the proposed neural model is also studied with respect to different activation functions and various implementation errors. Simulation results, including the application to kinematic control of redundant manipulators, substantiate the theoretical analysis and demonstrate the efficacy of the neural model on time-varying matrix inversion, especially when using a power-sigmoid activation function.
466 citations
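A rough sketch of dynamics in this style, assuming the standard Zhang-type design A(t) X'(t) = -A'(t) X(t) - gamma * Phi(A(t) X(t) - I) and integrating it with forward Euler; the paper's power-sigmoid activation (sigmoid-shaped near zero, power law for large errors) is replaced here by a plain odd power as an illustrative stand-in, and the linear solve is only a numerical device for integrating the implicit dynamics.

```python
# Sketch: recurrent dynamics tracking the inverse of a time-varying matrix.
import numpy as np

def A(t):       # a simple, always well-conditioned time-varying matrix
    return np.array([[2 + np.sin(t), np.cos(t)],
                     [-np.cos(t),    2 + np.sin(t)]])

def A_dot(t):   # its analytic time derivative
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

def phi(E, kind="linear"):
    # Elementwise odd, monotonically increasing activation; "power" (cubic) is a
    # stand-in for the nonlinear regime of the paper's power-sigmoid function.
    return E**3 if kind == "power" else E

gamma, dt, T = 10.0, 1e-3, 5.0
X = np.eye(2)                                   # deliberately wrong initial state
for k in range(int(T / dt)):
    t = k * dt
    E = A(t) @ X - np.eye(2)                    # inversion error A(t) X(t) - I
    X_dot = np.linalg.solve(A(t), -A_dot(t) @ X - gamma * phi(E))
    X = X + dt * X_dot                          # forward Euler step

print(np.max(np.abs(A(T) @ X - np.eye(2))))     # small residual: X tracks A(t)^{-1}
```

Swapping in the cubic (or a power-sigmoid) activation changes the transient behavior, which is exactly the comparison the paper's simulations are about.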