
Topic

Sigmoid function

About: Sigmoid function is a research topic. Over its lifetime, 2,228 publications have been published within this topic, receiving 59,557 citations. The topic is also known as: S curve.
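
For readers coming to the topic cold, the sigmoid usually meant in this literature is the logistic function, a smooth, monotone S-shaped curve mapping the real line onto (0, 1). A minimal sketch in Python (the function name is just for illustration):

import math

def sigmoid(x: float) -> float:
    # Standard logistic sigmoid: 1 / (1 + e^(-x)).
    # Smooth, strictly increasing, S-shaped, with values in (0, 1);
    # not guarded against overflow for very large negative x.
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(0.0))   # 0.5, the midpoint of the S curve
print(sigmoid(4.0))   # ~0.982, approaching the upper asymptote
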
Papers

Journal Article
George Cybenko
TL;DR: It is demonstrated that finite linear combinations of compositions of a fixed, univariate function and a set of affine functionals can uniformly approximate any continuous function of n real variables with support in the unit hypercube.
Abstract: In this paper we demonstrate that finite linear combinations of compositions of a fixed, univariate function and a set of affine functionals can uniformly approximate any continuous function of n real variables with support in the unit hypercube; only mild conditions are imposed on the univariate function. Our results settle an open question about representability in the class of single hidden layer neural networks. In particular, we show that arbitrary decision regions can be arbitrarily well approximated by continuous feedforward neural networks with only a single internal, hidden layer and any continuous sigmoidal nonlinearity. The paper discusses approximation properties of other possible types of nonlinearities that might be implemented by artificial neural networks.

10,615 citations
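
To make the object of the theorem concrete: the approximants the abstract describes are finite sums of the form G(x) = sum_j alpha_j * sigma(w_j . x + b_j), i.e. a single-hidden-layer network with a fixed sigmoidal nonlinearity. A minimal Python sketch of evaluating such a sum, with arbitrary illustrative parameters rather than fitted ones:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def one_hidden_layer_net(x, W, b, alpha):
    # G(x) = sum_j alpha_j * sigma(w_j . x + b_j): a finite linear combination
    # of a fixed sigmoid composed with affine functionals of the input.
    return alpha @ sigmoid(W @ x + b)

# Toy setup: 5 hidden units, 3-dimensional input, arbitrary parameters.
rng = np.random.default_rng(0)
W = rng.normal(size=(5, 3))       # affine weights w_j
b = rng.normal(size=5)            # affine offsets b_j
alpha = rng.normal(size=5)        # linear-combination coefficients
x = np.array([0.2, 0.5, 0.8])     # a point in the unit hypercube
print(one_hidden_layer_net(x, W, b, alpha))
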


Journal Article
01 May 1989 - Neural Networks
TL;DR: It is proved that any continuous mapping can be approximately realized by Rumelhart-Hinton-Williams' multilayer neural networks with at least one hidden layer whose output functions are sigmoid functions.
Abstract: In this paper, we prove that any continuous mapping can be approximately realized by Rumelhart-Hinton-Williams' multilayer neural networks with at least one hidden layer whose output functions are sigmoid functions. The starting point of the proof for the one-hidden-layer case is an integral formula recently proposed by Irie-Miyake, and from this the general case (for any number of hidden layers) can be proved by induction. The two-hidden-layer case is also proved using the Kolmogorov-Arnold-Sprecher theorem, and this proof also gives non-trivial realizations.

3,794 citations


Journal Article
Jooyoung Park, Irwin W. Sandberg
01 Jun 1991 - Neural Computation
TL;DR: It is proved that RBF networks having one hidden layer are capable of universal approximation, and a certain class of RBF networks with the same smoothing factor in each kernel node is broad enough for universal approximation.
Abstract: There have been several recent studies concerning feedforward networks and the problem of approximating arbitrary functionals of a finite number of real variables. Some of these studies deal with cases in which the hidden-layer nonlinearity is not a sigmoid. This was motivated by successful applications of feedforward networks with nonsigmoidal hidden-layer units. This paper reports on a related study of radial-basis-function (RBF) networks, and it is proved that RBF networks having one hidden layer are capable of universal approximation. Here the emphasis is on the case of typical RBF networks, and the results show that a certain class of RBF networks with the same smoothing factor in each kernel node is broad enough for universal approximation.

3,344 citations
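
For contrast with the sigmoidal networks above, the RBF networks in this paper replace sigmoid-of-affine units with radial kernel units centred at points c_j, and the highlighted class uses one shared smoothing factor for every kernel node. A minimal sketch assuming a Gaussian kernel (the kernel choice and all parameter values here are illustrative assumptions):

import numpy as np

def rbf_net(x, centers, alpha, sigma):
    # G(x) = sum_j alpha_j * K((x - c_j) / sigma) with a Gaussian kernel K
    # and the same smoothing factor sigma in every kernel node.
    dists = np.linalg.norm(x - centers, axis=1)
    return alpha @ np.exp(-(dists / sigma) ** 2)

rng = np.random.default_rng(0)
centers = rng.uniform(size=(6, 2))   # kernel centres c_j in the unit square
alpha = rng.normal(size=6)           # linear-combination coefficients
print(rbf_net(np.array([0.3, 0.7]), centers, alpha, sigma=0.5))
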


Journal Article
Andrew R. Barron
TL;DR: The approximation rate and the parsimony of the parameterization of the networks are shown to be advantageous in high-dimensional settings, and the integrated squared approximation error cannot be made smaller than order 1/n^(2/d) uniformly for functions satisfying the same smoothness assumption.
Abstract: Approximation properties of a class of artificial neural networks are established. It is shown that feedforward networks with one layer of sigmoidal nonlinearities achieve integrated squared error of order O(1/n), where n is the number of nodes. The approximated function is assumed to have a bound on the first moment of the magnitude distribution of the Fourier transform. The nonlinear parameters associated with the sigmoidal nodes, as well as the parameters of linear combination, are adjusted in the approximation. In contrast, it is shown that for series expansions with n terms, in which only the parameters of linear combination are adjusted, the integrated squared approximation error cannot be made smaller than order 1/n^(2/d) uniformly for functions satisfying the same smoothness assumption, where d is the dimension of the input to the function. For the class of functions examined, the approximation rate and the parsimony of the parameterization of the networks are shown to be advantageous in high-dimensional settings.

2,519 citations
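
The practical weight of this result is the gap between the two rates: the network error falls like 1/n independently of the input dimension d, while the fixed-basis lower bound of order 1/n^(2/d) decays very slowly once d is large. A small numerical illustration of the two expressions (just the rates evaluated, not a simulation of the paper's bounds):

# O(1/n) for networks with tunable sigmoidal nodes versus
# order 1/n^(2/d) for fixed expansions with n terms.
for d in (2, 10, 100):
    for n in (10, 1000):
        network_rate = 1.0 / n
        fixed_basis_rate = 1.0 / n ** (2.0 / d)
        print(f"d={d:3d}  n={n:5d}  network ~ {network_rate:.4g}  "
              f"fixed basis ~ {fixed_basis_rate:.4g}")
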


Journal Article
Abstract: We generalize a result of Gao and Xu [4] concerning the approximation of functions of bounded variation by linear combinations of a fixed sigmoidal function to the class of functions of bounded φ-variation (Theorem 2.7). Also, in the case of one variable, [1: Proposition 1] is improved. Our proofs are similar to that of [4].

1,282 citations
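
The kind of approximation being generalized can be pictured concretely: a function of bounded variation, such as a step function, is well approximated by linear combinations of steep copies of one fixed sigmoid. A minimal sketch of that intuition (the construction and constants are illustrative, not taken from the paper):

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def approx_step(x, k=50.0):
    # sigma(k * (x - 0.5)) approximates the unit step at x = 0.5 as k grows;
    # sums of such shifted, scaled terms approximate piecewise-constant
    # (bounded-variation) targets.
    return sigmoid(k * (x - 0.5))

for x in (0.0, 0.45, 0.5, 0.55, 1.0):
    print(x, round(approx_step(x), 4))
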


Network Information
Related Topics (5)
Artificial neural network: 207K papers, 4.5M citations (69% related)
Deep learning: 79.8K papers, 2.1M citations (68% related)
Linear regression: 21.3K papers, 1.2M citations (68% related)
Convolutional neural network: 74.7K papers, 2M citations (67% related)
Sampling (statistics): 65.3K papers, 1.2M citations (67% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2022    4
2021    120
2020    158
2019    166
2018    134
2017    88

Top Attributes

Topic's top 5 most impactful authors:
Nikolay Kyurkchiev: 14 papers, 304 citations
Pravin Chandra: 10 papers, 128 citations
George A. Anastassiou: 7 papers, 148 citations
Yao-qun Xu: 7 papers, 24 citations
Olga Krestinskaya: 6 papers, 8 citations