ISSN: 0129-0657

International Journal of Neural Systems 

World Scientific
About: International Journal of Neural Systems is an academic journal published by World Scientific. The journal publishes mainly in the areas of artificial neural networks and computer science. Its ISSN is 0129-0657. Over its lifetime it has published 2050 papers, which have received 43654 citations. The journal is also known as Int. J. Neur. Syst. and Int J Neural Syst.


Papers
Journal Article
TL;DR: A single neuron with Hebbian-type learning for its connection weights and nonlinear internal feedback has been shown to extract the statistical principal components of its stationary input pattern sequence; a generalization to a layer of units, the Subspace Network, yields a multi-dimensional principal-component subspace.
Abstract: A single neuron with Hebbian-type learning for the connection weights, and with nonlinear internal feedback, has been shown to extract the statistical principal components of its stationary input pattern sequence. A generalization of this model to a layer of neuron units is given, called the Subspace Network, which yields a multi-dimensional, principal component subspace. This can be used as an associative memory for the input vectors or as a module in unsupervised learning of data clusters in the input space. It is also able to realize a powerful pattern classifier based on projections on class subspaces. Some classification results for natural textures are given.

858 citations
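
To make the idea concrete, here is a minimal NumPy sketch of a subspace (Oja-type) Hebbian rule of the kind the abstract describes; the data, dimensions, and learning rate are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: 5-D inputs whose variance is concentrated in 2
# directions, so a 2-D principal subspace exists to be found.
n, d, k = 5000, 5, 2
basis = rng.standard_normal((d, k))
X = rng.standard_normal((n, k)) @ basis.T + 0.05 * rng.standard_normal((n, d))

# Subspace learning rule: W <- W + eta * (x y^T - W y y^T), y = W^T x.
# The nonlinear feedback term -W y y^T keeps W bounded and drives its
# columns toward an orthonormal basis of the top-k principal subspace.
W = 0.1 * rng.standard_normal((d, k))
eta = 1e-3
for x in X:
    y = W.T @ x
    W += eta * (np.outer(x, y) - W @ np.outer(y, y))

# Compare the learned subspace with the top-k PCA subspace from SVD.
U = np.linalg.svd(X - X.mean(0), full_matrices=False)[2][:k].T
Q = np.linalg.qr(W)[0]
print("subspace overlap:", np.linalg.norm(U.T @ Q))  # ~sqrt(2) if aligned
```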

Journal Article
TL;DR: A new method for the identification of signal peptides and their cleavage sites based on neural networks trained on separate sets of prokaryotic and eukaryotic sequences that performs significantly better than previous prediction schemes, and can easily be applied to genome-wide data sets.
Abstract: We have developed a new method for the identification of signal peptides and their cleavage sites based on neural networks trained on separate sets of prokaryotic and eukaryotic sequences. The method performs significantly better than previous prediction schemes, and can easily be applied to genome-wide data sets. Discrimination between cleaved signal peptides and uncleaved N-terminal signal-anchor sequences is also possible, though with lower precision. Predictions can be made on a publicly available WWW server.

819 citations
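
For readers unfamiliar with this style of sequence predictor, the sketch below shows the generic sliding-window setup such methods use: a fixed window of residues is one-hot encoded and fed to a small feedforward network. The window width, encoding, synthetic data, and scikit-learn classifier are all illustrative assumptions, not the paper's actual architecture or training set.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

AMINO = "ACDEFGHIKLMNPQRSTVWY"
AA_IDX = {a: i for i, a in enumerate(AMINO)}

def encode(window: str) -> np.ndarray:
    """Sparse one-hot encoding of a fixed-length residue window."""
    v = np.zeros(len(window) * 20)
    for pos, aa in enumerate(window):
        v[pos * 20 + AA_IDX[aa]] = 1.0
    return v

# Synthetic stand-in data: "positive" windows are enriched in hydrophobic
# residues (a crude caricature of a signal peptide's h-region); real
# training data would be experimentally labeled sequences.
rng = np.random.default_rng(1)
def sample(hydrophobic: bool, width: int = 15) -> str:
    pool = "AILVFM" if hydrophobic else AMINO
    return "".join(rng.choice(list(pool), size=width))

X = np.array([encode(sample(i % 2 == 0)) for i in range(400)])
y = np.array([i % 2 == 0 for i in range(400)], dtype=int)

clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=0)
clf.fit(X[:300], y[:300])
print("held-out accuracy:", clf.score(X[300:], y[300:]))
```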

Journal Article
TL;DR: In this article, a fast fixed-point type algorithm capable of separating complex valued, linearly mixed source signals is presented, its computational efficiency is shown by simulations, and the local consistency of the estimator given by the algorithm is proved.
Abstract: Separation of complex valued signals is a frequently arising problem in signal processing. For example, separation of convolutively mixed source signals involves computations on complex valued signals. In this article, it is assumed that the original, complex valued source signals are mutually statistically independent, and the problem is solved by the independent component analysis (ICA) model. ICA is a statistical method for transforming an observed multidimensional random vector into components that are mutually as independent as possible. In this article, a fast fixed-point type algorithm that is capable of separating complex valued, linearly mixed source signals is presented and its computational efficiency is shown by simulations. Also, the local consistency of the estimator given by the algorithm is proved.

788 citations
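
A minimal NumPy sketch of a one-unit complex fixed-point iteration in the spirit of this algorithm is shown below; the sources, mixing matrix, contrast function, and convergence test are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two independent complex sources (QPSK-like symbols and a phase-rotated
# real Gaussian), mixed by a random complex matrix -- all illustrative.
n = 20000
s1 = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=n) / np.sqrt(2)
s2 = np.exp(2j * np.pi * rng.random(n)) * rng.standard_normal(n)
S = np.vstack([s1, s2])
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
X = A @ S

# Whitening (E{z z^H} = I), assumed by the fixed-point iteration.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh((X @ X.conj().T) / n)
V = (E / np.sqrt(d)) @ E.conj().T
Z = V @ X

# One-unit fixed-point update (Bingham & Hyvarinen style):
#   w <- E{ z (w^H z)* g(|w^H z|^2) } - E{ g + |w^H z|^2 g' } w,
# then normalize, with the smooth contrast G(y) = sqrt(eps + y), g = G'.
eps = 0.1
g = lambda y: 1.0 / (2.0 * np.sqrt(eps + y))
dg = lambda y: -1.0 / (4.0 * (eps + y) ** 1.5)

w = rng.standard_normal(2) + 1j * rng.standard_normal(2)
w /= np.linalg.norm(w)
for _ in range(200):
    u = w.conj() @ Z                    # w^H z, one value per sample
    y = np.abs(u) ** 2
    w_new = (Z * (u.conj() * g(y))).mean(axis=1) - (g(y) + y * dg(y)).mean() * w
    w_new /= np.linalg.norm(w_new)
    done = abs(abs(w_new.conj() @ w) - 1.0) < 1e-10  # converged up to phase
    w = w_new
    if done:
        break

# Up to scale and phase, w^H V A should select exactly one source.
print("global response magnitudes:", np.abs(w.conj() @ V @ A))
```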

Journal Article
TL;DR: Since the ultimate goal is predictive accuracy, it is found that sigmoid networks trained with the weight-elimination algorithm outperform traditional nonlinear statistical approaches.
Abstract: We investigate the effectiveness of connectionist architectures for predicting the future behavior of nonlinear dynamical systems. We focus on real-world time series of limited record length. Two examples are analyzed: the benchmark sunspot series and chaotic data from a computational ecosystem. The problem of overfitting, particularly serious for short records of noisy data, is addressed both by using the statistical method of validation and by adding a complexity term to the cost function ("back-propagation with weight-elimination"). The dimension of the dynamics underlying the time series, its Liapunov coefficient, and its nonlinearity can be determined via the network. We also show why sigmoid units are superior in performance to radial basis functions for high-dimensional input spaces. Furthermore, since the ultimate goal is accuracy in the prediction, we find that sigmoid networks trained with the weight-elimination algorithm outperform traditional nonlinear statistical approaches.

775 citations
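
The complexity term known as weight elimination penalizes each weight by a saturating ratio rather than a pure quadratic, so small weights are driven toward zero while large, useful weights pay only a bounded cost. Below is a small sketch of that penalty and its gradient; the form lam * sum((w^2/w0^2) / (1 + w^2/w0^2)) follows the weight-elimination literature, and w0, lam, and the example weights are illustrative assumptions (the surrounding training loop is omitted).

```python
import numpy as np

def weight_elimination(w, w0=1.0, lam=1e-3):
    """Weight-elimination complexity term and its gradient.

    Penalty: lam * sum_i (w_i^2 / w0^2) / (1 + w_i^2 / w0^2).
    Roughly quadratic for |w_i| << w0 but saturating near lam for
    |w_i| >> w0, so small weights are pruned while large ones survive.
    """
    r = (w / w0) ** 2
    penalty = lam * np.sum(r / (1.0 + r))
    grad = lam * (2.0 * w / w0**2) / (1.0 + r) ** 2
    return penalty, grad

# Toy check: under one gradient step, small weights shrink by a larger
# relative amount than large ones. Values are illustrative.
w = np.array([0.05, 0.5, 5.0])
p, grad = weight_elimination(w)
print("penalty:", p)
print("relative shrinkage per step:", (0.1 * grad) / w)
```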

Journal Article
TL;DR: This paper gives an introduction to Gaussian processes on a fairly elementary level, with special emphasis on characteristics relevant in machine learning, and draws precise connections to other "kernel machines" popular in the community.
Abstract: Gaussian processes (GPs) are natural generalisations of multivariate Gaussian random variables to infinite (countably or continuous) index sets. GPs have been applied in a large number of fields to a diverse range of ends, and very many deep theoretical analyses of various properties are available. This paper gives an introduction to Gaussian processes on a fairly elementary level with special emphasis on characteristics relevant in machine learning. It draws explicit connections to branches such as spline smoothing models and support vector machines in which similar ideas have been investigated. Gaussian process models are routinely used to solve hard machine learning problems. They are attractive because of their flexible non-parametric nature and computational simplicity. Treated within a Bayesian framework, very powerful statistical methods can be implemented which offer valid estimates of uncertainties in our predictions and generic model selection procedures cast as nonlinear optimization problems. Their main drawback of heavy computational scaling has recently been alleviated by the introduction of generic sparse approximations [13, 78, 31]. The mathematical literature on GPs is large and often uses deep concepts which are not required to fully understand most machine learning applications. In this tutorial paper, we aim to present characteristics of GPs relevant to machine learning and to draw precise connections to other "kernel machines" popular in the community. Our focus is on a simple presentation, but references to more detailed sources are provided.

752 citations
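
To ground the tutorial's summary, here is a compact NumPy sketch of GP regression with a squared-exponential kernel, computing the posterior mean and variance via a Cholesky solve; the kernel parameters, noise level, and data are illustrative assumptions.

```python
import numpy as np

def rbf(a, b, length=0.5, var=1.0):
    """Squared-exponential covariance k(a,b) = var * exp(-(a-b)^2 / 2l^2)."""
    sq = (a[:, None] - b[None, :]) ** 2
    return var * np.exp(-0.5 * sq / length**2)

# Noisy observations of a latent function (illustrative data).
rng = np.random.default_rng(3)
x = rng.uniform(-3, 3, size=20)
y = np.sin(x) + 0.1 * rng.standard_normal(20)
xs = np.linspace(-3, 3, 100)
noise = 0.1**2

# GP posterior: with K = k(x,x) + sigma^2 I and K* = k(x, xs),
#   mean = K*^T K^{-1} y,  var = k(xs,xs) - K*^T K^{-1} K*.
# The Cholesky factorization is the O(n^3) step behind the "heavy
# computational scaling" the abstract mentions.
K = rbf(x, x) + noise * np.eye(len(x))
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
Ks = rbf(x, xs)
mean = Ks.T @ alpha
v = np.linalg.solve(L, Ks)
var = np.diag(rbf(xs, xs)) - np.sum(v**2, axis=0)

print("posterior mean near x=0:", mean[50], "+/-", np.sqrt(var[50]))
```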

Performance Metrics

Number of papers published in the journal in previous years:

Year  Papers
2023      51
2022      99
2021      91
2020     190
2019     137
2018      68