Conference

# International Conference on Mathematics of Neural Networks: Models, Algorithms and Applications

About: The International Conference on Mathematics of Neural Networks: Models, Algorithms and Applications is an academic conference. It publishes mainly in the areas of artificial neural networks and types of artificial neural networks. Over its lifetime, the conference has published 69 papers, which have received 319 citations.

Topics: Artificial neural network, Types of artificial neural networks, Deep learning, Radial basis function network, Recurrent neural network


##### Papers


Aston University

TL;DR: This paper investigates the use of Gaussian process priors over functions, which permit the predictive Bayesian analysis to be carried out exactly using matrix operations.

Abstract: The Bayesian analysis of neural networks is difficult because the prior over functions has a complex form, leading to implementations that either make approximations or use Monte Carlo integration techniques. In this paper I investigate the use of Gaussian process priors over functions, which permit the predictive Bayesian analysis to be carried out exactly using matrix operations. The method has been tested on two challenging problems and has produced excellent results.
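The exact predictive computation the abstract describes comes down to a few matrix operations. A minimal sketch of Gaussian process regression in NumPy (the squared-exponential kernel, noise level, and toy data are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    # Squared-exponential covariance between two 1-D sets of points.
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(x_train, y_train, x_test, noise=1e-2):
    # Predictive mean and variance are exact matrix operations: no
    # approximation or Monte Carlo integration is needed.
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_test, x_train)
    K_ss = rbf_kernel(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)
    mean = K_s @ alpha                             # predictive mean
    cov = K_ss - K_s @ np.linalg.solve(K, K_s.T)   # predictive covariance
    return mean, np.diag(cov)

# Toy data: noisy-free samples of a sine wave (an assumption for illustration).
x = np.linspace(0.0, 1.0, 8)
y = np.sin(2 * np.pi * x)
mu, var = gp_predict(x, y, np.array([0.5]))
```

The `solve` calls replace explicit matrix inversion, which is the numerically preferable way to carry out the matrix operations the paper refers to.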

55 citations


01 Oct 1997

TL;DR: In this paper, the authors present a method for the study of stochastic neurodynamics in the master equation framework and obtain a statistical description of the dynamics of fluctuations and correlations of neural activity in large neural networks.

Abstract: We present here a method for the study of stochastic neurodynamics in the master equation framework. Our aim is to obtain a statistical description of the dynamics of fluctuations and correlations of neural activity in large neural networks. We focus on a macroscopic description of the network via a master equation for the number of active neurons in the network. We present a systematic expansion of this equation using the “system size expansion”. We obtain coupled dynamical equations for the average activity and of fluctuations around this average. These equations exhibit non-monotonic approaches to equilibrium, as seen in Monte Carlo simulations.
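The macroscopic quantity in this master-equation description, the number of active neurons, can be illustrated by direct stochastic simulation. A hedged sketch using Gillespie's algorithm for a generic birth-death process; the activation/deactivation rates and all parameters are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

def gillespie(N=100, n0=10, t_max=5.0, activate=1.0, deactivate=0.5):
    # Simulate the jump process for n = number of active neurons in a
    # network of size N.  Transitions change n by +1 or -1.
    n, t, traj = n0, 0.0, [n0]
    while t < t_max:
        up = activate * (N - n)    # rate of an inactive neuron switching on
        down = deactivate * n      # rate of an active neuron switching off
        total = up + down
        if total == 0:
            break
        t += rng.exponential(1.0 / total)        # waiting time to next event
        n += 1 if rng.random() < up / total else -1
        traj.append(n)
    return np.array(traj)

traj = gillespie()
```

Averaging many such trajectories would recover the mean activity and fluctuations that the paper instead derives analytically via the system size expansion.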

30 citations


01 Oct 1997

TL;DR: A theory for measuring generalisation is developed by combining Bayesian decision theory with information geometry, which unifies the majority of error measures currently in use.

Abstract: Neural networks are statistical models and learning rules are estimators. In this paper a theory for measuring generalisation is developed by combining Bayesian decision theory with information geometry. The performance of an estimator is measured by the information divergence between the true distribution and the estimate, averaged over the Bayesian posterior. This unifies the majority of error measures currently in use. The optimal estimators also reveal some intricate interrelationships among information geometry, Banach spaces and sufficient statistics.
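For discrete distributions, the information divergence used as the performance measure here is the Kullback-Leibler divergence. A minimal sketch (the example distributions are assumptions for illustration):

```python
import numpy as np

def kl_divergence(p, q):
    # D(p || q) = sum_i p_i * log(p_i / q_i), with the 0 * log 0 = 0 convention.
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

true_dist = [0.5, 0.3, 0.2]   # hypothetical "true" distribution
estimate  = [0.4, 0.4, 0.2]   # hypothetical estimate
d = kl_divergence(true_dist, estimate)
```

The divergence is zero exactly when the estimate matches the true distribution and positive otherwise, which is what makes it usable as an error measure.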

29 citations


01 Oct 1997

TL;DR: Information geometry gives an answer, providing the Riemannian metric and a dual pair of affine connections on the manifold of neural networks.

Abstract: The set of all the neural networks of a fixed architecture forms a geometrical manifold where the modifiable connection weights play the role of coordinates. It is important to study all such networks as a whole, rather than the behavior of each network individually, in order to understand the information-processing capability of neural networks. What is the natural geometry to be introduced in the manifold of neural networks? Information geometry gives an answer, providing the Riemannian metric and a dual pair of affine connections. An overview is given of the information geometry of neural networks.
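The Riemannian metric that information geometry places on a statistical manifold is the Fisher information matrix, with the weights as coordinates. A small illustrative sketch computing it for a toy softmax model (the model, input, and weights are assumptions for illustration, not from the paper):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fisher_information(w, x):
    # Model: p(y | x, w) = softmax(w * x) over classes y = 0..len(w)-1.
    # Fisher metric: G_ij = E_y[ d log p / dw_i * d log p / dw_j ].
    p = softmax(w * x)
    G = np.zeros((len(w), len(w)))
    for y in range(len(w)):
        grad = -x * p              # d log p(y) / dw = x * (e_y - p)
        grad[y] += x
        G += p[y] * np.outer(grad, grad)
    return G

w = np.array([0.2, -0.1, 0.3])   # hypothetical weight coordinates
G = fisher_information(w, x=1.0)
```

The resulting matrix is symmetric and positive semi-definite, as a Riemannian metric must be; its null direction reflects the softmax invariance to shifting all weights equally.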

21 citations


01 Oct 1997

TL;DR: A new method for supervised training is presented, based on a recently proposed root-finding procedure for the numerical solution of systems of non-linear algebraic and/or transcendental equations in ℝ^n; it reduces the dimensionality of the problem, leading to an iterative approximate formula for computing n−1 connection weights.

Abstract: In this contribution a new method for supervised training is presented. This method is based on a recently proposed root-finding procedure for the numerical solution of systems of non-linear algebraic and/or transcendental equations in ℝ^n. This new method reduces the dimensionality of the problem in such a way that it can lead to an iterative approximate formula for the computation of n−1 connection weights. The remaining connection weight is evaluated separately using the final approximations of the others. This reduced iterative formula generates a sequence of points in ℝ^(n−1) which converges quadratically to the proper n−1 connection weights. Moreover, it requires neither a good initial guess for one connection weight nor accurate error function evaluations. The new method is applied to some test cases in order to evaluate its performance. Subject classification: AMS(MOS) 65K10, 49D10, 68T05, 68G05.
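The underlying setting is root finding for a system of non-linear equations F(w) = 0 in the weights. The paper's specific dimension-reducing iterative formula is not reproduced here; the sketch below only shows the standard Newton iteration for such a system, on an assumed two-equation example:

```python
import numpy as np

def newton_system(F, J, w0, tol=1e-10, max_iter=50):
    # Standard Newton iteration for F(w) = 0: solve J(w) * step = F(w),
    # then update w <- w - step.  Converges quadratically near a root
    # where the Jacobian J is non-singular.
    w = np.asarray(w0, float)
    for _ in range(max_iter):
        step = np.linalg.solve(J(w), F(w))
        w = w - step
        if np.linalg.norm(step) < tol:
            break
    return w

# Assumed example system: w0^2 + w1^2 = 1 and w0 = w1,
# with root (1/sqrt(2), 1/sqrt(2)) for this starting point.
F = lambda w: np.array([w[0]**2 + w[1]**2 - 1.0, w[0] - w[1]])
J = lambda w: np.array([[2*w[0], 2*w[1]], [1.0, -1.0]])
root = newton_system(F, J, [1.0, 0.5])
```

Unlike this plain Newton scheme, the paper's method avoids needing a good initial guess for one of the weights by evaluating it separately from the other n−1.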

18 citations