Author

D. Roweth

Bio: D. Roweth is an academic researcher from the University of Edinburgh. The author has contributed to research in the topics of Quark and Quantum chromodynamics. The author has an h-index of 9 and has co-authored 11 publications receiving 3,267 citations.

Papers
Journal ArticleDOI
TL;DR: In this article, a hybrid (molecular dynamics/Langevin) algorithm is used to guide a Monte Carlo simulation of lattice field theory. It is especially efficient for theories, such as quantum chromodynamics, that contain fermionic degrees of freedom.

3,377 citations
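
A minimal sketch of the hybrid Monte Carlo idea described in the TL;DR above: molecular-dynamics (leapfrog) trajectories propose moves, and a Metropolis accept/reject step corrects the discretization error exactly. The toy Gaussian target, step size, and trajectory length are illustrative choices, not the paper's lattice setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def U(q):          # potential energy: -log of an (unnormalized) target density
    return 0.5 * q**2

def grad_U(q):     # its gradient
    return q

def hmc_step(q, eps=0.1, n_leapfrog=20):
    p = rng.normal()                      # refresh the momentum
    q_new, p_new = q, p
    # leapfrog integration of Hamilton's equations
    p_new -= 0.5 * eps * grad_U(q_new)
    for _ in range(n_leapfrog - 1):
        q_new += eps * p_new
        p_new -= eps * grad_U(q_new)
    q_new += eps * p_new
    p_new -= 0.5 * eps * grad_U(q_new)
    # Metropolis test on the change in total energy H = U + p^2/2
    dH = (U(q_new) + 0.5 * p_new**2) - (U(q) + 0.5 * p**2)
    return q_new if rng.random() < np.exp(-dH) else q

q, samples = 0.0, []
for _ in range(5000):
    q = hmc_step(q)
    samples.append(q)
print(np.mean(samples), np.var(samples))  # ~0 and ~1 for a standard Gaussian
```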

Journal ArticleDOI
B. M. Forrest, D. Roweth, N. Stroud, D. J. Wallace, Greg Wilson
TL;DR: This work reviews the implementation of a range of neural network models on SIMD and MIMD computers, and describes the strategies which have been used to implement the Durbin and Willshaw elastic net model on the Computing Surface.
Abstract: The remarkable processing capabilities of the nervous system must derive from the large numbers of neurons participating (roughly 10^10), since the time-scales involved are of the order of a millisecond, rather than the nanoseconds of modern computers. The neural network models which attempt to capture this behaviour are inherently parallel. We review the implementation of a range of neural network models on SIMD and MIMD computers. On the ICL Distributed Array Processor (DAP), a 4096-processor SIMD machine, we have studied training algorithms in the context of the Hopfield net, with specific applications including the storage of words and continuous text in content-addressable memory. The Hopfield and Tank analogue neural net has been used for image restoration with the Geman and Geman algorithm. We compare the performance of this scheme on the DAP and on a Meiko Computing Surface, a reconfigurable MIMD array of transputers. We also describe the strategies which we have used to implement the Durbin and Willshaw elastic net model on the Computing Surface.

61 citations
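
A minimal sketch of the Hopfield-style content-addressable memory reviewed above: Hebbian storage of ±1 patterns and asynchronous recall from a corrupted cue. The pattern count, dimension, and corruption level are illustrative; the parallel DAP and transputer implementations discussed in the paper are of course not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100
patterns = rng.choice([-1, 1], size=(3, N))     # memories to store

# Hebbian weight matrix, zero self-connections
W = (patterns.T @ patterns).astype(float) / N
np.fill_diagonal(W, 0.0)

# corrupt a stored pattern, then recall it by asynchronous updates
state = patterns[0].copy()
flip = rng.choice(N, size=15, replace=False)
state[flip] *= -1

for _ in range(5 * N):
    i = rng.integers(N)                          # pick a random neuron
    state[i] = 1 if W[i] @ state >= 0 else -1    # threshold its net input

print("overlap with stored pattern:", (state @ patterns[0]) / N)  # ~1.0 if recalled
```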

Journal ArticleDOI
TL;DR: The Edinburgh Concurrent Supercomputer Project is built around a Meiko Computing Surface, which presently comprises some 400 floating-point transputers and 1.6 Gbytes of memory.

30 citations

Journal ArticleDOI
K.C. Bowler, C.B. Chalmers, Richard Kenway, G.S. Pawley, D. Roweth
TL;DR: In this paper, the Susskind formulation of lattice fermions is used to construct the propagators for hadrons created by local lattice operators, and the inversion of the fermion matrix is accomplished using either the even/odd partitioned conjugate gradient algorithm, or block successive over-relaxation, depending on lattice size.

27 citations
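
A minimal sketch of conjugate gradient inversion as used for propagator calculations like the one above, here on a toy symmetric positive-definite lattice operator. The even/odd partitioning of the Susskind fermion matrix (and the block SOR alternative) are omitted; the toy matrix, point source, and tolerance are illustrative stand-ins.

```python
import numpy as np

def conjugate_gradient(apply_M, b, tol=1e-10, max_iter=1000):
    """Solve M x = b for symmetric positive-definite M given as a mat-vec."""
    x = np.zeros_like(b)
    r = b - apply_M(x)          # residual
    p = r.copy()                # search direction
    rr = r @ r
    for _ in range(max_iter):
        Mp = apply_M(p)
        alpha = rr / (p @ Mp)
        x += alpha * p
        r -= alpha * Mp
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

# toy SPD operator: a 1D lattice Laplacian plus a mass term
n = 64
A = 2.1 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.zeros(n); b[0] = 1.0     # point source, as for a local lattice operator
x = conjugate_gradient(lambda v: A @ v, b)
print(np.linalg.norm(A @ x - b))  # ~0
```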

Journal ArticleDOI
TL;DR: In this paper, the authors show that the success of acceleration for abelian gauge field dynamics need not depend on any choice of gauge, and they propose a particular scheme for acceleration in non-abelian theories which is also gauge independent.

21 citations
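
A hedged sketch of the general idea of Fourier acceleration, illustrated for Langevin evolution of a free one-dimensional lattice scalar field: a momentum-dependent step size lets all modes relax at comparable rates. The gauge-independent non-abelian scheme proposed in the paper is considerably more involved; the mass and step parameters here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
L, m2, dt = 64, 0.1, 0.1
phi = np.zeros(L)

k = 2 * np.pi * np.fft.fftfreq(L)
khat2 = 4 * np.sin(k / 2) ** 2        # lattice momentum squared
eps_k = dt / (khat2 + m2)             # mode-dependent step: the "acceleration";
                                      # eps_k * (khat2 + m2) = dt for every mode

for _ in range(2000):
    # Langevin update in momentum space for the free action
    # S = (1/2) sum_k |phi_k|^2 (khat2 + m2)
    phi_k = np.fft.fft(phi)
    force_k = -(khat2 + m2) * phi_k
    noise_k = np.fft.fft(rng.normal(size=L))   # Hermitian-symmetric white noise
    phi_k += eps_k * force_k + np.sqrt(2 * eps_k) * noise_k
    phi = np.fft.ifft(phi_k).real

print(np.var(phi))   # samples the Gaussian measure; all modes equilibrate together
```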


Cited by
Proceedings Article
01 Jan 2014
TL;DR: A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
Abstract: How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contribution is two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.

20,769 citations
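
A minimal sketch of the reparameterization trick at the heart of the estimator described above: writing z = mu + sigma * eps with eps ~ N(0, I) makes a Monte Carlo estimate of the variational lower bound differentiable in the variational parameters. The toy Bernoulli decoder and all shapes are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(3)

def elbo_estimate(x, mu, log_var, decode):
    # sample z ~ q(z|x) via reparameterization
    eps = rng.normal(size=mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps
    # reconstruction term: log p(x|z) for a Bernoulli decoder
    p = decode(z)
    log_px_z = np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))
    # KL(q(z|x) || N(0, I)) in closed form for diagonal Gaussians
    kl = -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var))
    return log_px_z - kl

# toy decoder just to make the sketch runnable
W = rng.normal(size=(2, 5)) * 0.1
decode = lambda z: 1 / (1 + np.exp(-(z @ W)))      # sigmoid outputs in (0, 1)
x = rng.integers(0, 2, size=5).astype(float)
print(elbo_estimate(x, mu=np.zeros(2), log_var=np.zeros(2), decode=decode))
```

In a full variational autoencoder, mu and log_var would themselves be outputs of a recognition network, and gradients of this estimate would flow through z into both networks.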

Book ChapterDOI
TL;DR: The chapter discusses two important directions of research to improve learning algorithms: the dynamic node generation, which is used by the cascade correlation algorithm; and designing learning algorithms where the choice of parameters is not an issue.
Abstract: This chapter provides an account of different neural network architectures for pattern recognition. A neural network consists of several simple processing elements called neurons. Each neuron is connected to some other neurons and possibly to the input nodes. Neural networks provide a simple computing paradigm to perform complex recognition tasks in real time. The chapter categorizes neural networks into three types: single-layer networks, multilayer feedforward networks, and feedback networks. It discusses the gradient descent and the relaxation method as the two underlying mathematical themes for deriving learning algorithms. A lot of research activity is centered on learning algorithms because of their fundamental importance in neural networks. The chapter discusses two important directions of research to improve learning algorithms: the dynamic node generation, which is used by the cascade correlation algorithm; and designing learning algorithms where the choice of parameters is not an issue. It closes with the discussion of performance and implementation issues.

13,033 citations
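
A minimal sketch of the gradient-descent theme the chapter builds on: a single sigmoid unit trained by gradient descent on the cross-entropy loss for a toy linearly separable task. The data, learning rate, and epoch count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)      # toy target labels

w, b, lr = np.zeros(2), 0.0, 0.5
for _ in range(200):
    p = 1 / (1 + np.exp(-(X @ w + b)))         # sigmoid unit output
    grad_w = X.T @ (p - y) / len(y)            # gradient of cross-entropy loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w                           # descend the gradient
    b -= lr * grad_b

p = 1 / (1 + np.exp(-(X @ w + b)))
print("training accuracy:", np.mean((p > 0.5) == (y > 0.5)))
```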

Journal ArticleDOI
TL;DR: In this paper, the authors present a new molecular dynamics algorithm for sampling the canonical distribution, where the velocities of all the particles are rescaled by a properly chosen random factor.
Abstract: The authors present a new molecular dynamics algorithm for sampling the canonical distribution. In this approach the velocities of all the particles are rescaled by a properly chosen random factor. The algorithm is formally justified, and it is shown that, in spite of its stochastic nature, a quantity can still be defined that remains constant during the evolution. In numerical applications this quantity can be used to measure the accuracy of the sampling. The authors illustrate the properties of this new method on Lennard-Jones and TIP4P water models in the solid and liquid phases. Its performance is excellent and largely independent of the thermostat parameter, including with regard to dynamic properties.

11,327 citations
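
A hedged sketch of stochastic velocity rescaling in the spirit of the algorithm above: the kinetic energy is relaxed toward its canonical target by a suitably chosen random factor. The rescaling-factor formula below is my reading of the standard form of this thermostat and should be checked against the paper; masses, time step, and relaxation time are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def rescale_factor(K, K_target, n_dof, dt, tau):
    """Random velocity-rescaling factor whose fixed point is the canonical ensemble."""
    c = np.exp(-dt / tau)                      # relaxation over one step
    r1 = rng.normal()
    s = rng.chisquare(n_dof - 1)               # sum of squares of n_dof - 1 Gaussians
    alpha2 = (c
              + (1 - c) * K_target * (r1**2 + s) / (n_dof * K)
              + 2 * r1 * np.sqrt(c * (1 - c) * K_target / (n_dof * K)))
    return np.sqrt(alpha2)

# toy use: 100 unit-mass particles in 3D, target kT = 1, started too hot
v = rng.normal(size=(100, 3)) * 2.0
n_dof = v.size
for _ in range(1000):
    K = 0.5 * np.sum(v**2)                     # instantaneous kinetic energy
    v *= rescale_factor(K, K_target=0.5 * n_dof, dt=0.01, tau=0.1)
print(np.sum(v**2) / n_dof)                    # instantaneous kT; fluctuates around 1
```

In a real simulation the rescaling would be interleaved with ordinary molecular dynamics steps; here only the thermostat itself is exercised.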

Christopher M. Bishop
01 Jan 2006
TL;DR: This book covers probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, mixture models and EM, approximate inference, sampling methods, continuous latent variables, sequential data, and combining models.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

Book
06 Oct 2003
TL;DR: A fun and exciting textbook on the mathematics underpinning the most dynamic areas of modern science and engineering.

8,091 citations