Author

Simon Duane

Bio: Simon Duane is an academic researcher at the National Physical Laboratory. He has contributed to research on dosimetry and absorbed dose, has an h-index of 19, and has co-authored 64 publications receiving 4,391 citations. His previous affiliations include the University of Cambridge and the University of Illinois at Urbana–Champaign.


Papers
Journal ArticleDOI
TL;DR: In this article, a hybrid (molecular dynamics/Langevin) algorithm is used to guide a Monte Carlo simulation of lattice field theory; the method is especially efficient for systems, such as quantum chromodynamics, that contain fermionic degrees of freedom.

3,377 citations
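This is the paper that introduced Hybrid Monte Carlo (now usually called Hamiltonian Monte Carlo). Below is a minimal sketch of a single HMC update for a generic target distribution P(phi) proportional to exp(-S(phi)); the action S, its gradient grad_S, and the step-size and trajectory-length defaults are illustrative placeholders, not values from the paper.

```python
import numpy as np

def hmc_step(phi, S, grad_S, eps=0.1, n_steps=20, rng=np.random.default_rng()):
    """One Hybrid Monte Carlo update targeting P(phi) ~ exp(-S(phi)).

    A molecular-dynamics (leapfrog) trajectory proposes a distant move;
    the final Metropolis test removes the discretization error, so the
    algorithm samples the exact distribution for any step size.
    """
    p = rng.standard_normal(phi.shape)            # refresh momenta ~ N(0, I)
    H_old = S(phi) + 0.5 * np.sum(p * p)          # initial "energy"

    # Leapfrog integration of Hamilton's equations for (phi, p).
    phi_new = phi.copy()
    p_new = p - 0.5 * eps * grad_S(phi_new)
    for _ in range(n_steps - 1):
        phi_new = phi_new + eps * p_new
        p_new = p_new - eps * grad_S(phi_new)
    phi_new = phi_new + eps * p_new
    p_new = p_new - 0.5 * eps * grad_S(phi_new)

    # Accept or reject based on the change in "energy".
    H_new = S(phi_new) + 0.5 * np.sum(p_new * p_new)
    if rng.random() < np.exp(min(0.0, H_old - H_new)):
        return phi_new
    return phi
```

For example, with S = lambda x: 0.5 * np.sum(x**2) and grad_S = lambda x: x, repeated calls to hmc_step draw samples from a standard Gaussian.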

Journal ArticleDOI
TL;DR: It is argued that the new proposal always represents a significant improvement over a Langevin simulation and may even improve on the microcanonical method, in which case only a trivial modification of the code is required.

120 citations
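For contrast with the baseline discussed in this paper, a pure Langevin update of the same generic target is a single gradient step plus Gaussian noise. The sketch below is illustrative, with an arbitrary step size; each step moves only O(sqrt(eps)), which is why trajectory-based hybrid updates can decorrelate faster.

```python
import numpy as np

def langevin_step(phi, grad_S, eps=0.01, rng=np.random.default_rng()):
    # One unadjusted Langevin step targeting P(phi) ~ exp(-S(phi)):
    # drift down the gradient plus Gaussian noise of variance 2*eps.
    return phi - eps * grad_S(phi) + np.sqrt(2.0 * eps) * rng.standard_normal(phi.shape)
```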

Journal ArticleDOI
TL;DR: In this article, the theory of hybrid stochastic algorithms is developed; a generalized Fokker-Planck equation is derived and used to prove that the algorithm generates the correct equilibrium distribution.

112 citations
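As an illustration of the type of result proved there, the Fokker-Planck equation for a one-dimensional Langevin process dphi = -S'(phi) dt + sqrt(2) dW, together with its stationary solution, reads as follows; this is the standard textbook case, not the generalized equation derived in the paper.

```latex
\frac{\partial P(\phi,t)}{\partial t}
  = \frac{\partial}{\partial\phi}
    \left[ S'(\phi)\,P(\phi,t) + \frac{\partial P(\phi,t)}{\partial\phi} \right],
\qquad
P_{\mathrm{eq}}(\phi) \propto e^{-S(\phi)} .
```

Setting the probability current S'(phi) P + dP/dphi to zero recovers the equilibrium distribution, which is the sense in which such algorithms are proved correct.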

Journal ArticleDOI
TL;DR: Hybrid stochastic differential equations are applied to the thermodynamics of lattice gauge theory with dynamical fermions; applied to quantum chromodynamics, the method elucidates the abrupt finite-temperature crossover between hadronic matter and the quark-gluon plasma.
Abstract: Hybrid stochastic differential equations are applied to the thermodynamics of lattice gauge theory with dynamical fermions. The tuned algorithm is much more efficient than pure Langevin or molecular-dynamics equations. The method is applied to quantum chromodynamics and the abrupt finite-temperature crossover between hadronic matter and the quark-gluon plasma is elucidated.

94 citations

Journal ArticleDOI
TL;DR: It is demonstrated that the failure to meet classical cavity theory requirements, such as CPE, is not the reason for significant quality correction factors. What matters most, apart from volume averaging effects, is the relationship between the lack of CPE in the small field itself and the density of the detector cavity.
Abstract: Purpose: To explain the reasons for significant quality correction factors in megavoltage small photon fields and clarify the underlying concepts relevant to dosimetry under such conditions. Methods: The validity of cavity theory and the requirement of charged particle equilibrium (CPE) are addressed from a theoretical point of view in the context of nonstandard beams. Perturbation effects are divided into four main subeffects, explaining their nature and pointing out their relative importance in small photon fields. Results: It is demonstrated that the failure to meet classical cavity theory requirements, such as CPE, is not the reason for significant quality correction factors. On the contrary, it is shown that the lack of CPE alone cannot explain these corrections and that what matters most, apart from volume averaging effects, is the relationship between the lack of CPE in the small field itself and the density of the detector cavity. The density perturbation effect is explained based on Fano’s theorem, describing the compensating effect of two main contributions to cavity absorbed dose. Using the same approach, perturbation effects arising from the difference in atomic properties of the cavity medium and the presence of extracameral components are explained. Volume averaging effects are also discussed in detail. Conclusions: Quality correction factors of small megavoltage photon fields are mainly due to differences in electron density between water and the detector medium and to volume averaging over the detector cavity. Other effects, such as the presence of extracameral components and differences in atomic properties of the detection medium with respect to water, can also play an accentuated role in small photon fields compared to standard beams.

93 citations
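One of the effects discussed, volume averaging, has a compact expression: the detector reports the mean dose over its sensitive volume V rather than the dose to water at the measurement point r0, so the corresponding correction factor is the ratio of the two. The notation below is generic, not taken from the paper.

```latex
k_{\mathrm{vol}}
  = \frac{D_{w}(\mathbf{r}_{0})}
         {\frac{1}{V}\int_{V} D_{w}(\mathbf{r})\,\mathrm{d}V}
```

At the centre of a small field, where the dose profile falls off within the cavity, this factor exceeds unity.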


Cited by
Proceedings Article
01 Jan 2014
TL;DR: A stochastic variational inference and learning algorithm is introduced that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case.
Abstract: How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions are two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.

20,769 citations
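The reparameterization mentioned in the abstract can be stated in a few lines of code: a sample z ~ N(mu, sigma^2) is rewritten as a deterministic, differentiable function of (mu, log sigma^2) and auxiliary noise, so the lower bound can be optimized by ordinary stochastic gradients. The sketch below assumes Gaussian encoder outputs mu and log_var computed elsewhere; it illustrates the estimator's two ingredients, not the authors' full model.

```python
import numpy as np

def reparameterize(mu, log_var, rng=np.random.default_rng()):
    """Draw z ~ N(mu, sigma^2) as z = mu + sigma * eps with eps ~ N(0, I),
    making z a differentiable function of the variational parameters."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def gaussian_kl(mu, log_var):
    """KL( N(mu, sigma^2) || N(0, I) ), the closed-form regularizer that
    appears in the variational lower bound for a standard Gaussian prior."""
    return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))
```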

Journal ArticleDOI
TL;DR: In this paper, the authors present a new molecular dynamics algorithm for sampling the canonical distribution, where the velocities of all the particles are rescaled by a properly chosen random factor.
Abstract: The authors present a new molecular dynamics algorithm for sampling the canonical distribution. In this approach the velocities of all the particles are rescaled by a properly chosen random factor. The algorithm is formally justified and it is shown that, in spite of its stochastic nature, a quantity can still be defined that remains constant during the evolution. In numerical applications this quantity can be used to measure the accuracy of the sampling. The authors illustrate the properties of this new method on Lennard-Jones and TIP4P water models in the solid and liquid phases. Its performance is excellent and largely independent of the thermostat parameter, also with regard to the dynamic properties.

11,327 citations
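The randomly chosen rescaling factor mentioned in the abstract has a closed form; the sketch below is a hedged NumPy transcription of that factor (variable names are my own choices). All velocities would then be multiplied by the square root of the returned value.

```python
import numpy as np

def csvr_alpha2(K, K_target, n_dof, dt, tau, rng=np.random.default_rng()):
    """Squared rescaling factor for canonical-sampling velocity rescaling.

    K        : instantaneous kinetic energy
    K_target : kinetic energy at the target temperature (n_dof * kT / 2)
    n_dof    : number of degrees of freedom
    dt, tau  : time step and thermostat relaxation time
    """
    c = np.exp(-dt / tau)                 # deterministic relaxation toward target
    r1 = rng.standard_normal()            # one explicit Gaussian variate
    sum_r2 = rng.chisquare(n_dof - 1)     # sum of n_dof-1 squared Gaussians
    a = (1.0 - c) * K_target / (n_dof * K)
    return c + a * (r1**2 + sum_r2) + 2.0 * r1 * np.sqrt(c * a)
```

The conserved quantity mentioned in the abstract accumulates the kinetic-energy changes made by these rescalings, and monitoring it provides the sampling-accuracy check described there.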

Christopher M. Bishop
01 Jan 2006
TL;DR: This textbook covers probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, mixture models and EM, approximate inference, sampling methods, continuous latent variables, sequential data, and combining models.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

Book
06 Oct 2003
TL;DR: A fun and exciting textbook on the mathematics underpinning the most dynamic areas of modern science and engineering.

8,091 citations

Book
24 Aug 2012
TL;DR: This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach, and is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.
Abstract: Today's Web-enabled deluge of electronic data calls for automated methods of data analysis. Machine learning provides these, developing methods that can automatically detect patterns in data and then use the uncovered patterns to predict future data. This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach. The coverage combines breadth and depth, offering necessary background material on such topics as probability, optimization, and linear algebra as well as discussion of recent developments in the field, including conditional random fields, L1 regularization, and deep learning. The book is written in an informal, accessible style, complete with pseudo-code for the most important algorithms. All topics are copiously illustrated with color images and worked examples drawn from such application domains as biology, text processing, computer vision, and robotics. Rather than providing a cookbook of different heuristic methods, the book stresses a principled model-based approach, often using the language of graphical models to specify models in a concise and intuitive way. Almost all the models described have been implemented in a MATLAB software package--PMTK (probabilistic modeling toolkit)--that is freely available online. The book is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.

8,059 citations