Author

Mark D. McDonnell

Bio: Mark D. McDonnell is an academic researcher from the University of South Australia. The author has contributed to research on topics including stochastic resonance and noise (signal processing). The author has an h-index of 28 and has co-authored 163 publications receiving 6,477 citations. Previous affiliations of Mark D. McDonnell include the University of Adelaide and Chemnitz University of Technology.


Papers
Book
01 Jan 2008
TL;DR: In this article, a theoretical approach based on linear response theory (LRT) is described, and two new forms of stochastic resonance, predicted on the basis of LRT and subsequently observed in analogue electronic experiments, are described.
Abstract: Stochastic resonance (SR) - a counter-intuitive phenomenon in which the signal due to a weak periodic force in a nonlinear system can be enhanced by the addition of external noise - is reviewed. A theoretical approach based on linear response theory (LRT) is described. It is pointed out that, although the LRT theory of SR is by definition restricted to the small-signal limit, it possesses substantial advantages in terms of simplicity, generality and predictive power. The application of LRT to overdamped motion in a bistable potential, the most commonly studied form of SR, is outlined. Two new forms of SR, predicted on the basis of LRT and subsequently observed in analogue electronic experiments, are described.

2,403 citations
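The review above treats overdamped motion in a bistable potential as the canonical SR setting. As a minimal, hedged sketch (not the book's own analysis), the following simulates dx/dt = x - x^3 + A*sin(wt) + sqrt(2D)*xi(t) with Euler-Maruyama and reports the trajectory's Fourier amplitude at the drive frequency; all parameter values are illustrative.

```python
# Hedged sketch: stochastic resonance in the overdamped bistable system
# dx/dt = x - x^3 + A*sin(w*t) + sqrt(2D)*xi(t), integrated with
# Euler-Maruyama. Parameter values are illustrative only.
import numpy as np

def drive_response(D, A=0.1, w=2*np.pi*0.01, dt=0.05, n_steps=200_000, seed=0):
    """Return the Fourier amplitude of x(t) at the drive frequency w."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_steps) * dt
    kicks = np.sqrt(2 * D * dt) * rng.standard_normal(n_steps)
    x = np.empty(n_steps)
    x[0] = -1.0  # start in the left well of U(x) = -x^2/2 + x^4/4
    for i in range(1, n_steps):
        force = x[i-1] - x[i-1]**3 + A * np.sin(w * t[i-1])
        x[i] = x[i-1] + force * dt + kicks[i]
    # Project the trajectory onto the drive frequency.
    return abs(np.mean(x * np.exp(-1j * w * t)))

for D in [0.02, 0.05, 0.1, 0.2, 0.5]:
    print(f"D = {D:4.2f}  response at drive frequency = {drive_response(D):.3f}")
```

Sweeping D typically produces the signature resonance curve: the response rises and then falls, peaking roughly where the Kramers hopping rate matches the drive frequency.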

Book
01 Jan 2009
TL;DR: This work challenges neuroscientists and biologists to embrace a very broad definition of stochastic resonance in terms of signal-processing “noise benefits”, and to devise experiments aimed at verifying that random variability can play a functional role in the brain, nervous system, or other areas of biology.
Abstract: Stochastic resonance is said to be observed when increases in levels of unpredictable fluctuations (e.g., random noise) cause an increase in a metric of the quality of signal transmission or detection performance, rather than a decrease. This counterintuitive effect relies on system nonlinearities and on some parameter ranges being "suboptimal". Stochastic resonance has been observed, quantified, and described in a plethora of physical and biological systems, including neurons. Being a topic of widespread multidisciplinary interest, the definition of stochastic resonance has evolved significantly over the last decade or so, leading to a number of debates, misunderstandings, and controversies. Perhaps the most important debate is whether the brain has evolved to utilize random noise in vivo, as part of the "neural code". Surprisingly, this debate has been for the most part ignored by neuroscientists, despite much indirect evidence of a positive role for noise in the brain. We explore some of the reasons for this and argue why it would be more surprising if the brain did not exploit randomness provided by noise, via stochastic resonance or otherwise, than if it did. We also challenge neuroscientists and biologists, both computational and experimental, to embrace a very broad definition of stochastic resonance in terms of signal-processing "noise benefits", and to devise experiments aimed at verifying that random variability can play a functional role in the brain, nervous system, or other areas of biology.

686 citations
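The broad "noise benefits" definition above can be illustrated with the textbook threshold example: a subthreshold signal passed through a 1-bit threshold device is transmitted best at a nonzero noise level. This is a hedged, self-contained sketch with invented parameters, not an experiment from the paper.

```python
# Hedged demo of a signal-processing "noise benefit": a subthreshold
# sinusoid crosses a hard threshold only with the help of added noise,
# and correlation with the input peaks at an intermediate noise level.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 5000)
signal = 0.4 * np.sin(2 * np.pi * 5 * t)      # stays below the threshold
threshold = 0.5

for noise_std in [0.0, 0.1, 0.3, 0.6, 1.2]:
    noisy = signal + noise_std * rng.standard_normal(t.size)
    output = (noisy > threshold).astype(float)  # 1-bit threshold device
    if output.std() == 0:
        corr = 0.0            # no output variation: nothing transmitted
    else:
        corr = np.corrcoef(signal, output)[0, 1]
    print(f"noise std = {noise_std:3.1f}  corr(signal, output) = {corr:.3f}")
```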

Journal ArticleDOI
TL;DR: Understanding the diverse roles of noise in neural computation will require the design of experiments based on new theory and models, into which biologically appropriate experimental detail feeds back at various levels of abstraction.
Abstract: Although typically assumed to degrade performance, random fluctuations, or noise, can sometimes improve information processing in non-linear systems. One such form of 'stochastic facilitation', stochastic resonance, has been observed to enhance processing both in theoretical models of neural systems and in experimental neuroscience. However, the two approaches have yet to be fully reconciled. Understanding the diverse roles of noise in neural computation will require the design of experiments based on new theory and models, into which biologically appropriate experimental detail feeds back at various levels of abstraction.

641 citations

Proceedings ArticleDOI
28 Sep 2016
TL;DR: In this article, the authors investigate the benefit of augmenting data with synthetically created samples when training a machine learning classifier, and they find that if plausible transforms for the data are known then augmentation in data-space provides a greater benefit for improving performance and reducing overfitting.
Abstract: In this paper we investigate the benefit of augmenting data with synthetically created samples when training a machine learning classifier. Two approaches for creating additional training samples are data warping, which generates additional samples through transformations applied in the data-space, and synthetic over-sampling, which creates additional samples in feature-space. We experimentally evaluate the benefits of data augmentation for a convolutional backpropagation-trained neural network, a convolutional support vector machine and a convolutional extreme learning machine classifier, using the standard MNIST handwritten digit dataset. We found that while it is possible to perform generic augmentation in feature-space, if plausible transforms for the data are known then augmentation in data-space provides a greater benefit for improving performance and reducing overfitting.

541 citations
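The two augmentation routes the paper compares are easy to sketch. Below, data-space warping is reduced to a random image shift and feature-space over-sampling to SMOTE-style interpolation between same-class feature vectors; the arrays are stand-ins, not MNIST, and the transforms are simplified illustrations rather than the paper's exact warps.

```python
# Hedged sketches of the two augmentation routes the paper compares:
# data-space warping (here, a small random shift of an image) and
# feature-space over-sampling (SMOTE-style interpolation).
import numpy as np

def warp_image(img, rng, max_shift=2):
    """Data-space augmentation: shift a 2-D image by a random offset."""
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

def smote_sample(features, rng):
    """Feature-space augmentation: SMOTE-style interpolation between a
    random sample and its nearest same-class neighbour."""
    i = rng.integers(len(features))
    d = np.linalg.norm(features - features[i], axis=1)
    d[i] = np.inf                       # exclude the point itself
    j = int(np.argmin(d))               # nearest neighbour
    lam = rng.random()                  # interpolation weight in [0, 1)
    return features[i] + lam * (features[j] - features[i])

rng = np.random.default_rng(0)
digit = rng.random((28, 28))            # stand-in for an MNIST image
feats = rng.random((100, 64))           # stand-in for one class's features
print(warp_image(digit, rng).shape, smote_sample(feats, rng).shape)
```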

Journal ArticleDOI
TL;DR: In this article, the problem of designing spatially cohesive nature reserve systems that meet biodiversity objectives is formulated as a nonlinear integer programming problem, where the multiobjective function minimises a combination of boundary length, area and failed representation of the biological attributes we are trying to conserve.
Abstract: The problem of designing spatially cohesive nature reserve systems that meet biodiversity objectives is formulated as a nonlinear integer programming problem. The multiobjective function minimises a combination of boundary length, area and failed representation of the biological attributes we are trying to conserve. The task is to reserve a subset of sites that best meet this objective. We use data on the distribution of habitats in the Northern Territory, Australia, to show how simulated annealing and a greedy heuristic algorithm can be used to generate good solutions to such large reserve design problems, and to compare the effectiveness of these methods.

269 citations
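A hedged sketch of the simulated-annealing route described in the abstract: select grid sites to minimise area plus weighted boundary length plus a penalty for unmet habitat-representation targets. The grid, targets, and weights are invented for illustration and are not the paper's Northern Territory data.

```python
# Hedged sketch: simulated annealing for spatially cohesive reserve
# selection on a toy grid. All data and weights are invented.
import numpy as np

rng = np.random.default_rng(1)
N = 10                                   # N x N grid of candidate sites
n_habitats = 5
habitat = rng.integers(n_habitats, size=(N, N))  # habitat type per site
target = np.full(n_habitats, 8)          # required selected sites per habitat
w_boundary, w_penalty = 1.0, 10.0        # weights in the multiobjective sum

def cost(sel):
    """Area + weighted boundary length + weighted representation shortfall."""
    area = sel.sum()
    pad = np.pad(sel, 1)                 # count reserve/non-reserve edges
    boundary = sum(np.abs(np.diff(pad, axis=ax)).sum() for ax in (0, 1))
    counts = np.bincount(habitat[sel.astype(bool)], minlength=n_habitats)
    shortfall = np.maximum(target - counts, 0).sum()
    return area + w_boundary * boundary + w_penalty * shortfall

sel = rng.integers(2, size=(N, N))       # random initial reserve system
cur = cost(sel)
T = 10.0                                 # initial temperature
for _ in range(20_000):
    i, j = rng.integers(N, size=2)
    sel[i, j] ^= 1                       # propose flipping one site
    new = cost(sel)
    if new <= cur or rng.random() < np.exp((cur - new) / T):
        cur = new                        # accept downhill, sometimes uphill
    else:
        sel[i, j] ^= 1                   # reject: undo the flip
    T *= 0.9995                          # geometric cooling schedule
print("final cost:", cur, "sites selected:", sel.sum())
```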


Cited by
Christopher M. Bishop
01 Jan 2006
TL;DR: This textbook covers probability distributions and linear models for regression and classification, along with neural networks, kernel methods, graphical models, approximate inference, sampling methods, and combining models.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

Journal ArticleDOI
06 Jun 1986 - JAMA
TL;DR: The editors have done a masterful job of weaving together the biologic, the behavioral, and the clinical sciences into a single tapestry in which everyone from the molecular biologist to the practicing psychiatrist can find and appreciate his or her own research.
Abstract: I have developed "tennis elbow" from lugging this book around the past four weeks, but it is worth the pain, the effort, and the aspirin. It is also worth the (relatively speaking) bargain price. Including appendixes, this book contains 894 pages of text. The entire panorama of the neural sciences is surveyed and examined, and it is comprehensive in its scope, from genomes to social behaviors. The editors explicitly state that the book is designed as "an introductory text for students of biology, behavior, and medicine," but it is hard to imagine any audience, interested in any fragment of neuroscience at any level of sophistication, that would not enjoy this book. The editors have done a masterful job of weaving together the biologic, the behavioral, and the clinical sciences into a single tapestry in which everyone from the molecular biologist to the practicing psychiatrist can find and appreciate his or her own research.

7,563 citations

Proceedings ArticleDOI
22 Jan 2006
TL;DR: Some of the major results in random graphs and some of the more challenging open problems are reviewed, along with newer models, including those related to the WWW.
Abstract: We will review some of the major results in random graphs and some of the more challenging open problems. We will cover algorithmic and structural questions. We will touch on newer models, including those related to the WWW.

7,116 citations
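One of the major random-graph results such a review covers is the emergence of a giant component in G(n, p) at p = c/n once c exceeds 1. The following hedged sketch samples graphs at several c and reports the largest component's share of the vertices; sizes are illustrative.

```python
# Hedged illustration of the giant-component phase transition in the
# Erdos-Renyi model G(n, p) with p = c/n. Pure NumPy; sizes illustrative.
import numpy as np

def largest_component_fraction(n, c, seed=0):
    rng = np.random.default_rng(seed)
    # Sample a symmetric adjacency matrix with edge probability c/n.
    adj = np.triu(rng.random((n, n)) < c / n, 1)
    # Union-find over the edges to get connected-component sizes.
    parent = np.arange(n)
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    for i, j in zip(*np.nonzero(adj)):
        parent[find(i)] = find(j)
    roots = np.array([find(i) for i in range(n)])
    return np.bincount(roots).max() / n

for c in [0.5, 1.0, 1.5, 2.0]:
    frac = largest_component_fraction(2000, c)
    print(f"c = {c:3.1f}  largest component ~ {frac:.2f} of n")
```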

Journal ArticleDOI
TL;DR: The effect of the added white noise is to provide a uniform reference frame in the time–frequency space; therefore, the added noise collates the portion of the signal of comparable scale in one IMF.
Abstract: A new Ensemble Empirical Mode Decomposition (EEMD) is presented. This new approach consists of sifting an ensemble of white-noise-added signal (data) and treats the mean as the final true result. Finite, not infinitesimal, amplitude white noise is necessary to force the ensemble to exhaust all possible solutions in the sifting process, thus making the different scale signals collate in the proper intrinsic mode functions (IMF) dictated by the dyadic filter banks. As EEMD is a time–space analysis method, the added white noise is averaged out with a sufficient number of trials; the only persistent part that survives the averaging process is the component of the signal (original data), which is then treated as the true and more physically meaningful answer. The effect of the added white noise is to provide a uniform reference frame in the time–frequency space; therefore, the added noise collates the portion of the signal of comparable scale in one IMF. With this ensemble mean, one can separate scales naturally...

6,437 citations
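The EEMD recipe in the abstract is directly implementable: add independent white noise to the signal, decompose each noisy copy by EMD sifting, and average the resulting IMFs so the noise cancels. The sketch below uses a bare-bones sift (cubic-spline envelopes, fixed sift count) rather than a production EMD, and all parameters are illustrative.

```python
# Hedged sketch of Ensemble EMD: average the IMFs of many
# white-noise-added copies of the signal. Bare-bones sifting only.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_imf(x, t, n_sifts=10):
    """Extract one IMF by repeatedly subtracting the envelope mean."""
    h = x.copy()
    for _ in range(n_sifts):
        maxi = argrelextrema(h, np.greater)[0]
        mini = argrelextrema(h, np.less)[0]
        if len(maxi) < 3 or len(mini) < 3:
            break                        # too few extrema to spline
        upper = CubicSpline(t[maxi], h[maxi])(t)
        lower = CubicSpline(t[mini], h[mini])(t)
        h = h - (upper + lower) / 2      # remove the local envelope mean
    return h

def eemd(x, t, n_imfs=4, ensemble=20, noise_std=0.2, seed=0):
    """Average each IMF over an ensemble of white-noise-added copies."""
    rng = np.random.default_rng(seed)
    imfs = np.zeros((n_imfs, len(x)))
    for _ in range(ensemble):
        r = x + noise_std * x.std() * rng.standard_normal(len(x))
        for k in range(n_imfs):
            imf = sift_imf(r, t)
            imfs[k] += imf
            r = r - imf                  # residual carries the slower scales
    return imfs / ensemble               # the added noise averages out

t = np.linspace(0, 1, 1000)
x = np.sin(2*np.pi*5*t) + 0.5*np.sin(2*np.pi*40*t)  # two-scale test signal
print(eemd(x, t).shape)                  # (4, 1000)
```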

Journal ArticleDOI
TL;DR: This survey will present existing methods for Data Augmentation, promising developments, and meta-level decisions for implementing DataAugmentation, a data-space solution to the problem of limited data.
Abstract: Deep convolutional neural networks have performed remarkably well on many Computer Vision tasks. However, these networks are heavily reliant on big data to avoid overfitting. Overfitting refers to the phenomenon when a network learns a function with very high variance such as to perfectly model the training data. Unfortunately, many application domains do not have access to big data, such as medical image analysis. This survey focuses on Data Augmentation, a data-space solution to the problem of limited data. Data Augmentation encompasses a suite of techniques that enhance the size and quality of training datasets such that better Deep Learning models can be built using them. The image augmentation algorithms discussed in this survey include geometric transformations, color space augmentations, kernel filters, mixing images, random erasing, feature space augmentation, adversarial training, generative adversarial networks, neural style transfer, and meta-learning. The application of augmentation methods based on GANs is heavily covered in this survey. In addition to augmentation techniques, this paper will briefly discuss other characteristics of Data Augmentation such as test-time augmentation, resolution impact, final dataset size, and curriculum learning. This survey will present existing methods for Data Augmentation, promising developments, and meta-level decisions for implementing Data Augmentation. Readers will understand how Data Augmentation can improve the performance of their models and expand limited datasets to take advantage of the capabilities of big data.

5,782 citations
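As one concrete instance of the augmentation families the survey lists, here is a hedged sketch of random erasing: blank out a randomly placed rectangle so a model cannot rely on any single image region. The image is a stand-in, and the sizes and fill value are illustrative.

```python
# Hedged sketch of random erasing, one augmentation family the survey
# lists: zero out a randomly placed, randomly sized rectangle.
import numpy as np

def random_erase(img, rng, max_frac=0.3, fill=0.0):
    """Erase a random rectangle covering up to max_frac of each side."""
    h, w = img.shape[:2]
    eh = rng.integers(1, max(2, int(h * max_frac)))
    ew = rng.integers(1, max(2, int(w * max_frac)))
    y = rng.integers(0, h - eh + 1)
    x = rng.integers(0, w - ew + 1)
    out = img.copy()
    out[y:y+eh, x:x+ew] = fill           # broadcast over any channels
    return out

rng = np.random.default_rng(0)
img = rng.random((224, 224, 3))          # stand-in for an input image
print(random_erase(img, rng).shape)      # (224, 224, 3)
```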