scispace - formally typeset
Author

Derek Abbott

Bio: Derek Abbott is an academic researcher. The author has contributed to research in the topic of stochastic resonance, has an h-index of 1, and has co-authored 1 publication receiving 2,402 citations.

Papers
Book
01 Jan 2008
TL;DR: In this article, a theoretical approach based on linear response theory (LRT) is described, and two new forms of stochastic resonance, predicted on the basis of LRT and subsequently observed in analogue electronic experiments, are described.
Abstract: Stochastic resonance (SR), a counter-intuitive phenomenon in which the signal due to a weak periodic force in a nonlinear system can be enhanced by the addition of external noise, is reviewed. A theoretical approach based on linear response theory (LRT) is described. It is pointed out that, although the LRT theory of SR is by definition restricted to the small-signal limit, it possesses substantial advantages in terms of simplicity, generality and predictive power. The application of LRT to overdamped motion in a bistable potential, the most commonly studied form of SR, is outlined. Two new forms of SR, predicted on the basis of LRT and subsequently observed in analogue electronic experiments, are described.
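The overdamped bistable dynamics outlined above is easy to reproduce numerically. The following is an illustrative sketch (not from the paper; parameters and the Euler-Maruyama scheme are my own choices) that integrates dx = (x - x^3 + A cos(omega t)) dt + sqrt(2D) dW and estimates the amplitude of the ensemble-mean response at the drive frequency. An intermediate noise intensity, roughly where the Kramers hopping rate matches the drive, gives a larger response than very weak noise, which is the SR signature:

```python
import numpy as np

rng = np.random.default_rng(0)

def drive_response(D, A=0.1, omega=0.1, dt=0.05, n_periods=20, n_traj=200):
    """Euler-Maruyama simulation of dx = (x - x^3 + A*cos(omega*t)) dt + sqrt(2D) dW.
    Returns the Fourier amplitude of the ensemble-mean trajectory at the
    drive frequency omega."""
    n_steps = int(n_periods * 2 * np.pi / omega / dt)
    t = np.arange(n_steps) * dt
    x = np.where(rng.random(n_traj) < 0.5, -1.0, 1.0)  # start in either well
    mean_x = np.empty(n_steps)
    for i in range(n_steps):
        mean_x[i] = x.mean()
        force = x - x**3 + A * np.cos(omega * t[i])
        x = x + force * dt + np.sqrt(2 * D * dt) * rng.standard_normal(n_traj)
    # amplitude of the cos(omega t) component of the mean trajectory
    return 2 * np.abs(np.mean(mean_x * np.exp(-1j * omega * t)))

# weak noise (trapped in one well), near-matching noise, strong noise
amps = {D: drive_response(D) for D in (0.02, 0.15, 2.0)}
```

With these (hypothetical) parameters the response at D = 0.15 clearly exceeds the purely intrawell response at D = 0.02, illustrating noise-enhanced signal transmission.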

2,403 citations


Cited by
Journal ArticleDOI
TL;DR: The effect of the added white noise is to provide a uniform reference frame in the time–frequency space; therefore, the added noise collates the portion of the signal of comparable scale in one IMF.
Abstract: A new Ensemble Empirical Mode Decomposition (EEMD) is presented. This new approach sifts an ensemble of white-noise-added signals (data) and treats the mean as the final true result. Finite, not infinitesimal, amplitude white noise is necessary to force the ensemble to exhaust all possible solutions in the sifting process, thus making the different-scale signals collate in the proper intrinsic mode functions (IMFs) dictated by the dyadic filter banks. As EEMD is a time-space analysis method, the added white noise is averaged out with a sufficient number of trials; the only persistent part that survives the averaging process is the component of the signal (original data), which is then treated as the true and more physically meaningful answer. The effect of the added white noise is to provide a uniform reference frame in the time-frequency space; therefore, the added noise collates the portion of the signal of comparable scale in one IMF. With this ensemble mean, one can separate scales naturally...
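The ensemble-averaging step at the heart of EEMD can be sketched compactly. In this toy (an illustrative stand-in, not the authors' code), a moving-average split plays the role of true cubic-spline EMD sifting, purely so that the noise-added-ensemble logic stays visible: each noisy copy is decomposed, the decompositions are averaged, and the noise cancels while the signal's components persist:

```python
import numpy as np

def crude_sift(x, width=11):
    """Stand-in for EMD sifting: split x into a fast component (residual of a
    moving average) and a slow component. Real EEMD uses cubic-spline envelope
    sifting here; this toy only illustrates the ensemble step."""
    kernel = np.ones(width) / width
    slow = np.convolve(x, kernel, mode="same")
    return x - slow, slow  # (fast "IMF", slow residue)

def eemd_like(signal, noise_amp=0.2, n_ensemble=100, seed=0):
    """Add finite-amplitude white noise to the signal, decompose each noisy
    copy, and average the components over the ensemble."""
    rng = np.random.default_rng(seed)
    fast_sum = np.zeros_like(signal)
    slow_sum = np.zeros_like(signal)
    for _ in range(n_ensemble):
        noisy = signal + noise_amp * rng.standard_normal(signal.size)
        fast, slow = crude_sift(noisy)
        fast_sum += fast
        slow_sum += slow
    return fast_sum / n_ensemble, slow_sum / n_ensemble

# usage: a slow oscillation plus a fast one are separated by scale
t = np.arange(1000)
signal = np.sin(2 * np.pi * t / 200) + 0.5 * np.sin(2 * np.pi * t / 10)
fast_est, slow_est = eemd_like(signal)
```

The added noise is averaged out over the 100 trials, while each sinusoid survives in the component matching its scale, mirroring the abstract's description.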

6,437 citations

Journal ArticleDOI
TL;DR: Efficiency and, in particular, efficiency at maximum power can be discussed systematically beyond the linear response regime for two classes of molecular machines, isothermal ones such as molecular motors, and heat engines such as thermoelectric devices, using a common framework based on a cycle decomposition of entropy production.
Abstract: Stochastic thermodynamics as reviewed here systematically provides a framework for extending the notions of classical thermodynamics such as work, heat and entropy production to the level of individual trajectories of well-defined non-equilibrium ensembles. It applies whenever a non-equilibrium process is still coupled to one (or several) heat bath(s) of constant temperature. Paradigmatic systems are single colloidal particles in time-dependent laser traps, polymers in external flow, enzymes and molecular motors in single molecule assays, small biochemical networks and thermoelectric devices involving single electron transport. For such systems, a first-law-like energy balance can be identified along fluctuating trajectories. For a basic Markovian dynamics implemented either on the continuum level with Langevin equations or on a discrete set of states as a master equation, thermodynamic consistency imposes a local-detailed-balance constraint on noise and rates, respectively. Various integral and detailed fluctuation theorems, which are derived here in a unifying approach from one master theorem, constrain the probability distributions for work, heat and entropy production depending on the nature of the system and the choice of non-equilibrium conditions. For non-equilibrium steady states, particularly strong results hold, such as a generalized fluctuation-dissipation theorem involving entropy production. Ramifications and applications of these concepts include optimal driving between specified states in finite time, the role of measurement-based feedback processes and the relation between dissipation and irreversibility. Efficiency and, in particular, efficiency at maximum power can be discussed systematically beyond the linear response regime for two classes of molecular machines, isothermal ones such as molecular motors, and heat engines such as thermoelectric devices, using a common framework based on a cycle decomposition of entropy production.
This article was invited by Erwin Frey.
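The trajectory-level first law mentioned in the abstract can be made concrete with a short sketch (illustrative, not from the review): an overdamped particle in a harmonic trap whose stiffness is ramped in time. In the stochastic-energetics bookkeeping, every increment of internal energy splits into work (protocol change at fixed position) and heat from the bath (position change at fixed protocol), so W + Q = ΔU holds along every single noisy trajectory, not just on average:

```python
import numpy as np

rng = np.random.default_rng(1)

def first_law_trajectory(T=1.0, dt=1e-3, n_steps=2000):
    """One fluctuating trajectory of an overdamped particle (unit friction) in
    a harmonic trap U(x, t) = k(t) x^2 / 2 whose stiffness is ramped up.
    Returns (work W done on the particle, heat Q absorbed from the bath,
    change in internal energy dU); W + Q = dU holds exactly per trajectory."""
    k = np.linspace(1.0, 4.0, n_steps + 1)           # driving protocol k(t)
    x = rng.standard_normal() / np.sqrt(k[0] / T)    # sample initial equilibrium
    U0 = 0.5 * k[0] * x**2
    W = Q = 0.0
    for i in range(n_steps):
        W += 0.5 * (k[i + 1] - k[i]) * x**2          # work: protocol moves, x fixed
        x_new = x - k[i] * x * dt + np.sqrt(2 * T * dt) * rng.standard_normal()
        Q += 0.5 * k[i + 1] * (x_new**2 - x**2)      # heat: x moves, protocol fixed
        x = x_new
    U1 = 0.5 * k[-1] * x**2
    return W, Q, U1 - U0

W, Q, dU = first_law_trajectory()
```

The work/heat split telescopes exactly to the internal-energy change, so the identity holds to machine precision regardless of the noise realization, which is the point of the trajectory-level first law.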

2,834 citations

Journal ArticleDOI
TL;DR: The major theme of the review is in what sense and how the graph structure of interactions can modify and enrich the picture of long term behavioral patterns emerging in evolutionary games.

2,548 citations

Journal ArticleDOI
Peter Reimann1
TL;DR: In this paper, the main emphasis is on directed transport in so-called Brownian motors (ratchets), i.e. a dissipative dynamics in the presence of thermal noise together with some prototypical perturbation that drives the system out of equilibrium without a priori introducing an obvious bias toward one or the other direction of motion.
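A minimal example of such a Brownian motor is the flashing ratchet, sketched below (an illustrative toy with made-up parameters, not taken from the review). An asymmetric sawtooth potential is periodically switched on and off; neither the potential nor the thermal noise applies a net force, yet the out-of-equilibrium switching rectifies diffusion into directed transport:

```python
import numpy as np

rng = np.random.default_rng(2)

def flashing_ratchet_drift(n_traj=500, n_cycles=100, V0=5.0, a=0.2, T=0.5,
                           t_on=0.3, t_off=0.06, dt=1e-3):
    """Overdamped particles in a sawtooth potential (period 1, minima at the
    integers, maxima at n + a with a < 1/2) that is switched on and off.
    During the off phase particles diffuse freely; on re-switching, those that
    diffused past the nearby steep barrier (distance a) are captured by the
    next well to the right far more often than the reverse (distance 1 - a),
    producing net rightward drift. Returns the mean final displacement."""
    def force(x):
        y = x % 1.0
        return np.where(y < a, -V0 / a, V0 / (1 - a))  # -dV/dx of the sawtooth
    x = np.zeros(n_traj)
    for _ in range(n_cycles):
        for _ in range(int(t_on / dt)):    # potential on: settle into wells
            x += force(x) * dt + np.sqrt(2 * T * dt) * rng.standard_normal(n_traj)
        for _ in range(int(t_off / dt)):   # potential off: free diffusion
            x += np.sqrt(2 * T * dt) * rng.standard_normal(n_traj)
    return x.mean()
```

Because the short, steep side of the sawtooth faces left here, the ensemble drifts to the right; flipping the asymmetry (a → 1 − a) reverses the current, with no change to the noise.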

2,098 citations

Journal ArticleDOI
TL;DR: A high-order stochastic collocation approach is proposed, which takes advantage of an assumption of smoothness of the solution in random space to achieve fast convergence and requires only repetitive runs of an existing deterministic solver, similar to Monte Carlo methods.
Abstract: Recently there has been a growing interest in designing efficient methods for the solution of ordinary/partial differential equations with random inputs. To this end, stochastic Galerkin methods appear to be superior to other nonsampling methods and, in many cases, to several sampling methods. However, when the governing equations take complicated forms, numerical implementations of stochastic Galerkin methods can become nontrivial and care is needed to design robust and efficient solvers for the resulting equations. On the other hand, the traditional sampling methods, e.g., Monte Carlo methods, are straightforward to implement, but they do not offer convergence as fast as stochastic Galerkin methods. In this paper, a high-order stochastic collocation approach is proposed. Similar to stochastic Galerkin methods, the collocation methods take advantage of an assumption of smoothness of the solution in random space to achieve fast convergence. However, the numerical implementation of stochastic collocation is trivial, as it requires only repetitive runs of an existing deterministic solver, similar to Monte Carlo methods. The computational cost of the collocation methods depends on the choice of the collocation points, and we present several feasible constructions. One particular choice, based on sparse grids, depends weakly on the dimensionality of the random space and is more suitable for highly accurate computations of practical applications with large dimensional random inputs. Numerical examples are presented to demonstrate the accuracy and efficiency of the stochastic collocation methods.
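The non-intrusive character described above (repetitive runs of an existing deterministic solver at chosen collocation points) can be sketched on a toy problem. Here the "solver" is the closed-form solution of u' = -k·u with a uniformly distributed random rate k, and the collocation points are Gauss-Legendre nodes; the model problem and parameters are my own illustrative choices, not the paper's examples:

```python
import numpy as np

def deterministic_solver(k, T=1.0):
    """The 'black box' deterministic solver: u' = -k u, u(0) = 1, evaluated at
    time T (solved in closed form here; any existing solver would do)."""
    return np.exp(-k * T)

def collocation_mean(n_nodes, k_min=0.0, k_max=2.0):
    """Stochastic collocation for k ~ Uniform(k_min, k_max): run the solver,
    unchanged, at Gauss-Legendre nodes in the random variable and combine the
    outputs with the quadrature weights to estimate E[u(T)]. Like Monte Carlo
    it only samples the solver, but for smooth dependence on k the error
    decays spectrally in n_nodes. (The uniform density 1/(k_max - k_min)
    cancels the interval-length factor, leaving sum(w * u) / 2.)"""
    z, w = np.polynomial.legendre.leggauss(n_nodes)        # nodes/weights on [-1, 1]
    k = 0.5 * (k_max - k_min) * z + 0.5 * (k_max + k_min)  # map to [k_min, k_max]
    return np.sum(w * deterministic_solver(k)) / 2.0

# exact mean for comparison: E[exp(-k)] over k ~ Uniform(0, 2)
exact = (1 - np.exp(-2.0)) / 2.0
```

With only 5 solver runs the collocation estimate is already accurate to well below 1e-6, far beyond what 5 Monte Carlo samples could achieve, which is the fast-convergence point of the abstract.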

1,637 citations