Author

Raoul Dillenschneider

Bio: Raoul Dillenschneider is an academic researcher from Kaiserslautern University of Technology. The author has contributed to research in topics: Landauer's principle & Maxwell's demon. The author has an h-index of 2 and has co-authored 3 publications receiving 903 citations.

Papers
Journal ArticleDOI
08 Mar 2012 - Nature
TL;DR: It is established that the mean dissipated heat saturates at the Landauer bound in the limit of long erasure cycles, demonstrating the intimate link between information theory and thermodynamics and highlighting the ultimate physical limit of irreversible computation.
Abstract: In 1961, Rolf Landauer argued that the erasure of information is a dissipative process. A minimal quantity of heat, proportional to the thermal energy and called the Landauer bound, is necessarily produced when a classical bit of information is deleted. A direct consequence of this logically irreversible transformation is that the entropy of the environment increases by a finite amount. Despite its fundamental importance for information theory and computer science, the erasure principle has not been verified experimentally so far, the main obstacle being the difficulty of doing single-particle experiments in the low-dissipation regime. Here we experimentally show the existence of the Landauer bound in a generic model of a one-bit memory. Using a system of a single colloidal particle trapped in a modulated double-well potential, we establish that the mean dissipated heat saturates at the Landauer bound in the limit of long erasure cycles. This result demonstrates the intimate link between information theory and thermodynamics. It further highlights the ultimate physical limit of irreversible computation.
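To make the bookkeeping behind this result concrete, here is a minimal overdamped-Langevin sketch of one-bit erasure in a modulated double-well potential, in units where k_B T = 1 and the mobility is 1. The piecewise protocol and all parameters are illustrative assumptions, not the experimental protocol; the trajectory heat follows from the first law, Q = W - dU, with work accumulated during the potential updates.

```python
# Toy one-bit erasure in a modulated double well (hypothetical protocol).
# Units: k_B T = 1, mobility = 1. Landauer bound: ln 2 ~ 0.693.
import numpy as np

rng = np.random.default_rng(0)

def potential(x, c, f):
    # Quartic double well: barrier height set by c, tilt by f.
    return 0.25 * x**4 - 0.5 * c * x**2 + f * x

def force(x, c, f):
    return -(x**3 - c * x + f)

def protocol(s):
    # s = t / tau in [0, 1]: lower the barrier, tilt toward x > 0,
    # raise the barrier with the tilt on, then release the tilt.
    if s < 0.25:
        return 1.0 - 4.0 * s, 0.0
    if s < 0.5:
        return 0.0, -4.0 * (s - 0.25)
    if s < 0.75:
        return 4.0 * (s - 0.5), -1.0
    return 1.0, -1.0 + 4.0 * (s - 0.75)

def erase_once(tau, dt=1e-3):
    n = int(tau / dt)
    x = rng.choice([-1.0, 1.0])            # random initial bit
    u0 = potential(x, *protocol(0.0))
    work = 0.0
    for i in range(n):
        c, f = protocol(i / n)
        # Langevin step at fixed protocol: heat exchange with the bath.
        x += force(x, c, f) * dt + np.sqrt(2.0 * dt) * rng.normal()
        # Protocol step at fixed x: work done on the particle.
        c2, f2 = protocol((i + 1) / n)
        work += potential(x, c2, f2) - potential(x, c, f)
    heat = work - (potential(x, *protocol(1.0)) - u0)  # first law: Q = W - dU
    return heat

for tau in (2.0, 10.0, 50.0):
    q = np.mean([erase_once(tau) for _ in range(100)])
    print(f"tau = {tau:5.1f}   <Q> = {q:.3f}   (ln 2 = {np.log(2):.3f})")
```

For slow cycles the mean dissipated heat should drift down toward roughly ln 2, though a crude protocol like this one will not saturate the bound exactly.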

1,019 citations

Journal ArticleDOI
TL;DR: In this article, the location of the solid-to-supersolid phase transition line is predicted from the effective model for both positive and negative hopping parameters; for positive hopping parameters, the calculations agree very accurately with numerical quantum Monte Carlo simulations.
Abstract: Hard-core bosons on a triangular lattice with nearest-neighbor repulsion are a prototypical example of a system with supersolid behavior on a lattice. We show that in this model the physical origin of the supersolid phase can be understood quantitatively and analytically by constructing quasiparticle excitations of defects that are moving on an ordered background. The location of the solid to supersolid phase transition line is predicted from the effective model for both positive and negative (frustrated) hopping parameters. For positive hopping parameters the calculations agree very accurately with numerical quantum Monte Carlo simulations. The numerical results indicate that the supersolid to superfluid transition is first order.
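One way to picture the effective quasiparticle construction is a nearest-neighbor tight-binding band for a single defect moving on the ordered background. The sketch below is a toy illustration only: the hopping amplitude t, the momentum grid, and the condensation criterion are assumptions, not the paper's effective model.

```python
# Toy single-defect band on a triangular lattice (illustration only).
import numpy as np

a1 = np.array([1.0, 0.0])
a2 = np.array([0.5, np.sqrt(3.0) / 2.0])

def defect_band(k, t):
    # Nearest-neighbor dispersion for one defect hopping on the ordered
    # background; k has shape (N, 2).
    return 2.0 * t * (np.cos(k @ a1) + np.cos(k @ a2) + np.cos(k @ (a1 + a2)))

kx, ky = np.meshgrid(np.linspace(-np.pi, np.pi, 201),
                     np.linspace(-np.pi, np.pi, 201))
ks = np.column_stack([kx.ravel(), ky.ravel()])

for t in (+1.0, -1.0):                  # unfrustrated vs frustrated hopping
    e = defect_band(ks, t)
    kmin = ks[np.argmin(e)]
    print(f"t = {t:+.1f}: band bottom {e.min():+.3f} at k ~ {np.round(kmin, 2)}")

# In this toy picture, defects condense (supersolid onset) once a defect
# creation gap Delta (a placeholder parameter) is overcome by the kinetic
# gain at the band bottom, i.e. Delta + e.min() < 0.
```

Note how the band minimum sits at the zone corners for t > 0 but moves to the zone center for frustrated (t < 0) hopping, which is why the two signs give different transition lines.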

33 citations

Journal ArticleDOI
TL;DR: In this paper, the entanglement of hard-core bosons in square and honeycomb lattices with nearest-neighbor interactions is estimated by means of quantum Monte Carlo simulations and spin-wave analysis.
Abstract: The entanglement of hard-core bosons in square and honeycomb lattices with nearest-neighbor interactions is estimated by means of quantum Monte Carlo simulations and spin-wave analysis. The particular U(1)-invariant form of the concurrence is used to establish a connection with observables such as density and superfluid density. For specific regimes the concurrence is expressed as a combination of boson density and superfluid density.
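For a U(1)-symmetric (particle-number-conserving) state, the two-site reduced density matrix takes an "X" form whose concurrence reduces to a closed expression in occupation correlators and the hopping correlator; the latter is what connects it to the superfluid density. A minimal sketch, with made-up placeholder correlator values standing in for the QMC / spin-wave estimates:

```python
# Concurrence of a U(1)-symmetric two-site hard-core-boson state.
# Correlator values below are placeholders, not results from the paper.
import numpy as np

def concurrence_u1(n_i, n_j, nn, z):
    # Basis {|00>, |01>, |10>, |11>} in site occupations.
    u = 1.0 - n_i - n_j + nn        # <(1 - n_i)(1 - n_j)>
    v = nn                          # <n_i n_j>
    w1 = n_j - nn                   # <(1 - n_i) n_j>
    w2 = n_i - nn                   # <n_i (1 - n_j)>
    rho = np.array([[u,   0.0, 0.0, 0.0],
                    [0.0, w1,  z,   0.0],
                    [0.0, z,   w2,  0.0],
                    [0.0, 0.0, 0.0, v]])   # written out only for clarity
    # For this X-shaped state, Wootters' formula reduces to:
    return max(0.0, 2.0 * (abs(z) - np.sqrt(u * v)))

# Placeholder half-filling values (illustration only):
print(concurrence_u1(n_i=0.5, n_j=0.5, nn=0.2, z=0.3))
```

The off-diagonal correlator z = <b_i^dagger b_j> is the quantity that spin-wave theory relates to the superfluid density, which is how the abstract's density/superfluid-density combination arises.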

2 citations


Cited by
Journal ArticleDOI
TL;DR: Efficiency and, in particular, efficiency at maximum power can be discussed systematically beyond the linear response regime for two classes of molecular machines, isothermal ones such as molecular motors, and heat engines such as thermoelectric devices, using a common framework based on a cycle decomposition of entropy production.
Abstract: Stochastic thermodynamics as reviewed here systematically provides a framework for extending the notions of classical thermodynamics such as work, heat and entropy production to the level of individual trajectories of well-defined non-equilibrium ensembles. It applies whenever a non-equilibrium process is still coupled to one (or several) heat bath(s) of constant temperature. Paradigmatic systems are single colloidal particles in time-dependent laser traps, polymers in external flow, enzymes and molecular motors in single molecule assays, small biochemical networks and thermoelectric devices involving single electron transport. For such systems, a first-law like energy balance can be identified along fluctuating trajectories. For a basic Markovian dynamics implemented either on the continuum level with Langevin equations or on a discrete set of states as a master equation, thermodynamic consistency imposes a local-detailed balance constraint on noise and rates, respectively. Various integral and detailed fluctuation theorems, which are derived here in a unifying approach from one master theorem, constrain the probability distributions for work, heat and entropy production depending on the nature of the system and the choice of non-equilibrium conditions. For non-equilibrium steady states, particularly strong results hold like a generalized fluctuation–dissipation theorem involving entropy production. Ramifications and applications of these concepts include optimal driving between specified states in finite time, the role of measurement-based feedback processes and the relation between dissipation and irreversibility. Efficiency and, in particular, efficiency at maximum power can be discussed systematically beyond the linear response regime for two classes of molecular machines, isothermal ones such as molecular motors, and heat engines such as thermoelectric devices, using a common framework based on a cycle decomposition of entropy production.
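The trajectory-level first law and the fluctuation theorems mentioned above can be demonstrated in a few lines. Here is a minimal sketch, assuming a colloidal particle in a harmonic trap whose stiffness is ramped from k_i to k_f (all parameter values are illustrative), with trajectory work defined via the stiffness updates and the Jarzynski relation <exp(-W)> = exp(-dF) checked against the exact free-energy change:

```python
# Stochastic-energetics sketch: harmonic trap with ramped stiffness.
# Units: k_B T = 1, mobility = 1. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
k_i, k_f, tau, dt = 1.0, 4.0, 2.0, 1e-3
n = int(tau / dt)

def run_trajectory():
    x = rng.normal(scale=1.0 / np.sqrt(k_i))   # equilibrium initial condition
    work = 0.0
    for i in range(n):
        k = k_i + (k_f - k_i) * i / n
        # Relaxation at fixed k: heat exchange with the bath.
        x += -k * x * dt + np.sqrt(2.0 * dt) * rng.normal()
        # Stiffness update at fixed x: work done on the particle.
        k_next = k_i + (k_f - k_i) * (i + 1) / n
        work += 0.5 * (k_next - k) * x**2
    return work

W = np.array([run_trajectory() for _ in range(2000)])
dF = 0.5 * np.log(k_f / k_i)                   # exact free-energy change
print(f"<W>       = {W.mean():.3f}  (second law: >= dF = {dF:.3f})")
print(f"-ln<e^-W> = {-np.log(np.mean(np.exp(-W))):.3f}  (Jarzynski ~ dF)")
```

The mean work exceeds dF (second law), while the exponential average recovers dF; note that the exponential estimator converges slowly, so the numbers are illustrative rather than precise.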

2,834 citations

Journal ArticleDOI
TL;DR: The latest generations of sophisticated synthetic molecular machine systems in which the controlled motion of subcomponents is used to perform complex tasks are discussed, paving the way to applications and the realization of a new era of “molecular nanotechnology”.
Abstract: The widespread use of molecular machines in biology has long suggested that great rewards could come from bridging the gap between synthetic molecular systems and the machines of the macroscopic world. In the last two decades, it has proved possible to design synthetic molecular systems with architectures where triggered large amplitude positional changes of submolecular components occur. Perhaps the best way to appreciate the technological potential of controlled molecular-level motion is to recognize that nanomotors and molecular-level machines lie at the heart of every significant biological process. Over billions of years of evolution, nature has not repeatedly chosen this solution for performing complex tasks without good reason. When mankind learns how to build artificial structures that can control and exploit molecular level motion and interface their effects directly with other molecular-level substructures and the outside world, it will potentially impact on every aspect of functional molecule and materials design. An improved understanding of physics and biology will surely follow. The first steps on the long path to the invention of artificial molecular machines were arguably taken in 1827 when the Scottish botanist Robert Brown observed the haphazard motion of tiny particles under his microscope.1,2 The explanation for Brownian motion, that it is caused by bombardment of the particles by molecules as a consequence of the kinetic theory of matter, was later provided by Einstein, followed by experimental verification by Perrin.3,4 The random thermal motion of molecules and its implications for the laws of thermodynamics in turn inspired Gedankenexperiments ("thought experiments") that explored the interplay (and apparent paradoxes) of Brownian motion and the Second Law of Thermodynamics. Richard Feynman's famous 1959 lecture "There's plenty of room at the bottom" outlined some of the promise that manmade molecular machines might hold.5,6 However, Feynman's talk came at a time before chemists had the necessary synthetic and analytical tools to make molecular machines. While interest among synthetic chemists began to grow in the 1970s and 1980s, progress accelerated in the 1990s, particularly with the invention of methods to make mechanically interlocked molecular systems (catenanes and rotaxanes) and control and switch the relative positions of their components.7−24 Here, we review triggered large-amplitude motions in molecular structures and the changes in properties these can produce. We concentrate on conformational and configurational changes in wholly covalently bonded molecules and on catenanes and rotaxanes in which switching is brought about by various stimuli (light, electrochemistry, pH, heat, solvent polarity, cation or anion binding, allosteric effects, temperature, reversible covalent bond formation, etc.). Finally, we discuss the latest generations of sophisticated synthetic molecular machine systems in which the controlled motion of subcomponents is used to perform complex tasks, paving the way to applications and the realization of a new era of "molecular nanotechnology".

1.1. The Language Used To Describe Molecular Machines

Terminology needs to be properly and appropriately defined and these meanings used consistently to effectively convey scientific concepts. Nowhere is the need for accurate scientific language more apparent than in the field of molecular machines.
Much of the terminology used to describe molecular-level machines has its origins in observations made by biologists and physicists, and their findings and descriptions have often been misinterpreted and misunderstood by chemists. In 2007 we formalized definitions of some common terms used in the field (e.g., “machine”, “switch”, “motor”, “ratchet”, etc.) so that chemists could use them in a manner consistent with the meanings understood by biologists and physicists who study molecular-level machines.14 The word “machine” implies a mechanical movement that accomplishes a useful task. This Review concentrates on systems where a stimulus triggers the controlled, relatively large amplitude (or directional) motion of one molecular or submolecular component relative to another that can potentially result in a net task being performed. Molecular machines can be further categorized into various classes such as “motors” and “switches” whose behavior differs significantly.14 For example, in a rotaxane-based “switch”, the change in position of a macrocycle on the thread of the rotaxane influences the system only as a function of state. Returning the components of a molecular switch to their original position undoes any work done, and so a switch cannot be used repetitively and progressively to do work. A “motor”, on the other hand, influences a system as a function of trajectory, meaning that when the components of a molecular motor return to their original positions, for example, after a 360° directional rotation, any work that has been done is not undone unless the motor is subsequently rotated by 360° in the reverse direction. This difference in behavior is significant; no “switch-based” molecular machine can be used to progressively perform work in the way that biological motors can, such as those from the kinesin, myosin, and dynein superfamilies, unless the switch is part of a larger ratchet mechanism.14
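The "larger ratchet mechanism" invoked at the end of that distinction can be illustrated numerically. Below is a sketch of the generic flashing-ratchet model from statistical physics (not a specific system from this Review; the sawtooth shape, switching times, and all parameters are assumptions): periodically switching an asymmetric potential on and off rectifies unbiased Brownian motion into net directed drift, which is exactly the trajectory-dependent behavior that separates a motor from a switch.

```python
# Toy flashing ratchet: an asymmetric sawtooth potential switched on/off
# rectifies Brownian motion into net drift. Units: k_B T = 1, mobility = 1.
import numpy as np

rng = np.random.default_rng(2)
L, asym = 1.0, 0.2            # spatial period; position of the sawtooth peak
U0, dt = 5.0, 1e-4            # barrier height; time step
t_on, t_off = 0.05, 0.05      # on/off durations of the flashing cycle

def ratchet_force(x):
    # Sawtooth rising steeply on [0, asym], falling gently on [asym, L].
    s = x % L
    return -U0 / asym if s < asym else U0 / (L - asym)

x, t, T_total = 0.0, 0.0, 20.0
period = t_on + t_off
while t < T_total:
    on = (t % period) < t_on
    f = ratchet_force(x) if on else 0.0          # potential flashed on/off
    x += f * dt + np.sqrt(2.0 * dt) * rng.normal()
    t += dt
print(f"net displacement after t = {T_total}: {x:+.2f} ({x / L:+.1f} periods)")
```

Because the potential is asymmetric, diffusion during the off phase is captured preferentially on one side when the potential reappears, so the particle advances on average even though the noise itself is unbiased.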

1,434 citations

Journal ArticleDOI
TL;DR: In this paper, the roles of perturbative renormalization group (RG) approaches and self-consistent renormalized spin-fluctuation (SCR-SF) theories in understanding the quantum-classical crossover in the vicinity of the quantum critical point are discussed, with a generalization to the Kondo effect in heavy-fermion systems.
Abstract: We give a general introduction to quantum phase transitions in strongly-correlated electron systems. These transitions, which occur at zero temperature when a non-thermal parameter $g$ like pressure, chemical composition or magnetic field is tuned to a critical value, are characterized by a dynamic exponent $z$ related to the energy and length scales $\Delta$ and $\xi$. Simple arguments based on an expansion to first order in the effective interaction allow one to define an upper critical dimension $D_{C}=4$ (where $D=d+z$ and $d$ is the spatial dimension) below which the mean-field description is no longer valid. We emphasize the role of perturbative renormalization group (RG) approaches and self-consistent renormalized spin fluctuation (SCR-SF) theories to understand the quantum-classical crossover in the vicinity of the quantum critical point with generalization to the Kondo effect in heavy-fermion systems. Finally we quote some recent inelastic neutron scattering experiments performed on heavy-fermions which lead to an unusual scaling law in $\omega /T$ for the dynamical spin susceptibility, revealing critical local modes beyond the itinerant magnetism scheme, and mention new attempts to describe this local quantum critical point.
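The counting behind $D_C = 4$ can be spelled out in a few lines. This is the standard scaling argument in the abstract's own notation (a textbook sketch, not the paper's derivation):

```latex
% The gap and correlation length are tied together by the dynamic
% exponent z:
\[
  \xi \sim |g - g_c|^{-\nu}, \qquad \Delta \sim \xi^{-z},
\]
% so imaginary time scales like z extra spatial dimensions, and a
% quartic coupling u rescales under b-fold coarse graining as
\[
  u' = b^{\,4 - (d + z)}\, u = b^{\,4 - D}\, u, \qquad D = d + z .
\]
% For D > 4 the coupling is irrelevant and mean-field exponents hold;
% for D < D_C = 4 fluctuations dominate and the RG / SCR-SF treatments
% described in the abstract are required.
```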

1,347 citations

Journal ArticleDOI
TL;DR: In this article, the authors present a theoretical framework for the thermodynamics of information based on stochastic thermodynamics and fluctuation theorems, review some recent experimental results, and present an overview of the state of the art in the field.
Abstract: By its very nature, the second law of thermodynamics is probabilistic, in that its formulation requires a probabilistic description of the state of a system. This raises questions about the objectivity of the second law: does it depend, for example, on what we know about the system? For over a century, much effort has been devoted to incorporating information into thermodynamics and assessing the entropic and energetic costs of manipulating information. More recently, this historically theoretical pursuit has become relevant in practical situations where information is manipulated at small scales, such as in molecular and cell biology, artificial nano-devices or quantum computation. Here we give an introduction to a novel theoretical framework for the thermodynamics of information based on stochastic thermodynamics and fluctuation theorems, review some recent experimental results, and present an overview of the state of the art in the field. The task of integrating information into the framework of thermodynamics dates back to Maxwell and his infamous demon. Recent advances have made these ideas rigorous—and brought them into the laboratory.
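The entropic value of a measurement can be made quantitative with the textbook Szilard engine. Here is a sketch, assuming a noisy measurement with error rate eps (a toy model in the spirit of the review, with illustrative parameters): the feedback moves the partition to match the measurement, and the mean extracted work equals the mutual information $I = \ln 2 - H(\varepsilon)$, saturating the information bound $\langle W_{\mathrm{ext}}\rangle \le k_B T\, I$.

```python
# Szilard engine with a noisy measurement (error rate eps).
# Units: k_B T = 1. Toy model, illustrative parameters.
import numpy as np

rng = np.random.default_rng(3)
eps, n = 0.1, 100_000

side = rng.integers(0, 2, n)                 # true side of the particle
flip = rng.random(n) < eps
meas = np.where(flip, 1 - side, side)        # noisy measurement outcome

# Quasi-static feedback: expand the measured compartment from volume 1/2
# to volume 1 - eps. Isothermal one-particle work: ln(V_final / V_initial)
# evaluated for the compartment that actually contains the particle.
w = np.where(meas == side, np.log(2.0 * (1.0 - eps)), np.log(2.0 * eps))

H = -(eps * np.log(eps) + (1 - eps) * np.log(1 - eps))
I = np.log(2.0) - H                          # mutual information, in nats
print(f"<W_ext> = {w.mean():.4f}   I = {I:.4f}   ln 2 = {np.log(2):.4f}")
```

For a perfect measurement (eps -> 0) this recovers the familiar k_B T ln 2 per bit; any measurement noise reduces the extractable work by exactly the lost information.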

879 citations

01 Jan 1952

748 citations