Counterdiabatic control of biophysical processes
Efe Ilker,^{1,2,3} Özenç Güngör,^{4} Benjamin Kuznets-Speck,^{4,5} Joshua Chiel,^{4,6} Sebastian Deffner,^{7,8} and Michael Hinczewski^{4}

^{1}Laboratoire Physico-Chimie Curie, Institut Curie, PSL Research University, CNRS UMR 168, Paris, France
^{2}Sorbonne Universités, UPMC Univ. Paris 06, Paris, France
^{3}Max Planck Institute for the Physics of Complex Systems, 01187 Dresden, Germany
^{4}Department of Physics, Case Western Reserve University, Cleveland, OH 44106, USA
^{5}Biophysics Graduate Group, University of California, Berkeley, CA 94720, USA
^{6}Department of Physics, University of Maryland, College Park, Maryland 20742, USA
^{7}Department of Physics, University of Maryland, Baltimore County, Baltimore, MD 21250, USA
^{8}Instituto de Física ‘Gleb Wataghin’, Universidade Estadual de Campinas, 13083-859, Campinas, São Paulo, Brazil

Electronic addresses: ilker@pks.mpg.de, deffner@umbc.edu, michael.hinczewski@case.edu
The biochemical reaction networks that regulate living systems are all stochastic to varying degrees. The resulting randomness affects biological outcomes at multiple scales, from the functional states of single proteins in a cell to the evolutionary trajectory of whole populations. Controlling how the distribution of these outcomes changes over time—via external interventions like time-varying concentrations of chemical species—is a complex challenge. In this work, we show how counterdiabatic (CD) driving, first developed to control quantum systems, provides a versatile tool for steering biological processes. We develop a practical graph-theoretic framework for CD driving in discrete-state continuous-time Markov networks. We illustrate the formalism with examples from gene regulation and chaperone-assisted protein folding, demonstrating the possibility that nature can exploit CD driving to accelerate response to sudden environmental changes. We generalize the method to continuum Fokker-Planck models, and apply it to study AFM single-molecule pulling experiments in regimes where the typical assumption of adiabaticity breaks down, as well as an evolutionary model with competing genetic variants subject to time-varying selective pressures. The AFM analysis shows how CD driving can eliminate non-equilibrium artifacts due to large force ramps in such experiments, allowing accurate estimation of biomolecular properties.
A fundamental dichotomy for biological processes is that they are both intrinsically stochastic and tightly controlled. The stochasticity arises from the random nature of the underlying biochemical reactions, and has significant consequences in a variety of contexts: gene expression [1], motor proteins [2], protein folding [3], all the way up to the ecological interactions and evolution of entire populations of organisms [4, 5]. Theories for such systems often employ discrete state Markov models (or continuum analogues like Fokker-Planck equations) which describe how the probability distribution of system states evolves over time. On the other hand, biology utilizes a wide array of control knobs to regulate such distributions, most often through time-dependent changes in the concentration of chemical species that influence state transition rates. In many cases these changes occur due to environmental cues—either threatening or beneficial—and the system response must be sufficiently fast to avoid danger or gain advantage.
The interplay of randomness and regulation naturally leads us to ask about the limits of control: to what extent can a biological system be driven through a prescribed trajectory of probability distributions over a finite time interval? Beyond curiosity over whether nature actually tests these limits in vivo, this question also arises in experimental contexts. Certain biophysical methods like optical tweezers or atomic force microscopy (AFM) apply perturbations (e.g. mechanical force) to alter the state distribution of single biomolecules in order to extract their intrinsic properties [6]. Controlling the distribution can facilitate interpretation of the data. In synthetic biology [7] one may want to precisely specify the probabilistic behavior of genetic switches or other regulatory circuit components in response to a stimulus.
Control of a system is generally easiest to describe and quantify if the perturbation is applied slowly. For example, some tweezer or AFM experiments use an increasing force ramp to unfold single molecules or rupture molecular complexes [8]. Theoretical treatments of this process typically assume the force changes slowly enough (adiabatically) that the system remains in quasi-equilibrium [8-11]. The advantage of this assumption is that, at each moment of the experimental protocol, the approximate form of the state probability distribution is known from equilibrium thermodynamics. Deriving results for faster pulling rates is more challenging [12], but useful in order to compare experiments with molecular dynamics simulations. In natural settings, responses to rapid environmental changes may entail sharp changes in the concentrations of biochemical components. For instance, an ambient temperature increase of even a few degrees can significantly increase the probability that proteins misfold and aggregate. In response to such “heat shock”, cells quickly upregulate the number of chaperones—specialized proteins that facilitate unfolding or disaggregating misfolded proteins [13-18].
There is no guarantee that the quasi-equilibrium assumption holds throughout such a process, and thus the standard tools of equilibrium or near-equilibrium thermodynamics (i.e. linear response theory) do not necessarily apply. If we are driving a system over a finite-time interval, subject to fluctuations that take us far from equilibrium, can we still attain a degree of control? In particular, can we force the system to mimic quasi-equilibrium behavior, following a certain sequence of known target distributions, but at arbitrarily fast speeds?
Interestingly, this situation strongly resembles questions from quantum control and quantum thermodynamics [19], where a new line of research has been dubbed “shortcuts to adiabaticity”. In recent years a great deal of theoretical and experimental work has been dedicated to mathematical tools and practical schemes to suppress nonequilibrium excitations in finite-time, nonequilibrium processes. To this end, a variety of techniques have been developed: the use of dynamical invariants [20], the inversion of scaling laws [21], the fast-forward technique [22-29], optimal protocols from optimal control theory [30-33], optimal driving from properties of quantum work statistics [34], “environment” assisted methods [35], using the properties of Lie algebras [36], and approximate methods such as linear response theory [37-40], fast quasistatic dynamics [41], or time-rescaling [42, 43], to name just a few. See Refs. [44, 45] and references therein for comprehensive reviews of these techniques.
Among this plethora of different approaches, counterdiabatic (CD) or transitionless quantum driving stands out, since it is the only method that suppresses excitations away from the adiabatic manifold at all instants. In this paradigm [46-49] one considers a time-dependent Hamiltonian H_0(t) with instantaneous eigenvalues {ε_n(t)} and eigenstates {|n(t)⟩}. In the adiabatic limit no transitions between eigenstates occur [50], and each eigenstate acquires only a time-dependent phase that can be separated into a dynamical and a geometric contribution [51]. In other words, if we start in a particular eigenstate |n(0)⟩ at t = 0, we remain in the corresponding instantaneous eigenstate |n(t)⟩ at all later times, up to a phase. The goal of CD driving is to make the system follow the same target trajectory of eigenstates as in the adiabatic case, but over a finite time.
To accomplish this, a CD Hamiltonian H(t) can be constructed, such that the adiabatic approximation associated with H_0(t) is an exact solution of the dynamics generated by H(t) under the time-dependent Schrödinger equation. It is reasonably easy to derive that time-evolution under [46-48],

H(t) = H_0(t) + H_1(t) = H_0(t) + iℏ Σ_n ( |∂_t n⟩⟨n| − ⟨n|∂_t n⟩ |n⟩⟨n| ),    (1)
maintains the system on the adiabatic manifold. Note that it is the auxiliary Hamiltonian H_1(t) that enforces evolution along the adiabatic manifold of H_0(t): if a system is prepared in an eigenstate |n(0)⟩ of H_0(0) and subsequently evolves under H(t), then the term H_1(t) effectively suppresses the non-adiabatic transitions out of |n(t)⟩ that would arise in the absence of this term.
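As a minimal numerical check of Eq. (1), the sketch below applies CD driving to a rapidly swept two-level Hamiltonian (a toy system of our own choosing, not one treated in this paper), using the standard sum-over-states rewriting of the auxiliary term, H_1 = iℏ Σ_{m≠n} |m⟩⟨m| (∂_t H_0) |n⟩⟨n| / (ε_n − ε_m). Starting in the instantaneous ground state, the final overlap with the instantaneous ground state stays near unity even though the sweep is far too fast to be adiabatic for H_0 alone:

import numpy as np
from scipy.linalg import expm

# Pauli matrices and a linearly swept two-level Hamiltonian (illustrative toy model); hbar = 1
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H0(t, tau, B=2.0, Bx=0.5):
    return B * (2 * t / tau - 1) * sz + Bx * sx     # z-field sweeps from -B to +B over time tau

def H_cd(t, tau):
    # Eq. (1) in its equivalent sum-over-states form:
    # H1 = i * sum_{m != n} |m><m| dH0/dt |n><n| / (eps_n - eps_m)
    evals, evecs = np.linalg.eigh(H0(t, tau))
    dH0 = (4.0 / tau) * sz                          # dH0/dt for the linear sweep above (B = 2)
    H1 = np.zeros((2, 2), dtype=complex)
    for n in range(2):
        for m in range(2):
            if m != n:
                Pm = np.outer(evecs[:, m], evecs[:, m].conj())
                Pn = np.outer(evecs[:, n], evecs[:, n].conj())
                H1 += 1j * Pm @ dH0 @ Pn / (evals[n] - evals[m])
    return H0(t, tau) + H1

tau, steps = 0.05, 2000                             # far too fast to be adiabatic for H0 alone
dt = tau / steps
psi = np.linalg.eigh(H0(0.0, tau))[1][:, 0].astype(complex)   # instantaneous ground state at t = 0
for k in range(steps):
    psi = expm(-1j * H_cd((k + 0.5) * dt, tau) * dt) @ psi

ground_final = np.linalg.eigh(H0(tau, tau))[1][:, 0]
print(abs(np.vdot(ground_final, psi))**2)           # ~1.0: the auxiliary term suppresses all transitions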
To date, a few dozen experiments have implemented and utilized such shortcuts to adiabaticity to, for instance, transport ions or load BECs into an optical trap without creating parasitic excitations [45]. However, due to the mathematical complexity of the auxiliary Hamiltonian (1), counterdiabatic driving has been restricted to “simple” quantum systems. Note that in order to compute H_1(t) one requires the instantaneous eigenstates of the unperturbed Hamiltonian, which is practically, conceptually, and numerically a rather involved task.
On the other hand, the scope of CD driving is not limited to the quantum realm. Because of the close mathematical analogies between classical stochastic systems and quantum mechanics, it was recently recognized that the CD paradigm can also be formalized for classical scenarios [29, 49, 52-57]. The classical analogue of driving a system along a target trajectory of eigenstates is a trajectory of instantaneous stationary distributions. Last year, our group and collaborators developed the first biological application of CD driving: controlling the distribution of genotypes in an evolving cellular population via external drug protocols [58]. This type of “evolutionary steering” has various potential applications, most notably in designing strategies to combat drug resistance in bacterial diseases and tumors. The CD formalism in this case was built around a multi-dimensional Fokker-Planck model, generalizing the one-dimensional Fokker-Planck approach of Ref. [55].
To date, however, there does not exist a universal framework for calculating CD strategies that covers the wide diversity of stochastic models used in biology, including both discrete state and continuum approaches. In the following, we develop such a framework, taking advantage of graph theory to construct a general CD algorithm that can be applied to systems of arbitrary complexity. The usefulness of this method is of course not confined to biology, but is relevant to other classical systems described by Markovian transitions between states. However, biology provides a singularly fascinating context in which to explore CD driving, both because it sheds light on the possibility of control in complex stochastic systems with many interacting components, and provides an accessible platform for future experimental tests of these ideas.
Outline: In Sec. I we formulate a theory of CD driving for any discrete state Markov model. By looking at the properties of the probability current graph associated with the master equation of the model, we can express CD solutions in terms of spanning trees and fundamental cycles of the graph. Beyond its practical utility, the graphical approach highlights the degeneracy of CD driving: the potential existence of many distinct, physically realizable CD protocols that drive a system through the same target trajectory of probability distributions. The graphical approach is schematically summarized in Fig. 1, highlighting the components in the most general form for CD solutions, Eq. (26).
In Sec. II we apply our formalism to two biological examples, a repressor-corepressor genetic regulatory switch, and a chaperone protein that catalyzes the unfolding of a misfolded protein in response to a heat shock. The examples allow us to investigate the physical constraints and thermodynamic costs associated with specific CD solutions, and the usefulness of the CD approach even in cases where an exact CD protocol cannot be physically implemented.
In Sec. III we show how CD driving in continuum systems (i.e. Fokker-Planck models with position-dependent diffusivity) is a special limiting case of our discrete state approach. We then apply the continuum theory to analysis of AFM pulling experiments on biomolecules, and show how CD driving can compensate for non-equilibrium artifacts, allowing us to extract molecular information even in non-adiabatic, fast pulling scenarios. This Fokker-Planck example is one-dimensional, but the Supplementary Information [SI] shows how our approach can be used on higher-dimensional continuum systems as well. We demonstrate a numerical CD solution for a two-dimensional Fokker-Planck equation describing an evolving cell population with three competing genetic variants, where the distribution of variants is driven along a target trajectory by time-varying selective pressures.
The examples in Secs. II, III, and the SI are self-contained, so after going over the general solution in Sec. I the reader is free to jump to any one that may be of particular interest. The diversity of the examples—from biochemical networks describing individual genes and proteins to the evolution of entire populations of cells—is meant to provide a practical guide on how to apply our theory to the kinds of models that regularly appear in biophysical contexts.
Sec. IV concludes with connections to other areas of nonequilibrium thermodynamics and questions for future work.
I. GENERAL THEORY OF COUNTERDIABATIC DRIVING IN DISCRETE STATE MARKOV MODELS
A. Setting up the counterdiabatic driving problem
1. Master equation and the CD transition matrix
Consider an N-state Markov system described by a vector p(t) whose component p_i(t), i = 1, . . . , N, is the probability of being in state i at time t. The distribution p(t) evolves under the master equation [59, 60],

∂_t p(t) = Ω(λ_t) p(t).    (2)
The off-diagonal element Ω_ij(λ_t), i ≠ j, of the N × N matrix Ω(λ_t) represents the conditional probability per unit time to transition to state i, given that the system is currently in state j. The diagonal elements Ω_ii(λ_t) = −Σ_{j≠i} Ω_ji(λ_t) ensure each column of the matrix sums to zero [59]. The transition rates Ω_ij(λ_t) depend on a control protocol: a set of time-varying external parameters, denoted collectively by λ(t) ≡ λ_t. Ω(λ_t) plays the role of the Hamiltonian H_0(t) in the classical analogy.
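To make the discrete-state setup concrete, here is a minimal numerical sketch (with arbitrary illustrative rates, not one of the models treated later) that builds a 3-state Ω(λ) obeying the column-sum-to-zero condition and integrates Eq. (2) under a time-dependent protocol λ_t:

import numpy as np

def transition_matrix(lam):
    # Off-diagonal entry W[i, j] is the rate for the jump j -> i; the rates below
    # are arbitrary illustrative functions of a single control parameter lam.
    W = np.array([[0.0,       1.0,       0.5 * lam],
                  [2.0 * lam, 0.0,       1.0      ],
                  [0.3,       0.8 * lam, 0.0      ]])
    W -= np.diag(W.sum(axis=0))   # Omega_ii = -sum_{j != i} Omega_ji: each column sums to zero
    return W

print(transition_matrix(1.2).sum(axis=0))   # ~[0, 0, 0]: probability is conserved

# forward-Euler integration of Eq. (2) for a sinusoidal protocol lam_t
p, dt, T = np.array([1.0, 0.0, 0.0]), 1e-3, 5.0
for k in range(int(T / dt)):
    lam_t = 1.0 + 0.5 * np.sin(2 * np.pi * k * dt / T)
    p = p + dt * transition_matrix(lam_t) @ p
print(p, p.sum())   # a valid distribution at the final time (up to integration error)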
The instantaneous stationary probability ρ(λ_t) associated with Ω(λ_t) is the right eigenvector with eigenvalue zero,

Ω(λ_t) ρ(λ_t) = 0.    (3)

When λ_t has a non-constant time dependence, ρ(λ_t) in general is not a solution to Eq. (2), except in the adiabatic limit when the control parameters are varied infinitesimally slowly, ∂_t λ_t → 0. The sequence of distributions ρ(λ_t) as a function of λ_t defines a target trajectory for the system, analogous to the eigenstate trajectory |n(t)⟩ in the quantum version of CD.
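Continuing the sketch above, the instantaneous stationary distribution of Eq. (3) can be computed as the (normalized) null vector of Ω(λ_t); comparing the integrated p(t) with ρ(λ_t) for a slow versus a fast protocol shows the lag that CD driving is designed to remove:

import numpy as np
from scipy.linalg import null_space
# (transition_matrix is the 3-state example defined in the previous sketch)

def stationary_distribution(W):
    # right eigenvector of W with eigenvalue zero, normalized to a probability vector
    rho = null_space(W)[:, 0]
    return rho / rho.sum()

W = transition_matrix(1.2)
print(np.allclose(W @ stationary_distribution(W), 0.0))   # True: Eq. (3)

# p(t) tracks rho(lam_t) only when the protocol is slow compared to the relaxation time
for T in (100.0, 0.5):                  # slow sweep vs. fast sweep of lam_t from 1.0 to 1.5
    steps = 20000
    dt = T / steps
    p = stationary_distribution(transition_matrix(1.0))   # start on the target trajectory
    for k in range(steps):
        lam_t = 1.0 + 0.5 * (k * dt / T)
        p = p + dt * transition_matrix(lam_t) @ p
    gap = np.abs(p - stationary_distribution(transition_matrix(1.5))).max()
    print(f"T = {T}: deviation from the final rho = {gap:.4f}")   # tiny for the slow sweep, larger for the fast one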
Given an instantaneous probability trajectory ρ(λ_t) defined by Eq. (3), we would like to find a counterdiabatic (CD) transition matrix Ω̃(λ_t, λ̇_t) such that the new master equation,

∂_t ρ(λ_t) = Ω̃(λ_t, λ̇_t) ρ(λ_t),    (4)

evolves in time with state probabilities described by ρ(λ_t). Here λ̇_t ≡ ∂_t λ_t. We are thus forcing the system to mimic adiabatic time evolution, even when λ̇_t is nonzero. As we will see below, Ω̃(λ_t, λ̇_t) will in general depend both on the instantaneous values of the control parameters λ_t and their rate of change λ̇_t. In the limit of adiabatic driving we should recover the original transition matrix, Ω̃(λ_t, λ̇_t → 0) = Ω(λ_t). Solving for the CD protocol corresponds to determining the elements of the Ω̃(λ_t, λ̇_t) matrix in Eq. (4) given a certain ρ(λ_t). This corresponds to finding the CD Hamiltonian H(t) of Eq. (1) in the quantum case.
We can look at the counterdiabatic problem as a special case of a more general question: given a certain time-dependent probability distribution that is our target, what is the transition matrix of the master equation for which this distribution is a solution? In effect, this is the inverse of the typical approach for the master equation, where we know the transition matrix and solve for the distribution.
2. Representing the system via an oriented current graph
To facilitate finding CD solutions, we start by expressing the original master equation of Eq. (2) equivalently in terms of probability currents between states,

∂_t p_i(t) = Σ_j J_ij(t),  i = 1, . . . , N,    (5)

where the current from state j to i is given by:

J_ij(t) ≡ Ω_ij(λ_t) p_j(t) − Ω_ji(λ_t) p_i(t).    (6)
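A quick numerical check of Eqs. (5)-(6), reusing the 3-state transition_matrix from the earlier sketch: the net current into each state reproduces the right-hand side of the master equation.

import numpy as np

W = transition_matrix(1.2)            # 3-state example from the earlier sketch
p = np.array([0.5, 0.3, 0.2])         # an arbitrary test distribution

J = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        if i != j:
            J[i, j] = W[i, j] * p[j] - W[j, i] * p[i]   # Eq. (6): current from j to i

print(np.allclose(J.sum(axis=1), W @ p))   # True: Eq. (5) recovers dp_i/dt = (Omega p)_i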
We can interpret any pair of states (i, j) where either Ω_ij(λ_t) ≠ 0 or Ω_ji(λ_t) ≠ 0 at some point during the protocol as being connected via an edge on a graph whose vertices are the states i = 1, . . . , N. Let E be the number of edges in the resulting graph, and define a numbering α = 1, . . . , E and an arbitrary orientation for the edges such that each α corresponds to a specific edge and choice of current direction. For example if edge α was between states (i, j), and the choice of direction was from j to i, then we can define current J_α(t) ≡ J_ij(t) for that edge. Alternatively if the choice of direction was from i to j, then J_α(t) ≡ J_ji(t) = −J_ij(t). In this way we associate the master equation with a directed graph, a simple example of which is illustrated in Fig. 2. Eq. (5) can be rewritten in terms of the oriented currents J_α(t) as

∂_t p(t) = ∇ J(t),    (7)

where J(t) is an E-dimensional vector with components J_α(t), and ∇ is an N × E dimensional matrix known as the incidence matrix of the directed graph [61] (closely related to the stoichiometric matrix of Ref. [62]). The components of ∇ are given by

∇_iα =  +1  if the direction of edge α is toward i,
        −1  if the direction of edge α is away from i,
         0  if edge α does not connect to i.    (8)
The αth column of ∇ contains a single +1 and a single −1, since each edge must have an origin and a destination state. Conservation of probability is thus enforced by summing over rows in Eq. (7), since Σ_i ∇_iα = 0, and so Σ_{i=1}^N ∂_t p_i(t) = 0. Since any given row of Eq. (7) is thus linearly dependent on the other rows, it is convenient to work in the reduced representation of the equation where we leave out the row corresponding to a certain reference state (taken to be state N),

∂_t p̂(t) = ∇̂ J(t).    (9)
Here p̂(t) = (p_1(t), . . . , p_{N−1}(t)) and the (N − 1) × E dimensional reduced incidence matrix ∇̂ is equal to ∇ with the Nth row removed. Our focus will be on systems where there is a unique instantaneous stationary probability vector ρ(λ_t) at every t. In this case the master equation necessarily corresponds to a connected graph in the oriented current picture [59]. By a well known result in graph theory, both the full and reduced incidence matrices ∇ and ∇̂ of a connected, directed graph with N vertices have rank N − 1 [61]. This means that all N − 1 rows of ∇̂ are linearly independent for the systems we consider.
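The sketch below builds the incidence matrix of Eq. (8) for an arbitrary edge list (the 4-state, 5-edge two-loop topology used later as an example, with an orientation chosen by us) and checks the two structural facts just stated: each column sums to zero, and the reduced matrix has rank N − 1.

import numpy as np

def incidence_matrix(n_states, oriented_edges):
    # Eq. (8): edge alpha oriented from state j to state i gets +1 in row i, -1 in row j
    M = np.zeros((n_states, len(oriented_edges)))
    for alpha, (j, i) in enumerate(oriented_edges):
        M[i, alpha], M[j, alpha] = +1.0, -1.0
    return M

# 4 states, 5 edges (two-loop topology); the orientations here are an arbitrary choice
edges = [(0, 1), (1, 2), (2, 0), (3, 0), (3, 1)]
nabla = incidence_matrix(4, edges)

print(nabla.sum(axis=0))                    # each column sums to zero
nabla_hat = nabla[:-1, :]                   # reduced incidence matrix: drop the reference state N
print(np.linalg.matrix_rank(nabla_hat))     # N - 1 = 3 for a connected graph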
Having described the original master equation of Eq. (2) in terms of oriented currents, we can do the same for Eqs. (3) and (4). Let us define the oriented stationary current 𝒥_α(t) for the distribution ρ(λ_t) as follows: if the αth edge is oriented from j to i then

𝒥_α(t) ≡ Ω_ij(λ_t) ρ_j(λ_t) − Ω_ji(λ_t) ρ_i(λ_t).    (10)

The reduced representation of Eq. (3) corresponds to

∇̂ 𝒥(t) = 0.    (11)
Analogously for the CD master equation, Eq. (4), we define the oriented current

J̃_α(t) ≡ Ω̃_ij(λ_t, λ̇_t) ρ_j(λ_t) − Ω̃_ji(λ_t, λ̇_t) ρ_i(λ_t).    (12)

The time dependence of J̃_α is explicitly through λ_t and λ̇_t, but we write it in more compact form as J̃_α(t) to avoid cumbersome notation. Then Eq. (4) can be expressed as

∂_t ρ̂(λ_t) = ∇̂ J̃(t).    (13)
3. Counterdiabatic current equation
Subtracting Eq. (11) from Eq. (13) we find

∂_t ρ̂(λ_t) = ∇̂ δJ(t),    (14)

where δJ(t) ≡ J̃(t) − 𝒥(λ_t) is the difference between the CD and stationary current vectors. For the CD problem, we are given the original matrix elements Ω_ij(λ_t) and thus also have the corresponding stationary distribution values ρ_i(λ_t) and stationary currents 𝒥_α(λ_t). What we need to determine, via Eq. (14), are the CD currents J̃(t). We can then use Eq. (12) to solve for the CD matrix transition rates Ω̃_ij(λ_t, λ̇_t). By construction, these satisfy Eq. (4), and hence define a CD protocol for the system.
As a first step, let us consider the invertibility of Eq. (14) to solve for δJ(t). The (N − 1) × E dimensional matrix ∇̂ is generally non-square: N(N − 1)/2 ≥ E ≥ N − 1 for a connected graph. Only in the special case of tree-like graphs (no loops) do we have E = N − 1 and a square (N − 1) × (N − 1) matrix ∇̂. Since the rank of ∇̂ is N − 1, as mentioned above, for tree-like graphs ∇̂ is invertible and Eq. (14) can be solved without any additional complications:

δJ(t) = ∇̂^{−1} ∂_t ρ̂(λ_t)   iff E = N − 1.    (15)

As described in the next section, the elements of ∇̂^{−1} for a tree-like graph can be obtained directly through a graphical procedure, without the need to do any explicit matrix inversion.
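To illustrate Eq. (15) end to end, the sketch below treats a 3-state chain (a tree, E = N − 1 = 2) with illustrative λ-dependent rates of our own choosing, computes δJ = ∇̂^{−1} ∂_t ρ̂, and converts the CD currents into CD rates via Eq. (12). Since Eq. (12) fixes only the net current on each edge, one of the two rates per edge must be chosen by hand; here we simply keep each reverse rate at its original value, which is one of many possible choices.

import numpy as np

# 3-state chain 0 -- 1 -- 2 (a tree: E = N - 1 = 2). The lam-dependence of the
# rates below is an arbitrary illustration, not one of the models in the paper.
def rates(lam):
    k01, k10 = 1.0 + lam, 0.5          # edge a, between states 0 and 1
    k12, k21 = 2.0, 0.4 + lam          # edge b, between states 1 and 2
    return k01, k10, k12, k21

def omega(lam):
    k01, k10, k12, k21 = rates(lam)
    return np.array([[-k01,          k10,   0.0 ],
                     [ k01, -(k10 + k12),   k21 ],
                     [ 0.0,          k12,  -k21 ]])

def rho(lam):
    # stationary distribution of a chain (detailed balance holds on a tree)
    k01, k10, k12, k21 = rates(lam)
    r = np.array([1.0, k01 / k10, (k01 / k10) * (k12 / k21)])
    return r / r.sum()

def drho_dt(lam, lam_dot, eps=1e-6):
    return lam_dot * (rho(lam + eps) - rho(lam - eps)) / (2 * eps)

print(np.allclose(omega(1.3) @ rho(1.3), 0.0))   # Eq. (3): rho(lam) is stationary for Omega(lam)

# reduced incidence matrix: edge a oriented 0 -> 1, edge b oriented 1 -> 2, reference state 2 dropped
nabla_hat = np.array([[-1.0,  0.0],
                      [ 1.0, -1.0]])

def omega_cd(lam, lam_dot):
    # Eq. (15): delta_J = nabla_hat^{-1} d(rho_hat)/dt; on a tree the stationary
    # currents vanish, so the CD currents equal delta_J here.
    dJ = np.linalg.solve(nabla_hat, drho_dt(lam, lam_dot)[:-1])
    k01, k10, k12, k21 = rates(lam)
    r = rho(lam)
    # Eq. (12) with the reverse rates held fixed (a non-unique choice):
    k01_cd = (dJ[0] + k10 * r[1]) / r[0]
    k12_cd = (dJ[1] + k21 * r[2]) / r[1]
    return np.array([[-k01_cd,             k10,   0.0 ],
                     [ k01_cd, -(k10 + k12_cd),   k21 ],
                     [ 0.0,             k12_cd,  -k21 ]])

# check Eq. (4) along the protocol lam_t = 1 + 0.5 sin(t): the target rho(lam_t)
# solves the CD master equation even though lam_dot is not small
for t in (0.3, 1.0, 2.2):
    lam, lam_dot = 1 + 0.5 * np.sin(t), 0.5 * np.cos(t)
    print(np.allclose(drho_dt(lam, lam_dot), omega_cd(lam, lam_dot) @ rho(lam)))   # True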
In the case where E > N − 1, the solution procedure is more involved, but the end result has a relatively straightforward form: the most general solution δJ(t) can always be expressed as a finite linear combination of a basis of CD solutions. How to obtain this basis, and its close relationship to the spanning trees and fundamental cycles of the graph, is the topic we turn to next.
FIG. 1. Overview of the graphical approach for deriving CD solutions. We start with a Markov model defined by a transition matrix Ω(λ_t) dependent on the control protocol λ_t. Associated with this is a graph with N states, E edges, and a target trajectory ρ(λ_t) consisting of instantaneous stationary states of Ω(λ_t). The eventual goal is to find the CD transition matrix Ω̃(λ_t, λ̇_t) where ρ(λ_t) is the solution to the associated master equation, Eq. (4). To facilitate this, we must first find the CD currents J̃(t), the main goal of the graphical approach. The most general form of the solution for J̃(t) is given by Eq. (26), and consists of two components: (i) a spanning tree CD solution δJ^(1)(t), given by Eq. (18) and derived via the procedure outlined in Sec. I B; (ii) a linear combination of the fundamental cycle basis vectors c^(γ), γ = 1, . . . , Δ, where Δ = E − N + 1, as described in Sec. I D. The coefficient functions Φ_γ(t) are arbitrary.
B. General graphical solution for the counterdiabatic protocol
The graphical procedure described in this and the following two sections, culminating in the general solution of Eq. (26), is summarized in Fig. 1. To illustrate the procedure concretely, we will use the two-loop system shown in Fig. 2A as an example, where N = 4, E = 5. The solution for this case is relevant to the biophysical model for chaperone-assisted protein folding discussed later in the paper. Fig. 2A shows the rates k_i(λ_t) and r_i(λ_t) that determine the transition matrix Ω(λ_t), and Fig. 2B labels the oriented stationary currents 𝒥_α(t), α = 1, . . . , E. Explicit expressions for ρ_i(λ_t) and 𝒥_α(t) in terms of the rates are given in Appendix A.
Every connected graph has a set of spanning trees: subgraphs formed by removing Δ = E − N + 1 edges such that the remaining N − 1 edges form a tree linking together all the N vertices. The number T of such spanning trees is related to the reduced incidence matrix through Kirchhoff's matrix tree theorem [61], T = det(∇̂ ∇̂^T). For the current graph of Fig. 2B, this matrix is

∇̂ = ( −1   0   1   1   0
        1  −1   0   0   1
        0   1  −1   0   0 ),    (16)

and the number of trees is thus T = 8.
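As a quick check of Kirchhoff's theorem for this graph, the snippet below counts the spanning trees from the reduced incidence matrix; the edge orientation is one arbitrary choice consistent with the topology, and the count it yields does not depend on that choice.

import numpy as np

# reduced incidence matrix of the 4-state, 5-edge two-loop graph (one arbitrary orientation)
nabla_hat = np.array([[-1.0,  0.0,  1.0,  1.0,  0.0],
                      [ 1.0, -1.0,  0.0,  0.0,  1.0],
                      [ 0.0,  1.0, -1.0,  0.0,  0.0]])

T = round(np.linalg.det(nabla_hat @ nabla_hat.T))
print(T)   # 8 spanning trees, matching the count quoted in the text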
Let us select one spanning tree to label as the reference tree. The choice is arbitrary, since any spanning tree can be a valid starting point for constructing the basis. The left side of Fig. 2C shows one such tree chosen for the two-loop example. Here Δ = 2, so we have removed two edges: J_1 and J_5. From this reference tree we can derive other distinct spanning trees using the following method: 1) Take one of the Δ edges that were removed to get the reference tree, and add it back to the graph. 2) This creates a loop in the graph, known as a fundamental cycle (highlighted in green in Fig. 2C) [61]. 3) Remove one of the other edges in that loop (not the one just added), such that the graph returns to being a spanning tree. This new tree is distinct from the reference because it contains one of the Δ edges not present in the reference tree. For example, in the top right of Fig. 2C, we added back edge 1, forming the fundamental cycle on the left loop. We then delete edge 2 from this loop, creating spanning tree 2. A similar procedure is used to construct

TL;DR: How Brownian dynamics can help bridge the gap between molecular dynamics and probe tests is described, which shows that bond strength progresses through three dynamic regimes of loading rate.