Transporting Information and Energy
Simultaneously
Lav R. Varshney
Laboratory for Information and Decision Systems and Research Laboratory of Electronics
Massachusetts Institute of Technology
Work supported in part by an NSF Graduate Research Fellowship.
Abstract—The fundamental tradeoff between the rates at
which energy and reliable information can be transmitted over
a single noisy line is studied. Engineering inspiration for this
problem is provided by powerline communication, RFID systems,
and covert packet timing systems as well as communication
systems that scavenge received energy. A capacity-energy function
is defined and a coding theorem is given. The capacity-energy
function is a non-increasing concave function. Capacity-energy
functions for several channels are computed.
I. INTRODUCTION
The problem of communication is usually cast as one of
transmitting a message generated at one point to another
point. During the pre-history of information theory, a primary
accomplishment was the abstraction of the message to be
communicated from the communication medium. As noted,
“electricity in the wires became merely a carrier of messages,
not a source of power, and hence opened the door to new ways
of thinking about communications” [1]. This understanding
of signals independently from their physical embodiments led
to modern communication theory, but it also blocked other
possible directions. As Norbert Wiener said, “Information is
information, not matter or energy. No materialism which does
not admit this can survive at the present day” [2, p. 132]. This
separation of messages and media arguably led to the division
of electrical engineering into two distinct subfields, electric
power engineering and communication engineering.
Some have argued that the greatest inventions of civilization
either transform, store, and transmit energy or they transform,
store, and transmit information [3]. Although quite reasonable,
many engineering systems actually deal with both energy and
information. Representation of signals requires the modulation
of energy, matter, or some such thing. The separation of
messages and media is not always warranted.
Are there scenarios where one would want to transmit
energy and information simultaneously over a single line? If
there is a power-limited receiver that can harvest received
energy, then one should want both things. The earliest tele-
graphs, telephones, and crystal radios had no external power
sources [1], providing historical examples of such systems.
Modern communication systems that operate under severe
energy constraints may also benefit from harvesting received
energy [4]. A powerful base station or other special node
[5], [6], may effectively be used to recharge mobile devices.
In RFID systems, the energy provided through the forward
channel is used to transmit over the backward channel [7].
There are also extant mudpulse telemetry systems in the oil
industry where energy and information are provisioned to
remote instruments over a single line [J. Kusuma, personal
communication]. For a truly space-age application, one might
consider furnishing photons to spacecraft with space sails [8]
and optical receivers for both information and propulsion.
Back on earth, power line communication has received
significant attention [9], [10], but the literature has focused on
the informational aspect under the constraint that modulation
schemes not severely degrade power delivery. This need not
be the case in future engineering systems.
Except for papers on reversible computing [11], the fact
that matter/energy must go along with information does not
seem to have been considered in information theory. Sim-
ilarly, the information carried in power transmission seems
not to have been considered in power engineering. Though
not implemented in current systems, a receiver constructed
from reversible gates would allow received energy to perform
additional work rather than be dissipated as heat [11].
Electricity, of course, is not the only commodity in which
signals can be modulated. Information can be physically
manifested in almost any substance. Examples include water,
railroad cars, and packets in communication networks (whose
timing is modulated [12]); the results presented apply equally
to these scenarios.
This work deals with the fundamental tradeoff between
transmitting energy and transmitting information over a single
noisy line. Although this tradeoff must be known to other
researchers, it does not seem to appear in the literature. A char-
acterization of communication systems that simultaneously
meet two goals:
1) large received energy per unit time, and
2) large information per unit time
is found. Notice that unlike traditional transmitter power
constraints, where small transmitted power is desired, here
large received power is desired. One previous study has looked
at maximum received power constraints [13].
II. CAPACITY-ENERGY FUNCTION
In order to achieve the first goal, one would want the most
energetic symbol received all the time, whereas to achieve
the second goal, one would want to use the unconstrained
capacity-achieving input distribution. This intuition is formal-
ized for discrete memoryless channels, following [14].

A discrete memoryless channel (DMC) is characterized by the input alphabet $\mathcal{X}$, the output alphabet $\mathcal{Y}$, and the transition probability assignment $Q_{Y|X}(y|x)$. Furthermore, each output letter $y \in \mathcal{Y}$ has an energy $b(y)$, a nonnegative real number. Channel inputs are described by random variables $X_1^n = (X_1, X_2, \ldots, X_n)$ with distribution $p_{X_1^n}(x_1^n)$; the corresponding outputs are random variables $Y_1^n = (Y_1, Y_2, \ldots, Y_n)$ with distribution $p_{Y_1^n}(y_1^n)$. The average received energy is
$$E[b(Y_1^n)] = \sum_{y_1^n \in \mathcal{Y}^n} b(y_1^n)\, p(y_1^n).$$
An optimization problem that precisely captures the tradeoff between the two goals is as follows: maximize information rate under a minimum received power constraint. For each $n$, the $n$th capacity-energy function $C_n(B)$ of the channel is defined as
$$C_n(B) = \max_{X_1^n : E[b(Y_1^n)] \ge nB} I(X_1^n; Y_1^n).$$
An input vector $X_1^n$ is a test source; one that satisfies $E[b(Y_1^n)] \ge nB$ is $B$-admissible. The maximization is over all $n$-dimensional $B$-admissible test sources. The set of $B$-admissible $p(x_1^n)$ is a closed subset of $\mathbb{R}^{|\mathcal{X}|^n}$ and is bounded since $\sum p(x_1^n) = 1$. Since the set is closed and bounded, it is compact. Mutual information is a continuous function of the input distribution, and since continuous, real-valued functions defined on compact subsets of metric spaces achieve their suprema (see Theorem 5), defining the optimization as a maximum is not problematic. The $n$th capacity-energy functions are only defined for $0 \le B \le B_{\max}$, where $B_{\max}$ is the maximum element of $b^T Q$; here $b$ is a column vector of the energies $b(y)$ and $Q$ is the matrix $Q_{Y|X}$.
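As a concrete check on these definitions, $B_{\max}$ is simple to compute numerically. The following Python sketch is illustrative only (the channel matrix and energies are invented for the example, not taken from the paper); it stores $Q(y|x)$ with one row per input letter, so that $B_{\max}$ is the largest entry of $Qb$:

    import numpy as np

    # toy DMC: rows are inputs x, columns are outputs y, entries Q(y|x)
    Q = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
    b = np.array([0.0, 1.0])   # output energies b(0) = 0, b(1) = 1

    # expected output energy for each input letter, E[b(Y) | X = x]
    per_letter = Q @ b
    B_max = per_letter.max()
    print(per_letter, B_max)   # [0.1, 0.8] and 0.8 for this toy channel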
The capacity-energy function of the channel is defined as
$$C(B) = \sup_n \frac{1}{n} C_n(B). \qquad (1)$$
A coding theorem can be proven that endows this informational definition with operational significance.
A code is a pair of mappings $(f, g)$ where $f$ maps a message alphabet $\mathcal{M}$ to $\mathcal{X}$ and $g$ maps $\mathcal{Y}$ to $\mathcal{M}$. The rate of an $n$-length block code is $\frac{1}{n}\log|\mathcal{M}|$. An $n$-length block code with maximum probability of error bounded by $\epsilon$ is an $(n, \epsilon)$-code.
Definition 1: Given $0 < \epsilon < 1$, a non-negative number $R$ is an $\epsilon$-achievable rate for the channel $Q_{Y|X}$ with constraint $(b, B)$ if for every $\delta > 0$ and every sufficiently large $n$ there exist $(n, \epsilon)$-codes of rate exceeding $R - \delta$ for which $b(y_1^n) < B$ implies $g(y_1^n) \notin \mathcal{M}$. $R$ is an achievable rate if it is $\epsilon$-achievable for all $0 < \epsilon < 1$. The supremum of achievable rates is called the capacity of the channel under constraint $(b, B)$ and is denoted $C_O(B)$.
Theorem 1: $C_O(B) = C(B)$.
Proof: Follows by reversing the output constraint inequality in the solution to [15, P20 on p. 117]. See also [13].
III. PROPERTIES OF THE CAPACITY-ENERGY FUNCTION
The coding theorem provides operational significance to the capacity-energy function. Some properties of this function may also be developed.
It is immediate that $C_n(B)$ is non-increasing, since the feasible set in the optimization becomes smaller as $B$ increases. The function is also concave.
Theorem 2: $C_n(B)$ is a concave function of $B$ for $0 \le B \le B_{\max}$.
Proof: Let $\alpha_1, \alpha_2 \ge 0$ with $\alpha_1 + \alpha_2 = 1$. The inequality to be proven is that for $B_1, B_2 \le B_{\max}$,
$$C_n(\alpha_1 B_1 + \alpha_2 B_2) \ge \alpha_1 C_n(B_1) + \alpha_2 C_n(B_2).$$
Let $X_1$ and $X_2$ be $n$-dimensional test sources distributed according to $p_1(x_1^n)$ and $p_2(x_1^n)$ that achieve $C_n(B_1)$ and $C_n(B_2)$ respectively. Denote the corresponding channel outputs as $Y_1$ and $Y_2$. It follows that $E[b(Y_i)] \ge nB_i$ and $I(X_i; Y_i) = C_n(B_i)$ for $i = 1, 2$. Define another source $X$ distributed according to $p(x_1^n) = \alpha_1 p_1(x_1^n) + \alpha_2 p_2(x_1^n)$ with corresponding output $Y$. Then
$$E[b(Y)] = b^T Q p = b^T Q[\alpha_1 p_1 + \alpha_2 p_2] \qquad (2)$$
$$= \alpha_1 b^T Q p_1 + \alpha_2 b^T Q p_2 = \alpha_1 E[b(Y_1)] + \alpha_2 E[b(Y_2)] \ge n(\alpha_1 B_1 + \alpha_2 B_2),$$
where $b$ and $Q$ have been suitably extended. Thus, $X$ is $(\alpha_1 B_1 + \alpha_2 B_2)$-admissible. Now, by definition of $C_n(\cdot)$, $I(X; Y) \le C_n(\alpha_1 B_1 + \alpha_2 B_2)$. However, since $I(X; Y)$ is a concave function of the input probability,
$$I(X; Y) \ge \alpha_1 I(X_1; Y_1) + \alpha_2 I(X_2; Y_2) = \alpha_1 C_n(B_1) + \alpha_2 C_n(B_2).$$
Linking the two inequalities yields the desired result:
$$C_n(\alpha_1 B_1 + \alpha_2 B_2) \ge I(X; Y) \ge \alpha_1 C_n(B_1) + \alpha_2 C_n(B_2).$$
It can also be shown that $C_1(B) = C(B)$.
Theorem 3: For any DMC, $C_n(B) = nC_1(B)$ for all $n = 1, 2, \ldots$ and $0 \le nB \le nB_{\max}$.
Proof: Let $X = (X_1, \ldots, X_n)$ be a $B$-admissible test source with corresponding output $Y$ that achieves $C_n(B)$, so $E[b(Y)] \ge nB$ and $I(X; Y) = C_n(B)$. Since the channel is memoryless, $I(X; Y) \le \sum_{i=1}^n I(X_i; Y_i)$. Let $B_i = E[b(Y_i)]$; then $\sum_{i=1}^n B_i = \sum_{i=1}^n E[b(Y_i)] = E[b(Y)] \ge nB$. By the definition of $C_1(B_i)$, $I(X_i; Y_i) \le C_1(B_i)$. Now since $C_1(B)$ is a concave function of $B$, by Jensen's inequality,
$$\frac{1}{n}\sum_{i=1}^n C_1(B_i) \le C_1\!\left(\frac{1}{n}\sum_{i=1}^n B_i\right) = C_1\!\left(\frac{1}{n}E[b(Y)]\right).$$
But since $\frac{1}{n}E[b(Y)] \ge B$ and $C_1(B)$ is a non-increasing function of $B$,
$$\frac{1}{n}\sum_{i=1}^n C_1(B_i) \le C_1\!\left(\frac{1}{n}E[b(Y)]\right) \le C_1(B),$$
that is,
$$\sum_{i=1}^n C_1(B_i) \le nC_1(B).$$
Combining yields $C_n(B) \le nC_1(B)$.
For the reverse, let $X$ be a random variable with corresponding output $Y$ that achieves $C_1(B)$. That is, $E[b(Y)] \ge B$ and $I(X; Y) = C_1(B)$. Now let $X_1, X_2, \ldots, X_n$ be i.i.d. according to $p(X)$ with outputs $Y_1, \ldots, Y_n$. Then
$$E[b(Y_1^n)] = \sum_{i=1}^n E[b(Y_i)] \ge nB.$$
Moreover, by memorylessness,
$$I(X_1^n; Y_1^n) = \sum_{i=1}^n I(X_i; Y_i) = nC_1(B).$$
Thus, $C_n(B) \ge nC_1(B)$. Since $C_n(B) \le nC_1(B)$ and $C_n(B) \ge nC_1(B)$, $C_n(B) = nC_1(B)$.
The theorem implies that single-letterization is valid: $C(B) = C_1(B)$.
IV. THREE BINARY CHANNELS
Closed-form expressions of the capacity-energy function for some particular channels may provide insight. Here, three binary channels with output alphabet energy function $b(0) = 0$ and $b(1) = 1$ are considered. Such an energy function corresponds to discrete particles and packets, among other commodities.
Consider a noiseless binary channel. The optimization problem is solved by the maximum entropy method, hence the capacity-achieving input distribution is in Gibbsian form. It is easy to show that the capacity-energy function is
$$C(B) = \begin{cases} \log 2, & 0 \le B \le \tfrac{1}{2} \\ h_2(B), & \tfrac{1}{2} \le B \le 1, \end{cases}$$
where $h_2(\cdot)$ is the binary entropy function. The capacity-energy functions for other discrete noiseless channels are similarly easy to work out using maximum entropy methods.
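A minimal numerical sketch of this closed form, assuming natural logarithms so that the unconstrained capacity is $\log 2$ nats (function names are ours, not the paper's):

    import numpy as np

    def h2(p):
        # binary entropy in nats, with clipping to avoid log(0)
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return -p * np.log(p) - (1 - p) * np.log(1 - p)

    def C_noiseless(B):
        # flat at log 2 until B = 1/2, then h2(B), decreasing to 0 at B = 1
        return np.log(2) if B <= 0.5 else h2(B)

    print(C_noiseless(0.3), C_noiseless(0.9))   # ~0.693 and ~0.325 nats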
Consider a binary symmetric channel with crossover probability $\omega$. It can be shown that the capacity-energy function is
$$C(B) = \begin{cases} \log 2 - h_2(\omega), & 0 \le B \le \tfrac{1}{2} \\ h_2(B) - h_2(\omega), & \tfrac{1}{2} \le B \le 1 - \omega. \end{cases}$$
Recall that for the unconstrained problem, equiprobable inputs are capacity-achieving, which yield output power $\tfrac{1}{2}$. For $B > \tfrac{1}{2}$, the distribution must be perturbed so that the symbol 1 is transmitted more frequently. The maximum power receivable through this channel is $1 - \omega$, when 1 is always transmitted.
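The BSC expression can be sanity-checked by brute force: the input probability $\pi = P(X=1)$ induces received power $E[b(Y)] = P(Y=1) = \pi(1-\omega) + (1-\pi)\omega$, so one can sweep $\pi$, keep the feasible points, and maximize $I(X;Y) = h_2(P(Y=1)) - h_2(\omega)$. A sketch under those assumptions (not code from the paper):

    import numpy as np

    def h2(p):
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return -p * np.log(p) - (1 - p) * np.log(1 - p)

    def C_bsc_numeric(B, w, grid=100001):
        pi = np.linspace(0.0, 1.0, grid)      # input probability P(X=1)
        py1 = pi * (1 - w) + (1 - pi) * w     # received power E[b(Y)]
        I = h2(py1) - h2(w)                   # mutual information of the BSC
        feasible = py1 >= B
        return I[feasible].max() if feasible.any() else None

    w = 0.1
    for B in (0.3, 0.7):
        closed = np.log(2) - h2(w) if B <= 0.5 else h2(B) - h2(w)
        print(C_bsc_numeric(B, w), closed)    # the pairs should agree closely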
A third worked example is the Z-channel; the unconstrained capacity expression and associated capacity-achieving input distribution [16] are used. Consider a Z-channel with 1-to-0 crossover probability $\omega$. The capacity-energy function is
$$C(B) = \begin{cases} \log\left(1 - \omega^{\frac{1}{1-\omega}} + \omega^{\frac{\omega}{1-\omega}}\right), & 0 \le B \le (1-\omega)\pi^* \\ h_2(B) - \frac{B}{1-\omega}\, h_2(\omega), & (1-\omega)\pi^* \le B \le 1-\omega, \end{cases}$$
where
$$\pi^* = \frac{\omega^{\frac{\omega}{1-\omega}}}{1 + (1-\omega)\,\omega^{\frac{\omega}{1-\omega}}}.$$
A Z-channel models quantal synaptic failure [17] and other "stochastic leaky pipes" where the commodity may be lost en route.
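Note that the two standard forms of the Z-channel capacity agree, since $\omega^{1/(1-\omega)} = \omega \cdot \omega^{\omega/(1-\omega)}$ gives $\log(1 - \omega^{1/(1-\omega)} + \omega^{\omega/(1-\omega)}) = \log(1 + (1-\omega)\,\omega^{\omega/(1-\omega)})$. A small sketch evaluating $\pi^*$ and the unconstrained capacity (illustrative values only):

    import numpy as np

    def z_channel(w):
        e = w ** (w / (1 - w))            # omega^(omega / (1 - omega))
        pi_star = e / (1 + (1 - w) * e)   # unconstrained optimal P(X=1)
        C0 = np.log(1 + (1 - w) * e)      # unconstrained capacity, nats
        return pi_star, C0

    print(z_channel(0.5))   # (0.4, ~0.223): known values for omega = 1/2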
V. A GAUSSIAN CHANNEL
Attention now turns to discrete-time, continuous-alphabet, memoryless channels. The coding theorem (Theorem 1) can be extended in the usual way [18]. Continuous additive noise systems have the interesting property that for the goal of received power, noise power is actually helpful, whereas for the goal of information, noise power is hurtful. In discrete channels, such an interpretation is not obvious. When working with real-valued alphabets, some sort of transmitter constraint must be imposed so as to disallow arbitrarily powerful signals. Hard amplitude constraints that model rail limitations in power circuits are suitable. Assume that the channel transition pdf $Q(y|x)$ exists.
Rather than working with the output energy constraint directly, it is convenient to think of the output energy function $b(y)$ as inducing costs on the input alphabet $\mathcal{X}$:
$$\rho(x) = \int Q(y|x)\, b(y)\, dy.$$
By construction, this cost function preserves the constraint:
$$E[\rho(X)] = \int \rho(x)\, dF(x) = \int dF(x) \int Q(y|x)\, b(y)\, dy = \iint Q(y|x)\, b(y)\, dF(x)\, dy = E[b(Y)],$$
where $F(x)$ is the input distribution function. Basically, $\rho(x)$ is the expected output energy provided by input letter $x$.
Consider the AWGN channel $N(0, \sigma_N^2)$ and $b(y) = y^2$. Then
$$\rho(x) = \int_{-\infty}^{\infty} \frac{y^2}{\sigma_N\sqrt{2\pi}} \exp\left\{-\frac{(y - x)^2}{2\sigma_N^2}\right\} dy = x^2 + \sigma_N^2,$$
that is, the output power is just the sum of the input power and the noise power.
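This Gaussian moment identity is easy to confirm by quadrature; the following sketch (ours, not the paper's) checks $\rho(x) = x^2 + \sigma_N^2$ at an arbitrary point:

    import numpy as np
    from scipy.integrate import quad

    def rho(x, sigma_n):
        # expected output energy for input x over AWGN with b(y) = y^2
        integrand = lambda y: y**2 * np.exp(-(y - x)**2 / (2 * sigma_n**2)) \
                              / (sigma_n * np.sqrt(2 * np.pi))
        val, _ = quad(integrand, -np.inf, np.inf)
        return val

    x, sigma_n = 1.2, 0.7
    print(rho(x, sigma_n), x**2 + sigma_n**2)   # both ~1.93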
Since Theorem 3 extends directly to continuous alphabet channels,
$$C(B) = \sup_{X : E[\rho(X)] \ge B} I(X; Y). \qquad (3)$$
Consider the AWGN channel $N(0, \sigma_N^2)$, with input alphabet $\mathcal{X} = [-A, A] \subset \mathbb{R}$, and energy function $b(y) = y^2$. Denote the capacity-energy function as $C(B; A)$. Following in lockstep with Smith [19], [20], it is shown that the capacity-energy achieving input distribution consists of a finite number of mass points. Before proceeding, two optimization theorems are quoted [20]:
Theorem 4: Let $\Omega$ be a convex metric space, and $f$ and $g$ concave functionals on $\Omega$ to $\mathbb{R}$; assume there exists an $x_1 \in \Omega$ such that $g(x_1) < 0$ and let
$$D^* \triangleq \sup_{x \in \Omega : g(x) \le 0} f(x).$$
If $D^*$ is finite, then there exists a constant $\lambda \ge 0$ such that
$$D^* = \sup_{x \in \Omega} [f(x) - \lambda g(x)].$$
Moreover, if the supremum in the first equation is achieved by $x_0$ and $g(x_0) \le 0$, then the supremum is achieved by $x_0$ in the second equation and $\lambda g(x_0) = 0$.
Theorem 5: Let $f$ be a continuous, weakly-differentiable, strictly concave map from a compact, convex, topological space $\Omega$ to $\mathbb{R}$. Define
$$D \triangleq \sup_{x \in \Omega} f(x).$$
Then the following two properties hold:
1) $D = \max f(x) = f(x_0)$ for some unique $x_0 \in \Omega$, and
2) a necessary and sufficient condition for $f(x_0) = D$ is that $f'_{x_0}(x) \le 0$ for all $x \in \Omega$, where $f'_{x_0}$ is the weak derivative.
Let $\mathcal{F}_A$ be the space of input probability distribution functions having all points of increase on the finite interval $[-A, A]$.
Lemma 1 ([19]): $\mathcal{F}_A$ is convex and compact in the Lévy metric.
Since the channel is fixed, mutual information can be written as a function of the input distribution, $I(F)$.
Lemma 2 ([19]): Mutual information $I : \mathcal{F}_A \to \mathbb{R}$ is a concave, continuous, weakly differentiable functional.
Let us denote the mean-square value of the input under distribution $F$ as
$$\sigma_F^2 \triangleq \int_{-A}^{A} x^2\, dF(x).$$
Recall the energy constraint
$$B \le E[\rho(X)] = E[X^2] + \sigma_N^2 = \sigma_N^2 + \sigma_F^2,$$
which is equivalent to $B - \sigma_N^2 - \sigma_F^2 \le 0$. Now define the functional $J : \mathcal{F}_A \to \mathbb{R}$ as
$$J(F) \triangleq B - \sigma_N^2 - \int_{-A}^{A} x^2\, dF(x).$$
Lemma 3: $J$ is a concave, continuous, weakly differentiable functional.
Proof: Clearly $J$ is linear in $F$ (see (2) for the basic argument). Moreover, $J$ is bounded, as $B - \sigma_N^2 - A^2 \le J \le B - \sigma_N^2$. Since $J$ is linear and bounded, it is concave, continuous, and weakly differentiable.
Returning to the optimization problem to be solved,
Theorem 6: There exists a constant $\lambda \ge 0$ such that
$$C(B; A) = \sup_{F \in \mathcal{F}_A} [I(F) - \lambda J(F)].$$
Proof: The result follows from Theorem 4 since $I$ is a concave functional (Lemma 2), $J$ is a concave functional (Lemma 3), since capacity is finite whenever $A < \infty$ and $\sigma_N^2 > 0$, and since there is obviously an $F_1 \in \mathcal{F}_A$ such that $J(F_1) < 0$.
Theorem 7: There exists a unique capacity-energy achieving input $X_0$ with distribution function $F_0$ such that
$$C(B; A) = \max_{F \in \mathcal{F}_A} [I(F) - \lambda J(F)] = I(F_0) - \lambda J(F_0).$$
Moreover, a necessary and sufficient condition for $F_0$ to achieve capacity-energy is
$$I'_{F_0}(F) - \lambda J'_{F_0}(F) \le 0 \quad \text{for all } F \in \mathcal{F}_A. \qquad (4)$$
Proof: Since $I$ and $J$ are both concave, continuous, and weakly differentiable (Lemmas 2, 3), so is $I - \lambda J$. Since $\mathcal{F}_A$ is a convex, compact space (Lemma 1), Theorem 5 applies and yields the result.
For our functional $I - \lambda J$, the optimality condition (4) is
$$\int_{-A}^{A} [i(x; F_0) + \lambda x^2]\, dF(x) \le I(F_0) + \lambda \int x^2\, dF_0(x)$$
for all $F \in \mathcal{F}_A$, where
$$i(x; F) = \int Q(y|x) \log \frac{Q(y|x)}{p(y; F)}\, dy$$
is variously known as the marginal information density [20] or the Bayesian surprise [21], or appears without a name [22, Eq. 1]. This follows since the mutual information weak derivative is
$$I'_{F_1}(F_2) = \int_{-A}^{A} i(x; F_1)\, dF_2(x) - I(F_1)$$
and the energy weak derivative is $J'_{F_1}(F_2) = J(F_2) - J(F_1)$. If $\int x^2\, dF_0(x) > B - \sigma_N^2$, then the moment constraint is trivial and the constant $\lambda$ is zero; thus the optimality condition can be written as
$$\int_{-A}^{A} [i(x; F_0) + \lambda x^2]\, dF(x) \le I(F_0) + \lambda[B - \sigma_N^2]. \qquad (5)$$
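For a finite mass-point $F$ over the AWGN channel, $i(x; F)$ is computable by direct quadrature. The sketch below (assumed parameter values, not code from the paper) evaluates it for an equiprobable antipodal input; at a point of increase of an optimal $F_0$ with $\lambda = 0$, $i(x; F_0)$ should equal $I(F_0)$, as formalized in Theorem 8 below:

    import numpy as np
    from scipy.integrate import quad

    def i_marginal(x, points, weights, sigma_n=1.0):
        # marginal information density i(x;F) for F a pmf on mass points
        g = lambda y, m: np.exp(-(y - m)**2 / (2 * sigma_n**2)) \
                         / (sigma_n * np.sqrt(2 * np.pi))
        def integrand(y):
            qyx = g(y, x)                                      # Q(y|x)
            pyF = sum(w * g(y, p) for w, p in zip(weights, points))
            return qyx * np.log(qyx / pyF) if qyx > 0 else 0.0
        val, _ = quad(integrand, x - 12, x + 12)
        return val

    # equiprobable antipodal input at +/-1.5 over N(0,1) noise
    print(i_marginal(1.5, [-1.5, 1.5], [0.5, 0.5]))   # ~0.53 nats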
The optimality condition (5) may be rejiggered into a condition on the input alphabet.
Theorem 8: Let $F_0$ be an arbitrary distribution function in $\mathcal{F}_A$ satisfying the energy constraint. Let $E_0$ denote the points of increase of $F_0$ on $[-A, A]$. Then $F_0$ is optimal if and only if, for some $\lambda \ge 0$,
$$i(x; F_0) \le I(F_0) + \lambda[B - \sigma_N^2 - x^2] \quad \text{for all } x \in [-A, A],$$
$$i(x; F_0) = I(F_0) + \lambda[B - \sigma_N^2 - x^2] \quad \text{for all } x \in E_0.$$
Proof: If both conditions hold for some $\lambda \ge 0$, then $F_0$ must be optimal and $\lambda$ is the one from Theorem 7. This is because integrating both sides of the conditions against an arbitrary $F$ yields satisfaction of condition (4).
For the converse, assume that $F_0$ is optimal but that the inequality condition is not satisfied. Then there is some $x_1 \in [-A, A]$ and some $\lambda \ge 0$ such that $i(x_1; F_0) > I(F_0) + \lambda[B - \sigma_N^2 - x_1^2]$. Let $F_1(x)$ be the unit step $1(x - x_1) \in \mathcal{F}_A$; but then
$$\int_{-A}^{A} [i(x; F_0) + \lambda x^2]\, dF_1(x) = i(x_1; F_0) + \lambda x_1^2 > I(F_0) + \lambda[B - \sigma_N^2].$$
This violates (5), thus the inequality condition must be valid with $\lambda$ from Theorem 7.
Now assume that $F_0$ is optimal but that the equality condition is not satisfied, i.e., there is a set $E' \subset E_0$ such that the following holds:
$$\int_{E'} dF_0(x) = \delta > 0 \quad \text{and} \quad \int_{E_0 \setminus E'} dF_0(x) = 1 - \delta,$$
$$i(x; F_0) + \lambda x^2 < I(F_0) + \lambda[B - \sigma_N^2] \quad \text{for all } x \in E',$$
$$i(x; F_0) + \lambda x^2 = I(F_0) + \lambda[B - \sigma_N^2] \quad \text{for all } x \in E_0 \setminus E'.$$
Then,
$$0 = \int [i(x; F_0) + \lambda x^2]\, dF_0(x) - I(F_0) - \lambda[B - \sigma_N^2]$$
$$= \int_{E_0} [i(x; F_0) + \lambda x^2]\, dF_0(x) - I(F_0) - \lambda[B - \sigma_N^2]$$
$$< \delta[I(F_0) + \lambda(B - \sigma_N^2)] + (1 - \delta)[I(F_0) + \lambda(B - \sigma_N^2)] - I(F_0) - \lambda[B - \sigma_N^2] = 0,$$
a contradiction. Thus the equality condition must be valid.
At a point like this, one might try to develop measure-matching conditions like Gastpar et al. [22] for undetermined $b(\cdot)$, but this path is not pursued here. To show that the input distribution is supported on a finite number of mass points requires Smith's reductio ad absurdum argument (see [23] for a slight correction).
Theorem 9: $E_0$ is a finite set of points.
The proof uses optimality conditions from Theorem 8 to derive a contradiction using the analytic extension property of the marginal entropy density $h(x; F)$,
$$h(x; F) = -\int Q(y|x) \log p(y; F)\, dy.$$
Since the capacity-energy achieving input distribution is a pmf, a finite numerical optimization algorithm may be used [24]. Consider the AWGN channel $N(0, 1)$ and find the point $C(B = 0; A = 1.5)$. The capacity-achieving input density is $p(x) = \frac{1}{2}\delta(x + 1.5) + \frac{1}{2}\delta(x - 1.5)$. The achieved rate is
$$C(0; 1.5) = \int_{-1}^{+1} \frac{2/3}{\sqrt{2\pi}(1 - y^2)}\, e^{-\frac{(1 - (4/9)\tanh^{-1}(y))^2}{8/9}} \log(1 + y)\, dy.$$
The achieved output power is $E[Y^2] = 3.25$. In fact, this is the maximum output power possible over this channel, since $E[Y^2] = E[X^2] + \sigma_N^2$, and $E[X^2]$ cannot be improved over operating at the edges $\{-A, A\}$. Thus,
$$C(B; 1.5) = C(0; 1.5), \quad 0 \le B \le B_{\max} = 3.25.$$
For this particular channel, there is actually no tradeoff between information and power: antipodal signaling should be used all the time. This is not a general phenomenon, however; it fails for the same noise with, say, $A \ge 1.7$ rather than $A = 1.5$ [19].
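This operating point can be reproduced without the closed-form integral: with equiprobable $X = \pm 1.5$ and unit-variance noise, $I(X; Y) = h(Y) - h(N)$, where $h(Y)$ is the differential entropy of a two-component Gaussian mixture. A numerical sketch of that check (integration limits chosen generously; not code from the paper):

    import numpy as np
    from scipy.integrate import quad

    g = lambda y, m: np.exp(-(y - m)**2 / 2) / np.sqrt(2 * np.pi)
    pY = lambda y: 0.5 * g(y, -1.5) + 0.5 * g(y, 1.5)   # output density

    hY, _ = quad(lambda y: -pY(y) * np.log(pY(y)), -20, 20)  # h(Y)
    hN = 0.5 * np.log(2 * np.pi * np.e)                      # h(N), noise
    EY2, _ = quad(lambda y: y**2 * pY(y), -20, 20)           # output power

    print(hY - hN)   # ~0.53 nats: the rate C(0; 1.5)
    print(EY2)       # ~3.25: the maximum receivable power B_max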
VI. CONCLUSION
Information is patterned matter-energy. In this work, the
fundamental tradeoff between the rate of transporting a com-
modity between one point and another and the rate of simulta-
neously transmitting information by modulating in that com-
modity has been studied. As extensions, one might consider a
“wideband regime” formulation [25], a multiterminal problem
where different users have different energy and information
requirements, or a deeper look into reversible decoding [11].
ACKNOWLEDGEMENT
Comments from V. K. Goyal & S. K. Mitter are appreciated.
REFERENCES
[1] D. A. Mindell, "Opening Black's box: Rethinking feedback's myth of origin," Technol. Cult., vol. 41, pp. 405-434, July 2000.
[2] N. Wiener, Cybernetics. MIT Press, 1961.
[3] J. Aczél and Z. Daróczy, On Measures of Information and Their Characterization. Academic Press, 1975.
[4] J. A. Paradiso and T. Starner, "Energy scavenging for mobile and wireless electronics," IEEE Pervasive Comput., vol. 4, pp. 18-27, 2005.
[5] L. R. Varshney and S. D. Servetto, "A distributed transmitter for the sensor reachback problem based on radar signals," in Advances in Pervasive Computing and Networking, B. K. Szymanski and B. Yener, Eds. Kluwer, 2005, pp. 225-245.
[6] B. Ananthasubramaniam and U. Madhow, "On localization performance in imaging sensor nets," IEEE Trans. Signal Process., vol. 55, pp. 5044-5057, Oct. 2007.
[7] R. Want, "Enabling ubiquitous sensing with RFID," IEEE Computer, vol. 37, pp. 84-86, Apr. 2004.
[8] C. H. M. Jenkins, Recent Advances in Gossamer Spacecraft. American Institute of Aeronautics and Astronautics, Inc., 2006.
[9] K. Dostert, Powerline Communications. Prentice Hall, 2001.
[10] Special Issue on Power Line Communications, IEEE J. Sel. Areas Commun., vol. 24, July 2006.
[11] R. Landauer, "Computation, measurement, communication and energy dissipation," in Selected Topics in Signal Processing, S. Haykin, Ed. Prentice Hall, 1989, pp. 18-47.
[12] J. Giles and B. Hajek, "An information-theoretic and game-theoretic study of timing channels," IEEE Trans. Inf. Theory, vol. 48, pp. 2455-2477, Sept. 2002.
[13] M. Gastpar, "On capacity under receive and spatial spectrum-sharing constraints," IEEE Trans. Inf. Theory, vol. 53, pp. 471-487, Feb. 2007.
[14] R. J. McEliece, The Theory of Information and Coding. Addison-Wesley, 1977.
[15] I. Csiszár and J. Körner, Information Theory: Coding Theorems for Discrete Memoryless Systems, 3rd ed. Akadémiai Kiadó, 1997.
[16] S. W. Golomb, "The limiting behavior of the Z-channel," IEEE Trans. Inf. Theory, vol. IT-26, p. 372, May 1980.
[17] W. B. Levy and R. A. Baxter, "Using energy efficiency to make sense out of neural information processing," in Proc. ISIT, 2002, p. 18.
[18] R. G. Gallager, Information Theory and Reliable Communication. Wiley, 1968.
[19] J. G. Smith, "On the information capacity of peak and average power constrained Gaussian channels," Ph.D. dissertation, Univ. California, Berkeley, 1969.
[20] J. G. Smith, "The information capacity of amplitude- and variance-constrained scalar Gaussian channels," Inf. Control, vol. 18, pp. 203-219, Apr. 1971.
[21] L. Itti and P. Baldi, "Bayesian surprise attracts human attention," in Proc. NIPS, 2005, pp. 547-554.
[22] M. Gastpar, B. Rimoldi, and M. Vetterli, "To code, or not to code: Lossy source-channel communication revisited," IEEE Trans. Inf. Theory, vol. 49, pp. 1147-1158, May 2003.
[23] A. Tchamkerten, "On the discreteness of capacity-achieving distributions," IEEE Trans. Inf. Theory, vol. 50, pp. 2773-2778, Nov. 2004.
[24] J. Huang and S. P. Meyn, "Characterization and computation of optimal distributions for channel coding," IEEE Trans. Inf. Theory, vol. 51, pp. 2336-2351, July 2005.
[25] S. Verdú, "Spectral efficiency in the wideband regime," IEEE Trans. Inf. Theory, vol. 48, pp. 1319-1343, June 2002.