Proceedings ArticleDOI

# A General Coding Scheme for Signaling Gaussian Processes Over Gaussian Decision Models

24 Aug 2018, pp. 1-5

TL;DR: The n-finite transmission feedback information (FTFI) capacity of unstable Gaussian decision models with memory on past outputs is transformed into controllers-encoders-decoders that control the output process, encode a Gaussian process, reconstruct the Gaussian process via a mean-square error (MSE) decoder, and achieve the n-FTFI capacity.

Abstract: In this paper, we transform the n-finite transmission feedback information (FTFI) capacity of unstable Gaussian decision models with memory on past outputs, subject to an average cost constraint of quadratic form derived in [1], into controllers-encoders-decoders that control the output process, encode a Gaussian process, reconstruct the Gaussian process via a mean-square error (MSE) decoder, and achieve the n-FTFI capacity. For a Gaussian RV message $X \sim N(0, \sigma_X^2)$ it is shown that the MSE decays according to $\mathbf{E}|X - \widehat{X}_{n|n}|^2 = \exp\{-2 C_{0,n}(\kappa)\}\, \sigma_X^2$, $\kappa \in (\kappa_{\min}, \infty)$, where $C_{0,n}(\kappa)$ is the n-FTFI capacity, and $\kappa_{\min}$ is the threshold on the power to ensure convergence.


### I. INTRODUCTION

• It has been recently shown that randomized strategies in decision systems are operational, in the sense that they not only stabilize the system but also encode information, which can be decoded at the output of the control system with arbitrarily small probability of decoding error.
• Another application is that of signaling digital messages available to the controller, such as values associated with actuating devices, for failure detection and monitoring applications.

### B. Main Problem

• It should be mentioned that controllers-encoders have contradictory goals.
• The controller aims at stabilization, while the encoder aims at communicating new information.
• A low-cost control strategy would keep the state process near the origin, with as little randomness as feasible injected by the coding part, while a communication strategy requires informative deviations.

### A. Optimal Controllers-Encoders

• In the class of linear controller-encoders that encode the Gaussian Markov process $X^n$ defined by (7) and operate at the n-FTFI capacity, the optimal controller-encoder exists and the conditional mean decoder minimizes the MSE (Theorem 3.2).
• The optimal filter estimates satisfy Kalman-filter-type recursions, with filter gains defined in terms of the conditional covariances, and a conditional mean decoder reconstructs the process.
• Below a certain power level, the constraint set is not feasible, i.e., it is empty.

### B. Asymptotic Properties

• The asymptotic properties of the controller-encoder-decoder are obtained by analyzing (6), under the following assumptions (see [14] on Linear Quadratic stochastic optimal control theory with complete information).
• A constructive procedure is developed to synthesize {controller-encoder-decoder} strategies that encode Gaussian Markov processes and communicate them over unstable Gaussian recursive models to the decoder.
• Examples illustrate the convergence of the MSE to zero, as the number of transmissions tends to infinity.


Charalambous, Charalambos D.; Kourtellaris, Christos K.; Charalambous, Themistoklis
A General Coding Scheme for Signaling Gaussian Processes over Gaussian Decision Models
Published in: Proceedings of the IEEE 19th International Workshop on Signal Processing Advances in Wireless Communications, SPAWC 2018
DOI: 10.1109/SPAWC.2018.8445982
Published: 24/08/2018
Document Version: Peer reviewed version

Charalambous, C. D., Kourtellaris, C. K., & Charalambous, T. (2018). A General Coding Scheme for Signaling Gaussian Processes over Gaussian Decision Models. In Proceedings of the IEEE 19th International Workshop on Signal Processing Advances in Wireless Communications, SPAWC 2018 (Vol. 2018-June). [8445982] IEEE. https://doi.org/10.1109/SPAWC.2018.8445982

A General Coding Scheme for Signaling Gaussian Processes over
Gaussian Decision Models
Charalambos D. Charalambous, Christos K. Kourtellaris and Themistoklis Charalambous
Abstract— In this paper, we transform the n-finite transmission feedback information (FTFI) capacity of unstable Gaussian decision models with memory on past outputs, subject to an average cost constraint of quadratic form derived in [1], into controllers-encoders-decoders that control the output process, encode a Gaussian process, reconstruct the Gaussian process via a mean-square error (MSE) decoder, and achieve the n-FTFI capacity. For a Gaussian RV message $X \sim N(0, \sigma_X^2)$ it is shown that the MSE decays according to $\mathbf{E}|X - \widehat{X}_{n|n}|^2 = \exp\{-2 C_{0,n}(\kappa)\}\, \sigma_X^2$, $\kappa \in (\kappa_{\min}, \infty)$, where $C_{0,n}(\kappa)$ is the n-FTFI capacity, and $\kappa_{\min}$ is the threshold on the power to ensure convergence.

Index Terms— Coding, n-finite transmission feedback information, Gaussian decision models, capacity.
I. INTRODUCTION
It has been recently shown [2] that randomized strategies in decision systems are operational, in the sense that they not only stabilize the system but also encode information, which can be decoded at the output of the control system with arbitrarily small probability of decoding error. In other words, the control system is used to communicate information, with an operational meaning as defined by Shannon's capacity of communication channels.

The main objective of this paper is to synthesize controller-encoder strategies for Gaussian recursive models (G-RMs), with input process $\{A_t : t = 0, 1, \ldots, n\}$, that simultaneously control the output process $\{Y_t : t = 0, 1, \ldots, n\}$, encode an information process $\{X_t : t = 0, 1, \ldots, n\}$, and synthesize a decoder at the output $\{Y_t : t = 0, 1, \ldots, n\}$, such that the process $\{X_t : t = 0, 1, \ldots, n\}$ is transmitted reliably to the decoder.

If the G-RM is a control system model, then the system, as depicted in Figure 1, gives rise to several applications. One application is that of tracking the dynamics of $\{X_t : t = 0, 1, \ldots, n\}$ at the output of the decoder. If $\{X_t : t = 0, 1, \ldots, n\}$ is generated by a discrete deterministic recursion, then we show that it is possible for the decoder to track $\{X_t : t = 0, 1, \ldots, n\}$ with arbitrarily small MSE. This is contrary to standard tracking design systems, in which the noise of the G-RM, i.e., $\{V_t : t = 0, 1, \ldots, n\}$, imposes limitations on the minimum tracking error. Another application is that of signaling digital messages available to the controller, such as values associated with actuating devices, for failure detection and monitoring applications.
C. D. Charalambous and C. K. Kourtellaris are with the Department of Electrical Engineering, University of Cyprus, Nicosia, Cyprus. T. Charalambous is with the Department of Electrical Engineering and Automation, School of Electrical Engineering, Aalto University, Finland. E-mail: themistoklis.charalambous@aalto.fi
II. PROBLEM STATEMENT AND RELATED WORK
A. Notation
$\langle \cdot, \cdot \rangle$ denotes the inner product of elements of linear spaces, $S_{+}^{q \times q}$ denotes the set of symmetric positive semi-definite $q \times q$ matrices, and $S_{++}^{q \times q}$ the subset of positive definite matrices, with real entries.
B. Main Problem
We consider a multiple-input multiple-output (MIMO) time-varying unstable G-RM given by

$Y_i = C_{i-1} Y_{i-1} + D_i A_i + V_i, \quad Y_{-1} = s, \quad i = 0, \ldots, n,$  (1)

subject to an average cost of quadratic form described by

$\frac{1}{n+1} \mathbf{E}_s\Big\{ \sum_{i=0}^n \gamma_i(A_i, Y_{i-1}) \Big\} \le \kappa,$  (2)

where
$A^n \triangleq \{A_0, A_1, \ldots, A_n\}$ is the input process,
$Y^n \triangleq \{Y_0, Y_1, \ldots, Y_n\}$ is the output process,
$V^n \triangleq \{V_0, \ldots, V_n\}$ is an independent zero-mean Gaussian noise process, denoted by $V_i \sim N(0, K_{V_i})$, $K_{V_i} \succeq 0$, $i = 0, \ldots, n$,
$\gamma_i(a_i, y_{i-1}) \triangleq \langle a_i, R_i a_i \rangle + \langle y_{i-1}, Q_{i-1} y_{i-1} \rangle$,
$(C_{i-1}, D_i) \in \mathbb{R}^{p \times p} \times \mathbb{R}^{p \times q}$, $(Q_{i-1}, R_i) \in S_{+}^{p \times p} \times S_{++}^{q \times q}$.

The G-RM may correspond to a control system or a communication channel, as shown in Figure 1. The distribution of the G-RM is $P_{Y_i | A_i, Y_{i-1}, S} = Q_i(dy_i | y_{i-1}, a_i)$, $i = 1, \ldots, n$, and for $i = 0$, the distribution is $Q_0(dy_0 | s, a_0)$.

By [1], the characterization of the n-finite transmission feedback information (FTFI) capacity is

$C_{0,n}(\kappa) = J_{A^n \to Y^n | s}(\pi^*, \kappa) \triangleq \sup_{\mathcal{P}_{[0,n]}(\kappa)} \mathbf{E}_s^{\pi}\Big\{ \sum_{i=0}^n \log \frac{Q_i(\cdot | Y_{i-1}, A_i)}{P^{\pi}(\cdot | Y_{i-1})}(Y_i) \Big\}$  (3)

where

$P^{\pi}(dy_i | y_{i-1}) = \int_{\mathbb{A}_i} Q_i(dy_i | y_{i-1}, a_i) \, \pi_i(da_i | y_{i-1}),$  (4)

for $i = 1, \ldots, n$, and

$P^{\pi}(dy_0 | s) = \int_{\mathbb{A}_0} Q_0(dy_0 | s, a_0) \, \pi_0(da_0 | s)$

for $i = 0$. The set of randomized strategies $\mathcal{P}_{[0,n]}(\kappa)$ are conditionally Gaussian and Markov, defined by

$\mathcal{P}_{[0,n]}(\kappa) \triangleq \Big\{ \pi_i(da_i | y_{i-1}), \ i = 0, \ldots, n : \ \frac{1}{n+1} \mathbf{E}_s^{\pi}\Big\{ \sum_{i=0}^n \gamma_i(A_i, Y_{i-1}) \Big\} \le \kappa \Big\}.$  (5)

[Fig. 1 shows Shannon's communication block diagram and its analogy to stochastic control systems: a source or tracking signal $P_{X_i | X_{i-1}}$ feeds an encoder or control object $P_{A_i | A_{i-1}, Y_{i-1}}$, comprising a deterministic part (stabilization) and a random part (innovation); the control process $A_i$ drives the channel or controlled object $P_{Y_i | Y_{i-1}, A_i}$, whose controlled process $Y_i$ is fed back through a unit delay and passed to the decoder or estimator $P_{\widehat{X}_i | \widehat{X}_{i-1}, Y_i}$, producing $\widehat{X}_i$; the information flow is $I(A^n \to Y^n)$.]
Fig. 1. Depicts Shannon's communication block diagram and its analogy to stochastic control systems.
The corresponding joint process $\{A_0, Y_0, \ldots, A_n, Y_n\}$ and output process $\{Y_0, \ldots, Y_n\}$, for fixed $S = s$, are Gaussian. Further, under the standard detectability and stabilizability conditions [1, Theorem 4.1] the feedback capacity is

$C(\kappa) \triangleq \lim_{n \to \infty} \frac{1}{n+1} J_{A^n \to Y^n | s}(\pi^*, \kappa), \quad \kappa \in [\kappa_{\min}, \infty).$  (6)
The main objective is to transform the distribution $\pi_i^*(da_i | y_{i-1})$, $i = 0, \ldots, n$, that achieves the n-FTFI capacity $C_{0,n}(\kappa)$, into a controller-encoder and to construct a decoder, such that
(1) the controller-encoder operates at $C_{0,n}(\kappa)$,
(2) the decoder is MSE optimal,
for a Gaussian Markov process described by the recursion

$X_{i+1} = F_i X_i + G_i W_i, \quad X_0 = x, \quad X_i \in \mathbb{X} \triangleq \mathbb{R}^q,$  (7)

where $W_i \sim N(0, K_{W_i})$, $i = 0, \ldots, n-1$, are $\mathbb{R}^k$-valued zero-mean Gaussian RVs, independent of $V_i$, $i = 0, 1, \ldots, n$, and $X_0 \sim N(0, K_{X_0})$.
It should be mentioned that controllers-encoders have contradictory goals. The controller aims at stabilization, while the encoder aims at communicating new information. A low-cost control strategy would keep the state process near the origin, with as little randomness as feasible injected by the coding part, while a communication strategy requires informative deviations.
C. Related Work
The Shannon coding capacity. For the memoryless additive Gaussian noise (AGN) channel

$Y_i = A_i + V_i, \quad i = 0, \ldots, n, \qquad \frac{1}{n+1} \mathbf{E}\Big\{ \sum_{i=0}^n |A_i|^2 \Big\} \le \kappa,$  (8)

with or without feedback, the capacity is given by [3]

$C^{Sh}(\kappa) \triangleq \frac{1}{2} \log\Big( 1 + \frac{\kappa}{K_V} \Big).$  (9)

The input process that achieves it is independent and identically distributed (IID) Gaussian, $A_i \sim P_A(da) = N(0, \kappa)$, $i = 0, 1, \ldots$.
Elias Coding Scheme of a Gaussian Message. Elias [4] introduced a coding scheme to communicate a Gaussian RV $X \sim N(0, \sigma_X^2)$ reliably over the memoryless AGN channel (8), which achieves $C^{Sh}(\kappa)$, given by

$A_i = \sqrt{\frac{\kappa}{\mathbf{E}\big| X - \mathbf{E}\{X \mid Y^{i-1}\} \big|^2}} \ \Big( X - \mathbf{E}\{X \mid Y^{i-1}\} \Big).$  (10)

$\widehat{X}_{i-1} \triangleq \mathbf{E}\{X \mid Y^{i-1}\}$ is linear in $Y^{i-1}$, $i = 0, \ldots, n$, and is computed recursively using the Kalman filter.
MSE Decoder of Gaussian Messages. When the Elias coding scheme is applied with an MSE decoder, the error is

$\Sigma_n \triangleq \mathbf{E}\big| X - \widehat{X}_n \big|^2 = \frac{\sigma_X^2}{\big( 1 + \frac{\kappa}{K_V} \big)^{n+1}}, \quad n = 0, 1, \ldots.$  (11)
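The MSE decay in (11) can be reproduced by running the scalar Kalman-filter recursion behind the Elias scheme deterministically; the sketch below (a Python illustration of ours, not code from the paper, with all names hypothetical) confirms that each channel use shrinks the error by the factor $1 + \kappa/K_V = e^{2 C^{Sh}(\kappa)}$.

```python
import math

def elias_mse(sigma2_x, kappa, K_V, n):
    """MSE of the Elias scheme after transmissions 0..n: each channel use
    shrinks the error covariance by the factor (1 + kappa/K_V)."""
    Sigma = sigma2_x
    for _ in range(n + 1):
        Sigma = Sigma / (1.0 + kappa / K_V)
    return Sigma

sigma2_x, kappa, K_V, n = 1.0, 2.0, 1.0, 10
closed_form = sigma2_x / (1.0 + kappa / K_V) ** (n + 1)   # expression (11)
assert abs(elias_mse(sigma2_x, kappa, K_V, n) - closed_form) < 1e-12
# equivalently, Sigma_n = exp(-2 (n+1) C_Sh(kappa)) * sigma_X^2, cf. (9)
C_Sh = 0.5 * math.log(1.0 + kappa / K_V)
assert abs(math.exp(-2 * (n + 1) * C_Sh) * sigma2_x - closed_form) < 1e-12
```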
Maximum Likelihood (ML) Decoder of Digital Messages. Schalkwijk and Kailath [5] showed that, when the Elias coding scheme is applied to a set of equiprobable messages $\big\{0, 1, \ldots, M_n \triangleq \exp\{(n+1)R\}\big\}$, $n = 0, 1, \ldots$, then the probability of ML decoding error at time $n$ decays doubly exponentially.
Butman Conjecture. For an AGN channel with stable and stationary noise $V^n \triangleq \{V_0, \ldots, V_n\}$ (with limited memory), Butman [6] "conjectured" that the Elias coding scheme of transmitting the error achieves capacity. Cover and Pombra [7] derived the characterization of feedback capacity for the AGN channel (8), when the noise $V^n$ is nonstationary, nonergodic, with distribution $P_{V^n}$. Kim [8] revisited the limited-memory, stationary ergodic version of the Cover and Pombra [7] AGN channel, and applied frequency domain methods to conclude that Butman's conjecture is true. Variations of the Elias and Schalkwijk-Kailath schemes for network communication over memoryless AGN channels are extensive and given in [9]–[12].
III. THE n-FTFI CAPACITY AND CONTROLLER-ENCODER-DECODER

By [1], the n-FTFI capacity of the G-RM is achieved by the input process

$A_i = e_i(Y_{i-1}, Z_i) = U_i + Z_i = \Gamma_i Y_{i-1} + Z_i$  (12)

where
(i) $U_i \triangleq \Gamma_i Y_{i-1}$, $i = 0, \ldots, n$, is the control process,
(ii) $Z_i$ is independent of $(A^{i-1}, Y^{i-1})$, $i = 0, \ldots, n$,
(iii) $Z_i$ is independent of $V_i$, for $i = 0, \ldots, n$,
(iv) $\{Z_i \sim N(0, K_{Z_i}) : i = 0, \ldots, n\}$ is an independent Gaussian process.

The corresponding output process is

$Y_i = \big( C_{i-1} + D_i \Gamma_i \big) Y_{i-1} + D_i Z_i + V_i, \quad Y_{-1} = s.$  (13)
Furthermore, the optimal control and innovations parts of the strategy are found in [1, Section IV]. We include them below for completeness.

(a) The optimal control part of the strategy $\{U_i : i = 0, \ldots, n\}$ is given by

$U_i = \Gamma_i Y_{i-1}, \quad i = 0, \ldots, n,$  (14)

$\Gamma_i = -\big( D_i^T P(i+1) D_i + R_i \big)^{-1} D_i^T P(i+1) C_{i-1},$  (15)

where $\Gamma_n = 0$, and $\{P(i) : i = 0, \ldots, n\}$ is a solution of the Riccati difference matrix equation

$P(i) = C_{i-1}^T P(i+1) C_{i-1} + Q_{i-1} - C_{i-1}^T P(i+1) D_i \big( D_i^T P(i+1) D_i + R_i \big)^{-1} \big( C_{i-1}^T P(i+1) D_i \big)^T, \quad P(n) = Q_{n-1}.$  (16)
(b) The optimal innovations part of the strategy $\{K_{Z_i} \succeq 0 : i = 0, \ldots, n\}$ is the solution of the following problem:

$J_{A^n \to Y^n | s}(\pi^*, \kappa) = C_{0,n}(\kappa_0, \ldots, \kappa_n) \equiv \sum_{i=0}^n C_i(\kappa_i)$  (17)

$\triangleq \sup_{K_{Z_i} \succeq 0,\ i=0,\ldots,n \,:\, \sum_{i=0}^n \kappa_i(K_{Z_i}) = \kappa (n+1)} \ \sum_{i=0}^n C_i(\kappa_i)$  (18)

where

$C_i(\kappa_i) \triangleq \frac{1}{2} \log \frac{|D_i K_{Z_i} D_i^T + K_{V_i}|}{|K_{V_i}|}, \quad i = 0, \ldots, n,$  (19)

$\kappa_i \equiv \kappa_i(K_{Z_i}) \triangleq \begin{cases} \mathrm{tr}\big( R_n K_{Z_n} \big), & i = n, \\ \mathrm{tr}\big( P(i+1) \big( D_i K_{Z_i} D_i^T + K_{V_i} \big) + R_i K_{Z_i} \big), & i = 1, \ldots, n-1, \\ \mathrm{tr}\big( P(1) \big( D_0 K_{Z_0} D_0^T + K_{V_0} \big) + R_0 K_{Z_0} \big) + \langle s, P(0) s \rangle, & i = 0. \end{cases}$  (20)
Example 3.1: Let $p = q = 1$. Then the solution of the Riccati difference matrix equation (16) does not depend on the covariance $K_{Z_i}$, $i = 0, \ldots, n$, and hence this simplifies the computation of the optimal $K_{Z_i}$ in the optimization problem (18). By the Kuhn-Tucker conditions we obtain

$K_{Z_n} = \Big\{ \frac{1}{2 \lambda R_n} - \frac{K_{V_n}}{D_n^2} \Big\}^{+}, \qquad \{x\}^{+} \triangleq \max\{0, x\},$  (21)

$K_{Z_i} = \Big\{ \frac{1}{2 \lambda \big( P(i+1) D_i^2 + R_i \big)} - \frac{K_{V_i}}{D_i^2} \Big\}^{+}$  (22)

for $i = n-1, n-2, \ldots, 0$, where $\lambda = \lambda_n(\kappa) \ge 0$ is chosen to satisfy the average constraint with equality, given by

$\sum_{i=0}^{n-1} \Big[ \Big\{ \frac{1}{2\lambda} - \big( P(i+1) D_i^2 + R_i \big) \frac{K_{V_i}}{D_i^2} \Big\}^{+} + P(i+1) K_{V_i} \Big] + \Big\{ \frac{1}{2\lambda} - R_n \frac{K_{V_n}}{D_n^2} \Big\}^{+} + s^2 P(0) = \kappa (n+1).$  (23)

Upon substituting into (19) we obtain

$C_{0,n}(\kappa) \triangleq J_{A^n \to Y^n | s}(\pi^*, \kappa) = \frac{1}{2} \sum_{i=0}^n \log \frac{D_i^2 K_{Z_i} + K_{V_i}}{K_{V_i}}.$  (24)
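The scalar water-filling solution (21)-(23) can be computed numerically by bisecting on $\lambda$; the Python sketch below is ours (not the authors' code) and assumes the Riccati solution $P(0..n)$ from (16) is supplied. When the power budget is below the threshold, the bisection drives all $K_{Z_i}$ to zero, consistent with a zero rate.

```python
import math

def waterfill(P, D, R, K_V, s, kappa, n):
    """Scalar water-filling (21)-(22): bisect on lambda so that the total
    power (23) equals kappa*(n+1); return K_Z[0..n] and the rate C_{0,n} (24)."""
    def K_Z(lam):
        KZ = [max(0.0, 1.0 / (2 * lam * (P[i + 1] * D[i] ** 2 + R[i]))
                  - K_V[i] / D[i] ** 2) for i in range(n)]
        KZ.append(max(0.0, 1.0 / (2 * lam * R[n]) - K_V[n] / D[n] ** 2))
        return KZ

    def power(lam):                        # left-hand side of (23)
        KZ = K_Z(lam)
        tot = sum((P[i + 1] * D[i] ** 2 + R[i]) * KZ[i] + P[i + 1] * K_V[i]
                  for i in range(n))
        return tot + R[n] * KZ[n] + s ** 2 * P[0]

    lo, hi = 1e-12, 1e12                   # power(lam) is decreasing in lam
    for _ in range(200):                   # geometric bisection on lambda
        mid = math.sqrt(lo * hi)
        if power(mid) > kappa * (n + 1):
            lo = mid
        else:
            hi = mid
    KZ = K_Z(hi)
    C = 0.5 * sum(math.log((D[i] ** 2 * KZ[i] + K_V[i]) / K_V[i])
                  for i in range(n + 1))
    return KZ, C

# toy data: n = 3, constant P(i) = 0.5, unit D, R, K_V, and s = 0
KZ, C = waterfill([0.5] * 4, [1.0] * 4, [1.0] * 4, [1.0] * 4, 0.0, 10.0, 3)
assert C > 0.0 and abs(KZ[3] - 10.0) < 1e-6 and abs(KZ[0] - 19.0 / 3.0) < 1e-6
```

For these toy values the water level is interior (all $\{\cdot\}^{+}$ inactive), so (23) gives $\lambda = 1/22$, $K_{Z_i} = 19/3$ for $i < 3$ and $K_{Z_3} = 10$.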
Clearly, in general, for each $i$, $C_i(\kappa_i) > 0$ provided $\kappa_i \in (\kappa_{\min,i}, \infty)$, and these critical values depend on whether the coefficients of the G-RM (13) lie outside or inside the unit circle, i.e., $|C_{i-1}| \ge 1$ or $|C_{i-1}| < 1$, for $i = 0, \ldots, n$ (see [1]). Expressions (21), (22) are known as water-filling in information theory. Clearly, the following hold. If $\kappa \triangleq \kappa_{\min} \in [0, \infty)$ is such that  (25)

$\lambda_n(\kappa_{\min}) > \frac{D_n^2}{2 K_{V_n} R_n},$  (26)

$\lambda_i(\kappa_{\min}) > \frac{D_i^2}{2 K_{V_i} \big( P(i+1) D_i^2 + R_i \big)}, \quad i = 0, \ldots, n-1,$  (27)

then $C_{0,n}(\kappa) = 0$, and  (28)

$\kappa_{\min} \triangleq \kappa_{0,n}(K_Z)\big|_{K_Z = 0} = \sum_{i=0}^{n-1} P(i+1) K_{V_i} + s^2 P(0).$

Hence, $\kappa_{\min}$ is the minimum power above which a positive information rate occurs, i.e., $C_{0,n}(\kappa) > 0$, $\kappa \in (\kappa_{\min}, \infty)$. $\kappa_{\min}$ is the solution of the Linear-Quadratic Gaussian stochastic optimal control problem of minimizing the average power, when $A_i = g_i(Y_{i-1}, s)$, $i = 0, \ldots, n$, i.e., non-random; see the numerical example in Fig. 2.
Fig. 2. Example in which the rate of a scalar system is shown for different values of power level, $\kappa$, for $n = 100$ and $n = 1000$. Note that $C_i$ is chosen such that $|C_i| > 1$.

A. Optimal Controllers-Encoders
Now, we use the calculation of the n-FTFI capacity to show that linear controller-encoder strategies in $(x_i, y^{i-1})$, denoted by $\{\mu_i^L(x_i, y^{i-1}) : i = 0, \ldots, n\}$, achieve the n-FTFI capacity, and that among such linear controller-encoders, the conditional mean decoder minimizes the MSE.

Theorem 3.2: In the class of linear controller-encoders that encode the Gaussian Markov process $X^n$ defined by (7) and operate at the n-FTFI capacity, the optimal controller-encoder exists, the conditional mean decoder minimizes the MSE, and these are given below.
Let $\{\Gamma_i^*, K_{Z_i}^* : i = 0, \ldots, n\}$ be the optimal strategy corresponding to (15) and (18), and $\{(A_i^*, Y_i^*, Z_i^*) : i = 0, \ldots, n\}$ the corresponding joint process. Define the filter estimates¹ and conditional covariances

$\widehat{X}_{i|i-1} \triangleq \mathbf{E}_s\{ X_i \mid Y^{*,i-1} \}, \qquad \widehat{X}_{i|i} \triangleq \mathbf{E}_s\{ X_i \mid Y^{*,i} \},$

$\Sigma_{i|i-1} \triangleq \mathbf{E}_s\big\{ \big( X_i - \widehat{X}_{i|i-1} \big)\big( X_i - \widehat{X}_{i|i-1} \big)^T \mid Y^{*,i-1} \big\},$

$\Sigma_{i|i} \triangleq \mathbf{E}_s\big\{ \big( X_i - \widehat{X}_{i|i} \big)\big( X_i - \widehat{X}_{i|i} \big)^T \mid Y^{*,i} \big\}, \quad i = 0, \ldots, n.$
(a) Controller-Encoder. The mutual information between $X^n$ and $Y^n$ for fixed $S = s$, denoted by $I^{\mu^{L,*}}(X^n; Y^n | s)$, of the controller-encoder strategy²

$A_i^* = \mu_i^{L,*}(X_i, Y^{*,i-1}) = \Gamma_i^* Y_{i-1}^* + \Theta_i \big( X_i - \widehat{X}_{i|i-1} \big),$  (29)

$\Theta_i = K_{Z_i}^{*,\frac{1}{2}} \, \Sigma_{i|i-1}^{-\frac{1}{2}}, \quad \Theta_i \succeq 0, \quad i = 0, \ldots, n,$  (30)

$Y_i^* = \big( C_{i-1} + D_i \Gamma_i^* \big) Y_{i-1}^* + D_i \Theta_i \big( X_i - \widehat{X}_{i|i-1} \big) + V_i,$  (31)

operates at the n-FTFI capacity of the G-RM, i.e.,

$I^{\mu^{L,*}}(X^n; Y^n | s) = J_{A^n \to Y^n | s}(\pi^*, \kappa).$  (32)

Moreover, the following hold.
(b) Filter Estimates. Define the innovations process by $\nu_i \triangleq Y_i^* - \mathbf{E}_s\{ Y_i^* \mid Y^{*,i-1} \}$, $i = 0, \ldots, n$. Then the optimal filter estimates satisfy the following recursions:

$\widehat{X}_{i+1|i} = F_i \widehat{X}_{i|i-1} + \overline{\Psi}_{i|i-1} \big( Y_i^* - ( C_{i-1} + D_i \Gamma_i^* ) Y_{i-1}^* \big) = F_i \widehat{X}_{i|i-1} + \overline{\Psi}_{i|i-1} \nu_i, \quad \widehat{X}_{0|-1} = \text{given},$  (33)

$\widehat{X}_{i+1|i+1} = F_i \widehat{X}_{i|i} + \Psi_{i|i-1} \nu_i,$  (34)

$\Sigma_{i+1|i} = F_i \Sigma_{i|i-1} F_i^T + G_i K_{W_i} G_i^T - F_i \Sigma_{i|i-1} \big( D_i \Theta_i \big)^T \big[ D_i K_{Z_i}^* D_i^T + K_{V_i} \big]^{-1} D_i \Theta_i \Sigma_{i|i-1} F_i^T,$  (35)

$\Sigma_{0|-1} = \mathbf{E}_s\big\{ \big( X_0 - \widehat{X}_{0|-1} \big)\big( X_0 - \widehat{X}_{0|-1} \big)^T \big\},$  (36)

$\Sigma_{i|i} = \Sigma_{i|i-1} - \Psi_{i|i-1} D_i \Theta_i \Sigma_{i|i-1},$  (37)

$\Sigma_{i+1|i} = F_i \Sigma_{i|i} F_i^T + G_i K_{W_i} G_i^T,$  (38)

where the filter gains are defined by

$\overline{\Psi}_{i|i-1} \triangleq F_i \Psi_{i|i-1}, \qquad \Psi_{i|i-1} \triangleq \Sigma_{i|i-1} \big( D_i \Theta_i \big)^T \big[ D_i K_{Z_i}^* D_i^T + K_{V_i} \big]^{-1}.$  (39)

¹ Without loss of generality we may assume $\sigma\{Y^{-1}\} = \{\Omega, \emptyset\}$, and omit the dependence of $\mathbf{E}_s$ on $s$.
² For any square matrix $M$ with real entries, $M^{\frac{1}{2}}$ is its square root.
(c) Conditional Mean Decoder. For the controller-encoder $\mu_i^{L,*}(X_i, Y^{*,i-1}, s) = \widehat{\mu}_i^{L,*}(X_i, Y_{i-1}^*, \widehat{X}_{i|i-1}, s) \equiv \widehat{\mu}_i^{L,*}(X_i, Y_{i-1}^*, \widehat{X}_{i-1|i-1}, s)$, the conditional mean decoder $\widehat{X}_{i|i} = \widehat{d}_i(Y^{*,i}, \mu^{L,*}(\cdot), s)$, $i = 0, \ldots, n$, is optimal, in the sense of minimizing the MSE.

Proof: (a)-(c) are easily verified from [13].

Next, we give an illustrative example of Theorem 3.2 to demonstrate the properties of the optimal controller-encoder and its relation to the MSE, which generalizes the material discussed in Section II-C.
Example 3.3: Consider Theorem 3.2 with $p = q = 1$. By solving Riccati equation (35) we obtain

$\Sigma_{i+1|i} = F_i^2 \Big( \frac{D_i^2 K_{Z_i} + K_{V_i}}{K_{V_i}} \Big)^{-1} \Sigma_{i|i-1} + G_i^2 K_{W_i} = F_i^2 \, e^{-2 C_i(\kappa_i)} \, \Sigma_{i|i-1} + G_i^2 K_{W_i}, \quad i = 0, \ldots, n, \quad \Sigma_{0|-1} \ \text{given},$  (40)

$\Sigma_{i|i} = F_{i-1}^2 \, e^{-2 C_i(\kappa_i)} \, \Sigma_{i-1|i-1} + e^{-2 C_i(\kappa_i)} \, G_{i-1}^2 K_{W_{i-1}},$  (41)

$\Sigma_{0|0} = e^{-2 C_0(\kappa_0)} \, \Sigma_{0|-1}.$  (42)

Clearly, by the above solutions there is a direct relation between the sequence of the MSEs at each time $\{\Sigma_{i|i-1}, \Sigma_{i|i}\}$, the information rates at each time instant $\{C_i(\kappa_i) : i = 0, 1, \ldots, n\}$, and the parameters of the information process $\{(F_i, G_i, K_{W_i}) : i = 0, \ldots, n-1\}$.
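A one-step numeric check (our Python sketch, not from the paper, using $\Theta_i^2 = K_{Z_i}/\Sigma_{i|i-1}$ from (30) in the scalar case) confirms that the covariance step (35) collapses to the closed form (40).

```python
import math

def sigma_pred_step(Sigma, F, G, K_W, D, K_Z, K_V):
    """One step of the covariance recursion (35) with p = q = 1 and
    Theta^2 = K_Z / Sigma, cf. (30)."""
    S = D ** 2 * K_Z + K_V                          # innovations covariance
    gain_term = F ** 2 * Sigma ** 2 * D ** 2 * (K_Z / Sigma) / S
    return F ** 2 * Sigma + G ** 2 * K_W - gain_term

F, G, K_W, D, K_Z, K_V, Sigma = 1.3, 0.5, 1.0, 1.0, 2.0, 1.0, 0.7
C_i = 0.5 * math.log((D ** 2 * K_Z + K_V) / K_V)        # per-step rate (19)
lhs = sigma_pred_step(Sigma, F, G, K_W, D, K_Z, K_V)
rhs = F ** 2 * math.exp(-2 * C_i) * Sigma + G ** 2 * K_W  # closed form (40)
assert abs(lhs - rhs) < 1e-12
```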
Case 1. Information Process $X_{i+1} = F_i X_i$, $X_0 \sim N(0, \sigma_{X_0}^2)$, $i = 0, \ldots, n$; that is, we set $G_i = 0$. Then

$\Sigma_{n|n} = |F_0|^2 |F_1|^2 \cdots |F_{n-1}|^2 \, e^{-2 \sum_{j=0}^n C_j(\kappa_j)} \, \Sigma_{0|-1}, \quad n = 0, 1, \ldots, \qquad \Sigma_{0|0} = e^{-2 C_0(\kappa_0)} \Sigma_{0|-1}.$

Then $\Sigma_{n|n}$, $n = 0, 1, \ldots$, converges monotonically to zero, i.e.,

if $\ \sum_{i=0}^n C_i(\kappa_i) > \sum_{i \in \{0, \ldots, n-1\} : |F_i| > 1} \log |F_i|, \quad n = 0, 1, \ldots, \quad$ then $\ \lim_{n \to \infty} \Sigma_{n|n} = 0.$  (43)

Condition (43) states that the larger the $F_i$'s, i.e., the more unstable the information process, the larger the total power $\kappa$ needed to ensure that the MSE of the decoder converges to zero.
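Condition (43) can be checked numerically; the small Python sketch below (ours, assuming a time-invariant $F_i = F$ and a constant per-step rate) shows the Case 1 MSE contracting once the rate exceeds $\log|F|$.

```python
import math

# Case 1 with G_i = 0: Sigma_{i|i} = F^2 exp(-2 C) Sigma_{i-1|i-1}, cf. (41)
F, C_per_step, Sigma = 1.5, 0.5, 1.0      # rate 0.5 > log(1.5) ~ 0.405
assert C_per_step > math.log(F)           # condition (43) holds
for _ in range(200):
    Sigma = F ** 2 * math.exp(-2 * C_per_step) * Sigma
assert Sigma < 1e-8                       # MSE -> 0 despite |F| > 1
```

The per-step contraction factor is $F^2 e^{-2C} < 1$ exactly when $C > \log|F|$, which is the scalar, time-invariant reading of (43).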
Case 2. RV $X \sim N(0, \sigma_X^2)$. The MSE is obtained from Case 1 by setting $F_i = 1$, $i = 0, \ldots, n-1$, giving

$\Sigma_{n|n} = e^{-2 \sum_{i=0}^n C_i(\kappa_i)} \, \Sigma_{0|-1}, \quad n = 0, 1, \ldots.$  (44)

This is the analog of the Elias MSE decoding error (11); it is identical to (11) if the G-RM is memoryless, i.e., if $C_i = 0$, $i = 0, \ldots, n-1$, $Q_i = 0$, $i = 0, \ldots, n-1$, and $R_i = 1$, $i = 0, \ldots, n$.
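The identification of (44) with the Elias error (11) in the memoryless case can be verified numerically (our sketch; it assumes $D_i = 1$ and $K_{V_i} = K_V$, so that each per-step rate (19) with $K_{Z_i} = \kappa$ equals $\frac{1}{2}\log(1 + \kappa/K_V)$).

```python
import math

kappa, K_V, sigma2_X, n = 2.0, 1.0, 1.0, 7
C_i = 0.5 * math.log(1.0 + kappa / K_V)             # per-step rate (19)
mse_44 = math.exp(-2 * (n + 1) * C_i) * sigma2_X    # expression (44)
mse_11 = sigma2_X / (1.0 + kappa / K_V) ** (n + 1)  # Elias error (11)
assert abs(mse_44 - mse_11) < 1e-12
```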
Numerical example. In Fig. 4 we observe that as we increase the total power $\kappa$, the estimation error $\Sigma_{n|n}$, given iteratively in (41), is reduced and eventually converges exponentially to zero. However, below a certain power level the constraint set is not feasible, i.e., it is empty.

##### Citations

Proceedings ArticleDOI
07 Jul 2019
Abstract: Shannon's coding capacity of memoryless additive Gaussian noise (AGN) channels with noiseless feedback is known to be achieved by the Elias [1] coding scheme of communicating the mean square-error (MSE) of a Gaussian RV $X \sim N(0, \sigma_X^2)$, from past channel outputs, and decoding it using an MSE decoder. Further, it is known that among all encoders and decoders that minimize the MSE, the Elias encoder and decoder are globally optimal. In this paper we derive analogous results for communicating unstable Gaussian Markov processes over unstable multiple-input multiple-output (MIMO) Gaussian recursive models (G-RM) with memory, often called infinite impulse response (IIR) models, subject to an average cost of quadratic form. However, unlike memoryless AGN channels, certain conditions are required for such generalizations. Further, to show global optimality, we need to invoke the Gorbunov and Pinsker [2] nonanticipatory entropy, instead of the classical rate distortion function of the source. Another important observation is that we need a two-parameter coding scheme, instead of the one-parameter coding scheme of memoryless AGN channels.

### Cites background or methods from "A General Coding Scheme for Signali..."

• ...Compared to [6], we wish to show global optimality of the tuple {controller-encoder, decoder} of strategies, according to the following definition....


• ...CONCLUSIONS The constructive procedure developed by the authors and collaborators [6] to synthesize {controller-encoder-decoder} strategies, that encode unstable Gaussian Markov processes, communicate them over unstable G-RMs to the decoder, is shown to be globally optimal and linear, among all strategies....


• ...Answer to the question: We consider the two-parameter coding scheme [6] of communicating X to the decoder...



Book
16 Nov 2021

1,610 citations

### "A General Coding Scheme for Signali..." refers background in this paper

• ...with or without feedback, the capacity is given by [3]...


Book
01 Jan 1986

1,037 citations

### "A General Coding Scheme for Signali..." refers methods in this paper

• ...Asymptotic Properties: The asymptotic properties of the controller-encoder-decoder are obtained by analyzing (6), under the following assumptions (see [14] on Linear Quadratic stochastic optimal control theory with complete information)....


Journal ArticleDOI
J. Schalkwijk
TL;DR: This paper presents a coding scheme that exploits the feedback to achieve considerable reductions in coding and decoding complexity and delay over what would be needed for comparable performance with the best known (simplex) codes for the one-way channel.
Abstract: In some communication problems, it is a good assumption that the channel consists of an additive white Gaussian noise forward link and an essentially noiseless feedback link. In this paper, we study channels where no bandwidth constraint is placed on the transmitted signals. Such channels arise in space communications. It is known that the availability of the feedback link cannot increase the channel capacity of the noisy forward link, but it can considerably reduce the coding effort required to achieve a given level of performance. We present a coding scheme that exploits the feedback to achieve considerable reductions in coding and decoding complexity and delay over what would be needed for comparable performance with the best known (simplex) codes for the one-way channel. Our scheme, which was motivated by the Robbins-Monro stochastic approximation technique, can also be used over channels where the additive noise is not Gaussian but is still independent from instant to instant. An extension of the scheme for channels with limited signal bandwidth is presented in a companion paper (Part II).

551 citations

### "A General Coding Scheme for Signali..." refers methods in this paper

• ...Schalkwijk and Kailath [5] showed that, when the Elias coding scheme is applied to a set of equiprobable messages $\{0, 1, \ldots, M_n \triangleq \exp\{(n+1)R\}\}$, $n = 0, 1, \ldots$, then the probability of ML decoding error at time $n$ decays doubly exponentially....


• ...[5] J. P. M. Schalkwijk and T. Kailath, “A coding scheme for additive noise channels with feedback-I: no bandwidth constraints,” IEEE Transactions on Information Theory, vol. 12, no. 2, pp. 172–182, April 1966....


• ...Variations of the Elias and Schalkwijk-Kailath schemes for network communication over memoryless AGN channels are extensive and given in [9]–[12]....


Journal ArticleDOI
TL;DR: In this paper a deterministic feedback code is presented for the two-user Gaussian multiple access channel, which is shown to allow reliable communication at all points inside a region larger than any previously obtained.
Abstract: Since the appearance of [10] by Gaarder and Wolf, it has been well known that feedback can enlarge the capacity region of the multiple access channel. In this paper a deterministic feedback code is presented for the two-user Gaussian multiple access channel, which is shown to allow reliable communication at all points inside a region larger than any previously obtained. An outer bound is given which is shown to coincide with the achievable region, thus yielding the capacity region of this channel exactly.

349 citations

### "A General Coding Scheme for Signali..." refers background in this paper

• ...Variations of the Elias and Schalkwijk-Kailath schemes for network communication over memoryless AGN channels are extensive and given in [9]–[12]....


Journal ArticleDOI
TL;DR: An asymptotic equipartition theorem for nonstationary Gaussian processes is proved, and it is shown that the feedback capacity $C_{FB}$ in bits per transmission and the nonfeedback capacity $C$ satisfy $C \le C_{FB} \le C + \frac{1}{2}$ and $C_{FB} \le 2C$.
Abstract: The capacity of time-varying additive Gaussian noise channels with feedback is characterized. Toward this end, an asymptotic equipartition theorem for nonstationary Gaussian processes is proved. Then, with the aid of certain matrix inequalities, it is proved that the feedback capacity $C_{FB}$ in bits per transmission and the nonfeedback capacity $C$ satisfy $C \le C_{FB} \le C + \frac{1}{2}$ and $C_{FB} \le 2C$.

232 citations

### "A General Coding Scheme for Signali..." refers background or methods in this paper

• ...Kim [8] revisited the limited memory, stationary ergodic version of the Cover and Pombra [7] AGN channel, and applied frequency domain methods to conclude that Butman’s conjecture is true....


• ...[7] T. Cover and S. Pombra, “Gaussian feedback capacity,” IEEE Transactions on Information Theory, vol. 35, no. 1, pp. 37–43, Jan. 1989....


• ...Cover and Pombra [7] derived the characterization of feedback capacity for the AGN channel (8), when the noise $V^n$ is nonstationary, nonergodic, with distribution $P_{V^n}$....
