Proceedings ArticleDOI

# Joint source-channel coding with adaptation

27 Jul 2016 - pp. 77-81

TL;DR: A JSCC strategy that exploits the unequiprobable source output to attain better data quality than traditional methods is proposed; it yields significantly improved speech quality after transmission in both evaluated cases.

Abstract: According to Shannon's source-channel separation theorem, the source coding and the channel coding operations should be performed independently. However, such a method does not exploit the nature of unequiprobable source output in a finite-block regime, and therefore cannot achieve good quality in several practical scenarios. In this study, we propose a JSCC strategy which takes into account the unequiprobable source output to attain better data quality than traditional methods. An adaptation is performed in order to map source code outputs to appropriate inputs of the channel code. We also propose a greedy algorithm to construct parameters for the adaptation in polynomial time. Our simulation is performed on AMR compression to estimate the effect of our approach as compared to the traditional strategy. Using this approach we find significantly improved speech quality after transmission in both cases: transfer of only one AMR codec mode and transfer of all AMR codec modes.

Topics: Variable-length code (63%), Shannon–Fano coding (58%), Huffman coding (58%)

### Introduction

• There are many approaches to improve the traditional method in different scenarios.
• UEP consists of allocating coding redundancy depending on the importance of the information bits.

### B. Definitions

• For practical systems where blocklengths are limited, with a specific source code and channel code equipped, the authors propose a method to take into account the nature of the channel and the unequiprobable source outputs to improve traditional JSCCs.
• The authors introduce here a new concept called adaptation, which is considered as a bijection between source code outputs and channel code inputs.
• At the receiver, a converse process is performed: before source decoding, the output w of the channel decoder is put into the inverse adaptation ad⁻¹.
• If it is transferred over a noisy channel before being decompressed, errors caused by the channel make the rate distortion greater than that of decompressed data without transmission error.
• Based on this formula, the authors propose two different algorithms to find appropriate adaptations in the next section.

• To find the optimal adaptation in this situation, the authors solve the optimization problem min_{ad(w)=ŵ} E_er(w).
• To find the optimal adaptation, the authors calculate over all possible assignments for ad(w) and find the minimum Eer(w) by applying this lemma.

### IV. SIMULATION AND RESULTS

• The authors have conducted some experiments with and without an adaptation in two scenarios.
• Parameters for the adaptation are calculated by Algorithm 1.
• In the second scenario, their algorithm minimizes the rate distortion of speech data for all codec modes with the adaptation parameters calculated by Algorithm 2.

### A. Simulation description

• The convolution code with rate 1/3 is used to add redundant data to the compressed versions of the speech data.
• Therefore, the important values of the AMR data are mapped to values which suffer fewer errors than others when transferred over noisy channels.
• For the sake of clarity and brevity, the authors use a binary asymmetric channel.

### B. Simulation parameter

• The authors assume that the importance of each channel code input is proportional to its probability distribution.
• Interestingly, the highest probability falls on the value of the 8-bit header, which is the most important part of each AMR frame.
• In addition, the authors further assume that in each type, the rate distortion augmentation of 2 source output values is proportional to the Hamming distance between them.
• A coefficient C_type multiplies the Hamming distance between 2 values of each type to calculate the rate distortion augmentation as follows: s(w1, w2) = (C_h·p_h(w1) + C_A·p_A(w1) + C_B·p_B(w1) + C_C·p_C(w1))·H(w1, w2), where p_h, p_A, p_B, p_C and C_h, C_A, C_B, C_C are the probabilities of each value and the coefficients of header, class A, class B and class C values, respectively.

### C. Performance evaluation

• This scenario is to protect a specific output value from the source code, when the speech data is compressed with a particular compression rate.
• The performance of the adaptation for the first scenario is shown in Fig. 2.
• As seen in the figure, the traditional method is ineffective for transferring data over a noisy channel; by using different adaptations in different scenarios, protecting the header values of every AMR frame attains considerably higher speech quality in each mode, with 6 of 7 received codec modes having a MOS greater than 2.5.
• It was apparent beforehand that the quality of files encoded at lower bit-rates is lower than that of files encoded at higher bit-rates.
• In contrast, as can be seen in the figure, the JSCC system equipped with an appropriate adaptation attained considerably higher quality speech data than the traditional model, with 6 out of 7 received codec modes having MOS greater than 2.0.

### V. CONCLUSION

• To improve the quality of data transmission in JSCC strategy, the authors have shown a new method to exploit the unequal importance of source outputs and the nature of noisy channels.
• As evidenced by the numerical results, the adaptation, which maps source outputs to appropriate channel code inputs, can outperform the traditional UEP method for transmitting speech data over a noisy channel.
• Unless all messages are affected equally by the channel during transfer, their JSCC with adaptation offers a significant advantage in the finite-blocklength regime.


HAL Id: hal-01446916
https://hal.archives-ouvertes.fr/hal-01446916
Submitted on 26 Jan 2017
Minh Quang Nguyen, Hang Nguyen, Eric Renault, Phan Thuan Do
To cite this version:
Minh Quang Nguyen, Hang Nguyen, Eric Renault, Phan Thuan Do. Joint source-channel coding with adaptation. ICCE 2016: 6th International Conference on Communications and Electronics, Jul 2016, Halong Bay, Vietnam. pp. 77-81, doi:10.1109/CCE.2016.7562616. hal-01446916

Minh-Quang Nguyen¹,², Hang Nguyen¹, Eric Renault¹, Phan-Thuan Do²
¹ SAMOVAR, Télécom SudParis, CNRS, Université Paris-Saclay, 9 rue Charles Fourier - 91011 Evry Cedex
{minh quang.nguyen, hang.nguyen, eric.renault}@telecom-sudparis.eu
² School of Information and Communication Technology, Hanoi University of Science and Technology
Email: thuandt@soict.hust.edu.vn
Index Terms—joint source-channel coding, Shannon theory, information theory, speech, unequal error protection
I. INTRODUCTION
Joint source-channel coding (JSCC) is considered as one
of the enabling technologies for reliable communications in
which a source code and a channel code are performed
sequentially on data before transmission over noisy channels.
During source coding, we choose the most effective representation of the data, removing all redundancy to form the most compressed version possible. Conversely, during channel coding, we add redundant information to the compressed data prior to transmission, so that a receiver can then recover the original data with no apparent errors. In practical systems, a perennial question is how to design a source code and a channel code that can transfer data effectively.
The relationship between the source code and the channel code was first described in Shannon's theory [?], which provided a remarkable insight for optimal transmission systems. It includes two parts. The direct part states that a source can be transmitted reliably over a channel if the source coding rate is strictly below the channel capacity. The converse part states that if the source coding rate is greater than the channel capacity, reliable transmission is impossible. Shannon's theory shows the limit of what is achievable when infinite time is available to transmit information. Following Shannon's proof, communication systems perform source coding and channel coding separately and independently to transfer data.
Shannon's theory shows the limits of what is achievable when time is infinite, i.e. when the blocklength of the channel code output goes to infinity. In real-world systems, especially modern real-time applications, blocklengths are finite and one cannot wait forever. Kostina et al. [?] show that data in practical systems can be transferred more efficiently than with traditional methods by choosing an appropriate source and channel code: a source code is chosen with knowledge of the channel, and a channel code is chosen with knowledge of the source's distribution and distortion measurement. In their work, bounds show that k, the maximum number of source symbols transmissible using a given channel blocklength n, must satisfy

nC − kR(d) ≈ √(nV + kV(d)) · Q⁻¹(ε) + O(log n)

under the fidelity constraint of exceeding a given distortion level d with probability ε, where Q is the standard Gaussian complementary cumulative distribution function, and C, V, R(d), V(d) are the channel capacity, the channel dispersion, the rate-distortion function and the rate-dispersion function, respectively. This is a remarkable insight, but to achieve the fundamental limit, a lot of work must be done to design codes reaching it.
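As a rough numerical illustration (not from the paper), the bound above can be evaluated with a simple search for the largest admissible k; the parameter values below are hypothetical and the O(log n) term is dropped:

```python
from math import sqrt
from statistics import NormalDist

def max_source_symbols(n, C, V, Rd, Vd, eps):
    """Largest k satisfying n*C - k*R(d) >= sqrt(n*V + k*V(d)) * Qinv(eps),
    with the O(log n) term dropped (illustration only)."""
    q_inv = NormalDist().inv_cdf(1 - eps)  # Q^{-1}(eps)
    k = 0
    while n * C - (k + 1) * Rd >= sqrt(n * V + (k + 1) * Vd) * q_inv:
        k += 1
    return k

# Hypothetical numbers: capacity 0.5 bit/use, R(d) = 1 bit/symbol,
# dispersions 0.25, excess-distortion probability 1e-3.
k = max_source_symbols(n=1000, C=0.5, V=0.25, Rd=1.0, Vd=0.25, eps=1e-3)
```

The dispersion term √(nV + kV(d))·Q⁻¹(ε) makes k noticeably smaller than the asymptotic value nC/R(d).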
There are many approaches to improve the traditional
method in different scenarios. One of the main trends of JSCC
is using Unequal Error Protection (UEP). It is based on an
assumption: not all information bits are equally important, due
to the different sensitivity of the source decoder to errors. UEP
consists of allocating coding redundancy depending on the im-
portance of the information bits. In published literature, mainly
two different channel packet formats have been considered:
variable-length channel packets with ﬁxed-length information
block (ﬁxed-k approach) [?], ﬁxed-length channel packets with
variable length information block (ﬁxed-n approach) [?] and
variable k, n for different packets (variable-(n, k) approach)
[?], [?], [?]. However, little attention has been paid to exploit
the unequiprobable distribution and the unequal importances
of source code output values.
In this paper, we extend the concept of UEP: instead of varying the redundancy allocated to information bits, we map important source output values to channel input values which are less distorted by the noisy channel than others. Hence, by adding the same redundancy to all information bits, important parts of the source output are protected more efficiently than others. Simulation results are compared with the traditional method in terms of speech data quality.

II. JOINT SOURCE CHANNEL CODING IN FINITE
BLOCKLENGTH
A. Asymptotic optimality
Consider a channel blocklength n, a source blocklength k, a rate-distortion function R(d) and a channel capacity C. The output of the most advantageous source encoder is approximately equiprobable over a set of roughly exp(kR(d)) distinct messages (for large k). This source encoder makes it possible to represent most of the source outcomes within distortion d. On the other hand, the channel coding theorem shows that there exists a channel code that can distinguish M = exp(kR(d)) < exp(nC) messages with high probability if a maximum-likelihood decoder is used in the system. Therefore, the JSCC theorem shows that one can achieve high-quality transmission (with a small probability of the distortion exceeding d) by a simple combination of source and channel codes. However, in practical systems where n is finite, the output of the source encoder is not always equiprobable. Hence schemes that perform source decoding and channel decoding separately, without exploiting unequal message probabilities, may not achieve near-optimal non-asymptotic performance. In the non-asymptotic regime, there is a need for an intelligent code that takes into account the residual encoded source redundancy at the channel decoder.
B. Deﬁnitions
For practical systems where blocklengths are limited, with a
speciﬁc source code and channel code equipped, we propose a
method to take into account the nature of the channel and the
unequiprobable source outputs to improve traditional JSCCs.
In source coding, several values in the compressed data are more important than others: values which appear regularly in headers, or values with a high probability of occurrence. On the other hand, several values of the channel code output are less distorted than others after transfer over a noisy channel. We introduce here a new concept called adaptation, which is a bijection between source code outputs and channel code inputs. The adaptation is performed at the transmitter and the receiver in order to map important source output values to channel code inputs which are less distorted by the channel.
Unlike traditional UEP strategies, where protection is based on the unequal importance of data positions in the compressed data, our proposed adaptation capitalizes on the unequal importance of source output values. The indispensable values are therefore masked by those which are less distorted during transfer. Moreover, instead of allocating different amounts of redundancy to the compressed data, we use the same channel code rate for every source output value. As a consequence, only one channel code is needed to protect the data.
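As a minimal sketch of the adaptation concept (the values and channel stand-in below are hypothetical, not the paper's code), the bijection and its inverse can be represented as dictionaries applied around the channel code:

```python
# Hypothetical adaptation on M = 4 messages: map important source
# outputs to channel inputs that are less distorted by the channel.
ad = {1: 3, 2: 1, 3: 4, 4: 2}            # bijection on {1, .., M}
ad_inv = {v: k for k, v in ad.items()}   # inverse used at the receiver

def transmit(w, channel):
    """Apply ad before channel coding, ad^{-1} after channel decoding."""
    w_channel = ad[w]                # adaptation at the transmitter
    w_received = channel(w_channel)  # channel code + noisy channel + decoder
    return ad_inv[w_received]        # inverse adaptation at the receiver

# With a noiseless channel, every source message is recovered exactly.
assert all(transmit(w, lambda x: x) == w for w in ad)
```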
Assume that we have a source that produces sequences X ∈ 𝒳. The output messages of the source code are indexed by the set {1, .., M}. Before entering the channel encoder, each source output w ∈ {1, .., M} is put into the adaptation, which gives ad(w). At the receiver, before source decoding, the output ŵ of the channel decoder is put into the inverse adaptation ad⁻¹. The mathematical model of our proposed JSCC is described in Definition 1.
Definition 1. A joint source-channel code with adaptation is a code such that the encoder and decoder mappings satisfy:

f = f_c^(M) ∘ ad ∘ f_s^(M)
g = g_s^(M) ∘ ad⁻¹ ∘ g_c^(M)

where

f_s^(M): 𝓜 → {1, .., M}
ad: {1, .., M} → {1, .., M}
f_c^(M): {1, .., M} → 𝒳
g_c^(M): 𝒴 → {1, .., M}
ad⁻¹: {1, .., M} → {1, .., M}
g_s^(M): {1, .., M} → 𝓜̂

For a channel characterized by P_{Y|X} and a system equipped with a specific source code and channel code, the adaptation maps source code outputs to channel code inputs to attain the least distortion. The operation of our proposed JSCC with adaptation is described in Figure 1.
In lossy compression, if compressed data is decompressed without being transferred over a channel (i.e. it does not suffer any error), the rate distortion is greater than 0. If it is transferred over a noisy channel before being decompressed, errors caused by the channel make the rate distortion greater than that of decompressed data without transmission errors. We define the quality degradation of data after transfer over the noisy channel as follows:

Definition 2. Let w1 ∈ {1, ..., M} be the output of the source code for an input m, and w2 ∈ {1, ..., M} be the value received after channel coding and transfer of w1 over the channel. With a specific source decoder, the rate distortion augmentation from w1 to w2 is given by:

s(w1, w2) = d(g_s(w2), m) − d(g_s(w1), m)

where d(m̂, m) is the rate distortion between the decompressed message m̂ and the original message m. The rate distortion augmentation can be interpreted as the difference in rate distortion when the source code output w1 of m is transformed into w2 by the noisy channel.
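Definition 2 can be illustrated with a toy source decoder and distortion measure; all values below are hypothetical:

```python
def s(w1, w2, d, g_s, m):
    """Rate distortion augmentation when source output w1 of message m
    is turned into w2 by the noisy channel (Definition 2)."""
    return d(g_s(w2), m) - d(g_s(w1), m)

# Hypothetical toy decoder (index -> reconstructed value) and
# squared-error distortion.
g_s = {1: 10, 2: 12, 3: 20}.get
d = lambda m_hat, m: (m_hat - m) ** 2

# A channel error that turns w1 = 1 into w2 = 3 adds distortion
# (20 - 10)^2 - (10 - 10)^2 = 100.
aug = s(1, 3, d, g_s, m=10)
```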
Definition 3. With a JSCC in the finite-blocklength regime, we define the probability of receiving a message ŵ after transfer over the channel and channel decoding, when the input of the channel encoder is w, as:

p(ŵ|w) = Σ_{y : g_c^(M)(y) = ŵ} P_{Y|X}(y | x = f_c(w))
There are many ways to map the set {1, .., M} to itself to form an adaptation. To estimate the effect of an adaptation, we calculate the expectation of the rate distortion augmentation by the formula below:

E = Σ_{w ∈ {1,...,M}} Σ_{ŵ ∈ {1,...,M}} p(w) · p(ŵ | ad(w)) · s(w, ad⁻¹(ŵ))

where w is the output of the source coder and ŵ is the output of the channel decoder. Based on this formula, we propose two different algorithms to find appropriate adaptations in the next section.
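This expectation can be sketched directly in code for a candidate adaptation; the two-message probabilities below are hypothetical, and p_ch[w′][ŵ] stands in for p(ŵ|w′):

```python
def expected_augmentation(M, p, p_ch, s, ad):
    """E = sum_w sum_w_hat p(w) * p(w_hat | ad(w)) * s(w, ad_inv(w_hat))
    for a candidate adaptation ad, a bijection on {1, .., M}."""
    ad_inv = {v: k for k, v in ad.items()}
    return sum(p[w] * p_ch[ad[w]][w_hat] * s(w, ad_inv[w_hat])
               for w in range(1, M + 1)
               for w_hat in range(1, M + 1))

# Hypothetical example: message 1 is frequent and channel input 2 is the
# more reliable one, so mapping 1 -> 2 lowers E.
p = {1: 0.9, 2: 0.1}
p_ch = {1: {1: 0.8, 2: 0.2}, 2: {1: 0.05, 2: 0.95}}  # p_ch[w'][w_hat]
s_fn = lambda w1, w2: 0.0 if w1 == w2 else 1.0
E_id = expected_augmentation(2, p, p_ch, s_fn, {1: 1, 2: 2})
E_swap = expected_augmentation(2, p, p_ch, s_fn, {1: 2, 2: 1})
```

Here E_swap is smaller than E_id, which is exactly the effect the adaptation is designed to achieve.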
A. One-value protection

The purpose of an adaptation in this scenario is to minimize the rate distortion augmentation caused by errors appearing on a particular value w ∈ {1, ..., M}. If w is assigned to w̄, i.e. ad(w) = w̄ and ad⁻¹(w̄) = w, the expectation of the rate distortion augmentation caused by errors on w becomes:

E_er(w) = Σ_{ŵ ∈ {1,...,M}} p(w) · p(ŵ | w̄) · s(w, ad⁻¹(ŵ))

To find the optimal adaptation in this situation, we solve the optimization problem:

min_{ad(w)=ŵ} E_er(w)

where the minimization is over all possible assignments for ad(w′), w′ ∈ {1, .., M}, w′ ≠ w.
Lemma 1. The minimum quality degradation caused by errors on w can be determined by the following formula:

min Σ_{w2} p(w1, ŵ2) s(ŵ1, w1) = Σ_{i=0}^{N} p(w1, ŵ2^i) s(ŵ1^i, w1)

in which, for 0 ≤ i < j ≤ N,

p(w1, ŵ2^i) < p(w1, ŵ2^j)

and

s(ŵ1^i, w1) > s(ŵ1^j, w1)
Proof. According to the rearrangement inequality,

Σ_{i=1}^{N} x_i y_i ≤ Σ_{i=1}^{N} x_{σ(i)} y_i

for every choice of real numbers x_1 ≤ x_2 ≤ ... ≤ x_N and y_1 ≥ y_2 ≥ ... ≥ y_N, and every permutation {x_{σ(i)}} of {x_i, i = 1, ..., N}. Applying the rearrangement inequality with x_i = p(w1, ŵ2^i) and y_i = s(ŵ1^i, w1) (i = 0, ..., N) gives the minimum of Σ_{w2} p(w1, ŵ2) s(ŵ1, w1).
To find the optimal adaptation, we iterate over all possible assignments for ad(w) and find the minimum E_er(w) by applying this lemma. The steps to find the optimal adaptation in the case of one-value protection are described in Algorithm 1.
Algorithm 1
1: Input: p(ŵ|w) and s(w1, w2) for all w1, w2, ŵ ∈ {1, .., M}
2: Output: set S of M pairs (w1, w2)
3: procedure
4:   min ← ∞
5:   for i ← 1 to M do
6:     S_t ← ∅
7:     assign channel input i to the protected value w
8:     sort the probabilities p(ŵ|i) in increasing order
9:     sort the augmentations s(w, ·) in decreasing order
10:    pair the sorted lists to assign the remaining values; store the pairs in S_t
11:    E ← E_er(w) for this assignment
12:    if E < min then
13:      min ← E
14:      S ← S_t
15:   return S
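One possible reading of Algorithm 1 as code (a sketch under our interpretation of the extracted pseudocode, not the authors' implementation): try every channel input i for the protected value w, pair the most likely error outcomes of i with the smallest augmentations as the rearrangement inequality prescribes, and keep the cheapest assignment.

```python
def one_value_protection(M, w, p_ch, s):
    """Sketch of Algorithm 1 (hypothetical reading): p_ch[w'][w_hat]
    stands in for p(w_hat | w'); s(w1, w2) is the augmentation."""
    best_cost, best_ad = float("inf"), None
    for i in range(1, M + 1):                    # candidate ad(w) = i
        # Error outcomes of input i, least likely first.
        by_p = sorted((v for v in range(1, M + 1) if v != i),
                      key=lambda v: p_ch[i][v])
        # Remaining source values, largest augmentation first, so that
        # zip pairs the most likely outcome with the smallest s.
        by_s = sorted((v for v in range(1, M + 1) if v != w),
                      key=lambda v: -s(w, v))
        ad = {w: i, **dict(zip(by_s, by_p))}     # candidate bijection
        cost = sum(p_ch[i][ad[v]] * s(w, v) for v in by_s)
        if cost < best_cost:
            best_cost, best_ad = cost, ad
    return best_ad

# Hypothetical 3-message channel in which input 1 is the most reliable.
p_ch = {1: {1: 0.9, 2: 0.05, 3: 0.05},
        2: {1: 0.1, 2: 0.8, 3: 0.1},
        3: {1: 0.2, 2: 0.2, 3: 0.6}}
ad = one_value_protection(3, w=1, p_ch=p_ch, s=lambda a, b: float(abs(a - b)))
```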
B. Greedy algorithms for unequal error protection

This section describes an algorithm to find an adaptation that decreases the rate distortion in JSCC. Let i(w, w′) be the indicator function of the adaptation, i.e.:

i(w, w′) = 1 if w is mapped to w′, and 0 otherwise

The optimal adaptation can be determined by solving the following problem:

min Σ_{w=1}^{M} Σ_{w′=1}^{M} p(w) p(ŵ′|w′) s(w, ŵ′) i(w, w′)

subject to:

Σ_{w=1}^{M} i(w, w′) = 1 for all w′
Σ_{w′=1}^{M} i(w, w′) = 1 for all w
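For intuition, a bijection satisfying these constraints can be found exactly for very small M by enumerating all M! candidates; the per-pair costs c(w, w′) below are hypothetical aggregates of p(w) p(ŵ′|w′) s(w, ŵ′):

```python
from itertools import permutations

def exact_adaptation(M, cost):
    """Brute-force the bijection ad minimizing sum_w cost[w][ad[w]].
    Only feasible for tiny M: there are M! candidate bijections."""
    best_total, best_ad = float("inf"), None
    for perm in permutations(range(1, M + 1)):
        ad = dict(zip(range(1, M + 1), perm))
        total = sum(cost[w][ad[w]] for w in ad)
        if total < best_total:
            best_total, best_ad = total, ad
    return best_ad, best_total

# Hypothetical per-pair costs c(w, w').
cost = {1: {1: 0.5, 2: 0.1}, 2: {1: 0.2, 2: 0.4}}
ad, total = exact_adaptation(2, cost)
```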

Solving this problem by means of a complete search is infeasible since M! cases would have to be considered. Instead, we propose a greedy algorithm with a fast running time to find a feasible solution. The idea is as follows: from the list of all w ∈ {1, .., M} sorted by importance, at each step we find the most appropriate value of w′ and map it to the most important value remaining in the list. The pseudocode of this greedy algorithm is described in Algorithm 2. The effect of our JSCC with adaptation is estimated by comparison with the traditional UEP strategy [?].

Algorithm 2 Greedy algorithm
1: Input: p(w), p(ŵ′|w′), s(w, w′) for all w, w′, ŵ′ ∈ {1, .., M}
2: Output: i(w, w′) for all w, w′ ∈ {1, .., M}
3: procedure
4:   sort {w} in decreasing order of importance: S = {m_i ∈ {1, .., M} | m_i < m_j with i < j}
5:   for i ← 1 to M do
6:     for j ← 1 to M do
7:       i(m_i, m_j) ← 0
8:   for i ← 1 to M do
9:     w_min ← m_1
10:    Sc_min ← ∞
11:    Pe ← 0
12:    for each w in T do
13:      Pe ← Pe + p(w|m_i) · s(w, m_i)
14:      if Pe < Sc_min then
15:        i(i, w_min) ← 0
16:        i(w_min, i) ← 0
17:        i(w, i) ← 1
18:        i(i, w) ← 1
19:        Sc_min ← Pe
20:        w_min ← w
21:   return i

It is clear that the complexity of this greedy algorithm is O(M²), where M is the number of source outputs.
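A loose sketch of the greedy idea (our reading of the pseudocode, with hypothetical importance and cost tables): process source values in decreasing order of importance and give each one the cheapest remaining channel input, which takes O(M²) time overall.

```python
def greedy_adaptation(M, importance, cost):
    """Greedy sketch (hypothetical reading of Algorithm 2): assign, in
    decreasing order of importance, each source value to the free
    channel input with the smallest cost."""
    free = set(range(1, M + 1))
    ad = {}
    for w in sorted(range(1, M + 1), key=lambda v: -importance[v]):
        best = min(free, key=lambda wp: cost[w][wp])  # cheapest free input
        ad[w] = best
        free.remove(best)
    return ad

# Hypothetical example: source value 1 is the most important and channel
# input 2 is the least distorted, so 1 is mapped to 2 first.
importance = {1: 0.9, 2: 0.1}
cost = {1: {1: 0.5, 2: 0.1}, 2: {1: 0.2, 2: 0.4}}
ad = greedy_adaptation(2, importance, cost)
```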
IV. SIMULATION AND RESULTS
We have conducted some experiments with and without
an adaptation in two scenarios. In the ﬁrst scenario, we use
an adaptation to protect a particular value of compressed
speech data in a ﬁxed compression mode. Parameters for
the adaptation are calculated by Algorithm 1. In the second
scenario, our algorithm minimizes the rate distortion of speech
data for all codec modes with the adaptation parameters
calculated by Algorithm 2.
A. Simulation description
Our simulation is based on speech data with Adaptive Multirate (AMR) audio compression [?] as the source code and a traditional convolutional code [?] as the channel code. Raw speech data is compressed by the AMR codec at 8 different compression rates. The convolutional code with rate 1/3 is used to add redundant data to the compressed versions of the speech data. An adaptation is then constructed between the AMR source code and the convolutional code. Therefore, the important values of the AMR data are mapped to values which suffer fewer errors than others when they are transferred over a noisy channel. The performance of our proposed method is compared to the traditional UEP strategy [?].

Fig. 2. Comparison of UEP and JSCC with adaptation in one particular AMR codec mode

For the sake of clarity and brevity, we use a binary asymmetric channel; other channels will be considered in future work. We use the Mean Opinion Score (MOS) to measure the quality of the speech data after transfer over the channel. Our simulation is described in Fig. 1.
B. Simulation parameter
The blocklength of the channel code input is 8 bits. To estimate the importance of each 8-bit value in an AMR frame, we calculated the probability distribution of channel code input values over a set of 556,300 frames. Recall that an AMR frame includes 4 types of bits with different sensitivities to error: header, class A, class B and class C. We assume that the importance of each channel code input is proportional to its probability. Interestingly, the highest probability falls on the value of the 8-bit header, which is the most important part of each AMR frame. In addition, we further assume that, within each type, the rate distortion augmentation between 2 source output values is proportional to the Hamming distance between them. Consequently, a coefficient C_type multiplies the Hamming distance between 2 values of each type to calculate the rate distortion augmentation as follows:

s(w1, w2) = (C_h·p_h(w1) + C_A·p_A(w1) + C_B·p_B(w1) + C_C·p_C(w1)) · H(w1, w2)

where p_h, p_A, p_B, p_C and C_h, C_A, C_B, C_C are the probabilities of each value and the coefficients of header, class A, class B and class C values, respectively, and H(w1, w2) is the Hamming distance. The parameters of the simulation are presented in Table I.
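A sketch of this augmentation measure in code; the probability tables and coefficients below are hypothetical stand-ins for the measured values in Table I:

```python
def hamming(w1, w2, bits=8):
    """Hamming distance between two 8-bit channel-code input values."""
    return bin((w1 ^ w2) & ((1 << bits) - 1)).count("1")

def s(w1, w2, p, C):
    """Class-weighted probability of w1 times the Hamming distance to w2.
    p[cls][w] and C[cls] are hypothetical stand-ins for the measured
    per-class probabilities and coefficients."""
    weight = sum(C[cls] * p[cls].get(w1, 0.0)
                 for cls in ("header", "A", "B", "C"))
    return weight * hamming(w1, w2)

# Hypothetical numbers: w1 = 0x7C occurs mostly as a header value, so the
# header coefficient dominates; 0x7D differs from it in one bit.
p = {"header": {0x7C: 0.3}, "A": {}, "B": {}, "C": {}}
C = {"header": 4.0, "A": 3.0, "B": 2.0, "C": 1.0}
val = s(0x7C, 0x7D, p, C)
```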

##### References

Journal ArticleDOI
TL;DR: An intuitive shortcut to understanding the maximum a posteriori (MAP) decoder is presented, based on an approximation shown to correspond to a dual-maxima computation combined with forward and backward recursions of Viterbi algorithm computations.
Abstract: An intuitive shortcut to understanding the maximum a posteriori (MAP) decoder is presented based on an approximation. This is shown to correspond to a dual-maxima computation combined with forward and backward recursions of Viterbi algorithm computations. The logarithmic version of the MAP algorithm can similarly be reduced to the same form by applying the same approximation. Conversely, if a correction term is added to the approximation, the exact MAP algorithm is recovered. It is also shown how the MAP decoder memory can be drastically reduced at the cost of a modest increase in processing speed.

741 citations

### "Joint source-channel coding with ad..." refers methods in this paper

• ...Our simulation is based on speech data with the Adaptive Multirate (AMR) audio compression [8] as the source code and the traditional convolution code [9] as the channel code....


Journal ArticleDOI
TL;DR: This work develops a framework for encoding based on embedded source codes and embedded error correcting and error detecting channel codes and shows that the unequal error/erasure protection policies that maximize the average useful source coding rate allow progressive transmission with optimal unequal protection at a number of intermediate rates.
Abstract: An embedded source code allows the decoder to reconstruct the source progressively from the prefixes of a single bit stream. It is desirable to design joint source-channel coding schemes which retain the capability of progressive reconstruction in the presence of channel noise or packet loss. Here, we address the problem of joint source-channel coding of images for progressive transmission over memoryless bit error or packet erasure channels. We develop a framework for encoding based on embedded source codes and embedded error correcting and error detecting channel codes. For a target transmission rate, we provide solutions and an algorithm for the design of optimal unequal error/erasure protection. Three performance measures are considered: the average distortion, the average peak signal-to-noise ratio, and the average useful source coding rate. Under the assumption of rate compatibility of the underlying channel codes, we provide necessary conditions for progressive transmission of joint source-channel codes. We also show that the unequal error/erasure protection policies that maximize the average useful source coding rate allow progressive transmission with optimal unequal protection at a number of intermediate rates.

228 citations

### "Joint source-channel coding with ad..." refers background in this paper

• ...In published literature, mainly two different channel packet formats have been considered: variable-length channel packets with fixed-length information block (fixed-k approach) [3], fixed-length channel packets with variable length information block (fixed-n approach) [4] and variable k, n for different packets (variable-(n, k) approach) [5]–[7]....


Journal ArticleDOI
TL;DR: It is shown that even when this condition is not satisfied, symbol-by-symbol transmission is, in some cases, the best known strategy in the nonasymptotic regime.
Abstract: This paper finds new tight finite-blocklength bounds for the best achievable lossy joint source-channel code rate, and demonstrates that joint source-channel code design brings considerable performance advantage over a separate one in the nonasymptotic regime. A joint source-channel code maps a block of k source symbols onto a length-n channel codeword, and the fidelity of reproduction at the receiver end is measured by the probability e that the distortion exceeds a given threshold d. For memoryless sources and channels, it is demonstrated that the parameters of the best joint source-channel code must satisfy nC - kR(d) ≈ √(nV + k V(d)) Q-1(e), where C and V are the channel capacity and channel dispersion, respectively; R(d) and V(d) are the source rate-distortion and rate-dispersion functions; and Q is the standard Gaussian complementary cumulative distribution function. Symbol-by-symbol (uncoded) transmission is known to achieve the Shannon limit when the source and channel satisfy a certain probabilistic matching condition. In this paper, we show that even when this condition is not satisfied, symbol-by-symbol transmission is, in some cases, the best known strategy in the nonasymptotic regime.

133 citations


Journal ArticleDOI
17 Nov 2003
TL;DR: An algorithm is given that accelerates the computation of the optimal strategy of Chande and Farvardin by finding an explicit formula for the number of occurrences of the same channel code.
Abstract: Embedded image codes are very sensitive to channel noise because a single bit error can lead to an irreversible loss of synchronization between the encoder and the decoder. P.G. Sherwood and K. Zeger (see IEEE Signal Processing Lett., vol.4, p.191-8, 1997) introduced a powerful system that protects an embedded wavelet image code with a concatenation of a cyclic redundancy check coder for error detection and a rate-compatible punctured convolutional coder for error correction. For such systems, V. Chande and N. Farvardin (see IEEE J. Select. Areas Commun., vol.18, p.850-60, 2000) proposed an unequal error protection strategy that maximizes the expected number of correctly received source bits subject to a target transmission rate. Noting that an optimal strategy protects successive source blocks with the same channel code, we give an algorithm that accelerates the computation of the optimal strategy of Chande and Farvardin by finding an explicit formula for the number of occurrences of the same channel code. Experimental results with two competitive channel coders and a binary symmetric channel showed that the speed-up factor over the approach of Chande and Farvardin ranged from 2.82 to 44.76 for transmission rates between 0.25 and 2 bits per pixel.

81 citations

### "Joint source-channel coding with ad..." refers background in this paper

• ...In published literature, mainly two different channel packet formats have been considered: variable-length channel packets with fixed-length information block (fixed-k approach) [3], fixed-length channel packets with variable length information block (fixed-n approach) [4] and variable k, n for different packets (variable-(n, k) approach) [5]–[7]....
