Random Redundant Soft-In Soft-Out Decoding of
Linear Block Codes
Thomas R. Halford and Keith M. Chugg
Communication Sciences Institute
University of Southern California
Los Angeles, CA 90089-2565
Abstract: A number of authors have recently considered iterative soft-in soft-out (SISO) decoding algorithms for classical linear block codes that utilize redundant Tanner graphs. Jiang and Narayanan presented a practically realizable algorithm that applies only to cyclic codes, while Kothiyal et al. presented an algorithm that, while applicable to arbitrary linear block codes, does not imply a low-complexity implementation. This work first presents the aforementioned algorithms in a common framework and then presents a related algorithm, random redundant iterative decoding, that is both practically realizable and applicable to arbitrary linear block codes. Simulation results illustrate the successful application of the random redundant iterative decoding algorithm to the extended binary Golay code. Additionally, the proposed algorithm is shown to outperform Jiang and Narayanan's algorithm for a number of Bose-Chaudhuri-Hocquenghem (BCH) codes.
I. INTRODUCTION
Graphical models of codes, and the SISO decoding algo-
rithms implied by those models, are of great current interest.
Whereas modern codes are designed with graphical models in
mind, classical linear block codes by and large were not. It is
well known that acyclic graphical code models (e.g. trellises)
imply optimal SISO algorithms whereas models with cycles
imply suboptimal, albeit often substantially less complex,
decoding algorithms [1]. For many classical codes, the Cut-Set Bound precludes the existence of practically realizable acyclic graphical models [1]. The search for good cyclic graphical models of such codes (that is, models which imply decoding algorithms with near-optimal performance and practically realizable complexity) is thus an important open problem.
A natural starting point in the search for good graphical models of classical linear block codes is the set of Tanner graphs [2] used to decode low-density parity-check (LDPC) codes. Although Tanner graphs imply very low-complexity decoding algorithms, most classical linear block codes are defined by high-density, rather than low-density, parity-check matrices, and the performance of the decoding algorithms implied by these models is poor.
Recently, a number of authors have studied the SISO decod-
ing algorithms implied by redundant Tanner graphs [3], [4].
Section II describes these algorithms in a unified framework
while Section III presents the proposed random redundant
iterative decoding algorithm (RRD). Section IV details the
application of the proposed algorithm to the extended Golay
code and to a number of BCH codes. Concluding remarks are
given in Section V.
II. REDUNDANT TANNER GRAPHS
As a motivating example, consider two parity-check matrices for the [8, 4, 4] extended Hamming code C_8:

$$
H_1 = \begin{bmatrix}
1&1&1&1&0&0&0&0\\
0&0&1&1&1&1&0&0\\
0&0&0&0&1&1&1&1\\
0&1&1&0&0&1&1&0
\end{bmatrix},\qquad
H_2 = \begin{bmatrix}
0&1&0&1&0&1&0&1\\
1&1&0&0&0&0&1&1\\
1&0&1&0&1&0&1&0\\
0&1&0&1&1&0&1&0
\end{bmatrix} \tag{1}
$$
These parity-check matrices can be used to construct the redundant Tanner graph illustrated in Figure 1. Note that the vertices corresponding to the rows of H_1 and H_2 are labeled r_{1,i} and r_{2,i} for i = 1, 2, 3, 4, respectively. The Tanner graph illustrated in Figure 1 is redundant in that only the checks corresponding to H_1 or H_2 are needed to completely specify the code. The degree of redundancy in a redundant Tanner graph is defined as the number of parity-check matrices used to construct the model; redundant Tanner graphs can be naturally defined with any degree.
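As a concrete illustration (a sketch of ours, not taken from the paper), the two matrices of Equation (1) can be checked to define the same code; the degree-2 redundant Tanner graph is then simply the Tanner graph of the stacked matrix:

    import numpy as np

    # Parity-check matrices of Equation (1) for the [8, 4, 4] code.
    H1 = np.array([[1,1,1,1,0,0,0,0],
                   [0,0,1,1,1,1,0,0],
                   [0,0,0,0,1,1,1,1],
                   [0,1,1,0,0,1,1,0]], dtype=np.uint8)
    H2 = np.array([[0,1,0,1,0,1,0,1],
                   [1,1,0,0,0,0,1,1],
                   [1,0,1,0,1,0,1,0],
                   [0,1,0,1,1,0,1,0]], dtype=np.uint8)

    # The degree-2 redundant model is the Tanner graph of the stack.
    H_red = np.vstack([H1, H2])

    def codewords(H):
        """Brute-force the code defined by parity-check matrix H."""
        n = H.shape[1]
        words = set()
        for i in range(2**n):
            c = np.array([(i >> b) & 1 for b in range(n)], dtype=np.uint8)
            if not ((H @ c) % 2).any():
                words.add(tuple(c))
        return words

    assert codewords(H1) == codewords(H2) == codewords(H_red)
    print(len(codewords(H1)))  # 16: both matrices define the same [8, 4] code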
[Fig. 1. Redundant Tanner graph for the [8, 4, 4] extended Hamming code defined by H_1 and H_2. Variable nodes c_0, ..., c_7 are connected to check nodes r_{1,0}, ..., r_{1,3} (the rows of H_1) and r_{2,0}, ..., r_{2,3} (the rows of H_2).]
Redundant Tanner graphs imply the following standard iterative decoding algorithm (note that the message-passing schedule described below is reasonable but not unique). Consider a degree-n redundant Tanner graph. For each set of checks H_i, i = 1, ..., n, denote by M_i the vector of messages passed from checks to variables and by M'_i the vector of messages passed from variables to checks. Note that for all i = 1, ..., n, M_i is initialized to zero. (Throughout this work, decoding is assumed to be performed in the log(·) domain, i.e., either min-sum or min*-sum processing.) For i = 1, ..., n, the M'_i messages are first updated at the variable nodes using the channel observation and the M_j messages (for j ≠ i). The M_i messages are then updated at the check nodes using M'_i. This updating procedure repeats for a prescribed number of iterations, I. Note that this schedule can be modified so that a number of decoding iterations are performed on each set of checks before proceeding to the next.
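This schedule can be made concrete in a few lines (a sketch under our own conventions, not the authors' implementation): Hs holds the check sets H_1, ..., H_n as parity-check matrices, llr_in the channel log-likelihood ratios, and M[i] the check-to-variable message vectors M_i, initialized to zero as described above. Min-sum processing is used for the check update.

    import numpy as np

    def spc_minsum(v):
        """Extrinsic min-sum soft output of a single parity check."""
        out = np.empty_like(v)
        for k in range(len(v)):
            rest = np.delete(v, k)
            out[k] = np.prod(np.sign(rest)) * np.min(np.abs(rest))
        return out

    def redundant_tg_decode(Hs, llr_in, iters):
        """Flooding schedule over a degree-len(Hs) redundant Tanner graph."""
        n_sets = len(Hs)
        M = [np.zeros(H.shape) for H in Hs]  # M_i: check-to-variable messages
        for _ in range(iters):
            for i, H in enumerate(Hs):
                # M'_i: channel observation plus the M_j messages, j != i
                into_i = llr_in + sum(M[j].sum(axis=0)
                                      for j in range(n_sets) if j != i)
                for r in range(H.shape[0]):
                    idx = np.flatnonzero(H[r])
                    M[i][r, idx] = spc_minsum(into_i[idx])
        soft_out = llr_in + sum(Mi.sum(axis=0) for Mi in M)
        return soft_out, (soft_out < 0).astype(np.uint8)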
From the viewpoint of practical low-complexity implementation, the decoding algorithm described above suffers from two drawbacks:

1) For the standard message-passing rules, the intermediate message vectors M_i must be stored for i = 1, ..., n, which results in an n-fold increase in the memory required with respect to standard Tanner graph decoding.
2) Since each parity-check matrix defines a different set of checks, there is either an n-fold increase in the number of single parity-check (SPC) trellises which must be implemented, or the SPC trellises must be implemented in a reconfigurable fashion.

The first drawback may be addressed by using a massively redundant Tanner graph. If a large degree of redundancy is used, then I (the number of decoding iterations on the aggregate model) can be set to one and the intermediate message vectors M_i need not be stored. Before addressing the second drawback, permutation groups of codes must first be defined and reviewed.
Let C be a block code of length n. The permutation group of C, Per(C), is the set of permutations of coordinate places which send C onto itself. By definition, Per(C) is a subgroup of the symmetric group S_n of degree n. (Throughout this work, permutations of n coordinate places are described by n-tuples: for example, applying the permutation (2, 5, 1, 3, 4, 7, 6) to a 7-bit codeword (c_1, c_2, c_3, c_4, c_5, c_6, c_7) yields the permuted codeword (c_3, c_1, c_4, c_5, c_2, c_7, c_6). The identity permutation is denoted id and the inverse of a permutation β is denoted β^{-1}.) Returning to the [8, 4, 4] extended Hamming code, it can be shown that [5]:

$$\sigma = (6, 4, 2, 8, 1, 7, 5, 3) \in \mathrm{Per}(C_8) \tag{2}$$

Applying σ to the columns of H_1 yields H_2.
From the above example, it is clear that redundant parity checks for a given code, C, can be generated by applying permutations drawn from Per(C) to the columns of some initial parity-check matrix H. Observe that decoding with soft-input vector SI on TG(βH) (where β ∈ Per(C)) is equivalent to decoding with soft-input vector β^{-1}SI on TG(H). It is this observation that allows for efficient implementation of redundant Tanner graph decoding: provided that the redundant parity checks are column-permuted versions of some base matrix H, redundant Tanner graph decoding can be implemented by permuting soft-information vectors and decoding with a constant set of constraints.
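In code, this trick amounts to permuting vectors rather than rebuilding graphs. The following sketch uses the n-tuple convention of this paper (the helper names are ours):

    import numpy as np

    def apply_perm(perm, v):
        """n-tuple convention: coordinate i of v is sent to position perm[i]."""
        out = np.empty_like(v)
        for i, p in enumerate(perm):
            out[p - 1] = v[i]
        return out

    def inverse(perm):
        inv = [0] * len(perm)
        for i, p in enumerate(perm):
            inv[p - 1] = i + 1
        return tuple(inv)

    def compose(a, b):
        """Apply permutation b first, then a (used for the cumulative Theta below)."""
        return tuple(a[b[i] - 1] for i in range(len(b)))

    sigma = (6, 4, 2, 8, 1, 7, 5, 3)  # the element of Per(C_8) in Equation (2)
    SI = np.array([1.2, -0.3, 0.8, 2.1, -1.7, 0.4, -0.9, 0.6])

    # Decoding SI on TG(sigma H) is equivalent to decoding the permuted
    # soft input on TG(H) and permuting the result back afterwards:
    SI_base = apply_perm(inverse(sigma), SI)
    assert np.allclose(apply_perm(sigma, SI_base), SI)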
From the above discussion, it is apparent that Kothiyal et al.'s adaptive belief propagation (ABP) algorithm [4] and Jiang
and Narayanan’s stochastic shifting based iterative decoding
(SSID) algorithm [3] are redundant Tanner graph decoding
algorithms. Kothiyal et al.'s scheme adaptively chooses new
parity-check sets based on soft information reliability. Al-
though the ABP algorithm can be applied to arbitrary block
codes, it does not imply a practical low-complexity imple-
mentation because the check sets change with every iteration.
Furthermore, the ABP algorithm requires computationally ex-
pensive Gaussian elimination of potentially large parity-check
matrices at every iteration. Jiang and Narayanan’s scheme is
an example of a practical, low-complexity redundant Tanner
graph decoding algorithm for cyclic codes which uses the
permutation group approach described above. The random
redundant decoding algorithm proposed in this work is, in fact,
an extension of Jiang and Narayanan’s algorithm to arbitrary
block codes.
III. PROPOSED DECODING ALGORITHM
A. The Main Algorithm
Algorithm 1 describes the proposed random redundant
iterative decoding (RRD) algorithm. The inner for-loop of
Algorithm 1 describes an efficient redundant Tanner graph
decoding algorithm with the addition of a damping coefficient
α. The outer for-loop of Algorithm 1 iterates over different
values of α. By varying the damping coefficient α, the
algorithm avoids local minima in the solution space. Many
authors have considered the introduction of such damping
coefficients in iterative soft decoding algorithms to achieve
this end (see, for example, [6]). For practical implementations
where a large number of outer iterations is undesirable from
a time complexity standpoint, a single damping coefficient (or
a small set of coefficients) could be used depending on the
operating noise power.
Algorithm 1 takes as input a received soft-information vector, SI, a parity-check matrix for the code, H, and four parameters:

1) α_0: the initial damping coefficient.
2) I_1: the number of Tanner graph decoding iterations to perform per inner iteration.
3) I_2: the maximum number of inner iterations to perform per outer iteration. Each inner iteration considers a different random permutation of the codeword elements.
4) I_3: the maximum number of outer iterations to perform. Each outer iteration uses a different damping coefficient.
Let s be the sum of the input soft information, SI, and the output soft information produced by all previous inner iterations. During the i_2-th inner iteration, I_1 Tanner graph decoding iterations are performed on TG(H) with damping coefficient α and soft input s, producing the soft-output vector, s', and hard decision, c'. The cumulative soft-information vector, s, is then updated to include s'. The inner iteration concludes by applying a random permutation, θ, from the permutation group of the code to s. Decoding concludes when either a valid codeword is returned by the Tanner graph decoding step or when a maximum number of iterations is reached. Before returning the final soft output, SO, and hard decision, HD, the random permutations are undone by applying the inverse of the product of the permutations that were applied to s.
Input: Length-n soft-input vector SI;
       (n − k) × n binary parity-check matrix H;
       parameters I_1, I_2, I_3, α_0.
Output: Length-n soft-output vector SO;
        length-n hard-decision vector HD.

α ← α_0;
for 1 ≤ i_3 ≤ I_3 do
    Θ ← id;
    s ← SI;
    for 1 ≤ i_2 ≤ I_2 do
        Perform I_1 decoding iterations of s on TG(H) with damping
        coefficient α; place the soft output in s' and the resulting
        hard decision in c';
        s ← s + s';
        if Hc' = 0 then
            Apply Θ^{-1} to s and c';
            SO ← s − SI;
            HD ← c';
            return SO and HD;
        end
        θ ← random element of Per(C);
        Apply θ to s;
        Θ ← θΘ;
    end
    α ← α_0 + (1 − α_0) · i_3/(I_3 − 1);
end

Algorithm 1: Random Redundant Decoding.
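A compact Python rendering of Algorithm 1 follows (a hedged sketch, not a reference implementation): tanner_decode stands for I_1 damped decoding iterations on TG(H), for instance the min-sum routine sketched in Section II; rand_perm stands for the random-element generator of Algorithm 3 below; apply_perm, inverse, and compose are the helpers sketched earlier; SI is assumed to be a numpy vector of log-likelihood ratios.

    def rrd(SI, H, tanner_decode, rand_perm, alpha0=0.08, I1=2, I2=30, I3=20):
        n = len(SI)
        alpha = alpha0
        for i3 in range(1, I3 + 1):
            Theta = tuple(range(1, n + 1))    # identity permutation
            s = SI.copy()
            for _ in range(I2):
                s_out, c = tanner_decode(H, s, I1, alpha)
                s = s + s_out
                if not ((H @ c) % 2).any():   # Hc = 0: valid codeword found
                    s = apply_perm(inverse(Theta), s)
                    c = apply_perm(inverse(Theta), c)
                    return s - SI, c          # SO and HD
                theta = rand_perm()           # random element of Per(C)
                s = apply_perm(theta, s)
                Theta = compose(theta, Theta)
            alpha = alpha0 + (1 - alpha0) * i3 / (I3 - 1)
        # Maximum iterations reached: undo the permutations before returning.
        s = apply_perm(inverse(Theta), s)
        SO = s - SI
        return SO, (SO < 0).astype(int)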
The following subsections describe supporting algorithms
required by random redundant decoding.
B. Initial Parity-Check Matrix Selection

As will be demonstrated empirically in Section IV, the performance of the proposed decoding algorithm depends heavily on the choice of parity-check matrix (and thus the Tanner graph) used to represent the code. It is widely accepted that the performance of the decoding algorithms implied by Tanner graphs is adversely affected by short cycles (see, for example, [7]). Algorithm 2 searches for a suitable parity-check matrix by greedily performing row operations on an input binary parity-check matrix H in order to reduce the number of short cycles contained in the Tanner graph defined by H (denoted TG(H)). Note that after every row operation, the updated parity-check matrix H' defines the same code as H. Note also that the operation of Algorithm 2 requires that short cycles in bipartite graphs can be counted efficiently; such an algorithm was described in [7].
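Counting the 4-cycles alone is straightforward and illustrates the primitive that Algorithm 2 (shown next) relies on; the sketch below is a simple stand-in for, not a reimplementation of, the general g- and (g + 2)-cycle counting of [7]. Two checks that share t variable positions contribute t(t − 1)/2 4-cycles.

    import numpy as np
    from itertools import combinations

    def count_4cycles(H):
        """4-cycles in TG(H): one per pair of rows sharing a pair of columns."""
        total = 0
        for r1, r2 in combinations(range(H.shape[0]), 2):
            t = int(np.sum(H[r1] & H[r2]))   # shared variable positions
            total += t * (t - 1) // 2
        return total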
Input: (n − k) × n binary parity-check matrix H.
Output: (n − k) × n binary parity-check matrix H'.

H' ← H; r*_1 ← 1; r*_2 ← 1; g* ← girth of TG(H);
N*_{g*} ← number of g*-cycles in TG(H');
N*_{g*+2} ← number of (g* + 2)-cycles in TG(H');
repeat
    if r*_1 ≠ r*_2 then replace row r*_2 in H' with the binary sum
        of rows r*_1 and r*_2;
    r*_1 ← 0; r*_2 ← 0;
    for r_1, r_2 = 1, ..., n − k, r_2 ≠ r_1 do
        Replace row r_2 in H' with the binary sum of rows r_1 and r_2;
        g ← girth of TG(H');
        N_g ← number of g-cycles in TG(H');
        N_{g+2} ← number of (g + 2)-cycles in TG(H');
        if g < g* then
            g* ← g; r*_1 ← r_1; r*_2 ← r_2; N*_g ← N_g; N*_{g+2} ← N_{g+2};
        else if N_g < N*_g then
            r*_1 ← r_1; r*_2 ← r_2; N*_g ← N_g; N*_{g+2} ← N_{g+2};
        else if N_g = N*_g then
            if N_{g+2} < N*_{g+2} then r*_1 ← r_1; r*_2 ← r_2;
                N*_{g+2} ← N_{g+2};
        end
        Undo the row replacement;
    end
until r*_1 = 0 and r*_2 = 0;
return H'

Algorithm 2: Tanner graph cycle reduction.

C. Generation of Random Permutation Group Elements

Algorithm 1 requires the efficient generation of random elements of the permutation group of a code.
The algorithm presented below, the product-replacement algorithm, generates random elements of an arbitrary finite group and is due to Celler et al. [8].

Let G be a finite group with generating set

$$X = \{x_1, x_2, \ldots, x_k\} \tag{3}$$

That is, every element g ∈ G can be expressed as a finite product

$$g = x_{i_1}^{n_1} x_{i_2}^{n_2} \cdots x_{i_t}^{n_t} \tag{4}$$

where x_{i_j} ∈ X and n_j ∈ ℕ (the set of natural numbers) for all j. Let N > k be an integer.

Celler et al.'s algorithm constructs a vector S of length N containing all of the elements of X, with repeats. Algorithm 3 describes the basic operation of Celler et al.'s algorithm. Generation of random group elements is initialized by executing Algorithm 3 K times. After initialization, successive executions of Algorithm 3 yield random elements of G. Note that each execution of Algorithm 3 requires only one group multiplication and is thus efficient (permutation multiplication is particularly easy). Also note that after every execution, the group elements contained in S generate G. Celler et al. found that setting N = max(2k + 1, 10) and K = 60 provides near-uniform random generation of group elements in practice.

Input: Length-N vector of group elements S.
Output: Random group element g; updated group element vector S.

(i, j) ← pair of random integers i ≠ j ∈ {1, 2, ..., N};
S[i] ← S[j] · S[i];
g ← S[i];
return g;

Algorithm 3: Random group element generation.
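The scheme is easy to realize for permutation groups, since a group multiplication is a single composition of n-tuples. A sketch of ours (compose is repeated from the earlier helper sketch; the N and K defaults follow Celler et al.'s recommendation quoted above):

    import random

    def compose(a, b):
        """Apply permutation b first, then a, in the n-tuple convention."""
        return tuple(a[b[i] - 1] for i in range(len(b)))

    def product_replacement_step(S):
        """One execution of Algorithm 3: a single group multiplication."""
        i, j = random.sample(range(len(S)), 2)   # random pair with i != j
        S[i] = compose(S[j], S[i])               # S[i] <- S[j] S[i]
        return S[i]

    def init_generator_state(generators, K=60):
        """Fill S with the generators (with repeats) and stir it K times."""
        k = len(generators)
        N = max(2 * k + 1, 10)
        S = [generators[i % k] for i in range(N)]
        for _ in range(K):
            product_replacement_step(S)
        return S

After initialization, each further call to product_replacement_step(S) returns an approximately uniform random element of the generated group; this is the rand_perm primitive assumed by the RRD sketch above.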
IV. SIMULATION RESULTS AND DISCUSSION
A. The Extended Golay Code
Consider the extended Golay code, C_G, defined by the parity-check matrix of Equation (5):

$$
H_G = \begin{bmatrix}
1&0&0&1&1&0&1&0&1&1&1&1&0&0&0&0&0&1&0&1&0&0&1&1\\
1&1&0&0&1&1&0&1&0&1&1&1&1&0&0&0&0&0&1&0&1&0&0&1\\
0&1&1&0&0&1&1&0&1&0&1&1&1&1&0&0&0&0&0&1&0&1&0&1\\
0&0&1&1&0&0&1&1&0&1&0&1&1&1&1&0&0&0&0&0&1&0&1&1\\
1&0&0&1&1&0&0&1&1&0&1&0&1&1&1&1&0&0&0&0&0&1&0&1\\
0&1&0&0&1&1&0&0&1&1&0&1&0&1&1&1&1&0&0&0&0&0&1&1\\
1&0&1&0&0&1&1&0&0&1&1&0&1&0&1&1&1&1&0&0&0&0&0&1\\
0&1&0&1&0&0&1&1&0&0&1&1&0&1&0&1&1&1&1&0&0&0&0&1\\
0&0&1&0&1&0&0&1&1&0&0&1&1&0&1&0&1&1&1&1&0&0&0&1\\
0&0&0&1&0&1&0&0&1&1&0&0&1&1&0&1&0&1&1&1&1&0&0&1\\
0&0&0&0&1&0&1&0&0&1&1&0&0&1&1&0&1&0&1&1&1&1&0&1\\
1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1
\end{bmatrix} \tag{5}
$$
The permutation group of C_G is generated by four permutations (see Ch. 20 of [5]).
The parity-check matrix of Equation (5) contains many short cycles. The application of Algorithm 2 to this matrix yields:

$$
H'_G = \begin{bmatrix}
1&0&0&1&1&0&1&0&1&1&1&1&0&0&0&0&0&1&0&1&0&0&1&1\\
0&1&0&0&1&0&0&0&1&1&0&0&0&1&0&0&0&0&1&0&1&0&0&1\\
1&1&1&0&0&0&1&1&0&0&0&0&0&0&0&0&0&0&0&1&0&1&0&1\\
0&0&0&1&0&0&1&0&0&0&0&1&1&0&1&0&1&0&0&0&0&1&1&0\\
1&0&0&0&0&1&0&1&1&0&1&1&1&1&0&0&0&0&0&0&0&0&0&0\\
0&0&0&0&0&1&0&0&0&0&0&1&0&0&1&1&1&0&1&0&1&0&1&0\\
1&0&1&0&0&1&1&0&0&1&1&0&1&0&1&1&1&1&0&0&0&0&0&1\\
0&0&1&1&0&1&0&1&1&0&0&0&1&0&0&1&1&1&1&1&0&1&0&0\\
0&0&0&1&1&1&0&0&0&0&0&1&0&0&1&1&0&0&0&0&0&1&0&1\\
0&0&1&0&0&0&0&1&0&1&0&0&0&1&0&0&1&0&0&0&1&1&0&1\\
0&0&1&0&1&0&1&1&0&0&1&0&0&0&1&0&0&0&1&1&0&0&0&0\\
1&1&0&1&0&1&1&0&0&1&1&0&0&1&0&1&0&0&0&0&1&1&1&0
\end{bmatrix} \tag{6}
$$
Whereas the Tanner graph defined by Equation (5) contains 1551 4-cycles and 65632 6-cycles, the Tanner graph defined by Equation (6) contains 295 4-cycles and 6204 6-cycles. Note that any Tanner graph representing C_G necessarily contains 4-cycles [9].
Figure 2 compares the performance of five decoding algorithms for C_G: optimal SISO decoding via a trellis; standard iterative decoding using the Tanner graphs implied by H and H' (labeled TG(H) and TG(H'), respectively); and the proposed random redundant iterative decoding (RRD) algorithm using H and H' as input parity-check matrices (labeled RRD(H) and RRD(H'), respectively).

[Fig. 2. Bit error rate performance comparison of different decoding algorithms for the [24, 12, 8] extended Golay code. Curves: TG(H), TG(H'), RRD(H), RRD(H'), and Trellis; bit error rate versus Eb/No (dB).]

One hundred decoding iterations were performed for the Tanner graph decoders, while the RRD algorithms both use the input parameter set α_0 = 0.08, I_1 = 2, I_2 = 30, and I_3 = 20. Flooding message-passing schedules were used for all Tanner graph decoders, and binary antipodal signaling over additive white Gaussian noise (AWGN) channels is assumed throughout this work.
Note first in Figure 2 that the performance of the RRD algorithm is highly sensitive to the choice of parity-check matrix: the decoder using H' outperforms the decoder using H by approximately 1.75 dB at a bit error rate (BER) of 10^{-4}. It is known that any optimal SISO decoder for the extended Golay code must contain hidden variables with alphabet size at least 256 [10]. Furthermore, there exists a well-known tail-biting trellis for this code which contains 16-ary hidden variables and has near-optimal performance [11]. The RRD algorithm using H', which contains only binary variables, performs approximately 0.3 dB worse than optimal SISO decoding at a bit error rate of 10^{-5}.
B. The [31, 21, 5] and [63, 39, 9] BCH Codes

Following the notation of Lu and Welch [12], define G_{1,m} as the permutation group generated by m elements of the form

$$\sigma_m^{(j)} = \left(2,\ j + 2,\ 2j + 2,\ \ldots,\ (2^m - 2)j + 2\right) \tag{7}$$

for 1 ≤ j ≤ m (where each permutation element is taken modulo 2^m). The full permutation group of the [31, 21, 5] BCH code, C_5, is G_{1,5} [12]. The subgroup of cyclic permutations of C_5 is generated by σ_5^{(1)} alone and is denoted C_31. Similarly, the full permutation group of the [63, 39, 9] BCH code, C_6, is G_{1,6}, while the subgroup of cyclic permutations of C_6 is generated by σ_6^{(1)} alone and is denoted C_63.

[Fig. 3. Bit error rate performance comparison of different decoding algorithms for the [31, 21, 5] and [63, 39, 9] BCH codes. Curves: (31,21) HDD, (31,21) Cyclic-RRD, (31,21) RRD, (31,21) Trellis, (63,39) HDD, (63,39) Cyclic-RRD, and (63,39) RRD; bit error rate versus Eb/No (dB).]
Figure 3 compares the performance of four decoding algorithms for C_5: optimal SISO decoding via a trellis; random redundant iterative decoding using the full permutation group (labeled RRD); random redundant iterative decoding using only permutations drawn randomly from C_31 (labeled cyclic-RRD); and algebraic hard-in hard-out (HIHO) decoding. Figure 3 also illustrates the performance of the analogous algorithms for C_6 (with the exception of trellis decoding, which is prohibitively complex for this code). Note that the cyclic-RRD algorithms are equivalent to Jiang and Narayanan's algorithm [3]. The C_5 RRD algorithms both use the input parameter set α_0 = 0.08, I_1 = 2, I_2 = 30, and I_3 = 20. The C_6 RRD algorithms both use the input parameter set α_0 = 0.08, I_1 = 2, I_2 = 50, and I_3 = 20.
It is known that any optimal SISO decoder for the [31, 21, 5] BCH code must contain hidden variables with alphabet size at least 1024 [13]. Furthermore, it is known that under the standard cyclic bit ordering, the minimal tail-biting trellis for this code must also contain 1024-ary hidden variables [14]. It is thus remarkable that the RRD and cyclic-RRD decoders, which contain only binary variables, perform only 0.25 and 0.5 dB worse than the optimal SISO decoder at a BER of 10^{-5}. Note that the RRD and cyclic-RRD algorithms outperform algebraic HIHO decoding by 1.5 and 1.75 dB at a BER of 10^{-5} for this code; the [63, 39, 9] RRD and cyclic-RRD decoders outperform HIHO decoding by similar margins.
The RRD algorithms outperform the corresponding cyclic-RRD algorithms by approximately 0.25 dB at a BER of 10^{-5} for both C_5 and C_6. The RRD algorithms consider all possible permutations in Per(C_5) and Per(C_6), rather than the subset of permutations corresponding to cyclic shifts, and are in some sense more random than the cyclic-RRD algorithms. For certain codes, subgroups of the permutation group may be easier to specify than the full permutation group. For example, the permutation group of a length-2^m Reed-Muller code is the general affine group GA(m), whose subgroup GL(m) (the general linear group) is generated by two elements. The results of Figure 3 indicate that random redundant iterative decoding algorithms which use such easily specified subgroups may have performance characteristics that, although worse than those of RRD algorithms using the full permutation group, are nonetheless interesting.
V. CONCLUSION

This work introduced random redundant iterative decoding (RRD), which can be viewed as an extension of Jiang and Narayanan's algorithm [3] for cyclic codes to arbitrary block codes. This work also demonstrated how a number of recently proposed iterative SISO decoding algorithms for block codes belong to a common class of algorithms defined by redundant Tanner graphs. It was shown that RRD is an attractive example of a redundant Tanner graph algorithm, both in terms of its complexity and its applicability to arbitrary linear block codes. Furthermore, it was demonstrated empirically that the RRD algorithm can outperform Jiang and Narayanan's algorithm when applied to cyclic codes.
REFERENCES

[1] N. Wiberg, "Codes and decoding on general graphs," Ph.D. dissertation, Linköping University, Linköping, Sweden, 1996.
[2] R. M. Tanner, "A recursive approach to low complexity codes," IEEE Trans. Information Theory, vol. IT-27, pp. 533–547, September 1981.
[3] J. Jiang and K. R. Narayanan, "Iterative soft decision decoding of Reed-Solomon codes," IEEE Communications Letters, vol. 8, no. 4, pp. 244–246, April 2004.
[4] A. Kothiyal, O. Y. Takeshita, W. Jin, and M. Fossorier, "Iterative reliability-based decoding of linear block codes with adaptive belief propagation," IEEE Communications Letters, vol. 9, no. 12, pp. 1067–1069, December 2005.
[5] F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes. North-Holland, 1978.
[6] J. Chen and M. P. C. Fossorier, "Near optimum universal belief propagation based decoding of low-density parity check codes," IEEE Trans. Communications, vol. 50, no. 3, pp. 406–414, March 2002.
[7] T. R. Halford and K. M. Chugg, "An algorithm for counting short cycles in bipartite graphs," IEEE Trans. Information Theory, vol. 52, no. 1, pp. 287–292, January 2006.
[8] F. Celler, C. R. Leedham-Green, S. H. Murray, A. C. Niemeyer, and E. A. O'Brien, "Generating random elements of a finite group," Communications in Algebra, vol. 23, pp. 4931–4948, 1995.
[9] T. R. Halford, A. J. Grant, and K. M. Chugg, "Which codes have 4-cycle-free Tanner graphs?" January 2006, submitted to ISIT 2006.
[10] D. J. Muder, "Minimal trellises for block codes," IEEE Trans. Information Theory, vol. 34, no. 5, pp. 1049–1053, September 1988.
[11] A. R. Calderbank, G. D. Forney, Jr., and A. Vardy, "Minimal tail-biting trellises: the Golay code and more," IEEE Trans. Information Theory, vol. 45, no. 5, pp. 1435–1455, July 1999.
[12] C.-C. Lu and L. R. Welch, "On automorphism groups of binary primitive BCH codes," in Proc. IEEE Symposium on Information Theory, Trondheim, Norway, June 1994, p. 51.
[13] A. Lafourcade and A. Vardy, "Lower bounds on trellis complexity of block codes," IEEE Trans. Information Theory, vol. 41, no. 6, pp. 1938–1954, November 1995.
[14] P. Shankar, P. N. A. Kumar, H. Singh, and B. S. Rajan, "Minimal tail-biting trellises for certain cyclic block codes are easy to construct," Lecture Notes in Computer Science, vol. 2076, pp. 627–638, 2001.