
Dimension/Length Profiles and Trellis Complexity of Linear Block Codes

G. David Forney, Jr., Fellow, IEEE

IEEE Transactions on Information Theory, Vol. 40, No. 6, pp. 1741-1752, November 1994

Abstract: This semi-tutorial paper discusses the connections between the dimension/length profile (DLP) of a linear code, which is essentially the same as its "generalized Hamming weight hierarchy" [1], and the complexity of its minimal trellis diagram. These connections are close and deep. DLP duality is closely related to trellis duality. The DLP of a code gives tight bounds on its state and branch complexity profiles under any coordinate ordering; these bounds can often be met. A maximum distance separable (MDS) code is characterized by a certain extremal DLP, from which the main properties of MDS codes are easily derived. The simplicity and generality of these interrelationships are emphasized.

Index Terms: Dimension/length profiles, generalized Hamming weights, support weights, trellis diagrams, state complexity, linear codes.

I. INTRODUCTION

The dimension/length profile (DLP) of an (n, k) linear block code C is the sequence k(C) whose component k_i(C) is the maximum dimension of any subcode (shortened code) of C whose effective length (support size) is less than or equal to i, 0 ≤ i ≤ n. The length/dimension profile (LDP) is the sequence m(C) whose component m_j(C) is the minimum effective length of any subcode of C whose dimension is j, 0 ≤ j ≤ k. Either of these two profiles contains equivalent information about C.

Considerable interest in LDP's has been stimulated by a paper by Wei [1]. Wei calls the LDP the "generalized Hamming weight (GHW) hierarchy" of C, since m_1(C) is the minimum Hamming distance of C. In fact, we shall see that the LDP has more to do with length and dimension than with distance, which accounts for our terminology. Also, for us the DLP is somewhat more useful than the LDP.

The DLP idea is actually much older, going back at least to Helleseth et al. [2], where GHW's appear as the minimum nonzero elements of "support weight distributions" [3]. Simonis coined the term "effective length" [4]. It has found diverse uses; Wei's application was to the Type II wire-tap channel [1].

Manuscript received October 29, 1993. This paper was presented in part at the IEEE International Symposium on Information Theory, Trondheim, Norway, June 1994. The author is with Motorola, Inc., 20 Cabot Boulevard, Mansfield, MA 02048. IEEE Log Number 9406119.

Quite independently, a sizable literature has developed on trellis diagrams of linear codes. A linear code C has a well-defined minimal trellis diagram, given a definite ordering of its coordinates [5], [6]. Theoretically, the "complexity" of a linear code may be defined algebraically by the complexity of its minimal trellis diagram. Practically, trellis diagrams often lead to efficient trellis-based decoding algorithms.

Kasami et al. [7] connected these two topics by using Wei's results to prove that the standard coordinate ordering of a Reed-Muller code is, in fact, the ordering that minimizes the size of each state space in its minimal trellis diagram. Subsequently, Vardy and Be'ery [8] used GHW results to develop lower bounds on the state complexity of BCH codes. Ytrehus [9] has sharpened some of these bounds.

The main purpose of this paper is to show that the connections between the DLP concept and trellis complexity are close and deep.

Given a coordinate ordering, the state and branch complexity profiles of a linear code C are simple functions of the corresponding ordered DLP. Consequently, bounds valid for any ordering can be derived from the unordered DLP. An efficient ordering is one for which these bounds are met with equality. For many codes, efficient orderings are known.

There is a close relation between DLP's and MDS (maximum distance separable) codes, as has been recognized by previous authors. MDS codes may be characterized as those codes whose DLP meets a certain outer bound.

Wei [1] proved a striking duality theorem, relating the LDP of a linear code to the LDP of its dual code. We give a number of such duality relations. Probably the most notable is a simple proof, using DLP duality, that the state complexity profile of a linear code and that of its dual are identical [5]. Also, an ordering is efficient for a code if and only if it is efficient for its dual [7].

This paper is semi-tutorial. Most of its results are not new. However, we believe that the simplicity and generality of these ideas are noteworthy and important, and that this is not sufficiently apparent in the prior literature. Therefore we have made an effort to present these ideas as simply as possible, and to make clear their interrelationships. We feel that none of this material would be out of place in a first course in coding theory.

Section II develops basic duality relationships between the subcodes (shortened codes) and projections (punctured codes) of a linear code and its dual.

Section III introduces the DLP, and develops its basic properties. Elementary proofs are given of the fundamental results of Wei [1], including his duality theorem. A simple bound is given on the DLP of a linear code C, which is met if and only if C is MDS. From this result follow the main properties of MDS codes.

Section IV introduces the DLP corresponding to a definite coordinate ordering. It is shown that the state and branch complexity profiles of C are simple functions of this ordered DLP, which leads to DLP bounds on state and branch complexity. These bounds can be computed using trivial graphical manipulations. In turn, known distance bounds on linear codes yield DLP bounds. A number of examples show that these bounds are often met with equality. This leads to such results as: the state complexity profile of the Golay code with its standard coordinate ordering [5] is optimum componentwise. A code is MDS if and only if its state complexity profile is invariant under all coordinate permutations.

In Section V, we discuss how sectionalization can reduce apparent state complexity, and show that such a reduction does not occur with branch complexity. We propose that branch complexity ought to be regarded as more significant than state complexity. A sectionalization is called "efficient" if the maximum branch complexity is as small as possible; examples of efficient sectionalizations are given. Section V-C discusses an alternative definition of branch complexity.

Section VI is a brief conclusion, with suggestions for further research.

II. PRELIMINARIES: PROJECTIONS, SUBCODES, AND DUALITY

An (n, k) linear code C over a field F is a k-dimensional subspace of the n-dimensional vector space F^n. The parameters n and k are the length and dimension of C. The difference r = n - k is the redundancy of C. We sometimes denote the length, dimension, and redundancy of C by n(C), k(C), and r(C), respectively. Both k and r lie in the same range, namely [0, n]. The unique (n, n) code over F is F^n. The unique (n, 0) code is {0}.

Let I be an index set for F^n. Then an element of F^n is an n-tuple f = {f_i, i ∈ I}. Let J ⊆ I be any subset of I; the complementary subset will be denoted by I - J.

Projections and subcodes defined on subsets J ⊆ I will be our fundamental tools. We shall see that, in many respects, they are dual concepts.

The projection P_J(C) of C onto J is the image of C under the projection operator P_J, namely, the map that zeroes out components outside of J:

    P_J(f) = {f_i, i ∈ J; 0, i ∈ I - J}.

The restriction of a projection P_J(C) to J is sometimes called a "punctured code" of C.

The effective length of a code C is the size of its support [4], defined as

    supp(C) ≜ {i ∈ I : P_{{i}}(C) ≠ {0}}.

We denote the effective length of C by m(C). Clearly, m(C) ≤ n(C).

A projection P_J(C) is a linear code with length n[P_J(C)] = n(C) = |I|, effective length m[P_J(C)] ≤ |J|, and dimension k[P_J(C)] ≤ k.

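To make these definitions concrete, here is a minimal Python sketch (illustrative only; the small length-6 code and its two generators are hypothetical, not an example from the paper) that computes the support, the effective length, and a projection P_J of a code given by its codewords.

    # Illustrative sketch of P_J, supp(C), and m(C) for a toy binary code of length 6.
    def project(word, J):
        """P_J: zero out the components outside of J (0-based positions)."""
        return tuple(x if i in J else 0 for i, x in enumerate(word))

    def support(code):
        """supp(C): positions where some codeword is nonzero."""
        return {i for word in code for i, x in enumerate(word) if x != 0}

    # A hypothetical length-6 binary code: all F_2 combinations of two generators.
    g1, g2 = (1, 1, 0, 1, 0, 0), (0, 0, 0, 1, 1, 0)
    code = {tuple((a * x + b * y) % 2 for x, y in zip(g1, g2))
            for a in (0, 1) for b in (0, 1)}

    print(sorted(support(code)))           # [0, 1, 3, 4]
    print(len(support(code)))              # effective length m(C) = 4, although n(C) = 6
    J = {0, 1, 2}
    print({project(w, J) for w in code})   # P_J(C): components outside J zeroed
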
The subcode C_J of C is defined as the set of all code sequences whose components are all zero outside of J:

    C_J ≜ {c ∈ C : c_i = 0, i ∈ I - J}.

The restriction of a subcode C_J to J is sometimes called a "shortened code" of C.

Alternatively, C_J may be defined as the intersection of C and P_J(F^n). Thus C_J is a subcode not only of C, but also of P_J(C).

A subcode C_J is a linear code with length n(C_J) = n(C) = |I|, effective length m(C_J) ≤ m[P_J(C)] ≤ |J|, and dimension k(C_J) ≤ k[P_J(C)] ≤ k.

The following relation between the dimensions of a projection P_J(C) and the subcode C_{I-J} is a first indication of the duality of projections and subcodes.

Lemma 1 (first duality lemma): If C is an (n, k) linear code and J ⊆ I, then

    k[P_J(C)] + k(C_{I-J}) = k.

Proof: The projection map P_J : C → P_J(C) is a homomorphism with image P_J(C) and kernel C_{I-J}, so P_J(C) ≅ C/C_{I-J}. □

Corollary (dimension lower bound): If C is an (n, k) linear code and J ⊆ I, then

    k(C_J) ≥ k - |I - J|.

Proof: By Lemma 1, k(C_J) = k - k[P_{I-J}(C)] ≥ k - |I - J|, since k[P_{I-J}(C)] ≤ |I - J|. □

Thus shortening a code in |I - J| places can reduce its dimension by at most |I - J|.

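Lemma 1 and its corollary are easy to check mechanically on a small code. The following Python sketch (illustrative, not from the paper) does so for the (8,4) extended Hamming code, using one standard choice of generator matrix; k[P_J(C)] is computed as a GF(2) rank and k(C_J) by directly counting the codewords supported inside J, so the check is not circular.

    # Check Lemma 1, k[P_J(C)] + k(C_{I-J}) = k, on the (8,4) extended Hamming code.
    # The generator matrix G below is one standard choice (an assumption of this
    # sketch); the check itself works for any binary generator matrix.
    from itertools import combinations, product

    n, k = 8, 4
    I = set(range(n))
    G = [(1, 1, 1, 1, 0, 0, 0, 0),
         (0, 0, 1, 1, 1, 1, 0, 0),
         (0, 0, 0, 0, 1, 1, 1, 1),
         (0, 1, 0, 1, 0, 1, 0, 1)]

    # All 2^k codewords of C.
    codewords = [tuple(sum(a * g[i] for a, g in zip(coeffs, G)) % 2 for i in range(n))
                 for coeffs in product((0, 1), repeat=k)]

    def gf2_rank(vectors):
        """Rank over GF(2): maintain a basis indexed by leading-bit position."""
        basis = {}                                  # leading bit -> basis vector (as int)
        for v in vectors:
            x = int("".join(map(str, v)), 2)
            while x:
                lead = x.bit_length() - 1
                if lead not in basis:
                    basis[lead] = x
                    break
                x ^= basis[lead]
        return len(basis)

    def dim_projection(J):
        """k[P_J(C)]: rank of the codewords with components outside J zeroed."""
        return gf2_rank([tuple(x if i in J else 0 for i, x in enumerate(c)) for c in codewords])

    def dim_subcode(J):
        """k(C_J): count codewords supported inside J; the count is 2^dim."""
        count = sum(1 for c in codewords if all(x == 0 for i, x in enumerate(c) if i not in J))
        return count.bit_length() - 1

    for size in range(n + 1):
        for J in map(set, combinations(range(n), size)):
            assert dim_projection(J) + dim_subcode(I - J) == k    # Lemma 1
            assert dim_subcode(J) >= k - len(I - J)               # corollary (lower bound)
    print("Lemma 1 and its corollary hold for all 256 subsets J.")
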
The dual code C⊥ to a linear code C ⊆ F^n is the set of all elements of F^n that are orthogonal to all elements of C, under the usual inner product over F. If C is an (n, k) code, then C⊥ is an (n, r) linear code, where r = n - k. The dual to C⊥ is C. C is self-dual if C = C⊥, which is possible only if k = r.

The following lemma shows that projections of C⊥ are dual to subcodes of C.

Lemma 2 (second duality lemma): If C is an (n, k) linear code and J ⊆ I, then, as subspaces of P_J(F^n), C_J and P_J(C⊥) are dual codes. Consequently,

    k[P_J(C⊥)] + k(C_J) = |J|.

Proof: C_J is the intersection of C and P_J(F^n). Obviously, if f ∈ P_J(F^n) and g ∈ F^n, then f and g are orthogonal if and only if f and P_J(g) are orthogonal. Every element of C_J is orthogonal to every element of C⊥, and thus to every element of P_J(C⊥). Conversely, if f ∈ P_J(F^n) is orthogonal to every element of P_J(C⊥), then f is orthogonal to every element of C⊥, so f is in C and thus in C_J = C ∩ P_J(F^n). Hence the dual to P_J(C⊥) (as a subspace of P_J(F^n)) is C_J. □

Corollary: If C is self-dual, then C_J and P_J(C) are dual codes.

In summary, given an (n, k) linear code C with index set I and a subset J ⊆ I of size |J|, the dimensions of C_J, P_{I-J}(C), P_J(C⊥), and (C⊥)_{I-J} may be determined from any one of them, say k(C_J).

Theorem 1 (duality): Let C and C⊥ be dual (n, k) and (n, r) linear codes, with r = n - k. Let J ⊆ I, and define r(C_J) ≜ |J| - k(C_J). Then

    dim C_J = k(C_J),            dim P_{I-J}(C) = k - k(C_J),
    dim P_J(C⊥) = r(C_J),        dim (C⊥)_{I-J} = r - r(C_J).

Simonis [4], using punctured and shortened codes, proves Lemmas 1 and 2, but does not state Theorem 1.

As an immediate corollary, we may state a generalization of Wei's Theorem 2 [1], even though we have not yet defined the "generalized Hamming weights" m_j(C).

Corollary: If C and C⊥ are dual (n, k) and (n, r) linear codes, then

    m_j(C) ≜ min_J {|J| : k(C_J) = j}
           = min_J {|J| : k - k[P_{I-J}(C)] = j}
           = min_J {|J| : |J| - k[P_J(C⊥)] = j}
           = min_J {|J| : k[(C⊥)_{I-J}] - r + |J| = j}.

Wei's Theorem 2 is the third of these identities.

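The bookkeeping of Theorem 1 can be verified the same way. The sketch below (illustrative; it assumes the same (8,4) generator matrix as above and obtains the dual code by exhaustive search over F_2^8) checks all four dimensions for every subset J.

    # Check Theorem 1 on the (8,4) code C and its dual, for every subset J.
    # C is generated by the same (assumed) matrix G as above; the dual code is
    # found by exhaustive search over F_2^8, so no extra structure is presupposed.
    from itertools import combinations, product

    n, k = 8, 4
    r = n - k
    I = set(range(n))
    G = [(1, 1, 1, 1, 0, 0, 0, 0),
         (0, 0, 1, 1, 1, 1, 0, 0),
         (0, 0, 0, 0, 1, 1, 1, 1),
         (0, 1, 0, 1, 0, 1, 0, 1)]

    C = [tuple(sum(a * g[i] for a, g in zip(coeffs, G)) % 2 for i in range(n))
         for coeffs in product((0, 1), repeat=k)]
    C_dual = [v for v in product((0, 1), repeat=n)
              if all(sum(a * b for a, b in zip(v, g)) % 2 == 0 for g in G)]

    def dim_subcode(code, J):
        """Dimension of the subcode supported inside J (count codewords, take log2)."""
        count = sum(1 for c in code if all(c[i] == 0 for i in I - J))
        return count.bit_length() - 1

    def dim_projection(code, J):
        """dim P_J(code): the projection is linear, so its size is 2^dim."""
        images = {tuple(c[i] if i in J else 0 for i in range(n)) for c in code}
        return len(images).bit_length() - 1

    for size in range(n + 1):
        for J in map(set, combinations(range(n), size)):
            kCJ = dim_subcode(C, J)                       # k(C_J)
            rCJ = len(J) - kCJ                            # r(C_J) = |J| - k(C_J)
            assert dim_projection(C, I - J) == k - kCJ    # dim P_{I-J}(C)
            assert dim_projection(C_dual, J) == rCJ       # dim P_J(C_dual), cf. Lemma 2
            assert dim_subcode(C_dual, I - J) == r - rCJ  # dim (C_dual)_{I-J}
    print("Theorem 1 verified for all subsets J.")
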
III. DIMENSION/LENGTH PROFILES

The dimension/length profile (DLP) of C will be defined as the sequence k(C) = {k_i(C), 0 ≤ i ≤ n}, whose ith component k_i(C) is the maximum dimension of any subcode C_J with |J| = i:

    k_i(C) ≜ max_J {k(C_J) : |J| = i},  0 ≤ i ≤ n.

Alternatively, k_i(C) is the maximum dimension of any subcode C_J whose effective length is not greater than i:

    k_i(C) = max_J {k(C_J) : m(C_J) ≤ i},  0 ≤ i ≤ n.

Obviously k_i(C) is nondecreasing with i, and k_0(C) = 0, k_n(C) = k. Furthermore, by the corollary to Lemma 1, the increments k_{i+1}(C) - k_i(C) can be at most 1. Therefore, k(C) rises from 0 to k in k distinct unit steps.

Equivalently, we may define the length/dimension profile (LDP) of C, whose jth component is the minimum effective length of any subcode whose dimension is j:

    m(C) = {m_j(C), 0 ≤ j ≤ k},
    m_j(C) ≜ min_J {|J| : k(C_J) = j},  0 ≤ j ≤ k.

Alternatively,

    m_j(C) = min_J {m(C_J) : k(C_J) = j},  0 ≤ j ≤ k.

Obviously, m_j(C) is nondecreasing with j, with m_0(C) = 0 and m_k(C) = m(C) ≤ n(C).

The dimension/length profile of C determines the length/dimension profile, and vice versa: m_j(C) is the least i such that k_i(C) ≥ j, and k_i(C) is the greatest j such that m_j(C) ≤ i. The k distinct unit steps in the DLP correspond to the k distinct values of m_j(C) in the LDP; if k_{i+1}(C) - k_i(C) = 1, then m_j(C) = i + 1 for j = k_{i+1}(C).

Example 1: The dimension/length profile of the (8,4) binary extended Hamming code is {0, 0, 0, 0, 1, 1, 2, 3, 4}. From this, it follows that its length/dimension profile is {0, 4, 6, 7, 8}, as shown in Fig. 1. Equally, the latter determines the former.

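These definitions translate directly into a brute-force computation. The following sketch (illustrative; it assumes the same generator matrix for the (8,4) extended Hamming code as in the earlier sketches) computes k_i(C) by maximizing k(C_J) over all subsets J of size i, derives the LDP from the DLP, and reproduces the profiles of Example 1.

    # Brute-force DLP and LDP of the (8,4) extended Hamming code (generator assumed).
    from itertools import combinations, product

    n, k = 8, 4
    G = [(1, 1, 1, 1, 0, 0, 0, 0),
         (0, 0, 1, 1, 1, 1, 0, 0),
         (0, 0, 0, 0, 1, 1, 1, 1),
         (0, 1, 0, 1, 0, 1, 0, 1)]
    C = [tuple(sum(a * g[i] for a, g in zip(coeffs, G)) % 2 for i in range(n))
         for coeffs in product((0, 1), repeat=k)]

    def dim_subcode(J):
        """k(C_J): log2 of the number of codewords whose support lies inside J."""
        count = sum(1 for c in C if all(c[i] == 0 for i in range(n) if i not in J))
        return count.bit_length() - 1

    # DLP: k_i(C) = max over all |J| = i of k(C_J).
    dlp = [max(dim_subcode(set(J)) for J in combinations(range(n), i)) for i in range(n + 1)]

    # LDP from the DLP: m_j(C) = least i such that k_i(C) >= j.
    ldp = [next(i for i in range(n + 1) if dlp[i] >= j) for j in range(k + 1)]

    print(dlp)   # [0, 0, 0, 0, 1, 1, 2, 3, 4]   (the DLP of Example 1)
    print(ldp)   # [0, 4, 6, 7, 8]               (the LDP of Example 1)
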
Wei [1] defines m_j(C) as the jth generalized Hamming weight (GHW) of C. The terminology arises from the observation that if the minimum Hamming weight of a nonzero codeword of C is d, then m_1(C) = d (since k_{d-1}(C) = 0 and k_d(C) = 1). Wei refers to the LDP as the GHW hierarchy of C.

A. DLP Duality

We define the inverse DLP of C as the sequence k̃(C) with components

    k̃_i(C) ≜ min_J {k[P_J(C)] : |J| = i},  0 ≤ i ≤ n.

Theorem 2: The inverse DLP and DLP of an (n, k) linear code C are related by

    k_i(C) + k̃_{n-i}(C) = k,  0 ≤ i ≤ n.

Proof: From the first duality lemma, k(C_J) + k[P_{I-J}(C)] = k. □

It follows that the inverse DLP k̃(C) can be obtained from the DLP k(C) by a horizontal reflection about i = n/2 and a vertical reflection about j = k/2, or equivalently, by a 180° rotation about (n/2, k/2), as illustrated in Fig. 2 for the example (8,4) code. The inverse DLP k̃(C) thus rises from 0 to k in k distinct unit steps, which occur at the mirror images of the steps of k(C).

Moreover, from the second duality lemma, the DLP and inverse DLP of C⊥ may be determined from the DLP of C via the following dual relationships.

Theorem 3 (dual DLP): If C is an (n, k) linear code with dual C⊥, then for 0 ≤ i ≤ n,

    k_i(C) + k̃_i(C⊥) = i.

Proof: By Lemma 2, k(C_J) + k[P_J(C⊥)] = |J|. □

In other words, if C and C⊥ are dual linear codes of length n, then the DLP of C and the inverse DLP of C⊥ are related by

    k(C) + k̃(C⊥) = i,

where i ≜ {i : 0 ≤ i ≤ n}.

Corollary: If C is self-dual, then k(C) + k̃(C) = i.

It follows that the DLP k(C⊥) is the difference i - k̃(C), as illustrated in Fig. 3 for the example (8,4) code.

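Since the (8,4) extended Hamming code is self-dual, Theorems 2 and 3 can be checked on Example 1 with a few lines of arithmetic (illustrative sketch):

    # Check Theorem 2 and (via self-duality) Theorem 3 on the profiles of Example 1.
    n, k = 8, 4
    dlp = [0, 0, 0, 0, 1, 1, 2, 3, 4]                    # k(C), Example 1

    # Theorem 2: inverse DLP is the 180-degree rotation, k~_i(C) = k - k_{n-i}(C).
    inv_dlp = [k - dlp[n - i] for i in range(n + 1)]
    print(inv_dlp)                                       # [0, 1, 2, 3, 3, 4, 4, 4, 4]

    # Corollary of Theorem 3 for a self-dual code: k_i(C) + k~_i(C) = i for all i.
    assert all(dlp[i] + inv_dlp[i] == i for i in range(n + 1))

    # Equivalently, the DLP of the dual is i - k~_i(C); here it reproduces dlp itself.
    assert [i - inv_dlp[i] for i in range(n + 1)] == dlp
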

Fig. 2. The inverse DLP of C is obtained by rotating the DLP 180° about (n/2, k/2).

Fig. 3. The DLP of C⊥ is the difference between i and the inverse DLP of C.

Thus k(C⊥) rises from 0 to r = n - k in r distinct unit steps, which occur whenever there is no step in k(C).

Example 2: The DLP of the (16,5) first-order Reed-Muller (RM) code is given by [1]

    k(C) = {0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 3, 4, 5}.

By Theorem 2, its inverse DLP is

    k̃(C) = {0, 1, 2, 3, 3, 4, 4, 4, 4, 5, 5, 5, 5, 5, 5, 5, 5}.

By Theorem 3, the DLP and inverse DLP of the dual (16,11) second-order RM code are

    k(C⊥) = {0, 0, 0, 0, 1, 1, 2, 3, 4, 4, 5, 6, 7, 8, 9, 10, 11},
    k̃(C⊥) = {0, 1, 2, 3, 4, 5, 6, 7, 7, 8, 9, 10, 10, 11, 11, 11, 11}.

The unit steps of these profiles occur at the following places, as shown in Fig. 4:

    k(C):  {8, 12, 14, 15, 16},
    k̃(C):  {1, 2, 3, 5, 9},
    k(C⊥): {4, 6, 7, 8, 10, 11, 12, 13, 14, 15, 16},
    k̃(C⊥): {1, 2, 3, 4, 5, 6, 7, 9, 10, 11, 13}.

Fig. 4. DLP and inverse DLP of the (16,5) code (solid) and the (16,11) code (dashed).

Since the unit steps of k(C) and k(C⊥) occur at the nonzero GHW's of C and C⊥, respectively, Wei's duality result thus follows from Theorems 2 and 3.

Theorem 4 [1]: Given a linear code C with dual code C⊥, for 1 ≤ i ≤ n, either i is a GHW of C or n - i + 1 is a GHW of C⊥, but not both. In other words,

    {m_j(C⊥), 1 ≤ j ≤ r} = [1, n] - {n - m_j(C) + 1, 1 ≤ j ≤ k}.

While this is a striking duality result, it does not appear to be directly related to the MacWilliams identities, as was suggested by Wei [1]. However, Klove [3] and Simonis [4] have been able to prove generalized MacWilliams identities for "support weight distributions," which determine GHW hierarchies.

B. DLP Bounds and MDS Codes

Graphically, the "slope" of the DLP or of the inverse DLP is either 0° or 45°. Therefore, the DLP and inverse DLP are bounded within the regions illustrated in Fig. 5 for high-rate (r < k) and low-rate (r ≥ k) codes.

From these bounds and the fact that m_1(C) is the minimum Hamming distance d of C, we obtain the Singleton bound: d ≤ r + 1. Wei therefore calls the lower bound of Fig. 5 a "generalized Singleton bound" [1].

A maximum distance separable (MDS) code is an (n, k) linear code C (k ≥ 1) that meets the Singleton bound with equality. In this paper, a trivial (n, 0) code will also be defined as MDS, even though its minimum distance is conventionally defined as ∞.

The following theorem shows that the bounds of Fig. 5 are met everywhere with equality if and only if C is MDS. In turn, this result implies some of the most important properties of MDS codes.

Theorem 5 (MDS bound): The DLP and inverse DLP of an (n, k) linear code C are bounded by

    k(C) ≥ {0, ..., 0, 1, 2, ..., k},  i.e., k_i(C) ≥ max(0, i - r),
    k̃(C) ≤ {0, 1, 2, ..., k, ..., k},  i.e., k̃_i(C) ≤ min(i, k).

These bounds are met with equality everywhere if and only if C is MDS.

Fig. 5. Bounds on DLP and inverse DLP of high-rate and low-rate codes.

Proof: The bounds follow from the elementary bounds dim P_J(C) ≤ |J|, dim P_J(C) ≤ dim C = k, dim C_J ≥ 0, and dim C_J ≥ dim C - |I - J| = |J| - r, which are connected by the first duality lemma, and which hold for every subset J ⊆ I.

If the bounds are met with equality, then d(C) = m_1(C) = n - k + 1, so C is MDS. Conversely, if C is MDS, then m_1(C) = n - k + 1, which implies k_{n-k}(C) = 0, so in view of the unit-step constraint, the only possible DLP is the lower bound to k(C). □

Remark: In the case of equality, the LDP of C is {0, n - k + 1, n - k + 2, ..., n}.

Thus MDS codes are those codes whose DLPs meet the outer bounds of Fig. 5. Their most important characteristics follow directly from this extremal property.

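Theorem 5 turns the MDS property into a componentwise test on the DLP: C is MDS if and only if k_i(C) = max(0, i - r) for every i. The sketch below (illustrative) applies this test to the Reed-Muller profiles of Example 2, which fail it, and to the extremal profile that a (16,5) MDS code over a sufficiently large field would have.

    # Theorem 5 as a test: C is MDS iff k_i(C) = max(0, i - r) for 0 <= i <= n.
    def is_mds(dlp):
        n, k = len(dlp) - 1, dlp[-1]
        r = n - k
        return all(dlp[i] == max(0, i - r) for i in range(n + 1))

    rm_16_5 = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 3, 4, 5]      # Example 2
    rm_16_11 = [0, 0, 0, 0, 1, 1, 2, 3, 4, 4, 5, 6, 7, 8, 9, 10, 11]   # Example 2
    print(is_mds(rm_16_5), is_mds(rm_16_11))   # False False: binary RM codes are not MDS

    # The extremal DLP that a (16,5) MDS code (over a large enough field) would have:
    mds_16_5 = [max(0, i - 11) for i in range(17)]
    print(mds_16_5, is_mds(mds_16_5))          # [0, ..., 0, 1, 2, 3, 4, 5] True
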
Corollary: If C is an (n, k) MDS code, then
(a) its dual code C⊥ is an (n, r) MDS code;
(b) for every subset J ⊆ I, the punctured code P_J(C) is MDS;
(c) for every subset J ⊆ I, the shortened code C_J is MDS;
(d) in particular, if |J| = k, then dim P_J(C) = k, i.e., every subset J of size k is an information set for C;
(e) in particular, if |J| = r + 1 = d, then dim C_J = 1, i.e., for every subset J of size d, there exists a codeword of weight d with support J.

Remark: From (a) and (e), if |J| = k + 1, then there exists a codeword of C⊥ of weight k + 1 with support J, i.e., every subset J of size k + 1 is a check set for C.

Proof: (a) If the inverse DLP of C is k̃(C) = {0, 1, 2, ..., k, ..., k}, then from Theorem 3, the DLP of C⊥ is k(C⊥) = {0, ..., 0, 1, 2, ..., r}, so by Theorem 5, C⊥ is MDS.

(b) and (c): If the bounds of Theorem 5 are met with equality, then dim P_J(C) = k̃_{|J|}(C) and dim C_J = k_{|J|}(C) for every J ⊆ I, as noted in the proof of Theorem 5. The inverse DLP of P_J(C) and the DLP of C_J are thus equal to those of C in the range 0 ≤ i ≤ |J|, so by Theorem 5, both are MDS (when regarded as codes of length |J|).

(d) and (e): Special cases of (b) and (c). □

In fact, if C is an (n, k) MDS code over a finite field GF(q), then its entire weight distribution is determined as a function of n, k, and q by these properties.

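Property (d) can be checked exhaustively for a small MDS code. The sketch below (illustrative; the (4,2) Reed-Solomon-style code over GF(5) and its generator matrix are chosen for this example, not taken from the paper) verifies that every 2-subset of coordinates is an information set by checking that every pair of columns of G is linearly independent over GF(5).

    # Property (d) for a small MDS code: every k columns of G form an information set.
    # Example: a (4,2) Reed-Solomon-style code over GF(5), G rows (1,1,1,1), (0,1,2,3).
    from itertools import combinations

    q, n, k = 5, 4, 2
    G = [(1, 1, 1, 1),
         (0, 1, 2, 3)]

    def rank_mod_q(matrix, q):
        """Rank over GF(q), q prime, by Gaussian elimination."""
        rows = [list(r) for r in matrix]
        rank = 0
        for col in range(len(rows[0])):
            pivot = next((i for i in range(rank, len(rows)) if rows[i][col] % q), None)
            if pivot is None:
                continue
            rows[rank], rows[pivot] = rows[pivot], rows[rank]
            inv = pow(rows[rank][col], q - 2, q)          # inverse modulo a prime
            rows[rank] = [x * inv % q for x in rows[rank]]
            for i in range(len(rows)):
                if i != rank and rows[i][col] % q:
                    f = rows[i][col]
                    rows[i] = [(a - f * b) % q for a, b in zip(rows[i], rows[rank])]
            rank += 1
        return rank

    for J in combinations(range(n), k):
        restricted = [tuple(row[j] for j in J) for row in G]   # G restricted to the columns in J
        assert rank_mod_q(restricted, q) == k                  # dim P_J(C) = k
    print("Every", k, "coordinates form an information set for this (4,2) code over GF(5).")
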
.
The Singleton bound is the general bound
k,,,(n, d)
=
n
-
d
+
1, which applies over any field
F.
Application of
this bound results in an alternative derivation
of
the MDS
bound of Theorem
5.
When
F
is a finite field, sharper bounds are‘ often
known. For binary codes, Brouwer and Verhoeff [lo]
tabulate results obtained by generations of coding re-
searchers. For example, the bounds for
d
=
4 and
d
=
8
are as follows.
Example 3: The greatest possible DLP for a binary linear (n, k, 4) code is

    {0, 0, 0, 0, 1, 1, 2, 3, 4, 4, 5, 6, 7, 8, 9, 10, 11, 11, 12, ...}.

Any (2^m, 2^m - m - 1, 4) extended Hamming code C, m ≥ 2, has this profile, i.e., the (4,1,4) code, the (8,4,4) code, the (16,11,4) code, and so forth [1]. Shortened codes of these codes also achieve this profile. The corresponding LDP (GHW hierarchy) is

    {0, 4, 6, 7, 8, 10, 11, 12, 13, 14, 15, 16, 18, ...}.

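The d = 4 profile above can be regenerated from the optimal code dimensions. Assuming, as Example 3 states, that (possibly shortened) extended Hamming codes attain the greatest possible DLP for d = 4, so that k_max(i, 4) = i - ceil(log2 i) - 1 for i ≥ 4, the following sketch (illustrative) reproduces the listed DLP and LDP:

    # Regenerate the d = 4 DLP of Example 3, assuming (as the example asserts) that
    # shortened extended Hamming codes are optimal for d = 4, i.e.,
    # k_max(i, 4) = i - ceil(log2(i)) - 1 for i >= 4.
    from math import ceil, log2

    def k_max_d4(i):
        return 0 if i < 4 else i - ceil(log2(i)) - 1

    dlp_bound = [k_max_d4(i) for i in range(19)]
    print(dlp_bound)
    # [0, 0, 0, 0, 1, 1, 2, 3, 4, 4, 5, 6, 7, 8, 9, 10, 11, 11, 12]   (matches Example 3)

    # Corresponding LDP (GHW hierarchy): m_j = least i with k_max(i, 4) >= j.
    ldp = [next(i for i in range(19) if dlp_bound[i] >= j) for j in range(dlp_bound[-1] + 1)]
    print(ldp)
    # [0, 4, 6, 7, 8, 10, 11, 12, 13, 14, 15, 16, 18]   (matches Example 3)
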
Example 4: The greatest possible DLP for a binary linear (n, k, 8) code is

    {0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 3, 4, 5, 5, 6, 7, 8, 9, 10, 11, 12, 12, ...}.

This profile is attained by the (8,1,8) repetition code, the (16,5,8) first-order Reed-Muller code, and the (24,12,8) Golay code [1], but not by any (32,16,8) code. The corresponding LDP is

    {0, 8, 12, 14, 15, 16, 18, 19, 20, 21, 22, 23, 24, ...}.

IV. TRELLIS COMPLEXITY AND DLP'S

A linear block code C may be regarded as (the set of output sequences of) a time-varying linear dynamical system, provided that its index set I has a definite order. Without loss of generality, we may take I as the set {1, 2, ..., n}, which has an implicit natural order. We may then think of I as the time axis of C, and use temporal language such as "before," "after," and so forth.

References

- Generalized Hamming weights for linear codes.
- Coset codes II: Binary lattices and related codes.
- Efficient maximum likelihood decoding of linear block codes using a trellis.
- The Z4-linearity of Kerdock, Preparata, Goethals, and related codes.