
An Interpolation Procedure for List Decoding Reed–Solomon Codes Based on Generalized Key Equations

Abstract
The key step of syndrome-based decoding of Reed-Solomon codes up to half the minimum distance is to solve the so-called Key Equation. List decoding algorithms, capable of decoding beyond half the minimum distance, are based on interpolation and factorization of multivariate polynomials. This article provides a link between syndrome-based decoding approaches based on Key Equations and the interpolation-based list decoding algorithms of Guruswami and Sudan for Reed-Solomon codes. The original interpolation conditions of Guruswami and Sudan for Reed-Solomon codes are reformulated in terms of a set of Key Equations. These equations provide a structured homogeneous linear system of equations of Block-Hankel form that can be solved by an adaptation of the Fundamental Iterative Algorithm. For an (n, k) Reed-Solomon code, a multiplicity s and a list size ℓ, our algorithm has time complexity O(ℓs⁴n²).


5946 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 9, SEPTEMBER 2011
An Interpolation Procedure for List Decoding
Reed–Solomon Codes Based on
Generalized Key Equations
Alexander Zeh, Student Member, IEEE, Christian Gentner, Member, IEEE, and Daniel Augot
Abstract—The key step of syndrome-based decoding of Reed–Solomon codes up to half the minimum distance is to solve the so-called Key Equation. List decoding algorithms, capable of decoding beyond half the minimum distance, are based on interpolation and factorization of multivariate polynomials. This article provides a link between syndrome-based decoding approaches based on Key Equations and the interpolation-based list decoding algorithms of Guruswami and Sudan for Reed–Solomon codes. The original interpolation conditions of Guruswami and Sudan for Reed–Solomon codes are reformulated in terms of a set of Key Equations. These equations provide a structured homogeneous linear system of equations of Block-Hankel form that can be solved by an adaptation of the Fundamental Iterative Algorithm. For an (n, k) Reed–Solomon code, a multiplicity s and a list size ℓ, our algorithm has time complexity O(ℓs⁴n²).
Index Terms—Block-Hankel matrix, fundamental iterative algo-
rithm (FIA), Guruswami–Sudan interpolation, key equation, list
decoding, Reed–Solomon codes.
I. INTRODUCTION
In 1999, Guruswami and Sudan [3]–[5] extended Sudan’s
original approach [6] by introducing multiplicities in the
interpolation step of their polynomial-time list decoding proce-
dure for Reed–Solomon and Algebraic Geometric codes. This
modification permits decoding of
Reed–Solomon codes
[7] (and Algebraic Geometric codes) of arbitrary code-rate
with increased decoding radius. Guruswami and
Sudan were focused on the existence of a polynomial-time
algorithm. Kötter [8] and Roth-Ruckenstein [9], [10] pro-
posed quadratic time algorithms for the key steps of the
Guruswami–Sudan principle for Reed–Solomon codes, i.e.,
Manuscript received September 03, 2009; revised September 13, 2010; ac-
cepted April 10, 2011. Date of current version August 31, 2011. This work
was supported by the German Research Council “Deutsche Forschungsgemein-
schaft” (DFG) by Grant No. Bo867/22-1. The material in this paper was pre-
sented at the IEEE International Symposium on Information Theory, Toronto,
ON, Canada, 2008, and at the IEEE Information Theory Workshop, Taormina,
Sicily, Italy, 2009.
A. Zeh is with the Institute of Telecommunications and Applied In-
formation Theory, University of Ulm, Germany, and also with the
INRIA—Saclay-Île-de-France and École Polytechnique, Paris, France
(e-mail: alexander.zeh@uni-ulm.de).
C. Gentner is with the Institute of Communications and Navigation of the
German Aerospace Center (DLR), Germany (e-mail: christian.gentner@dlr.de).
D. Augot is with INRIA—Saclay-Île-de-France and École Polytechnique,
Paris, France (e-mail: daniel.augot@inria.fr).
Communicated by M. Blaum, Associate Editor for Coding Theory.
Digital Object Identifier 10.1109/TIT.2011.2162160
interpolation and factorization of bivariate polynomials. Var-
ious other approaches for a low-complexity realization of
Guruswami–Sudan exist, e.g., the work of Alekhnovich [11],
where fast computer algebra techniques are used. Trifonov’s
[12] contributions rely on ideal theory and divide and conquer
methods. Sakata uses Gröbner-bases techniques [13], [14].
In this paper, we reformulate the bivariate interpolation step of Guruswami–Sudan for Reed–Solomon codes into a set of univariate Key Equations [1]. This extends the previous work of
Roth and Ruckenstein [9], [10], where the reformulation was
done for the special case of Sudan. Furthermore, we present a
modification of the so-called Fundamental Iterative Algorithm
(FIA), proposed by Feng and Tzeng in 1991 [15]. Adjusted to the special case of one Hankel matrix, the FIA resembles the approach of Berlekamp and Massey [16], [17].
Independently of our contribution, Beelen and Høholdt refor-
mulated the Guruswami–Sudan constraints for Algebraic Geo-
metric codes [18], [19]. It is not clear whether the system they obtain is highly structured.
This contribution is organized as follows. The next section
contains basic definitions for Reed–Solomon codes and bi-
variate polynomials. In Section III, we derive the Key Equation
for conventional decoding of Reed–Solomon codes from the
Welch-Berlekamp approach [20] and we present the adjust-
ment of the FIA for one Hankel matrix. A modified version
of Sudan’s reformulated interpolation problem based on the
work of Roth-Ruckenstein [9] is derived and the adjustment of
the FIA for this case is illustrated in Section IV. In Section V,
the interpolation step of the Guruswami–Sudan principle is
reformulated. The obtained homogeneous set of linear equa-
tions has Block-Hankel structure. We adjust the FIA for this
Block-Hankel structure, prove the correctness of the proposed
algorithm and analyze its complexity. We conclude this contri-
bution in Section VI.
II. DEFINITIONS AND PRELIMINARIES
Throughout this paper, denotes the set of integers and denotes the set of integers . An matrix consists of the entries , where and . A univariate polynomial of degree less than is denoted by . A vector of length is represented by .
Let be a power of a prime and let denote the finite field of order . Let denote nonzero distinct elements (code-locators) of and let denote
nonzero elements (column-multipliers), the associated evalua-
tion map ev is
(1)
The associated Generalized Reed–Solomon code
of
length
and dimension is [21]
(2)
where
denotes the set of all univariate polynomials with
degree less than
. Generalized Reed–Solomon codes are MDS codes with minimum distance . The dual of a Generalized Reed–Solomon code is also a Generalized Reed–Solomon code with the same code locators and column multipliers , where . The explicit form of the column multipliers is [22]
(3)
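The symbols of formula (3) were lost in extraction; the sketch below assumes the standard explicit form for the dual column multipliers, v'_i = (v_i · ∏_{j≠i}(α_i − α_j))⁻¹, over a prime field. The function name and the small GF(7) check are illustrative, not from the paper.

```python
def dual_multipliers(alphas, v, p):
    """Column multipliers of the dual GRS code over GF(p), p prime,
    assuming v'_i = ( v_i * prod_{j != i} (alpha_i - alpha_j) )^(-1)."""
    n = len(alphas)
    out = []
    for i in range(n):
        prod = v[i] % p
        for j in range(n):
            if j != i:
                prod = prod * (alphas[i] - alphas[j]) % p
        out.append(pow(prod, p - 2, p))  # inverse via Fermat's little theorem
    return out

# Sanity check over GF(7) with a GRS(4, 2) code and v = (1, 1, 1, 1):
# a codeword (from f(x) = x) must be orthogonal to a dual codeword
# (from g(x) = 1, deg g < n - k = 2).
p, alphas, v = 7, [1, 2, 3, 4], [1, 1, 1, 1]
vd = dual_multipliers(alphas, v, p)
c = [a % p for a in alphas]
assert sum(ci * di for ci, di in zip(c, vd)) % p == 0
```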
We will take advantage of structured matrices and therefore
we recall the definition of a Hankel matrix in the following.
Definition 1 (Hankel Matrix): An
Hankel matrix
is a matrix, where for all
and holds.
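The entries of Definition 1 were stripped during extraction; the defining property of a Hankel matrix is that it is constant along each anti-diagonal, i.e., the entry in position (i, j) depends only on i + j. A minimal sketch:

```python
def hankel(seq, rows, cols):
    """Hankel matrix built from seq: entry (i, j) equals seq[i + j],
    so the matrix is constant along each anti-diagonal."""
    assert len(seq) >= rows + cols - 1
    return [[seq[i + j] for j in range(cols)] for i in range(rows)]

# Entry (0, 1) and entry (1, 0) both equal seq[1]:
assert hankel([5, 1, 4, 2, 8], 2, 3) == [[5, 1, 4], [1, 4, 2]]
```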
Let us recall some properties of bivariate polynomials in
.
Definition 2 (Weighted Degree): Let the polynomial
be in . Then, the
-weighted degree of , denoted by ,
is the maximum over all
such that .
Definition 3 (Multiplicity and Hasse Derivative [23]): Let
be a polynomial in . Let
. A bivariate
polynomial
has at least multiplicity in the point
, denoted by
(4)
if the coefficients
are zero for all . Furthermore,
the
th Hasse derivative of the polynomial in the
point
is
(5)
Let denote the th Hasse derivative of
with respect to the variable .
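The univariate analogue of the Hasse derivative in (5) can be sketched as follows. The binomial-coefficient form D^j p = Σ_{i≥j} C(i, j) p_i x^(i−j) is standard; avoiding the division by j! of the ordinary derivative is what keeps the notion well defined over finite fields, and a polynomial has multiplicity at least s at a point exactly when all Hasse derivatives of order below s vanish there, matching (4).

```python
from math import comb

def hasse_derivative(coeffs, j):
    """j-th Hasse derivative of p(x) = sum_i coeffs[i] * x^i:
    D^j p = sum_{i >= j} C(i, j) * coeffs[i] * x^(i - j).
    Equals the ordinary j-th derivative divided by j!."""
    return [comb(i, j) * coeffs[i] for i in range(j, len(coeffs))]

# p(x) = x^3: second Hasse derivative is C(3, 2) * x = 3x
assert hasse_derivative([0, 0, 0, 1], 2) == [0, 3]
```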
We will use the inner product for bivariate polynomials to
describe our algorithms.
Definition 4 (Inner Product): Let two polynomials
and
in be given. The inner product of
and is defined by .
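The displayed formula of Definition 4 was lost in extraction; the sketch below assumes the coefficient-wise pairing ⟨A, B⟩ = Σ a_{i,j} b_{i,j} over matching monomials, which is how such an inner product is typically used for discrepancy computations. The dictionary representation and names are ours.

```python
def inner_product(A, B):
    """Coefficient-wise pairing of two bivariate polynomials,
    each given as a dict {(i, j): coefficient}:
    <A, B> = sum over common monomials (i, j) of a_ij * b_ij."""
    return sum(c * B.get(m, 0) for m, c in A.items())

# A = 1 + 2xy, B = 3x + 4xy: only the xy terms overlap, so <A, B> = 2 * 4
assert inner_product({(0, 0): 1, (1, 1): 2}, {(1, 0): 3, (1, 1): 4}) == 8
```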
III. WELCH-BERLEKAMP AS LIST-ONE DECODER AND THE FUNDAMENTAL ITERATIVE ALGORITHM
A. Syndrome-Based Decoding of Reed–Solomon Codes
Let
denote the error word and let
be the set of error locations (that is ). Let
. It is well known that a code can
recover uniquely any error pattern if and only if
. The
syndrome coefficients depend only
on the error word
and the associated syndrome polynomial
is defined by [22]
The error-locator polynomial is and the
error-evaluator polynomial
is
. They are related by the Key Equation:
(6)
The main steps for conventional decoding up to half the minimum distance are:
1) Calculate the syndrome polynomial from the received word .
2) Solve (6) for the error-locator polynomial and determine its roots.
3) Compute and then determine the error values.
B. Derivation of the Key Equation From Welch-Berlekamp
We derive the classical Key Equation (6) from the simplest in-
terpolation based decoding algorithm, reported as the “Welch-
Berlekamp” decoding algorithm in [24]–[26]. We provide a sim-
pler representation than in [20] and give a polynomial derivation
of the Key Equation.
Consider a
code with support set ,
multipliers
and dimension . The Welch-
Berlekamp approach is based on the following lemma [27, Ch.
5.2].
Lemma 1 (List-One Decoder): Let
be a code-
word of a
code and let
be the received word. We search for a polynomial
in such that:
1)
,
2)
,
3)
.
If
has distance less than or equal to from the
received word
, then .
Let us connect Lemma 1 to (6).
Proposition 1 (Univariate Reformulation): Let
be the Lagrange interpolation polynomial, such that
holds. Let .
Then
satisfies Conditions 2)

and 3) of Lemma 1 if and only if there exists a polynomial
such that
(7)
and
.
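The Lagrange interpolation polynomial used in Proposition 1 can be computed directly. A minimal sketch over a prime field GF(p); the helper names are ours:

```python
def poly_mul(a, b, p):
    """Product of two coefficient lists (low degree first) modulo p."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def lagrange(xs, ys, p):
    """Coefficients (low degree first) of the unique R(x) with
    deg R < len(xs) and R(xs[i]) = ys[i], over GF(p), p prime."""
    n = len(xs)
    coeffs = [0] * n
    for i in range(n):
        num, den = [1], 1
        for j in range(n):
            if j != i:
                num = poly_mul(num, [-xs[j] % p, 1], p)  # factor (x - x_j)
                den = den * (xs[i] - xs[j]) % p
        scale = ys[i] * pow(den, p - 2, p) % p
        for d, cf in enumerate(num):
            coeffs[d] = (coeffs[d] + scale * cf) % p
    return coeffs

# R(x) = x^2 + 1 over GF(7), recovered from its values at x = 0, 1, 2
assert lagrange([0, 1, 2], [1, 2, 5], 7) == [1, 0, 1]
```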
Let
. Define the following
reciprocal polynomials:
(8)
Inverting the order of the coefficients of (7) leads to:
With (8), we obtain:
which we can consider modulo . We obtain
(9)
Since
, we can define the formal power series
:
(10)
Using the column multipliers (3) for the dual code, it can be
verified that
is the series of syndromes with
(11)
Thus, dividing (9) by
, we obtain
(12)
which corresponds to the classical Key Equation (6). The syn-
drome polynomial is
, and is the error-
locator polynomial
.
In the case of
errors, we consider only the terms of the
Key Equation of degree greater than
and we get the
following homogeneous linear system of equations:
$$
\begin{pmatrix}
S_0 & S_1 & \cdots & S_{\tau} \\
S_1 & S_2 & \cdots & S_{\tau+1} \\
\vdots & & & \vdots \\
S_{n-k-\tau-1} & S_{n-k-\tau} & \cdots & S_{n-k-1}
\end{pmatrix}
\begin{pmatrix}
\Lambda_{\tau} \\ \Lambda_{\tau-1} \\ \vdots \\ \Lambda_{0}
\end{pmatrix}
= \mathbf{0}
\qquad (13)
$$
The above syndrome matrix, with entries S_{i,j} = S_{i+j} for all i and j, has Hankel form (see Definition 1). Equation (12)
can be solved by the well-known Berlekamp-Massey algorithm
[16], [17] or with a modification of the Extended Euclidean al-
gorithm [28]. The parallels of the Berlekamp-Massey algorithm
and the Extended Euclidean algorithm have been considered in
[29]–[31].
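For reference, the Berlekamp-Massey algorithm mentioned above can be sketched over a prime field. This is the textbook shift-register synthesis of [16], [17], not the FIA discussed below; variable names follow common expositions.

```python
def berlekamp_massey(s, p):
    """Shortest linear recurrence for the sequence s over GF(p), p prime.
    Returns (C, L) with C[0] = 1 and, for all valid n,
    sum_{i=0}^{L} C[i] * s[n - i] = 0 (mod p)."""
    C, B = [1], [1]      # current and previous connection polynomials
    L, m, b = 0, 1, 1    # LFSR length, shift since last update, last discrepancy
    for n in range(len(s)):
        # discrepancy: how far C fails to predict s[n]
        d = sum(C[i] * s[n - i] for i in range(L + 1)) % p
        if d == 0:
            m += 1
            continue
        T = C[:]
        coef = d * pow(b, p - 2, p) % p
        if len(B) + m > len(C):
            C = C + [0] * (len(B) + m - len(C))
        for i in range(len(B)):
            C[i + m] = (C[i + m] - coef * B[i]) % p   # C -= coef * x^m * B
        if 2 * L <= n:
            L, B, b, m = n + 1 - L, T, d, 1           # length change
        else:
            m += 1
    return C, L

# Fibonacci sequence mod 7 satisfies s_n = s_{n-1} + s_{n-2}, i.e.,
# connection polynomial 1 - x - x^2 = [1, 6, 6] over GF(7), length 2.
assert berlekamp_massey([1, 1, 2, 3, 5, 1, 6], 7) == ([1, 6, 6], 2)
```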
In the following, we consider the FIA [15], which can be used to find the first linearly dependent columns and connection coefficients for an arbitrary matrix. The FIA allows a significant reduction of complexity when adjusted to a Hankel matrix as in (13).
C. The FIA for One Hankel Matrix
Given an arbitrary
matrix , the FIA
outputs the minimal number of
linearly dependent
columns together with the polynomial
,
with
, such that holds.
The FIA scans the
th column of the matrix row-wise in the
order
and uses previously stored polynomials to
update the current polynomial
. Let be the index of the
current column under inspection, and let
be the current candidate polynomial that satisfies
for some value of the row index . In other words, the coeffi-
cients of the polynomial
give us the vanishing linear com-
bination of the matrix consisting of the first
rows and the first
columns of the matrix . Suppose that the discrepancy
(14)
for next row
is nonzero. If there exists a previously stored
polynomial
and a nonzero discrepancy , corre-
sponding to row
, then the current polynomial is updated
in the following way:
(15)
The proof of the above update rule is straightforward [15].
In the case where and there is no discrepancy stored, the current discrepancy is stored as . The corresponding auxiliary polynomial is stored as . Then, the FIA examines a new column .
Definition 5 (True Discrepancy): Let the FIA examine the
th row of the th column of matrix . Furthermore, let the
calculated discrepancy (14) be nonzero and no other nonzero
discrepancy be stored for row
. Then, the FIA examines a new
column
. We call this case a true discrepancy.
Theorem 1 (Correctness and Complexity of the FIA [15]):
For an
matrix with , the Fundamental Iterative
Algorithm stops when the row pointer has reached the last row

Fig. 1. Illustration of the row pointer of the classic FIA [(a)] and of the adjusted FIA [(b)] when both algorithms are applied to the same 6 × 7 Hankel syndrome matrix of a GRS(16, 4) code. The dots indicate a true discrepancy. In this case, both algorithms enter a new column, but with different initial values of their row pointers.
of column . Then, the last polynomial corresponds to
a valid combination of the first
columns. The complexity
of the algorithm is
.
For a Hankel matrix
(as in Definition 1), the FIA can be
adjusted. Assume the case of a true discrepancy, when the FIA
examines the
th row of the th column of the structured matrix
. The current polynomial is . Then, the FIA starts exam-
ining the
th column at row with
and not at row zero. This reduces the cubic time complexity to a quadratic one [15].
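The unadjusted FIA of this subsection can be sketched over a prime field as follows. Variable names are ours, and the Hankel-specific row-pointer shortcut described above is deliberately omitted, so each new column restarts at row zero.

```python
def fia(M, p):
    """Fundamental Iterative Algorithm (Feng-Tzeng) over GF(p), p prime.
    Returns (nu, T) where columns 0..nu of M are the first linearly
    dependent set of leading columns and T holds coefficients with
    sum_j T[j] * column_j = 0; returns None if all columns are independent."""
    m, n = len(M), len(M[0])
    stored = {}  # row r -> (polynomial, discrepancy) kept at a true discrepancy
    for k in range(n):
        T = [0] * k + [1]   # enter column k with the trivial candidate
        r = 0
        while r < m:
            d = sum(T[j] * M[r][j] for j in range(k + 1)) % p
            if d == 0:
                r += 1                        # row r already annihilated
            elif r in stored:
                Tr, dr = stored[r]            # update rule (15)
                c = d * pow(dr, p - 2, p) % p
                T = [(T[j] - c * (Tr[j] if j < len(Tr) else 0)) % p
                     for j in range(k + 1)]
                r += 1
            else:
                stored[r] = (T[:], d)         # true discrepancy: next column
                break
        else:
            return k, T                       # all m rows annihilated
    return None

# Over GF(7): column 1 is twice column 0, so columns 0..1 are dependent
# with 5 * col0 + 1 * col1 = 0 (mod 7).
assert fia([[1, 2, 3], [2, 4, 6]], 7) == (1, [5, 1])
```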
To illustrate the complexity reduction of the FIA when ad-
justed to a Hankel matrix (compared to the original, unadjusted
FIA), we traced the examined rows for each column in Fig. 1.
Fig. 1(a) shows the values of
of the FIA without any adaption.
The row pointer
of the adapted FIA is traced in Fig. 1(b).
The points on the lines in both figures indicate the case, where
a true discrepancy has been encountered.
IV. SUDAN INTERPOLATION STEP WITH A HORIZONTAL BAND OF HANKEL MATRICES
A. Univariate Reformulation of the Sudan Interpolation Step
In this section, we recall parts of the work of Roth and Ruckenstein [9], [10] for the interpolation step of the Sudan [6] principle. The target decoding radius is denoted by , the corresponding list size by .
Problem 1 (Sudan Interpolation Step [6]): Let the target decoding radius and the received word be given. The Sudan interpolation step determines a polynomial , such that
1) ;
2) ;
3) .
We present here a slightly modified version of [9], to get an
appropriate basis for the extension to the interpolation step in
the Guruswami–Sudan case.
We have
. Let be the Lagrange interpolation polynomial,
s.t.
and .
The reciprocal polynomial of
is denoted by
.
Similar to Proposition 1, Roth-Ruckenstein [9] proved the
following. There is an interpolation polynomial
satis-
fying Conditions (2) and (3) if and only if there exists a uni-
variate polynomial
with degree smaller than ,
s.t.
.
Let the reciprocal polynomials be defined as in (8). From [9,
(19)] we have
(16)
where
. We introduce the power series
(17)
Inserting (17) into (16) leads to
(18)
Based on (18) we can now define syndromes for Problem 1.
Definition 6 (Syndromes for Sudan): The
generalized
syndrome polynomials
are given
by
(19)
The first-order Extended Key Equation is
(20)

with .
An explicit form of
is
(21)
Note 1: In [9], a further degree reduction is proposed. Then (18) is taken modulo and the polynomial disappears. We do not present this improvement here, because we cannot properly reproduce this behavior in the Guruswami–Sudan case (see Note 2).
The degree of the LHS of (16) is smaller than . If we consider the terms of degree higher than , we obtain homogeneous linear equations. Reverting back to the original univariate polynomials , we get the following system:
(22)
With
, we obtain the fol-
lowing matrix form:
$$
\big(\, S^{(0)} \;\; S^{(1)} \;\; \cdots \;\; S^{(\ell)} \,\big)
\begin{pmatrix}
T^{(0)} \\ T^{(1)} \\ \vdots \\ T^{(\ell)}
\end{pmatrix}
= \mathbf{0}
\qquad (23)
$$
where each submatrix
is a Hankel matrix. The syndrome polynomials
of Definition 6 are associated with
this horizontal band of
Hankel matrices by .
In the following, we describe how the FIA can be adapted to
solve the homogeneous system of (23).
B. Adjustment of the FIA for the Reformulated Sudan
Interpolation Problem
The FIA can directly be applied to the matrix
of (23), but if we want to take ad-
vantage of the Hankel structure we have to scan the columns of
in a manner given by the weighted degree
requirement of the interpolation problem.
Let
denote the ordering for the pairs
, where is given by
(24)
The pair that immediately follows
with respect to
the order defined by
is denoted by . The
columns of the matrix
are reordered
according to
. The pair indexes the th column of
th submatrix . More explicitly, we obtain the following
matrix
, where the columns of are reordered [see (25) at
the bottom of the page].
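The column ordering ≺ of (24) can be sketched as follows, assuming the (1, k − 1)-weighted degree ν + t(k − 1) of the monomial x^ν y^t drives the order. The displayed formula was stripped in extraction, so the tie-breaking rule (smaller t first) is our assumption.

```python
def column_order(ell, k, width):
    """Enumerate column indices (t, nu) -- the nu-th column of the t-th
    submatrix -- sorted by (1, k-1)-weighted degree nu + t*(k - 1).
    Ties are broken by t; this tie-break is an assumption."""
    pairs = [(t, nu) for t in range(ell + 1) for nu in range(width)]
    return sorted(pairs, key=lambda pr: (pr[1] + pr[0] * (k - 1), pr[0]))

# ell = 1, k = 3, three columns per submatrix: (0, 2) and (1, 0) share
# weighted degree 2, and the tie-break places (0, 2) first.
assert column_order(1, 3, 3) == [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2)]
```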
The corresponding homogeneous system of equations can
now be written in terms of the inner product for bivariate poly-
nomials (see Definition 4).
Problem 2 (Reformulated Sudan Interpolation Problem): Let
the
syndrome polynomials
be given by Definition 6 and let be
the corresponding bivariate syndrome polynomial. We search a
nonzero bivariate polynomial
such that
(26)
Hence, the bivariate polynomial
is a valid interpo-
lation polynomial for Problem 1. Note that each polynomial
, as defined in (16), has degree smaller than .
To index the columns of the rearranged matrix
, let
(27)
Algorithm 1 is the modified FIA for solving Problem 2. In contrast to the original Roth-Ruckenstein adaptation, we consider all homogeneous linear equations (instead of ), according to Note 1. The column pointer is given by , for indexing the th
, for indexing the th
column of the
th submatrix . Algorithm 1 virtually scans
the rearranged matrix
column after column (see Line 23 of
Algorithm 1). The true discrepancy value for row
is stored in
array
as , and the corresponding intermediate bivariate
polynomial is stored in array
as . The discrepancy calcu-
lation and the update rule [see (14) and (15) for the basic FIA] is
adapted to the bivariate case (see Line 16 of Algorithm 1). For
each submatrix
, the previous value of the row pointer is
(25)

References (excerpt)

- I. S. Reed and G. Solomon, "Polynomial Codes Over Certain Finite Fields," J. Soc. Ind. Appl. Math., 1960.
- E. R. Berlekamp, Algebraic Coding Theory. New York: McGraw-Hill, 1968.
- J. L. Massey, "Shift-register synthesis and BCH decoding," IEEE Trans. Inf. Theory, 1969.
- M. Sudan, "Decoding of Reed-Solomon codes beyond the error-correction bound," J. Complexity, 1997.
- V. Guruswami and M. Sudan, "Improved decoding of Reed-Solomon and algebraic-geometry codes," IEEE Trans. Inf. Theory, 1999.
Frequently Asked Questions

Q1. What are the contributions in "An interpolation procedure for list decoding Reed–Solomon codes based on generalized key equations"?

The article provides a link between syndrome-based decoding approaches based on Key Equations and the interpolation-based list decoding algorithms of Guruswami and Sudan for Reed–Solomon codes. The authors reformulate the Guruswami–Sudan interpolation conditions (for a multiplicity higher than one) for Generalized Reed–Solomon codes into a set of univariate polynomial equations, which can partially be seen as Extended Key Equations. They adapt the Fundamental Iterative Algorithm of Feng and Tzeng to this special structure and achieve a significant reduction of the time complexity. As mentioned in Note 2, the set of equations can be further reduced, under the observation that the diagonal terms are constant, i.e., they do not depend on the received word.