
IEEE TRANSACTIONS ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, VOL. ASSP-32, NO. 6, DECEMBER 1984
Two-Dimensional Linear Prediction and Its Application to Adaptive Predictive Coding of Images
Abstract: This paper summarizes a study on two-dimensional linear prediction of images and its application to adaptive predictive coding of monochrome images. The study was focused on three major areas: two-dimensional linear prediction of images and its performance, implementation of an adaptive predictor and adaptive quantizer for use in image coding, and linear prediction and adaptive predictive coding of density (logarithm of intensity) images. Among the issues investigated are: autoregressive modeling of 2-D image sequences, estimation of the nonzero average bias of the image samples, stability of the inverse prediction error filter, and estimation of the parameters of a 2-D separable linear predictor. The implementation of the adaptive predictor is based on the results of linear predictive analysis. The adaptive quantization of the prediction error signal is done by using a flexible three-level quantizer for code words of fixed or variable length. The above ideas are further applied to density images for exploiting the multiplicative structure of images. The results of this research indicate that by using adaptive prediction and quantization, intensity and density coded images of high quality can be obtained at information rates as low as 0.7 bits/pixel.
I. INTRODUCTION

THE techniques of linear prediction have been applied with great success in many problems of speech processing [1]-[4]. Linear prediction is established as the predominant technique for extracting speech parameters and for speech coding at low bit rates [5]. This success in processing speech signals suggests that similar techniques might be useful in modeling and coding of 2-D image signals. Due to the extensive computation required for its implementation in two dimensions, only the simplest forms of linear prediction have received much attention in image coding [6], [7]. However, current reductions in cost and increases in speed of digital signal processing hardware suggest that it is no longer necessary to limit our attention to simple processing schemes for image modeling and coding. Thus, this paper consists of two parts. The first part is concerned with autoregressive modeling of 2-D image signals, and the use of two-dimensional linear predictive analysis for extracting the parameters of this model. The second part reports the performance of an adaptive predictive image coding scheme which uses adaptive two-dimensional linear prediction and an adaptive three-level quantizer to quantize the prediction error signal at low bit rates.
Manuscript received September 1, 1983. This work was supported by the Joint Services Electronics Program of the Department of Defense under Contract DAAG29-81-K-0024. The authors are with the School of Electrical Engineering, Georgia Institute of Technology, Atlanta, GA 30332.
II. 2-D LINEAR PREDICTION

A. Image Model
Various autoregressive image models have been examined by different researchers [8], aiming at different goals. Our objective is to introduce an autoregressive model which will account for the spatial variability of image sequences and for the fact that intensity image samples possess a nonzero average bias, since they always assume nonnegative values. Hence, let us consider the image model in Fig. 1(a), where x(m, n) denotes the 2-D sequence of intensity image samples and a_0 represents a locally constant bias coefficient added to the input of the feedback system. This feedback system, which accounts for the autoregressive nature of our model, is called the predictor, and its corresponding transfer function is

$$P(z_1, z_2) = \sum_{(k,l)\in\Pi} a(k, l)\, z_1^{-k} z_2^{-l} \qquad (1)$$

where a(k, l) is a 2-D prediction coefficient array and Π is a set of integer pairs to be specified later. Fig. 1(a) implies the following difference equation relating the output x(m, n) and input u(m, n):

$$x(m, n) = \sum_{(k,l)\in\Pi} a(k, l)\, x(m - k,\, n - l) + a_0 + u(m, n). \qquad (2)$$

The 2-D input sequence u(m, n) may be thought of as either a zero mean white noise field or as a 2-D unit impulse, depending upon whether we view the problem from a stochastic or from a deterministic point of view.

An equivalent image model could result if we think of the 2-D sequence x(m, n) as being the sum of a zero mean autoregressive sequence y(m, n) and a locally constant dc offset B. Then, as Fig. 1(b) implies,

$$x(m, n) = y(m, n) + B = \sum_{(k,l)\in\Pi} a(k, l)\, y(m - k,\, n - l) + B + u(m, n). \qquad (3)$$

Comparing the equivalent difference equations (2) and (3) we can find a relation between a_0 and B:

$$a_0 = B\Big[1 - \sum_{(k,l)\in\Pi} a(k, l)\Big] = B\,[1 - P(1, 1)]. \qquad (4)$$

rND
SIGNAL
PROCESSING,
VOL. ASSP-32,
NO.
6,
DECEMBER
1984
The bias coefficient a_0 can be thought of as a bias at the input, whereas B represents a bias at the output. In both cases, the inclusion of a bias parameter accounts for the fact that the intensity image samples x(m, n) are explicitly biased, since they are always nonnegative. The advantage of (2), containing a_0, is the linearity of the normal equations which are involved in the estimation of the parameters of the model. The difference equation (2) represents either a means for synthesizing the image signal x(m, n) if we know the model coefficients and the excitation u(m, n), or a means for extracting the model parameters if we have available the signal x(m, n) and make some assumptions about the input u(m, n).
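To make the synthesis interpretation concrete, the following sketch runs the difference equation (2) over a frame. It is an illustrative Python/NumPy rendering, not code from the paper; the function name and the dict-based mask representation are our own, and a causal mask is assumed so the recursion references only already-computed samples (taken as zero outside the frame).

```python
import numpy as np

def synthesize(a, a0, u):
    """Run the difference equation (2):
    x(m, n) = sum over (k, l) in Pi of a(k, l) x(m - k, n - l) + a0 + u(m, n).

    a  -- dict {(k, l): coefficient} over the causal mask Pi, (0, 0) excluded
    a0 -- locally constant bias coefficient
    u  -- 2-D excitation: zero-mean white noise field or a unit impulse
    """
    rows, cols = u.shape
    x = np.zeros((rows, cols))
    for m in range(rows):
        for n in range(cols):
            # Past outputs outside the frame are taken as zero.
            pred = sum(c * x[m - k, n - l] for (k, l), c in a.items()
                       if m - k >= 0 and n - l >= 0)
            x[m, n] = pred + a0 + u[m, n]
    return x
```

Driving the recursion with a unit impulse yields the model's impulse response; driving it with white noise yields a synthetic field with the model's spectrum.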
The set Π of integer pairs spanned by the indexes (k, l) of the prediction coefficient array a(k, l) is called the region of support of the predictor, or the prediction mask. This set determines the spatial causality of the model. Spatial causality is not inherent in image formation. However, it may be imposed by the scanning mechanism for a raster of image samples. Our ultimate objective is to use the optimal estimates of the model coefficients for resynthesizing the image signal x(m, n) at the decoder of the image coding scheme. Therefore, (2) must be recursively computable. This limits the possible prediction masks [9] only to causal, nonsymmetric half-plane masks.
Sacrificing some generality, we have limited our study to causal prediction masks which possess a Q × Q quarter-plane region of support; namely, masks where (k, l) range over all integer pairs in the set

$$\Pi = \{(k, l) : 0 \le k, l \le Q - 1 \text{ and } (k, l) \ne (0, 0)\}. \qquad (5)$$

The number of prediction coefficients included in a Q × Q quarter-plane prediction mask, also called the predictor order, is

$$P = Q^2 - 1. \qquad (6)$$
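As a small illustration (a helper of our own, not from the paper), the index set (5) and the order (6) can be generated and checked as follows:

```python
def quarter_plane_mask(Q):
    """Integer pairs (k, l) of the Q x Q quarter-plane mask Pi of (5)."""
    mask = [(k, l) for k in range(Q) for l in range(Q) if (k, l) != (0, 0)]
    assert len(mask) == Q * Q - 1  # predictor order P of (6)
    return mask
```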
B. 2-D Linear Prediction of Intensity Images

The parameters of the image model (a(k, l) and a_0) can be estimated by the method of linear predictive analysis, in which it is assumed that the model coefficients are those which minimize the mean-squared value of the 2-D prediction error sequence

$$e(m, n) = x(m, n) - \sum_{(k,l)\in\Pi} a(k, l)\, x(m - k,\, n - l) - a_0. \qquad (7)$$
The mean-squared prediction error residual is defined as

$$E = \sum_{m=L}^{U} \sum_{n=L}^{U} e^2(m, n) \qquad (8)$$

where the limits L, U will be specified later. The prediction error filter is a linear system with corresponding transfer function

$$A(z_1, z_2) = 1 - P(z_1, z_2) \qquad (9)$$

where P(z_1, z_2) is defined in (1).
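A direct rendering of (7) and (8) might look as follows. This is our own sketch, with x supplied as a callable so that the two conventions for samples outside the analysis frame (discussed below) can both be expressed.

```python
import numpy as np

def prediction_error(x, a, a0, L, U):
    """e(m, n) of (7) and residual E of (8) for L <= m, n <= U.

    x  -- callable (m, n) -> sample, encoding the out-of-frame convention
    a  -- dict {(k, l): a(k, l)} over the mask Pi
    a0 -- bias coefficient
    """
    size = U - L + 1
    e = np.zeros((size, size))
    for m in range(L, U + 1):
        for n in range(L, U + 1):
            pred = sum(c * x(m - k, n - l) for (k, l), c in a.items())
            e[m - L, n - L] = x(m, n) - pred - a0
    E = float(np.sum(e ** 2))  # eq. (8)
    return e, E
```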
The optimal model coefficients are those which minimize E, and consequently they satisfy the normal equations

$$\sum_{(k,l)\in\Pi} a(k, l)\, \Phi(k, l : i, j) + a_0\, S(i, j) = \Phi(0, 0 : i, j), \quad (i, j) \in \Pi \qquad (10a)$$

$$\sum_{(k,l)\in\Pi} a(k, l)\, S(k, l) + a_0\, N_s = S(0, 0) \qquad (10b)$$

where

$$\Phi(k, l : i, j) = \sum_{m=L}^{U} \sum_{n=L}^{U} x(m - k,\, n - l)\, x(m - i,\, n - j) \qquad (11a)$$

$$S(k, l) = \sum_{m=L}^{U} \sum_{n=L}^{U} x(m - k,\, n - l) \qquad (11b)$$

and N_s is the number of samples in the region of support of the 2-D sequence e(m, n). In (11), m and n range over a set of integers corresponding to a particular M × M region of the image, called the analysis frame. Over each analysis frame we assume that the model coefficients are fixed, and we compensate for the nonstationarity of the image by using small analysis frames and computing a different model for each frame. The minimum prediction error residual can be shown to be given by

$$E_{\min} = \Phi(0, 0 : 0, 0) - \sum_{(k,l)\in\Pi} a(k, l)\, \Phi(0, 0 : k, l) - a_0\, S(0, 0). \qquad (12)$$
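The correlation lags (11a) and sums (11b) can be tabulated directly. The sketch below is our own illustrative code, again with x as a callable so that the summation-range conventions described later plug in unchanged.

```python
def phi(x, k, l, i, j, L, U):
    """Correlation lag Phi(k, l : i, j) of (11a) over one analysis frame."""
    return sum(x(m - k, n - l) * x(m - i, n - j)
               for m in range(L, U + 1) for n in range(L, U + 1))

def s(x, k, l, L, U):
    """Sum S(k, l) of (11b) over the same range."""
    return sum(x(m - k, n - l)
               for m in range(L, U + 1) for n in range(L, U + 1))
```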
In order to adopt a matrix representation for (10), a one-dimensional indexing [9] of the prediction coefficient array a(k, l) is defined as follows:

$$I(k, l) = i = lQ + k.$$

The above indexing corresponds to a rowwise scanning of a(k, l), and it is not the only possible one. For (k, l) ∈ Π ∪ (0, 0) the index i = I(k, l) ranges over the integers 0 ≤ i ≤ P in a one-to-one relationship with the integer pair (k, l). Thus, we can recover the integer pair (k, l) = I^{-1}(i), and the four-index dependence of Φ(k, l : i, j) can be written as a two-index dependence by writing Φ(k, l : i, j) = Φ(q : r), where q = I(k, l) and r = I(i, j). Similarly, S(k, l) = S(q). At this point, (10) can be rewritten in matrix form as

$$C\bar{a} = \bar{c} \qquad (14)$$

where

$$C = \begin{bmatrix} \Phi(1:1) & \Phi(2:1) & \cdots & \Phi(P:1) & S(1) \\ \Phi(1:2) & \Phi(2:2) & \cdots & \Phi(P:2) & S(2) \\ \vdots & \vdots & & \vdots & \vdots \\ \Phi(1:P) & \Phi(2:P) & \cdots & \Phi(P:P) & S(P) \\ S(1) & S(2) & \cdots & S(P) & N_s \end{bmatrix}.$$
For reasons explained later, we denote by R the upper left P × P principal submatrix of C, which contains only correlation lags as entries, and

$$\bar{a} = [a(I^{-1}(1)), \ldots, a(I^{-1}(P)), a_0]^T = [a^T, a_0]^T \qquad (15)$$

$$\bar{c} = [\Phi(0:1), \ldots, \Phi(0:P), S(0)]^T = [r^T, S(0)]^T \qquad (16)$$

where [·]^T denotes the transpose of a vector. The (P × P) matrix R and the (P × 1) vectors a and r are called, respectively, the correlation matrix, the prediction coefficient vector, and the correlation vector.
Bias Estimation: There are several issues to be considered in the computation of the model coefficients. One concern is the way the bias B is estimated. The derivation of the normal equations in (10) assumed that the bias remains constant over each analysis frame. There are three ways to handle the bias (a sketch combining all three follows the next paragraph):
1) Estimate a_0 by solving the (P + 1) × (P + 1) system Cā = c̄, and then exploit the relation (4) between a_0 and B. This approach is henceforth denoted by TBLP (true bias linear prediction).
2) Estimate the local mean of the signal over the M × M frame as being an approximation to B, subtract it from x(m, n), and then use only (10a) with a_0 = 0, solving a (P × P) system. We denote this approach as LMLP (local mean linear prediction).
3) Do not subtract any estimate of the bias B and use only (10a) with a_0 = 0; i.e., solve only the (P × P) system Ra = r. This last approach is simply denoted LP (linear prediction).
Covariance Versus Autocorrelation: Another concern is the determination of the range of summation in (8) and (11). One approach is to limit the summation to the M × M analysis frame. This implies setting L = 0 and U = M - 1 in (8) and (11), and N_s = M² in (10b), and it results in bringing inside the analysis frame samples from the borders of the frame, to be supplied as needed in the computation of (11). We call this the "covariance method," as in one-dimensional linear predictive analysis [5]. Since Φ(k, l : i, j) = Φ(i, j : k, l) [see (11a)], the matrix C in (14) is symmetric. Moreover, C is almost always positive-definite, except for some degenerative cases where it is positive-semidefinite. Therefore, it can be inverted using Cholesky decomposition [5].
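Putting the pieces together, the covariance-method analysis with the three bias-handling options of the previous subsection might be sketched as follows. This is our own illustrative code, not the authors' implementation; in particular, the edge-replication used to supply border samples from outside the frame is one convention among several, and np.linalg.solve stands in for the Cholesky decomposition mentioned above.

```python
import numpy as np

def solve_lp(frame, mask, method="TBLP"):
    """Covariance-method normal equations (10a)/(10b) over one M x M frame.

    method -- "TBLP": solve the (P+1) x (P+1) system for a(k, l) and a0
              "LMLP": subtract the local mean first, then solve P x P with a0 = 0
              "LP"  : no bias handling; solve R a = r with a0 = 0
    """
    x = frame.astype(float)
    if method == "LMLP":
        x = x - x.mean()                   # local mean as the estimate of B
    M = x.shape[0]
    off = max(max(k, l) for k, l in mask)
    pad = np.pad(x, off, mode="edge")      # border samples: an assumed convention

    def shifted(k, l):                     # the field x(m - k, n - l), 0 <= m, n <= M-1
        return pad[off - k:off - k + M, off - l:off - l + M]

    def phi(q, r):                         # Phi(q : r), eq. (11a) with L = 0, U = M-1
        return float(np.sum(shifted(*q) * shifted(*r)))

    def s(q):                              # S(q), eq. (11b)
        return float(np.sum(shifted(*q)))

    P = len(mask)
    n_unk = P + 1 if method == "TBLP" else P
    A = np.zeros((n_unk, n_unk))
    b = np.zeros(n_unk)
    for i, qi in enumerate(mask):          # one row of (10a) per (i, j) in Pi
        for j, qj in enumerate(mask):
            A[i, j] = phi(qj, qi)
        b[i] = phi((0, 0), qi)
        if method == "TBLP":
            A[i, P] = s(qi)                # coefficient of a0
    if method == "TBLP":                   # row (10b)
        A[P, :P] = [s(q) for q in mask]
        A[P, P] = M * M                    # N_s = M^2 in the covariance method
        b[P] = s((0, 0))
    sol = np.linalg.solve(A, b)
    a = dict(zip(mask, sol[:P]))
    a0 = sol[P] if method == "TBLP" else 0.0
    return a, a0
```

For example, solve_lp(frame, quarter_plane_mask(2), "TBLP") returns the P = 3 coefficients and the bias a0 for one frame.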
Another approach is the so-called "autocorrelation method." In this case the signal x(m, n) is assumed to be zero everywhere outside the M × M frame, and the summation for the squared error E involves all the nonzero values of e(m, n); i.e., L = 0 and U = M + Q - 2 in (8), where Q × Q is the size of the prediction mask. Therefore, we must set

$$N_s = (M + Q - 1)^2 \qquad (13)$$

in (10b). In the autocorrelation case it can be proven easily that the correlation lags Φ(k, l : i, j) possess the shift-invariance property

$$\Phi(k, l : i, j) = R(k - i,\, l - j) \qquad (17)$$

where the sign of each index difference must be retained whenever (k - i)(l - j) < 0. Therefore, we can replace Φ(k, l : i, j) with R(k - i, l - j), where

$$R(k, l) \triangleq \sum_{m=\max(0,k)}^{\min(M-1,\,k+M-1)} \;\; \sum_{n=\max(0,l)}^{\min(M-1,\,l+M-1)} x(m, n)\, x(m - k,\, n - l). \qquad (18)$$

From (17), (18) it can be shown that R(k, l) = R(-k, -l). Also, in the autocorrelation method the sums S(k, l) defined in (11b) assume the same value for all different lags (k, l). Therefore, the P + 1 equations of the system (10) are decoupled into a system of P equations in P unknowns, plus an extra decoupled equation for a_0 in the TBLP case. More precisely, the equations in (10) now take the form

$$(R - \gamma\, \mathbf{1}\mathbf{1}^T)\, a = r - \gamma\, \mathbf{1} \qquad (19a)$$

$$a_0 = B\,[1 - \mathbf{1}^T a] \qquad (19b)$$

where R, a, and r are defined in (14), (15), and (16), respectively, 1 denotes the P × 1 vector of ones, and γ, B are known constants:

$$\gamma = B \cdot [2\, S(0, 0) - B\, N_s] \qquad (20a)$$

with

$$B = \begin{cases} S(0, 0)/N_s, & \text{for TBLP} \\ S(0, 0)/M^2, & \text{for LMLP} \\ 0, & \text{for LP.} \end{cases} \qquad (20b)$$

Thus, in the autocorrelation method the optimum bias B will always be a little smaller than the local mean, and the system of normal equations to be solved will always consist of P equations regardless of the way the bias is handled.

The matrix of the system of equations in (19a) is a P × P symmetric block Toeplitz matrix which is always positive-definite. There exist methods for the inversion of such matrices which are more efficient than the Cholesky decomposition [10]. The storage requirements for the autocorrelation method, both for the signal x(m, n) and for the entries of the correlation matrix, are fewer than in the covariance method.
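For the autocorrelation method, the lags of (18) follow directly from the zero-outside-the-frame convention; a minimal sketch (our own code, not from the paper):

```python
def autocorr_lag(x, k, l):
    """R(k, l) of (18): sum of x(m, n) x(m - k, n - l), with the M x M array x
    taken as zero outside the frame. Satisfies R(k, l) == R(-k, -l), while
    mixed-sign lags genuinely differ, as noted for (17)."""
    M, N = x.shape
    total = 0.0
    for m in range(M):
        for n in range(N):
            mm, nn = m - k, n - l
            if 0 <= mm < M and 0 <= nn < N:
                total += float(x[m, n]) * float(x[mm, nn])
    return total
```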

In addition, by comparing (11a) and (18) we can infer that the covariance method requires a greater number of multiplications and additions for the computation of each correlation lag. In the one-dimensional case, if the autocorrelation method is used, the stability of the inverse prediction error filter is guaranteed, but with the covariance method it is not. In the two-dimensional case neither approach can guarantee stability, as shown in [11].
Prediction Error: To examine the performance of two-dimensional linear prediction, we computed the total normalized prediction error by partitioning the image into an integer number of nonoverlapping frames, summing the mean-squared prediction error (8) from each individual frame, and dividing the total error by the energy of the image signal x(m, n) over the whole image. All the following results refer to the "Girl" image of Fig. 2(a), but similar results were obtained for other images as well.
Table I shows a comparison among the different approaches for handling the bias using the covariance method and various predictor orders and frame sizes. In the column headings of Table I, the first digit P refers to the predictor order and M refers to the size of the M × M analysis frame. Although the results give a slight superiority to the TBLP method, all three methods yield comparable prediction errors. This is not unexpected since the available test images were oversampled, and thus there was enough correlation between samples for all of the methods to work satisfactorily. However, as shown later, we found a significant difference in the stability of the resulting models.
A prediction error image is shown in Fig. 2(b), where the prediction error samples (which are both positive and negative) were mapped linearly onto the range 1-256 (8 bits/pixel) so that the prediction error sequence e(m, n) could be displayed as an image. It is obvious that linear prediction removes much of the redundant information from the image, leaving only information about the edges.
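One way to realize the described display mapping (a sketch of our own; the paper gives no formula for it):

```python
import numpy as np

def error_to_display(e):
    """Map signed prediction errors linearly onto the range 1..256 for display."""
    lo, hi = float(e.min()), float(e.max())
    span = hi - lo if hi > lo else 1.0  # guard the constant-error case
    return np.rint(1 + (e - lo) * 255.0 / span)
```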
Perspective plots of the magnitude of the 2-D Fourier transform of the original image and its prediction error are illustrated in Fig. 3(a) and (b). We see that in the frequency domain the linear prediction flattens the original low-pass image spectrum.
Figs. 4 and 5 provide us with an informative comparison between the performance of the covariance and autocorrelation method. In both figures the ordinate gives the mean-squared prediction error (per frame, using the TBLP method) normalized by the energy of the image signal x(m, n) over each analysis frame, and averaged over 64 frames uniformly distributed over the whole image of Fig. 2(a). In Fig. 4 the variation of the error versus the size M of the M × M frame is illustrated. The covariance method is shown to give a consistently smaller prediction error and to be almost insensitive to frame size variations for a fixed predictor order P = 3. The error resulting from the autocorrelation method is reduced significantly by increasing the frame size, which implies that for large frames both methods yield identical results. Fig. 5 shows the variation of the error versus the size Q of the Q × Q prediction mask for a fixed frame size M = 32. Again the covariance method performs better than the autocorrelation method.
The size of the prediction mask has little effect upon the performance of either method for this image. Prediction masks bigger than 3 × 3 or 4 × 4 coefficients do not significantly improve the prediction error.

Fig. 2. (a) Original 8 bit/pixel image (256 × 256 pixels). (b) Prediction error image (P = 8, M = 32).

TABLE I
TOTAL NORMALIZED PREDICTION ERROR (PERCENT) FOR LINEAR PREDICTION OF INTENSITY IMAGES

Method   (P, M) = (3, 32)   (3, 16)   (8, 32)
TBLP              0.707     0.643     0.622
LMLP              0.710     0.655     0.624
LP                0.715     0.669     0.633
Fig. 3. Perspective plots of the magnitude of the 2-D Fourier transform (a) of the original image, and (b) of the prediction error signal (P = 8, M = 32) (the prediction error is magnified three times relative to the original image).

Fig. 4. Variation of prediction error versus frame size for intensity images (P = 3).

C. Predictor Stability

Another very important consideration concerning the all-pole image model of Fig. 1 is its stability, because the model is a fundamental component of both the coder and the decoder in an adaptive predictive coding system [1], [2], and instabilities could lead to large errors upon reconstructing the image signal.
The system about whose stability we must be concerned is the inverse prediction error filter. Its transfer function is 1/A(z_1, z_2), where A(z_1, z_2) is given by (9). The impulse response of A(z_1, z_2) has support only on the first quadrant, because we used a quarter-plane prediction mask. Therefore, the inverse prediction error filter is recursively computable, and conditions for its stability can be found in Huang's theorem [12], from which we can derive the following necessary condition.

Fig. 5. Variation of prediction error versus size of prediction mask for intensity images (M = 32).

Theorem: A necessary condition for the stability of the first-quadrant recursive filter 1/A(z_1, z_2) is

$$A(1, 1) = 1 - \sum_{(k,l)\in\Pi} a(k, l) > 0. \qquad (21)$$
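The necessary condition (21) reduces to a one-line check on the estimated coefficients (an illustrative helper of our own; failing it proves instability, while passing it does not guarantee stability):

```python
def satisfies_necessary_condition(a):
    """Necessary stability condition (21): A(1, 1) = 1 - sum of a(k, l) > 0."""
    return 1.0 - sum(a.values()) > 0.0
```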
The proof for the above condition is given in the Appendix. Let us now recall that P(1, 1) = 1 - A(1, 1). If P(1, 1) > 1, we conclude from (21) that the model is necessarily unstable. If P(1, 1) < 1, then the predictor might be stable, since its coefficients are away from the point of marginal instability: P(1, 1) = 1. Also recall from (4) that a_0 = B[1 - P(1, 1)]. Thus, the bias interacts with the stability of the model. For positive image signals, the bias B must be a positive number. Thus, comparing (4) and (9) with (21), we can say that if a_0 < 0 then the predictor is unstable. If a_0 > 0, the predictor might be stable.
When we arbitrarily require a_0 = 0 in the (LP) method, by not estimating any bias, we force P(1, 1) = 1 whenever B is nonzero, and thus force the model always to be marginally unstable. This is consistent with the fact that when we add a constant (a bias) to the impulse response of an all-pole autoregressive model, the resulting biased sequence has a rational z-transform whose prediction coefficients of the denominator polynomial sum up exactly to one. This is because the added constant has a z-transform with a pole on the unit surface.
The occurrence of an unstable model, to which Table II refers, is judged only by the criterion a_0 ≤ 0. However, for the (LP) method, the few times when the sum was less than 1 could be attributed to roundoff errors, because it has been noticed experimentally that the (LP) method almost always results in coefficients whose sum, P(1, 1), is very close to unity. This last observation indicates that there is indeed a bias inherent in the image data.
D. 2-D Linear Prediction of Density Images

Linear prediction of a signal can be viewed as a linear operator acting upon the signal. Since linear operators obey the principle of additive superposition, linear prediction is especially well suited to the analysis of signals which possess additive structure. Therefore, if linear prediction is to be applied to images, the question arises: can images be modeled properly by a linear system?
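Since a density image is the logarithm of the intensity image, multiplicative structure (such as illumination multiplying reflectance) becomes additive in the density domain, which is where this part of the study applies linear prediction. A minimal conversion sketch (our own code; the offset eps guarding log(0) is an assumption, not from the paper):

```python
import numpy as np

def to_density(intensity, eps=1.0):
    """d(m, n) = log(x(m, n)): multiplicative image structure becomes additive."""
    return np.log(intensity.astype(float) + eps)

def to_intensity(density, eps=1.0):
    """Inverse mapping, used after decoding in the density domain."""
    return np.exp(density) - eps
```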

References

J. Makhoul, "Linear prediction: A tutorial review," Proc. IEEE, vol. 63, pp. 561-580, Apr. 1975.

L. R. Rabiner and R. W. Schafer, Digital Processing of Speech Signals. Englewood Cliffs, NJ: Prentice-Hall, 1978.

B. S. Atal and S. L. Hanauer, "Speech analysis and synthesis by linear prediction of the speech wave," J. Acoust. Soc. Amer., vol. 50, pp. 637-655, 1971.

A. K. Jain, "Image data compression: A review," Proc. IEEE, vol. 69, pp. 349-389, Mar. 1981.

A. N. Netravali and J. O. Limb, "Picture coding: A review," Proc. IEEE, vol. 68, pp. 366-406, Mar. 1980.