
IEEE TRANSACTIONS ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, VOL. ASSP-29, NO. 6, DECEMBER 1981
Cubic Convolution Interpolation for Digital Image Processing

ROBERT G. KEYS
Abstract-Cubic convolution interpolation is a new technique for resampling discrete data. It has a number of desirable features which make it useful for image processing. The technique can be performed efficiently on a digital computer. The cubic convolution interpolation function converges uniformly to the function being interpolated as the sampling increment approaches zero. With the appropriate boundary conditions and constraints on the interpolation kernel, it can be shown that the order of accuracy of the cubic convolution method is between that of linear interpolation and that of cubic splines. A one-dimensional interpolation function is derived in this paper. A separable extension of this algorithm to two dimensions is applied to image data.
INTRODUCTION

Interpolation is the process of estimating the intermediate values of a continuous event from discrete samples. Interpolation is used extensively in digital image processing to magnify or reduce images and to correct spatial distortions. Because of the amount of data associated with digital images, an efficient interpolation algorithm is essential. Cubic convolution interpolation was developed in response to this requirement.

The algorithm discussed in this paper is a modified version of the cubic convolution algorithm developed by Rifman [1] and Bernstein [2]. The objective of this paper is to derive the modified cubic convolution algorithm and to compare it with other interpolation methods.

Two conditions apply throughout this paper. First, the analysis pertains exclusively to the one-dimensional problem; two-dimensional interpolation is easily accomplished by performing one-dimensional interpolation in each dimension. Second, the data samples are assumed to be equally spaced. (In the two-dimensional case, the horizontal and vertical sampling increments do not have to be the same.) With these conditions in mind, the first topic to consider is the derivation of the cubic convolution algorithm.
BASIC CONCEPTS CONCERNING THE CUBIC CONVOLUTION ALGORITHM
An interpolation function is a special type of approximating function. A fundamental property of interpolation functions is that they must coincide with the sampled data at the interpolation nodes, or sample points. In other words, if f is a sampled function, and if g is the corresponding interpolation function, then g(x_k) = f(x_k) whenever x_k is an interpolation node.
Manuscript received July 29, 1980; revised January 5, 1981 and April 30, 1981.
The author is with the Exploration and Production Research Laboratory, Cities Service Oil Company, Tulsa, OK 74102.
For equally spaced data, many interpolation functions can be written in the form

    g(x) = Σ_k c_k u((x - x_k)/h).                                   (1)

Among the interpolation functions that can be characterized in this manner are cubic splines and linear interpolation functions. (See Hou and Andrews [3].)

In (1), and for the remainder of this paper, h represents the sampling increment, the x_k's are the interpolation nodes, u is the interpolation kernel, and g is the interpolation function. The c_k's are parameters which depend upon the sampled data. They are selected so that the interpolation condition, g(x_k) = f(x_k) for each x_k, is satisfied.
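As a concrete illustration of the form in (1), the following minimal Python sketch (not from the paper; the function and argument names are illustrative) evaluates an interpolation function of this type for any finite-support kernel u:

    def interpolate(x, nodes, c, kernel, h):
        """Evaluate g(x) = sum_k c_k * u((x - x_k)/h) for a finite-support kernel u.

        nodes  : equally spaced interpolation nodes x_k
        c      : coefficients c_k (for cubic convolution these are just the samples f(x_k))
        kernel : the interpolation kernel u, zero outside a small interval around 0
        h      : sampling increment
        """
        # For a kernel supported on (-2, 2), at most four terms of this sum are nonzero.
        return sum(ck * kernel((x - xk) / h) for xk, ck in zip(nodes, c))

The interpolation condition g(x_k) = f(x_k) then becomes a constraint that couples the kernel and the coefficients, which motivates the conditions derived below.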
The interpolation kernel in (1) converts discrete data into continuous functions by an operation similar to convolution. Interpolation kernels have a significant impact on the numerical behavior of interpolation functions. Because of their influence on accuracy and efficiency, interpolation kernels can be effectively used to create new interpolation algorithms. The cubic convolution algorithm is derived from a set of conditions imposed on the interpolation kernel which are designed to maximize accuracy for a given level of computational effort.
THE CUBIC CONVOLUTION INTERPOLATION KERNEL

The cubic convolution interpolation kernel is composed of piecewise cubic polynomials defined on the subintervals (-2, -1), (-1, 0), (0, 1), and (1, 2). Outside the interval (-2, 2), the interpolation kernel is zero. As a consequence of this condition, the number of data samples used to evaluate the interpolation function in (1) is reduced to four.

The interpolation kernel must be symmetric. Coupled with the previous condition, this means that u must have the form

    u(s) = A1|s|^3 + B1|s|^2 + C1|s| + D1,    0 < |s| < 1
           A2|s|^3 + B2|s|^2 + C2|s| + D2,    1 < |s| < 2
           0,                                 2 < |s|.               (2)
The interpolation kernel must assume the values u(0) = 1 and u(n) = 0 when n is any nonzero integer. This condition has an important computational significance. Since h is the sampling increment, the difference between the interpolation nodes x_j and x_k is (j - k)h. Now if x_j is substituted for x in (1), then (1) becomes

    g(x_j) = Σ_k c_k u(j - k).                                       (3)

Because u(j - k) is zero unless j = k, the right-hand side of (3)
reduces to c_j. The interpolation condition requires that g(x_j) = f(x_j). Therefore, c_j = f(x_j). In other words, the c_k's in (1) are simply replaced by the sampled data. This is a substantial computational improvement over interpolation schemes such as the method of cubic splines. The spline interpolation kernel is not zero for nonzero integers. As a result, the c_k's must be determined by solving a matrix problem.
In addition to being 0 or 1 at the interpolation nodes, the interpolation kernel must be continuous and have a continuous first derivative. From these latter conditions, a set of equations can be derived for the coefficients in (2). The conditions u(0) = 1 and u(1) = u(2) = 0 provide four equations for these coefficients:

    1 = u(0)  = D1
    0 = u(1-) = A1 + B1 + C1 + D1
    0 = u(1+) = A2 + B2 + C2 + D2
    0 = u(2-) = 8A2 + 4B2 + 2C2 + D2.

Three more equations are obtained from the fact that u' is continuous at the nodes 0, 1, and 2:

    -C1             = u'(0-) = u'(0+) = C1
    3A1 + 2B1 + C1  = u'(1-) = u'(1+) = 3A2 + 2B2 + C2
    12A2 + 4B2 + C2 = u'(2-) = u'(2+) = 0.
In all, the constraints imposed on u result in seven equations. But since there are eight unknown coefficients, one more condition is needed to obtain a unique solution. Rifman [1] and Bernstein [2] use the constraint that A2 = -1. In this presentation, however, A2 will be selected so that the interpolation function g approximates the original function f to as high a degree as possible. In particular, assume that f has several orders of continuous derivatives so that Taylor's theorem, from calculus, applies. The idea will be to choose A2 so that the cubic convolution interpolation function and the Taylor series expansion for f agree for as many terms as possible.

To accomplish this, let A2 = a. The remaining seven coefficients can be determined, in terms of a, from the previous seven equations. The solution for the interpolation kernel, in terms of a, is

    u(s) = (a + 2)|s|^3 - (a + 3)|s|^2 + 1,      0 < |s| < 1
           a|s|^3 - 5a|s|^2 + 8a|s| - 4a,        1 < |s| < 2
           0,                                    2 < |s|.            (4)
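A direct Python transcription of (4) (a sketch, not from the paper; the function name is illustrative) makes the role of the free parameter a explicit:

    def cubic_conv_kernel(s, a):
        """Piecewise cubic interpolation kernel of equation (4) with free parameter a = A2."""
        s = abs(s)
        if s < 1.0:
            return (a + 2.0)*s**3 - (a + 3.0)*s**2 + 1.0
        if s < 2.0:
            return a*s**3 - 5.0*a*s**2 + 8.0*a*s - 4.0*a
        return 0.0

For any a, this kernel satisfies u(0) = 1 and u(1) = u(2) = 0, so the c_k's in (1) remain the data samples themselves; the value of a only affects accuracy, as the derivation below shows.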
Now suppose that x is any point at which the sampled data is to be interpolated. Then x must be between two consecutive interpolation nodes which can be denoted by x_j and x_{j+1}. Let s = (x - x_j)/h. Since (x - x_k)/h = (x - x_j + x_j - x_k)/h = s + j - k, (1) can be written as

    g(x) = Σ_k c_k u(s + j - k).                                     (5)
Furthermore, since u is zero except in the interval (-2, 2), and since 0 < s < 1, (5) reduces to

    g(x) = c_{j-1} u(s + 1) + c_j u(s) + c_{j+1} u(s - 1) + c_{j+2} u(s - 2).    (6)
From (4), it follows that

    u(s + 1) = a(s + 1)^3 - 5a(s + 1)^2 + 8a(s + 1) - 4a = as^3 - 2as^2 + as
    u(s)     = (a + 2)s^3 - (a + 3)s^2 + 1
    u(s - 1) = -(a + 2)(s - 1)^3 - (a + 3)(s - 1)^2 + 1 = -(a + 2)s^3 + (2a + 3)s^2 - as
    u(s - 2) = -a(s - 2)^3 - 5a(s - 2)^2 - 8a(s - 2) - 4a = -as^3 + as^2.
By substituting the above relationships into (6) and collecting powers of s, the cubic convolution resampling function becomes

    g(x) = -[a(c_{j+2} - c_{j-1}) + (a + 2)(c_{j+1} - c_j)] s^3
           + [2a(c_{j+1} - c_{j-1}) + 3(c_{j+1} - c_j) + a(c_{j+2} - c_j)] s^2
           - a(c_{j+1} - c_{j-1}) s + c_j.                           (7)
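As a quick numerical sanity check (not from the paper; the sample values are arbitrary), the next snippet evaluates (6) directly with the kernel of (4) and compares the result with the collected form (7):

    def u(s, a):
        """Kernel of equation (4)."""
        s = abs(s)
        if s < 1:
            return (a + 2)*s**3 - (a + 3)*s**2 + 1
        if s < 2:
            return a*s**3 - 5*a*s**2 + 8*a*s - 4*a
        return 0.0

    a, s = -0.7, 0.3                        # arbitrary kernel parameter and fractional position
    c = {-1: 1.7, 0: 2.2, 1: 1.1, 2: 0.4}   # arbitrary samples c_{j-1}, ..., c_{j+2} (with j = 0)

    # Equation (6): kernel-weighted sum of the four surrounding samples.
    g6 = sum(c[m]*u(s - m, a) for m in (-1, 0, 1, 2))

    # Equation (7): the same quantity after collecting powers of s.
    g7 = (-(a*(c[2] - c[-1]) + (a + 2)*(c[1] - c[0]))*s**3
          + (2*a*(c[1] - c[-1]) + 3*(c[1] - c[0]) + a*(c[2] - c[0]))*s**2
          - a*(c[1] - c[-1])*s + c[0])

    print(abs(g6 - g7))   # differs only by rounding error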
If f has at least three continuous derivatives in the interval [x_j, x_{j+1}], then according to Taylor's theorem

    c_{j+1} = f(x_{j+1}) = f(x_j) + f'(x_j)h + f''(x_j)h^2/2 + O(h^3)    (8)

where h = x_{j+1} - x_j. O(h^3) represents the terms of order h^3; that is, terms which go to zero at a rate proportional to h^3. Similarly,

    c_{j+2} = f(x_j) + 2hf'(x_j) + 2h^2 f''(x_j) + O(h^3)                (9)

    c_{j-1} = f(x_j) - hf'(x_j) + h^2 f''(x_j)/2 + O(h^3).               (10)
When (8)-(10) are substituted into (7), the following equation for the cubic convolution interpolation function is obtained.

    g(x) = -(2a + 1)[2hf'(x_j) + h^2 f''(x_j)] s^3
           + [(6a + 3)hf'(x_j) + (4a + 3)h^2 f''(x_j)/2] s^2
           - 2ahf'(x_j) s + f(x_j) + O(h^3).                         (11)
Since sh = x - x_j, the Taylor series expansion for f(x) about x_j is

    f(x) = f(x_j) + shf'(x_j) + s^2 h^2 f''(x_j)/2 + O(h^3).         (12)
Subtracting (11) from (12),

    f(x) - g(x) = (2a + 1)[2hf'(x_j) + h^2 f''(x_j)] s^3
                  - (2a + 1)[3hf'(x_j) + h^2 f''(x_j)] s^2
                  + (2a + 1) shf'(x_j) + O(h^3).                     (13)
If the interpolation function g(x) is to agree with the first three terms of the Taylor series expansion for f, then the parameter a must be equal to -1/2. This choice provides the final condition for the interpolation kernel: A2 = -1/2. When A2 = a = -1/2, then

    f(x) - g(x) = O(h^3).                                            (14)
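The cancellation that forces a = -1/2 can be checked symbolically. The SymPy sketch below (not from the paper; symbol names are illustrative) builds g(x) from (7) using the Taylor expansions (8)-(10), subtracts it from (12), and shows that the leftover terms all carry the factor (2a + 1), so they vanish exactly when a = -1/2:

    import sympy as sp

    a, s, h, f0, f1, f2 = sp.symbols('a s h f0 f1 f2')  # f0 = f(x_j), f1 = f'(x_j), f2 = f''(x_j)

    # Taylor expansions (8)-(10), with the O(h^3) terms dropped.
    c_jm1 = f0 - h*f1 + h**2*f2/2
    c_j   = f0
    c_jp1 = f0 + h*f1 + h**2*f2/2
    c_jp2 = f0 + 2*h*f1 + 2*h**2*f2

    # Equation (7): the interpolation function collected in powers of s.
    g = (-(a*(c_jp2 - c_jm1) + (a + 2)*(c_jp1 - c_j))*s**3
         + (2*a*(c_jp1 - c_jm1) + 3*(c_jp1 - c_j) + a*(c_jp2 - c_j))*s**2
         - a*(c_jp1 - c_jm1)*s + c_j)

    # Equation (12): Taylor expansion of f(x) about x_j, again without the O(h^3) terms.
    f_x = f0 + s*h*f1 + s**2*h**2*f2/2

    diff = sp.expand(f_x - g)
    print(sp.factor(diff))                                  # the factor (2*a + 1) appears explicitly
    print(sp.expand(diff.subs(a, sp.Rational(-1, 2))))      # prints 0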

Equation (14) implies that the interpolation error goes to zero uniformly at a rate proportional to h^3, the cube of the sampling increment. In other words, g is a third-order approximation for f. The constraint A2 = -1/2 is the only choice for A2 that will achieve third-order precision; any other condition will result in at most a first-order approximation.

Using the final condition that A2 = -1/2, the cubic convolution interpolation kernel is

    u(s) = (3/2)|s|^3 - (5/2)|s|^2 + 1,              0 < |s| < 1
           -(1/2)|s|^3 + (5/2)|s|^2 - 4|s| + 2,      1 < |s| < 2
           0,                                        2 < |s|.        (15)
BOUNDARY CONDITIONS

In the initial discussion, f, the function being sampled, was defined for all real numbers. In practice, however, f can only be observed on a finite interval. Because the domain of f is restricted to a finite interval, say [a, b], boundary conditions are necessary.

First of all, the sample points x_k must be defined to correspond to the new interval of observation, [a, b]. Let x_k = x_{k-1} + h for k = 1, 2, 3, ..., N, where x_0 = a, x_N = b, and h = (b - a)/N for some large integer N. (The integer N may be chosen from the Nyquist criterion.) The results of the previous section are valid for any set of uniformly spaced sample points and thus are not affected by this new definition for the x_k's.
On the interval [a, b], the cubic convolution interpolation function can be written as

    g(x) = Σ_{k=-1}^{N+1} c_k u((x - x_k)/h),                        (16)

since to determine g for all x in the interval [a, b], the values of c_k for k = -1, 0, 1, ..., N + 1 are needed. For k = 0, 1, 2, ..., N, c_k = f(x_k). For k = -1 and for k = N + 1, however, the value of f is unknown, since x_{-1} and x_{N+1} fall outside the interval of observation. The values assigned to c_{-1} and c_{N+1} are boundary conditions.
Boundary conditions must be chosen so that g(x) is an O(h^3) approximation to f(x) for all x contained in the interval [a, b].
To find an appropriate condition for the left-hand boundary, suppose that x is a point in the subinterval [x_0, x_1]. For this value of x, the interpolation function reduces to

    g(x) = c_{-1} u(s + 1) + c_0 u(s) + c_1 u(s - 1) + c_2 u(s - 2)          (17)

where s = (x - x_0)/h. By substituting the equations in (15) for u and collecting powers of s, the interpolation function reduces to

    g(x) = s^3 [c_2 - 3c_1 + 3c_0 - c_{-1}]/2 - s^2 [c_2 - 4c_1 + 5c_0 - 2c_{-1}]/2
           + s [c_1 - c_{-1}]/2 + c_0.                                       (18)
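This substitution can be verified symbolically (a sketch, not from the paper) by inserting the appropriate pieces of the kernel (15) into (17) and expanding in powers of s with SymPy:

    import sympy as sp

    s, cm1, c0, c1, c2 = sp.symbols('s c_m1 c_0 c_1 c_2')

    # Kernel (15), written for a nonnegative argument t = |s|.
    near = lambda t: sp.Rational(3, 2)*t**3 - sp.Rational(5, 2)*t**2 + 1          # 0 <= t <= 1
    far  = lambda t: -sp.Rational(1, 2)*t**3 + sp.Rational(5, 2)*t**2 - 4*t + 2   # 1 <= t <= 2

    # For 0 < s < 1: |s + 1| = s + 1, |s| = s, |s - 1| = 1 - s, |s - 2| = 2 - s.
    g = cm1*far(s + 1) + c0*near(s) + c1*near(1 - s) + c2*far(2 - s)
    print(sp.collect(sp.expand(g), s))
    # The s**3 coefficient is (c_2 - 3*c_1 + 3*c_0 - c_m1)/2, the s**2 coefficient is
    # -(c_2 - 4*c_1 + 5*c_0 - 2*c_m1)/2, and so on, exactly as in (18).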
If g is an O(h^3) approximation for f, then the s^3-term must be zero. This means that c_{-1} should be chosen so that c_{-1} = c_2 - 3c_1 + 3c_0, or

    c_{-1} = 3f(x_0) - 3f(x_1) + f(x_2).                             (19)
After substituting (19) into (18), the interpolation function becomes

    g(x) = s^2 [f(x_2) - 2f(x_1) + f(x_0)]/2 + s [-f(x_2) + 4f(x_1) - 3f(x_0)]/2 + f(x_0).    (20)
All that remains is to show that (20) is a third-order approximation for f(x). First expand f(x_2) and f(x_1) in a Taylor series about x_0:

    f(x_2) = f(x_0) + 2f'(x_0)h + 2f''(x_0)h^2 + O(h^3)              (21)

    f(x_1) = f(x_0) + f'(x_0)h + f''(x_0)h^2/2 + O(h^3).             (22)

By replacing f(x_2) and f(x_1) in (20) with (21) and (22), the following result is obtained.

    g(x) = f(x_0) + f'(x_0)sh + f''(x_0)s^2 h^2/2 + O(h^3).          (23)
Since sh = x - x_0, the Taylor series expansion for f(x) about x_0 is

    f(x) = f(x_0) + f'(x_0)sh + f''(x_0)s^2 h^2/2 + O(h^3).          (24)

Subtracting (23) from (24),

    f(x) - g(x) = O(h^3).

Thus, the boundary condition specified by (19) results in a third-order approximation for f(x) when x_0 ≤ x ≤ x_1.
A similar analysis can be used to obtain c_{N+1}. If x is in the interval [x_{N-1}, x_N], the boundary condition

    c_{N+1} = 3f(x_N) - 3f(x_{N-1}) + f(x_{N-2})                     (25)

will provide a third-order approximation for f.
Using the interpolation kernel defined by (15) and the boundary conditions (19) and (25), a complete description of the cubic convolution interpolation function can now be given.
When x_k ≤ x ≤ x_{k+1}, the cubic convolution interpolation function is

    g(x) = c_{k-1} (-s^3 + 2s^2 - s)/2 + c_k (3s^3 - 5s^2 + 2)/2
           + c_{k+1} (-3s^3 + 4s^2 + s)/2 + c_{k+2} (s^3 - s^2)/2

where s = (x - x_k)/h and c_k = f(x_k) for k = 0, 1, 2, ..., N; c_{-1} = 3f(x_0) - 3f(x_1) + f(x_2); and c_{N+1} = 3f(x_N) - 3f(x_{N-1}) + f(x_{N-2}).
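The complete one-dimensional procedure can be collected into a short routine. The sketch below is not from the paper (the function name and argument conventions are illustrative); it resamples equally spaced data with the kernel weights of the preceding equation together with the boundary conditions (19) and (25):

    import numpy as np

    def cubic_conv_interp(samples, x, x0, h):
        """Cubic convolution interpolation of equally spaced samples.

        samples : f(x_0), ..., f(x_N) taken on the nodes x_k = x0 + k*h
        x       : points inside [x_0, x_N] at which to evaluate the interpolation function
        """
        f = np.asarray(samples, dtype=float)
        N = len(f) - 1
        # Boundary coefficients (19) and (25); interior coefficients are the samples themselves.
        c = np.empty(N + 3)
        c[1:N + 2] = f
        c[0] = 3*f[0] - 3*f[1] + f[2]                 # c_{-1}
        c[N + 2] = 3*f[N] - 3*f[N - 1] + f[N - 2]     # c_{N+1}

        x = np.asarray(x, dtype=float)
        k = np.clip(np.floor((x - x0) / h).astype(int), 0, N - 1)   # index of the left node x_k
        s = (x - (x0 + k*h)) / h                                    # fractional position, 0 <= s <= 1

        # Kernel weights for the four surrounding nodes (a = -1/2).
        w_m1 = (-s**3 + 2*s**2 - s) / 2
        w_0  = (3*s**3 - 5*s**2 + 2) / 2
        w_p1 = (-3*s**3 + 4*s**2 + s) / 2
        w_p2 = (s**3 - s**2) / 2

        # The c array is shifted by one relative to the node index, so node k corresponds to c[k + 1].
        return w_m1*c[k] + w_0*c[k + 1] + w_p1*c[k + 2] + w_p2*c[k + 3]

Applying this routine along each coordinate in turn gives the separable two-dimensional form used later in the paper for image magnification.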
One of the basic assumptions used to derive the cubic convolution algorithm was that the sampled function possessed a continuous third derivative. This assumption is not unreasonable for many practical problems. For example, the sampled function is often assumed to be band limited. Since band-limited functions are infinitely differentiable, they easily meet the requirements for cubic convolution interpolation.

Although a continuous third derivative is required for the sampled function, no such restriction is imposed on the interpolation function. In general, the cubic convolution interpolation function will not have a continuous second derivative. Nevertheless, if the sampled function has a continuous third derivative, then from the results of the last two sections, the
interpolation function is a third-order approximation. This
fact is not inconsistent with other interpolation methods. For
example, linear interpolation gives a second-order approxima-
tion, provided the sampled function has a continuous second
derivative. However, the linear interpolation function is not
everywhere continuously differentiable. Further comparisons
with other interpolation methods are the topic of the next
section.
COMPARISON WITH OTHER METHODS
There are several important considerations in the analysis of
an interpolation method. Of major importance is the accuracy
of the technique: the exactness with which the interpolation
function reconstructs the sampled function. Additionally,
some interesting effects can be predicted from the spectral
characteristics of the interpolation kernel. In this section, the
accuracy, the spectral properties, and the efficiency of the
cubic convolution interpolation algorithm will be compared
with other methods.
Some indication of the accuracy of the method is given by the type of function which can be exactly reconstructed. The cubic convolution interpolation function exactly reconstructs any second-degree polynomial. This is because the third derivative of any second-degree polynomial is zero and, thus, the approximation error is zero. In contrast, linear interpolation will reproduce at most a first-degree polynomial. A scheme referred to by Rifman [1] and Bernstein [2] as the "nearest-neighbor" algorithm uses the nearest sample as the interpolated value. The nearest-neighbor algorithm is exact only when the sampled function is a constant. By using cubic convolution instead of linear interpolation or nearest-neighbor resampling, the degree of complexity of functions which can be exactly reconstructed is increased.
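The exact reproduction of second-degree polynomials can be confirmed with a few lines of Python (a sketch, not from the paper; the sample quadratic is arbitrary):

    # Sample a quadratic f(x) = 2x^2 - 3x + 1 on equally spaced nodes with h = 0.25.
    h, x0 = 0.25, 0.0
    f = lambda x: 2*x**2 - 3*x + 1
    nodes = [x0 + k*h for k in range(8)]
    c = [f(xk) for xk in nodes]

    # Interpolate between nodes 3 and 4 using the interior cubic convolution weights (a = -1/2).
    k, s = 3, 0.4
    g = (c[k-1]*(-s**3 + 2*s**2 - s)/2 + c[k]*(3*s**3 - 5*s**2 + 2)/2
         + c[k+1]*(-3*s**3 + 4*s**2 + s)/2 + c[k+2]*(s**3 - s**2)/2)

    x = nodes[k] + s*h
    print(abs(g - f(x)))   # on the order of machine precision

The same experiment with a cubic or higher-degree polynomial leaves a nonzero residual, consistent with the O(h^3) error term involving the third derivative.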
The relative accuracy of different interpolation methods can be determined from their convergence rates. The convergence rate is a measure of how fast the approximation error goes to zero as the sampling increment decreases. In the derivation of the cubic convolution algorithm, it was found that the approximation error consists of terms proportional to h^3, where h is the sampling increment. In this case, the approximation error goes to zero at least as fast as h^3 goes to zero. Thus, the convergence rate for the cubic convolution interpolation function is O(h^3).

Linear interpolation functions have an O(h^2) convergence rate.
To see this, suppose that x is a point between the pair of interpolation nodes x_j and x_{j+1}. Let s = (x - x_j)/h. From Taylor's theorem, if f has a continuous second derivative in the interval [x_j, x_{j+1}], then

    f(x_j) = f(x) - f'(x) sh + O(h^2)                                (26)
where O(h^2) is the remainder term. Since x_{j+1} - x = (1 - s)h, it also follows (from Taylor's theorem) that

    f(x_{j+1}) = f(x) + f'(x)(1 - s)h + O(h^2).                      (27)
Now if (26) is multiplied by 1 - s and if (27) is multiplied by s and the resulting equations are added, then

    (1 - s) f(x_j) + s f(x_{j+1}) = f(x) + O(h^2).                   (28)
The left-hand side of (28) is the linear interpolation function for f(x). The right-hand side of (28) shows that the approximation error is proportional to h^2. Thus, the convergence rate for linear interpolation is O(h^2). Since the cubic convolution algorithm has an O(h^3) convergence rate, the cubic convolution interpolation function will generally be a more accurate approximation to the sampled function than the linear interpolation function.
The nearest-neighbor algorithm has an O(h) convergence rate. This is an immediate consequence of the mean value theorem. If f has a continuous derivative on the interval between x and x_j, then there is a point m between x and x_j such that

    f(x) = f(x_j) + f'(m) sh                                         (29)

where s = (x - x_j)/h. If x_j is the nearest interpolation node to x, then f(x_j) is the value of the nearest-neighbor interpolation function for f(x). The approximation error in (29) is proportional to h, which means that the convergence rate is O(h).
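These convergence rates can be observed numerically. The following sketch (not from the paper; names are illustrative) interpolates sin(x) midway between nodes with the nearest-neighbor, linear, and interior cubic convolution rules and prints the worst-case error as h is halved; the errors shrink by factors of roughly 2, 4, and 8, respectively.

    import numpy as np

    def max_midpoint_error(h):
        """Worst-case interpolation error for f(x) = sin(x), evaluated halfway between nodes."""
        nodes = np.arange(0.0, 2*np.pi, h)
        f = np.sin(nodes)
        s = 0.5  # midpoint of each interval
        # Interior intervals only, so the four-point cubic stencil always exists.
        k = np.arange(1, len(nodes) - 2)
        nearest = f[k]
        linear = (1 - s)*f[k] + s*f[k + 1]
        cubic = (f[k - 1]*(-s**3 + 2*s**2 - s) + f[k]*(3*s**3 - 5*s**2 + 2)
                 + f[k + 1]*(-3*s**3 + 4*s**2 + s) + f[k + 2]*(s**3 - s**2)) / 2
        exact = np.sin(nodes[k] + s*h)
        return [np.max(np.abs(est - exact)) for est in (nearest, linear, cubic)]

    for h in (0.2, 0.1, 0.05):
        print(h, max_midpoint_error(h))  # cubic error drops by about 8x per halving of h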
Additional insight into the behavior of interpolation functions can be gained from their frequency domain characteristics. All of the interpolation functions mentioned so far can be written in the form

    g(x) = Σ_k c_k u((x - x_k)/h).                                   (30)

Examples of some interpolation kernels which replace u in (30) are shown in Fig. 1. Taking the Fourier transform of (30),

    G(ω) = [Σ_k c_k exp(-iωx_k)] h U(ωh)                             (31)

where

    G(ω) = ∫_{-∞}^{+∞} g(x) exp(-iωx) dx

and

    U(ω) = ∫_{-∞}^{+∞} u(s) exp(-iωs) ds.
Equation (31) illustrates the "smoothing" effect of interpolation. The summation term in (31) is the discrete Fourier transform of the sampled data, and U(ωh) acts as a smoothing filter. An analysis of the various interpolation schemes can be made by comparing the Fourier transforms of their interpolation kernels.
The amplitude spectra of the nearest-neighbor, linear interpolation, and cubic convolution interpolation kernels are graphed in Fig. 2 for frequencies from 0 to 4π/h. The response of an ideal interpolation kernel (for band-limited data) is a unit step function which has the value of one for frequencies between -π/h and +π/h, and zero elsewhere. Such an interpolation kernel would pass every frequency component of a band-limited function without change, provided the sampling increment was sufficiently small.
Deviations from the ideal spectrum in the shaded region in Fig. 2 (from 0 to +π/h) cause a loss of high frequency information.

[Fig. 2. Amplitude spectra of interpolation kernels. Horizontal axis: frequency, from 0 to 4π/h.]
In image data, the loss of high frequency information causes the image to appear blurred. On the other hand, deviations from the ideal spectrum beyond the shaded area contribute to aliasing. From Fig. 2, it is evident that of the three methods presented, the cubic convolution method provides the best approximation to the ideal spectrum.
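The comparison in Fig. 2 can be reproduced numerically (a sketch, not from the paper; grid sizes and function names are illustrative) by evaluating U(ω) for each kernel with simple numerical integration:

    import numpy as np

    def cubic_kernel(s):
        """Cubic convolution kernel of (15), a = -1/2."""
        s = np.abs(s)
        return np.where(s < 1, 1.5*s**3 - 2.5*s**2 + 1,
               np.where(s < 2, -0.5*s**3 + 2.5*s**2 - 4*s + 2, 0.0))

    nearest = lambda s: np.where(np.abs(s) <= 0.5, 1.0, 0.0)
    linear = lambda s: np.maximum(1.0 - np.abs(s), 0.0)

    # Numerical Fourier transform U(w) = integral of u(s) exp(-i w s) ds over the kernel support.
    s = np.linspace(-2, 2, 4001)
    ds = s[1] - s[0]
    w = np.linspace(0, 4*np.pi, 200)   # frequencies 0 .. 4*pi/h with h = 1

    for name, u in (("nearest", nearest), ("linear", linear), ("cubic", cubic_kernel)):
        U = np.array([np.sum(u(s)*np.exp(-1j*wk*s))*ds for wk in w])
        print(name, np.round(np.abs(U[:5]), 3))   # amplitude spectrum near w = 0 (starts near 1)

Plotting |U(ωh)| for the three kernels against the ideal unit-step response reproduces the qualitative ranking described above.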
Some of the points discussed in this section can be illustrated with a numerical example. Since the cubic convolution algorithm was developed for resampling image data, it is appropriate to use a two-dimensional example. Consider the two-dimensional radially symmetric function, f(x, y) = sin(0.5r^2), where r^2 = x^2 + y^2. Since the Fourier transform of this function is F(ω_x, ω_y) = cos(0.5ρ^2), where ρ^2 = ω_x^2 + ω_y^2, the function f(x, y) is not band limited. From the sampling theorem, it is known that any attempt to reconstruct f from discrete samples must fail. Nevertheless, with the results derived in the last section, a reasonably accurate approximation for f can be obtained within a bounded region.
For this example, identical sampling increments were used for both the x and y coordinates. The sampling increment h was chosen to be half the length of the interval between the 22nd and 23rd zeros of f(r). This value of h guarantees that there will be at least two samples between each zero crossing of f(r) within the region over which f is being sampled. In this case, the sampling increment will be h = 0.132119066. Samples of f(x, y) were obtained on a 64 × 64 element grid with the origin in the upper left-hand corner.
An image was formed from the two-dimensional data by converting amplitude values into light intensities. Maximum intensity (white) was assigned to the maximum amplitude of f, +1. Zero intensity (black) was assigned to the minimum value of f, -1. Intermediate values of f(x, y) were converted to proportional shades of gray.
The physical size of an image is controlled by the number of samples in an image and the sample spacing. The example images in this paper are displayed at a sample spacing of 100 samples/in horizontally and 96 samples/in vertically. Each image fills a 3.5 in square. Therefore, a two-dimensional array of 350 × 336 points is required to represent each image. To obtain a 350 × 336 point array from a 64 × 64 point array, two-dimensional interpolation must be used. Interpolation employed in this manner is equivalent to digital image magnification.
Two-dimensional interpolation is accomplished by one-dimensional interpolation with respect to each coordinate. The two-dimensional cubic convolution interpolation function is a separable extension of the one-dimensional interpolation function. When (x, y) is a point in the rectangular subdivision [x_j, x_{j+1}] × [y_k, y_{k+1}], the two-dimensional cubic convolution interpolation function is

    g(x, y) = Σ_l Σ_m c_{lm} u((x - x_l)/h_x) u((y - y_m)/h_y)

where u is the interpolation kernel of (15) and h_x and h_y are the x and y coordinate sampling increments. For interior grid points, the c_{jk}'s are given by c_{jk} = f(x_j, y_k). If x_N is the upper
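A minimal sketch of the separable two-dimensional resampling described here (not from the paper; the helper names interp_1d and magnify are illustrative) applies one-dimensional cubic convolution along each row and then along each column, with boundary coefficients of the form (19) and (25) at the edges:

    import numpy as np

    def interp_1d(f, coords):
        """One-dimensional cubic convolution resampling of the samples f onto the
        fractional sample coordinates coords, using boundary conditions (19) and (25)."""
        f = np.asarray(f, dtype=float)
        N = len(f) - 1
        c = np.empty(N + 3)
        c[1:N + 2] = f
        c[0] = 3*f[0] - 3*f[1] + f[2]
        c[N + 2] = 3*f[N] - 3*f[N - 1] + f[N - 2]
        k = np.clip(np.floor(coords).astype(int), 0, N - 1)
        s = coords - k
        return (c[k]*(-s**3 + 2*s**2 - s) + c[k + 1]*(3*s**3 - 5*s**2 + 2)
                + c[k + 2]*(-3*s**3 + 4*s**2 + s) + c[k + 3]*(s**3 - s**2)) / 2

    def magnify(image, new_rows, new_cols):
        """Separable two-dimensional cubic convolution magnification of a 2-D array."""
        image = np.asarray(image, dtype=float)
        rows, cols = image.shape
        col_coords = np.linspace(0, cols - 1, new_cols)
        row_coords = np.linspace(0, rows - 1, new_rows)
        # Interpolate along each row, then along each column of the intermediate result.
        tmp = np.array([interp_1d(row, col_coords) for row in image])
        return np.array([interp_1d(col, row_coords) for col in tmp.T]).T

    # Example: sample f(x, y) = sin(0.5 r^2) on a 64 x 64 grid and magnify it toward the
    # 350 x 336 point display grid described in the text.
    h = 0.132119066
    x = np.arange(64) * h
    X, Y = np.meshgrid(x, x, indexing='ij')
    samples = np.sin(0.5*(X**2 + Y**2))
    big = magnify(samples, 336, 350)
    print(big.shape)   # (336, 350)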

REFERENCES

[1] S. S. Rifman, "Digital rectification of ERTS multispectral imagery."
[2] R. Bernstein, "Digital image processing of earth observation sensor data."
[3] H. S. Hou and H. C. Andrews, "Cubic splines for image interpolation and digital filtering."