
Numerical approximation of vector-valued highly oscillatory integrals

Sheehan Olver
21 Jun 2007, Vol. 47, Iss. 3, pp. 637–655

Numerical approximation of vector-valued highly oscillatory integrals
Sheehan Olver
Abstract. We present a method for the efficient approximation of integrals with highly oscillatory vector-valued kernels, such as the Airy function. We generalize the asymptotic expansion for the vector-valued case, which allows us to determine the asymptotic order of a Levin-type method. Levin-type methods are constructed using collocation, and choosing the basis wisely results in an approximation with significantly higher asymptotic order.
1. Introduction
We are concerned with the integral
\[ I[f] = \int_a^b f(x)^\top y(x)\,dx, \]
where f is a smooth vector-valued function and y is a highly oscillatory vector-valued function. We assume that y depends on a parameter ω that determines the frequency of oscillations. We also assume that y satisfies the differential equation
\[ y'(x) = A(x)\,y(x), \]
where A is a matrix-valued function that depends on ω. Some common examples include:
\[ y(x) = e^{i\omega g(x)}, \qquad A(x) = i\omega g'(x), \]
\[ y(x) = \begin{pmatrix} J_{m-1}(\omega x) \\ J_m(\omega x) \end{pmatrix}, \qquad A(x) = \begin{pmatrix} \frac{m-1}{x} & -\omega \\ \omega & -\frac{m}{x} \end{pmatrix}, \]
\[ y(x) = \begin{pmatrix} \operatorname{Ai}(\omega x) \\ \omega \operatorname{Ai}'(\omega x) \end{pmatrix}, \qquad A(x) = \begin{pmatrix} 0 & 1 \\ \omega^3 x & 0 \end{pmatrix}, \]
where Ai is an Airy function and J_m is a Bessel function [6].
For large values of ω, traditional quadrature techniques fail to approximate I[f] efficiently. Unless the number of sample points is sufficiently greater than the number of oscillations, Gauss–Legendre quadrature gives an approximation which is essentially random. In the one-dimensional case of y = e^{iωg} with no stationary points, the integral I[f] is O(ω^{−1}) for increasing ω [9]. This compares with an error of order O(1) for the traditional techniques. This implies that it is more accurate to approximate I[f] by zero than to use Gauss–Legendre quadrature!
The goal of this paper is to generalize a method developed by Levin in [5] to obtain higher asymptotic orders. This will be accomplished in a similar vein to [7], which dealt with the case of y = e^{iωg}. In [7], the asymptotic expansion was used to determine the asymptotic behaviour of the error of a Levin-type method. Thus our first task is to derive a vector-valued kernel version of the asymptotic expansion. This is accomplished in Section 3, using the asymptotic tools developed in Section 2. With an asymptotic expansion in hand, we can prove the order of error for a Levin-type method in Section 4. In [7], it was noted that choosing a certain basis causes the asymptotic order to increase without the need for nontrivial multiplicities. In Section 5, we construct a vector-valued version of such a basis, allowing us to obtain higher asymptotic orders with significantly smaller systems.
Department of Applied Mathematics and Theoretical Physics, Centre for Mathematical Sciences, Wilberforce Rd, Cambridge CB3 0WA, UK. Email: S.Olver@damtp.cam.ac.uk

2. Matrix and function asymptotics
In this section we present notation for the asymptotic behaviour of matrices and functions that depend on ω as a parameter. For the entirety of the paper, all norms are L^∞ norms, for vectors, matrices and functions. The norm of a function is taken over the interval [a, b]. Furthermore, we use 1_{n×m} to represent an n × m matrix whose entries are all one. We will sometimes write 1 if the size of the matrix is implied by context. Finally, when the vector of all ones is known to be a row vector, we emphasize this fact by denoting the vector as 1^⊤, where the number of columns is implied by context.
Let A = (a_{ij}) and Ã = (ã_{ij}) be two n × m matrices which depend on a real parameter ω, such that the entries of Ã are always nonnegative. We write A = O(Ã) for ω → ∞ if it is true componentwise: a_{ij} = O(ã_{ij}). Thus A = O(1) means that all the components of A are bounded for increasing ω. Multiplication works as expected: if A = O(Ã) and B = O(B̃) then AB = O(ÃB̃). Note, however, that O(1Ã) is not equivalent to O(Ã). If f is an n-dimensional vector, then ‖f‖ is of the same asymptotic order as f̃^⊤1_{n×1} = 1^⊤f̃. Furthermore, if A = O(1) is a square matrix, then det A = O(1). Finally, it can easily be seen that ‖A‖ and ‖A^⊤‖ have the same asymptotic order.
We can find the asymptotic behaviour of A^{−1} under certain assumptions, which will be used for the proof of Theorem 4.1. An n × n matrix A satisfies the right-hand regularity condition for a nonsingular matrix W depending on ω if it can be written as A = P + GW, where G is a nonsingular matrix such that G^{−1} = O(1) and P is a matrix such that PW^{−1} = o(1). Likewise, A satisfies the left-hand regularity condition for W if it can be written as A = P + WG, where again G^{−1} = O(1), but now P is a matrix such that W^{−1}P = o(1). We can often choose G so that it is independent of ω, in which case it is only necessary to show that G is nonsingular.
Theorem 2.1 If A satisfies the right-hand regularity condition, then A^{−1} = O(W^{−1}1). If A satisfies the left-hand regularity condition, then A^{−1} = O(1W^{−1}).
Proof: We begin with the case where A satisfies the right-hand regularity condition. Note that A = (PW^{−1}G^{−1} + I)GW = (I − M)GW for M = −PW^{−1}G^{−1}. Since G^{−1} = O(1), it follows that M = o(1), and large ω ensures that ‖M‖ < 1. We thus know that the inverse of I − M exists, and furthermore
\[ (I - M)^{-1} = I + M(I - M)^{-1} = I + o(1)\,(I - M)^{-1}. \]
If (I − M)^{−1} were not O(1), we would obtain a contradiction, since the right-hand side of the equality could not be of the same asymptotic order. It follows that (I − M)^{−1} = O(1), and we can write
\[ A^{-1} = W^{-1} G^{-1} (I - M)^{-1} = W^{-1} O(1) = O(W^{-1} 1). \]
We can do something similar for the left-hand regularity condition. Now A = WG(G^{−1}W^{−1}P + I) = WG(I − M), for M = −G^{−1}W^{−1}P. By the same logic as before, the inverse of I − M exists and is O(1). Thus we can write
\[ A^{-1} = (I - M)^{-1} G^{-1} W^{-1} = O(1\,W^{-1}). \]
Q.E.D.
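Theorem 2.1 can be sanity-checked numerically. The following sketch (a hypothetical illustration, not from the paper) builds a matrix satisfying the right-hand regularity condition with W = diag(ω, ω²), an ω-independent nonsingular G, and P = I, and confirms that the entries of A^{−1} stay bounded relative to the predicted order W^{−1}1:

```python
import numpy as np

# Hypothetical test case: A = P + G W with W = diag(w, w^2),
# G nonsingular and independent of w (so G^{-1} = O(1)),
# and P = I, so that P W^{-1} = diag(1/w, 1/w^2) = o(1).
G = np.array([[1.0, 1.0],
              [0.0, 1.0]])
P = np.eye(2)

def W(w):
    return np.diag([w, w**2])

def predicted_order(w):
    # Theorem 2.1 predicts A^{-1} = O(W^{-1} 1): entry (i, j) decays
    # at least as fast as row i of W^{-1} times the all-ones matrix.
    return np.linalg.inv(W(w)) @ np.ones((2, 2))

# The componentwise ratio |A^{-1}| / (W^{-1} 1) should stay bounded as w grows.
ratios = []
for w in [1e2, 1e4, 1e6]:
    A = P + G @ W(w)
    ratios.append((np.abs(np.linalg.inv(A)) / predicted_order(w)).max())
print(ratios)
```

Here the componentwise ratios remain of moderate size as ω increases through four orders of magnitude, as the theorem predicts.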
We now turn our attention to functions which depend on ω as a parameter, for example f(x) = Ai(ωx). Let f be such a function, and f̃ a nonnegative constant that depends on ω. We write f = O(f̃) if the norms of f and its derivatives are all of order O(f̃) as ω → ∞. In other words, ‖f^{(m)}‖ = O(f̃), for m = 0, 1, .... The most common usage is f = O(1), which states that f and its derivatives are bounded in [a, b] for increasing ω. We also use this notation for vector-valued and matrix-valued functions in a componentwise manner. Let A(x) = (a_{ij}(x)) be an n × m matrix-valued function that depends on ω, and Ã = (ã_{ij}) an n × m matrix with nonnegative components, which also depends on ω. We write A = O(Ã) if it is true componentwise: a_{ij} = O(ã_{ij}) for ω → ∞.

Note that this class of functions has the following properties, where A = O(Ã) and B = O(B̃) are compatible matrix-valued functions, c = O(c̃) is a constant depending on ω, m is a nonnegative integer and a ≤ x ≤ b:
\[ A(x) = O(\tilde A), \qquad A^{(m)} = O(\tilde A), \qquad A + B = O(\tilde A + \tilde B) = O\bigl(\max\{\tilde a_{ij}, \tilde b_{ij}\}\bigr), \]
\[ AB = O(\tilde A \tilde B), \qquad cA = O(\tilde c \tilde A). \]
Finally, if A(x) = O(Ã) for all a ≤ x ≤ b, then \( \int_a^b A(x)\,dx = O(\tilde A) \).
3. Asymptotic expansion
An asymptotic expansion is a valuable tool in the analysis of integrals, and for large ω will provide a fairly accurate numerical approximation to I[f]. Consider for a moment the one-dimensional oscillator y = e^{iωg}. In the derivation of its asymptotic expansion [3], the fact that y satisfies the differential equation
\[ y'(x) = i\omega g'(x)\,y(x) = A(x)\,y(x) \]
was used. The asymptotic expansion follows from writing y as A^{−1}y′, assuming that A(x) ≠ 0 in the interval of integration, and integrating by parts:
\[ \int_a^b f y\,dx = \int_a^b f A^{-1} y'\,dx = \bigl[f A^{-1} y\bigr]_a^b - \int_a^b \bigl(f A^{-1}\bigr)' y\,dx = \frac{1}{i\omega}\Bigl[\frac{f}{g'}\,y\Bigr]_a^b - \frac{1}{i\omega}\int_a^b \Bigl(\frac{f}{g'}\Bigr)' y\,dx. \]
As ω becomes large, the error resulting from the integral term decays. We can do something similar for the vector-valued case:
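The single integration-by-parts step above is an exact identity for any smooth f and g with g′ ≠ 0, so it can be verified numerically before any asymptotics enter. A small sketch, with an illustrative choice f = cos, g(x) = x², ω = 20 (these choices are assumptions, not taken from the paper):

```python
import numpy as np
from scipy.integrate import quad

omega = 20.0
a, b = 1.0, 2.0
f = np.cos
g = lambda x: x**2
gp = lambda x: 2.0 * x   # g'(x), nonzero on [1, 2]

def osc_int(h):
    # Computes the oscillatory integral of h(x) e^{i w g(x)} over [a, b]
    # as separate real and imaginary quadratures.
    re, _ = quad(lambda x: h(x) * np.cos(omega * g(x)), a, b, limit=400)
    im, _ = quad(lambda x: h(x) * np.sin(omega * g(x)), a, b, limit=400)
    return re + 1j * im

lhs = osc_int(f)

# Right-hand side: (1/iw)[(f/g') y]_a^b - (1/iw) * integral of (f/g')' y
boundary = (f(b) / (1j * omega * gp(b))) * np.exp(1j * omega * g(b)) \
         - (f(a) / (1j * omega * gp(a))) * np.exp(1j * omega * g(a))
eps = 1e-6
# (f/g')' via central differences (illustrative shortcut)
d_fgp = lambda x: ((f(x + eps) / gp(x + eps)) - (f(x - eps) / gp(x - eps))) / (2 * eps)
rhs = boundary - osc_int(d_fgp) / (1j * omega)

print(abs(lhs - rhs))
```

The two sides agree to quadrature accuracy; the O(ω^{−1}) factor in front of both right-hand-side terms is the source of the asymptotic decay discussed above.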
As ω becomes large, the error resulting from the integral term decays. We can do something similar for the
vector-valued case:
Theorem 3.1 Suppose that y satisfies the differential equation
\[ y'(x) = A(x)\,y(x) \]
in the interval [a, b], for some invertible matrix-valued function A such that A^{−1} = O(Â), for ω → ∞. Define
\[ Q_s^A[f] = \sum_{k=0}^{s-1} (-1)^k \Bigl( \sigma_k(b)^\top A^{-1}(b)\,y(b) - \sigma_k(a)^\top A^{-1}(a)\,y(a) \Bigr), \]
where
\[ \sigma_0 = f, \qquad \sigma_{k+1}^\top = \bigl(\sigma_k^\top A^{-1}\bigr)', \quad k = 0, 1, \ldots. \]
If f = O(f̃) and y(x) = O(ỹ) for a ≤ x ≤ b, then
\[ Q_s^A[f] - I[f] = (-1)^{s+1} \int_a^b \sigma_s^\top y\,dx = O\bigl(\tilde f^\top \hat A^{s+1} \tilde y\bigr), \qquad \omega \to \infty. \]
Proof: Note that
\[ \int_a^b \sigma_k^\top y\,dx = \int_a^b \sigma_k^\top A^{-1} y'\,dx = \bigl[\sigma_k^\top A^{-1} y\bigr]_a^b - \int_a^b \bigl(\sigma_k^\top A^{-1}\bigr)' y\,dx = \bigl[\sigma_k^\top A^{-1} y\bigr]_a^b - \int_a^b \sigma_{k+1}^\top y\,dx. \]
Thus, by induction, the first equality holds. We now show that σ_k^⊤ = O(f̃^⊤Â^k). The case of k = 0 follows by definition. Otherwise, assume it is true for k, and we will prove it for k + 1:
\[ \sigma_{k+1}^\top = \bigl(\sigma_k^\top\bigr)' A^{-1} + \sigma_k^\top \bigl(A^{-1}\bigr)' = O\bigl(\tilde f^\top \hat A^k\bigr)\,O\bigl(\hat A\bigr) + O\bigl(\tilde f^\top \hat A^k\bigr)\,O\bigl(\hat A\bigr) = O\bigl(\tilde f^\top \hat A^{k+1}\bigr). \]
The theorem now follows since
\[ \int_a^b \sigma_s^\top y\,dx = \bigl[\sigma_s^\top A^{-1} y\bigr]_a^b - \int_a^b \sigma_{s+1}^\top y\,dx = O\bigl(\tilde f^\top \hat A^{s+1} \tilde y\bigr) + O\bigl(\tilde f^\top \hat A^{s+1} \tilde y\bigr) = O\bigl(\tilde f^\top \hat A^{s+1} \tilde y\bigr). \]
Q.E.D.

[Figure 1 (plots not reproduced): the error of Q_1^A[f] scaled by ω^{7/4} (left figure), compared to the error of Q_2^A[f] scaled by ω^{13/4} (right figure), for I[f] = ∫_1^2 [cos x Ai(ωx) + ω e^x Ai′(ωx)] dx; horizontal axes ω = 25, …, 50.]
Corollary 3.2 follows from Theorem 3.1, and will be used in the proof of Theorem 4.1.
Corollary 3.2 Suppose that
\[ f(a) = f(b) = f'(a) = f'(b) = \cdots = f^{(s-1)}(a) = f^{(s-1)}(b) = 0. \]
Then
\[ I[f] = O\bigl(\tilde f^\top \hat A^{s+1} \tilde y\bigr). \]
The asymptotic expansion for y(x) = e^{iωg(x)} follows immediately, assuming that g′ ≠ 0, in which case A^{−1}(x) = 1/(iωg′(x)) = O(ω^{−1}). Thus Q_s^A[f] approximates I[f] with an error of order O(ω^{−s−1}). For the two-dimensional case, examples include, assuming 0 < a < b,
\[ y(x) = \begin{pmatrix} J_{m-1}(\omega x) \\ J_m(\omega x) \end{pmatrix} = O\bigl(\omega^{-1/2}\,1\bigr), \qquad A^{-1} = O\begin{pmatrix} \omega^{-2} & \omega^{-1} \\ \omega^{-1} & \omega^{-2} \end{pmatrix} = O\bigl(\omega^{-1}\,1\bigr), \]
\[ y(x) = \begin{pmatrix} \operatorname{Ai}(\omega x) \\ \omega \operatorname{Ai}'(\omega x) \end{pmatrix} = O\begin{pmatrix} \omega^{-1/4} \\ \omega^{5/4} \end{pmatrix}, \qquad A^{-1} = O\begin{pmatrix} 0 & \omega^{-3} \\ 1 & 0 \end{pmatrix}, \]
where the asymptotics of the Bessel and Airy functions can be found in [1]. In the Bessel case, each component of A^{−1} is O(ω^{−1}), hence, assuming that f = O(1), we have an error of order
\[ \tilde f^\top \hat A^{s+1} \tilde y = O\bigl(\hat A^{s+1} \|\tilde y\|\bigr) = O\bigl(\omega^{-s-\frac{3}{2}}\bigr). \]
In the Airy case, we know that
\[ \hat A^{2k} \tilde y = \begin{pmatrix} \omega^{-3k} & 0 \\ 0 & \omega^{-3k} \end{pmatrix} \tilde y = \begin{pmatrix} \omega^{-3k-1/4} \\ \omega^{-3k+5/4} \end{pmatrix}, \qquad \hat A^{2k+1} \tilde y = \begin{pmatrix} 0 & \omega^{-3(k+1)} \\ \omega^{-3k} & 0 \end{pmatrix} \tilde y = \begin{pmatrix} \omega^{-3k-7/4} \\ \omega^{-3k-1/4} \end{pmatrix}. \]
Thus, if f̃ = 1,
\[ \tilde f^\top \hat A^{s+1} \tilde y = O\bigl(\omega^{-\frac{3}{2}s-\frac{1}{4}}\bigr). \]
On the other hand, if f̃ = (1, 0)^⊤, then
\[ \tilde f^\top \hat A^{s+1} \tilde y = O\bigl(\omega^{-\frac{3}{2}s-\frac{7}{4}}\bigr). \]
As a simple example, consider the integral
\[ \int_1^2 f^\top y\,dx = \int_1^2 \bigl[\cos x \operatorname{Ai}(\omega x) + \omega e^x \operatorname{Ai}'(\omega x)\bigr]\,dx, \]
for f(x) = (cos x, e^x)^⊤ and y(x) = (Ai(ωx), ω Ai′(ωx))^⊤. Figure 1 compares the one-term and two-term expansions. As can be seen, adding an additional term does indeed increase the asymptotic order by 3/2.
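For this Airy example, the one-term expansion and its remainder can be checked directly: with s = 1, Theorem 3.1 gives the exact identity Q_1^A[f] − I[f] = ∫_a^b σ_1^⊤ y dx. The sketch below verifies this numerically (illustrative choices: a modest ω = 5, and central differences for σ_1; neither is taken from the paper):

```python
import numpy as np
from scipy.special import airy
from scipy.integrate import quad

omega = 5.0
a, b = 1.0, 2.0

def f(x):
    return np.array([np.cos(x), np.exp(x)])

def y(x):
    Ai, Aip, _, _ = airy(omega * x)
    return np.array([Ai, omega * Aip])

def Ainv(x):
    # A(x) = [[0, 1], [w^3 x, 0]]  =>  A^{-1}(x) = [[0, 1/(w^3 x)], [1, 0]]
    return np.array([[0.0, 1.0 / (omega**3 * x)],
                     [1.0, 0.0]])

I_ref, _ = quad(lambda x: f(x) @ y(x), a, b, limit=200)

# One-term expansion: Q_1^A[f] = [sigma_0^T A^{-1} y]_a^b with sigma_0 = f
Q1 = f(b) @ Ainv(b) @ y(b) - f(a) @ Ainv(a) @ y(a)

# sigma_1^T = (sigma_0^T A^{-1})', here via central differences (illustrative)
def sigma1T(x, h=1e-6):
    g = lambda t: f(t) @ Ainv(t)
    return (g(x + h) - g(x - h)) / (2 * h)

remainder, _ = quad(lambda x: sigma1T(x) @ y(x), a, b, limit=200)
print(Q1 - I_ref, remainder)
```

The two printed quantities coincide to quadrature accuracy, confirming the s = 1 case of the theorem for this kernel.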

4. Levin-type methods
The fundamental problem with using an asymptotic expansion as a numerical approximation is that for fixed ω the accuracy is limited: the infinite sum does not necessarily converge. To combat this issue, we will derive a Levin-type method that has the same asymptotic behaviour as the asymptotic expansion, whilst providing the ability to decrease the error further. In [5], a method was developed to compute integrals using a collocation system. The current author generalized this method to include multiplicities in [7], for the specific oscillator e^{iωg}. By adding multiplicities to the endpoints, we obtain a method with higher asymptotic order. In this section, we complete the generalization for vector-valued kernels. We will use the asymptotic expansion to determine the asymptotic order of the Levin-type method. Note that we include cases that were not analysed in [5], such as the Airy function case. When the Levin-type method is equivalent to the original method, we obtain the asymptotic bound derived in [10], which is more accurate than the bound found in [5].
Had we known a vector-valued function F such that
\[ \bigl(F^\top y\bigr)' = f^\top y, \]
then computing the integral I[f] would have been trivial: I[f] = [F^⊤y]_a^b. We can rewrite this condition as
\[ L[F] = f, \qquad \text{for} \quad L[F] = F' + A^\top F. \]
Finding F explicitly is in general not possible. However, we can approximate this function using collocation. Let v(x) = Σ_{k=1}^n c_k ψ_k(x) for some set of basis functions {ψ_k}, where ψ_k : ℝ → ℝ^d, and n = d Σ m_k, i.e., the total number of equations in the system (4.1). For a sequence of nodes {x_1, …, x_ν} and multiplicities {m_1, …, m_ν}, we determine the coefficients of v by solving the system
\[ L[v](x_k) = f(x_k), \quad \ldots, \quad L[v]^{(m_k-1)}(x_k) = f^{(m_k-1)}(x_k), \qquad k = 1, 2, \ldots, \nu. \tag{4.1} \]
We then define a Levin-type method as
\[ Q^L[f] = v(b)^\top y(b) - v(a)^\top y(a). \]
Let P[g] be the vector consisting of the function g evaluated at each node and multiplicity, written in partitioned form as
\[ P[g] = \bigl( g(x_1)^\top, \ldots, g^{(m_1-1)}(x_1)^\top, \;\ldots\;, g(x_\nu)^\top, \ldots, g^{(m_\nu-1)}(x_\nu)^\top \bigr)^\top. \]
Furthermore, let Ψ = O(Ψ̃) be the d × n matrix-valued function whose kth column equals ψ_k, written in partitioned form as
\[ \Psi(x) = \bigl( \psi_1(x), \ldots, \psi_n(x) \bigr). \]
Then we can write the system (4.1) as Bc = φ, where c = (c_1, …, c_n)^⊤,
\[ B = P[L[\Psi]] = \bigl( P[L[\psi_1]], \ldots, P[L[\psi_n]] \bigr), \qquad \varphi = P[f] = O(\tilde\varphi) = O\bigl( (\tilde f^\top, \ldots, \tilde f^\top)^\top \bigr). \]
Thus v = Ψc.
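The construction above can be sketched end to end for the Airy oscillator. The following is a minimal illustrative implementation of Q^L[f], assuming all multiplicities m_k = 1, a monomial basis applied to each coordinate direction, Chebyshev–Lobatto nodes, and a modest ω = 5 (all of these are the author of this sketch's choices, not prescriptions from the paper):

```python
import numpy as np
from scipy.special import airy
from scipy.integrate import quad

omega = 5.0
a, b = 1.0, 2.0
d = 2          # dimension of the kernel y
nu = 6         # number of collocation nodes, all multiplicities 1

def A(x):      # Airy oscillator: y' = A(x) y
    return np.array([[0.0, 1.0],
                     [omega**3 * x, 0.0]])

def f(x):
    return np.array([np.cos(x), np.exp(x)])

def y(x):
    Ai, Aip, _, _ = airy(omega * x)
    return np.array([Ai, omega * Aip])

# Chebyshev-Lobatto nodes on [a, b] (endpoints included)
nodes = 0.5 * (a + b) + 0.5 * (b - a) * np.cos(np.pi * np.arange(nu) / (nu - 1))

# v(x) = sum_j x^j C_j with C_j in R^d; collocate L[v] = v' + A^T v = f
B = np.zeros((d * nu, d * nu))
rhs = np.zeros(d * nu)
for k, xk in enumerate(nodes):
    rhs[d * k: d * k + d] = f(xk)
    for j in range(nu):
        dphi = j * xk ** (j - 1) if j > 0 else 0.0
        B[d * k: d * k + d, d * j: d * j + d] = dphi * np.eye(d) + xk**j * A(xk).T
coeffs = np.linalg.solve(B, rhs)
C = coeffs.reshape(nu, d)

def v(x):
    return sum(x**j * C[j] for j in range(nu))

# Levin-type approximation: only endpoint values of v and y are needed
Q_L = v(b) @ y(b) - v(a) @ y(a)
I_ref, _ = quad(lambda x: f(x) @ y(x), a, b, limit=200)
print(Q_L, I_ref)
```

Because v enters Q^L[f] only through its endpoint values, no quadrature of the oscillatory kernel is required; I_ref is computed by ordinary quadrature purely for comparison.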
