
566 IEEE TRANSACTIONS ON MEDICAL IMAGING, VOL. 26, NO. 4, APRIL 2007
Weighted Fourier Series Representation and Its Application to Quantifying the Amount of Gray Matter
Moo K. Chung*, Kim M. Dalton, Li Shen, Alan C. Evans, and Richard J. Davidson
Abstract—We present a novel weighted Fourier series (WFS) representation for cortical surfaces. The WFS representation is a data smoothing technique that provides the explicit smooth functional estimation of the unknown cortical boundary as a linear combination of basis functions. The basic properties of the representation are investigated in connection with a self-adjoint partial differential equation and the traditional spherical harmonic (SPHARM) representation. To reduce steep computational requirements, a new iterative residual fitting (IRF) algorithm is developed. Its computational and numerical implementation issues are discussed in detail. The computer codes are also available at http://www.stat.wisc.edu/~mchung/softwares/weighted-SPHARM/weighted-SPHARM.html. As an illustration, the WFS is applied in quantifying the amount of gray matter in a group of high functioning autistic subjects. Within the WFS framework, cortical thickness and gray matter density are computed and compared.
Index Terms—Cortical thickness, diffusion smoothing, gray matter density, iterative residual fitting, SPHARM, spherical harmonics.
I. INTRODUCTION
In this paper, we present a new morphometric framework called the weighted Fourier series (WFS) representation.
The WFS is both a global hierarchical parameterization and
an explicit data smoothing technique formulated as a solution
to a self-adjoint partial differential equation (PDE). WFS
generalizes the traditional spherical harmonic (SPHARM)
representation [23], [44] with additional exponential weights.
The exponentially decaying weights make the representation converge faster and significantly reduce the ringing artifacts (Gibbs phenomenon) [22]. Unlike SPHARM, WFS can be reformulated as heat kernel smoothing [9] when the self-adjoint operator becomes the Laplace–Beltrami operator. Then, similar to heat kernel smoothing, the random field theory
Manuscript received October 1, 2006; revised December 29, 2006. Asterisk
indicates corresponding author.
*M. K. Chung is with the Department of Statistics, Biostatistics, and Med-
ical Informatics, and the Waisman Laboratory for Brain Imaging and Behavior,
University of Wisconsin, Madison, WI 53706 USA (e-mail: mchung@stat.wisc.
edu).
K. M. Dalton and R. J. Davidson are with the Waisman Laboratory for Brain
Imaging and Behavior, University of Wisconsin, Madison, WI 53706 USA.
L. Shen is with the Department of Computer and Information Science, University of Massachusetts, Dartmouth, MA 02747 USA.
A. C. Evans is with the McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montreal, QC H3A 2B4 Canada.
Color versions of one or more of the figures in this paper are available online
at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TMI.2007.892519
[51]–[53] can be used for statistical inference on localizing
abnormal shape variations in a clinical population. Many basic
theoretical properties of WFS and its numerical implementation
issues are presented in great detail. The WFS representation
requires estimating 18 723 unknown Fourier coefficients on a
high resolution spherical mesh sampled at more than 40 000
vertices. This requires a specialized linear solver with fairly
steep computational resources. To address this issue, we have
developed a new estimation technique called the iterative
residual fitting (IRF) algorithm [43]. The computational burden
has been reduced substantially by decomposing the subspace
spanned by spherical harmonics into smaller subspaces, and it-
eratively performing the least squares estimation on the smaller
subspaces. The correctness of the algorithm is proven and its
accuracy is numerically evaluated.
As an illustration of the new technique, we have applied it in localizing abnormal amounts of gray matter in a group of high
functioning autistic subjects. The cerebral cortex has a highly
convoluted geometry and it is likely that the local difference
in gray matter concentration can characterize a clinical popula-
tion. Within the WFS framework, gray matter density and cor-
tical thickness are also computed. Then statistical parametric
maps (SPM) are constructed to localize the regions of abnormal
gray matter. These two measurements are also compared to de-
termine if increased cortical thickness corresponds to increased
gray matter locally.
The main contributions of the paper are the development of the underlying theory of the WFS representation and its numerical implementation using the iterative residual fitting (IRF) algorithm. We also made the computer code freely available to the public. In the following subsections, we review the literature that is directly related to our methodology and state our specific contributions in the context of the previous literature.
A. Spherical Harmonic Representation
The SPHARM representation [4] has been applied to subcor-
tical structures such as the hippocampus and the amygdala [23],
[27], [30], [44]. In particular, Gerig et al. used the mean squared
distance (MSD) of the SPHARM coefficients in quantifying
ventricle surface shape in a twin study [23]. Shen et al. used
the principal component analysis technique on the SPHARM
coefficients of schizophrenic hippocampal surfaces in reducing
the data dimension [44]. Recently, it has begun to be applied to
more complex cortical surfaces [27], [43]. Gu et al. presented
SPHARM as a surface compression technique, where the main geometric features are encoded in the low degree spherical harmonics, while the noise is in the high degree spherical harmonics [27].
In SPHARM, the spherical harmonics are used in constructing the Fourier series expansion of the mapping from cortical surfaces to a unit sphere. So SPHARM is more of an interpolation technique than a smoothing technique, and thus it will exhibit the ringing artifacts [22]. On the other hand, WFS is a kernel smoothing technique given as a solution to a self-adjoint PDE. The solution to the PDE is expanded in basis functions. In a similar spirit, Bülow used the spherical harmonics in isotropic heat diffusion via the Fourier transform on a unit sphere as a form of hierarchical surface representation [5]. WFS offers many advantages over the previous PDE-based smoothing techniques [1], [10]. The PDE-based smoothing methods tend to suffer from numerical instability [1], [6], [10], while WFS has no such problem. Since the traditional PDE-based smoothing gives an implicit numerical solution, setting up a statistical model is not straightforward. In contrast, WFS provides an explicit series expansion, so it is easy to apply a wide variety of statistical modeling techniques such as the GLM [21], principal component analysis (PCA) [44], and functional PCA [36], [41].
SPHARM will be shown to be a special case of WFS. In the SPHARM representation, all measurements are assigned equal weights and the coefficients of the series expansion are estimated in the least squares fashion. In WFS, closer measurements are weighted more and the coefficients of the series expansion are estimated in the weighted least squares fashion. So WFS is more suitable than SPHARM when the realizations of the cortical boundaries, as triangle meshes, are noisy and possibly discontinuous. In most SPHARM literature, the degree of the Fourier series expansion has been arbitrarily determined and the problem of the optimal degree has not been addressed. Our WFS formulation addresses the determination of the optimal degree in a unified statistical modeling framework. The WFS-based global parametrization is computationally expensive compared to local quadratic polynomial fitting [4], [10], [13], [29], [40], while providing more accuracy and flexibility for hierarchical representation.
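The practical difference between the two weighting schemes can be made concrete. In the spherical setting developed in Section II-B, the WFS weights take the form $e^{-l(l+1)t}$ for degree $l$ and bandwidth $t$; at $t = 0$ every degree receives unit weight, which is the SPHARM case. A minimal Python sketch (the function name and the bandwidth value are our illustrative choices, not from the paper):

```python
import numpy as np

def wfs_degree_weights(max_degree: int, t: float) -> np.ndarray:
    """Exponential weights e^{-l(l+1)t} applied to each degree l = 0..max_degree."""
    l = np.arange(max_degree + 1)
    return np.exp(-l * (l + 1) * t)

# t = 0 recovers the unweighted SPHARM case: every degree has weight 1.
print(wfs_degree_weights(5, 0.0))

# For t > 0 the weights decay rapidly with degree, shrinking the
# high-frequency terms that produce the Gibbs ringing artifacts.
print(wfs_degree_weights(5, 0.01))
```

Larger bandwidths shrink the high degree terms more aggressively, which is exactly the faster-converging, ringing-suppressed behavior claimed for WFS.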
B. Cortical Thickness

The cerebral cortex has the topology of a 2-D convoluted sheet. Most of the features that distinguish cortical regions can only be measured relative to the local orientation of the cortical surface [13]. The CSF and gray matter interface is referred to as the outer surface (pial surface), while the gray and white matter interface is referred to as the inner surface [10], [32]. The distance between the outer and inner surfaces is defined as the cortical thickness, and it has been widely used as an anatomical index for quantifying the amount of gray matter in the brain [9], [10], [18].

Unlike 3-D whole brain volume based gray matter density, 1-D cortical thickness measures have the advantage of providing a direct quantification of cortical geometry. It is likely that different clinical populations will exhibit different cortical thickness. By analyzing cortical thickness, brain shape differences can be quantified locally [10], [19], [32], [35]. The cortical surfaces are usually segmented as triangle meshes that are constructed from deformable surface algorithms [15], [18], [32]. The cortical thickness is then mainly defined and estimated as the shortest distance between vertices of the two triangle meshes [18], [32]. The mesh construction and discrete thickness computation procedures introduce substantial noise in the thickness measure [9] (Fig. 6). So it is necessary to increase the signal-to-noise ratio (SNR) and the smoothness of the data for the subsequent random field based statistical analysis. For smoothing cortical data, diffusion equation based methods have been used [1], [6], [9], [10]. The shortcoming of these approaches is the need for numerically solving the diffusion equation, possibly via the finite element technique. This is an additional computational step on top of the cortical thickness estimation. In this paper, we present a more direct approach that smoothes and parameterizes the coordinates of a mesh directly via WFS such that the resulting thickness measures are already smooth. In WFS, the cortical surfaces are estimated as a weighted linear combination of smooth basis functions, so most algebraic operations on WFS will also be smooth.
II. CAUCHY PROBLEM AS A SMOOTHING PROCESS
Consider a compact differentiable manifold $\mathcal{M}$. Let $L^2(\mathcal{M})$ be the space of square integrable functions in $\mathcal{M}$ with inner product

$$\langle f, g\rangle = \int_{\mathcal{M}} f(p)\,g(p)\,d\mu(p) \tag{1}$$

where $\mu$ is the Lebesgue measure such that $\mu(\mathcal{M})$ is the total volume of $\mathcal{M}$. The norm is defined as $\|f\| = \langle f, f\rangle^{1/2}$. The linear partial differential operator $L$ is self-adjoint if

$$\langle g_1, L g_2\rangle = \langle L g_1, g_2\rangle$$

for all $g_1, g_2 \in L^2(\mathcal{M})$. Then the eigenvalues $\lambda_j$ and eigenfunctions $\psi_j$ of the operator $L$ are obtained by solving

$$L \psi_j = \lambda_j \psi_j. \tag{2}$$

Without loss of generality, we can order the eigenvalues $\lambda_0 \le \lambda_1 \le \lambda_2 \le \cdots$ and take the eigenfunctions $\psi_j$ to be orthonormal with respect to the inner product (1). Consider a Cauchy problem of the following form:

$$\frac{\partial g}{\partial t} + L g = 0, \qquad g(p, 0) = f(p). \tag{3}$$

The initial functional data $f$ can be further stochastically modeled as

$$f(p) = h(p) + \epsilon(p) \tag{4}$$

where $\epsilon$ is a stochastic noise modeled as a mean zero Gaussian random field and $h$ is the unknown signal to be estimated. The PDE (3) diffuses the noisy initial data over time and estimates the unknown signal $h$ as a solution. The time $t$ controls the amount of smoothing and will be termed the bandwidth. The unique solution to (3) is given by the following theorem.
Theorem 1: For the self-adjoint linear differential operator $L$, the unique solution of the Cauchy problem (3) is given by

$$g(p, t) = \sum_{j=0}^{\infty} e^{-\lambda_j t}\,f_j\,\psi_j(p), \qquad f_j = \langle f, \psi_j\rangle. \tag{5}$$

Proof: For each fixed $t$, $g$ has the expansion

$$g(p, t) = \sum_{j=0}^{\infty} c_j(t)\,\psi_j(p). \tag{6}$$

Substitute (6) into (3). Then we obtain

$$\frac{d c_j(t)}{d t} + \lambda_j c_j(t) = 0 \tag{7}$$

for each $j$. The solution of (7) is given by $c_j(t) = b_j e^{-\lambda_j t}$. So we have the solution

$$g(p, t) = \sum_{j=0}^{\infty} b_j e^{-\lambda_j t}\,\psi_j(p).$$

At $t = 0$, we have

$$g(p, 0) = \sum_{j=0}^{\infty} b_j\,\psi_j(p) = f(p).$$

The coefficients $b_j$ must then be the Fourier coefficients $f_j = \langle f, \psi_j\rangle$. $\blacksquare$
The implication of Theorem 1 is obvious. The high spatial frequency components of the solution decay exponentially faster than the low-frequency components as time $t$ increases, so high-frequency noise is smoothed out first. This is the basis of many PDE-based image smoothing methods. PDEs involving self-adjoint linear partial differential operators such as the Laplace–Beltrami operator or the iterated Laplacian have been widely used in medical image analysis as a way to smooth either scalar or vector data along anatomical boundaries [1], [5], [6], [10]. These methods directly solve the PDE using standard numerical techniques such as the finite difference method or the finite element method. The problem with directly solving PDEs is the numerical instability and the complexity of setting up the numerical scheme. WFS differs from these previous methods in that we only need to estimate the Fourier coefficients in a hierarchical fashion to solve the PDE.
A. Weighted Fourier Series

We will investigate the properties of the finite expansion of (5), denoted by

$$S_k f(p) = \sum_{j=0}^{k} e^{-\lambda_j t}\,f_j\,\psi_j(p).$$

This expansion will be called the weighted Fourier series (WFS). By rearranging the inner product, the WFS can be rewritten as kernel smoothing

$$S_k f(p) = \int_{\mathcal{M}} K_t(p, q)\,f(q)\,d\mu(q) \tag{8}$$
$$= K_t * f(p) \tag{9}$$

with the symmetric positive definite kernel $K_t$ given by

$$K_t(p, q) = \sum_{j=0}^{k} e^{-\lambda_j t}\,\psi_j(p)\,\psi_j(q). \tag{10}$$

The subscript $t$ is introduced to show the dependence of the kernel on time $t$. This shows that the solution of the Cauchy problem (3) can be interpreted as kernel smoothing.
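The equivalence between attenuating Fourier coefficients (Theorem 1) and smoothing with the kernel (10) can be checked numerically. The sketch below uses the circle rather than a cortical manifold: the orthonormal trigonometric basis and the eigenvalues $\lambda_j = j^2$ of the 1-D periodic Laplacian stand in for $\psi_j$ and $\lambda_j$, and all names are our illustrative choices.

```python
import numpy as np

def circle_basis(x, max_freq):
    """Orthonormal Fourier basis on [0, 2*pi) and eigenvalues of -d^2/dx^2."""
    cols = [np.full_like(x, 1.0 / np.sqrt(2 * np.pi))]
    lams = [0.0]
    for j in range(1, max_freq + 1):
        cols.append(np.cos(j * x) / np.sqrt(np.pi))
        cols.append(np.sin(j * x) / np.sqrt(np.pi))
        lams.extend([j * j, j * j])
    return np.column_stack(cols), np.array(lams)

n, t = 256, 0.05
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
dx = 2 * np.pi / n
Psi, lam = circle_basis(x, 20)

rng = np.random.default_rng(0)
f = np.cos(x) + 0.3 * rng.standard_normal(n)   # noisy signal

# Route 1: attenuate the Fourier coefficients by exp(-lambda*t), as in (5)
coeffs = Psi.T @ f * dx                         # discretized inner products <f, psi_j>
g_spectral = Psi @ (np.exp(-lam * t) * coeffs)

# Route 2: build K_t(p,q) = sum_j e^{-lambda_j t} psi_j(p) psi_j(q) and
# smooth by discretized integration, as in (8)-(10)
K = (Psi * np.exp(-lam * t)) @ Psi.T
g_kernel = K @ f * dx

print(np.max(np.abs(g_spectral - g_kernel)))    # the two routes agree
```

The two routes are algebraically identical, so the printed difference is at floating point level; on a triangulated cortical surface, the same identity holds with numerically computed eigenfunctions.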
When the differential operator is the Laplace–Beltrami operator $\Delta$, the Cauchy problem (3) becomes an isotropic diffusion equation. For this particular case, $K_t$ is called the heat kernel with bandwidth $t$ [7], [9]. For an arbitrary cortical manifold, the basis functions $\psi_j$ can be computed and the exact shape of the heat kernel can be determined numerically. Although this can be done by setting up a huge finite element method [39], it is not a trivial numerical computation. A simpler approach is to use the first order approximation of the heat kernel for small bandwidth and iteratively apply it up to the desired bandwidth [9].
WFS can be reformulated as a kernel regression problem [17]. At each fixed point $p$, we estimate the unknown signal (4) with a smooth function $h$ by minimizing the integral of the weighted squared distance between $f$ and $h$:

$$\int_{\mathcal{M}} K_t(p, q)\,\big|f(q) - h(p)\big|^2\,d\mu(q). \tag{11}$$

The minimizer of (11) is given by the following theorem.

Theorem 2:

$$\arg\min_{h} \int_{\mathcal{M}} K_t(p, q)\,\big|f(q) - h(p)\big|^2\,d\mu(q) = \frac{\int_{\mathcal{M}} K_t(p, q)\,f(q)\,d\mu(q)}{\int_{\mathcal{M}} K_t(p, q)\,d\mu(q)}.$$

Proof: Since the integral is quadratic in $h(p)$, the minimum exists and is obtained when

$$\frac{\partial}{\partial h(p)} \int_{\mathcal{M}} K_t(p, q)\,\big|f(q) - h(p)\big|^2\,d\mu(q) = 0.$$

Solving the equation, we obtain the result. $\blacksquare$

Theorem 2 shows that WFS is the solution of a weighted least squares minimization problem.

When $L$ is the Laplace–Beltrami operator with $k = \infty$, the heat kernel $K_t$ is a probability distribution in $\mathcal{M}$, i.e.,

$$\int_{\mathcal{M}} K_t(p, q)\,d\mu(q) = 1.$$

For this special case, Theorem 2 simplifies to

$$K_t * f(p) = \arg\min_{h} \int_{\mathcal{M}} K_t(p, q)\,\big|f(q) - h(p)\big|^2\,d\mu(q).$$

In minimizing the weighted least squares in Theorem 2, it is possible to restrict the function space $L^2(\mathcal{M})$ to a finite subspace that is more useful in numerical implementation. Let $\mathcal{H}_k$ be the subspace spanned by the basis $\{\psi_0, \ldots, \psi_k\}$. Then we have the following theorem.

Theorem 3: If $h = \sum_{j=0}^{k} \beta_j \psi_j \in \mathcal{H}_k$ and $\int_{\mathcal{M}} K_t(p, q)\,d\mu(q) = 1$, then

$$\sum_{j=0}^{k} e^{-\lambda_j t}\,f_j\,\psi_j(p) = \arg\min_{h \in \mathcal{H}_k} \int_{\mathcal{M}}\int_{\mathcal{M}} K_t(p, q)\,\big|f(q) - h(p)\big|^2\,d\mu(q)\,d\mu(p).$$

Proof: Let $h = \sum_{j=0}^{k} \beta_j \psi_j$. The integral is written as the functional

$$F(\beta_0, \ldots, \beta_k) = \int_{\mathcal{M}}\int_{\mathcal{M}} K_t(p, q)\,\Big|f(q) - \sum_{j=0}^{k} \beta_j \psi_j(p)\Big|^2\,d\mu(q)\,d\mu(p).$$

Since the functional is quadratic in the $\beta_j$, the minimum exists and is obtained when $\partial F / \partial \beta_j = 0$ for all $j$. By differentiating $F$, rearranging terms, and using the orthonormality of the $\psi_j$ together with $\int_{\mathcal{M}} K_t(p, q)\,d\mu(q) = 1$, we obtain $\beta_j = e^{-\lambda_j t} f_j$. $\blacksquare$

The theorem holds if $K_t$ is a probability distribution. Any other symmetric positive definite kernel can be made into a probability distribution by renormalizing it, so Theorem 3 is applicable to a wide variety of kernels.
B. Isotropic Diffusion on the Unit Sphere

Let us apply the WFS theory to the unit sphere, denoted by $S^2$. Since algebraic surfaces provide basis functions in a closed form, it is not necessary to construct a numerical basis [39]. The WFS in $S^2$ is given by the solution of isotropic diffusion. The spherical parametrization of $S^2$ is given by the polar angle $\theta \in [0, \pi]$ and the azimuthal angle $\varphi \in [0, 2\pi)$:

$$p = (\sin\theta\cos\varphi,\ \sin\theta\sin\varphi,\ \cos\theta). \tag{12}$$

The spherical Laplacian $\Delta$ corresponding to the parametrization (12) is given by

$$\Delta = \frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\Big(\sin\theta\,\frac{\partial}{\partial\theta}\Big) + \frac{1}{\sin^2\theta}\frac{\partial^2}{\partial\varphi^2}.$$

There are $2l + 1$ eigenfunctions $Y_{lm}$, $-l \le m \le l$, corresponding to the same eigenvalue $l(l+1)$, satisfying

$$\Delta Y_{lm} + l(l+1)\,Y_{lm} = 0.$$

$Y_{lm}$ is called the spherical harmonic of degree $l$ and order $m$ [12], [50]. It is given explicitly as

$$Y_{lm} = \begin{cases} c_{lm}\,P_l^{|m|}(\cos\theta)\,\sin(|m|\varphi), & -l \le m \le -1,\\[2pt] \dfrac{c_{l0}}{\sqrt{2}}\,P_l^{0}(\cos\theta), & m = 0,\\[2pt] c_{lm}\,P_l^{m}(\cos\theta)\,\cos(m\varphi), & 1 \le m \le l, \end{cases}$$

where

$$c_{lm} = \sqrt{\frac{2l+1}{2\pi}\,\frac{(l-|m|)!}{(l+|m|)!}}$$

and $P_l^{m}$ is the associated Legendre polynomial of order $m$. Unlike much of the previous imaging literature on spherical harmonics that used the complex-valued spherical harmonics [5], [23], [27], [44], only real-valued spherical harmonics are used throughout the paper for convenience in setting up a real-valued stochastic model.

For $f, g \in L^2(S^2)$, we define the inner product as

$$\langle f, g\rangle = \int_{0}^{2\pi}\!\!\int_{0}^{\pi} f(\theta, \varphi)\,g(\theta, \varphi)\,d\mu(\theta, \varphi),$$

where the Lebesgue measure is $d\mu(\theta, \varphi) = \sin\theta\,d\theta\,d\varphi$. Then with respect to the inner product, the spherical harmonics satisfy the orthonormal condition

$$\langle Y_{lm}, Y_{l'm'}\rangle = \delta_{ll'}\,\delta_{mm'},$$

where $\delta_{ll'}$ is Kronecker's delta. The kernel is given by

$$K_t(p, q) = \sum_{l=0}^{k}\sum_{m=-l}^{l} e^{-l(l+1)t}\,Y_{lm}(p)\,Y_{lm}(q). \tag{13}$$

The associated WFS is given by

$$\sum_{l=0}^{k}\sum_{m=-l}^{l} e^{-l(l+1)t}\,f_{lm}\,Y_{lm}(p)$$

with Fourier coefficients $f_{lm} = \langle f, Y_{lm}\rangle$. We will call this form of WFS the weighted-SPHARM. The special case $t = 0$ is the traditional SPHARM representation used in representing the Cartesian coordinates of anatomical boundaries [23], [27], [44]. Consider the subspace

$$\mathcal{H}_k = \Big\{\sum_{l=0}^{k}\sum_{m=-l}^{l} \beta_{lm}\,Y_{lm} : \beta_{lm} \in \mathbb{R}\Big\},$$

which is spanned by the spherical harmonics up to the $k$-th degree. Then the SPHARM satisfies a least squares minimization problem, different from Theorem 2 and Theorem 3.

Theorem 4:

$$\sum_{l=0}^{k}\sum_{m=-l}^{l} f_{lm}\,Y_{lm}(p) = \arg\min_{h \in \mathcal{H}_k} \|f - h\|^2. \tag{14}$$
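The real basis above can be evaluated directly from its explicit formula, and the orthonormality condition checked by numerical quadrature (Gauss–Legendre in $\cos\theta$, a uniform grid in $\varphi$). A sketch using SciPy's associated Legendre function `lpmv`; the helper names and grid sizes are our choices:

```python
import numpy as np
from math import factorial
from scipy.special import lpmv

def real_sph_harm(l, m, theta, phi):
    """Real spherical harmonic Y_lm (polar angle theta, azimuthal angle phi),
    following the c_lm normalization given in the text."""
    am = abs(m)
    c = np.sqrt((2 * l + 1) / (2 * np.pi) * factorial(l - am) / factorial(l + am))
    P = lpmv(am, l, np.cos(theta))            # associated Legendre P_l^{|m|}
    if m < 0:
        return c * P * np.sin(am * phi)
    if m > 0:
        return c * P * np.cos(m * phi)
    return c / np.sqrt(2) * P

# Quadrature on S^2: Gauss-Legendre nodes in cos(theta), uniform grid in phi.
nodes, weights = np.polynomial.legendre.leggauss(64)
phi = np.linspace(0, 2 * np.pi, 128, endpoint=False)
TH, PH = np.meshgrid(np.arccos(nodes), phi, indexing="ij")
W = weights[:, None] * (2 * np.pi / phi.size)  # quadrature weights for sin(theta) dtheta dphi

def inner(l1, m1, l2, m2):
    """Numerical <Y_l1m1, Y_l2m2> over the unit sphere."""
    return np.sum(real_sph_harm(l1, m1, TH, PH) * real_sph_harm(l2, m2, TH, PH) * W)

print(inner(2, 1, 2, 1))   # close to 1: unit norm
print(inner(2, 1, 3, 1))   # close to 0: orthogonality across degrees
```

Note that `lpmv` includes the Condon–Shortley phase $(-1)^m$; this flips the sign of some basis functions but does not affect orthonormality or any least squares fit built on the basis.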
III. NUMERICAL IMPLEMENTATION
In constructing the WFS representation, all we need is to estimate the Fourier coefficients $f_{lm}$. There are three major techniques for computing the Fourier coefficients. The first method numerically integrates the Fourier coefficients over a high resolution triangle mesh [7]. Although this approach is the simplest to implement numerically and the most accurate, the computation is extremely slow due to the brute force nature of the technique. The second method is based on the fast Fourier transform (FFT) [5], [27]. The drawback of FFT is the need for a predefined regular grid system, so if the mesh topology is different for each subject, as in the case of FreeSurfer [18], a time consuming interpolation is needed. The third method is based on solving a system of linear equations [23], [43], [44] that minimizes the least squares problem in Theorem 4. This is the most widely used numerical technique in the SPHARM literature. However, the direct application of the least squares estimation is not desirable when the size of the linear equation is extremely large.
Let

$$h(p) = \sum_{l=0}^{k}\sum_{m=-l}^{l} \beta_{lm}\,Y_{lm}(p).$$

Given $n$ nodes $p_1, \ldots, p_n$ in the mesh, the discretization of (14) is given by

$$\min_{\beta_{lm}} \sum_{i=1}^{n}\Big|f(p_i) - \sum_{l=0}^{k}\sum_{m=-l}^{l} \beta_{lm}\,Y_{lm}(p_i)\Big|^2. \tag{15}$$

The minimum of (15) is obtained when

$$\frac{\partial}{\partial\beta_{lm}} \sum_{i=1}^{n}\Big|f(p_i) - \sum_{l'=0}^{k}\sum_{m'=-l'}^{l'} \beta_{l'm'}\,Y_{l'm'}(p_i)\Big|^2 = 0 \tag{16}$$

for all $l, m$. Equation (16) is referred to as the normal equation in the statistical literature. The normal equation is usually solved via a matrix inversion. Let

$$\mathbf{f} = \big(f(p_1), \ldots, f(p_n)\big)^\top,$$

and let

$$\mathbf{Y}_l = \begin{pmatrix} Y_{l,-l}(p_1) & \cdots & Y_{l,l}(p_1)\\ \vdots & \ddots & \vdots\\ Y_{l,-l}(p_n) & \cdots & Y_{l,l}(p_n) \end{pmatrix}$$

be the $n \times (2l+1)$ submatrix consisting of the $l$-th degree spherical harmonics evaluated at each node $p_i$. Then (16) can be rewritten in the following matrix form:

$$\mathbf{Y}\boldsymbol{\beta} = \mathbf{f} \tag{17}$$

with the design matrix $\mathbf{Y} = [\mathbf{Y}_0, \mathbf{Y}_1, \ldots, \mathbf{Y}_k]$ and unknown parameter vector $\boldsymbol{\beta} = (\beta_{0,0}, \ldots, \beta_{k,k})^\top$. The linear system is solved via

$$\widehat{\boldsymbol{\beta}} = (\mathbf{Y}^\top\mathbf{Y})^{-1}\mathbf{Y}^\top\mathbf{f}. \tag{18}$$

The problem with this widely used formulation is that the size of the matrix $\mathbf{Y}$ is $n \times (k+1)^2$, which becomes fairly large and may not fit in typical computer memory. So it becomes impractical to perform the matrix operation (18) directly. This is true for many cortical surface extraction tools such as FreeSurfer [18], which produce high resolution meshes sampled at more than 40 000 vertices for each hemisphere.
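For meshes small enough to fit in memory, the least squares problem (17) can be solved without forming $(\mathbf{Y}^\top\mathbf{Y})^{-1}$ explicitly; an SVD-based solver such as `numpy.linalg.lstsq` is the numerically safer route. A small synthetic sketch, with random points standing in for mesh nodes and all helper names and sizes being our illustrative choices:

```python
import numpy as np
from math import factorial
from scipy.special import lpmv

def real_sph_harm(l, m, theta, phi):
    # real-valued spherical harmonic with the c_lm normalization of the text
    am = abs(m)
    c = np.sqrt((2 * l + 1) / (2 * np.pi) * factorial(l - am) / factorial(l + am))
    P = lpmv(am, l, np.cos(theta))
    if m < 0:
        return c * P * np.sin(am * phi)
    if m > 0:
        return c * P * np.cos(m * phi)
    return c / np.sqrt(2) * P

def design_matrix(theta, phi, k):
    """n x (k+1)^2 design matrix Y = [Y_0, Y_1, ..., Y_k] of eq. (17)."""
    cols = [real_sph_harm(l, m, theta, phi)
            for l in range(k + 1) for m in range(-l, l + 1)]
    return np.column_stack(cols)

rng = np.random.default_rng(1)
n, k = 5000, 10
theta = np.arccos(rng.uniform(-1, 1, n))       # nodes p_i roughly uniform on S^2
phi = rng.uniform(0, 2 * np.pi, n)

# synthetic measurement: a single degree-2 harmonic plus noise
f = 3.0 * real_sph_harm(2, 1, theta, phi) + 0.1 * rng.standard_normal(n)

Y = design_matrix(theta, phi, k)               # shape (5000, 121)
beta, *_ = np.linalg.lstsq(Y, f, rcond=None)   # solves (17) in the least squares sense

idx = 1 + 3 + (1 + 2)        # column of (l=2, m=1): degrees 0 and 1 occupy 1+3 columns
print(round(beta[idx], 2))   # recovered coefficient, close to 3.0
```

With $n = 40\,000$ nodes and a high degree $k$, the design matrix alone is what makes this direct route impractical, which is the motivation for the IRF algorithm below.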
This computational bottleneck can be overcome by breaking the least squares problem in the subspace $\mathcal{H}_k$ into smaller subspaces using the iterative residual fitting (IRF) algorithm [43]. The IRF algorithm was first introduced in [43], although the correctness of the algorithm was not given. In this paper, we present Theorem 5, which proves the correctness of the IRF for the first time. The IRF algorithm can also be used in estimating SPHARM coefficients by letting the bandwidth $t = 0$ in the algorithm.
A. Iterative Residual Fitting (IRF) Algorithm

Decompose the subspace $\mathcal{H}_k$ into smaller subspaces as the direct sum

$$\mathcal{H}_k = \mathcal{I}_0 \oplus \mathcal{I}_1 \oplus \cdots \oplus \mathcal{I}_k,$$

where the subspace $\mathcal{I}_l$ is spanned by the $l$-th degree spherical harmonics only. Then the IRF algorithm estimates the Fourier coefficients in each subspace $\mathcal{I}_l$ iteratively, increasing the degree from 0 to $k$. Suppose we have estimated the coefficients $\widehat{\beta}_{l'm'}$ up to degree $l - 1$ somehow. Then the residual vector based on this estimation is given by

$$\mathbf{r} = \mathbf{f} - \sum_{l'=0}^{l-1}\sum_{m'=-l'}^{l'} \widehat{\beta}_{l'm'}\,\mathbf{Y}_{l'm'}, \tag{19}$$

where $\mathbf{Y}_{l'm'}$ denotes the column of the design matrix corresponding to $Y_{l'm'}$. At the next degree $l$, we estimate the coefficients $\beta_{l,-l}, \ldots, \beta_{l,l}$ by minimizing the difference between the residual $\mathbf{r}$ and $\mathbf{Y}_l\boldsymbol{\beta}_l$. This is formally stated as the following theorem.
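The degree-by-degree structure of IRF can be sketched as follows. This is an illustration of the idea rather than the paper's implementation: with randomly sampled nodes the degree blocks are only approximately orthogonal, so the estimates approximate the full least squares solution (the conditions under which the iteration is exactly correct are what Theorem 5 establishes).

```python
import numpy as np
from math import factorial
from scipy.special import lpmv

def real_sph_harm(l, m, theta, phi):
    # real-valued spherical harmonic with the c_lm normalization of the text
    am = abs(m)
    c = np.sqrt((2 * l + 1) / (2 * np.pi) * factorial(l - am) / factorial(l + am))
    P = lpmv(am, l, np.cos(theta))
    if m < 0:
        return c * P * np.sin(am * phi)
    if m > 0:
        return c * P * np.cos(m * phi)
    return c / np.sqrt(2) * P

def irf_fit(f, theta, phi, k):
    """Iterative residual fitting: one small (2l+1)-column least squares
    problem per degree l on the current residual, instead of a single
    n x (k+1)^2 system."""
    residual = np.array(f, dtype=float)
    coeffs = {}
    for l in range(k + 1):
        Yl = np.column_stack([real_sph_harm(l, m, theta, phi)
                              for m in range(-l, l + 1)])     # n x (2l+1) block
        beta_l, *_ = np.linalg.lstsq(Yl, residual, rcond=None)
        residual = residual - Yl @ beta_l     # update the residual, as in (19)
        for m, b in zip(range(-l, l + 1), beta_l):
            coeffs[(l, m)] = b
    return coeffs, residual

rng = np.random.default_rng(2)
n = 4000
theta = np.arccos(rng.uniform(-1, 1, n))
phi = rng.uniform(0, 2 * np.pi, n)
f = 2.0 * real_sph_harm(3, -2, theta, phi) + 0.05 * rng.standard_normal(n)

coeffs, res = irf_fit(f, theta, phi, k=6)
print(round(coeffs[(3, -2)], 2))    # recovered coefficient, close to 2.0
```

The largest system ever solved here is $n \times (2k+1)$ rather than $n \times (k+1)^2$, which is what makes the algorithm tractable on dense cortical meshes.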
