
Journal ArticleDOI

Establishing correlations of scalp field maps with other experimental variables using covariance analysis and resampling methods.

01 Jun 2008-Clinical Neurophysiology (Elsevier)-Vol. 119, Iss: 6, pp 1262-1270

TL;DR: Covariance mapping combined with bootstrapping methods has high statistical power and yields unique and directly interpretable results in EEG/MEG scalp data analysis.
Topics: Covariance (55%), Resampling (54%), Bootstrapping (statistics) (54%), Covariance mapping (52%), Analysis of covariance (52%)


Establishing correlations of scalp field maps with other experimental variables using covariance analysis and resampling methods

Thomas Koenig a,*, Lester Melie-García b, Maria Stein a, Werner Strik a, Christoph Lehmann a

a Department of Psychiatric Neurophysiology, University Hospital of Psychiatry, Bolligenstr. 111, 3000 Bern 60, Switzerland
b Neuroimaging Department, Cuban Neuroscience Center, Calle 25 esq. 158, Cubanacan, Playa, Havana, Cuba

* Corresponding author. Tel.: +41 31 930 9369; fax: +41 31 930 9961. E-mail address: thomas.koenig@puk.unibe.ch (T. Koenig).

Accepted 26 December 2007
Abstract

Objective: In EEG/MEG experiments, increasing the number of sensors improves the spatial resolution of the results. However, the standard statistical methods are inappropriate for these multivariate, highly correlated datasets. We introduce a procedure to identify spatially extended scalp fields that correlate with some external, continuous measure (reaction-time, performance, clinical status) and to test their significance.
Methods: We formally deduce that the channel-wise covariance of some experimental variable with scalp field data directly represents intracerebral sources associated with that variable. We furthermore show how the significance of such a representation can be tested with resampling techniques.
Results: Simulations showed that depending on the number of channels and subjects, effects can be detected already at low signal-to-noise ratios. In a sample analysis of real data, we found that foreign-language evoked ERP data were significantly associated with foreign-language proficiency. Inverse solutions of the extracted covariances pointed to sources in language-related areas.
Conclusions: Covariance mapping combined with bootstrapping methods has high statistical power and yields unique and directly interpretable results.
Significance: The introduced methodology overcomes some of the ‘traditional’ statistical problems in EEG/MEG scalp data analysis. Its application can improve the reproducibility of results in the field of EEG/MEG.
© 2008 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Keywords: ERP; Topography; Correlation; Statistics; Randomization; Inverse solution
1. Introduction
Technical improvements have made it possible to substantially increase the number of scalp sensors in EEG/MEG and ERP/ERF experiments. This higher spatial sampling frequency is a relevant factor for the quality of results obtained in electrophysiological experiments. First, the increase in spatial information reduces spatial aliasing (Gevins, 1996; Luu et al., 2001) and improves the sensitivity and specificity of the results. Second, the accuracy of EEG/MEG and ERP/ERF inverse solutions improves significantly when high-density electrode arrays are used (see Michel et al., 2004, for a review).
However, the increase in the number of sensors results in an increasingly multivariate dataset that is increasingly correlated in space. When scalp field data are used to study the effects of some experimental conditions, such data require adequate statistical treatment. In many studies, the statistical approaches chosen to analyze multi-channel scalp field data do not take the relations between sensors properly into account and disregard the physical basis of the signals to be analyzed. Namely, many studies employed strategies where some sensors or groups of sensors are selected a-priori.
Then, standard univariate statistics are used and the
different recording sites are considered as repeated
(within-subject) measures. The additional information
obtained with higher spatial sampling is thus often poorly
exploited by the statistics applied.
The univariate approach is furthermore problematic
from a physics point of view: the function that relates the
activity of a given point source in the brain to a measurable
electric and/or magnetic field on the scalp (the so-called
leadfield) implies that any source in the brain produces a
field extending over the entire scalp surface. With several
sources simultaneously active, the measured scalp field
becomes the sum of the scalp fields produced by those
sources (Mosher et al., 1999).
In terms of statistics of EEG/MEG and ERP/ERF data, this implies that the basic entity for analysis should be the scalp electric field. Furthermore, since effects of intracerebral generators on the scalp fields are additive, the effects of a difference in processing in two experimental conditions are directly reflected as the difference field between the scalp fields evoked by those two conditions. The difference field is thus exactly the field produced by those intracerebral generators that account for the difference between the conditions. Accordingly, difference maps have been employed routinely in EEG and ERP studies (e.g. Duffy et al., 1981; Steger et al., 2000).
In order to establish the statistical significance of such difference maps, one can use standard multivariate statistical approaches such as MANOVA (Vasey and Thayer, 1987). However, a MANOVA requires that there are more observations than sensors, a condition that becomes increasingly difficult to meet with an increasing number of sensors. Furthermore, since the data are spatially correlated, the degrees of freedom are much lower than the number of sensors suggests. Therefore, it has become increasingly popular to use multivariate randomization statistics to establish the significance of a difference map measured between two conditions (Karniski et al., 1994; Galan et al., 1997; Maris, 2004; Greenblatt and Pflieger, 2004). These procedures are computationally more expensive, but require very few assumptions and have high statistical power. The procedure to compute such a randomization statistic for scalp field data is simple and straightforward. Assume we have scalp field maps of a series of subjects recorded under two conditions. In a first step, the scalp field maps are averaged separately for both conditions. Then, the difference map is computed. The total amplitude of this difference map is an indicator of the strength of the difference and can easily be measured using the Global Field Power (i.e. the standard deviation across sensors, Lehmann and Skrandies, 1980). Next, the amplitude of the difference map under the null-hypothesis is established. This is achieved by randomly shuffling the two conditions (either within subject, for paired designs, or across subjects, for unpaired designs). The field maps of the two conditions are again averaged across subjects, and the difference map between the condition mean maps is computed. The total amplitude of this randomly obtained difference therefore represents a value obtained under the null-hypothesis. By repeating this randomization many times, one can obtain a good estimate of the distribution of the total map difference under the null-hypothesis. The probability that the total amplitude of the difference map obtained from the two real conditions is random is then defined as the percentage of randomizations where this amplitude was smaller than the amplitude of the randomly obtained difference maps. This randomization approach has the advantage that it is fully multivariate, that it is based on a (realistic) additive model of scalp fields, and that it does not require any a-priori assumptions about the distribution of the variables. It has been used in a series of studies (e.g. Kondakor et al., 1995; Strik et al., 1998).
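As a concrete illustration of the procedure just described, the following sketch (not taken from the paper; array names, shapes, and the number of randomizations are assumptions of this edit) implements the paired-design variant in Python/NumPy:

```python
import numpy as np

def gfp(field):
    """Global Field Power: the standard deviation of a map across sensors."""
    return np.std(field)

def randomization_test_difference(erp_a, erp_b, n_rand=5000, seed=0):
    """Paired randomization test on the GFP of the mean difference map.

    erp_a, erp_b : arrays of shape (n_subjects, n_sensors), one scalp field
    map per subject and condition.  Returns the observed GFP of the mean
    difference map and its one-tailed p-value.
    """
    rng = np.random.default_rng(seed)
    observed = gfp(erp_a.mean(axis=0) - erp_b.mean(axis=0))
    count = 0
    for _ in range(n_rand):
        # For a paired design, randomly swap the condition labels per subject.
        swap = rng.integers(0, 2, size=erp_a.shape[0]).astype(bool)
        a = np.where(swap[:, None], erp_b, erp_a)
        b = np.where(swap[:, None], erp_a, erp_b)
        # Count randomizations whose difference amplitude exceeds the observed one.
        if gfp(a.mean(axis=0) - b.mean(axis=0)) > observed:
            count += 1
    return observed, count / n_rand
```

For an unpaired design, the shuffling step would instead permute the assignment of subjects to the two groups before averaging.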
Now how can one proceed if one does not have two discrete conditions, but some continuous external variable? The additivity of EEG/MEG fields implies that if there is a set of sources with activation that is linearly related to the external variable, this will result in a single scalp field that is added to the measurements at the sensors, proportionally to the external variable. Since the absence of the effect of the external variable implies the absence of the field generated by that effect, one can further impose that the regression line crosses the origin. Such a relation can easily be assessed using the covariance, across observations, of all the single sensor signals with the external variable. In order to establish whether such a covariance scalp field is of statistical significance, one can again use resampling methods.
The aim of the current paper is to (a) introduce the methodology to extract such covariance scalp fields and to test their statistical significance using resampling methods, (b) illustrate the utility of such a method using foreign-language evoked potentials in subjects with varying language proficiency and (c) relate the method to other methods such as partial least squares (PLS, see McIntosh et al., 1996; Lobaugh et al., 2001).
2. Methods
Notation: In this paper, bold symbols denote a column vector or matrix and non-bold symbols a scalar magnitude. Superscript T denotes the transpose. The notation $N(\mu, \Sigma)$ represents a normal distribution with mean $\mu$ and covariance $\Sigma$. The symbol $\sim$ denotes "distributed as", e.g. $x \sim N(\mu, \Sigma)$ means the random variable x is normally distributed with parameters $\mu$ and $\Sigma$. The symbol 'tr' denotes the trace of a matrix. The symbol $\mathbf{1}_n$ denotes a column vector of length n with all elements equal to 1.
We assume a linear relation between the current density strength in a voxel and the behavioral variable (e.g. reaction-time):

$$\mathbf{j}_i = X_k\,\boldsymbol{\theta}_i + \mu\,\mathbf{1}_3 \qquad (1)$$
where $\mathbf{j}_i = [j_{ix}, j_{iy}, j_{iz}]^T$ is the current density vector in voxel 'i' and $\boldsymbol{\theta}_i = [\theta_{ix}, \theta_{iy}, \theta_{iz}]^T$ is the vector of linear regression coefficients, $X_k$ is the behavioral variable for subject 'k' and $\mu$ is the mean effect term.
For all $N_g$ current density sources in the brain we obtain the following equation:

$$\mathbf{J}_k = X_k\,\boldsymbol{\theta} + \mu\,\mathbf{1}_{3N_g} \qquad (2)$$
where $\mathbf{J}_k = \left[\mathbf{j}_1^T, \ldots, \mathbf{j}_{N_g}^T\right]^T$ and $\boldsymbol{\theta} = \left[\boldsymbol{\theta}_1^T, \ldots, \boldsymbol{\theta}_{N_g}^T\right]^T$. The mathematical relation between the primary current density and the voltage recorded in the electrode array on the scalp is obtained through the leadfield or gain matrix $\mathbf{K}_{N_e \times 3N_g}$. The voltage measured on the scalp can then be expressed by:

$$\mathbf{v}_k = \mathbf{K}\mathbf{J}_k = X_k\,\mathbf{K}\boldsymbol{\theta} + \mu\,\mathbf{K}\mathbf{1}_{3N_g} \qquad (3)$$
If we define the term $\boldsymbol{\beta} = \mathbf{K}\boldsymbol{\theta}$ as the voltage regression coefficient, and $\boldsymbol{\mu} = \mu\,\mathbf{K}\mathbf{1}_{3N_g}$ as the voltage mean effect coefficient, we can rewrite the previous equation as:

$$\mathbf{v}_k = X_k\,\boldsymbol{\beta} + \boldsymbol{\mu} \qquad (4)$$

where $\mathbf{v}_k = [v_{1k}, \ldots, v_{N_e k}]^T$ is the voltage vector over the $N_e$ electrodes for subject 'k'.
For $N_s$ subjects, equation (4) becomes:

$$\mathbf{V} = \mathbf{X}\boldsymbol{\beta}^T + \mathbf{1}_{N_s}\boldsymbol{\mu}^T \qquad (5)$$
where $\mathbf{X} = [X_1, \ldots, X_{N_s}]^T$ is a matrix that encloses the behavioral variable value for all $N_s$ subjects. The voltage matrix is organized as

$$\mathbf{V} = \begin{bmatrix} \mathbf{v}_1^T \\ \vdots \\ \mathbf{v}_{N_s}^T \end{bmatrix} = \begin{bmatrix} v_{11} & \cdots & v_{1N_e} \\ \vdots & \ddots & \vdots \\ v_{N_s 1} & \cdots & v_{N_s N_e} \end{bmatrix}$$

One row 'i' of the matrix V is the topography of subject 'i'. One column 'j' is the voltage of electrode 'j' for all subjects.
Equation (5) can be rewritten in a more compact way as:

$$\mathbf{V} = \tilde{\mathbf{X}}\,\tilde{\boldsymbol{\beta}} \qquad (6)$$
where $\tilde{\mathbf{X}} = \left[\mathbf{X}\;\; \mathbf{1}_{N_s}\right]$ and $\tilde{\boldsymbol{\beta}} = \begin{bmatrix} \boldsymbol{\beta}^T \\ \boldsymbol{\mu}^T \end{bmatrix}$. The observation equation is obtained from (6) by including an additive noise term:

$$\mathbf{V} = \tilde{\mathbf{X}}\,\tilde{\boldsymbol{\beta}} + \boldsymbol{\varepsilon} \qquad (7)$$

Equation (7) is a standard multivariate regression model, where $\tilde{\mathbf{X}}$ is the design matrix and $\tilde{\boldsymbol{\beta}}$ the regression coefficients.
The $\boldsymbol{\varepsilon}$ term is the experimental noise, and most applications assume that it has mean 0 and unknown covariance matrices $\boldsymbol{\Sigma}_e$ and $\boldsymbol{\Sigma}_s$ such that $\boldsymbol{\varepsilon} \sim N(\mathbf{0}, \boldsymbol{\Sigma}_s, \boldsymbol{\Sigma}_e)$, where $\boldsymbol{\varepsilon}$ is independent of X. The term $\boldsymbol{\Sigma}_e$ provides the covariance structure between EEG sensors. The matrix $\boldsymbol{\Sigma}_s$ expresses the covariance structure between subjects.
Under this noise assumption, the log-likelihood function for the data matrix V in terms of the parameters $\tilde{\boldsymbol{\beta}}$, $\boldsymbol{\Sigma}_s$ and $\boldsymbol{\Sigma}_e$ is given by:

$$l\left(\tilde{\boldsymbol{\beta}}, \boldsymbol{\Sigma}\right) = -\frac{1}{2} N_s \log\left|2\pi\boldsymbol{\Sigma}_s\right| - \frac{1}{2} N_e \log\left|\boldsymbol{\Sigma}_e\right| - \frac{1}{2}\operatorname{tr}\left[\left(\mathbf{V} - \tilde{\mathbf{X}}\tilde{\boldsymbol{\beta}}\right)\boldsymbol{\Sigma}_e^{-1}\left(\mathbf{V} - \tilde{\mathbf{X}}\tilde{\boldsymbol{\beta}}\right)^T\boldsymbol{\Sigma}_s^{-1}\right] \qquad (8)$$
In order to estimate the parameters in (7), we made use of the Bayesian formalism. The basics of Bayesian inference theory and the derivations of the parameter estimators are summarized in the Supplementary Appendixes B and C.
The design matrix $\tilde{\mathbf{X}}$ in our case is of full rank. If no prior information for the regression parameter $\tilde{\boldsymbol{\beta}}$ is considered, a maximum likelihood estimator is obtained. The $\tilde{\boldsymbol{\beta}}$ estimator is unique and given by the expression:

$$\hat{\tilde{\boldsymbol{\beta}}} = \left(\tilde{\mathbf{X}}^T\boldsymbol{\Sigma}_s^{-1}\tilde{\mathbf{X}}\right)^{-1}\tilde{\mathbf{X}}^T\boldsymbol{\Sigma}_s^{-1}\mathbf{V} \qquad (9)$$
If one assumes that there is no covariance across subjects, i.e. $\boldsymbol{\Sigma}_s = \nu\mathbf{I}$ (where $\nu$ is a variance term), one obtains

$$\hat{\tilde{\boldsymbol{\beta}}} = \left(\tilde{\mathbf{X}}^T\tilde{\mathbf{X}}\right)^{-1}\tilde{\mathbf{X}}^T\mathbf{V} \qquad (10)$$
(Mardia et al., 1979; note that this estimator does not depend on the covariance matrix $\boldsymbol{\Sigma}_e$). The term $\left(\tilde{\mathbf{X}}^T\tilde{\mathbf{X}}\right)^{-1}$ is a $2 \times 2$ symmetric matrix that can be represented by:

$$\mathbf{A} = \left(\tilde{\mathbf{X}}^T\tilde{\mathbf{X}}\right)^{-1} = \left(\begin{bmatrix}\mathbf{X}^T \\ \mathbf{1}_{N_s}^T\end{bmatrix}\begin{bmatrix}\mathbf{X} & \mathbf{1}_{N_s}\end{bmatrix}\right)^{-1} = \begin{bmatrix}\left(\mathbf{X}^T\mathbf{X}\right)^{-1} & 0 \\ 0 & N_s^{-1}\end{bmatrix} \qquad (11)$$
Without loss of generality, we can center the column of X to have mean 0, which is obtained by calculating the mean and subtracting it from the X column. It is convenient to separate the effect of the mean from the other independent variables. Then A is a diagonal matrix because $\mathbf{X}^T\mathbf{1}_{N_s} = 0$. The term $\left(\mathbf{X}^T\mathbf{X}\right)^{-1}$ is a scalar magnitude.
Using the result of equation (11), the voltage regression coefficient and the voltage mean effect have the following expressions:

$$\boldsymbol{\beta}^T = \left(\mathbf{X}^T\mathbf{X}\right)^{-1}\mathbf{X}^T\mathbf{V} = c\,\mathbf{X}^T\mathbf{V} \qquad (12)$$

$$\boldsymbol{\mu}^T = \left[\frac{1}{N_s}\sum_{i=1}^{N_s} V_{i1},\; \ldots,\; \frac{1}{N_s}\sum_{i=1}^{N_s} V_{iN_e}\right] \qquad (13)$$

where $c = \left(\mathbf{X}^T\mathbf{X}\right)^{-1}$.
Each element of the vector $\boldsymbol{\mu}$ is the mean for each electrode across all subjects.
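As a concrete illustration (a sketch assumed for this edit, not code from the paper), the estimators in equations (12) and (13) reduce to a few lines of Python/NumPy when, as in equation (10), no inter-subject covariance is modeled; the variable names are hypothetical:

```python
import numpy as np

def covariance_map(V, X):
    """Estimate the covariance map (eq. 12) and the mean map (eq. 13).

    V : (n_subjects, n_electrodes) scalp field data, one map per subject.
    X : (n_subjects,) behavioral variable.
    Returns beta (n_electrodes,) and mu (n_electrodes,).
    """
    X = np.asarray(X, dtype=float)
    Xc = X - X.mean()            # center X so that A in eq. (11) is diagonal
    c = 1.0 / (Xc @ Xc)          # c = (X^T X)^(-1), a scalar
    beta = c * (Xc @ V)          # eq. (12): beta^T = c X^T V
    mu = V.mean(axis=0)          # eq. (13): per-electrode mean across subjects
    return beta, mu
```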
When inter-subject covariance is modeled, the expressions for the voltage regression coefficient and the voltage mean effect are given by equations (13) and (14) of the Supplementary Appendix C. In this case it is necessary to estimate the covariance matrices $\boldsymbol{\Sigma}_e$ and $\boldsymbol{\Sigma}_s$, given by equations (17) and (23) of the Supplementary Appendix C. The estimation algorithm is defined in Supplementary Appendix C as well.
The amplitudes of the estimated covariance map $\boldsymbol{\beta}$ depend on the variance of V, on the variance of X, and on the strength of the relation between V and X. (The reference of $\boldsymbol{\beta}$ is identical to the reference of V; given formula (12), post-multiplying V with any matrix that defines a reference will result in $\boldsymbol{\beta}$ being multiplied by the same matrix.)
In order to obtain a global (across electrodes) measure of the size of the estimator $\boldsymbol{\beta}$, we calculated the Global Field Power measure (GFP, Lehmann and Skrandies, 1980) using the following equation:

$$d = \mathrm{GFP}(\boldsymbol{\beta}) = \sqrt{\frac{1}{N_e}\left(\boldsymbol{\beta} - \bar{\boldsymbol{\beta}}\right)^T\left(\boldsymbol{\beta} - \bar{\boldsymbol{\beta}}\right)} \qquad (14)$$

where $\bar{\beta} = \frac{1}{N_e}\sum_{j=1}^{N_e}\beta_j$ and $\bar{\boldsymbol{\beta}} = \bar{\beta}\,\mathbf{1}_{N_e}$.
Apart from the constant scaling factor 'c' and the elimination of the spatial baseline, the GFP is identical to the singular value used by Lobaugh et al. (2001). The GFP has the advantage that it is reference-independent (Lehmann and Skrandies, 1980).
In order to establish the significance of such a covariance scalp field $\boldsymbol{\beta}$, i.e. in order to estimate the probability that a covariance field with strength d can be obtained by chance, one randomizes the sequence of X, resulting in an $\mathbf{X}^*$. The strength $d^*$ of the covariance field resulting from $\mathbf{X}^*$ is then computed, using equations (12) and (14) (where no covariance between subjects is assumed), as follows:

$$d^* = \mathrm{GFP}\left(c\,\mathbf{V}^T\mathbf{X}^*\right) \qquad (15)$$

yielding a $d^*$ based solely on the null-hypothesis. When this randomization is repeated n times and n is sufficiently large, the distribution of $d^*$ under the null-hypothesis is approximated by the n values $d^*_1, \ldots, d^*_n$. Since the randomization destroys a possible physiological relation between V and X, the value of $d^*$ depends again on the variance of V, on the variance of X, and on a now randomly obtained strength of the relation between V and X. Since the variances of V and X are not affected by the randomization and can thus be considered as scaling factors that are constant for d and all $d^*$, differences between d and $d^*$ indicate solely differences in the strength of the relation between the real and the randomized datasets. Thus, the one-tailed probability p that the original d is part of the distribution of the $d^*_i$ is given by the percentage of randomization runs where $d^*_i$ is larger than the original d:

$$p = \frac{1}{n}\sum_{i=1}^{n} \mathbf{1}_{\left\{d^*_i > d\right\}} \qquad (16)$$
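A minimal sketch of this randomization test (equations (14)–(16)) in Python/NumPy, again assuming no inter-subject covariance; the function and variable names are hypothetical:

```python
import numpy as np

def gfp(beta):
    """Global Field Power of a map (eq. 14): RMS after removing the spatial mean."""
    b = beta - beta.mean()
    return np.sqrt((b @ b) / beta.size)

def tancova_p_value(V, X, n_rand=5000, seed=0):
    """Randomization test for the covariance map (eqs. 15 and 16).

    V : (n_subjects, n_electrodes) scalp field data.
    X : (n_subjects,) behavioral variable.
    Returns the observed GFP of the covariance map and the one-tailed p-value.
    """
    rng = np.random.default_rng(seed)
    Xc = np.asarray(X, dtype=float)
    Xc = Xc - Xc.mean()
    c = 1.0 / (Xc @ Xc)
    d_obs = gfp(c * (Xc @ V))              # d = GFP(beta) for the real pairing
    d_rand = np.empty(n_rand)
    for i in range(n_rand):
        Xs = rng.permutation(Xc)           # shuffle X to break the V-X relation
        d_rand[i] = gfp(c * (Xs @ V))      # eq. (15): d* = GFP(c V^T X*)
    p = np.mean(d_rand > d_obs)            # eq. (16)
    return d_obs, p
```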
In correlation statistics, one is often not only interested in the significance of the correlation, but also in the correlation coefficient, its confidence interval, and the fraction of common variance. In order to compute the correlation coefficient, the strength of the covariance map $\boldsymbol{\beta}$ in the ERP data V has to be computed case-wise. This strength s is defined as

$$\mathbf{s} = \mathbf{V}\boldsymbol{\beta} \qquad (17)$$

Using (12) we thus obtain

$$\mathbf{s} = c\,\mathbf{V}\mathbf{V}^T\mathbf{X} \qquad (18)$$

The correlation r is then defined as the Pearson correlation coefficient of s and X; the percent common variance is equal to $r^2$. The confidence interval of r can be estimated using bootstrapping methods: from the original sample of $N_s$ observations, m new subsets of the same size are randomly drawn by sampling with replacement (Efron and Tibshirani, 1993). For each sub-sample, $r^*$ is computed. The distribution of these $r^*$ values is corrected for non-normality using the Fisher transformation ($\operatorname{atanh}(x) = \frac{1}{2}\ln\left((1+x)/(1-x)\right)$, see Davison and Hinkley, 1997). In the Fisher-transformed distribution of $r^*$, the so-called studentized bootstrap confidence intervals are constructed from the bootstrap replicates (see Davison and Hinkley, 1997 for details). These studentized bootstrap confidence intervals are finally back-transformed to the original scale. Studentized confidence intervals are known to give the best overall coverage.
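The following sketch illustrates equations (17) and (18) together with the bootstrap procedure in Python/NumPy. For brevity it uses percentile intervals of the Fisher-transformed replicates rather than the studentized intervals described above; all names and defaults are assumptions of this edit:

```python
import numpy as np

def correlation_with_bootstrap_ci(V, X, n_boot=2000, alpha=0.05, seed=0):
    """Case-wise strength of the covariance map (eqs. 17/18), its Pearson
    correlation with X, and a bootstrap confidence interval computed on the
    Fisher-transformed scale (percentile version, a simplification of the
    studentized intervals used in the paper)."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    Xc = X - X.mean()
    c = 1.0 / (Xc @ Xc)
    s = c * (V @ V.T @ Xc)                 # eq. (18): s = c V V^T X
    r = np.corrcoef(s, X)[0, 1]            # Pearson correlation of s and X
    n = len(X)
    z_boot = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample subjects with replacement
        r_b = np.corrcoef(s[idx], X[idx])[0, 1]
        z_boot[i] = np.arctanh(r_b)        # Fisher transformation
    lo, hi = np.quantile(z_boot, [alpha / 2, 1 - alpha / 2])
    return r, (np.tanh(lo), np.tanh(hi))   # back-transform to the r scale
```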
Similar to the ANCOVA, the method presented here is a general linear model with one continuous variable. (In the ANCOVA, the continuous variable may or may not be a confounding factor of no interest.) Furthermore, it will be shown below that the method is an extension of a method for comparing ERP topographies between groups that has been labeled topographic analysis of variance (TANOVA, Strik et al., 1998; see Wirth et al., in press, for a multifactorial implementation), although so far the TANOVA has only been used for the comparison of two conditions. The TANOVA is thus similar to the ANOVA in the sense that it uses categorical independent variables, while the present method is similar to the ANCOVA in using one continuous independent variable. Accordingly, the method will be called TANCOVA.
3. Simulations
In order to test the sensitivity of the proposed model to noise, a series of simulations was computed. For each simulation, a dataset was generated consisting of an either 19- or 64-channel, zero-mean, normally distributed random map and a vector of either 12 or 50 also normally distributed random values that served as an external variable. The random map was multiplied by the random external variable to obtain simulated data that are compatible with the model outlined in formula (4). To these data, uncorrelated random noise was added that was scaled against the simulated data to obtain different signal-to-noise ratios (SNRs). Using these noisy random simulated data and the random external variable, the p-value was computed for each simulation run. Furthermore, using the squared correlation coefficient ($r^2$, which indicates the amount of common variance), the covariance map extracted by the simulation was compared to the random map used to generate the simulated data.
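A single simulation run of this kind could be sketched as follows (an illustrative reimplementation with assumed parameter names and defaults, not the authors' original simulation code):

```python
import numpy as np

def simulate_run(n_channels=64, n_subjects=50, snr=0.5, n_rand=1000, seed=0):
    """One simulation run along the lines described above: a random map scaled
    by a random external variable, plus uncorrelated noise scaled to the
    requested SNR.  Returns the TANCOVA p-value and the r^2 between the
    recovered covariance map and the true map."""
    rng = np.random.default_rng(seed)
    true_map = rng.standard_normal(n_channels)        # zero-mean random map
    X = rng.standard_normal(n_subjects)               # external variable
    signal = np.outer(X, true_map)                    # model of eq. (4), mu = 0
    noise = rng.standard_normal(signal.shape)
    noise *= np.linalg.norm(signal) / (snr * np.linalg.norm(noise))
    V = signal + noise
    # Covariance map and randomization test, as in the previous sketches.
    Xc = X - X.mean()
    c = 1.0 / (Xc @ Xc)
    beta = c * (Xc @ V)

    def gfp(b):
        b = b - b.mean()
        return np.sqrt(b @ b / b.size)

    d_obs = gfp(beta)
    d_rand = np.array([gfp(c * (rng.permutation(Xc) @ V)) for _ in range(n_rand)])
    p = np.mean(d_rand > d_obs)
    r2 = np.corrcoef(beta, true_map)[0, 1] ** 2       # shared variance with true map
    return p, r2
```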

Citations

Journal ArticleDOI
TL;DR: The aim of Ragu is to maximize statistical power while minimizing the need for a-priori choices of models and parameters (like inverse models or sensors of interest) that interact with and bias statistics.
Abstract: We present a program (Ragu; Randomization Graphical User interface) for statistical analyses of multichannel event-related EEG and MEG experiments. Based on measures of scalp field differences including all sensors, and using powerful, assumption-free randomization statistics, the program yields robust, physiologically meaningful conclusions based on the entire, untransformed, and unbiased set of measurements. Ragu accommodates up to two within-subject factors and one between-subject factor with multiple levels each. Significance is computed as a function of time and can be controlled for type II errors with overall analyses. Results are displayed in an intuitive visual interface that allows further exploration of the findings. A sample analysis of an ERP experiment illustrates the different possibilities offered by Ragu. The aim of Ragu is to maximize statistical power while minimizing the need for a-priori choices of models and parameters (like inverse models or sensors of interest) that interact with and bias statistics.

188 citations


Cites background or methods from "Establishing correlations of scalp ..."

  • ...The suggested quantifier and the suggested statistical testing rely on previously reviewed and published papers [2, 4, 5] and are briefly explained below....

    [...]

  • ...In the literature, the procedure to compare groups and/or conditions has been called TANOVA (topographic analysis of variance); if a linear predictor is used, the proposed term is TANCOVA (topographic analysis of covariance)....

    [...]

  • ...If instead of a group/condition membership a predictor is available that is assumed to be linearly related to the activity of an unknown set of sources, the scalp field produced by this set of sources can be estimated using the so-called covariance maps βj [4]....

    [...]

  • ...By checking the “continuous/rank data” box in the betweensubject design dialog, the individual performance (learning rates in the present example) can be entered (Figure 4(b)), and the program will compute a TANCOVA....

    [...]

  • ...This approach is called TANCOVA and is also available in the program....

    [...]


Journal ArticleDOI
Thomas Koenig, Lester Melie-Garcia
TL;DR: A simple and effective method to test whether an event consistently activates a set of brain electric sources across repeated measurements of event-related scalp field data, called topographic consistency test (TCT).
Abstract: We present a simple and effective method to test whether an event consistently activates a set of brain electric sources across repeated measurements of event-related scalp field data. These repeated measurements can be single trials, single subject ERPs, or ERPs from different studies. The method considers all sensors simultaneously, but can be applied separately to each time frame or frequency band of the data. This allows limiting the analysis to time periods and frequency bands where there is positive evidence of a consistent relation between the event and some brain electric sources. The test may therefore avoid false conclusions about the data resulting from an inadequate selection of the analysis window and bandpass filter, and permit the exploration of alternate hypotheses when group/condition differences are observed in evoked field data. The test will be called topographic consistency test (TCT). The statistical inference is based on simple randomization techniques. Apart from the methodological introduction, the paper contains a series of simulations testing the statistical power of the method as a function of the number of sensors and observations, a sample analysis of EEG potentials related to self-initiated finger movements, and Matlab source code to facilitate the implementation. Furthermore, a series of measures to control for multiple testing are introduced and applied to the sample data.

145 citations


Cites methods from "Establishing correlations of scalp ..."

  • ...The TCT complements other global procedures for statistical testing of ERPs that are also based on randomization and bootstrapping (Galan et al. 1997; Greenblatt and Pflieger 2004; Karniski et al. 1994; Koenig et al. 2008; Lobaugh et al. 2001), but that are used to compare different conditions....

    [...]


Journal ArticleDOI
Kay Jann, Mara Kottlow, Thomas Dierks, Chris Boesch, +1 more
22 Sep 2010-PLOS ONE
TL;DR: The data supports the physiological and neuronal origin of the RSNs and substantiates the assumption that the standard EEG frequency bands and their topographies can be seen as electrophysiological signatures of underlying distributed neuronal networks.
Abstract: Background: fMRI Resting State Networks (RSNs) have gained importance in the present fMRI literature. Although their functional role is unquestioned and their physiological origin is nowadays widely accepted, little is known about their relationship to neuronal activity. The combined recording of EEG and fMRI allows the temporal correlation between fluctuations of the RSNs and the dynamics of EEG spectral amplitudes. So far, only relationships between several EEG frequency bands and some RSNs could be demonstrated, but no study accounted for the spatial distribution of frequency domain EEG. Methodology/Principal Findings: In the present study we report on the topographic association of EEG spectral fluctuations and RSN dynamics using EEG covariance mapping. All RSNs displayed significant covariance maps across a broad EEG frequency range. Cluster analysis of the found covariance maps revealed the common standard EEG frequency bands. We found significant differences between covariance maps of the different RSNs and these differences depended on the frequency band. Conclusions/Significance: Our data supports the physiological and neuronal origin of the RSNs and substantiates the assumption that the standard EEG frequency bands and their topographies can be seen as electrophysiological signatures of underlying distributed neuronal networks.

138 citations


Cites methods from "Establishing correlations of scalp ..."

  • ...While in previous studies the BOLD signal fluctuations in each voxel or of a whole RSN were explained by a single EEG feature (e.g. arbitrary single or few channels [15,16,22] or global features such as global field power [19] or global field synchronization [17]), we provide the relation of the variance of EEG spectral power at each electrode to the dynamics of different RSN using Covariance Mapping [23]....

    [...]

  • ...Covariance Mapping The Covariance Mapping for the ten selected RSNs revealed specific significant spatial distributions of the spectral scalp field across frequencies (Figure 1)....

    [...]

  • ...Combination: Covariance Mapping of the fMRI GCs (RSNs) and the EEG The covariance between the normalized individual datasets extracted from the EEG respectively fMRI were calculated similar to the approach presented in Koenig et al. [23]....

    [...]

  • ...arbitrary single or few channels [15,16,22] or global features such as global field power [19] or global field synchronization [17]), we provide the relation of the variance of EEG spectral power at each electrode to the dynamics of different RSN using Covariance Mapping [23]....

    [...]


Journal ArticleDOI
Franck Amyot, David B. Arciniegas, Brazaitis MP, Curley KC, +12 more
TL;DR: Although CT, MRI, and TCD were determined to be the most useful modalities in the clinical setting, no single imaging modality proved sufficient for all patients due to the heterogeneity of TBI; all imaging modalities reviewed demonstrated the potential to emerge as part of future clinical care.
Abstract: The incidence of traumatic brain injury (TBI) in the United States was 3.5 million cases in 2009, according to the Centers for Disease Control and Prevention. It is a contributing factor in 30.5% of injury-related deaths among civilians. Additionally, since 2000, more than 260,000 service members were diagnosed with TBI, with the vast majority classified as mild or concussive (76%). The objective assessment of TBI via imaging is a critical research gap, both in the military and civilian communities. In 2011, the Department of Defense (DoD) prepared a congressional report summarizing the effectiveness of seven neuroimaging modalities (computed tomography [CT], magnetic resonance imaging [MRI], transcranial Doppler [TCD], positron emission tomography, single photon emission computed tomography, electrophysiologic techniques [magnetoencephalography and electroencephalography], and functional near-infrared spectroscopy) to assess the spectrum of TBI from concussion to coma. For this report, neuroimag...

113 citations


Journal ArticleDOI
Thomas Koenig, Maria Stein, Matthias Grieder, Mara Kottlow, +1 more
TL;DR: A randomization-based procedure that works without assigning grand-mean microstate prototypes to individual data, and shows an increased robustness to noise, and a higher sensitivity for more subtle effects of microstate timing, is proposed.
Abstract: Dynamic changes in ERP topographies can be conveniently analyzed by means of microstates, the so-called "atoms of thoughts", that represent brief periods of quasi-stable synchronized network activation. Comparing temporal microstate features such as on- and offset or duration between groups and conditions therefore allows a precise assessment of the timing of cognitive processes. So far, this has been achieved by assigning the individual time-varying ERP maps to spatially defined microstate templates obtained from clustering the grand mean data into predetermined numbers of topographies (microstate prototypes). Features obtained from these individual assignments were then statistically compared. This has the problem that the individual noise dilutes the match between individual topographies and templates leading to lower statistical power. We therefore propose a randomization-based procedure that works without assigning grand-mean microstate prototypes to individual data. In addition, we propose a new criterion to select the optimal number of microstate prototypes based on cross-validation across subjects. After a formal introduction, the method is applied to a sample data set of an N400 experiment and to simulated data with varying signal-to-noise ratios, and the results are compared to existing methods. In a first comparison with previously employed statistical procedures, the new method showed an increased robustness to noise, and a higher sensitivity for more subtle effects of microstate timing. We conclude that the proposed method is well-suited for the assessment of timing differences in cognitive processes. The increased statistical power allows identifying more subtle effects, which is particularly important in small and scarce patient populations.

87 citations


Cites background or methods from "Establishing correlations of scalp ..."

  • ...Sample Data Analysis and Simulations The sample data and analysis are based on an experiment that has previously been used to demonstrate statistical procedures of the analysis of ERPs (Koenig et al. 2008, 2011)....

    [...]

  • ...…data consist of ERPs recorded in 16 healthy young English-speaking exchange students that spent a year in the German-speaking part of Switzerland and that participated in a larger study on the neurobiology of training-related changes of the language system (Koenig et al. 2008; Stein et al. 2006)....

    [...]

  • ...These data consist of ERPs recorded in 16 healthy young English-speaking exchange students that spent a year in the German-speaking part of Switzerland and that participated in a larger study on the neurobiology of training-related changes of the language system (Koenig et al. 2008; Stein et al. 2006)....

    [...]


References

Book
Bradley Efron, Robert Tibshirani
01 Jan 1993
TL;DR: This article presents bootstrap methods for estimation, using simple arguments, with Minitab macros for implementing these methods, as well as some examples of how these methods could be used for estimation purposes.
Abstract: This article presents bootstrap methods for estimation, using simple arguments. Minitab macros for implementing these methods are given.

36,497 citations


Book
Anthony C. Davison, David Hinkley
28 Oct 1997
Abstract: This book gives a broad and up-to-date coverage of bootstrap methods, with numerous applied examples, developed in a coherent way with the necessary theoretical basis. Applications include stratified data; finite populations; censored and missing data; linear, nonlinear, and smooth regression models; classification; time series and spatial problems. Special features of the book include: extensive discussion of significance tests and confidence intervals; material on various diagnostic methods; and methods for efficient computation, including improved Monte Carlo simulation. Each chapter includes both practical and theoretical exercises. Included with the book is a disk of purpose-written S-Plus programs for implementing the methods described in the text. Computer algorithms are clearly described, and computer code is included on a 3-inch, 1.4M disk for use with IBM computers and compatible machines. Users must have the S-Plus computer application. Author resource page: http://statwww.epfl.ch/davison/BMA/

6,412 citations


Journal ArticleDOI
Thomas E. Nichols, Andrew P. Holmes
TL;DR: The standard nonparametric randomization and permutation testing ideas are developed at an accessible level, using practical examples from functional neuroimaging, and the extensions for multiple comparisons described.
Abstract: Requiring only minimal assumptions for validity, nonparametric permutation testing provides a flexible and intuitive methodology for the statistical analysis of data from functional neuroimaging experiments, at some computational expense. Introduced into the functional neuroimaging literature by Holmes et al. ([1996]: J Cereb Blood Flow Metab 16:7-22), the permutation approach readily accounts for the multiple comparisons problem implicit in the standard voxel-by-voxel hypothesis testing framework. When the appropriate assumptions hold, the nonparametric permutation approach gives results similar to those obtained from a comparable Statistical Parametric Mapping approach using a general linear model with multiple comparisons corrections derived from random field theory. For analyses with low degrees of freedom, such as single subject PET/SPECT experiments or multi-subject PET/SPECT or fMRI designs assessed for population effects, the nonparametric approach employing a locally pooled (smoothed) variance estimate can outperform the comparable Statistical Parametric Mapping approach. Thus, these nonparametric techniques can be used to verify the validity of less computationally expensive parametric approaches. Although the theory and relative advantages of permutation approaches have been discussed by various authors, there has been no accessible explication of the method, and no freely distributed software implementing it. Consequently, there have been few practical applications of the technique. This article, and the accompanying MATLAB software, attempts to address these issues. The standard nonparametric randomization and permutation testing ideas are developed at an accessible level, using practical examples from functional neuroimaging, and the extensions for multiple comparisons described. Three worked examples from PET and fMRI are presented, with discussion, and comparisons with standard parametric approaches made where appropriate. Practical considerations are given throughout, and relevant statistical concepts are expounded in appendices.

5,237 citations


"Establishing correlations of scalp ..." refers methods in this paper

  • ...Alternatively, one may employ techniques for the correction of multiple comparisons that are commonly employed in functional neuroimaging (Nichols and Holmes, 2002; Carbonell et al., 2004)....

    [...]


Journal ArticleDOI
Marta Kutas, Steven A. Hillyard
11 Jan 1980-Science
TL;DR: In a sentence reading task, words that occurred out of context were associated with specific types of event-related brain potentials that elicited a late negative wave (N400).
Abstract: In a sentence reading task, words that occurred out of context were associated with specific types of event-related brain potentials. Words that were physically aberrant (larger than normal) elicited a late positive series of potentials, whereas semantically inappropriate words elicited a late negative wave (N400). The N400 wave may be an electrophysiological sign of the "reprocessing" of semantically anomalous information.

4,014 citations


"Establishing correlations of scalp ..." refers background in this paper

  • ...The figure indicates that effects of language proficiency are found in a time window that has often been associated with language processing (Kutas and Hillyard, 1980)....

    [...]


Journal ArticleDOI
TL;DR: It is shown that modern EEG source imaging simultaneously details the temporal and spatial dimensions of brain activity, making it an important and affordable tool to study the properties of cerebral, neural networks in cognitive and clinical neurosciences.
Abstract: Objective: Electroencephalography (EEG) is an important tool for studying the temporal dynamics of the human brain's large-scale neuronal circuits. However, most EEG applications fail to capitalize on all of the data's available information, particularly that concerning the location of active sources in the brain. Localizing the sources of a given scalp measurement is only achieved by solving the so-called inverse problem. By introducing reasonable a priori constraints, the inverse problem can be solved and the most probable sources in the brain at every moment in time can be accurately localized. Methods and Results: Here, we review the different EEG source localization procedures applied during the last two decades. Additionally, we detail the importance of those procedures preceding and following source estimation that are intimately linked to a successful, reliable result. We discuss (1) the number and positioning of electrodes, (2) the varieties of inverse solution models and algorithms, (3) the integration of EEG source estimations with MRI data, (4) the integration of time and frequency in source imaging, and (5) the statistical analysis of inverse solution results. Conclusions and Significance: We show that modern EEG source imaging simultaneously details the temporal and spatial dimensions of brain activity, making it an important and affordable tool to study the properties of cerebral, neural networks in cognitive and clinical neurosciences.

1,492 citations


"Establishing correlations of scalp ..." refers methods in this paper

  • ...Second, the accuracy of EEG/MEG and ERP/ERF inverse solution improves significantly when high-density electrode arrays are being used (see Michel et al., 2004, for a review)....

    [...]


Performance Metrics
No. of citations received by the Paper in previous years

Year	Citations
2021	2
2020	3
2019	1
2018	3
2017	1
2016	5