Proceedings ArticleDOI

Multi-channel EEG compression based on 3D decompositions

TL;DR: Numerical results for a standard EEG data set show that tensor-based coding achieves a lower worst-case error than, and a comparable average error to, the wavelet- and energy-based schemes.
Abstract: Various compression algorithms for multi-channel electroencephalograms (EEG) are proposed and compared. The multi-channel EEG is represented as a three-way tensor (or 3D volume) to exploit both spatial and temporal correlations efficiently. A general two-stage coding framework is developed for multi-channel EEG compression. In the first stage, we consider (i) wavelet-based volumetric coding; (ii) energy-based lossless compression of wavelet subbands; (iii) tensor decomposition based coding. In the second stage, the residual is quantized and coded. Through such a two-stage approach, one can control the maximum error (worst-case distortion). Numerical results for a standard EEG data set show that tensor-based coding achieves a lower worst-case error than, and a comparable average error to, the wavelet- and energy-based schemes.

Summary (1 min read)

Introduction

  • Various compression algorithms for multi-channel electroencephalograms (EEG) are proposed and compared.
  • A general two-stage coding framework is developed for multi-channel EEG compression.
  • In Section 2 the authors explain the multi-way (or volumetric data) representation of multi-channel EEG.
  • The authors stack the matrices from subsequent time instances to form a volume, as shown in Fig. 1(b).
  • In the following the authors explain their three compression algorithms.

3.1.1. Volumetric Coding Approach

  • Fig. 2 shows a diagram of the proposed two-stage coder for multi-channel EEG signals.
  • The compressed data I_en is then decoded, yielding the reconstructed data I_l.
  • The residual ε is uniformly quantized to generate quantization indices ε_q, with maximum error no larger than δ.
  • The wavelet subbands are ordered by relative energy density (RED) and coded in that order by arithmetic coding, updating the relative energy of the coded subbands after each step (see Table 1).
  • The pre-processing step, i.e., the formation of a tensor from the multi-channel EEG, is the principal difference from the coders used in image compression [7].

3.1.2. Subband Specific Arithmetic Coding (SAC)

  • The authors first order the wavelet subbands based on their relative energy density (RED).
  • The remaining subbands are less significant in terms of their RED; the authors first quantize them (cf. (5)), and then apply arithmetic coding.
  • This two-stage procedure results in lossy compression of the EEG signals.
  • It is noteworthy that in the second coding step, the authors quantize the wavelet subbands; this may lead to a substantial error in time domain.
  • In other words, the authors cannot control the maximum distortion in time domain through this approach.

3.2. Tensor-based Compression

  • The authors apply the parallel factor (PARAFAC) decomposition [8] to the three-way tensor I formed from the multi-channel EEG.
  • The PARAFAC decomposition of a three-way tensor is given by I = \sum_{i=1}^{r} a_i ∘ b_i ∘ c_i + E (7), where E represents the residual tensor, a_i, b_i, and c_i represent the factors along the three modes, and ∘ stands for the outer product along the particular mode.
  • The residual quantization is performed in time-domain for volumetric and PARAFAC coding, whereas in SAC, quantization is performed in wavelet domain.
  • The authors have presented novel compression schemes for multichannel EEG.



MULTI-CHANNEL EEG COMPRESSION BASED ON 3D DECOMPOSITIONS
Justin Dauwels
Srinivasan K.
Ramasubba Reddy M.
Andrzej Cichocki
Nanyang Technological University, Singapore, E-mail: justin@dauwels.com
Indian Institute of Technology Madras, Chennai, India, E-mail: srinivasan.sivam@gmail.com
RIKEN BSI, Laboratory for Advanced Brain Signal Processing, Wako-Shi, Japan
ABSTRACT
Various compression algorithms for multi-channel electroen-
cephalograms (EEG) are proposed and compared. The multi-
channel EEG is represented as a three-way tensor (or 3D
volume) to exploit both spatial and temporal correlations effi-
ciently. A general two-stage coding framework is developed
for multi-channel EEG compression. In the first stage, we
consider (i) wavelet-based volumetric coding; (ii) energy-
based lossless compression of wavelet subbands; (iii) tensor
decomposition based coding. In the second stage, the residual
is quantized and coded. Through such a two-stage approach,
one can control the maximum error (worst-case distortion).
Numerical results for a standard EEG data set show that
tensor-based coding achieves a lower worst-case error than,
and a comparable average error to, the wavelet- and energy-based
schemes.
Index Terms: arithmetic coding, three-way tensor, tensor decomposition, wavelet transform
1. INTRODUCTION
Electroencephalogram (EEG) is a recording of the electrical
activity of the human brain, usually acquired by a number of
electrodes placed on the scalp. In the past decade, there has
been tremendous growth in EEG based research activities,
e.g., automated EEG analysis for diagnosis of neurological
diseases, and brain computer interfacing (BCI) [1]. In most
applications, EEG recordings are done for an extended pe-
riod, and long-term recordings often generate massive EEG
data sets. Therefore, EEG compression plays an important
role for efficient storage and transmission. The main chal-
lenges for EEG compression are as follows:
  • the number of EEG channels can be large (e.g., 256),
  • the sampling rate can be high (several kHz) in order to capture evoked potentials and high frequency oscillations.
Many techniques have been developed for compress-
ing EEG (see, e.g., [2] and references therein). However,
those methods often compress individual channels separately.
EEG signals from adjacent channels are often strongly corre-
lated (inter-channel correlation), and each individual channel
has temporal correlations (intra-channel correlation). The
single-channel EEG compression algorithms, when extended
directly to multi-channel EEG, will be inefficient as the inter-
channel correlations are not exploited. For multi-channel
EEG, intra- and inter-channel correlations must be exploited
together for efficient compression.
Multi-channel EEG compression is less intensively studied,
and one can find only a few instances in the literature;
we categorize the algorithms into lossless [3, 4] and lossy [5]
methods. All multi-channel compression schemes consider inter-
and intra-channel correlations separately, and exploit them by
different techniques. However, intra- and inter-channel
correlations are often not independent, and exploiting them in a
single step may improve efficiency. We explore ways to arrange
the multi-channel EEG in a suitable form, particularly to
exploit both types of correlations in a single step.
In our previous work [2], we introduced a pre-processing
technique where single-channel EEG is arranged as a matrix
before compression; this representation improved the Rate-
Distortion (R-D) performance over conventionalcompression
schemes. Extending our previous work, in [9] we explored
several ways to arrange multi-channel EEG as matrices and
tensors, and evaluated several matrix/tensor decomposition
techniques based on their R-D performance. Here we present
a more systematic study of multi-way (or volumetric) rep-
resentations of multi-channel EEG for the purpose of com-
pression; we consider several compression schemes that use
such representations, more specifically, based on 3D wavelet
transforms or tensor decompositions. Moreover, we develop
a two-stage framework for compression: in the first stage we
compress the EEG by means of volumetric coding, tensor
decomposition and energy-based coding of significant sub-
bands; in the second stage, we apply arithmetic coding to the
time-domain residual for the first two algorithms and to the
wavelet-domain residual for the third algorithm, after uniform
quantization. Such a two-stage compression scheme allows us
to bound the maximum (worst-case) distortion. We discuss the
compression performance of the three algorithms by means of
average and worst-case distortion measures.
This paper is structured as follows. In Section 2 we ex-
plain the multi-way (or volumetric data) representation of
multi-channel EEG. We outline our compression algorithms

(a) t/dt/s volume (b) s/s/t volume
Fig. 1. Formation of a 3D volume from multi-channel EEG. (a) The matrices
(I_1, I_2, . . . , I_M) formed from single-channel EEG are stacked as a volume
("t/dt/s volume"). (b) At any time instance, we form a matrix from the
multi-channel EEG (A & B denote samples from adjacent electrodes in the EEG
montage). N such matrices formed at subsequent time instances are then stacked
along the z-direction to form a volume ("s/s/t volume").
in Section 3, and present our results in Section 4, followed by
concluding remarks in Section 5.
2. TENSOR/VOLUMETRIC DATA FORMATION
FROM EEG
Spatially adjacent channels of multi-channel EEG are strongly
correlated, and each individual channel is strongly correlated
across time. To exploit both spatial and temporal correlations
simultaneously, we arrange multi-channel EEG as a 3D vol-
ume or three-way tensor. We consider two specific ways to
extract volumetric data from multi-channel EEG, where the
three axes capture spatial and temporal variations in different
forms.
In Fig. 1(a) we illustrate the EEG volume formed accord-
ing to our first method. The k-th slice I_k of the volume I,
extracted from channel k, can be written as:

$$
I^{(k)}_{t/dt/s} = \{ I_k \mid k = 1, \ldots, M \}
= \begin{bmatrix}
i_k(1) & i_k(2) & \cdots & i_k(N) \\
i_k(2N) & i_k(2N-1) & \cdots & i_k(N+1) \\
\vdots & \vdots & \ddots & \vdots \\
\cdots & \cdots & \cdots & i_k(N^2)
\end{bmatrix}_{N \times N} \qquad (1)
$$
From our previous studies [2, 6], we found that such an
arrangement leads to improved compression performance over
conventional vector-based compression schemes. Next, the
matrices associated with the single-channel EEG signals are
stacked to form a 3D volume, as shown in Fig. 1(a). Adjacent
slices in the tensor correspond to adjacent EEG channels. We
refer to this volume as “t/dt/s”, where the x, y, and z direc-
tions reflect temporal (t), delayed (dt) temporal, and spatial
(s) variations respectively.
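
To make the t/dt/s arrangement concrete, the following minimal NumPy sketch folds each channel into an N x N matrix with the zig-zag row ordering of Eq. (1) and stacks the channels along the third axis; the function name and the random test data are ours, not from the paper.

import numpy as np

def t_dt_s_volume(eeg, N):
    # Fold the first N*N samples of every channel into an N x N matrix,
    # reversing every second row so that consecutive samples stay adjacent
    # (the ordering of Eq. (1)), then stack the M channel matrices along z.
    M = eeg.shape[0]
    volume = np.empty((N, N, M))
    for k in range(M):
        mat = eeg[k, :N * N].reshape(N, N).copy()
        mat[1::2, :] = mat[1::2, ::-1]   # second, fourth, ... rows run backwards
        volume[:, :, k] = mat
    return volume

# Example: 64 channels of 1024 samples -> a 32 x 32 x 64 volume, as used in Section 4.
eeg = np.random.randn(64, 1024)
I_tdts = t_dt_s_volume(eeg, N=32)
print(I_tdts.shape)   # (32, 32, 64)
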
We also consider an alternative method to form a tensor
from multi-channel EEG. A matrix is formed from the multi-
channel EEG at each time instance. We arrange the matrix
such that its elements follow similar adjacency as the EEG
montage; for the sake of brevity, we omit the details. We
stack the matrices from subsequent time instances to form a
volume, as shown in Fig. 1(b). The x-y plane reflects the
spatial correlations, and the temporal correlations are along the
z direction. We refer to this volumetric data as “s/s/t”. The
k-th slice of the volume may be written as:

$$
I^{(k)}_{s/s/t} = \{ i_{(i,j)}(k) \mid k = 1, \ldots, N \}
= \begin{bmatrix}
i_{(1,1)}(k) & i_{(1,2)}(k) & \cdots & i_{(1,N_2)}(k) \\
i_{(2,1)}(k) & i_{(2,2)}(k) & \cdots & i_{(2,N_2)}(k) \\
\vdots & \vdots & \ddots & \vdots \\
i_{(N_1,1)}(k) & i_{(N_1,2)}(k) & \cdots & i_{(N_1,N_2)}(k)
\end{bmatrix}_{N_1 \times N_2}, \qquad (2)
$$
where i and j refer to the position in the x-y plane, whereas
the slice number k refers to the time index. The dimension of
the x-y plane is limited by the number of channels, and the
slices in the x-y plane may be square or rectangular.
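
A similar sketch for the s/s/t arrangement is given below; since the paper omits the exact montage arrangement, the channel-to-grid map used here is a hypothetical placeholder and only illustrates the stacking of per-time-instant spatial matrices.

import numpy as np

def s_s_t_volume(eeg, grid_map):
    # Place the channel samples of every time instant on an N1 x N2 spatial grid
    # (grid_map[i, j] holds the channel index at grid position (i, j)) and stack
    # the grids along z, i.e. along time, as in Fig. 1(b).
    N1, N2 = grid_map.shape
    num_samples = eeg.shape[1]
    volume = np.empty((N1, N2, num_samples))
    for k in range(num_samples):
        volume[:, :, k] = eeg[grid_map, k]
    return volume

# Example: 64 channels on an 8 x 8 grid, 1024 samples -> an 8 x 8 x 1024 volume.
eeg = np.random.randn(64, 1024)
grid_map = np.arange(64).reshape(8, 8)   # placeholder mapping, not the real EEG montage
I_sst = s_s_t_volume(eeg, grid_map)
print(I_sst.shape)   # (8, 8, 1024)
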
3. COMPRESSION ALGORITHMS
We first perform lossy coding (Stage 1), followed by arith-
metic coding of the quantized residuals (Stage 2). We con-
sider three lossy compression algorithms (Stage 1): (i) 3D
Wavelet volumetric coding, (ii) 3D Wavelet subband specific
arithmetic coding, and (iii) tensor decomposition (PARAFAC)
based coding. In the following we explain our three compres-
sion algorithms.
3.1. Wavelet-based Compression
3.1.1. Volumetric Coding Approach
Fig. 2 shows a diagram of the proposed two-stage coder for
multi-channel EEG signals. We denote the EEG volume by I
(both types of volumes, cf. Fig. 1). In the first stage, we
compress I with a scalable wavelet encoder based on successive
bit-plane encoding, resulting in the compressed data I_en;
we use a bi-orthogonal wavelet transform (5/3 filters) as in
our previous work [2]. The compressed data I_en is then decoded,
yielding the reconstructed data I_l. Next we quantize the
residual ε = I − I_l, resulting in ε_q, which is compressed by
arithmetic coding, leading to ε_qen. Both are used by the decoder
to approximate the original data. The compressed data I_en is
first decoded, yielding the lossy reconstructed data I_l. The
data ε_qen is passed through an arithmetic decoder and then
dequantized, resulting in ε̂. The latter is an approximation of
the residual ε. Eventually, the data I is reconstructed as
I_nl = I_l + ε̂. The volume I_nl is at last rearranged to yield
the reconstructed EEG signal(s). We can readily confirm the
following relations:

$$
I = I_l + \varepsilon, \qquad (3)
$$
$$
I_{nl} = I_l + \hat{\varepsilon}. \qquad (4)
$$

Therefore, it follows that ||ε − ε̂||_∞ = ||I − I_nl||_∞, and hence
||ε − ε̂||_∞ ≤ δ is equivalent to ||I − I_nl||_∞ ≤ δ. The residual
ε is uniformly quantized to generate quantization indices ε_q,
with maximum error no larger than δ:

[Fig. 2 block diagram: multi-channel EEG → formation of volume/tensor (I) → wavelet-based 3D encoder (I_en) → wavelet-based 3D decoder (I_l) → residual ε → uniform quantizer Q(·) (ε_q) → residual arithmetic coder (ε_qen).]
Fig. 2. Wavelet-based volumetric coding of multi-channel EEG
Table 1. Wavelet-based subband specific coding procedure

Step 1: Initialization
  (a) Form the volume from the multi-channel EEG:
      I ← t/dt/s or s/s/t volume
  (b) Compute the wavelet transform of the volume:
      I_w = 3D-DWT(I, D)
      // a D-level decomposition yields 7D + 1 subband cubes
  (c) Determine the relative energy density (RED_i) of the subbands (i = 1, . . . , 7D + 1):
      RED(i) = ( Σ_j [I_w^(i)(j)]^2 ) / ( N_i · Σ_j [I_w(j)]^2 ),
      where N_i is the number of elements in the i-th subband
  (d) Coding order O = descend(RED)
  (e) Set the threshold τ (% of total energy)

Step 2: First-pass coding - coding of significant subbands until τ is reached.
  Set relative energy RE = 0, i = 1.
  while (RE < τ)
    (a) Bitstream ← AC(I_w^(O(i)))
        // code the subband according to O by arithmetic coding
    (b) RE = RE + RED_O(i) · N_O(i)
        // update the relative energy with the coded subband;
        // N_O(i) is the number of elements in subband O(i)
    (c) i ← i + 1
  end

Step 3: Second-pass coding - lossy coding of the remaining subbands (O(i + 1) to O(7D + 1))
  for (j = i + 1 : 7D + 1)
    (a) I_w^(O(j)) = Q(I_w^(O(j)), δ)
        // quantize the wavelet coefficients with quantizer step-size δ
    (b) Bitstream ← AC(I_w^(O(j)))
  end
$$
\varepsilon_q =
\begin{cases}
\left\lfloor \dfrac{\varepsilon + \delta}{2\delta + 1} \right\rfloor, & \varepsilon > 0 \\
\left\lfloor \dfrac{\varepsilon - \delta}{2\delta + 1} \right\rfloor, & \varepsilon < 0,
\end{cases} \qquad (5)
$$

where ⌊·⌋ denotes the integer part of the argument. At the
decoder end, the residual bitstream ε_qen is decoded to yield
ε_q, followed by a dequantizer defined to guarantee ||ε − ε̂||_∞ ≤ δ:

$$
\hat{\varepsilon} = (2\delta + 1)\,\varepsilon_q. \qquad (6)
$$
By adding the lossy reconstruction I_l and the dequantized
residual ε̂, we obtain the final near-lossless reconstruction I_nl
with the guarantee ||I − I_nl||_∞ ≤ δ. In other words, the maximum
distortion is bounded by δ. The pre-processing step, i.e., the
formation of a tensor from the multi-channel EEG, is the
principal difference from the coders used in image
compression [7].
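
The near-lossless mechanism of Eqs. (3)-(6) can be illustrated with the short NumPy sketch below. The first-stage coder is replaced by a crude placeholder (coarse rounding of the volume), since the scalable bit-plane wavelet coder itself is not spelled out here; the quantizer and dequantizer follow Eqs. (5) and (6), and the final assertion checks the worst-case bound ||I − I_nl||_∞ ≤ δ for integer-valued data.

import numpy as np

DELTA = 5   # quantizer parameter delta; the paper sweeps it from 0 to 19

def quantize_residual(eps, delta):
    # Eq. (5): np.fix takes the integer part (truncation toward zero).
    return np.where(eps > 0,
                    np.fix((eps + delta) / (2 * delta + 1)),
                    np.fix((eps - delta) / (2 * delta + 1))).astype(int)

def dequantize_residual(eps_q, delta):
    # Eq. (6): guarantees |eps - eps_hat| <= delta element-wise for integer residuals.
    return (2 * delta + 1) * eps_q

I = np.random.randint(-2048, 2048, size=(32, 32, 64))    # 12-bit-like EEG volume
I_l = (np.round(I / 16) * 16).astype(int)                # placeholder lossy first stage

eps = I - I_l                                            # Eq. (3)
eps_q = quantize_residual(eps, DELTA)                    # these indices would be arithmetic coded
eps_hat = dequantize_residual(eps_q, DELTA)
I_nl = I_l + eps_hat                                     # Eq. (4)

assert np.max(np.abs(I - I_nl)) <= DELTA                 # bounded worst-case distortion
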
3.1.2. Subband Specific Arithmetic Coding (SAC)
In this approach, we first order the wavelet subbands based on
their relative energy density (RED). We use the same wavelets
as in the volumetric coding approach (cf. Section 3.1.1). In the
first stage, we compress the most significant wavelet subbands
losslessly, followed by lossy compression of the subbands
with smaller energy concentration. Specifically, in the first
stage, we apply arithmetic coding to the subbands with high-
est RED, until a certain threshold τ (% of total energy) is
reached. The remaining subbands are less significant in terms
of their RED; we first quantize them (cf. (5)), and then ap-
ply arithmetic coding. This two-stage procedure results in
lossy compression of the EEG signals. The pseudo-code of
the algorithm is presented in Table 1. We code each subband
separately using simple arithmetic coding where all the coef-
ficients within the same subband are represented by a single
probability model. It is noteworthy that in the second coding
step, we quantize the wavelet subbands; this may lead to a
substantial error in the time domain. In other words, we cannot
control the maximum distortion in the time domain through this
approach.
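
As a rough illustration of the first pass in Table 1, the sketch below computes the RED of the 3D-DWT subbands and selects subbands in descending RED order until the energy threshold τ is reached; it assumes the third-party PyWavelets package, uses 'bior2.2' as a stand-in for the biorthogonal 5/3 filters, and leaves out the arithmetic coder and the second-pass quantization.

import numpy as np
import pywt   # PyWavelets (assumed available)

def subbands_by_red(volume, levels=2, wavelet='bior2.2'):
    # D-level 3D DWT -> 7*D + 1 subband cubes; compute the relative energy
    # density RED(i) of each subband and return the subbands sorted by
    # descending RED, as in Steps 1(b)-(d) of Table 1.
    coeffs = pywt.wavedecn(volume, wavelet, level=levels)
    subbands = [coeffs[0]] + [band for detail in coeffs[1:] for band in detail.values()]
    total_energy = sum(np.sum(np.asarray(s, float) ** 2) for s in subbands)
    red = np.array([np.sum(np.asarray(s, float) ** 2) / (s.size * total_energy) for s in subbands])
    order = np.argsort(red)[::-1]            # O = descend(RED)
    return [subbands[i] for i in order], red[order]

def first_pass(subbands, red, tau=0.5):
    # Step 2 of Table 1: pick subbands (in RED order) until the coded relative
    # energy RE reaches tau; these would be arithmetic coded losslessly, and the
    # remaining subbands would be quantized with Eq. (5) before coding.
    RE, selected = 0.0, []
    for band, r in zip(subbands, red):
        if RE >= tau:
            break
        selected.append(band)
        RE += r * band.size
    return selected

volume = np.random.randn(32, 32, 64)
ordered, red = subbands_by_red(volume)
lossless_bands = first_pass(ordered, red, tau=0.5)
print(len(ordered), len(lossless_bands))     # 15 subbands in total for D = 2
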
3.2. Tensor-based Compression
We apply the parallel factor (PARAFAC) decomposition [8] to the
three-way tensor I formed from multi-channel EEG. In our previous
study [9], we have shown that PARAFAC yielded the best compression
performance among various other matrix and tensor decompositions.
The PARAFAC decomposition of a three-way tensor is given by:

$$
I = \sum_{i=1}^{r} a_i \circ b_i \circ c_i + E, \qquad (7)
$$

where E represents the residual tensor, a_i, b_i, and c_i represent
the factors along the three modes, and ∘ stands for the outer
product along the particular mode. These three factors efficiently
capture the major variations along the three modes. In the first
stage we encode the PARAFAC factors using a simple bit-plane coding
scheme, and in the second stage, we apply arithmetic coding to the
residuals after uniform quantization (5).
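
A compact way to experiment with this scheme is sketched below using the third-party TensorLy package (assuming a recent version that provides parafac and cp_to_tensor); the rank r = 10 is an arbitrary illustrative choice, not a value from the paper, and the bit-plane and arithmetic coding stages are omitted.

import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# Rank-r PARAFAC/CP approximation of the EEG tensor, cf. Eq. (7):
# I is approximated by a sum of r rank-one terms a_i o b_i o c_i, and the
# residual E = I - I_hat is what gets quantized and arithmetic coded.
I = tl.tensor(np.random.randn(32, 32, 64))
r = 10                                   # illustrative rank

cp = parafac(I, rank=r)                  # factor matrices, one per mode (32 x r, 32 x r, 64 x r)
I_hat = tl.cp_to_tensor(cp)              # reconstruct the sum of r outer products
E = I - I_hat                            # residual tensor of Eq. (7)

# Only the factor matrices (about (32 + 32 + 64) * r values) plus the coded
# residual need to be stored instead of all 32 * 32 * 64 tensor entries.
print([f.shape for f in cp.factors], float(tl.norm(E)))
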
4. RESULTS
We test the performance of our compression algorithms on
the EEG-Motor Mental Imagery datasets of the PhysioBank
database [10]. This EEG dataset consists of 64-channel
recordings, recorded from healthy subjects at an 80 Hz sampling

(a) PRD(%) vs. compression ratio; (b) PSNR(x, x̃) vs. compression ratio.
Fig. 3. Compression performance of the wavelet-based volumetric coding,
subband specific arithmetic coding and PARAFAC based coding for the t/dt/s
volume/three-way tensor.
rate and with 12-bit resolution. We analyze the performance
of the algorithms based on the compression ratio:

$$
\mathrm{CR} = \frac{L_{\mathrm{orig}}}{L_{\mathrm{comp}}}, \qquad (8)
$$

where L_orig and L_comp are the bit lengths of the original and
compressed multi-channel EEG signals, respectively. The quality
of the reconstructed signal (x̃) is assessed using the percent
root-mean-square distortion (PRD (%)):

$$
\mathrm{PRD}\,(\%) = \sqrt{\frac{\sum_{i=1}^{N} \bigl(x(i) - \tilde{x}(i)\bigr)^2}{\sum_{i=1}^{N} x(i)^2}} \times 100. \qquad (9)
$$

We also use an alternative quantitative distortion measure,
based on the maximum absolute difference between x and x̃:

$$
\mathrm{PSNR}(x, \tilde{x}) = 10 \log_{10} \left( \frac{2^{Q} - 1}{\max(|x - \tilde{x}|)} \right), \qquad (10)
$$

where Q is the resolution of the EEG samples in bits (12 bits here).
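
For reference, a minimal Python sketch of the three performance measures (8)-(10) is given below; the value Q = 12 matches the 12-bit resolution of the data set, and the test signals are random placeholders.

import numpy as np

def compression_ratio(l_orig_bits, l_comp_bits):
    # Eq. (8): ratio of original to compressed bitstream length.
    return l_orig_bits / l_comp_bits

def prd_percent(x, x_rec):
    # Eq. (9): percent root-mean-square distortion.
    x, x_rec = np.asarray(x, float), np.asarray(x_rec, float)
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

def psnr_max_error(x, x_rec, q_bits=12):
    # Eq. (10): PSNR-style measure based on the maximum absolute error.
    x, x_rec = np.asarray(x, float), np.asarray(x_rec, float)
    return 10.0 * np.log10((2 ** q_bits - 1) / np.max(np.abs(x - x_rec)))

# Placeholder example: a 12-bit segment reconstructed with worst-case error <= 5.
x = np.random.randint(0, 4096, 1024)
x_rec = x + np.random.randint(-5, 6, 1024)
print(compression_ratio(1024 * 12, 3000), prd_percent(x, x_rec), psnr_max_error(x, x_rec))
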
We consider segments of 1024 samples from each chan-
nel, arranged in a suitable volume size, specifically, 32 × 32 ×
64 for the t/dt/s volume and 8 × 8 × 1024 for the s/s/t volume. In all
three algorithms, we vary the quantization step-size from
0 (lossless) to 19 (lossy), and measure the CR and PRD(%)
for each step-size. The energy threshold (τ) for the subband
specific arithmetic coding is fixed to 50%; we obtained the
best results for that value of the threshold. The results are
summarized in Fig. 3. Since the results for t/dt/s and s/s/t are
similar, we only show results for the t/dt/s volume construc-
tion.
It is clear from Fig. 3 that SAC outperforms both vol-
umetric and PARAFAC coding with respect to PRD(%);
with respect to PSNR, volumetric and PARAFAC coding
perform similarly, but they clearly outperform SAC. The
residual quantization is performed in the time domain for
volumetric and PARAFAC coding, whereas in SAC, quantization is
performed in the wavelet domain. The idea behind SAC is to
quantize the residual wavelet subbands with the least energy.
As we quantize wavelet coefficients in SAC, the maximum error
in the time domain cannot be controlled, and consequently, it
is larger than for the other two approaches considered here.
On the other hand, it is promising that the average error (PRD)
is smaller for SAC than for the other two approaches. However,
the distortion may then be large in a very small number of
samples, which may be tolerable in some specific applications.
5. CONCLUSION
We have presented novel compression schemes for multi-
channel EEG. The main idea is to exploit the intra- and
inter-channel correlations simultaneously by arranging the
multi-channel EEG as a volume, and to represent that volume
in different ways. Particularly, we considered volumetric
coding, energy-based coding of wavelet subbands, and ten-
sor based coding. Next we compressed the residual, which
allows us to bound the worst-case distortion (in volumetric
and PARAFAC coding). The tensor-based coding scheme
yields smaller worst-case error than both subband specific
coding and volumetric coding, yet the average error is only
slightly larger than in subband specific coding and much
smaller than in volumetric coding. Therefore, tensor-based
coding is an attractive approach for multi-channel EEG com-
pression. If a larger worst-case distortion is tolerable, wavelet
subband coding may also be a suitable option. In future
work, we plan to improve the worst-case error of the pro-
posed wavelet subband specific coding by suitable threshold
selection.
6. REFERENCES
[1] E. Niedermeyer and F. Lopes da Silva, Electroencephalography: Basic
Principles, Clinical Applications and Related Fields, 5th ed., Lippincott
Williams and Wilkins, 2005.
[2] K. Srinivasan, J. Dauwels, and M. R. Reddy, “A two-dimensional approach
to lossless EEG compression,” Biomedical Signal Processing and Control,
vol. 6, pp. 387–394, 2011.
[3] G. Antoniol and P. Tonella, “EEG data compression techniques,” IEEE
Transactions on Biomedical Engineering, vol. 44, no. 2, pp. 105–114,
Feb. 1997.
[4] Y. Wongsawat, S. Oraintara, T. Tanaka, and K. Rao, “Lossless multi-
channel EEG compression,” in Proceedings of the IEEE International
Symposium on Circuits and Systems (ISCAS), Sep. 2006, pp. 1611–1614.
[5] Q. Liu, M. Sun, and R. Sclabassi, “Decorrelation of multichannel EEG
based on Hjorth filter and graph theory,” in Proc. 6th International
Conference on Signal Processing, vol. 2, Aug. 2002, pp. 1516–1519.
[6] K. Srinivasan and M. R. Reddy, “Efficient pre-processing technique
for lossless real-time EEG compression,” Electronics Letters, vol. 46,
no. 1, pp. 26–27, Jan. 2010.
[7] S. Yea and W. Pearlman, “A wavelet-based two-stage near lossless
coder,” IEEE Transactions on Image Processing, vol. 15, no. 11, pp.
3488–3500, Nov. 2006.
[8] T. G. Kolda and B. W. Bader, “Tensor decompositions and applica-
tions,” SIAM Review, vol. 51, no. 3, pp. 455–500, Sep. 2009.
[9] J. Dauwels, K. Srinivasan, M. R. Reddy, and A. Cichocki, “Multi-
channel EEG compression based on matrix and tensor decompositions,”
in Proc. International Conference on Acoustics, Speech and Signal
Processing (ICASSP), 2011, pp. 629–632.
[10] A. L. Goldberger, L. A. N. Amaral, L. Glass, J. M. Hausdorff, P. C.
Ivanov, R. G. Mark, J. E. Mietus, G. B. Moody, C.-K. Peng, and H. E.
Stanley, “PhysioBank, PhysioToolkit, and PhysioNet: Components of
a new research resource for complex physiologic signals,” Circulation,
vol. 101, no. 23, pp. e215–e220, 2000.
Citations
Journal ArticleDOI
TL;DR: The robustness of Bi-level Burrows Wheeler Compression Algorithm (BBWCA) in terms of the compression efficiency for different types of image data is demonstrated and it is shown that BBWCA is capable of compressing 2-D data effectively.
Abstract: This research paper demonstrates the robustness of Bi-level Burrows Wheeler Compression Algorithm (BBWCA) in terms of the compression efficiency for different types of image data. The scheme was designed to take advantage of the increased inter-pixel redundancies resulting from a two pass Burrows Wheeler Transformation (BWT) stage and the use of Reversible Colour Transform (RCT). In this research work, BBWCA was evaluated for raster map images, Colour Filter Array (CFA) images as well as 2-D ElectroEncephaloGraphy (EEG) data and compared against benchmark schemes. Validation has been carried out on various examples and they show that BBWCA is capable of compressing 2-D data effectively. The proposed method achieves marked improvement over the existing methods in terms of compression size. BBWCA is 18.8 % better at compressing images as compared to High Efficiency Video Codec (HEVC) and 21.2 % more effective than LZ4X compressor for CFA images. For the EEG data, BBWCA is 17 % better at compressing images as compared to WINRK and 25.2 % more effective than NANOZIP compressor. However, for the Raster images PAQ8 supersedes BBWCA by 11 %. Among the different schemes compared, the proposed scheme achieves overall best performance and is well suited to small and large size image data compression. The parallelization process reduces the execution time particularly for large size images. The parallelized BBWCA scheme reduces the execution time by 31.92 % on average as compared to the non-parallelized BBWCA.

21 citations


Cites methods from "Multi-channel EEG compression based..."

  • ...in [18] utilises wavelet-based volumetric coding, energy-based lossless compression of wavelet subbands and tensor decomposition based coding for effective compression....


Proceedings ArticleDOI
03 Nov 2013
TL;DR: This work presents a data-driven statistical model that takes advantage of the multivariate nature of the data collected by a heterogeneous sensor network to learn spatio-temporal patterns that enable it to employ an aggressive duty cycling policy on the individual sensor nodes, thereby reducing the overall energy consumption.
Abstract: A key factor in a successful sensor network deployment is finding a good balance between maximizing the number of measurements taken (to maintain a good sampling rate) and minimizing the overall energy consumption (to extend the network lifetime). In this work, we present a data-driven statistical model to optimize this tradeoff. Our approach takes advantage of the multivariate nature of the data collected by a heterogeneous sensor network to learn spatio-temporal patterns. These patterns enable us to employ an aggressive duty cycling policy on the individual sensor nodes, thereby reducing the overall energy consumption. Our experiments with the OMNeT++ network simulator using realistic wireless channel conditions, on data collected from two real-world sensor networks, show that we can sample just 20% of the data and can reconstruct the remaining 80% of the data with less than 9% mean error, outperforming similar techniques such is distributed compressive sampling. In addition, energy savings ranging up to 76%, depending on the sampling rate and the hardware configuration of the node.

8 citations


Cites methods from "Multi-channel EEG compression based..."

  • ...The proposed technique drastically reduces the amount of sampled data at each node, thus allowing the nodes to spend more time in a low-power sleep mode and save energy....


Journal ArticleDOI
TL;DR: In this paper, the authors proposed an optimal tensor truncation method for performing compression of the data, which first reshapes the multi-channel EEG signal as a tensor and initially identifies the optimum size of the compressed tensor.

2 citations

