
Showing papers on "Artifact (error)" published in 2018


Journal ArticleDOI
TL;DR: In this paper, a deep residual learning network is proposed to remove aliasing artifacts from artifact-corrupted images, which can work as an iterative k-space interpolation algorithm using framelet representation.
Abstract: Objective: Accelerated magnetic resonance (MR) image acquisition with compressed sensing (CS) and parallel imaging is a powerful method to reduce MR imaging scan time. However, many reconstruction algorithms have high computational costs. To address this, we investigate deep residual learning networks to remove aliasing artifacts from artifact-corrupted images. Methods: The deep residual learning networks are composed of magnitude and phase networks that are separately trained. If both phase and magnitude information are available, the proposed algorithm can work as an iterative k-space interpolation algorithm using framelet representation. When only magnitude data are available, the proposed approach works as an image domain postprocessing algorithm. Results: Even with strong coherent aliasing artifacts, the proposed network successfully learned and removed the aliasing artifacts, whereas current parallel and CS reconstruction methods were unable to remove these artifacts. Conclusion: Comparisons using single and multiple coil acquisition show that the proposed residual network provides good reconstruction results with orders of magnitude faster computational time than existing CS methods. Significance: The proposed deep learning framework may have great potential for accelerated MR reconstruction by generating accurate results immediately.
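
The core idea, training the network to predict the artifact (the residual) and subtracting it from the input, is compact enough to sketch. Below is a minimal, illustrative PyTorch version; the layer sizes and depth are placeholders, not the paper's architecture.

```python
# Illustrative residual-learning denoiser (assumed architecture, not the
# paper's): the CNN predicts the aliasing artifact, which is subtracted
# from the corrupted input to yield the artifact-suppressed image.
import torch
import torch.nn as nn

class ResidualDenoiser(nn.Module):
    def __init__(self, channels=64, depth=5):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.BatchNorm2d(channels), nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(channels, 1, 3, padding=1))
        self.net = nn.Sequential(*layers)

    def forward(self, corrupted):
        artifact = self.net(corrupted)   # network learns the residual (artifact)
        return corrupted - artifact      # clean image estimate

model = ResidualDenoiser()
x = torch.randn(1, 1, 256, 256)          # toy artifact-corrupted magnitude image
y = model(x)                             # training would compare y against the
                                         # fully sampled reference (L1/L2 loss)
```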

275 citations


Journal ArticleDOI
TL;DR: This editorial provides some constructive guidance across different positioning statements with actionable recommendations for DSR authors and reviewers to serve as a foundational step towards clarifying misconceptions about DSR contributions.
Abstract: With the rising interest in Design Science Research (DSR), it is crucial to engage in the ongoing debate on what constitutes an acceptable contribution for publishing DSR: the design artifact, the design theory, or both. In this editorial, we provide some constructive guidance across different positioning statements with actionable recommendations for DSR authors and reviewers. We expect this editorial to serve as a foundational step towards clarifying misconceptions about DSR contributions and to pave the way for the acceptance of more DSR papers in top IS journals.

253 citations


Journal ArticleDOI
TL;DR: The Harvard Automated Processing Pipeline for EEG (HAPPE) is proposed as a standardized, automated pipeline compatible with EEG recordings of variable lengths and artifact contamination levels, including high-artifact and short EEG recordings from young children or those with neurodevelopmental disorders.
Abstract: Electroencephalography (EEG) recordings collected with developmental populations present particular challenges from a data processing perspective. These EEGs have a high degree of artifact contamination and often short recording lengths. As both sample sizes and EEG channel densities increase, traditional processing approaches like manual data rejection are becoming unsustainable. Moreover, such subjective approaches preclude standardized metrics of data quality, despite the heightened importance of such measures for EEGs with high rates of initial artifact contamination. There is presently a paucity of automated resources for processing these EEG data and no consistent reporting of data quality measures. To address these challenges, we propose the Harvard Automated Processing Pipeline for EEG (HAPPE) as a standardized, automated pipeline compatible with EEG recordings of variable lengths and artifact contamination levels, including high-artifact and short EEG recordings from young children or those with neurodevelopmental disorders. HAPPE processes event-related and resting-state EEG data from raw files through a series of filtering, artifact rejection, and re-referencing steps to processed EEG suitable for time-frequency-domain analyses. HAPPE also includes a post-processing report of data quality metrics to facilitate the evaluation and reporting of data quality in a standardized manner. Here, we describe each processing step in HAPPE, perform an example analysis with EEG files we have made freely available, and show that HAPPE outperforms seven alternative, widely used processing approaches. HAPPE removes more artifact than all alternative approaches while simultaneously preserving greater or equivalent amounts of EEG signal in almost all instances. We also provide distributions of HAPPE's data quality metrics in an 867-file dataset as a reference distribution and in support of HAPPE's performance across EEG data with variable artifact contamination and recording lengths. HAPPE software is freely available under the terms of the GNU General Public License at https://github.com/lcnhappe/happe.

252 citations


Proceedings ArticleDOI
18 Jul 2018
TL;DR: This study systematically validates ASR on ten EEG recordings in a simulated driving experiment and shows that the optimal ASR parameter is between 10 and 100, which is small enough to remove activities from artifacts and eye-related components and large enough to retain signals from brain-related components.
Abstract: One of the greatest challenges that hinder the decoding and application of electroencephalography (EEG) is that EEG recordings almost always contain artifacts - non-brain signals. Among existing automatic artifact-removal methods, artifact subspace reconstruction (ASR) is an online and real-time capable, component-based method that can effectively remove transient or large-amplitude artifacts. However, the effectiveness of ASR and the optimal choice of its parameter have not been evaluated and reported, especially on real EEG data. This study systematically validates ASR on ten EEG recordings in a simulated driving experiment. Independent component analysis (ICA) is applied to separate artifacts from brain signals to allow a quantitative assessment of ASR's effectiveness in removing various types of artifacts and preserving brain activities. Empirical results show that the optimal ASR parameter is between 10 and 100, which is small enough to remove activities from artifacts and eye-related components and large enough to retain signals from brain-related components. With the appropriate choice of the parameter, ASR can be a powerful and automatic artifact removal approach for offline data analysis or online real-time EEG applications such as clinical monitoring and brain-computer interfaces.
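
For intuition, the following is a heavily simplified NumPy sketch of the ASR principle whose cutoff parameter k is being tuned here. It drops high-variance components outright, whereas the reference implementation reconstructs them from calibration statistics and adds several refinements.

```python
# Heavily simplified ASR-style window cleaning (illustrative only).
import numpy as np

def asr_window(window, calib_cov, k=20.0):
    """window: (channels, samples) EEG segment; calib_cov: covariance of
    clean calibration data; k: the cutoff parameter studied in the paper."""
    d, V = np.linalg.eigh(calib_cov)
    whitener = V @ np.diag(d ** -0.5) @ V.T      # sphere with calibration stats
    white = whitener @ window
    cov = white @ white.T / white.shape[1]       # windowed covariance
    w, U = np.linalg.eigh(cov)
    keep = w < k ** 2                            # variance threshold vs. calibration
    proj = U[:, keep] @ U[:, keep].T             # drop high-variance components
    return np.linalg.inv(whitener) @ proj @ white
```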

203 citations


Journal ArticleDOI
TL;DR: This protocol can be used to reduce motion-related variance to near zero in studies of functional connectivity, providing up to a 100-fold improvement over minimal-processing approaches in large datasets.
Abstract: Participant motion during functional magnetic resonance image (fMRI) acquisition produces spurious signal fluctuations that can confound measures of functional connectivity. Without mitigation, motion artifact can bias statistical inferences about relationships between connectivity and individual differences. To counteract motion artifact, this protocol describes the implementation of a validated, high-performance denoising strategy that combines a set of model features, including physiological signals, motion estimates, and mathematical expansions, to target both widespread and focal effects of subject movement. This protocol can be used to reduce motion-related variance to near zero in studies of functional connectivity, providing up to a 100-fold improvement over minimal-processing approaches in large datasets. Image denoising requires 40 min to 4 h of computing per image, depending on model specifications and data dimensionality. The protocol additionally includes instructions for assessing the performance of a denoising strategy. Associated software implements all denoising and diagnostic procedures, using a combination of established image-processing libraries and the eXtensible Connectivity Pipeline (XCP) software. Ciric et al. describe a protocol for the removal of motion artifacts from functional MRI data. They introduce a software package that implements common denoising protocols and provides tools for assessing the efficacy of denoising.
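
The kind of confound regression such a protocol automates can be sketched in a few lines. The 24-parameter motion expansion below (parameters, derivatives, and their squares) is one common choice of model features, not necessarily the exact set used in the protocol.

```python
# Sketch of nuisance regression with a 24-parameter motion expansion
# (illustrative; variable names are assumptions).
import numpy as np

def regress_confounds(ts, motion):
    """ts: (time, voxels) BOLD time series; motion: (time, 6) realignment
    parameters. Returns the residuals after confound regression."""
    deriv = np.vstack([np.zeros((1, motion.shape[1])),
                       np.diff(motion, axis=0)])      # temporal derivatives
    X = np.hstack([motion, deriv, motion**2, deriv**2])  # 24 regressors
    X = np.hstack([np.ones((X.shape[0], 1)), X])         # intercept
    beta, *_ = np.linalg.lstsq(X, ts, rcond=None)
    return ts - X @ beta
```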

193 citations


Journal ArticleDOI
TL;DR: It is shown that the CNN-based information can be displayed in a novel artifact-free image format, enabling us to effectively remove reflection artifacts from photoacoustic images, which is not possible with traditional geometry-based beamforming.
Abstract: Interventional applications of photoacoustic imaging typically require visualization of point-like targets, such as the small, circular, cross-sectional tips of needles, catheters, or brachytherapy seeds. When these point-like targets are imaged in the presence of highly echogenic structures, the resulting photoacoustic wave creates a reflection artifact that may appear as a true signal. We propose to use deep learning techniques to identify these types of noise artifacts for removal in experimental photoacoustic data. To achieve this goal, a convolutional neural network (CNN) was first trained to locate and classify sources and artifacts in pre-beamformed data simulated with k-Wave. Simulations initially contained one source and one artifact with various medium sound speeds and 2-D target locations. Based on 3,468 test images, we achieved a 100% success rate in classifying both sources and artifacts. After adding noise to assess potential performance in more realistic imaging environments, we achieved at least 98% success rates for channel signal-to-noise ratios (SNRs) of −9 dB or greater, with a severe decrease in performance below −21 dB channel SNR. We then explored training with multiple sources and two types of acoustic receivers and achieved similar success with detecting point sources. Networks trained with simulated data were then transferred to experimental waterbath and phantom data with 100% and 96.67% source classification accuracy, respectively (particularly when networks were tested at depths that were included during training). The corresponding mean ± one standard deviation of the point source location error was 0.40 ± 0.22 mm and 0.38 ± 0.25 mm for waterbath and phantom experimental data, respectively, which provides some indication of the resolution limits of our new CNN-based imaging system. We finally show that the CNN-based information can be displayed in a novel artifact-free image format, enabling us to effectively remove reflection artifacts from photoacoustic images, which is not possible with traditional geometry-based beamforming.

179 citations


Journal ArticleDOI
TL;DR: This paper demonstrates a fast, robust and generic algorithm for removal of EEG artifacts of various types, i.e. those that were annotated as unwanted by the user, with better performance than current state-of-the-art methods.
Abstract: Objective: The electroencephalogram (EEG) is an essential neuro-monitoring tool for both clinical and research purposes, but is susceptible to a wide variety of undesired artifacts. Removal of these artifacts is often done using blind source separation techniques, relying on a purely data-driven transformation, which may sometimes fail to sufficiently isolate artifacts in only one or a few components. Furthermore, some algorithms perform well for specific artifacts, but not for others. In this paper, we aim to develop a generic EEG artifact removal algorithm, which allows the user to annotate a few artifact segments in the EEG recordings to inform the algorithm. Approach: We propose an algorithm based on the multi-channel Wiener filter (MWF), in which the artifact covariance matrix is replaced by a low-rank approximation based on the generalized eigenvalue decomposition. The algorithm is validated using both hybrid and real EEG data, and is compared to other algorithms frequently used for artifact removal. Main results: The MWF-based algorithm successfully removes a wide variety of artifacts with better performance than current state-of-the-art methods. Significance: Current EEG artifact removal techniques often have limited applicability due to their specificity to one kind of artifact, their complexity, or simply because they are too 'blind'. This paper demonstrates a fast, robust and generic algorithm for removal of EEG artifacts of various types, i.e. those that were annotated as unwanted by the user.
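
A compact sketch of the MWF-with-GEVD structure described in this abstract might look as follows; this is a simplification for illustration, not the authors' toolbox.

```python
# Sketch of an MWF with a GEVD-based low-rank artifact covariance.
import numpy as np
from scipy.linalg import eigh

def mwf_clean(eeg, artifact_mask):
    """eeg: (channels, samples); artifact_mask: True where the user
    annotated artifact segments. Returns artifact-reduced EEG."""
    Ry = np.cov(eeg[:, artifact_mask])    # covariance during artifacts
    Rn = np.cov(eeg[:, ~artifact_mask])   # covariance of clean EEG
    lam, V = eigh(Ry, Rn)                 # GEVD: Ry v = lam Rn v
    A = np.linalg.inv(V).T                # de-whitening (mixing) matrix
    lam_d = np.clip(lam - 1.0, 0.0, None) # low-rank artifact eigenvalues
    Rd = A @ np.diag(lam_d) @ A.T         # low-rank artifact covariance
    W = np.linalg.solve(Ry, Rd)           # multi-channel Wiener filter
    return eeg - W.T @ eeg                # subtract the artifact estimate
```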

169 citations


Journal ArticleDOI
TL;DR: A hybrid method that takes advantage of different correction algorithms is proposed and compared with existing algorithms for hemodynamic response function estimation using the following metrics: mean-squared error, peak-to-peak error, Pearson's correlation, and the area under the receiver operating characteristic curve.
Abstract: Motion artifact contamination in near-infrared spectroscopy (NIRS) data has become an important challenge in realizing the full potential of NIRS for real-life applications. Various motion correction algorithms have been used to alleviate the effect of motion artifacts on the estimation of the hemodynamic response function. While smoothing methods, such as wavelet filtering, are excellent in removing motion-induced sharp spikes, the baseline shifts in the signal remain after this type of filtering. Methods such as spline interpolation, on the other hand, can properly correct baseline shifts; however, they leave residual high-frequency spikes. We propose a hybrid method that takes advantage of different correction algorithms. This method first identifies the baseline shifts and corrects them using a spline interpolation method or targeted principal component analysis. The remaining spikes, on the other hand, are corrected by smoothing methods: Savitzky-Golay (SG) filtering or robust locally weighted regression and smoothing. We have compared our new approach with the existing correction algorithms in terms of hemodynamic response function estimation using the following metrics: mean-squared error, peak-to-peak error, Pearson's correlation, and the area under the receiver operating characteristic curve. We found that the spline-SG hybrid method provides reasonable improvements in all these metrics with a relatively short computational time. The dataset and the code used in this study are made available online for the use of all interested researchers.
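
A minimal sketch of the hybrid strategy, assuming the baseline-shift segments have already been detected; window lengths, smoothing parameters, and the re-centering rule are placeholders, not the paper's values.

```python
# Sketch of the hybrid correction: spline baseline correction, then
# Savitzky-Golay smoothing of residual spikes.
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.signal import savgol_filter

def hybrid_correct(signal, fs, shift_idx):
    """signal: 1-D NIRS channel; fs: sampling rate (Hz); shift_idx:
    sorted sample indices flagged as baseline-shift artifact."""
    t = np.arange(signal.size) / fs
    corrected = signal.astype(float).copy()
    if len(shift_idx) > 0:
        # Model the shifted baseline with a smoothing spline and re-center
        # the flagged span on the global median.
        spline = UnivariateSpline(t[shift_idx], corrected[shift_idx],
                                  s=len(shift_idx))
        corrected[shift_idx] -= spline(t[shift_idx]) - np.median(corrected)
    # SG filtering removes the remaining sharp spikes.
    win = int(fs) | 1                     # odd window, roughly one second
    return savgol_filter(corrected, window_length=win, polyorder=3)
```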

119 citations


Journal ArticleDOI
TL;DR: This paper presents an extensive overview of the existing methods for ocular, muscle, and cardiac artifact identification and removal with their comparative advantages and limitations, and reviews the schemes developed for validating the performance of algorithms with simulated and real EEG data.
Abstract: Electroencephalogram (EEG), boasting the advantages of portability, low cost, and high temporal resolution, is a non-invasive brain-imaging modality that can be used to measure different brain states. However, EEG recordings are always contaminated with artifacts from sources other than neurons, which renders EEG data analysis more difficult and potentially results in misleading findings. Therefore, it is essential for many medical and practical applications to remove these artifacts in the preprocessing stage before analyzing EEG data. In the last thirty years, various methods have been developed to remove different types of artifacts from contaminated EEG data; still, there is no standard method that can be used optimally, and therefore the research remains both attractive and challenging. This paper presents an extensive overview of the existing methods for ocular, muscle, and cardiac artifact identification and removal, with their comparative advantages and limitations. We also review the schemes developed for validating the performance of algorithms with simulated and real EEG data. In future studies, researchers should focus not only on combining different methods with multiple processing stages for efficient removal of artifactual interference but also on the development of standard criteria for validating recorded EEG signals.

119 citations


Journal ArticleDOI
TL;DR: The proposed method, called MEMD-CCA, first utilizes MEMD to jointly decompose the few-channel EEG recordings into multivariate intrinsic mode functions (IMFs); CCA is then applied to further decompose the reorganized multivariate IMFs into the underlying sources.
Abstract: Electroencephalography (EEG) recordings are often contaminated by muscle artifacts. In the literature, a number of methods have been proposed to deal with this problem. Yet most denoising muscle artifact methods are designed for either single-channel EEG or hospital-based, high-density multichannel recordings, not the few-channel scenario seen in most ambulatory EEG instruments. In this paper, we propose utilizing interchannel dependence information seen in the few-channel situation by combining multivariate empirical mode decomposition and canonical correlation analysis (MEMD-CCA). The proposed method, called MEMD-CCA, first utilizes MEMD to jointly decompose the few-channel EEG recordings into multivariate intrinsic mode functions (IMFs). Then, CCA is applied to further decompose the reorganized multivariate IMFs into the underlying sources. Reconstructing the data using only artifact-free sources leads to artifact-attenuated EEG. We evaluated the performance of the proposed method through simulated, semisimulated, and real data. The results demonstrate that the proposed method is a promising tool for muscle artifact removal in the few-channel setting.
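
The CCA stage of such a pipeline can be sketched on its own (the MEMD stage is omitted here). BSS-CCA against a one-sample-delayed copy orders components by autocorrelation, and muscle activity concentrates in the least autocorrelated components.

```python
# Sketch of autocorrelation-based BSS-CCA for muscle artifact removal.
import numpy as np
from scipy.linalg import eigh

def cca_muscle_removal(x, n_remove=2):
    """x: (channels, samples) zero-mean data; removes the n_remove least
    autocorrelated sources (typically muscle) and reconstructs."""
    a, b = x[:, :-1], x[:, 1:]                 # data and delayed copy
    Caa, Cbb = np.cov(a), np.cov(b)
    Cab = a @ b.T / (a.shape[1] - 1)
    M = Cab @ np.linalg.solve(Cbb, Cab.T)      # CCA normal equations
    rho2, W = eigh((M + M.T) / 2, Caa)         # ascending autocorrelation
    S = W.T @ x                                # canonical sources
    S[:n_remove] = 0                           # zero least autocorrelated
    return np.linalg.pinv(W.T) @ S             # back-project to channels
```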

117 citations


Journal ArticleDOI
Xia Huang, Jian Wang, Fan Tang, Tao Zhong, Yu Zhang
TL;DR: The RL-ARCNN indicates that residual learning of CNN remarkably reduces metal artifacts and improves critical structure visualization and confidence of radiation oncologists in target delineation.
Abstract: Cervical cancer is the fifth most common cancer among women and the third leading cause of cancer death in women worldwide. Brachytherapy is the most effective treatment for cervical cancer. For brachytherapy, computed tomography (CT) imaging is necessary since it conveys tissue density information which can be used for dose planning. However, the metal artifacts caused by brachytherapy applicators remain a challenge for the automatic processing of image data for image-guided procedures or accurate dose calculations. Therefore, developing an effective metal artifact reduction (MAR) algorithm for cervical CT images is in high demand. A novel residual learning method based on a convolutional neural network (RL-ARCNN) is proposed to reduce metal artifacts in cervical CT images. For MAR, a dataset is first generated by simulating various metal artifacts; it includes artifact-insert, artifact-free, and artifact-residual images. Numerous image patches are extracted from this dataset to train the deep residual learning artifact reduction network (RL-ARCNN). Afterwards, the trained model can be used for MAR on cervical CT images. The proposed method provides a good MAR result with a PSNR of 38.09 on the test set of simulated artifact images. The PSNR of residual learning (38.09) is higher than that of ordinary learning (37.79), which shows that CNN-based residual images achieve favorable artifact reduction. Moreover, for a 512 × 512 image, the average artifact removal time is less than 1 s. RL-ARCNN indicates that residual learning with a CNN remarkably reduces metal artifacts and improves critical structure visualization and the confidence of radiation oncologists in target delineation. Metal artifacts are eliminated efficiently without requiring sinogram data or complicated post-processing procedures.
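
The PSNR figures quoted above follow the standard definition; for reference:

```python
# Standard PSNR, as used for the 38.09 / 37.79 comparison above.
import numpy as np

def psnr(reference, test, max_val=None):
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if max_val is None:
        max_val = float(reference.max())
    return 10.0 * np.log10(max_val ** 2 / mse)
```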

Journal ArticleDOI
TL;DR: A pipeline for EEG source estimation is provided, from raw EEG data pre-processing using EEGLAB functions up to source-level analysis as implemented in Brainstorm, including group-level analysis in the time domain on anatomically defined regions of interest (auditory scout).
Abstract: Electroencephalography (EEG) source localization approaches are often used to disentangle the spatial patterns mixed up in scalp EEG recordings. However, approaches differ substantially between experiments, may be strongly parameter-dependent, and results are not necessarily meaningful. In this paper we provide a pipeline for EEG source estimation, from raw EEG data pre-processing using EEGLAB functions up to source-level analysis as implemented in Brainstorm. The pipeline is tested using a data set of 10 individuals performing an auditory attention task. The analysis approach estimates sources of 64-channel EEG data without the prerequisite of individual anatomies or individually digitized sensor positions. First, we show advanced EEG pre-processing using EEGLAB, which includes artifact attenuation using independent component analysis (ICA). ICA is a linear decomposition technique that aims to reveal the underlying statistical sources of mixed signals and is furthermore a powerful tool to attenuate stereotypical artifacts (e.g., eye movements or heartbeat). Data submitted to ICA are pre-processed to facilitate good-quality decompositions. Aiming toward an objective approach to component identification, the semi-automatic CORRMAP algorithm is applied for the identification of components representing prominent and stereotypic artifacts. Second, we present a step-wise approach to estimate active sources of auditory cortex event-related processing, on a single-subject level. The presented approach assumes that no individual anatomy is available and therefore the default anatomy ICBM152, as implemented in Brainstorm, is used for all individuals. Individual noise modeling in this dataset is based on the pre-stimulus baseline period. For EEG source modeling we use the OpenMEEG algorithm as the underlying forward model, based on the symmetric Boundary Element Method (BEM). We then apply the method of dynamical statistical parametric mapping (dSPM) to obtain physiologically plausible EEG source estimates. Finally, we show how to perform group-level analysis in the time domain on anatomically defined regions of interest (auditory scout). The proposed pipeline needs to be tailored to the specific datasets and paradigms. However, the straightforward combination of EEGLAB and Brainstorm analysis tools may be of interest to others performing EEG source localization.

Posted Content
TL;DR: This paper proposes an uncertainty estimation algorithm that selectively estimates the uncertainty of highly confident points, using earlier snapshots of the trained model, before their estimates are jittered (and way before they are ready for actual classification).
Abstract: We consider the problem of uncertainty estimation in the context of (non-Bayesian) deep neural classification. In this context, all known methods are based on extracting uncertainty signals from a trained network optimized to solve the classification problem at hand. We demonstrate that such techniques tend to introduce biased estimates for instances whose predictions are supposed to be highly confident. We argue that this deficiency is an artifact of the dynamics of training with SGD-like optimizers, and it has some properties similar to overfitting. Based on this observation, we develop an uncertainty estimation algorithm that selectively estimates the uncertainty of highly confident points, using earlier snapshots of the trained model, before their estimates are jittered (and way before they are ready for actual classification). We present extensive experiments indicating that the proposed algorithm provides uncertainty estimates that are consistently better than all known methods.

Journal ArticleDOI
TL;DR: The Batch EEG Automated Processing Platform (BEAPP), an automated, flexible EEG processing platform incorporating freely available software tools for batch processing of multiple EEG files across multiple processing steps, aims to streamline batch EEG processing, improve accessibility to computational EEG assessment, and increase reproducibility of results.
Abstract: Electroencephalography (EEG) offers information about brain function relevant to a variety of neurologic and neuropsychiatric disorders. EEG contains complex, high-temporal-resolution information, and computational assessment maximizes our potential to glean insight from this information. Here we present the Batch EEG Automated Processing Platform (BEAPP), an automated, flexible EEG processing platform incorporating freely available software tools for batch processing of multiple EEG files across multiple processing steps. BEAPP does not prescribe a specified EEG processing pipeline; instead, it allows users to choose from a menu of options for EEG processing, including steps to manage EEG files collected across multiple acquisition setups (e.g., for multisite studies), minimize artifact, segment continuous and/or event-related EEG, and perform basic analyses. Overall, BEAPP aims to streamline batch EEG processing, improve accessibility to computational EEG assessment, and increase reproducibility of results.

Journal ArticleDOI
01 Mar 2018
TL;DR: A new data-driven algorithm to effectively remove ocular and muscular artifacts from single-channel EEG: the surrogate-based artifact removal (SuBAR), which provides a relative error 4 to 5 times lower than traditional techniques.
Abstract: Objective: The recent emergence and success of electroencephalography (EEG) in low-cost portable devices has opened the door to a new generation of applications processing a small number of EEG channels for health monitoring and brain-computer interfacing. These recordings are, however, contaminated by many sources of noise degrading the signals of interest, thus compromising the interpretation of the underlying brain state. In this paper, we propose a new data-driven algorithm to effectively remove ocular and muscular artifacts from single-channel EEG: the surrogate-based artifact removal (SuBAR). Methods: By means of the time-frequency analysis of surrogate data, our approach is able to automatically identify and filter ocular and muscular artifacts embedded in single-channel EEG. Results: In a comparative study using artificially contaminated EEG signals, the efficacy of the algorithm in terms of noise removal and signal distortion was superior to other traditionally employed single-channel EEG denoising techniques: wavelet thresholding and canonical correlation analysis combined with an advanced version of the empirical mode decomposition. Even in the presence of mild and severe artifacts, our artifact removal method provides a relative error 4 to 5 times lower than traditional techniques. Significance: In view of these results, the SuBAR method is a promising solution for mobile environments, such as ambulatory healthcare systems, sleep stage scoring, or anesthesia monitoring, where very few EEG channels or even a single channel is available.
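
The surrogate-data machinery at the heart of such a method rests on phase randomization, which preserves a signal's power spectrum while destroying its temporal structure. A minimal sketch:

```python
# Phase-randomized surrogate of a 1-D signal: same power spectrum,
# randomized temporal structure. Time-frequency features of the real
# signal that fall outside the surrogate distribution are flagged as
# artifact (simplified view of the approach).
import numpy as np

def phase_surrogate(x, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    X = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, X.size)
    phases[0] = 0.0                        # keep the DC component real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=x.size)

# Null distribution from many surrogates (feature function is assumed):
# null = [tf_feature(phase_surrogate(eeg)) for _ in range(200)]
```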

Journal ArticleDOI
TL;DR: Signal quality was restored following noise cancellation when compared to single electrode EEG measurements collected with no phantom head motion, and these methods can be applied to studying electrocortical signals during human locomotion to improve real-world neuroimaging using EEG.
Abstract: Objective. Our purpose was to evaluate the ability of a dual electrode approach to remove motion artifact from electroencephalography (EEG) measurements. Approach. We used a phantom human head model and robotic motion platform to induce motion while collecting scalp EEG. We assembled a dual electrode array capturing (a) artificial neural signals plus noise from scalp EEG electrodes, and (b) electrically isolated motion artifact noise. We recorded artificial neural signals broadcast from antennae in the phantom head during continuous vertical sinusoidal movements (stationary, 1.00, 1.25, 1.50, 1.75, 2.00 Hz movement frequencies). We evaluated signal quality using signal-to-noise ratio (SNR), cross-correlation, and root mean square error (RMSE) between the ground truth broadcast signals and the recovered EEG signals. Main results. Signal quality was restored following noise cancellation when compared to single electrode EEG measurements collected with no phantom head motion. Significance. We achieved substantial motion artifact attenuation using secondary electrodes for noise cancellation. These methods can be applied to studying electrocortical signals during human locomotion to improve real-world neuroimaging using EEG.
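
One standard way to exploit an electrically isolated noise reference of this kind is adaptive noise cancellation with an LMS filter. The following is an illustrative sketch, not the authors' pipeline; tap count and step size are placeholders.

```python
# LMS adaptive noise cancellation using the isolated-noise electrode as
# the reference input (illustrative sketch).
import numpy as np

def lms_cancel(eeg, noise_ref, n_taps=16, mu=0.01):
    """eeg, noise_ref: 1-D arrays of equal length. Returns cleaned EEG."""
    w = np.zeros(n_taps)
    clean = np.zeros_like(eeg, dtype=float)
    for n in range(n_taps, eeg.size):
        u = noise_ref[n - n_taps:n][::-1]  # recent reference samples
        y = w @ u                          # predicted motion artifact
        e = eeg[n] - y                     # cleaned sample = error signal
        w += 2.0 * mu * e * u              # LMS weight update
        clean[n] = e
    return clean
```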

Journal ArticleDOI
TL;DR: This paper describes a novel methodology leveraging particle filters for the application of robust heart rate monitoring in the presence of motion artifacts, and formulate the heart rate itself as the only state to be estimated, and do not rely on multiple specific signal features.
Abstract: This paper describes a novel methodology leveraging particle filters for the application of robust heart rate monitoring in the presence of motion artifacts. Motion is a key source of noise that confounds traditional heart rate estimation algorithms for wearable sensors due to the introduction of spurious artifacts in the signals. In contrast to previous particle filtering approaches, we formulate the heart rate itself as the only state to be estimated, and do not rely on multiple specific signal features. Instead, we design observation mechanisms to leverage the known steady, consistent nature of heart rate variations to meet the objective of continuous monitoring of heart rate using wearable sensors. Furthermore, this independence from specific signal features also allows us to fuse information from multiple sensors and signal modalities to further improve estimation accuracy. The signal processing methods described in this work were tested on real motion artifact affected electrocardiogram and photoplethysmogram data with concurrent accelerometer readings. Results show promising average error rates less than 2 beats/min for data collected during intense running activities. Furthermore, a comparison with contemporary signal processing techniques for the same objective shows how the proposed implementation is also computationally more efficient for comparable performance.
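
A bare-bones sketch of the heart-rate-as-state particle filter described here; the likelihood callable stands in for the paper's observation mechanisms built from ECG/PPG features and is an assumption of this sketch.

```python
# One step of a heart-rate particle filter: propagate, reweight, resample.
import numpy as np

def pf_step(particles, weights, likelihood, rng, sigma=1.0):
    """particles: (N,) candidate heart rates (BPM); weights: normalized;
    likelihood: assumed callable mapping heart rates to observation
    likelihoods (e.g., built from ECG/PPG spectral peaks)."""
    particles = particles + rng.normal(0.0, sigma, particles.size)  # slow drift
    weights = weights * likelihood(particles)
    weights /= weights.sum()
    ess = 1.0 / np.sum(weights ** 2)            # effective sample size
    if ess < particles.size / 2:                # resample on degeneracy
        idx = rng.choice(particles.size, particles.size, p=weights)
        particles = particles[idx]
        weights = np.full(particles.size, 1.0 / particles.size)
    return particles, weights       # HR estimate: (particles * weights).sum()
```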

Journal ArticleDOI
TL;DR: An adaptive Optimal Basis Set (aOBS) method for BCG artifact removal is proposed that enables effective reduction of BCG residuals while preserving brain signals, and it is suggested that it may find wide application in simultaneous EEG-fMRI studies.
Abstract: Electroencephalography (EEG) signals recorded during simultaneous functional magnetic resonance imaging (fMRI) are contaminated by strong artifacts. Among these, the ballistocardiographic (BCG) artifact is the most challenging, due to its complex spatio-temporal dynamics associated with ongoing cardiac activity. The presence of BCG residuals in EEG data may hide true, or generate spurious correlations between EEG and fMRI time-courses. Here, we propose an adaptive Optimal Basis Set (aOBS) method for BCG artifact removal. Our method is adaptive, as it can estimate the delay between cardiac activity and BCG occurrence on a beat-to-beat basis. The effective creation of an optimal basis set by principal component analysis (PCA) is therefore ensured by a more accurate alignment of BCG occurrences. Furthermore, aOBS can automatically estimate which components produced by PCA are likely to be BCG artifact-related and therefore need to be removed. The aOBS performance was evaluated on high-density EEG data acquired with simultaneous fMRI in healthy subjects during visual stimulation. As aOBS enables effective reduction of BCG residuals while preserving brain signals, we suggest it may find wide application in simultaneous EEG-fMRI studies.

Journal ArticleDOI
01 Sep 2018 - IUCrJ
TL;DR: A three-dimensional reconstruction of the Melbournevirus was affected by a strong artifact in the center of the particle, which was found to be probably caused by background scattering; particle size and pulse-energy variation did not affect the quality of the reconstruction.

Proceedings ArticleDOI
02 Apr 2018
TL;DR: The proposed network is designed for multi-channel EEG signals and considers spatio-temporal correlation, a key feature in epileptic seizure detection, using 1D and 2D convolutional layers, and achieves 90.5% prediction accuracy on the SNUH-HYU EEG dataset.
Abstract: A new epileptic seizure detection method based on a deep convolutional network is proposed. The proposed network is designed for multi-channel EEG signals and considers spatio-temporal correlation, a key feature in epileptic seizure detection, using 1D and 2D convolutional layers. The 1D convolutional layers consider the temporal evolution of the EEG signal of each channel, and the 2D convolutional layers consider spatial relationships between EEG channels. We build training and test datasets by extracting EEG segments from the CHB-MIT Scalp EEG database and the SNUH-HYU EEG database: the recordings of long-term EEG monitoring at Seoul National University Hospital and Children's Hospital Boston. Our model is trained and tested using EEG segments with varying durations. We also investigate the effect of artifact elimination on epileptic seizure detection by applying a low-pass filter to the EEG signals. Our model achieves 90.5% prediction accuracy on the SNUH-HYU EEG dataset.

11 Apr 2018
TL;DR: The proposed convolutional neural network framework generalizes well to both unseen simulated motion artifacts and real motion artifact-affected data, and could easily be adapted to estimate a motion severity score, which could be used for quality control or as a nuisance covariate in subsequent statistical analyses.
Abstract: Head motion during MRI acquisition presents significant problems for subsequent neuroimaging analyses. In this work, we propose to use convolutional neural networks (CNNs) to correct motion-corrupted images as well as investigate a possible improvement by augmenting L1 loss with adversarial loss. For training, in order to gain access to a ground truth, we first selected a large number of motion-free images from the ABIDE dataset. We then added simulated motion artifacts on these images to produce motion-corrupted data, and a 3D regression CNN was trained to predict the motion-free volume as the output. We tested the CNN on unseen simulated data as well as real motion-affected data. Quantitative evaluation was carried out using metrics such as the Structural Similarity (SSIM) index, Correlation Coefficient (CC), root mean squared error (RMSE), and Tissue Contrast T-score (TCT). It was found that Gaussian smoothing, as a conventional method, did not significantly differ in SSIM, CC, and RMSE from the uncorrected data. On the other hand, the two CNN models successfully removed the motion-related artifact, as their SSIM and CC significantly increased after correction and the error was reduced. The CNN displayed a significantly larger TCT compared to the uncorrected images, whereas the adversarial network, while improved, did not show a significantly increased TCT, which may also be explained by its over-enhancement of edges. Our results suggest that the proposed CNN framework enables the network to generalize well to both unseen simulated motion artifacts and real motion artifact-affected data. The proposed method could easily be adapted to estimate a motion severity score, which could be used for quality control or as a nuisance covariate in subsequent statistical analyses.

Journal ArticleDOI
TL;DR: Virtual monoenergetic images from SDCT reduce metal artifacts from dental implants and improve diagnostic assessment of surrounding soft tissue.

Journal ArticleDOI
TL;DR: Two sparsity-based techniques, namely morphological component analysis (MCA) and a K-SVD-based artifact removal method, have been evaluated, and it is shown that, without using any computationally expensive algorithms, the proposed sparsity-based methods eliminate EB artifacts accurately from the EEG signals using only over-complete dictionaries.
Abstract: Neural activities recorded using electroencephalography (EEG) are mostly contaminated with eye blink (EB) artifact. This results in undesired activation of brain-computer interface (BCI) systems. Hence, removal of EB artifact is an important issue in EEG signal analysis. Of late, several artifact removal methods have been reported in the literature, based on independent component analysis (ICA), thresholding, wavelet transformation, etc. These methods are computationally expensive and result in information loss, which makes them unsuitable for online BCI system development. To address the above problems, we have investigated sparsity-based EB artifact removal methods. Two sparsity-based techniques, namely morphological component analysis (MCA) and a K-SVD-based artifact removal method, have been evaluated in our work. The MCA-based algorithm exploits the morphological characteristics of EEG and EB using predefined Dirac and discrete cosine transform (DCT) dictionaries. In the K-SVD-based algorithm, an overcomplete dictionary is learned from the EEG data itself and is designed to model EB characteristics. To substantiate the efficacy of the two algorithms, we have carried out our experiments with both synthetic and real EEG data. We observe that the K-SVD algorithm, which uses a learned dictionary, delivers superior performance for suppressing EB artifacts when compared to the MCA technique. Finally, the results of both techniques are compared with the recent state-of-the-art FORCe method. We demonstrate that the proposed sparsity-based algorithms perform on par with the state-of-the-art technique. It is shown that, without using any computationally expensive algorithms, only with the use of over-complete dictionaries, the proposed sparsity-based algorithms eliminate EB artifacts accurately from the EEG signals.

Journal ArticleDOI
TL;DR: Weak non-linear transfer characteristics inherent to stimulation and recording hardware can reintroduce spurious artifacts at the modulation frequency and its harmonics, suggesting the need for more linear stimulation devices for AM-tACS.

Journal ArticleDOI
TL;DR: The results suggest the potential of handheld LSI with an FM as a suitable alternative to mounted LSI, especially in challenging clinical settings with space limitations such as the intensive care unit.
Abstract: Laser speckle imaging (LSI) is a wide-field optical technique that enables superficial blood flow quantification. LSI is normally performed in a mounted configuration to decrease the likelihood of motion artifact. However, mounted LSI systems are cumbersome and difficult to transport quickly in a clinical setting for which portability is essential in providing bedside patient care. To address this issue, we created a handheld LSI device using scientific grade components. To account for motion artifact of the LSI device used in a handheld setup, we incorporated a fiducial marker (FM) into our imaging protocol and determined the difference between the highest and lowest speckle contrast values for the FM within each data set (K_best and K_worst). The difference between K_best and K_worst in mounted and handheld setups was 8% and 52%, respectively, thereby reinforcing the need for motion artifact quantification. When using a threshold FM speckle contrast value (K_FM) to identify a subset of images with an acceptable level of motion artifact, mounted and handheld LSI measurements of the speckle contrast of a flow region (K_FLOW) in in vitro flow phantom experiments differed by 8%. Without the use of the FM, mounted and handheld K_FLOW values differed by 20%. To further validate our handheld LSI device, we compared mounted and handheld data from an in vivo porcine burn model of superficial and full thickness burns. The speckle contrast within the burn region (K_BURN) of the mounted and handheld LSI data differed by <4% when accounting for motion artifact using the FM, which is less than the speckle contrast difference between superficial and full thickness burns. Collectively, our results suggest the potential of handheld LSI with an FM as a suitable alternative to mounted LSI, especially in challenging clinical settings with space limitations such as the intensive care unit.
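
The speckle contrast values reported here (K_FM, K_FLOW, K_BURN) follow the standard definition, local standard deviation over local mean of the raw speckle image, typically computed in a small sliding window:

```python
# Local speckle contrast K = sigma / mean over a sliding window.
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(img, win=7):
    img = img.astype(float)
    mean = uniform_filter(img, win)
    sq_mean = uniform_filter(img ** 2, win)
    std = np.sqrt(np.clip(sq_mean - mean ** 2, 0.0, None))
    return std / mean        # lower K = more blurring, i.e. faster flow
```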

Book ChapterDOI
16 Sep 2018
TL;DR: Results of a comparative study of the artifact subspace reconstruction (ASR) method and two other popular methods dedicated to correcting EEG artifacts show a significantly better level of artifact correction for the ASR method.
Abstract: The paper presents the results of a comparative study of the artifact subspace reconstruction (ASR) method and two other popular methods dedicated to correcting EEG artifacts: independent component analysis (ICA) and principal component analysis (PCA). The comparison is based on automatic rejection of EEG signal epochs performed on a dataset of motor imagery data. ANOVA results show a significantly better level of artifact correction for the ASR method. What is more, the ASR method does not cause serious signal loss compared to other methods.

Journal ArticleDOI
TL;DR: The investigated O-MAR algorithm reduces artifacts from DBS electrodes and should be used in the assessment of postoperative patients; however, combination with VMI does not provide an additional benefit.
Abstract: Objectives: The aim of this study was to evaluate the reduction of artifacts from deep brain stimulation electrodes (DBS) using an iterative metal artifact reduction algorithm (O-MAR), virtual monoenergetic images (VMI), and both in combination in postoperative spectral detector computed tomography using a dual-layer detector (SDCT) of the head. Material and methods: Nonanthropomorphic phantoms with different DBS leads were examined on SDCT; in one phantom, periprocedural bleeding was simulated. A total of 20 patients who underwent SDCT after DBS implantation between October 2016 and April 2017 were included in this institutional review board-approved retrospective study. Images were reconstructed using standard-of-care iterative reconstruction (CI) and VMI, each with and without O-MAR processing (IR and MAR). Artifacts were quantified by determining the percentage integrity uniformity in an annular region of 1.4 cm around the DBS lead; a percentage integrity uniformity of 100% indicates the absence of artifacts. In phantoms, conspicuity of blood was determined on a binary scale, whereas in patients, image quality, DBS lead assessment, and extent of artifact reduction were assessed on Likert scales by 2 radiologists. Statistical significance was assessed using analysis of variance and Wilcoxon tests; sensitivity and specificity were calculated. Results: The O-MAR processing significantly decreased artifacts in phantoms and patients (P ≤ 0.05), whereas VMI did not reduce artifact burden compared with corresponding CI (P > 0.05): for example, CI-IR/MAR and 200 keV-IR/MAR for patients: 76.3%/90.7% and 75.9%/91.2%, respectively. Qualitatively, overall image quality was not improved (P > 0.05), but MAR improved DBS assessment (CI-IR/MAR: 2 [1-3]/3 [2-4]; P ≤ 0.05) and reduced artifacts significantly (P ≤ 0.05). The O-MAR processing increased sensitivity for bleeding by 160%. In some cases, new artifacts were induced through O-MAR processing, none of which impaired diagnostic image assessment. Discussion: The investigated O-MAR algorithm reduces artifacts from DBS electrodes and should be used in the assessment of postoperative patients; however, combination with VMI does not provide an additional benefit.

Journal ArticleDOI
TL;DR: A software pipeline for real-time image processing suited to closed-loop experiments is presented, together with a novel method to estimate the baseline calcium signal using a kernel density estimate, which reduces the number of parameters to be tuned.
Abstract: Two-photon calcium imaging has been extensively used to record neural activity in the brain. It has long been used solely with post hoc analysis, but recent efforts have begun to include closed-loop experiments. Closed-loop experiments pose new challenges because they require fast, real-time image processing without iterative parameter tuning. When imaging awake animals, one of the crucial steps of post hoc image analysis is the correction of lateral motion artifacts. In most closed-loop experiments, this step has been ignored due to technical difficulties. We recently reported the first experiments with real-time processing of calcium imaging that included lateral motion correction. Here, we report the details of the implementation of fast motion correction and present a performance analysis across several algorithms with different parameters. Additionally, we introduce a novel method to estimate the baseline calcium signal using a kernel density estimate, which reduces the number of parameters to be tuned. Combined, we propose a novel software pipeline for real-time image processing suited to closed-loop experiments. The pipeline is also useful for rapid post hoc image processing.
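
The KDE baseline idea can be sketched directly: a calcium trace spends most samples near baseline, so the mode of a kernel density estimate over the trace is a robust, nearly parameter-free baseline. This is one assumed implementation, not the authors' code.

```python
# Baseline as the mode of a kernel density estimate over the trace.
import numpy as np
from scipy.stats import gaussian_kde

def kde_baseline(trace, n_grid=256):
    kde = gaussian_kde(trace)                       # automatic bandwidth
    grid = np.linspace(trace.min(), trace.max(), n_grid)
    return grid[np.argmax(kde(grid))]               # density peak = baseline

# dF/F with this baseline:
# f0 = kde_baseline(trace); dff = (trace - f0) / f0
```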

Journal ArticleDOI
TL;DR: A real-time artifact removal algorithm based on canonical correlation analysis (CCA), feature extraction, and the Gaussian mixture model (GMM) is proposed to improve the quality of EEG signals.
Abstract: Electroencephalogram (EEG) signals are usually contaminated with various artifacts, such as signal associated with muscle activity, eye movement, and body motion, which have a noncerebral origin. The amplitude of such artifacts is larger than that of the electrical activity of the brain, so they mask the cortical signals of interest, resulting in biased analysis and interpretation. Several blind source separation methods have been developed to remove artifacts from the EEG recordings. However, the iterative process for measuring separation within multichannel recordings is computationally intractable. Moreover, manually excluding the artifact components requires a time-consuming offline process. This work proposes a real-time artifact removal algorithm that is based on canonical correlation analysis (CCA), feature extraction, and the Gaussian mixture model (GMM) to improve the quality of EEG signals. The CCA was used to decompose EEG signals into components followed by feature extraction to extract representative features and GMM to cluster these features into groups to recognize and remove artifacts. The feasibility of the proposed algorithm was demonstrated by effectively removing artifacts caused by blinks, head/body movement, and chewing from EEG recordings while preserving the temporal and spectral characteristics of the signals that are important to cognitive research.

Journal ArticleDOI
TL;DR: This study investigated the optimal EOG signal filtering limits using state-of-the-art artifact removal techniques with fifteen artificially contaminated EEG and EOG datasets and validated the hypothesis that low-pass filtering should be applied to EOG signals to enhance the performance of each algorithm before using them in the artifact removal process.
Abstract: It is a fact that contamination of EEG by ocular artifacts reduces the classification accuracy of a brain-computer interface (BCI) and hampers the diagnosis of brain diseases in clinical research. Therefore, for BCI and clinical applications, it is very important to remove/reduce these artifacts before EEG signal analysis. Although EOG-based methods are simple and fast for removing artifacts, their performance is highly affected by the bidirectional contamination process. Some studies have emphasized that the solution to this problem is to low-pass filter EOG signals before using them in an artifact removal algorithm, but there is still no evidence on the optimal low-pass frequency limits of EOG signals. In this study, we investigated the optimal EOG signal filtering limits using state-of-the-art artifact removal techniques with fifteen artificially contaminated EEG and EOG datasets. In this comprehensive analysis, unfiltered and twelve different low-pass filterings of EOG signals were used with five different algorithms, namely, simple regression, least mean squares, recursive least squares, REGICA, and AIR. Results from statistical testing of time and frequency domain metrics suggested that a low-pass frequency between 6 and 8 Hz could be used as the most optimal filtering frequency of EOG signals, both to maximally overcome/minimize the effect of bidirectional contamination and to achieve good results from artifact removal algorithms. Furthermore, we also used BCI competition IV datasets to show the efficacy of the proposed framework on real EEG signals. The motor-imagery-based BCI achieved statistically significantly higher classification accuracies when artifacts were removed from the EEG using 7 Hz low-pass filtering, as compared to all other filterings of EOG signals. These results also validated our hypothesis that low-pass filtering should be applied to EOG signals to enhance the performance of each algorithm before using them in the artifact removal process. Moreover, the comparison results indicated that the hybrid algorithms outperformed single algorithms for both simulated and experimental EEG datasets.
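
The recommended workflow, low-pass filtering the EOG reference near the reported 6-8 Hz optimum before estimating and subtracting its contribution, can be sketched with simple regression (one of the five algorithms compared); filter order and cutoff are placeholders.

```python
# Regression-based EOG removal with the EOG reference low-pass filtered
# at ~7 Hz before weight estimation (illustrative sketch).
import numpy as np
from scipy.signal import butter, filtfilt

def remove_eog(eeg, eog, fs, cutoff=7.0):
    """eeg: (samples, channels); eog: (samples, n_eog); fs in Hz."""
    b, a = butter(4, cutoff / (fs / 2.0), btype="low")
    eog_lp = filtfilt(b, a, eog, axis=0)            # filtered EOG reference
    X = np.hstack([np.ones((eog_lp.shape[0], 1)), eog_lp])
    beta, *_ = np.linalg.lstsq(X, eeg, rcond=None)  # regression weights
    return eeg - X @ beta                           # corrected EEG
```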