
Showing papers on "Artifact (error)" published in 2020


Journal ArticleDOI
TL;DR: This article reviews journal publications on TL approaches in EEG-based BCIs since 2016, grouping the approaches into cross-subject/session, cross-device, and cross-task settings and reviewing each separately.
Abstract: A brain-computer interface (BCI) enables a user to communicate with a computer directly using brain signals. The most common non-invasive BCI modality, electroencephalogram (EEG), is sensitive to noise/artifact and suffers from between-subject/within-subject non-stationarity. Therefore, it is difficult to build a generic pattern recognition model in an EEG-based BCI system that is optimal for different subjects, during different sessions, for different devices and tasks. Usually, a calibration session is needed to collect some training data for a new subject, which is time-consuming and user unfriendly. Transfer learning (TL), which utilizes data or knowledge from similar or relevant subjects/sessions/devices/tasks to facilitate learning for a new subject/session/device/task, is frequently used to reduce the amount of calibration effort. This paper reviews journal publications on TL approaches in EEG-based BCIs in the last few years, i.e., since 2016. Six paradigms and applications – motor imagery, event-related potentials, steady-state visual evoked potentials, affective BCIs, regression problems, and adversarial attacks – are considered. For each paradigm/application, we group the TL approaches into cross-subject/session, cross-device, and cross-task settings and review them separately. Observations and conclusions are made at the end of the paper, which may point to future research directions.
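As a concrete illustration of the cross-subject setting, one simple and widely used TL idea (Euclidean alignment, not specific to any single paper in this review) whitens each subject's trials by that subject's mean spatial covariance, so trials from different subjects share a common reference. A minimal numpy sketch, with random data standing in for real EEG:

```python
import numpy as np

def inv_sqrtm(m):
    """Inverse square root of a symmetric positive-definite matrix."""
    vals, vecs = np.linalg.eigh(m)
    return vecs @ np.diag(vals ** -0.5) @ vecs.T

def euclidean_align(trials):
    """Whiten trials (n_trials, n_channels, n_samples) by the mean spatial
    covariance, so aligned data from different subjects share a common
    (identity) reference covariance."""
    covs = np.array([t @ t.T / t.shape[1] for t in trials])
    r_inv_sqrt = inv_sqrtm(covs.mean(axis=0))
    return np.array([r_inv_sqrt @ t for t in trials])

# toy check with random data standing in for one subject's EEG
rng = np.random.default_rng(0)
trials = rng.standard_normal((10, 4, 256))
aligned = euclidean_align(trials)
mean_cov = np.mean([t @ t.T / t.shape[1] for t in aligned], axis=0)
print(np.allclose(mean_cov, np.eye(4), atol=1e-6))  # True
```

After alignment the mean covariance is the identity by construction, which is why a classifier trained on several aligned subjects transfers more readily to a new aligned subject.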

128 citations


Journal ArticleDOI
TL;DR: The authors propose the first unsupervised learning approach to CT metal artifact reduction: an artifact disentanglement network that separates metal artifacts from CT images in latent space, obviating the need for supervision with synthesized data and generalizing better than supervised methods to clinical data.
Abstract: Current deep neural network based approaches to computed tomography (CT) metal artifact reduction (MAR) are supervised methods that rely on synthesized metal artifacts for training. However, as synthesized data may not accurately simulate the underlying physical mechanisms of CT imaging, the supervised methods often generalize poorly to clinical applications. To address this problem, we propose, to the best of our knowledge, the first unsupervised learning approach to MAR. Specifically, we introduce a novel artifact disentanglement network that disentangles the metal artifacts from CT images in the latent space. It supports different forms of generation (artifact reduction, artifact transfer, self-reconstruction, etc.) with specialized loss functions to obviate the need for supervision with synthesized data. Extensive experiments show that when applied to a synthesized dataset, our method addresses metal artifacts significantly better than the existing unsupervised models designed for natural image-to-image translation problems, and achieves comparable performance to existing supervised models for MAR. When applied to clinical datasets, our method demonstrates better generalization ability over the supervised models. The source code of this paper is publicly available at https://github.com/liaohaofu/adn .

122 citations


Journal ArticleDOI
TL;DR: With optimized procedures, ICA removed virtually all artifacts, including the SP and its associated spectral broadband artifact from both viewing paradigms, with little distortion of neural activity.

108 citations


Journal ArticleDOI
TL;DR: The Maryland analysis of developmental EEG (MADE) pipeline is developed as an automated preprocessing pipeline compatible with EEG data recorded with different hardware systems, different populations, levels of artifact contamination, and length of recordings.
Abstract: Compared to adult EEG, EEG signals recorded from pediatric populations have shorter recording periods and contain more artifact contamination. Therefore, pediatric EEG data necessitate specific preprocessing approaches in order to remove environmental noise and physiological artifacts without losing large amounts of data. However, there is presently a scarcity of standard automated preprocessing pipelines suitable for pediatric EEG. In an effort to achieve greater standardization of EEG preprocessing, and in particular, for the analysis of pediatric data, we developed the Maryland analysis of developmental EEG (MADE) pipeline as an automated preprocessing pipeline compatible with EEG data recorded with different hardware systems, different populations, levels of artifact contamination, and length of recordings. MADE uses EEGLAB and functions from some EEGLAB plugins and includes additional customized features particularly useful for EEG data collected from pediatric populations. MADE processes event-related and resting state EEG from raw data files through a series of preprocessing steps and outputs processed clean data ready to be analyzed in time, frequency, or time-frequency domain. MADE provides a report file at the end of the preprocessing that describes a variety of features of the processed data to facilitate the assessment of the quality of processed data. In this article, we discuss some practical issues, which are specifically relevant to pediatric EEG preprocessing. We also provide custom-written scripts to address these practical issues. MADE is freely available under the terms of the GNU General Public License at https://github.com/ChildDevLab/MADE-EEG-preprocessing-pipeline.
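MADE itself is implemented in MATLAB on top of EEGLAB; purely as a language-agnostic illustration of one typical step in such automated pipelines (not MADE's actual code), here is a hypothetical voltage-threshold epoch-rejection sketch:

```python
import numpy as np

def reject_epochs(epochs, threshold_uv=100.0):
    """Drop epochs (n_epochs, n_channels, n_samples) in which any channel
    exceeds +/- threshold_uv microvolts, a common final artifact-rejection
    criterion in automated EEG pipelines."""
    peak = np.abs(epochs).max(axis=(1, 2))   # worst excursion per epoch
    keep = peak <= threshold_uv
    return epochs[keep], keep

rng = np.random.default_rng(1)
epochs = rng.normal(0.0, 10.0, size=(20, 8, 100))  # ~10 uV background noise
epochs[3, 0, 50] = 500.0                           # inject a blink-sized spike
clean, keep = reject_epochs(epochs)
print(clean.shape[0], bool(keep[3]))  # 19 False
```

Real pipelines such as MADE combine several criteria (channel interpolation, ICA-based removal, thresholding) and, as the abstract notes, report how much data each step discards so quality can be assessed.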

89 citations


Journal ArticleDOI
TL;DR: The authors present a method for eliminating stimulation artifacts in high-density micro-LED optoelectrodes for accurate functional mapping of local circuits in mice without any artifact-induced signal quality degradation during in vivo experiments.
Abstract: The combination of in vivo extracellular recording and genetic-engineering-assisted optical stimulation is a powerful tool for the study of neuronal circuits. Precise analysis of complex neural circuits requires high-density integration of multiple cellular-size light sources and recording electrodes. However, high-density integration inevitably introduces stimulation artifact. We present minimal-stimulation-artifact (miniSTAR) μLED optoelectrodes that enable effective elimination of stimulation artifact. A multi-metal-layer structure with a shielding layer effectively suppresses capacitive coupling of stimulation signals. A heavily boron-doped silicon substrate silences the photovoltaic effect induced from LED illumination. With transient stimulation pulse shaping, we reduced stimulation artifact on miniSTAR μLED optoelectrodes to below 50 μVpp, much smaller than a typical spike detection threshold, at optical stimulation of >50 mW mm–2 irradiance. We demonstrated high-temporal resolution (<1 ms) opto-electrophysiology without any artifact-induced signal quality degradation during in vivo experiments. MiniSTAR μLED optoelectrodes will facilitate functional mapping of local circuits and discoveries in the brain. Artifact-free opto-electrophysiology is key for precise modulation and monitoring of individual neurons at high spatio-temporal resolution. The authors present a method for eliminating stimulation artifacts in high-density micro-LED optoelectrodes for accurate functional mapping of local circuits.

77 citations


Journal ArticleDOI
TL;DR: A one-dimensional residual convolutional neural network (1D-ResCNN) model for raw-waveform-based EEG denoising is proposed; it yields cleaner waveforms and achieves significant improvements in SNR and RMSE.
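The paper's architecture is not reproduced here; as a toy sketch of the core building block (a 1-D convolution with an identity skip connection, so the network only has to learn the noise residual rather than the whole waveform), with untrained stand-in weights:

```python
import numpy as np

def residual_block(x, kernel):
    """One toy residual block: y = ReLU(conv(x)) + x. The skip connection
    means the convolution models a correction to the input waveform."""
    h = np.convolve(x, kernel, mode="same")   # 'same'-padded 1-D convolution
    return np.maximum(h, 0.0) + x

t = np.linspace(0.0, 1.0, 200)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * np.random.default_rng(2).standard_normal(200)
kernel = np.ones(5) / 5.0                     # stand-in for learned weights
out = residual_block(noisy, kernel)
print(out.shape)  # (200,)
```

A trained 1D-ResCNN stacks many such blocks with learned kernels; the sketch only shows why the output keeps the input's length and overall shape.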

73 citations


Journal ArticleDOI
TL;DR: A convincing physical explanation of the genesis of important ultrasound lung artifacts does not yet exist; this article suggests a plausible genesis for some of them.
Abstract: In standard B mode imaging, a set of ultrasound pulses is used to reconstruct a 2-D image even though some of the assumptions needed to do this are not fully satisfied. For this reason, ultrasound medical images show numerous artifacts which physicians recognize and evaluate as part of their diagnosis since even one artifact can provide clinical information. Understanding the physical mechanisms at the basis of the formation of an artifact is important to identify the physiopathological state of the biological medium which generated the artifact. Ultrasound lung images are a significant example of this challenge since everything that is represented beyond the thickness of the chest wall (≈2 cm) is artifactual information. A convincing physical explanation of the genesis of important ultrasound lung artifacts does not exist yet. Physicians simply base their diagnosis on a correlation observed over the years between the manifestation of some artifacts and the occurrence of particular lung pathologies. In this article, a plausible genesis of some important lung artifacts is suggested.

70 citations


Journal ArticleDOI
Tri Vu1, Mucong Li1, Hannah Humayun1, Yuan Zhou1, Yuan Zhou2, Junjie Yao1 
TL;DR: A deep-learning-based method that explores the Wasserstein generative adversarial network with gradient penalty (WGAN-GP) to reduce the limited-view and limited-bandwidth artifacts in PACT shows unprecedented artifact removal ability for in vivo images, which may enable important applications such as imaging tumor angiogenesis and hypoxia.
Abstract: With balanced spatial resolution, penetration depth, and imaging speed, photoacoustic computed tomography (PACT) is promising for clinical translation such as in breast cancer screening, functional...

69 citations


Journal ArticleDOI
TL;DR: A simple method is presented that combines the advantages of spectral and spatial filtering while minimizing their downsides; it is applicable to multichannel data such as electroencephalography (EEG), magnetoencephalography (MEG), or multichannel local field potentials (LFP).

64 citations


Journal ArticleDOI
01 Nov 2020
TL;DR: In this paper, a method of wavelet ICA (WICA) using a fuzzy kernel support vector machine (FKSVM) is proposed for automatically removing and classifying EEG artifacts.
Abstract: Electroencephalography (EEG) recordings of brain signal activity are almost always contaminated with artifacts. Clinical diagnostic and brain-computer interface applications frequently require automated artifact removal, and in digital signal processing and visual assessment, EEG artifact removal is considered a key analysis technique. A standard dimensionality reduction technique such as independent component analysis (ICA) can be combined with the wavelet transform to remove EEG signal artifacts. Because manual artifact removal is time-consuming, a novel method combining wavelet ICA (WICA) with a fuzzy kernel support vector machine (FKSVM) is proposed for removing and classifying EEG artifacts automatically. The proposed method provides an efficient and robust system for automatic classification and artifact computation from the EEG signal without an explicitly supplied cutoff value, and the target artifacts are removed successfully by the combination of WICA and FKSVM. Descriptive statistical features such as the mean, standard deviation, variance, kurtosis, and range are used to build the model, with the FKSVM trained and tested on these features to classify the EEG signal artifacts. Future work will implement various machine learning algorithms to improve the performance of the system.
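The FKSVM classifier and the full WICA pipeline are not reproduced here; as a sketch of the two ingredients the abstract names, a one-level Haar wavelet decomposition and the descriptive statistical features, assuming a random segment standing in for EEG:

```python
import numpy as np
from scipy.stats import kurtosis

def haar_dwt(x):
    """One level of the (orthonormal) Haar wavelet transform."""
    x = x[: len(x) // 2 * 2]                  # truncate to even length
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def features(x):
    """The descriptive statistics named in the abstract."""
    return {
        "mean": np.mean(x),
        "std": np.std(x),
        "variance": np.var(x),
        "kurtosis": kurtosis(x),
        "range": np.ptp(x),
    }

rng = np.random.default_rng(3)
eeg = rng.standard_normal(512)                # stand-in EEG segment
approx, detail = haar_dwt(eeg)
print(len(approx), sorted(features(detail)))
```

In a WICA-style pipeline such features would be computed per independent component or wavelet sub-band and fed to the classifier to decide which components to discard.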

62 citations


Posted Content
TL;DR: EEGdenoiseNet is a benchmark EEG dataset suited for training and testing deep learning-based denoising models, as well as for performance comparisons across models.
Abstract: Deep learning networks are increasingly attracting attention in various fields, including electroencephalography (EEG) signal processing, where they have provided performance comparable to that of traditional techniques. At present, however, the lack of well-structured and standardized datasets with specific benchmarks limits the development of deep learning solutions for EEG denoising. Here, we present EEGdenoiseNet, a benchmark EEG dataset that is suited for training and testing deep learning-based denoising models, as well as for performance comparisons across models. EEGdenoiseNet contains 4514 clean EEG segments, 3400 ocular artifact segments and 5598 muscular artifact segments, allowing users to synthesize noisy EEG segments with the ground-truth clean EEG. We used EEGdenoiseNet to evaluate the denoising performance of four classical networks (a fully-connected network, a simple and a complex convolutional network, and a recurrent neural network). Our analysis suggests that deep learning methods have great potential for EEG denoising even under high noise contamination. Through EEGdenoiseNet, we hope to accelerate the development of the emerging field of deep learning-based EEG denoising.
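The dataset's central mechanic, mixing a clean segment with an artifact segment at a controlled signal-to-noise ratio, can be sketched as follows (the paper's exact scaling convention may differ; this assumes an RMS-based SNR definition):

```python
import numpy as np

def rms(x):
    return np.sqrt(np.mean(x ** 2))

def mix_at_snr(clean, artifact, snr_db):
    """Scale the artifact segment so the clean/artifact power ratio equals
    snr_db, then add it to the clean segment to form a training pair."""
    lam = rms(clean) / (rms(artifact) * 10 ** (snr_db / 20))
    return clean + lam * artifact

rng = np.random.default_rng(4)
clean = np.sin(np.linspace(0, 8 * np.pi, 1000))      # stand-in clean EEG
artifact = rng.standard_normal(1000)                 # stand-in muscle artifact
noisy = mix_at_snr(clean, artifact, snr_db=0.0)
achieved = 20 * np.log10(rms(clean) / rms(noisy - clean))
print(abs(achieved) < 1e-6)  # True: the requested SNR is hit
```

Because the clean segment is known, a denoiser trained on such pairs can be scored directly against the ground truth at any contamination level.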

Journal ArticleDOI
TL;DR: Dual-layer EEG enabled us to isolate changes in human sensorimotor electrocortical dynamics across walking speeds, and allowed us to document and rule out residual artifacts, which exposed sensorimotor spectral power changes across gait speeds.
Abstract: Objective: Our aim was to determine if walking speed affected human sensorimotor electrocortical dynamics using mobile high-density electroencephalography (EEG). Methods: To overcome limitations associated with motion and muscle artifact contamination in EEG recordings, we compared solutions for artifact removal using novel dual-layer EEG electrodes and alternative signal processing methods. Dual-layer EEG simultaneously recorded human electrocortical signals and isolated motion artifacts using pairs of mechanically coupled and electrically independent electrodes. For electrical muscle activity removal, we incorporated electromyographic (EMG) recordings from the neck into our mobile EEG data processing pipeline. We compared artifact removal methods during treadmill walking at four speeds (0.5, 1.0, 1.5, and 2.0 m/s). Results: Left and right sensorimotor alpha and beta spectral power increased in contralateral limb single support and push off, and decreased during contralateral limb swing at each speed. At faster walking speeds, sensorimotor spectral power fluctuations were less pronounced across the gait cycle with reduced alpha and beta power ( p Conclusion and significance: Dual-layer EEG enabled us to isolate changes in human sensorimotor electrocortical dynamics across walking speeds. A comparison of signal processing approaches revealed similar intrastride cortical fluctuations when applying common (e.g., artifact subspace reconstruction) and novel artifact rejection methods. Dual-layer EEG, however, allowed us to document and rule out residual artifacts, which exposed sensorimotor spectral power changes across gait speeds.

Journal ArticleDOI
TL;DR: A gold nanonetwork (Au NN)‐based transparent neural electrocorticogram (ECoG) monitoring system is proposed as implantable neural electronics and demonstrates that the transparent microelectrode array records multichannel in vivo neural activities with no photoelectric artifact and a high signal‐to‐noise ratio.

Proceedings ArticleDOI
12 Oct 2020
TL;DR: Through reducing artifact patterns, the FakePolisher technique significantly reduces the accuracy of the 3 state-of-the-art fake image detection methods, i.e., 47% on average and up to 93% in the worst case.
Abstract: At this moment, GAN-based image generation methods are still imperfect: their upsampling designs leave certain artifact patterns in the synthesized image. Such artifact patterns can be easily exploited (by recent methods) to distinguish real from GAN-synthesized images. However, the existing detection methods put much emphasis on these artifact patterns, which can become futile if the patterns are reduced. Towards reducing the artifacts in synthesized images, in this paper we devise a simple yet powerful approach termed FakePolisher that performs shallow reconstruction of fake images through a learned linear dictionary, intending to effectively and efficiently reduce the artifacts introduced during image synthesis. In particular, we first train a dictionary model to capture the patterns of real images. Based on this dictionary, we seek the representation of DeepFake images in a low-dimensional subspace through linear projection or sparse coding. Then, we are able to perform shallow reconstruction of the 'fake-free' version of the DeepFake image, which largely reduces the artifact patterns DeepFake introduces. A comprehensive evaluation on 3 state-of-the-art DeepFake detection methods and fake images generated by 16 popular GAN-based fake image generation techniques demonstrates the effectiveness of our technique. Overall, through reducing artifact patterns, our technique significantly reduces the accuracy of the 3 state-of-the-art fake image detection methods, i.e., by 47% on average and up to 93% in the worst case. Our results confirm the limitations of current fake detection methods and call the attention of DeepFake researchers and practitioners to more general-purpose fake detection techniques.
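The shallow-reconstruction idea can be sketched with an ordinary least-squares projection standing in for the learned dictionary model (the paper also uses sparse coding; this is the simpler linear-projection variant, with random stand-ins for the dictionary and the fake patch):

```python
import numpy as np

rng = np.random.default_rng(5)
d, k = 64, 8                       # patch size, number of dictionary atoms
D = rng.standard_normal((d, k))    # stand-in for a dictionary learned on real images

fake = rng.standard_normal(d)      # stand-in for a flattened DeepFake patch
coef, *_ = np.linalg.lstsq(D, fake, rcond=None)   # linear projection onto span(D)
polished = D @ coef                # shallow 'fake-free' reconstruction

# The reconstruction lives entirely in the dictionary's span, so any
# artifact energy orthogonal to that span is discarded.
residual = fake - polished
print(np.allclose(D.T @ residual, 0.0, atol=1e-8))  # True: residual is orthogonal to span(D)
```

Because GAN upsampling artifacts concentrate in directions the real-image dictionary never learned, projecting onto that dictionary suppresses exactly the patterns the detectors rely on.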

Journal ArticleDOI
TL;DR: Verification results based on numerical, experimental and clinical data confirm that the proposed convolutional neural network can significantly reduce serious artifacts in limited-angle computed tomography.
Abstract: The suppression of streak artifacts in computed tomography with a limited-angle configuration is challenging. Conventional analytical algorithms, such as filtered backprojection (FBP), are not successful due to incomplete projection data. Moreover, model-based iterative total variation algorithms effectively reduce small streaks but do not work well at eliminating large streaks. In contrast, FBP mapping networks and deep-learning-based postprocessing networks are outstanding at removing large streak artifacts; however, these methods perform processing in separate domains, and the advantages of multiple deep learning algorithms operating in different domains have not been simultaneously explored. In this paper, we present a hybrid-domain convolutional neural network (hdNet) for the reduction of streak artifacts in limited-angle computed tomography. The network consists of three components: the first component is a convolutional neural network operating in the sinogram domain, the second is a domain transformation operation, and the last is a convolutional neural network operating in the CT image domain. After training the network, we can obtain artifact-suppressed CT images directly from the sinogram domain. Verification results based on numerical, experimental and clinical data confirm that the proposed method can significantly reduce serious artifacts.

Journal ArticleDOI
TL;DR: This work modified ADJUST's algorithm to automate artifact selection for pediatric data collected with geodesic nets, and results indicate that optimizing existing algorithms improves artifact classification and retains more trials, potentially facilitating EEG studies with pediatric populations.
Abstract: A major challenge for electroencephalograph (EEG) studies on pediatric populations is that large amounts of data are lost due to artifacts (e.g., movement and blinks). Independent component analysis (ICA) can separate artifactual and neural activity, allowing researchers to remove such artifactual activity and retain a greater percentage of EEG data for analyses. However, manual identification of artifactual components is time-consuming and requires subjective judgment. Automated algorithms, like ADJUST and ICLabel, have been validated on adults, but to our knowledge, no such algorithms have been optimized for pediatric data. Therefore, in an attempt to automate artifact selection for pediatric data collected with geodesic nets, we modified ADJUST's algorithm. Our "adjusted-ADJUST" algorithm was compared to the "original-ADJUST" algorithm and ICLabel in adults, children, and infants on three different performance measures: respective classification agreement with expert coders, the number of trials retained following artifact removal, and the reliability of the EEG signal after preprocessing with each algorithm. Overall, the adjusted-ADJUST algorithm performed better than the original-ADJUST algorithm and no ICA correction with adult and pediatric data. Moreover, in some measures, it performed better than ICLabel for pediatric data. These results indicate that optimizing existing algorithms improves artifact classification and retains more trials, potentially facilitating EEG studies with pediatric populations. Adjusted-ADJUST is freely available under the terms of the GNU General Public License at: https://github.com/ChildDevLab/MADE-EEG-preprocessing-pipeline/tree/master/adjusted_adjust_scripts.

Journal ArticleDOI
TL;DR: An algorithm based on wavelet packet decomposition (WPD) is proposed that allows controlling the suppression or removal of presumed artifacts by tuning intuitive parameters; it performs better than an ICA-based approach, and performance can be further improved by properly tuning the parameters for an individual predictive model.

Journal ArticleDOI
TL;DR: An ambiguity about the formation of B-lines is investigated, leading to the formulation of two main hypotheses: the first hypothesis states that the visualization of these artifacts is linked only to the dimension of the emitted beam, whereas the second associates their appearance to specific resonance phenomena.
Abstract: The clinical relevance of lung ultrasonography (LUS) has been rapidly growing since the 1990s. However, LUS is mainly based on the evaluation of visual artifacts (also called B-lines), leading to subjective and qualitative diagnoses. The formation of B-lines remains unknown and, hence, researchers need to study their origin to allow clinicians to quantitatively evaluate the state of lungs. This paper investigates an ambiguity about the formation of B-lines, leading to the formulation of two main hypotheses. The first hypothesis states that the visualization of these artifacts is linked only to the dimension of the emitted beam, whereas the second associates their appearance to specific resonance phenomena. To verify these hypotheses, the frequency spectrum of B-lines was studied by using dedicated lung-phantoms. A research programmable platform connected to an LA533 linear array probe was exploited both to implement a multifrequency approach and to acquire raw radio frequency data. The strength of each artifact was measured as a function of frequency, focal point, and transmitting aperture by means of the artifact total intensity. The results show that the main parameter that influences the visualization of B-lines is the frequency rather than the focal point or the number of transmitting elements.

Proceedings ArticleDOI
04 May 2020
TL;DR: In this paper, a distortion-specific no-reference video quality model for predicting banding artifacts, called the Blind BANding Detector (BBAND index), was proposed.
Abstract: Banding artifact, or false contouring, is a common video compression impairment that tends to appear on large flat regions in encoded videos. These staircase-shaped color bands can be very noticeable in high-definition videos. Here we study this artifact, and propose a new distortion-specific no-reference video quality model for predicting banding artifacts, called the Blind BANding Detector (BBAND index). BBAND is inspired by human visual models. The proposed detector can generate a pixel-wise banding visibility map and output a banding severity score at both the frame and video levels. Experimental results show that our proposed method outperforms state-of-the-art banding detection algorithms and delivers better consistency with subjective evaluations.

Journal ArticleDOI
TL;DR: The proposed approach first records EEG signals using a wearable EEG headset, contaminated by motion artifacts generated in a lab-controlled experiment, followed by temporal and spectral characterization of the signals and artifact removal using independent component analysis (ICA).

Posted Content
TL;DR: A novel deep learning-based magnetic resonance imaging reconstruction pipeline that includes a deep convolutional neural network to aid in the reconstruction of raw data, ultimately producing clean, sharp images.
Abstract: A novel deep learning-based magnetic resonance imaging reconstruction pipeline was designed to address fundamental image quality limitations of conventional reconstruction to provide high-resolution, low-noise MR images. This pipeline's unique aims were to convert truncation artifact into improved image sharpness while jointly denoising images to improve image quality. This new approach, now commercially available as AIR Recon DL (GE Healthcare, Waukesha, WI), includes a deep convolutional neural network (CNN) to aid in the reconstruction of raw data, ultimately producing clean, sharp images. Here we describe key features of this pipeline and its CNN, characterize its performance in digital reference objects, phantoms, and in-vivo, and present sample images and protocol optimization strategies that leverage image quality improvement for reduced scan time. This new deep learning-based reconstruction pipeline represents a powerful new tool to increase the diagnostic and operational performance of an MRI scanner.

Proceedings ArticleDOI
19 Jul 2020
TL;DR: This work proposes a pattern recognition neural network based single-channel automatic artifact detection tool capable of detecting artifacts with 93.2% overall accuracy; it requires an average computing time of 2.57 seconds to analyse one minute of LFPs, making it a strong candidate for online deployment without the need for high-performance computing equipment.
Abstract: The neural recordings known as local field potentials (LFPs) provide important information on how neural circuits operate and relate. Due to the involvement of complex electronic apparatus in the recording setups, these signals are often significantly contaminated by artifacts generated by a number of internal and external sources. To make the best use of these signals, it is imperative to detect and remove the artifacts from them. Hence, this work proposes a pattern recognition neural network based single-channel automatic artifact detection tool. The tool is capable of detecting artifacts with an overall accuracy of 93.2% and requires an average computing time of 2.57 seconds to analyse one minute of LFP data, making it a strong candidate for online deployment without the need for high-performance computing equipment.

Journal ArticleDOI
TL;DR: Retrospective correction of motion artifacts using a multiscale fully convolutional network is promising and may mitigate the substantial motion-related problems in the clinical MRI workflow.
Abstract: BACKGROUND AND PURPOSE: Motion artifacts are a frequent source of image degradation in the clinical application of MR imaging (MRI). Here we implement and validate an MRI motion-artifact correction method using a multiscale fully convolutional neural network. MATERIALS AND METHODS: The network was trained to identify motion artifacts in axial T2-weighted spin-echo images of the brain. Using an extensive data augmentation scheme and a motion artifact simulation pipeline, we created a synthetic training dataset of 93,600 images based on only 16 artifact-free clinical MRI cases. A blinded reader study using a unique test dataset of 28 additional clinical MRI cases with real patient motion was conducted to evaluate the performance of the network. RESULTS: Application of the network resulted in notably improved image quality without the loss of morphologic information. For synthetic test data, the average reduction in mean squared error was 41.84%. The blinded reader study on the real-world test data resulted in significant reduction in mean artifact scores across all cases (P CONCLUSIONS: Retrospective correction of motion artifacts using a multiscale fully convolutional network is promising and may mitigate the substantial motion-related problems in the clinical MRI workflow.

Journal ArticleDOI
TL;DR: The DRN-DCMB model significantly improved overall image quality, reduced the severity of motion artifacts, and improved image sharpness, while preserving image contrast.

Journal ArticleDOI
TL;DR: This paper proposes a robust framework for the detection and removal of OAs based on variational mode decomposition (VMD) and turning point count, and demonstrates that it outperforms several existing OA removal techniques at removing OAs from single-channel EEG signals.
Abstract: Removal of ocular artifacts (OAs) from the electroencephalogram (EEG) signal is crucial for accurate and effective EEG analysis and brain-computer interface research. Eliminating OAs is quite challenging in the absence of a reference electro-oculogram and in single-channel EEG signals using existing independent component analysis based OA removal techniques. Though a few recent OA removal techniques suppress OAs in single-channel signals significantly, they introduce distortion in the clinical features of the EEG signal during the artifact removal process. To address these issues, in this paper we propose a robust framework for the detection and removal of OAs based on variational mode decomposition (VMD) and turning point count. The proposed framework exploits the effectiveness of VMD in two stages, denoted VMD-I and VMD-II. The framework has four components: EEG signal decomposition into two modes using VMD-I; rejection of low-frequency baseline components; decomposition of the processed EEG signal into three modes using VMD-II; and rejection of the mode containing OAs based on a turning point count threshold criterion. We evaluate the effectiveness of the proposed framework using EEG signals in the presence of various ocular artifacts with different amplitudes and shapes taken from three standard databases: the Mendeley database, the MIT-BIH Polysomnographic database, and the EEG During Mental Arithmetic Tasks database. Evaluation results demonstrate that the proposed framework eliminates OAs with minimal loss of valuable clinical features in both the reconstructed EEG signal and all local rhythms. Furthermore, subjective and objective comparative analyses demonstrate that our framework outperforms several existing OA removal techniques in removing OAs from single-channel EEG signals.
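The turning-point criterion is easy to illustrate: a slow ocular transient has very few local extrema per segment, while genuine broadband EEG has many. A hypothetical sketch (the paper's actual threshold value is not reproduced here):

```python
import numpy as np

def turning_points(x):
    """Count local extrema: samples where the first difference changes sign.
    A slow ocular transient has very few; rough neural activity has many."""
    d = np.diff(x)
    return int(np.sum(d[:-1] * d[1:] < 0))

rng = np.random.default_rng(6)
t = np.linspace(0.0, 1.0, 500)
blink_like = np.exp(-((t - 0.5) ** 2) / 0.005)   # smooth single-peak transient
eeg_like = rng.standard_normal(500)              # rough broadband activity
print(turning_points(blink_like), turning_points(eeg_like))
```

A VMD mode whose turning point count falls below a tuned threshold is then treated as artifactual and rejected before reconstruction.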

Journal ArticleDOI
TL;DR: This article presents a 320 × 240 indirect time-of-flight (iToF) CMOS image sensor with on-chip motion artifact suppression and background light cancelling; it uses a pseudo-four-tap (P4-tap) demodulation method with alternate phase driving built on a conventional two-tap pixel structure with a high fill factor.
Abstract: This article presents a 320 × 240 indirect time-of-flight (iToF) CMOS image sensor (CIS) with on-chip motion artifact suppression and background light cancelling (BGLC). The proposed iToF CIS uses a backside-illuminated trident pinned photodiode (PPD) that assists charge transfer with a built-in lateral electric field for enhanced depth accuracy. To overcome the limitation of the conventional iToF CIS that exhibits motion artifact, we propose a pseudo-four-tap (P4-tap) demodulation method with alternate phase driving using a conventional two-tap pixel structure with a high fill factor of over 43%. In addition, by combining the advantages of both the P4-tap and conventional two-tap demodulation schemes, we propose hybrid depth imaging with reduced motion artifact for moving objects, while providing high-depth precision for the static background. For outdoor mobile applications of the iToF CIS, we integrated on-chip BGLC circuits to eliminate the BGL-induced depth error. A prototype chip is fabricated using a 90-nm backside illumination (BSI) CIS process. The BSI trident PPD enabled a low depth error of under 0.54% over the range of 0.75–4 m, with a modulation frequency of 100 MHz. Motion artifact was suppressed at 60 frames/s of hybrid depth imaging owing to the proposed P4-tap scheme. With the on-chip BGLC circuit, the experimental results demonstrate a 0.55% depth error at a 1-m distance, even under 120-klx BGL illumination.

Journal ArticleDOI
TL;DR: This paper proposes to extend EEMD-CCA to include an EMG array as information to aid the removal of artifacts, and recommends the use of EMG electrodes to filter components, as this is a computationally inexpensive enhancement that significantly improves performance using only a few electrodes.
Abstract: Removal of artifacts induced by muscle activity is crucial for analysis of the electroencephalogram (EEG), and continues to be a challenge in experiments where the subject may speak, change facial expressions, or move. Ensemble empirical mode decomposition with canonical correlation analysis (EEMD-CCA) has proven to be an efficient method for denoising EEG contaminated with muscle artifacts. EEMD-CCA, like the majority of algorithms, does not incorporate any statistical information about the artifact, namely, the electromyogram (EMG) recorded over the muscles actively contaminating the EEG. In this paper, we propose to extend EEMD-CCA to include an EMG array as information to aid the removal of artifacts, assessing the performance gain achieved as the number of EMG channels grows. By adaptively filtering each component resulting from CCA (recursive least squares, with the EMG array as reference), we aim to ameliorate the distortion of brain signals induced by artifacts and by the denoising methods themselves. We simulated several noise scenarios based on a linear contamination model between real and synthetic EEG and EMG signals, and varied the number of EMG channels available to the filter. Our results exhibit a substantial improvement in performance as the number of EMG electrodes increases from 2 to 16. Further increasing the number of EMG channels, up to 128, did not have a significant impact on performance. We conclude by recommending the use of EMG electrodes to filter components, as this is a computationally inexpensive enhancement that significantly improves performance using only a few electrodes.
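The recursive-least-squares step that filters against the EMG reference array can be sketched as follows. This is a generic multichannel RLS canceller applied directly to a contaminated signal rather than to individual CCA components, and the filter order, forgetting factor, and regularization are illustrative choices, not the paper's settings.

```python
import numpy as np

def rls_denoise(contaminated, reference, order=4, lam=0.999, delta=0.01):
    """Cancel the reference-correlated part of `contaminated` with RLS.

    reference: (n_channels, n_samples) EMG array used as noise reference.
    The filter output estimates the muscle artifact; the cleaned signal
    is the a-priori error, in which brain activity (uncorrelated with
    the EMG) is preserved.
    """
    n_ref, n = reference.shape
    taps = order * n_ref
    w = np.zeros(taps)
    P = np.eye(taps) / delta          # inverse-correlation estimate
    clean = np.zeros(n)
    for k in range(order - 1, n):
        # stack the last `order` samples of every EMG channel
        u = reference[:, k - order + 1:k + 1].ravel()
        g = P @ u / (lam + u @ P @ u)  # Kalman-like gain vector
        e = contaminated[k] - w @ u    # a-priori error = cleaned sample
        w = w + g * e
        P = (P - np.outer(g, u @ P)) / lam
        clean[k] = e
    return clean
```

On synthetic data where the artifact is a short FIR mixture of the EMG channels, the residual artifact power after convergence drops by well over an order of magnitude while an uncorrelated slow "EEG" component passes through largely intact.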

Journal ArticleDOI
TL;DR: A system to detect a subject's sympathetic reaction to unexpected or challenging events during a car drive, using the ECG and EDA signals and a slightly invasive setup, demonstrating the possibility of classifying the emotional state of the driver.
Abstract: Objective: In this paper, we propose a system to detect a subject's sympathetic reaction, which is related to unexpected or challenging events during a car drive. Methods: We use the electrocardiogram (ECG) signal and the skin potential response (SPR) signal, which has several advantages with respect to other electrodermal activity (EDA) signals. We record one SPR signal for each hand and use an algorithm that, by selecting the smoother signal, is able to remove motion artifacts. We extract statistical features from the ECG and SPR signals in order to classify signal segments and identify the presence or absence of emotional events via a supervised learning algorithm. The experiments were carried out at a company that specializes in driving simulator equipment, using a motorized platform and a driving simulator. Different subjects were tested with this setup, with different challenging events happening at predetermined locations on the track. Results: We obtain an accuracy as high as 79.10% for signal blocks and as high as 91.27% for events. Conclusion: The results demonstrate the good performance of the presented system in detecting sympathetic reactions, and the effectiveness of the motion artifact removal procedure. Significance: Our work demonstrates the possibility of classifying the emotional state of the driver using the ECG and EDA signals and a slightly invasive setup. In particular, the proposed use of SPR and of the motion artifact removal procedure is crucial for the effectiveness of the system.
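The two-hand SPR idea, keeping whichever channel is locally smoother, can be sketched with a mean-squared-first-difference roughness measure: motion artifacts raise local roughness, so the less-disturbed hand is chosen window by window. The window length and the roughness criterion below are assumptions; the paper does not fully specify its selection rule in this abstract.

```python
import numpy as np

def select_smoother_spr(spr_a, spr_b, fs, win_s=1.0):
    """Merge two SPR channels by keeping, per window, the smoother one.

    Roughness is the mean squared first difference within the window;
    a motion artifact on one hand makes that channel rougher locally,
    so the other hand's signal is used for that window.
    """
    n = max(1, int(fs * win_s))
    out = np.empty_like(spr_a, dtype=float)
    for i in range(0, len(spr_a), n):
        a, b = spr_a[i:i + n], spr_b[i:i + n]
        rough_a = np.mean(np.diff(a) ** 2) if len(a) > 1 else np.inf
        rough_b = np.mean(np.diff(b) ** 2) if len(b) > 1 else np.inf
        out[i:i + n] = a if rough_a <= rough_b else b
    return out
```

With a slow sympathetic response on both hands and an artifact burst on only one hand at a time, the merged output tracks the clean waveform throughout.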

Journal ArticleDOI
TL;DR: The results show that the footprint can provide a detailed assessment of gait-related artifacts and can be used to estimate the sensitivity of different artifact reduction strategies; the analysis of button-press ERPs demonstrated its specificity, as processing not only reduced gait-related artifacts but also left the ERPs of interest largely unchanged.
Abstract: Brain activity during natural walking outdoors can be captured using mobile electroencephalography (EEG). However, EEG recorded during gait is confounded by artifacts from various sources, possibly obstructing the interpretation of brain activity patterns. Currently, there is no consensus on how the amount of artifact present in these recordings should be quantified, nor is there a systematic description of gait-artifact properties. In the current study, we expand several features into a seven-dimensional footprint of gait-related artifacts, combining features of the time, time-frequency, spatial, and source domains. EEG of N = 26 participants was recorded while standing and walking outdoors. Footprints of gait-related artifacts before and after two different artifact attenuation strategies (after artifact subspace reconstruction [ASR] and after subsequent independent component analysis [ICA]) were systematically different. We also evaluated topographies, morphologies, and signal-to-noise ratios (SNR) of button-press event-related potentials (ERPs) before and after artifact handling, to confirm the specificity of gait-artifact reduction. Morphologies and SNR remained unchanged after artifact attenuation, whereas topographies improved in quality. Our results show that the footprint can provide a detailed assessment of gait-related artifacts and can be used to estimate the sensitivity of different artifact reduction strategies. Moreover, the analysis of button-press ERPs demonstrated its specificity, as processing not only reduced gait-related artifacts but also left the ERPs of interest largely unchanged. We conclude that the proposed footprint is well suited to characterize individual differences in the extent of gait-related artifacts. In the future, it could be used to comprehensively compare and optimize recording setups and processing pipelines.
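One ingredient of the evaluation, the ERP signal-to-noise ratio, can be sketched as the power of the trial-averaged ERP in a post-stimulus window relative to its pre-stimulus baseline. This is one common definition, not necessarily the exact one used in the study, and the window indices are the caller's choice.

```python
import numpy as np

def erp_snr(epochs, baseline_idx, signal_idx):
    """Crude ERP signal-to-noise ratio in dB from epoched EEG.

    epochs: (n_trials, n_samples) for one channel. The ERP is the
    trial average; residual noise is estimated from the pre-stimulus
    baseline of that same average, where no evoked activity is expected.
    """
    erp = epochs.mean(axis=0)
    signal_power = np.mean(erp[signal_idx] ** 2)
    noise_power = np.mean(erp[baseline_idx] ** 2)
    return 10.0 * np.log10(signal_power / noise_power)
```

Because averaging suppresses trial-to-trial noise by 1/sqrt(n_trials), a clear evoked component in the post-stimulus window yields a strongly positive SNR even when single trials are noise-dominated.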

Posted ContentDOI
30 Jan 2020-bioRxiv
TL;DR: The Maryland Analysis of Developmental EEG (MADE) pipeline is developed as an automated preprocessing pipeline compatible with EEG data recorded with different hardware systems, from different populations, with varying levels of artifact contamination and recording lengths.
Abstract: Compared to adult EEG, EEG signals recorded from pediatric populations have shorter recording periods and contain more artifact contamination. Therefore, pediatric EEG data necessitate specific preprocessing approaches in order to remove environmental noise and physiological artifacts without losing large amounts of data. However, there is presently a scarcity of standard automated preprocessing pipelines suitable for pediatric EEG. In an effort to achieve greater standardization of EEG preprocessing, and in particular for the analysis of pediatric data, we developed the Maryland Analysis of Developmental EEG (MADE) pipeline as an automated preprocessing pipeline compatible with EEG data recorded with different hardware systems, from different populations, with varying levels of artifact contamination and recording lengths. MADE uses EEGLAB and functions from some EEGLAB plugins, and includes additional customizable features particularly useful for EEG data collected from pediatric populations. MADE processes event-related and resting-state EEG from raw data files through a series of preprocessing steps and outputs processed clean data ready to be analyzed in the time, frequency, or time-frequency domain. At the end of preprocessing, MADE provides a report file that describes a variety of features of the processed data, to facilitate assessment of data quality. In this paper, we discuss some practical issues that are specifically relevant to pediatric EEG preprocessing, and we provide custom-written scripts to address them. MADE is freely available under the terms of the GNU General Public License at https://github.com/ChildDevLab/MADE-EEG-preprocessing-pipeline.
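MADE itself is implemented in MATLAB on top of EEGLAB. Purely as an illustration of the filter → epoch → threshold-rejection → quality-report sequence such a pipeline automates, here is a minimal Python sketch; the band edges, epoch length, rejection threshold, and report fields are placeholders, not MADE's actual defaults.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_report(raw, fs, events, epoch_s=1.0, reject_uv=100.0):
    """Toy automated EEG preprocessing with a quality report.

    raw: (n_channels, n_samples) array in microvolts.
    events: sample indices at which epochs start.
    Returns the retained epochs and a dict summarizing rejection,
    mirroring (very loosely) the report file a pipeline like MADE emits.
    """
    # zero-phase band-pass filter (placeholder 0.3-40 Hz band)
    b, a = butter(4, [0.3, 40.0], btype="band", fs=fs)
    filt = filtfilt(b, a, raw, axis=1)

    # cut fixed-length epochs at each event
    n = int(epoch_s * fs)
    epochs = np.stack([filt[:, e:e + n] for e in events
                       if e + n <= filt.shape[1]])

    # simple absolute-voltage artifact rejection, any channel
    keep = np.abs(epochs).max(axis=(1, 2)) <= reject_uv
    report = {
        "n_epochs": int(len(epochs)),
        "n_rejected": int((~keep).sum()),
        "percent_rejected": float(100.0 * (~keep).mean()),
    }
    return epochs[keep], report
```

Feeding in low-amplitude noise with one large-amplitude burst, the burst epoch is flagged and the report's counts stay consistent with the returned data.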