
Showing papers on "Artifact (error)" published in 2017


Journal ArticleDOI
TL;DR: In this work, a 13-layer deep convolutional neural network (CNN) algorithm is implemented to detect normal, preictal, and seizure classes; it achieved an accuracy, specificity, and sensitivity of 88.67%, 90.00%, and 95.00%, respectively.

1,117 citations


Journal ArticleDOI
TL;DR: The causes of artifacts in EEG recordings resulting from TMS, as well as artifacts introduced during analysis (e.g. as the result of filtering over high‐frequency, large amplitude artifacts) are reviewed and methods for removing them are discussed.

233 citations


Journal ArticleDOI
TL;DR: Understanding the main principles and techniques, including their limitations, allows a considered application of these techniques in clinical practice and improves MARS imaging within a clinically feasible scan time.
Abstract: The prevalence of orthopedic metal implants is continuously rising in the aging society. Particularly the number of joint replacements is increasing. Although satisfying long-term results are encountered, patients may suffer from complaints or complications during follow-up, and often undergo magnetic resonance imaging (MRI). Yet metal implants cause severe artifacts on MRI, resulting in signal-loss, signal-pileup, geometric distortion, and failure of fat suppression. In order to allow for adequate treatment decisions, metal artifact reduction sequences (MARS) are essential for proper radiological evaluation of postoperative findings in these patients. During recent years, developments of musculoskeletal imaging have addressed this particular technical challenge of postoperative MRI around metal. Besides implant material composition, configuration and location, selection of appropriate MRI hardware, sequences, and parameters influence artifact genesis and reduction. Application of dedicated metal artifact reduction techniques including high bandwidth optimization, view angle tilting (VAT), and the multispectral imaging techniques multiacquisition variable-resonance image combination (MAVRIC) and slice-encoding for metal artifact correction (SEMAC) may significantly reduce metal-induced artifacts, although at the expense of signal-to-noise ratio and/or acquisition time. Adding advanced image acquisition techniques such as parallel imaging, partial Fourier transformation, and advanced reconstruction techniques such as compressed sensing further improves MARS imaging in a clinically feasible scan time. This review focuses on current clinically applicable MARS techniques. Understanding of the main principles and techniques including their limitations allows a considerate application of these techniques in clinical practice. Essential orthopedic metal implants and postoperative MR findings around metal are presented and highlighted with clinical examples. Level of Evidence: 4 Technical Efficacy: Stage 3 J. Magn. Reson. Imaging 2017;46:972–991.

130 citations


Journal ArticleDOI
TL;DR: Issues in the acquisition and analysis of EEG and MEG data are discussed, along with the growing methodological complexity of EEG/MEG, which makes it important to gather data that are of high quality and as artifact-free as possible.
Abstract: Electroencephalography (EEG) and magnetoencephalography (MEG) are non-invasive electrophysiological methods, which record electric potentials and magnetic fields due to electric currents in synchronously-active neurons. With MEG being more sensitive to neural activity from tangential currents and EEG being able to detect both radial and tangential sources, the two methods are complementary. Over the years, neurophysiological studies have changed considerably: high-density recordings are becoming de rigueur; there is interest in both spontaneous and evoked activity; and sophisticated artifact detection and removal methods are available. Improved head models for source estimation have also increased the precision of the current estimates, particularly for EEG and combined EEG/MEG. Because of their complementarity, more investigators are beginning to perform simultaneous EEG/MEG studies to gain more complete information about neural activity. Given the increase in methodological complexity in EEG/MEG, it is important to gather data that are of high quality and that are as artifact free as possible. Here, we discuss some issues in data acquisition and analysis of EEG and MEG data. Practical considerations for different types of EEG and MEG studies are also discussed.

120 citations


Journal ArticleDOI
TL;DR: The main advantage of the proposed method is that it provides an automatic, reliable, real-time capable, and practical tool, which avoids the need for the time-consuming manual selection of ICs during artifact removal.
Abstract: Objective. Biological and non-biological artifacts cause severe problems when dealing with electroencephalogram (EEG) recordings. Independent component analysis (ICA) is a widely used method for eliminating various artifacts from recordings. However, evaluating and classifying the calculated independent components (ICs) as artifact or EEG is not fully automated at present. Approach. In this study, we propose a new approach for automated artifact elimination, which applies machine learning algorithms to ICA-based features. Main results. We compared the performance of our classifiers with the visual classification results given by experts. The best result, with an accuracy rate of 95%, was achieved using features obtained by range filtering of the topoplots and IC power spectra combined with an artificial neural network. Significance. Compared with the existing automated solutions, our proposed method is not limited to specific types of artifacts, electrode configurations, or number of EEG channels. The main advantage of the proposed method is that it provides an automatic, reliable, real-time capable, and practical tool, which avoids the need for the time-consuming manual selection of ICs during artifact removal.
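As a hedged illustration of the pipeline this abstract describes (ICA decomposition, features derived from IC spectra, a neural-network classifier, back-projection of the retained ICs), here is a minimal sketch. Generic band-power features and scikit-learn's FastICA and MLPClassifier stand in for the paper's range-filtered topoplot/spectrum features and exact network; `eeg` and `ic_labels` are placeholders for real recordings and expert annotations.

```python
import numpy as np
from scipy.signal import welch
from sklearn.decomposition import FastICA
from sklearn.neural_network import MLPClassifier

fs = 250.0
rng = np.random.default_rng(0)
eeg = rng.standard_normal((32, 60 * int(fs)))   # channels x samples (demo data)

ica = FastICA(n_components=32, random_state=0)
ics = ica.fit_transform(eeg.T).T                # independent components x samples

def band_powers(ic, fs):
    # Welch power in the classic EEG bands as a simple per-IC feature vector
    f, pxx = welch(ic, fs=fs, nperseg=512)
    bands = [(1, 4), (4, 8), (8, 13), (13, 30), (30, 45)]
    return [pxx[(f >= lo) & (f < hi)].mean() for lo, hi in bands]

X = np.array([band_powers(ic, fs) for ic in ics])
ic_labels = rng.integers(0, 2, size=len(X))     # stand-in for expert labels

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, ic_labels)

keep = (clf.predict(X) == 0).astype(float)      # 0 = brain, 1 = artifact
cleaned = ica.inverse_transform((ics * keep[:, None]).T).T  # back-projection
```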

95 citations


Journal ArticleDOI
15 May 2017
TL;DR: The frequency of artifacts is higher and the repeatability of the measurements is lower with lower image quality; the impact of the image quality index should always be considered in OCTA-based quantitative measurements.
Abstract: To study the impact of image quality on quantitative measurements and the frequency of segmentation error with optical coherence tomography angiography (OCTA). Seventeen eyes of 10 healthy individuals were included in this study. OCTA was performed using a swept-source device (Triton, Topcon). Each subject underwent three scanning sessions 1–2 min apart; the first two scans were obtained under standard conditions, and for the third session, the image quality index was reduced through application of a topical ointment. En face OCTA images of the retinal vasculature were generated using the default segmentation for the superficial and deep retinal layer (SRL, DRL). The intraclass correlation coefficient (ICC) was used as a measure of repeatability. The frequency of segmentation error, motion artifact, banding artifact and projection artifact was also compared among the three sessions. The frequency of segmentation error and motion artifact was statistically similar between high and low image quality sessions (P = 0.707 and P = 1, respectively). However, the frequency of projection and banding artifact was higher with a lower image quality. The vessel density in the SRL was highly repeatable in the high image quality sessions (ICC = 0.8); however, the repeatability was low comparing the high and low image quality measurements (ICC = 0.3). In the DRL, the repeatability of the vessel density measurements was fair in the high quality sessions (ICC = 0.6 and ICC = 0.5, with and without automatic artifact removal, respectively) and poor comparing high and low image quality sessions (ICC = 0.3 and ICC = 0.06, with and without automatic artifact removal, respectively). The frequency of artifacts is higher and the repeatability of the measurements is lower with lower image quality. The impact of the image quality index should always be considered in OCTA-based quantitative measurements.
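The repeatability figures quoted above are intraclass correlation coefficients. As a worked illustration (the abstract does not state which ICC variant was used, so this is one reasonable choice), a two-way random-effects, absolute-agreement, single-measurement ICC(2,1) can be computed from variance components; `vessel_density` holds made-up demo values, not the study's data.

```python
import numpy as np

def icc2_1(scores):
    # ICC(2,1): two-way random effects, absolute agreement, single measurement
    n, k = scores.shape                       # subjects x sessions
    grand = scores.mean()
    ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((scores - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                   # between-subject mean square
    msc = ss_cols / (k - 1)                   # between-session mean square
    mse = ss_err / ((n - 1) * (k - 1))        # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

vessel_density = np.array([[48.2, 47.9], [45.1, 45.8], [50.3, 49.7],
                           [46.5, 46.0], [49.0, 48.4]])   # demo values
print(round(icc2_1(vessel_density), 2))
```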

91 citations


Journal ArticleDOI
TL;DR: It is shown that stimulation artifacts are not pure in-phase or anti-phase signals, but that non-linear mechanisms induce steady phase deflections relative to the stimulation current, and this model can be used in simulations to design and evaluate artifact rejection techniques.

86 citations


Journal ArticleDOI
TL;DR: A large simulation study with a realistic model of baseline wander found that the best-performing method was wavelet-based baseline cancellation; however, for medical applications, the Butterworth high-pass filter is the better choice because it is computationally cheap and almost as accurate.
Abstract: The most important ECG marker for the diagnosis of ischemia or infarction is a change in the ST segment. Baseline wander is a typical artifact that corrupts the recorded ECG and can hinder the correct diagnosis of such diseases. For the purpose of finding the best suited filter for the removal of baseline wander, the ground truth about the ST change prior to the corrupting artifact and the subsequent filtering process is needed. In order to create the desired reference, we used a large simulation study that allowed us to represent the ischemic heart at a multiscale level, from the cardiac myocyte to the surface ECG. We also created a realistic model of baseline wander to evaluate five filtering techniques commonly used in the literature. In the simulation study, we included a total of 5.5 million signals coming from 765 electrophysiological setups. We found that the best performing method was the wavelet-based baseline cancellation. However, for medical applications, the Butterworth high-pass filter is the better choice because it is computationally cheap and almost as accurate. Even though all methods modify the ST segment to some extent, they all proved to be better than leaving baseline wander unfiltered.
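For context, the Butterworth high-pass recommended in the conclusion amounts to a few lines with SciPy. This is a hedged sketch, not the study's exact configuration: the 0.5 Hz cutoff and fourth order are common choices for baseline wander removal, and a zero-phase (forward-backward) pass is used so the filter itself does not distort the ST segment.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 500.0                                    # ECG sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t)             # toy stand-in for an ECG
wander = 0.5 * np.sin(2 * np.pi * 0.15 * t)   # toy baseline wander

# 4th-order high-pass at 0.5 Hz; second-order sections for numerical stability
sos = butter(4, 0.5, btype="highpass", fs=fs, output="sos")
clean = sosfiltfilt(sos, ecg + wander)        # zero-phase filtering
```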

81 citations


Journal ArticleDOI
TL;DR: It is concluded that, even with current approaches, brain oscillations recorded during tACS can be meaningfully studied in many practical cases, although in some cases the technical limits of the stimulator are reached.

80 citations


Journal ArticleDOI
TL;DR: The PWF analysis seems to be a suitable method for PPG signal quality determination, real-time annotation, data compression, and calculation of additional pulse wave metrics such as amplitude, duration, and rise time.
Abstract: Photoplethysmography has been used in a wide range of medical devices for measuring oxygen saturation, cardiac output, assessing autonomic function, and detecting peripheral vascular disease. Artifacts can render the photoplethysmogram (PPG) useless. Thus, algorithms capable of identifying artifacts are critically important. However, the published PPG algorithms are limited in algorithm and study design. Therefore, the authors developed a novel embedded algorithm for real-time pulse waveform (PWF) segmentation and artifact detection based on a contour analysis in the time domain. This paper provides an overview about PWF and artifact classifications, presents the developed PWF analysis, and demonstrates the implementation on a 32-bit ARM core microcontroller. The PWF analysis was validated with data records from 63 subjects acquired in a sleep laboratory, ergometry laboratory, and intensive care unit in equal parts. The output of the algorithm was compared with harmonized experts’ annotations of the PPG with a total duration of 31.5 h. The algorithm achieved a beat-to-beat comparison sensitivity of 99.6%, specificity of 90.5%, precision of 98.5%, and accuracy of 98.3%. The interrater agreement expressed as Cohen's kappa coefficient was 0.927 and as F-measure was 0.990. In conclusion, the PWF analysis seems to be a suitable method for PPG signal quality determination, real-time annotation, data compression, and calculation of additional pulse wave metrics such as amplitude, duration, and rise time.
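A minimal sketch of the contour-analysis idea, under stated assumptions: SciPy's find_peaks acts as the beat detector and the plausibility thresholds are made up for illustration, not the paper's decision rules. Pulses are segmented in the time domain, per-pulse amplitude, duration, and rise time are derived, and implausible pulses are flagged as artifact.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 100.0
t = np.arange(0, 30, 1 / fs)
ppg = np.sin(2 * np.pi * 1.1 * t) ** 3        # toy PPG-like signal

# pulse feet = local minima; each pulse spans two consecutive feet
troughs, _ = find_peaks(-ppg, distance=int(0.4 * fs))

for i in range(len(troughs) - 1):
    foot, nxt = troughs[i], troughs[i + 1]
    p = foot + np.argmax(ppg[foot:nxt])       # systolic peak inside the pulse
    amplitude = ppg[p] - ppg[foot]
    duration = (nxt - foot) / fs
    rise_time = (p - foot) / fs
    # illustrative plausibility gates; real rules would be tuned per device
    is_artifact = not (0.1 < amplitude < 3.0
                       and 0.3 < duration < 2.0
                       and 0.05 < rise_time < 0.5)
```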

70 citations


Proceedings ArticleDOI
TL;DR: An interpolation method using a convolutional neural network (CNN), one of the widely used deep-learning methods, is developed to find missing projection data, and its performance is compared with that of other interpolation techniques.
Abstract: Sparse-view sampling and its associated iterative image reconstruction in computed tomography have been actively investigated. The sparse-view CT technique is a viable option for low-dose CT, particularly in cone-beam CT (CBCT) applications, with advanced iterative image reconstructions yielding varying degrees of image artifacts. One of the artifacts that may occur in sparse-view CT is the streak artifact in the reconstructed images. Another approach to sparse-view CT imaging uses interpolation methods to fill in the missing view data and reconstructs the image with an analytic reconstruction algorithm. In this study, we developed an interpolation method using a convolutional neural network (CNN), one of the widely used deep-learning methods, to find missing projection data, and compared its performance with those of other interpolation techniques.
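Since the abstract only names the approach, here is a hedged toy of what CNN-based sinogram interpolation can look like: a small convolutional network is trained to map a zero-filled sparse-view sinogram to a dense-view estimate, after which any analytic algorithm (e.g., FBP) reconstructs the image. The architecture, sizes, and training loop are illustrative, not the authors' network.

```python
import torch
import torch.nn as nn

# tiny sinogram-to-sinogram network: 1 input channel, 1 output channel
net = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, kernel_size=3, padding=1),
)

sparse_sino = torch.randn(8, 1, 180, 256)     # batch x 1 x angles x detectors
dense_sino = torch.randn(8, 1, 180, 256)      # ground-truth dense sinograms

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(10):                           # a few demo iterations
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(sparse_sino), dense_sino)
    loss.backward()
    opt.step()
# the filled-in sinogram net(sparse_sino) would then go to an FBP step
```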

Journal ArticleDOI
07 Sep 2017-PLOS ONE
TL;DR: Estimated head motion was reduced by 10–50% or more following temporal interpolation, and reductions were often visible to the naked eye; it is therefore sensible to obtain motion estimates prior to any image processing.
Abstract: Head motion can be estimated at any point of fMRI image processing. Processing steps involving temporal interpolation (e.g., slice time correction or outlier replacement) often precede motion estimation in the literature. From first principles it can be anticipated that temporal interpolation will alter head motion in a scan. Here we demonstrate this effect and its consequences in five large fMRI datasets. Estimated head motion was reduced by 10-50% or more following temporal interpolation, and reductions were often visible to the naked eye. Such reductions make the data seem to be of improved quality. Such reductions also degrade the sensitivity of analyses aimed at detecting motion-related artifact and can cause a dataset with artifact to falsely appear artifact-free. These reduced motion estimates will be particularly problematic for studies needing estimates of motion in time, such as studies of dynamics. Based on these findings, it is sensible to obtain motion estimates prior to any image processing (regardless of subsequent processing steps and the actual timing of motion correction procedures, which need not be changed). We also find that outlier replacement procedures change signals almost entirely during times of motion and therefore have notable similarities to motion-targeting censoring strategies (which withhold or replace signals entirely during times of motion).
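The motion estimates at issue are typically summarized as framewise displacement (FD) in the style of Power et al.; the paper's point is to compute such estimates from raw, pre-interpolation data. A minimal FD computation from the six rigid-body parameters follows; the choice of this particular metric is an assumption, not stated in the abstract.

```python
import numpy as np

def framewise_displacement(motion, head_radius_mm=50.0):
    # motion: frames x 6 (3 translations in mm, 3 rotations in radians);
    # rotations are converted to arc length on a 50 mm sphere, the usual
    # convention, then absolute parameter changes are summed per frame.
    d = np.diff(motion, axis=0)
    d[:, 3:] *= head_radius_mm
    return np.abs(d).sum(axis=1)              # one FD per frame transition

motion = np.random.default_rng(0).normal(scale=0.05, size=(200, 6))  # demo
fd = framewise_displacement(motion)
print(fd.mean())
```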

Proceedings ArticleDOI
01 Dec 2017
TL;DR: It is shown that the more commonly used floor or ceiling operators (but not the round operator) introduce a periodic artifact in the form of a single darker or brighter pixel — termed a dimple — in 8 × 8 pixel blocks.
Abstract: Previous forensic techniques have exploited various characteristics of JPEG compression to reveal traces of manipulation in digital images. We describe a JPEG artifact that can arise depending on the choice of the mathematical operator used to convert DCT coefficients from floating-point to integer values. We show that the more commonly used floor or ceiling operators (but not the round operator) introduce a periodic artifact in the form of a single darker or brighter pixel — which we term a dimple — in 8 × 8 pixel blocks. We describe the nature of this artifact, its prevalence in commercial cameras, and how this artifact can be quantified and used to detect a wide range of digital manipulations from content-aware fill to re-sampling, airbrushing, and compositing.
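The mechanism is easy to reproduce: mapping DCT coefficients to integers with floor biases every quantized coefficient in the same direction, while round does not, and the accumulated bias surfaces as a per-block pixel offset. A minimal demonstration on a single 8 × 8 block with a flat demo quantization step (not a real JPEG table):

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
block = rng.uniform(0, 255, (8, 8))           # one image block
q = 16                                        # flat quantization step (demo)

coeffs = dctn(block, norm="ortho")
# floor rounds every coefficient toward -inf: a systematic bias
floor_block = idctn(np.floor(coeffs / q) * q, norm="ortho")
# round is (nearly) unbiased
round_block = idctn(np.round(coeffs / q) * q, norm="ortho")

print((floor_block - block).mean())           # clearly negative
print((round_block - block).mean())           # near zero
```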

Proceedings ArticleDOI
TL;DR: A convolutional neural network was trained to identify the location of individual point targets from pre-beamformed data simulated with k-Wave to contain various medium sound speeds, target locations, and absorber sizes, demonstrating strong promise to identify point targets without requiring traditional geometry-based beamforming.
Abstract: Interventional applications of photoacoustic imaging often require visualization of point-like targets, including the circular cross sectional tips of needles and catheters or the circular cross sectional views of small cylindrical implants such as brachytherapy seeds. When these point-like targets are imaged in the presence of highly echogenic structures, the resulting photoacoustic wave creates a reflection artifact that may appear as a true signal. We propose to use machine learning principles to identify these type of noise artifacts for removal. A convolutional neural network was trained to identify the location of individual point targets from pre-beamformed data simulated with k-Wave to contain various medium sound speeds (1440-1640 m/s), target locations (5-25 mm), and absorber sizes (1-5 mm). Based on 2,412 randomly selected test images, the mean axial and lateral point location errors were 0.28 mm and 0.37 mm, respectively, which can be regarded as the average imaging system resolution for our trained network. This trained network successfully identified the location of two point targets in a single image with mean axial and lateral errors of 2.6 mm and 2.1 mm, respectively. A true signal and a corresponding reflection artifact were then simulated. The same trained network identified the location of the artifact with mean axial and lateral errors of 2.1 mm and 3.0 mm, respectively. Identified artifacts may be rejected based on wavefront shape differences. These results demonstrate strong promise to identify point targets without requiring traditional geometry-based beamforming, leading to the eventual elimination of reflection artifacts from interventional images.

Journal ArticleDOI
08 Jun 2017-Sensors
TL;DR: The objective of this study is to provide information on post-stroke dementia, particularly in VaD and stroke-related MCI patients, through spectral analysis of EEG background activities, which can help provide useful diagnostic indexes through EEG signal processing.
Abstract: Characterizing dementia is a global challenge in supporting personalized health care. The electroencephalogram (EEG) is a promising tool to support the diagnosis and evaluation of abnormalities in the human brain. EEG sensors record brain activity directly with excellent time resolution. In this study, an EEG sensor with 19 electrodes was used to record the background activities of the brains of five vascular dementia (VaD) patients, 15 stroke-related patients with mild cognitive impairment (MCI), and 15 healthy subjects during a working memory (WM) task. The objective of this study is twofold. First, it aims to enhance the recorded EEG signals using a novel technique that combines automatic independent component analysis (AICA) and wavelet transform (WT), that is, the AICA-WT technique; second, it aims to extract and investigate the spectral features that characterize the post-stroke dementia patients compared to the control subjects. The proposed AICA-WT technique is a four-stage approach. In the first stage, the independent components (ICs) were estimated. In the second stage, three-step artifact identification metrics were applied to detect the artifactual components. The components identified as artifacts were marked as critical and denoised through the discrete wavelet transform (DWT) in the third stage. In the fourth stage, the corrected ICs were reconstructed to obtain artifact-free EEG signals. The performance of the proposed AICA-WT technique was compared with those of two other techniques based on AICA and WT denoising methods using cross-correlation (XCorr) and peak signal-to-noise ratio (PSNR) (ANOVA, p < 0.05). The AICA-WT technique exhibited the best artifact removal performance. The assumption that there would be a deceleration of EEG dominant frequencies in VaD and MCI patients compared with control subjects was assessed with AICA-WT (ANOVA, p < 0.05). Therefore, this study may provide information on post-stroke dementia, particularly in VaD and stroke-related MCI patients, through spectral analysis of EEG background activities that can help to provide useful diagnostic indexes by using EEG signal processing.
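Stage 3 of the AICA-WT pipeline in miniature, as a hedged sketch: a flagged artifactual IC is denoised by soft-thresholding its wavelet detail coefficients rather than being discarded outright, then reconstructed. The wavelet family, decomposition level, and universal threshold below are common defaults, not the paper's exact settings.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
# toy independent component: oscillation plus broadband contamination
ic = np.sin(np.linspace(0, 20 * np.pi, 2048)) + 0.8 * rng.standard_normal(2048)

coeffs = pywt.wavedec(ic, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # MAD noise estimate
thresh = sigma * np.sqrt(2 * np.log(len(ic)))         # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                        for c in coeffs[1:]]          # keep approximation
denoised_ic = pywt.waverec(coeffs, "db4")             # corrected component
```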

Proceedings ArticleDOI
01 Dec 2017
TL;DR: In this paper, the performance of a deep learning algorithm, CNN-LSTM, on several channel configurations was investigated, and each configuration was designed to minimize the amount of spatial information lost compared to a standard 22-channel EEG.
Abstract: Interpretation of electroencephalogram (EEG) signals can be complicated by obfuscating artifacts. Artifact detection plays an important role in the observation and analysis of EEG signals. Spatial information contained in the placement of the electrodes can be exploited to accurately detect artifacts. However, when fewer electrodes are used, less spatial information is available, making it harder to detect artifacts. In this study, we investigate the performance of a deep learning algorithm, CNN-LSTM, on several channel configurations. Each configuration was designed to minimize the amount of spatial information lost compared to a standard 22-channel EEG. Systems using a reduced number of channels ranging from 8 to 20 achieved sensitivities between 33% and 37% with false alarms in the range of [38, 50] per 24 hours. False alarms increased dramatically (e.g., over 300 per 24 hours) when the number of channels was further reduced. Baseline performance of a system that used all 22 channels was 39% sensitivity with 23 false alarms. Since the 22-channel system was the only system that included referential channels, the rapid increase in the false alarm rate as the number of channels was reduced underscores the importance of retaining referential channels for artifact reduction. This cautionary result is important because one of the biggest differences between various types of EEGs administered is the type of referential channel used.

Journal ArticleDOI
TL;DR: This work proposed a systematic decomposition method, based on Morphological Component Analysis (MCA), that identifies the type of signal components from their sparsity in the time-frequency domain; MCA guarantees accurate reconstruction by using multiple bases in accordance with the concept of a "dictionary."
Abstract: EEG signals contain a large amount of ocular artifacts with different time-frequency properties mixing together in EEGs of interest. Artifact removal has largely been handled by existing decomposition methods, known as PCA and ICA, based on the orthogonality of signal vectors or the statistical independence of signal components. We focused on signal morphology and proposed a systematic decomposition method to identify the type of signal components on the basis of sparsity in the time-frequency domain, based on Morphological Component Analysis (MCA), which guarantees accurate reconstruction by using multiple bases in accordance with the concept of a "dictionary." MCA was applied to decompose real EEG signals and to clarify the best combination of dictionaries for this purpose. In our proposed semirealistic biological signal analysis with intracranially recorded iEEGs, those signals were successfully decomposed into original types by a linear expansion of waveforms, such as the redundant transforms UDWT, DCT, LDCT, DST, and DIRAC. Our result demonstrated that the most suitable combination for EEG data analysis was UDWT, DST, and DIRAC, representing the baseline envelope, multifrequency waveforms, and spiking activities individually as representative types of EEG morphologies.
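MCA in its simplest runnable form, as a hedged sketch: alternate thresholding in two dictionaries (DCT for oscillatory waveforms, DIRAC for spikes) while lowering the threshold, so each morphology is captured by the basis in which it is sparse. Two dictionaries stand in for the paper's five (UDWT, DCT, LDCT, DST, DIRAC), and the schedule is illustrative.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
n = 1024
oscillation = np.cos(2 * np.pi * 12 * np.arange(n) / 256)  # DCT-sparse part
spikes = np.zeros(n)
spikes[rng.choice(n, 5, replace=False)] = 4.0              # DIRAC-sparse part
signal = oscillation + spikes

parts = {"dct": np.zeros(n), "dirac": np.zeros(n)}
for thresh in np.linspace(3.0, 0.1, 30):      # slowly decreasing threshold
    # re-fit the oscillatory part to what the spike part cannot explain
    c = dct(signal - parts["dirac"], norm="ortho")
    parts["dct"] = idct(np.where(np.abs(c) > thresh, c, 0.0), norm="ortho")
    # re-fit the spike part to what the oscillatory part cannot explain
    r = signal - parts["dct"]
    parts["dirac"] = np.where(np.abs(r) > thresh, r, 0.0)
# parts["dct"] ~ oscillation, parts["dirac"] ~ spikes after the sweep
```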

Journal ArticleDOI
TL;DR: Investigation of ICA decompositions of EEG data from 32 college-aged young adults revealed that ICA uncertainty results in variation in P3 amplitude as well as variation across all EEG sampling points, and that this variation differs across ICA algorithms as a function of the spatial location of the EEG channel.
Abstract: Despite the growing use of independent component analysis (ICA) algorithms for isolating and removing eyeblink-related activity from EEG data, we have limited understanding of how variability associated with ICA uncertainty may be influencing the reconstructed EEG signal after removing the eyeblink artifact components. To characterize the magnitude of this ICA uncertainty and to understand the extent to which it may influence findings within ERP and EEG investigations, ICA decompositions of EEG data from 32 college-aged young adults were repeated 30 times for three popular ICA algorithms. Following each decomposition, eyeblink components were identified and removed. The remaining components were back-projected, and the resulting clean EEG data were further used to analyze ERPs. Findings revealed that ICA uncertainty results in variation in P3 amplitude as well as variation across all EEG sampling points, but differs across ICA algorithms as a function of the spatial location of the EEG channel. This investigation highlights the potential of ICA uncertainty to introduce additional sources of variance when the data are back-projected without artifact components. Careful selection of ICA algorithms and parameters can reduce the extent to which ICA uncertainty may introduce an additional source of variance within ERP/EEG studies.

Journal ArticleDOI
TL;DR: IVA was superior in isolating both ocular and muscle artifacts, especially for raw EEG data with a low signal-to-noise ratio, and it also integrated the usually separate second-order statistics (SOS) and higher-order statistics (HOS) steps into a single unified step.

Journal ArticleDOI
15 Sep 2017
TL;DR: This paper categorizes and analyzes existing bias-correction methods, providing a complete review that enables comparative studies of bias correction in medical images.
Abstract: The bias field in medical images is an undesirable artifact that primarily arises from an improper image acquisition process or from specific properties of the imaged object. This artifact can be characterized by a smooth variation of intensities across the image and can significantly degrade many medical image analysis techniques. Bias correction has been studied extensively over the years. In this paper, we categorize and analyze existing bias-correction methods, providing a complete review that enables comparative studies of bias correction in medical images.

Journal ArticleDOI
TL;DR: The iterative approach for ring artifact removal in cone-beam CT is practical and attractive for CBCT-guided radiation therapy, showing high efficiency in ring artifact removal while preserving image structure and detail.
Abstract: Ring artifacts in cone beam computed tomography (CBCT) images are caused by pixel gain variations in flat-panel detectors, and may lead to structured non-uniformities and deterioration of image quality. The purpose of this study is to propose a method of general ring artifact removal in CBCT images. The method is based on the polar coordinate system, where ring artifacts manifest as stripe artifacts. Using relative total variation, the CBCT images are first smoothed to generate template images with fewer image details and ring artifacts. By subtracting the template images from the CBCT images, residual images containing image details and ring artifacts are generated. As the ring artifact manifests as a stripe artifact in the polar coordinate system, the artifact image can be extracted from the residual image by taking the mean along the angular direction; the image details are obtained by subtracting the artifact image from the residual image. Finally, the image details are added back to the template image to generate the corrected images. The proposed framework is iterated until the differences in the extracted ring artifacts are minimized. We used a 3D Shepp-Logan phantom, a Catphan 504 phantom, a uniform acrylic cylinder, and images of a patient's head to evaluate the proposed method. In the experiments using simulated data, the spatial uniformity is increased by 1.68 times and the structural similarity index is increased from 87.12% to 95.50% using the proposed method. In the experiment using clinical data, our method shows high efficiency in ring artifact removal while preserving the image structure and detail. The iterative approach we propose for ring artifact removal in cone-beam CT is practical and attractive for CBCT-guided radiation therapy.
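The core polar-domain step can be sketched in a few lines, with two stated substitutions: Gaussian smoothing stands in for the paper's relative-total-variation template, and a single pass stands in for the full iteration. In polar coordinates a ring is a stripe that is constant along the angle axis, so averaging the residual over angles isolates it.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
polar = rng.normal(size=(360, 256))           # angle x radius (demo image)
polar[:, 100] += 2.0                          # synthetic "ring" = radial stripe

template = gaussian_filter(polar, sigma=(0, 5))   # smooth along the radius
residual = polar - template                       # details + stripe remain
stripe = residual.mean(axis=0, keepdims=True)     # constant over angle
corrected = polar - stripe                        # details survive, ring goes
```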

Journal ArticleDOI
TL;DR: IMAR reduced metal artifact both subjectively and objectively and improved visualization of adjacent soft tissues; the findings support the use of IMAR as a valuable complement to, but not a replacement for, standard wFBP image reconstruction.
Abstract: Background: Dental hardware produces streak artifacts on computed tomography (CT) images reconstructed with the standard weighted filtered back projection (wFBP) method. Purpose: To perform a preliminar...

Posted Content
TL;DR: In this paper, a deep learning approach with domain adaptation is proposed to restore high-resolution MR images from under-sampled k-space data, where the proposed deep network removes the streaking artifacts from the artifact corrupted images.
Abstract: Purpose: The radial k-space trajectory is a well-established sampling trajectory used in conjunction with magnetic resonance imaging. However, the radial k-space trajectory requires a large number of radial lines for high-resolution reconstruction. Increasing the number of radial lines causes longer acquisition time, making it more difficult for routine clinical use. On the other hand, if we reduce the number of radial lines, streaking artifact patterns are unavoidable. To solve this problem, we propose a novel deep learning approach with domain adaptation to restore high-resolution MR images from under-sampled k-space data. Methods: The proposed deep network removes the streaking artifacts from the artifact-corrupted images. To address the situation given the limited available data, we propose a domain adaptation scheme that employs a pre-trained network using a large number of x-ray computed tomography (CT) or synthesized radial MR datasets, which is then fine-tuned with only a few radial MR datasets. Results: The proposed method outperforms existing compressed sensing algorithms, such as the total variation and PR-FOCUSS methods. In addition, the calculation time is several orders of magnitude faster than the total variation and PR-FOCUSS methods. Moreover, we found that pre-training using CT or MR data from a similar organ is more important than pre-training using data from the same modality for a different organ. Conclusion: We demonstrate the possibility of domain adaptation when only a limited amount of MR data is available. The proposed method surpasses the existing compressed sensing algorithms in terms of image quality and computation time.
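A hedged outline of the domain-adaptation recipe described here: load weights pre-trained on plentiful CT (or synthesized radial MR) data, freeze early feature layers, and fine-tune the remainder on the few radial MR scans available. The architecture, the file name, and the data below are placeholders, not the authors' network.

```python
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 1, 3, padding=1),
)
# net.load_state_dict(torch.load("pretrained_on_ct.pt"))  # hypothetical file

for p in net[0].parameters():     # freeze the first conv: generic features
    p.requires_grad = False

opt = torch.optim.Adam(
    (p for p in net.parameters() if p.requires_grad), lr=1e-4)
mr_in = torch.randn(4, 1, 256, 256)       # few-shot radial MR inputs (demo)
mr_target = torch.randn(4, 1, 256, 256)   # fully sampled references (demo)
for _ in range(10):                       # brief fine-tuning loop
    opt.zero_grad()
    nn.functional.mse_loss(net(mr_in), mr_target).backward()
    opt.step()
```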

Journal ArticleDOI
09 Aug 2017
TL;DR: Two heuristics are proposed that offer superior spatiotemporal-frequency performance in automatic artifact removal and are able to reconstruct clean EEG signals; they are compared against state-of-the-art EEG ARTs.
Abstract: Electroencephalography (EEG) data are used to design useful indicators that act as proxies for detecting humans' mental activities. However, these electrical signals are susceptible to different forms of interference—known as artifacts—from voluntary and involuntary muscle movements that greatly obscure the information in the signal. It is pertinent to design effective artifact removal techniques (ARTs) capable of removing or reducing the impact of these artifacts. However, most ARTs have focused on handling a few specific types, or a single type, of EEG artifacts. EEG processing that generalizes to multiple types of artifacts remains a major challenge. In this paper, we investigate eight different, typical artifacts that occur in practice. We characterize the spatiotemporal-frequency influence of these EEG artifacts and offer two heuristics. The proposed heuristics extend the influential independent component analysis approach to clean the contaminated EEG signal. These heuristics are compared against four state-of-the-art EEG ARTs using both real and synthesized EEG, collected in the presence of multiple artifacts. The results show that both heuristics offer superior spatiotemporal-frequency performance in automatic artifact removal and are able to reconstruct clean EEG signals.

Posted Content
TL;DR: A computationally fast and accurate deep learning algorithm is proposed for the reconstruction of MR images from highly down-sampled k-space data; it shows minimal errors by removing the coherent aliasing artifacts.
Abstract: Purpose: Compressed sensing MRI (CS-MRI) from single and parallel coils is one of the powerful ways to reduce the scan time of MR imaging with a performance guarantee. However, the computational costs are usually expensive. This paper aims to propose a computationally fast and accurate deep learning algorithm for the reconstruction of MR images from highly down-sampled k-space data. Theory: Based on a topological analysis, we show that the data manifold of the aliasing artifact is easier to learn from a uniform subsampling pattern with additional low-frequency k-space data. Thus, we develop deep aliasing artifact learning networks for the magnitude and phase images to estimate and remove the aliasing artifacts from highly accelerated MR acquisition. Methods: The aliasing artifacts are directly estimated from the distorted magnitude and phase images reconstructed from subsampled k-space data, so that we can obtain aliasing-free images by subtracting the estimated aliasing artifact from the corrupted inputs. Moreover, to deal with the globally distributed aliasing artifact, we develop a multi-scale deep neural network with a large receptive field. Results: The experimental results confirm that the proposed deep artifact learning network effectively estimates and removes the aliasing artifacts. Compared to existing CS methods from single and multi-coil data, the proposed network shows minimal errors by removing the coherent aliasing artifacts. Furthermore, the computational time is faster by an order of magnitude. Conclusion: As the proposed deep artifact learning network immediately generates accurate reconstructions, it has great potential for clinical applications.
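The artifact-learning formulation is worth making concrete: the network predicts the aliasing artifact itself, and the clean image is obtained by subtraction. A single-scale toy stands in for the paper's multi-scale, large-receptive-field model, and the tensors are random placeholders.

```python
import torch
import torch.nn as nn

# network that outputs an artifact estimate, not the clean image
artifact_net = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)

corrupted = torch.randn(2, 1, 128, 128)       # aliased magnitude images
clean = torch.randn(2, 1, 128, 128)           # fully sampled references
true_artifact = corrupted - clean             # training target

loss = nn.functional.mse_loss(artifact_net(corrupted), true_artifact)
restored = corrupted - artifact_net(corrupted)  # aliasing-free estimate
```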

Journal ArticleDOI
TL;DR: In this paper, a novel method combining a median filter and fractional-order calculus is proposed for automatic filtering of electrocardiography artifacts from the surface electromyography signal envelopes recorded in trunk muscles.
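Since the TL;DR only names the combination, here is one plausible, explicitly hypothetical reading: a Grünwald-Letnikov fractional differintegral applied to the envelope, followed by a median filter to suppress the spiky ECG contamination. The order alpha, window length, and ordering of the two steps are assumptions, not taken from the paper.

```python
import numpy as np
from scipy.signal import medfilt

def gl_fractional_diff(x, alpha, n_terms=64):
    # Grünwald-Letnikov coefficients c_k = (-1)^k * C(alpha, k), built
    # recursively via c_k = c_{k-1} * (1 - (alpha + 1) / k)
    c = np.ones(n_terms)
    for k in range(1, n_terms):
        c[k] = c[k - 1] * (1 - (alpha + 1) / k)
    return np.convolve(x, c, mode="full")[: len(x)]

rng = np.random.default_rng(0)
envelope = np.abs(rng.standard_normal(2000)).cumsum() / 100  # toy sEMG envelope
filtered = medfilt(gl_fractional_diff(envelope, alpha=0.5), kernel_size=9)
```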

Journal ArticleDOI
TL;DR: The results demonstrate that the proposed approach outperforms the compared methods in terms of removal of OA and recovery of the underlying EEG, and that it effectively reduces ocular artifacts with a negligible time delay, making it well suited to real-time BCI.

Journal ArticleDOI
TL;DR: The proposed blind source separation-based method achieved dynamic measurement of RR and HR; its extension and revision may have potential for detecting further physiological signs, such as heart rate variability, eye blinking, nose wrinkling, yawning, and other muscular movements.
Abstract: Currently, many imaging photoplethysmography (IPPG) studies have reported non-contact measurements of physiological parameters, such as heart rate (HR) and respiratory rate (RR). However, it is accepted that only HR measurement is mature enough for applications, while the other estimates are not yet reliable; persistent study is therefore warranted. In addition, some issues commonly involved in these approaches need to be explored further, for example motion artifact attenuation, an intractable problem that researchers attempt to resolve with sophisticated video tracking and detection algorithms. This paper proposed a blind source separation-based method that can synchronously measure RR and HR in a non-contact way. A dual region of interest on the facial video image was selected to yield 6-channel Red/Green/Blue signals. By applying the Second-Order Blind Identification (SOBI) algorithm to the signals generated above, we obtained 6-channel outputs that contain the blood volume pulse (BVP) and respiratory motion artifact. We defined this motion artifact as the respiratory signal (RS). For the automatic selection of the RS and BVP among these outputs, we devised a kurtosis-based identification strategy, which makes dynamic RR and HR monitoring available. The experimental results indicated that the estimates of the proposed method perform impressively compared with measurements from commercial medical sensors. The proposed method achieved dynamic measurement of RR and HR, and its extension and revision may have potential for detecting more physiological signs, such as heart rate variability, eye blinking, nose wrinkling, yawning, and other muscular movements. Thus, it might provide a promising approach for IPPG-based applications such as emotion computation and fatigue detection.
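The kurtosis-based selection step in miniature, with two stated substitutions: scikit-learn's FastICA stands in for SOBI (which is not available in standard libraries), and synthetic pulse/respiration traces stand in for real facial video signals. The most sub-Gaussian (most periodic) sources are picked by kurtosis, then assigned to BVP vs. RS by dominant frequency.

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

fs = 30.0                                     # video frame rate
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
bvp = np.sin(2 * np.pi * 1.2 * t)             # ~72 bpm pulse
resp = np.sin(2 * np.pi * 0.25 * t)           # ~15 breaths/min
mixing = rng.uniform(0.2, 1.0, (6, 2))
channels = mixing @ np.vstack([bvp, resp])    # 6-channel R/G/B traces
channels += 0.1 * rng.standard_normal(channels.shape)

sources = FastICA(n_components=6, random_state=0).fit_transform(channels.T).T
k = kurtosis(sources, axis=1)                 # sinusoid-like sources: ~ -1.5
periodic = np.argsort(k)[:2]                  # most sub-Gaussian = periodic

f = np.fft.rfftfreq(sources.shape[1], 1 / fs)
for idx in periodic:
    peak = f[np.argmax(np.abs(np.fft.rfft(sources[idx])))]
    label = "BVP (HR)" if peak > 0.7 else "RS (RR)"
    print(idx, round(peak * 60), label)       # rate in beats/breaths per min
```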

Journal ArticleDOI
TL;DR: Temporal muscle activity, e.g., clenching the teeth, induces a large hemodynamic-like artifact in fNIRS measurements, which should be avoided through specific subject instructions; data should be screened for this artifact, which might be corrected by excluding contaminated blocks/events.
Abstract: Background: Extracranial signals are the main source of noise in functional near-infrared spectroscopy (fNIRS), as light penetrates not only the cortex but also the skin and muscles of the head. Aim: Here we performed three experiments to investigate the contamination of fNIRS measurements by temporal muscle activity. Material and methods: For experiment 1, we provoked temporal muscle activity by instructing 31 healthy subjects to clench their teeth three times. We measured fNIRS signals over left temporal and frontal channels with an interoptode distance of 3 cm, in one short optode distance (SOD) channel (1 cm), and electromyography (EMG) over the edge of the temporal muscle. In experiment 2, we screened resting-state fNIRS-fMRI (functional magnetic resonance imaging) data of one healthy subject for temporal muscle artifacts. In experiment 3, we screened a dataset of sound-evoked activity (n=33) using bi-temporal probe-sets and systematically contrasted subjects presenting vs. not presenting artifacts and blocks/events contaminated vs. not contaminated with artifacts. Results: In experiment 1, we demonstrated a hemodynamic-response-like increase in oxygenated (O2Hb) and decrease in deoxygenated (HHb) hemoglobin, with a large amplitude and spatial extent far exceeding normal cortical activity. Correlations between EMG, SOD, and fNIRS artifact activity showed only limited evidence for associations at the group level, with rather clear associations in a sub-group of subjects. The fNIRS-fMRI experiment showed that during the temporal muscle artifact, fNIRS is completely saturated by muscle oxygenation. Experiment 3 showed hints of contamination of sound-evoked oxygenation by the temporal muscle artifact, although this was of low relevance when analysing the whole sample. Discussion: Temporal muscle activity, e.g., clenching the teeth, induces a large hemodynamic-like artifact in fNIRS measurements, which should be avoided through specific subject instructions. Data should be screened for this artifact, which might be corrected by excluding contaminated blocks/events. The usefulness of established artifact correction methods should be evaluated in future studies. Conclusion: Temporal muscle activity, e.g., clenching the teeth, is one major source of noise in fNIRS measurements.

Proceedings ArticleDOI
01 Sep 2017
TL;DR: Results are promising as they show that a network trained with only simulated data can distinguish experimental sources and artifacts in photoacoustic channel data and display this information in a novel artifact-free image format.
Abstract: Photoacoustic imaging is often used to visualize point-like targets, including circular cross sections of small cylindrical implants like brachytherapy seeds as well as circular cross sections of metal needles. When imaging these pointlike targets in the presence of highly echogenic structures, the resulting image will suffer due to reflection artifacts which appear as true signals in the traditional beamformed image. We propose to use machine learning methods to identify these types of noise artifacts for removal. A deep convolutional neural network was trained to locate and classify source and reflection artifacts in photoacoustic channel data simulated in k-Wave. Simulated channel data contained one source and one artifact with varying target locations, medium sound speeds, and −3dB channel noise. In testing 3,998 simulated images, we achieved a 99.1% and 98.8% success rate in classifying sources and artifacts, respectively, while obtaining a misclassification rate below 3.1%, where a misclassification was defined as a source or artifact detected as an artifact or source, respectively. The network, which was only trained on simulated data, was then transferred to experimental data with 100% source classification accuracy and 0.40 mm mean source location accuracy. These results are promising as they show that a network trained with only simulated data can distinguish experimental sources and artifacts in photoacoustic channel data and display this information in a novel artifact-free image format.