
Showing papers on "Artifact (error) published in 2022"


Journal ArticleDOI
TL;DR: Transfer learning (TL), as discussed by the authors, utilizes data or knowledge from similar or relevant subjects/sessions/devices/tasks to facilitate learning for a new subject/session/device/task, and is frequently used to reduce the amount of calibration effort.
Abstract: A brain–computer interface (BCI) enables a user to communicate with a computer directly using brain signals. The most common noninvasive BCI modality, electroencephalogram (EEG), is sensitive to noise/artifact and suffers from between-subject/within-subject nonstationarity. Therefore, it is difficult to build a generic pattern recognition model in an EEG-based BCI system that is optimal for different subjects, during different sessions, for different devices and tasks. Usually, a calibration session is needed to collect some training data for a new subject, which is time consuming and user unfriendly. Transfer learning (TL), which utilizes data or knowledge from similar or relevant subjects/sessions/devices/tasks to facilitate learning for a new subject/session/device/task, is frequently used to reduce the amount of calibration effort. This article reviews journal publications on TL approaches in EEG-based BCIs in the last few years, i.e., since 2016. Six paradigms and applications—motor imagery, event-related potentials, steady-state visual evoked potentials, affective BCIs, regression problems, and adversarial attacks—are considered. For each paradigm/application, we group the TL approaches into cross-subject/session, cross-device, and cross-task settings and review them separately. Observations and conclusions are made at the end of the article, which may point to future research directions.

54 citations


Journal ArticleDOI
TL;DR: MicroFluID, as discussed by the authors, is a novel RFID artifact based on a multiple-chip structure and microfluidic switches, which informs the input state by directly reading variable ID information instead of retrieving primitive signals.
Abstract: RFID has been widely used for activity and gesture recognition in emerging interaction paradigms given its low cost, lightweight, and pervasiveness. However, current learning-based approaches to RFID sensing require significant effort in data collection, feature extraction, and model training. To save data processing effort, we present MicroFluID, a novel RFID artifact based on a multiple-chip structure and microfluidic switches, which informs the input state by directly reading variable ID information instead of retrieving primitive signals. Fabricated on flexible substrates, four types of microfluidic switch circuits are designed to respond to external physical events, including pressure, bend, temperature, and gravity. By default, chips are disconnected from the circuit owing to reserved gaps in the transmission line. When an external input or status change occurs, conductive liquid in the microfluidic channels fills the gap(s), connecting the corresponding chip(s). In prototyping the device, we conducted a series of simulations and experiments to explore the feasibility of the multi-chip tag design, key fabrication parameters, interaction performance, and users' perceptions.

44 citations


Proceedings ArticleDOI
01 May 2022
TL;DR: This talk will present CloudMine, one of Microsoft's main data mining platforms serving data sets and dependency graphs of more than 270 different engineering artifacts gathered on an hourly basis, and highlight the benefits and opportunities a data mining framework like CloudMine provides the company.
Abstract: Like any other US software maker, Microsoft is bound by the “Executive Order on Improving the Nation's Cybersecurity” [2], which dictates a clear mandate to “enhance the software supply chain security” and to generally improve cybersecurity practices. However, this is much easier to write down than to enforce. The executive order imposes new rules and requirements that will impact engineering practices and evidence collection for most projects and engineering teams in a relatively short period of time. Part of the response is the requirement to build up comprehensive inventories of software artifacts contributing to US government systems, a massive task that, done manually, would be tedious and fragile as software ecosystems change rapidly. What is required is a system that constantly monitors and updates the inventory of software artifacts and contributors so that, at any given point in time, the scope and involved teams for any software security incident can be identified, notified, and response plans activated. The front line of this security battle includes data mining platforms providing the security and compliance teams with engineering artifacts and insights into artifact dependencies and the engineering practices of the corresponding engineering teams. The data provided not only allows Microsoft to build an accurate engineering artifact inventory, but also enables Microsoft's teams to initiate so-called “get-clean” initiatives to start issue remediation before proper policy tools and pipelines (“stay-clean”) can be developed, tested, and deployed. In this talk we will present CloudMine, one of Microsoft's main data mining platforms, serving data sets and dependency graphs of more than 270 different engineering artifacts (e.g., builds, releases, commits, pull requests, etc.) gathered on an hourly basis. During the talk we will provide some insights into CloudMine, its engineering team, and its operational costs, which are significant.
We will then highlight the benefits and opportunities a data mining framework like CloudMine provides the company, including insights into how inventory and automation bots use CloudMine data to impact thousands of Microsoft engineers daily, saving the company significant costs and response time to security incidents: the ability to scan more than 100,000 code repositories across the enterprise within hours; building an engineering artifact inventory that enables us to flag any known security vulnerability in any software component within hours; or spotting non-compliant build and release pipelines across Microsoft's 500,000 pipelines. In addition, we will present open challenges the CloudMine engineering team faces while operating and growing CloudMine as a platform, which will hopefully motivate and inspire researchers and other companies to start a dialog with us about these challenges and the latest research results that may help solve them. From the talk it should become clear that running enterprise-scale systems is not cheap, but it is worth the effort, as it enables Microsoft and its engineering teams to respond to current cybersecurity threats even before best-in-class built-in defense systems can be built and tested.

38 citations


Journal ArticleDOI
TL;DR: In this article, a simple anatomical artifact model based upon known anatomical variations was introduced to help distinguish these artifacts from actual glaucomatous damage; the model helps account for the success of an AI deep learning model on the retinal nerve fiber layer (RNFL) p-map.

25 citations


Journal ArticleDOI
05 Jan 2022-Sensors
TL;DR: A novel multi-stage EEG denoising method is proposed in which, for the first time, wavelet packet decomposition (WPD) is combined with a modified non-local means (NLM) algorithm; results indicate that the proposed approach is better in terms of quality of reconstruction and is fully automatic.
Abstract: Electroencephalogram (EEG) signals may get easily contaminated by muscle artifacts, which may lead to wrong interpretation in the brain–computer interface (BCI) system as well as in various medical diagnoses. The main objective of this paper is to remove muscle artifacts without distorting the information contained in the EEG. A novel multi-stage EEG denoising method is proposed in which, for the first time, wavelet packet decomposition (WPD) is combined with a modified non-local means (NLM) algorithm. At first, the artifact-contaminated EEG signal is identified through a pre-trained classifier. Next, the identified EEG signal is decomposed into wavelet coefficients and corrected through a modified NLM filter. Finally, the artifact-free EEG is reconstructed from the corrected wavelet coefficients through inverse WPD. To optimize the filter parameters, two meta-heuristic algorithms are used in this paper for the first time. The proposed system is first validated on simulated EEG data and then tested on real EEG data. The proposed approach achieved an average mutual information (MI) of 2.9684 ± 0.7045 on real EEG data. The results reveal that the proposed system outperforms recently developed denoising techniques with higher average MI, indicating that the proposed approach is better in terms of quality of reconstruction and is fully automatic.
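The multi-stage idea above (decompose, filter each coefficient band, reconstruct) is compact enough to sketch. The snippet below is a hedged illustration, not the authors' implementation: it substitutes a single-level Haar transform for the paper's wavelet packet decomposition and uses a simplified 1-D non-local means filter whose patch, search, and bandwidth parameters are arbitrary choices.

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar wavelet transform: approximation + detail bands.
    Expects an even-length signal."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_idwt(a, d):
    """Exact inverse of haar_dwt."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def nlm_1d(x, patch=5, search=21, h=1.0):
    """Simplified 1-D non-local means: each sample becomes a weighted
    average of nearby samples whose surrounding patches look similar."""
    n, half_p, half_s = len(x), patch // 2, search // 2
    pad = np.pad(x, half_p, mode="reflect")
    patches = np.stack([pad[i:i + patch] for i in range(n)])
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - half_s), min(n, i + half_s + 1)
        d2 = np.sum((patches[lo:hi] - patches[i]) ** 2, axis=1)
        w = np.exp(-d2 / (h * h * patch))
        out[i] = np.dot(w, x[lo:hi]) / w.sum()
    return out

def denoise_eeg(noisy):
    """Decompose -> NLM-filter each band -> reconstruct."""
    a, d = haar_dwt(noisy)
    return haar_idwt(nlm_1d(a), nlm_1d(d))
```

On a synthetic 5 Hz rhythm plus white noise, this reduces the residual error relative to the noisy input, which is the qualitative behaviour the paper quantifies with mutual information.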

24 citations


Proceedings ArticleDOI
10 May 2022
TL;DR: An application of sensemaking in organizations is presented as a template for discussing design guidelines for sensible AI: AI that factors in the nuances of human cognition when trying to explain itself.
Abstract: Understanding how ML models work is a prerequisite for responsibly designing, deploying, and using ML-based systems. With interpretability approaches, ML can now offer explanations for its outputs to aid human understanding. Though these approaches rely on guidelines for how humans explain things to each other, they ultimately solve for improving the artifact—an explanation. In this paper, we propose an alternate framework for interpretability grounded in Weick’s sensemaking theory, which focuses on who the explanation is intended for. Recent work has advocated for the importance of understanding stakeholders’ needs—we build on this by providing concrete properties (e.g., identity, social context, environmental cues, etc.) that shape human understanding. We use an application of sensemaking in organizations as a template for discussing design guidelines for sensible AI, AI that factors in the nuances of human cognition when trying to explain itself.

23 citations


Journal ArticleDOI
TL;DR: In this article, a scalable quantification of artifactual and poisoned classes, in which the machine learning models under study exhibit Clever Hans behavior, is proposed; several approaches, collectively termed Class Artifact Compensation, are able to effectively reduce a model's Clever Hans behavior.

23 citations




Journal ArticleDOI
TL;DR: Experimental values of the different sets of performance assessment metrics reveal that the proposed algorithm outperforms the existing state-of-the-art EEG signal denoising algorithms for the suppression of motion artifacts from EEG signals.
Abstract: Motion artifacts are among the most challenging non-physiological noise sources present in biomedical signals and can hinder the true performance of EEG-based neuro-engineering applications. Motion artifact removal from EEG signals is therefore a prominent research topic. To address this issue, a hybrid signal denoising framework is proposed that combines modified empirical mode decomposition (EMD) with an optimized Laplacian of Gaussian (LoG) filter for the suppression of motion artifacts from EEG signals. The modified EMD decomposes the single-channel noisy EEG signal into an optimal number of intrinsic mode functions (IMFs). The optimized LoG filter is then applied to the motion-artifact-contaminated EEG signal, treated as low-frequency noise likely to be present in the low-frequency IMFs. This filter smooths the signal and removes background noise or artifacts from the EEG signal. The denoised signal is reconstructed by adding the filtered output signal to the high-frequency IMFs. The robustness of the proposed method is demonstrated on simulated and real EEG data with artifacts. Experimental values across the different sets of performance assessment metrics reveal that the proposed algorithm outperforms existing state-of-the-art EEG signal denoising algorithms for the suppression of motion artifacts from EEG signals.
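The LoG stage is the easiest piece to illustrate in isolation: its response is largest at the abrupt baseline transitions that motion artifacts produce, which is what makes it a useful building block here. A minimal sketch follows; the synthetic trace, the sigma value, and the scipy-based filter are illustrative assumptions, not the authors' optimized EMD + LoG pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

rng = np.random.default_rng(1)
n = 512
t = np.arange(n)
# EEG-like trace: slow rhythm + sensor noise
eeg = 0.5 * np.sin(2 * np.pi * t / 50) + 0.1 * rng.standard_normal(n)
eeg[200:260] += 4.0  # injected motion-like baseline jump

# LoG response: second derivative of the Gaussian-smoothed trace
resp = gaussian_laplace(eeg, sigma=5.0)

# the response magnitude peaks at the artifact's onset/offset edges,
# flagging the segment a subsequent filter should target
edge = int(np.argmax(np.abs(resp)))
```

Here `edge` lands at one of the two artifact boundaries (near sample 200 or 260), because the step edges dominate the LoG response of the smooth background rhythm.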

19 citations


Journal ArticleDOI
TL;DR: DuDoDR-Net, as discussed by the authors, is a dual-domain data consistent recurrent network for SVMAR that can reconstruct an artifact-free image by recurrent image-domain and sinogram-domain restorations.

19 citations


Journal ArticleDOI
TL;DR: The proposed HDR imaging approach aggregates information from multiple LDR images with guidance from the image gradient domain, generating artifact-free images by integrating image gradient information with image context information in the pixel domain.


Posted ContentDOI
10 Mar 2022-bioRxiv
TL;DR: RELAX (the Reduction of Electroencephalographic Artifacts), an automated EEG cleaning pipeline implemented within EEGLAB that reduces all artifact types, is developed and recommended for data cleaning across EEG studies.
Abstract: Electroencephalographic (EEG) data is typically contaminated with non-neural artifacts which can confound the results of experiments. Artifact cleaning approaches are available, but often require time-consuming manual input and significant expertise. Advancements in artifact cleaning often only address a single artifact, are only compared against a small selection of pre-existing methods, and seldom assess whether a proposed advancement improves experimental outcomes. To address these issues, we developed RELAX (the Reduction of Electroencephalographic Artifacts), an automated EEG cleaning pipeline implemented within EEGLAB that reduces all artifact types. RELAX cleans continuous data using Multiple Wiener filtering (MWF) and/or wavelet enhanced independent component analysis (wICA) applied to artifacts identified by ICLabel (wICA_ICLabel). Several versions of RELAX were tested using three datasets containing a mix of cognitive and resting recordings (N = 213, 60 and 23 respectively). RELAX was compared against six commonly used EEG cleaning approaches across a wide range of artifact cleaning quality metrics, including signal-to-error and artifact-to-residue ratios, measures of remaining blink and muscle activity, and the amount of variance explained by experimental manipulations after cleaning. RELAX with MWF and wICA_ICLabel showed amongst the best performance for cleaning blink and muscle artifacts while still preserving neural signal. RELAX with wICA_ICLabel (and no MWF) may perform better at detecting the effect of experimental manipulations on alpha oscillations in working memory tasks. The pipeline is easy to implement in MATLAB and freely available on GitHub. Given its high cleaning performance, objectivity, and ease of use, we recommend RELAX for data cleaning across EEG studies.

Proceedings ArticleDOI
27 Apr 2022
TL;DR: The Logic Bonbon, as discussed by the authors, is a dessert that can hydrodynamically regulate its flavor via a fluidic logic system, an example of food-computation integration.
Abstract: In recognition of food's significant experiential pleasures, culinary practitioners and designers are increasingly exploring novel combinations of computing technologies and food. However, despite much creative endeavors, proposals and prototypes have so far largely maintained a traditional divide, treating food and technology as separate entities. In contrast, we present a “Research through Design” exploration of the notion of food as computational artifact: wherein food itself is the material of computation. We describe the Logic Bonbon, a dessert that can hydrodynamically regulate its flavor via a fluidic logic system. Through a study of experiencing the Logic Bonbon and reflection on our design practice, we offer a provisional account of how food as computational artifact can mediate new interactions through a novel approach to food-computation integration, that promotes an enriched future of Human-Food Interaction.

Journal ArticleDOI
TL;DR: In this paper, APICE, an automated pipeline for infants' continuous EEG, is proposed; it is fully automated, flexible, and modular, handling artifact detection and data preprocessing on continuous EEG data.

Posted ContentDOI
10 Mar 2022-bioRxiv
TL;DR: This companion article introduced RELAX (the Reduction of Electroencephalographic Artifacts), an automated and modular cleaning pipeline that reduces artifacts with Multiple Wiener Filtering and wavelet enhanced independent component analysis (wICA) applied to artifact components detected with ICLabel (wICA_ICLabel) (Bailey et al., 2022).
Abstract: Electroencephalography (EEG) is commonly used to examine neural activity time-locked to the presentation of a stimulus, referred to as an Event-Related Potential (ERP). However, EEG is also influenced by non-neural artifacts, which can confound ERP comparisons. Artifact cleaning can reduce artifacts, but often requires time-consuming manual decisions. Most automated cleaning methods require frequencies <1Hz to be filtered out of the data, so are not recommended for ERPs (which often contain <1Hz frequencies). In our companion article, we introduced RELAX (the Reduction of Electroencephalographic Artifacts), an automated and modular cleaning pipeline that reduces artifacts with Multiple Wiener Filtering (MWF) and/or wavelet enhanced independent component analysis (wICA) applied to artifact components detected with ICLabel (wICA_ICLabel) (Bailey et al., 2022). To evaluate the suitability of RELAX for data cleaning prior to ERP analysis, multiple versions of RELAX were compared to four commonly used EEG cleaning pipelines. Cleaning performance was compared across a range of artifact cleaning metrics and in the amount of variance in ERPs explained by different conditions in a Go-Nogo task. RELAX with MWF and wICA_ICLabel cleaned the data the most effectively and produced amongst the most dependable ERP estimates. RELAX with wICA_ICLabel only or MWF_only may detect experimental effects better for some ERP measures. Importantly, RELAX can high-pass filter data at 0.25Hz, so is applicable to analyses involving ERPs. The pipeline is easy to implement via EEGLAB in MATLAB and is freely available on GitHub. Given its performance, objectivity, and ease of use, we recommend RELAX for EEG data cleaning. The MATLAB code, the supplementary materials, and a simple instruction manual explaining how to implement the RELAX pipeline can be downloaded from https://github.com/NeilwBailey/RELAX/releases. 
A condition of use of the pipeline is that the version of the pipeline used is referred to as RELAX_[pipeline], for example “RELAX_MWF_wICA” or “RELAX_wICA_ICLabel”, and that the current paper be cited, as well as the dependencies used. These dependencies are likely to include: EEGLAB (Delorme & Makeig, 2004), FieldTrip (Oostenveld et al., 2011), the MWF toolbox (Somers et al., 2019), fastICA (Hyvarinen, 1999), wICA (Castellanos & Makarov, 2006), ICLabel (Pion-Tonachini et al., 2019), and PREP (Bigdely-Shamlo et al., 2015). See our companion article for the application of RELAX to the study of oscillatory power (Bailey et al., 2022).

Journal ArticleDOI
TL;DR: In this paper, the authors propose EEGANet, a framework based on generative adversarial networks (GANs), as a data-driven assistive tool for ocular artifact removal that can be applied calibration-free, without relying on EOG channels or eye-blink detection algorithms.
Abstract: The elimination of ocular artifacts is critical in analyzing electroencephalography (EEG) data for various brain-computer interface (BCI) applications. Despite numerous promising solutions, electrooculography (EOG) recording or an eye-blink detection algorithm is required for the majority of artifact removal algorithms. This reliance can hinder the model's implementation in real-world applications. This paper proposes EEGANet, a framework based on generative adversarial networks (GANs), to address this issue as a data-driven assistive tool for ocular artifacts removal (source code is available at https://github.com/IoBT-VISTEC/EEGANet). After the model was trained, the removal of ocular artifacts could be applied calibration-free without relying on the EOG channels or the eye blink detection algorithms. First, we tested EEGANet's ability to generate multi-channel EEG signals, artifacts removal performance, and robustness using the EEG eye artifact dataset, which contains a significant degree of data fluctuation. According to the results, EEGANet is comparable to state-of-the-art approaches that utilize EOG channels for artifact removal. Moreover, we demonstrated the effectiveness of EEGANet in BCI applications utilizing two distinct datasets under inter-day and subject-independent schemes. Despite the absence of EOG signals, the classification performance of the signals processed by EEGANet is equivalent to that of traditional baseline methods. This study demonstrates the potential for further use of GANs as a data-driven artifact removal technique for any multivariate time-series bio-signal, which might be a valuable step towards building next-generation healthcare technology.

Journal ArticleDOI
TL;DR: The PCCT provided excellent image contrast and low-noise profiles for the differentiation of the grey and white matter, and only the artifacts below the calvarium and in the posterior fossa still underperform, which is attributable to the lack of an artifact reduction algorithm in image postprocessing.
Abstract: In 2021, the first clinical photon-counting CT (PCCT) was introduced. The purpose of this study is to evaluate the image quality of polyenergetic and virtual monoenergetic reconstructions in unenhanced PCCTs of the head. A total of 49 consecutive patients with unenhanced PCCTs of the head were retrospectively included. The signals ± standard deviations of the gray and white matter were measured at three different locations in axial slices, and a measure of the artifacts below the cranial calvaria and in the posterior fossa between the petrous bones was also obtained. The signal-to-noise ratios (SNRs) and contrast-to-noise ratios (CNRs) were calculated for all reconstructions. In terms of the SNRs and CNRs, the polyenergetic reconstruction is superior to all virtual monoenergetic reconstructions (p < 0.001). In the MERs, the highest SNR is found in the 70 keV MER, and the highest CNR is in the 65 keV MER. In terms of artifacts below the cranial calvaria and in the posterior fossa, certain MERs are superior to polyenergetic reconstruction (p < 0.001). The PCCT provided excellent image contrast and low-noise profiles for the differentiation of the grey and white matter. Only the artifacts below the calvarium and in the posterior fossa still underperform, which is attributable to the lack of an artifact reduction algorithm in image postprocessing. It is conceivable that the usual improvements in image postprocessing, especially with regard to glaring artifacts, will lead to further improvements in image quality.
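The SNR and CNR figures of merit used in this study follow the standard ROI-based definitions (mean signal over noise, and absolute contrast over pooled noise). A minimal sketch of how such metrics are computed; the grey/white matter HU values and noise level in the example are assumptions for illustration, not the study's measurements.

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of a region of interest (mean / std)."""
    return np.mean(roi) / np.std(roi)

def cnr(roi_a, roi_b):
    """Contrast-to-noise ratio between two tissue ROIs,
    pooling the two noise estimates."""
    noise = np.sqrt((np.var(roi_a) + np.var(roi_b)) / 2.0)
    return abs(np.mean(roi_a) - np.mean(roi_b)) / noise
```

With roughly 40 HU grey matter, 30 HU white matter, and 3 HU image noise, the grey-white CNR comes out near 10/3, illustrating why lowering noise (as the polyenergetic reconstruction does) directly raises both metrics.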

Journal ArticleDOI
TL;DR: In this paper, the authors classify the currently available TMS-EEG artifact removal methods into spatial and temporal filters based on their properties and introduce beamforming as a unified framework for the most popular spatial filtering techniques.

Journal ArticleDOI
TL;DR: Recommendations for the use of each method depending on the intensity of the movement in EEG data during locomotion are provided and the advantages and disadvantages of the methods are highlighted.
Abstract: Objective: Electroencephalography (EEG) is a non-invasive technique used to record cortical neurons’ electrical activity using electrodes placed on the scalp. It has become a promising avenue for research beyond state-of-the-art EEG research that is conducted under static conditions. EEG signals are always contaminated by artifacts and other physiological signals. Artifact contamination increases with the intensity of movement. Approach: In the last decade (since 2010), researchers have started to implement EEG measurements in dynamic setups to increase the overall ecological validity of the studies. Many different methods are used to remove non-brain activity from the EEG signal, and there are no clear guidelines on which method should be used in dynamic setups and for specific movement intensities. Main results: Currently, the most common methods for removing artifacts in movement studies are methods based on independent component analysis. However, the choice of method for artifact removal depends on the type and intensity of movement, which affects the characteristics of the artifacts and the EEG parameters of interest. When dealing with EEG under non-static conditions, special care must be taken already in the designing period of an experiment. Software and hardware solutions must be combined to achieve sufficient removal of unwanted signals from EEG measurements. Significance: We have provided recommendations for the use of each method depending on the intensity of the movement and highlighted the advantages and disadvantages of the methods. However, due to the current gap in the literature, further development and evaluation of methods for artifact removal in EEG data during locomotion is needed.
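As the review notes, ICA-based pipelines dominate current practice: decompose the multi-channel recording into independent components, identify the artifact components, zero them, and back-project. A compact sketch using scikit-learn's FastICA on synthetic data; the mixing matrix, source shapes, and the kurtosis-based selection of the "blink" component are illustrative assumptions, not a recommendation from the review.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n = 2000
t = np.linspace(0.0, 8.0, n)

# three sources: fast "neural" rhythm, slow square wave, sparse blinks
neural = np.sin(2 * np.pi * 10 * t)
slow = np.sign(np.sin(2 * np.pi * 1 * t))
blink = np.zeros(n)
blink[::250] = 8.0

S = np.c_[neural, slow, blink]
A = np.array([[1.0, 0.5, 0.8],
              [0.7, 1.0, 0.6],
              [0.5, 0.8, 1.0]])      # hypothetical mixing matrix
X = S @ A.T                          # three contaminated "channels"

ica = FastICA(n_components=3, random_state=0, max_iter=1000)
comps = ica.fit_transform(X)         # (n_samples, n_components)

# blinks are sparse and spiky, so their component is by far the most
# super-Gaussian: pick it by kurtosis and zero it out
kurt = np.mean((comps - comps.mean(0)) ** 4, axis=0) / np.var(comps, axis=0) ** 2
comps[:, np.argmax(kurt)] = 0.0
X_clean = ica.inverse_transform(comps)
```

After back-projection, the channels retain the rhythmic sources but are largely decorrelated from the blink train, which is the behaviour real pipelines verify before and after cleaning.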

Proceedings ArticleDOI
01 Jun 2022
TL;DR: Liu et al. propose a locally discriminative learning (LDL) method that discriminates between GAN-generated artifacts and realistic details and generates an artifact map to regularize and stabilize the model training process.
Abstract: Single image super-resolution (SISR) with generative adversarial networks (GAN) has recently attracted increasing attention due to its potentials to generate rich details. However, the training of GAN is unstable, and it often introduces many perceptually unpleasant artifacts along with the generated details. In this paper, we demonstrate that it is possible to train a GAN-based SISR model which can stably generate perceptually realistic details while inhibiting visual artifacts. Based on the observation that the local statistics (e.g., residual variance) of artifact areas are often different from the areas of perceptually friendly details, we develop a framework to discriminate between GAN-generated artifacts and realistic details, and consequently generate an artifact map to regularize and stabilize the model training process. Our proposed locally discriminative learning (LDL) method is simple yet effective, which can be easily plugged in off-the-shelf SISR methods and boost their performance. Experiments demonstrate that LDL outperforms the state-of-the-art GAN based SISR methods, achieving not only higher reconstruction accuracy but also superior perceptual quality on both synthetic and real-world datasets. Codes and models are available at https://github.com/csjliang/LDL.
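The paper's key observation, that artifact areas have distinctive local residual statistics, can be reduced to a few lines: estimate the local variance of a residual and squash it into a soft artifact weight. The window size, the normalisation constant, and the L1 pairing below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def artifact_map(residual, win=7, alpha=1.0):
    """Local residual variance -> soft artifact weight in [0, 1).
    High local variance marks unstable, artifact-prone regions."""
    mean = uniform_filter(residual, size=win)
    var = uniform_filter(residual ** 2, size=win) - mean ** 2
    var = np.clip(var, 0.0, None)   # guard tiny negative values
    return var / (var + alpha)      # soft normalisation

def weighted_penalty(sr, ref, win=7):
    """Artifact-weighted L1 penalty standing in for LDL's regulariser:
    errors in high-variance regions are penalised more heavily."""
    res = sr - ref
    return float(np.mean(artifact_map(res, win) * np.abs(res)))
```

On a residual that is zero in one half of the image and noisy in the other, the weight map concentrates almost entirely on the noisy half, which is the mechanism that lets the regulariser suppress artifacts without blurring faithful detail.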

Journal ArticleDOI
TL;DR: Wang et al. adopt key-route main path analysis (MPA) to track the most significant development trajectory across 2855 digital-divide-related academic articles, and multiple global MPA to explore present and future research trends.

Journal ArticleDOI
TL;DR: Motion artifacts (MA) hinder accurate analysis of electrodermal activity (EDA) signals; in this article, a machine learning framework for automatic MA detection on EDA signals is presented.

Journal ArticleDOI
TL;DR: In this paper, three sources of artifact (and potential mitigations) in local field potential (LFP) signals collected by the Medtronic "Percept" device are investigated, along with the potential impact of artifact on the future development of adaptive DBS using this device.
Abstract: Background: The Medtronic “Percept” is the first FDA-approved deep brain stimulation (DBS) device with sensing capabilities during active stimulation. Its real-world signal-recording properties have yet to be fully described. Objective: This study details three sources of artifact (and potential mitigations) in local field potential (LFP) signals collected by the Percept and assesses the potential impact of artifact on the future development of adaptive DBS (aDBS) using this device. Methods: LFP signals were collected from 7 subjects in both experimental and clinical settings. The presence of artifacts and their effect on the spectral content of neural signals were evaluated in both the stimulation ON and OFF states using three distinct offline artifact removal techniques. Results: Template subtraction successfully removed multiple sources of artifact, including (1) electrocardiogram (ECG), (2) nonphysiologic polyphasic artifacts, and (3) ramping-related artifacts seen when changing stimulation amplitudes. ECG removal from stimulation ON (at 0 mA) signals resulted in spectral shapes similar to OFF stimulation spectra (averaged difference in normalized power in theta, alpha, and beta bands ≤3.5%). ECG removal using singular value decomposition was similarly successful, though required subjective researcher input. QRS interpolation produced similar recovery of beta-band signal but resulted in residual low-frequency artifact. Conclusions: Artifacts present when stimulation is enabled notably affected the spectral properties of sensed signals using the Percept. Multiple discrete artifacts could be successfully removed offline using an automated template subtraction method. The presence of unrejected artifact likely influences online power estimates, with the potential to affect aDBS algorithm performance.
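Template subtraction, the method that removed multiple artifact sources here, is straightforward to sketch: detect the repeating artifact peaks in the contaminated recording, average the surrounding windows into a template, and subtract it at each occurrence. The peak-detection thresholds and the synthetic beta-band trace below are assumptions for illustration, not the authors' parameters.

```python
import numpy as np
from scipy.signal import find_peaks

def subtract_template(sig, fs, win_s=0.1):
    """Remove a repetitive artifact (e.g., ECG) by averaging detected
    beats into a template and subtracting it at each occurrence."""
    half = int(win_s * fs)
    peaks, _ = find_peaks(sig, height=3 * np.std(sig), distance=int(0.4 * fs))
    peaks = peaks[(peaks >= half) & (peaks < len(sig) - half)]
    if len(peaks) < 2:
        return sig.copy()  # nothing repetitive to average
    template = np.mean([sig[p - half:p + half] for p in peaks], axis=0)
    cleaned = sig.copy()
    for p in peaks:
        cleaned[p - half:p + half] -= template
    return cleaned
```

On a synthetic beta-band trace with injected ECG-like pulses, the residual after subtraction stays well below the original artifact amplitude, since every beat shares the same average shape.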

Journal ArticleDOI
TL;DR: This study aimed to introduce and validate an automated artifact detection and rejection system for clinical BSGM applications.
Abstract: Body surface gastric mapping (BSGM) is a new clinical tool for gastric motility diagnostics, providing high‐resolution data on gastric myoelectrical activity. Artifact contamination was a key challenge to reliable test interpretation in traditional electrogastrography. This study aimed to introduce and validate an automated artifact detection and rejection system for clinical BSGM applications.


Journal ArticleDOI
25 Jan 2022-Sensors
TL;DR: A new methodology is proposed by integrating the SSA with continuous wavelet transform (CWT) and the k-means clustering algorithm that removes the eye-blink artifact from the single-channel EEG signals without altering the low frequencies of the EEG signal.
Abstract: Recently, the use of portable electroencephalogram (EEG) devices to record brain signals, both in health care monitoring and in other applications such as fatigue detection in drivers, has increased due to their low cost and ease of use. However, the measured EEG signals always mix with the electrooculogram (EOG), which results from eyelid blinking or eye movements. Eye blinking/movement is an uncontrollable activity that produces a high-amplitude, slowly time-varying component mixed into the measured EEG signal. The presence of these artifacts misleads our understanding of the underlying brain state. As portable EEG devices comprise few EEG channels, or sometimes a single EEG channel, classical artifact removal techniques such as blind source separation cannot be used to remove these artifacts from a single-channel EEG signal. Hence, there is a demand for new single-channel artifact removal techniques. Singular spectrum analysis (SSA) has been widely used as a single-channel eye-blink artifact removal technique. However, while removing the artifact, SSA also removes the low-frequency components from the non-artifact region of the EEG signal. To preserve these low-frequency components, in this paper we propose a new methodology that integrates SSA with the continuous wavelet transform (CWT) and the k-means clustering algorithm, removing the eye-blink artifact from single-channel EEG signals without altering the low frequencies of the EEG signal. The proposed method is evaluated on both synthetic and real EEG signals. The results show the superiority of the proposed method over existing methods.
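The SSA core of this method fits in a short function: embed the signal in a Hankel trajectory matrix, take its SVD, and map each rank-one term back to a series by anti-diagonal averaging. Selecting which components to keep or discard is where the paper's CWT and k-means additions operate; the sketch below is plain SSA with an illustrative window length, not the proposed hybrid.

```python
import numpy as np

def ssa_components(x, L=50):
    """Plain SSA: Hankel embedding -> SVD -> one reconstructed series per
    singular triple (via anti-diagonal averaging). Summing all components
    recovers the original signal exactly."""
    n = len(x)
    K = n - L + 1
    # trajectory matrix: element (i, j) = x[i + j] (Hankel structure)
    traj = np.stack([x[i:i + L] for i in range(K)], axis=1)
    U, s, Vt = np.linalg.svd(traj, full_matrices=False)
    comps = np.empty((len(s), n))
    for k in range(len(s)):
        Xk = s[k] * np.outer(U[:, k], Vt[k])
        flipped = Xk[:, ::-1]
        # anti-diagonal i + j = d of Xk is a plain diagonal of flipped
        for d in range(n):
            comps[k, d] = np.mean(np.diag(flipped, K - 1 - d))
    return comps
```

Summing all components reconstructs the signal exactly; keeping only the leading pair recovers a noisy sinusoid's oscillation, the same grouping logic this paper refines to isolate the blink component without discarding legitimate low-frequency EEG.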

Journal ArticleDOI
06 Oct 2022-ACS Nano
TL;DR: In this paper , the authors proposed in situ forming hydrogel electrodes or electronics (ISF-HEs) that can establish highly conformal interfaces on curved biological surfaces without auxiliary adhesions.
Abstract: Conventional epidermal bioelectronics usually do not conform well to natural skin surfaces and are susceptible to motion artifact interference due to incompatible dimensions, insufficient adhesion, and imperfect compliance; they also usually require complex manufacturing and incur high costs. We propose in situ forming hydrogel electrodes or electronics (ISF-HEs) that can establish highly conformal interfaces on curved biological surfaces without auxiliary adhesives. The ISF-HEs also have favorable flexibility and soft compliance comparable to human skin (≈0.02 kPa-1), which allows them to stably maintain synchronous movement with deforming skin. Thus, the as-prepared ISF-HEs can accurately monitor large and tiny human motions with a short response time (≈180 ms), good biocompatibility, and excellent performance. The as-obtained gap-free hydrogel electrode-skin interfaces achieve ultralow interfacial impedance (≈50 kΩ), nearly an order of magnitude lower than commercial Ag|AgCl electrodes as well as other reported dry and wet electrodes, regardless of intrinsic micro-obstacles (wrinkles, hair) and skin deformation interference. Therefore, the ISF-HEs can collect high-quality electrocardiography and surface electromyography (sEMG) signals, with a high signal-to-noise ratio (SNR ≈ 32.04 dB), reduced signal crosstalk, and minimized motion artifact interference. Simultaneous monitoring of human motions and sEMG signals has also been demonstrated for general exercise status assessment, such as shooting competitions at the Olympics. The as-prepared ISF-HEs can be considered supplements to, or substitutes for, conventional electrodes in percutaneous, noninvasive monitoring of multiple physiological signals for health and exercise status.
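The SNR figure quoted above (≈32.04 dB) is a decibel ratio of signal power to noise power. A minimal sketch of one common way such a figure is computed follows; the segmentation into "active" and "baseline" windows and the toy arrays are hypothetical, since the abstract does not specify the measurement protocol.

```python
import numpy as np

def snr_db(signal_segment, noise_segment):
    """SNR in decibels: 10*log10 of mean signal power over mean noise power."""
    p_signal = np.mean(np.asarray(signal_segment, dtype=float) ** 2)
    p_noise = np.mean(np.asarray(noise_segment, dtype=float) ** 2)
    return 10.0 * np.log10(p_signal / p_noise)

# Hypothetical example: an activity burst vs. a resting baseline segment.
rng = np.random.default_rng(0)
baseline = 0.01 * rng.standard_normal(1000)
burst = np.sin(2 * np.pi * 50 * np.linspace(0, 1, 1000)) + 0.01 * rng.standard_normal(1000)
print(f"SNR: {snr_db(burst, baseline):.1f} dB")
```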

Journal ArticleDOI
TL;DR: In this article, metal artifacts (MAs) were reduced using iterative metal artifact reduction software (iMAR), a tin prefilter (Sn), and dual-energy (DE) CT protocols, and the results were compared with conventional protocols.
Abstract: With the aging population and thus rising numbers of orthopedic implants (OIs), metal artifacts (MAs) increasingly pose a problem for computed tomography (CT) examinations. In the study presented here, different MA reduction techniques (iterative metal artifact reduction software [iMAR], tin prefilter technique, and dual-energy CT [DECT]) were compared. Four human cadaver pelvises with OIs were scanned on a third-generation DECT scanner using tin prefilter (Sn), dual-energy (DE), and conventional protocols. Virtual monoenergetic CT images were generated from the DE data sets. Postprocessing of CT images was performed using iMAR. Qualitative image analysis (bony structures, MA, image noise) using a 6-point Likert scale and quantitative image analysis (contrast-to-noise ratio, standard deviation of background noise) were performed by 2 observers. Statistical testing was performed using the Friedman test with the Nemenyi test as a post hoc test. The iMAR Sn 150 kV protocol provided the best overall assessability of bony structures and the lowest subjective image noise. The iMAR DE protocol and virtual monochromatic images (VMI) ± iMAR achieved the most effective metal artifact reduction (MAR) (P < 0.05 compared with conventional protocols). Bony structures were rated worse in VMI ± iMAR (P < 0.05) than in tin prefilter protocols ± iMAR. The DE protocol ± iMAR had the lowest contrast-to-noise ratio (P < 0.05 compared with iMAR standard) and the highest image noise (P < 0.05 compared with iMAR VMI). iMAR reduced MAs very efficiently. When considering both MAR and image quality, the iMAR Sn 150 kV protocol performed best overall in CT images with OIs; however, iMAR occasionally generated new artifacts that impaired image quality. DECT/VMI reduced MAs best but suffered from a loss of resolution of fine bony structures.
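The quantitative metric above, the contrast-to-noise ratio, can be sketched as follows under one common definition (absolute difference of mean ROI values divided by the standard deviation of a background region); the ROI arrays here are hypothetical, since the study's exact ROI placement is not given in the abstract.

```python
import numpy as np

def contrast_to_noise(roi_tissue, roi_reference, roi_background):
    """CNR: |mean(tissue ROI) - mean(reference ROI)| / std(background ROI),
    one common definition used in quantitative CT image analysis."""
    contrast = abs(np.mean(roi_tissue) - np.mean(roi_reference))
    noise = np.std(np.asarray(roi_background, dtype=float))
    return contrast / noise

# Hypothetical HU values: bone ROI vs. soft-tissue ROI, plus air background.
bone = [410.0, 395.0, 402.0, 398.0]
soft_tissue = [48.0, 52.0, 50.0, 49.0]
background = [-998.0, -1002.0, -1000.0, -999.0]
print(f"CNR: {contrast_to_noise(bone, soft_tissue, background):.1f}")
```

A higher background noise (as reported for the DE protocol ± iMAR) directly lowers this ratio, which is why CNR and noise move together in the results above.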

Journal ArticleDOI
24 Feb 2022-iScience
TL;DR: The authors used simulated genomic-scale datasets and showed that recoding amino acid data improves accuracy when the model does not account for the compositional heterogeneity of the amino acid alignment, and applied their findings to three datasets addressing the root of the animal tree, where the debate centers on whether sponges or comb jellies (Ctenophora) represent the sister of all other animals.