
Showing papers on "Artifact (error) published in 2012"


Journal ArticleDOI
TL;DR: Methods for reducing noise and out-of-field artifacts may enable ultra-high resolution limited field of view imaging of tumors and other structures and result in a more accurate diagnosis.
Abstract: Artifacts are commonly encountered in clinical CT and may obscure or simulate pathology. There are many different types of CT artifacts, including noise, beam hardening, scatter, pseudoenhancement, motion, cone-beam, helical, ring and metal artifacts. We review the cause and appearance of each type of artifact, correct some popular misconceptions and describe modern techniques for artifact reduction. Noise can be reduced using iterative reconstruction or by combining data from multiple scans. This enables lower radiation dose and higher resolution scans. Metal artifacts can also be reduced using iterative reconstruction, resulting in a more accurate diagnosis. Dual- and multi-energy (photon counting) CT can reduce beam hardening and provide better tissue contrast. Methods for reducing noise and out-of-field artifacts may enable ultra-high resolution limited field of view imaging of tumors and other structures.

658 citations


Journal ArticleDOI
TL;DR: A new wavelet-based method is proposed for removing motion artifacts from fNIRS signals; it assumes a Gaussian distribution for the wavelet coefficients of the underlying signal and modifies coefficients only in detail levels adaptively selected according to their degree of motion-artifact contamination.
Abstract: Functional near-infrared spectroscopy (fNIRS) is a powerful tool for monitoring brain functional activities. Due to its non-invasive and non-restraining nature, fNIRS has found broad applications in brain functional studies. However, for fNIRS to work well, it is important to reduce its sensitivity to motion artifacts. We propose a new wavelet-based method for removing motion artifacts from fNIRS signals. The method relies on differences between artifacts and fNIRS signal in terms of duration and amplitude and is specifically designed for spike artifacts. We assume a Gaussian distribution for the wavelet coefficients corresponding to the underlying hemodynamic signal in detail levels and identify the artifact coefficients using this distribution. An input parameter controls the intensity of artifact attenuation in trade-off with the level of distortion introduced in the signal. The method only modifies wavelet coefficients in levels adaptively selected based on the degree of contamination with motion artifact. To demonstrate the feasibility of the method, we tested it on experimental fNIRS data collected from three infant subjects. Normalized mean-square error and artifact energy attenuation were used as criteria for performance evaluation. The results show 18.29 and 16.42 dB attenuation in motion artifacts energy for 700 and 830 nm wavelength signals in a total of 29 motion events with no more than −16.7 dB distortion in terms of normalized mean-square error in the artifact-free regions of the signal.

391 citations
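The coefficient-thresholding idea above can be sketched in a few lines; this is a minimal illustration, not the authors' implementation: a single-level Haar transform, a robust MAD noise estimate, and a 3-sigma rule are all assumptions here (the paper works on adaptively selected levels with a tunable attenuation parameter).

```python
import numpy as np

def haar_dwt(x):
    # single-level Haar transform (even-length input assumed)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)  # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail coefficients
    return a, d

def haar_idwt(a, d):
    # exact inverse of haar_dwt
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def suppress_spikes(signal, k=3.0):
    """Zero detail coefficients that fall outside k standard deviations,
    assuming the remaining coefficients follow a Gaussian distribution."""
    a, d = haar_dwt(signal)
    sigma = np.median(np.abs(d)) / 0.6745  # robust (MAD-based) scale estimate
    d_clean = np.where(np.abs(d) > k * sigma, 0.0, d)
    return haar_idwt(a, d_clean)
```

A single decomposition level only attenuates a spike (part of its energy stays in the approximation band); a multi-level transform, as in the paper, suppresses it further.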


Journal ArticleDOI
TL;DR: The merit of the method is clearly demonstrated using convergence and correlation analysis, making it well suited to present-day pulse oximeters that use a PPG sensor head with a single source-detector pair and no extra hardware for capturing a noise reference signal.
Abstract: The performance of pulse oximeters is highly influenced by motion artifacts (MAs) in photoplethysmographic (PPG) signals. In this paper, we propose a simple and efficient approach based on adaptive step-size least mean squares (AS-LMS) adaptive filter for reducing MA in corrupted PPG signals. The presented method is an extension to our prior work on efficient use of adaptive filters for reduction of MA in PPG signals. The novelty of the method lies in the fact that a synthetic noise reference signal for an adaptive filtering process, representing MA noise, is generated internally from the MA-corrupted PPG signal itself instead of using any additional hardware such as accelerometer or source-detector pair for acquiring noise reference signal. Thus, the generated noise reference signal is then applied to the AS-LMS adaptive filter for artifact removal. While experimental results proved the efficacy of the proposed scheme, the merit of the method is clearly demonstrated using convergence and correlation analysis, thus making it best suitable for present-day pulse oximeters utilizing PPG sensor head with a single pair of source and detector, which does not have any extra hardware meant for capturing noise reference signal. In addition to arterial oxygen saturation estimation, the artifact reduction method facilitated the waveform contour analysis on artifact-reduced PPG, and the conventional parameters were evaluated for assessing the arterial stiffness.

308 citations
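A plain LMS noise canceler, the building block that the AS-LMS method extends with an adaptive step size, can be sketched as follows. The filter order, step size, and synthetic signals are assumptions of this sketch, not values from the paper.

```python
import numpy as np

def lms_cancel(primary, reference, order=8, mu=0.005):
    """LMS adaptive noise canceler: 'primary' carries signal + noise,
    'reference' is correlated with the noise only. The error sequence
    converges to the cleaned signal."""
    n = len(primary)
    w = np.zeros(order)
    out = np.zeros(n)
    for i in range(order, n):
        x = reference[i - order:i][::-1]  # most recent reference samples first
        y = w @ x                         # current noise estimate
        e = primary[i] - y                # error = cleaned-signal sample
        w += 2 * mu * e * x               # LMS weight update
        out[i] = e
    return out
```

In the paper the reference is synthesized from the corrupted PPG itself and the step size mu is adapted over time; here a fixed mu and an externally supplied reference keep the sketch short.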


Journal ArticleDOI
TL;DR: An algorithm is proposed that uses eye tracker information to objectively identify eye-artifact-related ICA components (ICs) in an automated manner; it performed very similarly to human experts when the experts were given both the topographies of the ICs and their respective activations over a large number of trials.
Abstract: Eye movements introduce large artifacts to electroencephalographic recordings (EEG) and thus render data analysis difficult or even impossible. Trials contaminated by eye movement and blink artifacts have to be discarded, hence in standard EEG paradigms subjects are required to fixate on the screen. To overcome this restriction, several correction methods including regression and blind source separation have been proposed. Yet, there is no automated standard procedure established. By simultaneously recording eye movements and 64-channel EEG during a guided eye movement paradigm, we investigate and review the properties of eye movement artifacts, including corneo-retinal dipole changes, saccadic spike potentials and eyelid artifacts, and study their interrelations during different types of eye and eyelid movements. In concordance with earlier studies, our results confirm that these artifacts arise from different independent sources and that, depending on electrode site, gaze direction, and choice of reference, these sources contribute differently to the measured signal. We assess the respective implications for artifact correction methods and therefore compare the performance of two prominent approaches, namely linear regression and independent component analysis (ICA). We show and discuss that due to the independence of eye artifact sources, regression-based correction methods inevitably over- or under-correct individual artifact components, while ICA is in principle suited to address such mixtures of different types of artifacts. Finally, we propose an algorithm that uses eye tracker information to objectively identify eye-artifact-related ICA components (ICs) in an automated manner. In the data presented here, the algorithm performed very similarly to human experts when the experts were given both the topographies of the ICs and their respective activations over a large number of trials.
Moreover, it performed more reliably and was almost twice as effective as human experts when the experts had to base their decisions on IC topographies alone. Furthermore, a receiver operating characteristic (ROC) analysis demonstrated an optimal balance of false positives and false negatives, with an area under the curve (AUC) of more than 0.99. Removing the automatically detected ICs from the data resulted in removal or substantial suppression of ocular artifacts, including microsaccadic spike potentials, while the relevant neural signal remained unaffected. In conclusion, the present work aims at a better understanding of individual eye movement artifacts, their interrelations, and the respective implications for eye artifact correction. Additionally, the proposed ICA procedure provides a tool for optimized detection and correction of eye movement-related artifact components.

279 citations
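The regression-based correction the authors compare against can be sketched as a least-squares projection of the EOG channels out of each EEG channel. This toy version assumes separately recorded EOG reference channels; it works cleanly only when the reference is free of brain signal, which is precisely the over/under-correction limitation discussed above.

```python
import numpy as np

def regress_out_eog(eeg, eog):
    """Subtract the least-squares projection of EOG channels from each EEG
    channel. eeg: (n_eeg_channels, n_samples); eog: (n_eog_channels, n_samples).
    B holds the estimated propagation coefficients solving eeg ~ B @ eog."""
    B = eeg @ eog.T @ np.linalg.inv(eog @ eog.T)
    return eeg - B @ eog
```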


Journal ArticleDOI
TL;DR: This paper employs the concept of spatially constrained ICA (SCICA) to extract artifact-only independent components (ICs) from the given EEG data, uses WD to remove any cerebral activity from the extracted artifact ICs, and finally projects the artifacts back so they can be subtracted from the EEG signals, yielding clean EEG data.

180 citations


Journal ArticleDOI
TL;DR: EMD outperformed the three other algorithms in denoising data highly contaminated by muscular activity, suggesting that the performance of muscle artifact correction methods depends strongly on the level of data contamination and on the source configuration underlying the EEG signals.
Abstract: Electroencephalographic (EEG) recordings are often contaminated with muscle artifacts. This disturbing myogenic activity not only strongly affects the visual analysis of EEG, but also most surely impairs the results of EEG signal processing tools such as source localization. This article focuses on the particular context of the contamination of epileptic signals (interictal spikes) by muscle artifacts, as EEG is a key diagnostic tool for this pathology. In this context, our aim was to compare the ability of two stochastic approaches to blind source separation, namely independent component analysis (ICA) and canonical correlation analysis (CCA), with that of two deterministic approaches, namely empirical mode decomposition (EMD) and the wavelet transform (WT), to remove muscle artifacts from EEG signals. To quantitatively compare the performance of these four algorithms, epileptic spike-like EEG signals were simulated from two different source configurations and artificially contaminated with different levels of real EEG-recorded myogenic activity. The efficiency of CCA, ICA, EMD, and WT in correcting the muscular artifact was evaluated both by calculating the normalized mean-squared error between denoised and original signals and by comparing the results of source localization obtained from artifact-free as well as noisy signals, before and after artifact correction. Tests on real data recorded in an epileptic patient are also presented. The results obtained on simulated and real data show that EMD outperformed the three other algorithms in denoising data highly contaminated by muscular activity. For less noisy data, and when spikes arose from a single cortical source, the myogenic artifact was best corrected with CCA and ICA. When spikes originated from two distinct sources, either EMD or ICA offered the most reliable denoising for highly noisy data, while WT offered better denoising for less noisy data. These results suggest that the performance of muscle artifact correction methods depends strongly on the level of data contamination and on the source configuration underlying the EEG signals. Finally, some insights into the numerical complexity of these four algorithms are given.

160 citations
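Neither CCA nor EMD is reproduced here, but the temporal-decorrelation idea behind the CCA approach, ordering components by autocorrelation and dropping the least autocorrelated (muscle-like) ones, can be sketched in an AMUSE-style form. The lag-1 statistic, the whitening step, and the back-projection are assumptions of this sketch, not the paper's exact algorithm.

```python
import numpy as np

def remove_low_autocorr(X, n_remove=1):
    """Whiten the channels, diagonalize the symmetrized lag-1 covariance so
    components are ordered by autocorrelation, zero the n_remove components
    with the lowest autocorrelation (muscle-like), and back-project.
    X: (channels, samples)."""
    X = np.asarray(X, float)
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean
    d, E = np.linalg.eigh(np.cov(Xc))
    W = E @ np.diag(1.0 / np.sqrt(d)) @ E.T           # whitening matrix
    Z = W @ Xc
    C1 = Z[:, 1:] @ Z[:, :-1].T / (Z.shape[1] - 1)    # lag-1 covariance
    lam, V = np.linalg.eigh((C1 + C1.T) / 2)          # ascending autocorrelation
    S = V.T @ Z                                       # estimated sources
    S[:n_remove] = 0.0                                # drop muscle-like components
    return np.linalg.inv(W) @ V @ S + mean
```

The key design choice mirrors the BSS-CCA rationale: muscle activity is broadband and weakly autocorrelated, while cortical rhythms and spikes are strongly autocorrelated, so a second-order temporal statistic separates them without higher-order ICA machinery.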


Journal ArticleDOI
TL;DR: A two-step processing scheme called 'artifact suppressed large-scale nonlocal means' is described for suppressing both noise and artifacts in thoracic LDCT images; qualitative and quantitative analyses support the efficacy of the method in improving thoracic LDCT data.
Abstract: The x-ray exposure to patients has become a major concern in computed tomography (CT), and minimizing the radiation exposure has been one of the major efforts in the CT field. Due to the abundance of high-attenuation tissue in the human chest, under low-dose scan protocols thoracic low-dose CT (LDCT) images tend to be severely degraded by excessive mottled noise and non-stationary streak artifacts. Their removal is a challenging task because the streak artifacts, with their directional prominence, are often hard to discriminate from the attenuation information of normal tissues. This paper describes a two-step processing scheme called 'artifact suppressed large-scale nonlocal means' for suppressing both noise and artifacts in thoracic LDCT images. Specific scale and direction properties were exploited to discriminate the noise and artifacts from image structures. A parallel implementation speeds up the whole processing by more than 100 times. Phantom and patient CT images were both acquired for evaluation purposes. Comparative qualitative and quantitative analyses were performed, allowing conclusions on the efficacy of our method in improving thoracic LDCT data.

129 citations
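A toy nonlocal means filter conveys the underlying principle (each pixel becomes a weighted average of pixels whose surrounding patches look similar); the paper's accelerated, artifact-suppressed, large-scale variant is far more elaborate. Patch size, search window, and the smoothing parameter h below are assumptions of this sketch.

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.6):
    """Minimal nonlocal means: O(N * search^2) brute force, for illustration
    only. Weights decay with squared patch distance; similar patches anywhere
    in the search window contribute, which preserves edges while averaging
    noise in flat regions."""
    pad = patch // 2
    P = np.pad(img, pad, mode='reflect')
    H, W = img.shape
    out = np.zeros_like(img, dtype=float)
    s = search // 2
    for i in range(H):
        for j in range(W):
            p0 = P[i:i + patch, j:j + patch]          # patch around (i, j)
            wsum, acc = 0.0, 0.0
            for di in range(max(0, i - s), min(H, i + s + 1)):
                for dj in range(max(0, j - s), min(W, j + s + 1)):
                    q = P[di:di + patch, dj:dj + patch]
                    w = np.exp(-np.sum((p0 - q) ** 2) / (h * h * patch * patch))
                    wsum += w
                    acc += w * img[di, dj]
            out[i, j] = acc / wsum
    return out
```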


Journal ArticleDOI
TL;DR: The results show that the SF in MEG closely resembles neuronal activity in frontal and temporal sensors, and the source configurations of the SF were comparable for regular and miniature saccades.

117 citations


Journal ArticleDOI
TL;DR: The performance showed that multi-class MI tasks can be reliably discriminated using artifact-contaminated EEG recordings from a few channels, and may be a promising avenue for online robust EEG-based BCI applications.
Abstract: Recent studies show that scalp electroencephalography (EEG) as a non-invasive interface has great potential for brain-computer interfaces (BCIs). However, one factor that has limited practical applications of EEG-based BCI so far is the difficulty of decoding brain signals in a reliable and efficient way. This paper proposes a new robust processing framework for decoding of multi-class motor imagery (MI) that is based on five main processing steps. (i) Raw EEG segmentation without the need of visual artifact inspection. (ii) Considering that EEG recordings are often contaminated not just by electrooculography (EOG) but also by other types of artifacts, we propose to first implement an automatic artifact correction method that combines regression analysis with independent component analysis (ICA) for recovering the original source signals. (iii) The significant difference between frequency components based on event-related (de-)synchronization and sample entropy is then used to find non-continuous discriminating rhythms. After spectral filtering using the discriminating rhythms, a channel selection algorithm is used to select only relevant channels. (iv) Feature vectors are extracted based on the inter-class diversity and time-varying dynamic characteristics of the signals. (v) Finally, a support vector machine (SVM) is employed for four-class classification. We tested our proposed algorithm on experimental data obtained from dataset 2a of BCI competition IV (2008). The overall four-class kappa values (between 0.41 and 0.80) were comparable to other models but did not require removal of any artifact-contaminated trials. The performance showed that multi-class MI tasks can be reliably discriminated using artifact-contaminated EEG recordings from a few channels. This may be a promising avenue for online robust EEG-based BCI applications.

115 citations
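Sample entropy, used in step (iii) above to find discriminating rhythms, can be computed directly from its definition; m = 2 and r = 0.2 are conventional defaults here, not necessarily the paper's settings.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy: -ln(A/B), with B the number of template pairs of
    length m within tolerance r*std (Chebyshev distance, no self-matches)
    and A the same count for templates of length m+1."""
    x = np.asarray(x, float)
    tol = r * np.std(x)
    N = len(x)

    def count(mm):
        c = 0
        for i in range(N - m):          # same number of templates for both lengths
            for j in range(i + 1, N - m):
                if np.max(np.abs(x[i:i + mm] - x[j:j + mm])) <= tol:
                    c += 1
        return c

    B, A = count(m), count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else float('inf')
```

Regular, predictable signals (like an ongoing rhythm) yield low sample entropy; irregular broadband activity yields high values, which is what makes the statistic useful for separating rhythms from background.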


Journal ArticleDOI
01 Sep 2012
TL;DR: A more empirical approach to modeling the desired signal is described and demonstrated for functional brain monitoring tasks; it allows the procurement of a "ground truth" signal that is highly correlated with the true desired signal underlying an artifact-contaminated recording.
Abstract: Artifact removal from physiological signals is an essential component of the biosignal processing pipeline. The need for powerful and robust methods for this process has become particularly acute as healthcare technology deployment undergoes transition from the current hospital-centric setting toward a wearable and ubiquitous monitoring environment. Currently, determining the relative efficacy and performance of the multiple artifact removal techniques available on real world data can be problematic, due to incomplete information on the uncorrupted desired signal. The majority of techniques are presently evaluated using simulated data, and therefore, the quality of the conclusions is contingent on the fidelity of the model used. Consequently, in the biomedical signal processing community, there is considerable focus on the generation and validation of appropriate signal models for use in artifact suppression. Most approaches rely on mathematical models which capture suitable approximations to the signal dynamics or underlying physiology and, therefore, introduce some uncertainty to subsequent predictions of algorithm performance. This paper describes a more empirical approach to the modeling of the desired signal, demonstrated here for functional brain monitoring tasks, which allows for the procurement of a "ground truth" signal that is highly correlated with the true desired signal that has been contaminated with artifacts. The availability of this "ground truth," together with the corrupted signal, can then aid in determining the efficacy of selected artifact removal techniques. A number of commonly implemented artifact removal techniques were evaluated using the described methodology to validate the proposed novel test platform.

103 citations


Journal ArticleDOI
TL;DR: Experimental results show that the WNN algorithm can remove EEG artifacts effectively without diminishing useful EEG information even for very noisy datasets.

Journal ArticleDOI
TL;DR: Evaluation based on a large set of simultaneous EEG-fMRI data obtained during a variety of behavioral tasks, sensory stimulations and resting conditions showed excellent data quality and robust performance attainable with the proposed methods.

Journal ArticleDOI
TL;DR: Electroencephalography can be used to image the brain during locomotion provided that signal processing techniques, such as independent Component Analysis (ICA), are used to parse electrocortical activity from artifact contaminated EEG, but for certain applications the number of EEG sensors used for mobile brain imaging could be vastly reduced.
Abstract: A noninvasive method for imaging the human brain during mobile activities could have far reaching benefits for studies of human motor control, for research and treatment of neurological disabilities, and for brain-controlled powered prosthetic limbs or orthoses. Several recent studies have demonstrated that electroencephalography (EEG) can be used to image the brain during locomotion provided that signal processing techniques, such as Independent Component Analysis (ICA), are used to parse electrocortical activity from artifact-contaminated EEG. However, these studies used high-density 256-channel EEG sensor arrays, which are likely too time-consuming to set up in a clinical or field setting. Therefore, it is important to evaluate how reducing the number of EEG channel signals affects the electrocortical source signals that can be parsed from EEG recorded during standing and walking while concurrently performing a visual oddball discrimination task. Specifically, we computed temporal and spatial correlations between electrocortical sources parsed from high-density EEG and electrocortical sources parsed from reduced-channel subsets of the original high-density EEG. For this task, our results indicate that on average an EEG montage with as few as 35 channels may be sufficient to record the two most dominant electrocortical sources (temporal and spatial R2 > 0.9). Correlations for additional electrocortical sources decreased linearly such that the least dominant sources extracted from the 35-channel dataset had temporal and spatial correlations of approximately 0.7. This suggests that for certain applications the number of EEG sensors used for mobile brain imaging could be vastly reduced, but researchers and clinicians must consider the expected distribution of relevant electrocortical sources when determining the number of EEG sensors necessary for a particular application.

Proceedings ArticleDOI
01 Jan 2012
TL;DR: The photoplethysmogram (PPG) recorded from pulse oximetry is often corrupted with artifacts, which render the derived vital signs inaccurate.
Abstract: Pulse oximeters non-invasively measure heart rate and oxygen saturation and have great potential for predicting critical illness. The photoplethysmogram (PPG) recorded from pulse oximetry is often corrupted with artifacts. These artifacts render the derived vital signs inaccurate.

Proceedings ArticleDOI
Xuxue Sun, Ping Yang, Yulin Li, Zhifan Gao, Yuan-Ting Zhang
08 Jun 2012
TL;DR: A signal processing method based on multi-scale data analysis using Empirical Mode Decomposition (EMD) is proposed for accurate heart rate extraction; it improves the accuracy of heart beat detection, with a period recovery rate of 84.68%.
Abstract: Many vital physiological features are embedded in photoplethysmography (PPG). Among them, heart beat carries the most significant importance for physiological monitoring in both the clinical and mobile health-care settings. However, motion artifact induced by finger and arm movement can corrupt the PPG signal significantly and cause serious false recognition of physiological features, leading to erroneous medical decision. In this paper, we propose a signal processing method based on multi-scale data analysis using Empirical Mode Decomposition (EMD) for the purpose of accurate heart rate extraction. Experiments with signals from Physionet database and the signals collected in our lab showed that our method can improve the accuracy of heart beat detection with period recovery rate at 84.68%.

Proceedings ArticleDOI
12 Nov 2012
TL;DR: An algorithm is presented for identifying clean EEG epochs by thresholding statistical properties of the EEG, with thresholds trained via differential evolution (DE) on EEG datasets from both healthy subjects and stroke/spinal cord injury patient populations.
Abstract: Lack of a clear analytical metric for identifying artifact-free, clean electroencephalographic (EEG) signals inhibits robust comparison of different artifact removal methods and lowers confidence in the results of EEG analysis. An algorithm is presented for identifying clean EEG epochs by thresholding statistical properties of the EEG. Thresholds are trained on EEG datasets from both healthy subjects and stroke/spinal cord injury patient populations via differential evolution (DE).
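A thresholding scheme of this kind can be sketched as below. The specific statistics (peak amplitude, kurtosis, variance relative to the median epoch) and the fixed thresholds are illustrative assumptions; the paper learns its thresholds with differential evolution.

```python
import numpy as np

def clean_epoch_mask(epochs, peak_max=100.0, kurt_max=5.0, var_ratio_max=3.0):
    """Mark an epoch as clean when its peak amplitude, kurtosis, and variance
    (relative to the median epoch variance) all stay under threshold.
    epochs: (n_epochs, n_samples), amplitudes assumed in microvolts."""
    E = np.asarray(epochs, float)
    peak = np.max(np.abs(E), axis=1)
    var = np.var(E, axis=1)
    c = E - E.mean(axis=1, keepdims=True)
    kurt = np.mean(c ** 4, axis=1) / np.mean(c ** 2, axis=1) ** 2
    return (peak < peak_max) & (kurt < kurt_max) & (var < var_ratio_max * np.median(var))
```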


Journal ArticleDOI
TL;DR: CIAC enables the objective and efficient attenuation of the CI artifact in EEG recordings, as it provided a reasonable reconstruction of individual AEPs, and the systematic pattern of individual differences in N1 amplitudes and latencies observed with different stimuli at different sessions suggests that CIAC can overcome the electrical artifact problem.

Proceedings ArticleDOI
12 Nov 2012
TL;DR: A robust real-time technique to estimate fundamental frequency and generate a noise reference signal is proposed and a Least Mean Square adaptive noise canceler is designed and validated using a synthetic noise generator.
Abstract: The performance of wearable biosensors is highly influenced by motion artifact. In this paper, a model is proposed for the analysis of motion artifact in wearable photoplethysmography (PPG) sensors. Using this model, we propose a robust real-time technique to estimate the fundamental frequency and generate a noise reference signal. A Least Mean Square (LMS) adaptive noise canceler is then designed and validated using our synthetic noise generator. The analysis and results for the proposed noise cancellation technique show promising performance.
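Fundamental-frequency estimation and noise-reference generation can be sketched via the autocorrelation peak. The search band (roughly 42-210 bpm) and the pure-sinusoid reference below are assumptions of this sketch, not the paper's model.

```python
import numpy as np

def estimate_f0(x, fs, fmin=0.7, fmax=3.5):
    """Estimate the fundamental frequency from the autocorrelation peak,
    restricted to a plausible pulse-rate band [fmin, fmax] in Hz."""
    x = np.asarray(x, float) - np.mean(x)
    ac = np.correlate(x, x, mode='full')[len(x) - 1:]  # lags 0..N-1
    lo, hi = int(fs / fmax), int(fs / fmin) + 1
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag

def noise_reference(f0, fs, n):
    """Synthetic reference at the estimated fundamental, to feed an adaptive
    canceler (a plain sinusoid stands in for the paper's generator)."""
    return np.sin(2 * np.pi * f0 * np.arange(n) / fs)
```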

Journal ArticleDOI
TL;DR: The results show that the electrode-tissue impedance can correlate with the motion artifacts for local disturbance of the electrodes and that the impedance signals can be used in motion artifact removal techniques such as adaptive filtering.
Abstract: Ambulatory monitoring of the electrocardiogram (ECG) is a highly relevant topic in personal healthcare. A key technical challenge is overcoming artifacts from motion in order to produce ECG signals capable of being used in clinical diagnosis by a cardiologist. The electrode-tissue impedance is a signal of significant interest for reducing the motion artifact in ECG recordings on the go. A wireless system containing an ultralow-power analog front end for ECG signal acquisition, as well as the electrode-tissue impedance, is used in a validation study on multiple subjects. The goal of this paper is to study the correlation between motion artifacts and skin electrode impedance for a variety of motion types and electrodes. We have found that the correlation of the electrode-tissue impedance with the motion artifact is highly dependent on the electrode design, the impedance signal component (real, imaginary, or absolute impedance), and the artifact type (e.g., push or pull on electrodes). With the chosen electrodes, we found that the highest correlation was obtained for local electrode artifacts (push, pull) followed by local skin artifacts (stretch, twist) and global artifacts (walk, jog, jump). The results show that the electrode-tissue impedance can correlate with the motion artifacts for local disturbance of the electrodes and that the impedance signals can be used in motion artifact removal techniques such as adaptive filtering.

Journal ArticleDOI
28 Sep 2012-Science
TL;DR: The results suggest that the grid pattern in the brain is most likely an artifact attributable to the limitations of their method, and the whole-brain fiber crossing quantification does not support their theory.
Abstract: Wedeen et al. (Reports, 30 March 2012, p. 1628) proposed a geometrical grid pattern in the brain that could help the understanding of the brain's organization and connectivity. We show that whole-brain fiber crossing quantification does not support their theory. Our results suggest that the grid pattern is most likely an artifact attributable to the limitations of their method.

Proceedings ArticleDOI
18 Oct 2012
TL;DR: A new method which removes motion induced artifacts in ECG recording devices by obtaining an estimate of the artifacts using the stationary wavelet transformation and an automatic multi-resolution thresholding scheme which uses a robustified QRS detection is proposed.
Abstract: The electrocardiogram (ECG) is a powerful non-invasive tool which allows for diagnosis of a wide range of heart conditions. Today, portable ECG recording devices, equipped with a transmitter, can be used to provide health related information and to trigger alarms in case of life threatening situations. However, these devices suffer from motion induced artifacts. While much research has been conducted to remove time invariant noise, the removal of motion induced artifacts remains an unsolved problem. We therefore introduce a new method which removes these artifacts. This is done by obtaining an estimate of the artifacts using the stationary wavelet transformation. An automatic multi-resolution thresholding scheme which uses a robustified QRS detection is proposed. Real data examples as well as simulations are given which illustrate the performance of the method.

Journal ArticleDOI
TL;DR: The experimental results show that the data adaptive filtering approach to separate the electrooculograph (EOG) artifact from the recorded electroencephalograph (EEG) signal performs better than the wavelet based approach.

Journal ArticleDOI
TL;DR: It is found that microphone-derived MMG spectra were significantly less influenced by motion artifact than corresponding accelerometer-derived spectra, and condenser microphones are preferred for MMG recordings when the mitigation of motion artifact effects is important.

Journal ArticleDOI
TL;DR: A dose-dependent, nonlinear propagation artifact known as pseudoenhancement occurs in the far wall adventitia of the carotid artery and should not be mistaken as a marker of plaque vulnerability.
Abstract: At dynamic contrast-enhanced US, a dose-dependent artifact occurs in the far wall of the carotid artery, which limits the assessment of plaque vulnerability using neovascularization.

Journal ArticleDOI
TL;DR: The characteristics of unintentional muscle activities align with the reported characteristics of controlled muscle activities and the ICA-SR method provides an urgently needed solution with validated performance for efficiently processing large volumes of clinical EEG.

Proceedings ArticleDOI
01 Jan 2012
TL;DR: A motion artifact removal method with a two-stage cascade LMS adaptive filter and an adaptive step-size LMS algorithm can achieve fast convergence to track large sudden motion artifact quickly, while preventing the distortion of the ECG component.
Abstract: A motion artifact removal method with a two-stage cascade LMS adaptive filter is proposed for an ambulatory ECG monitoring system. The first LMS stage, consisting of analog feedback, prevents signal saturation to reduce the input dynamic range. An adaptive step-size LMS algorithm is introduced and employed for the second LMS stage. The adaptive step-size algorithm can achieve fast convergence to track large, sudden motion artifacts quickly, while preventing distortion of the ECG component. The filtering performance is evaluated by heart beat detection, measured by sensitivity (Se) and positive predictive value (+P); the performance is increased by 9.8% and 6.48%, respectively, compared to the unfiltered signal in the worst case of -25 dB SNR. The proposed motion artifact removal method is implemented on an ambulatory ECG monitoring module, and real-time measurement shows a significant performance improvement.

Proceedings ArticleDOI
12 Nov 2012
TL;DR: An algorithm for motion artifact detection, which is based on the analysis of the variations in the time and period domain characteristics of the PPG signal, shows that both time and especially period domain features play an important role in the discrimination of motion artifacts from clean PPG pulses.
Abstract: The presence of motion artifacts in the photoplethysmographic (PPG) signals is one of the major obstacles in the extraction of reliable cardiovascular parameters in real time and continuous monitoring applications. In the current paper we present an algorithm for motion artifact detection, which is based on the analysis of the variations in the time and period domain characteristics of the PPG signal. The extracted features are ranked using a feature selection algorithm (NMIFS) and the best features are used in a Support Vector Machine classification model to distinguish between clean and corrupted sections of the PPG signal. The results achieved by the current algorithm (SE: 0.827 and SP: 0.927) show that both time and especially period domain features play an important role in the discrimination of motion artifacts from clean PPG pulses.
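For reference, the reported SE/SP figures follow from a standard confusion matrix, independent of the paper's SVM classifier:

```python
def sensitivity_specificity(y_true, y_pred):
    """SE = TP/(TP+FN) over artifact-positive sections, SP = TN/(TN+FP)
    over clean sections (labels: 1 = artifact, 0 = clean)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)
```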

Journal ArticleDOI
TL;DR: To characterize cardiac motion artifacts in the liver and assess the use of a postprocessing method to mitigate these artifacts in repeat measurements.
Abstract: Purpose: To characterize cardiac motion artifacts in the liver and assess the use of a postprocessing method to mitigate these artifacts in repeat measurements. Materials and Methods: Three subjects underwent breathhold diffusion-weighted (DW) scans consisting of 25 repetitions for three b-values (0, 500, 1000 sec/mm2). Statistical maps computed from these repetitions were used to assess the distribution and behavior of cardiac motion artifacts in the liver. An objective postprocessing method to reduce the artifacts was compared with radiologist-defined gold standards. Results: Signal dropout is pronounced in areas proximal to the heart, such as the left lobe, but also present in the right lobe and in distal liver segments. The dropout worsens with b-value and leads to overestimation of the diffusivity. By reference to a radiologist-defined gold standard, a postprocessing correction method is shown to reduce cardiac motion artifact. Conclusion: Cardiac motion leads to significant artifacts in liver DW imaging; we propose a postprocessing method that may be used to mitigate the artifact and is advantageous to standard signal averaging in acquisitions with multiple repetitions. J. Magn. Reson. Imaging 2012;318-327. © 2011 Wiley Periodicals, Inc.
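The "overestimation of the diffusivity" under signal dropout follows directly from the standard monoexponential ADC model, S(b) = S0 * exp(-b * ADC); the numbers below are made up for illustration only.

```python
import numpy as np

def adc(signals, bvalues):
    """Apparent diffusion coefficient from a linear fit of ln(S) vs b,
    assuming the monoexponential model S(b) = S0 * exp(-b * ADC)."""
    b = np.asarray(bvalues, float)
    logs = np.log(np.asarray(signals, float))
    slope, _ = np.polyfit(b, logs, 1)
    return -slope
```

Cardiac motion attenuates the high-b-value signal, steepening the fitted decay, so the estimated ADC rises above the tissue's true diffusivity.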

Journal ArticleDOI
TL;DR: Experiments on various JPEG-compressed images at various bit rates demonstrate that the proposed blocking artifact measure matches well with the subjective image quality judged by human observers.

Abstract: Block-based transform coding is one of the most popular techniques for image and video compression. However, it suffers from several visual quality degradation factors, most notably blocking artifacts. The subjective picture quality degradation caused by blocking artifacts, in general, does not agree well with popular objective quality measures such as PSNR. A new image quality assessment method that detects and measures the strength of blocking artifacts in block-based transform coded images is proposed. In order to characterize the blocking artifacts, we utilize two observations: if blocking artifacts occur on a block boundary, the pixel value changes abruptly across the boundary, and the same pixel values usually span the entire length of the boundary. The proposed method operates only on a single block boundary to detect blocking artifacts. When a boundary is classified as having blocking artifacts, the corresponding blocking artifact strength is also computed. Average values of those blocking artifact strengths are converted into a single number representing the subjective image quality. Experiments on various JPEG-compressed images at various bit rates demonstrated that the proposed blocking artifact measure matches well with the subjective image quality judged by human observers.
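The boundary-discontinuity observation can be turned into a crude blockiness score; this is a simplified stand-in, not the authors' measure. The 8-pixel block size matches JPEG, and only horizontal differences are considered.

```python
import numpy as np

def boundary_blockiness(img, block=8):
    """Mean absolute pixel difference across vertical block boundaries minus
    the mean difference inside blocks; clearly positive values indicate
    blocking artifacts, values near zero indicate none."""
    diffs = np.abs(np.diff(img.astype(float), axis=1))
    cols = np.arange(diffs.shape[1])
    at_boundary = (cols % block) == block - 1  # differences spanning a boundary
    return diffs[:, at_boundary].mean() - diffs[:, ~at_boundary].mean()
```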