
Showing papers on "Median filter published in 2018"


Journal ArticleDOI
TL;DR: This work presents a novel algorithm for background subtraction from video sequences that uses a deep Convolutional Neural Network (CNN) to perform the segmentation, and it outperforms existing algorithms with respect to the average ranking over the different evaluation metrics announced in CDnet 2014.

331 citations


Journal ArticleDOI
TL;DR: DAMF successfully removed SAP noise at all densities and was compared with other methods using Peak Signal to Noise Ratio (PSNR) and Structural Similarity (SSIM) on images such as Cameraman and Lena.

145 citations


Journal ArticleDOI
TL;DR: Qualitative and quantitative analysis indicates that the proposed technique can be used as an effective tool for denoising of ECG signals and hence can support better diagnostics in computer-based automated medical systems.

144 citations


Journal ArticleDOI
TL;DR: A chaotic encryption-based blind digital image watermarking technique applicable to both grayscale and color images that can be used in applications like e-healthcare and telemedicine to robustly hide electronic health records in medical images.
Abstract: This paper presents a chaotic encryption-based blind digital image watermarking technique applicable to both grayscale and color images. Discrete cosine transform (DCT) is used before embedding the watermark in the host image. The host image is divided into 8 × 8 nonoverlapping blocks prior to DCT application, and the watermark bit is embedded by modifying the difference between DCT coefficients of adjacent blocks. Arnold transform is used in addition to chaotic encryption to add double-layer security to the watermark. Three different variants of the proposed algorithm have been tested and analyzed. The simulation results show that the proposed scheme is robust to most of the image processing operations like Joint Photographic Experts Group (JPEG) compression, sharpening, cropping, and median filtering. To validate the efficiency of the proposed technique, the simulation results are compared with certain state-of-the-art techniques. The comparison results illustrate that the proposed scheme performs better in terms of robustness, security, and imperceptibility. Given the merits of the proposed scheme, it can be used in applications like e-healthcare and telemedicine to robustly hide electronic health records in medical images.
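The core embedding step described above, modifying the difference between DCT coefficients of adjacent 8 × 8 blocks, can be sketched roughly as follows. This is a minimal illustration rather than the paper's exact algorithm: the chosen coefficient index, embedding strength, and block pairing are assumptions, and the Arnold transform and chaotic encryption of the watermark are omitted.

```python
# Illustrative sketch (not the paper's exact algorithm): embed one watermark bit per
# pair of horizontally adjacent 8x8 blocks by forcing the sign of the difference
# between a chosen mid-frequency DCT coefficient of the two blocks.
import numpy as np
from scipy.fft import dctn, idctn

def embed_bits(image, bits, coeff=(3, 4), strength=10.0):
    """image: 2-D float array, height a multiple of 8 and width a multiple of 16."""
    out = image.astype(float).copy()
    h, w = out.shape
    k = 0
    for r in range(0, h, 8):
        for c in range(0, w, 16):            # two adjacent 8x8 blocks carry one bit
            if k >= len(bits):
                return out
            b1 = dctn(out[r:r+8, c:c+8], norm='ortho')
            b2 = dctn(out[r:r+8, c+8:c+16], norm='ortho')
            diff = b1[coeff] - b2[coeff]
            target = strength if bits[k] else -strength   # bit 1 -> positive difference
            if (diff > 0) != bool(bits[k]) or abs(diff) < strength:
                mid = 0.5 * (b1[coeff] + b2[coeff])
                b1[coeff] = mid + target / 2
                b2[coeff] = mid - target / 2
            out[r:r+8, c:c+8] = idctn(b1, norm='ortho')
            out[r:r+8, c+8:c+16] = idctn(b2, norm='ortho')
            k += 1
    return out

def extract_bits(image, n_bits, coeff=(3, 4)):
    img = image.astype(float)
    h, w = img.shape
    bits = []
    for r in range(0, h, 8):
        for c in range(0, w, 16):
            if len(bits) >= n_bits:
                return bits
            b1 = dctn(img[r:r+8, c:c+8], norm='ortho')
            b2 = dctn(img[r:r+8, c+8:c+16], norm='ortho')
            bits.append(1 if b1[coeff] - b2[coeff] > 0 else 0)
    return bits
```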

141 citations


Journal ArticleDOI
TL;DR: A NILM algorithm based on Deep Neural Networks is proposed, which outperforms the AFAMAP algorithm in both seen and unseen conditions and exhibits significant robustness in the presence of noise.

134 citations


Journal ArticleDOI
TL;DR: A new method to remove salt and pepper noise, the based-on-pixel-density filter (BPDF), is proposed; results show that BPDF produces better results than the compared methods at low and medium noise densities.
Abstract: In this paper, we deliver a new method to remove salt and pepper noise, which we refer to as the based-on-pixel-density filter (BPDF). The first step of the method is to determine whether or not a pixel is noisy; we then decide on an adaptive window size that accepts the noisy pixel as its center. The most repetitive noiseless pixel value within the window is set as the new pixel value. Using 18 test images, we report peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and image enhancement factor (IEF) results for the standard median filter (SMF), adaptive median filter (AMF), adaptive fuzzy filter (AFM), progressive switching median filter (PSMF), decision-based algorithm (DBA), modified decision-based unsymmetrical trimmed median filter (MDBUTMF), noise adaptive fuzzy switching median filter (NAFSM), and BPDF. The results show that BPDF produces better results than the above-mentioned methods at low and medium noise density.
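A minimal sketch of the BPDF idea is given below, under the common assumption that salt-and-pepper pixels take only the extreme values 0 or 255; the window growth limit is illustrative and not the paper's setting.

```python
# Rough sketch of the BPDF idea: grow a window around each noisy pixel until it
# contains noise-free neighbours, then take the most repetitive noise-free value.
import numpy as np
from collections import Counter

def bpdf_like_filter(img, max_half=5):
    img = img.astype(np.uint8)
    out = img.copy()
    h, w = img.shape
    noisy = (img == 0) | (img == 255)            # assumed salt-and-pepper values
    for y, x in zip(*np.nonzero(noisy)):
        for half in range(1, max_half + 1):      # adaptively grow the window
            y0, y1 = max(0, y - half), min(h, y + half + 1)
            x0, x1 = max(0, x - half), min(w, x + half + 1)
            window = img[y0:y1, x0:x1]
            clean = window[(window != 0) & (window != 255)]
            if clean.size:                       # most repetitive noise-free value wins
                out[y, x] = Counter(clean.tolist()).most_common(1)[0][0]
                break
    return out
```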

89 citations


Proceedings ArticleDOI
02 Mar 2018
TL;DR: This paper surveys various median filtering techniques for excluding noisy pixels from a digital image, covering various types of median filters such as the recursive median filter, iterative median filter, directional median filter, weighted median filter, adaptive median filter, progressive switching median filter and threshold median filter.
Abstract: The elimination of noise from images has become a trending field in image processing. Images may get corrupted by random changes in pixel intensity, illumination, or poor contrast, and then cannot be used directly. Therefore, it is necessary to remove the impulse noise present in an image. Median-based filters are commonly used to remove such impulse noise. We consider various types of median filters such as the recursive median filter, iterative median filter, directional median filter, weighted median filter, adaptive median filter, progressive switching median filter and threshold median filter. This paper surveys these median filtering techniques for excluding noisy pixels from a digital image.
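For reference, two of the surveyed variants can be sketched directly: the standard median filter and a weighted median filter in which each neighbour is repeated according to an integer weight. The weight mask in the usage note is only an example; the loops favour clarity over speed.

```python
# Minimal sketches of the standard median filter and a weighted median filter.
import numpy as np

def standard_median(img, k=3):
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty(img.shape, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = np.median(padded[y:y+k, x:x+k])
    return out

def weighted_median(img, weights):
    """weights: integer array with the same (odd) shape as the window; each pixel
    is repeated `weight` times before the median is taken."""
    k = weights.shape[0]
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty(img.shape, dtype=float)
    flat_w = weights.ravel()
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            window = padded[y:y+k, x:x+k].ravel()
            out[y, x] = np.median(np.repeat(window, flat_w))
    return out

# Example: a center-weighted 3x3 median (center pixel counted three times):
#   weighted_median(img, np.array([[1, 1, 1], [1, 3, 1], [1, 1, 1]]))
```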

84 citations


Journal ArticleDOI
TL;DR: A new median filtering detection method based on CNN is proposed; it achieves significantly improved detection performance and performs well for highly compressed images as small as 16 × 16.

62 citations


Journal ArticleDOI
Dongkyu Kim, Han-Ul Jang, Seung-Min Mun, Sunghee Choi, Heung-Kyu Lee
TL;DR: This work presents a median filtering anti-forensic method based on deep convolutional neural networks, which can effectively remove traces from median filtered images and adopts the framework of generative adversarial networks to generate images that follow the underlying statistics of unaltered images, significantly enhancing forensic undetectability.
Abstract: Median filtering is used as an anti-forensic technique to erase the processing history of some image manipulations such as JPEG compression, resampling, etc. Thus, various detectors have been proposed to detect median filtered images. To counter these techniques, several anti-forensic methods have been devised as well. However, restoring a median filtered image is a typical ill-posed problem, and thus it is still difficult to reconstruct an image visually close to the original. Moreover, it is even harder to make the restored image exhibit the statistical characteristics of the raw image for anti-forensic purposes. To solve this problem, we present a median filtering anti-forensic method based on deep convolutional neural networks, which can effectively remove traces from median filtered images. We adopt the framework of generative adversarial networks to generate images that follow the underlying statistics of unaltered images, significantly enhancing forensic undetectability. Through extensive experiments, we demonstrate that our method successfully deceives the existing median filtering forensic techniques.

61 citations


Journal ArticleDOI
TL;DR: The uncertainties encountered in the impulse noise detection are addressed using the theory of belief functions, and a multi-criteria detection strategy based on evidential reasoning is proposed, which has superior performance compared with several state-of-the-art denoising methods.

52 citations


Journal ArticleDOI
TL;DR: Numerical experiments demonstrate that the iterative deblending based on the SMF constraint obtains better performance and faster convergence than the low-rank and compressed sensing constraint-based deblending approaches.
Abstract: Simultaneous-source shooting can help reduce the acquisition time cost, but at the expense of introducing strong interference (blending noise) into the acquired seismic data. It has been demonstrated previously that the deblending problem can be considered as an inversion process. In this letter, we propose a new iterative approach to solve this inversion problem. In the proposed approach, a new coherency-promoting constraint, called structuring median filtering (SMF), is proposed and used to regularize the estimated model in each iteration. The SMF processes the signal by the interactions of the input signal and another given small section of signal, namely, the structuring element. The SMF is more robust than other coherency-promoting filtering such as the median filtering and mathematical morphological filtering. Numerical experiments demonstrate that the iterative deblending based on the SMF constraint obtains a better performance and a faster convergence than the low-rank and compressed sensing constraint-based deblending approaches.

Journal ArticleDOI
TL;DR: Experimental results show that the proposed scheme is not only efficient in terms of computational cost and memory requirement but also achieves good imperceptibility and robustness against geometric and non-geometric attacks like JPEG compression, median filtering, average filtering, addition of noise, sharpening, scaling, cropping and rotation, compared with state-of-the-art techniques.
Abstract: This paper presents an imperceptible, robust, secure and efficient image watermarking scheme in the lifting wavelet domain using a combination of genetic algorithm (GA) and Lagrangian support vector regression (LSVR). First, four subbands low–low (LL), low–high (LH), high–low (HL) and high–high (HH) are obtained by decomposing the host image from the spatial domain to the frequency domain using a one-level lifting wavelet transform. Second, the approximate image (LL subband) is divided into non-overlapping blocks, and the blocks selected based on fuzzy entropy are used to embed the binary watermark. Third, based on the correlation property of each transformed selected block, the significant lifting wavelet coefficient acts as the target of the LSVR, and its neighboring coefficients (called the feature vector) are set as the input to the LSVR to find the optimal regression function. This optimal regression function is used to embed and extract the scrambled watermark. In the proposed scheme, GA is used to solve the problem of optimal watermark embedding strength, based on the noise sensitivity of each selected block, in order to increase the imperceptibility of the watermark. Due to the good learning capability and high generalization of LSVR on noisy datasets, a high degree of robustness is achieved, making the scheme well suited for copyright protection applications. Experimental results on standard and real-world images show that the proposed scheme is not only efficient in terms of computational cost and memory requirement but also achieves good imperceptibility and robustness against geometric and non-geometric attacks like JPEG compression, median filtering, average filtering, addition of noise, sharpening, scaling, cropping and rotation, compared with state-of-the-art techniques.

Journal ArticleDOI
TL;DR: In this paper, the seam geometrical properties from a low-quality laser image captured without the conventional narrow band filter are extracted by a sequential image processing and feature extraction algorithm.
Abstract: Intelligent robotic welding requires automatic finding of the seam geometrical features in order for an efficient intelligent control. Performance of the system, therefore, heavily depends on the success of the seam finding stage. Among various seam finding techniques, active laser vision is the most effective approach. It typically requires high-quality lasers, camera and optical filters. The success of the algorithm is highly sensitive to the image processing and feature extraction algorithms. In this work, sequential image processing and feature extraction algorithms are proposed to effectively extract the seam geometrical properties from a low-quality laser image captured without the conventional narrow band filter. A novel method of laser segmentation and detection is proposed. The segmentation method involves averaging, colour processing and blob analysis. The detection method is based on a novel median filtering technique that involves enhancing of the image object based on its underlying structure and orientation in the image. The method when applied enhances the vertically oriented laser stripe in the image which improves the laser peak detection. The image processing steps are performed to make sure that the laser profile is accurately extracted within the region of interest (ROI). Feature extraction algorithm based on pixels’ intensity distribution and neighbourhood search is also proposed that can effectively extract the seam feature points. The proposed algorithms have been implemented and evaluated on various background complexities, seam sizes, material type and laser types before and during the welding operation.

Journal ArticleDOI
TL;DR: This study relies on the differential flower pollination algorithm as a metaheuristic to optimize the image processing-based crack detection model and points out that the newly constructed approach can achieve a good prediction outcome.
Abstract: Crack detection is a crucial task in the periodic survey of high-rise buildings and infrastructure. Manual survey is notorious for low productivity. This study is aimed at establishing an image processing-based method for detecting cracks on concrete wall surfaces in an automatic manner. The Roberts, Prewitt, Canny, and Sobel algorithms are employed as the edge detection methods for revealing the crack textures appearing in concrete walls. The median filtering and object cleaning operations are used to enhance the image and facilitate the crack recognition outcome. Since the edge detectors, the median filter, and the object cleaning operation all require the appropriate selection of tuning parameters, this study relies on the differential flower pollination algorithm as a metaheuristic to optimize the image processing-based crack detection model. Experimental results point out that the newly constructed approach that employs the Prewitt algorithm can achieve a good prediction outcome with classification accuracy rate = 89.95% and area under the curve = 0.90. Therefore, the proposed metaheuristic optimized image processing approach can be a promising alternative for automatic recognition of cracks on the concrete wall surface.
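A hedged sketch of the pipeline's main image-processing steps (Prewitt edge detection, median filtering, small-object cleaning) is given below; the threshold and size parameters are placeholders, whereas the paper tunes them with the differential flower pollination algorithm.

```python
# Sketch of the crack-detection image-processing chain with assumed parameters.
import numpy as np
from scipy.ndimage import median_filter
from skimage import filters, morphology

def crack_mask(gray, edge_thresh=0.05, min_size=64):
    """gray: 2-D float image in [0, 1]; edge_thresh and min_size are illustrative."""
    edges = filters.prewitt(gray)                      # Prewitt edge response
    edges = median_filter(edges, size=3)               # median filtering to suppress speckle
    mask = edges > edge_thresh                         # crude binarization of the edge map
    return morphology.remove_small_objects(mask, min_size=min_size)  # object cleaning
```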

Journal ArticleDOI
TL;DR: The proposed approach based on neutrosophic sets is accurate, less time consuming, less sensitive to noise, and performs well on non-uniform CT images.

Journal ArticleDOI
14 Jul 2018
TL;DR: A methodology for salt and pepper noise elimination in color images using a median filter is proposed, providing reconstruction of the image with minimum loss of information.
Abstract: Noise degrades image quality, causing information loss and unsatisfying visual effects. Salt and pepper noise is one of the most common noises that affect image quality. In an RGB color image, salt and pepper noise changes the number of occurrences of color combinations depending on the noise ratio. Many methods have been proposed to eliminate salt and pepper noise from color images with minimum loss of information. In this paper we investigate the effects of adding salt and pepper noise to an RGB color image: the experimental noise ratio is calculated, and the color combinations with the maximum and minimum numbers of occurrences are detected in the image. In addition, this paper proposes a methodology for salt and pepper noise elimination in color images using a median filter, providing reconstruction of the image with minimum loss of information. The proposed methodology is implemented and tested, and the experimental results are analyzed using the calculated RMSE and PSNR values.
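The two evaluation metrics mentioned above can be computed directly; the snippet below assumes 8-bit images (peak value 255).

```python
# RMSE and PSNR between an original and a restored image.
import numpy as np

def rmse(original, restored):
    diff = original.astype(float) - restored.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(original, restored, peak=255.0):
    e = rmse(original, restored)
    return float('inf') if e == 0 else 20.0 * np.log10(peak / e)
```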

Journal ArticleDOI
TL;DR: The results demonstrate that the proposed denoising algorithm developed using CDAE achieves a superior noise-reduction effect in chest radiograms compared to TV minimization and NLM algorithms, which are state-of-the-art algorithms for image noise reduction.
Abstract: When processing medical images, image denoising is an important pre-processing step. Various image denoising algorithms have been developed in the past few decades. Recently, image denoising using the deep learning method has shown excellent performance compared to conventional image denoising algorithms. In this study, we introduce an image denoising technique based on a convolutional denoising autoencoder (CDAE) and evaluate clinical applications by comparing it with existing image denoising algorithms. We train the proposed CDAE model using 3000 chest radiograms as training data. To evaluate the performance of the developed CDAE model, we compare it with conventional denoising algorithms including the median filter, total variation (TV) minimization, and non-local mean (NLM) algorithms. Furthermore, to verify the clinical effectiveness of the developed denoising model with CDAE, we investigate the performance of the developed denoising algorithm on chest radiograms acquired from real patients. The results demonstrate that the proposed denoising algorithm developed using CDAE achieves a superior noise-reduction effect in chest radiograms compared to TV minimization and NLM algorithms, which are state-of-the-art algorithms for image noise reduction. For example, the peak signal-to-noise ratio and structure similarity index measure of CDAE were at least 10% higher compared to conventional denoising algorithms. In conclusion, the image denoising algorithm developed using CDAE effectively eliminated noise without loss of information on anatomical structures in chest radiograms. It is expected that the proposed denoising algorithm developed using CDAE will be effective for medical images with microscopic anatomical structures, such as terminal bronchioles.
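A minimal convolutional denoising autoencoder can be sketched in Keras as below. The layer widths, depth, and training settings are illustrative assumptions, not the architecture reported in the paper.

```python
# Minimal convolutional denoising autoencoder sketch (assumed architecture).
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_cdae(shape=(256, 256, 1)):
    inp = layers.Input(shape=shape)
    x = layers.Conv2D(32, 3, activation='relu', padding='same')(inp)
    x = layers.MaxPooling2D(2, padding='same')(x)
    x = layers.Conv2D(32, 3, activation='relu', padding='same')(x)
    x = layers.MaxPooling2D(2, padding='same')(x)            # encoder
    x = layers.Conv2D(32, 3, activation='relu', padding='same')(x)
    x = layers.UpSampling2D(2)(x)
    x = layers.Conv2D(32, 3, activation='relu', padding='same')(x)
    x = layers.UpSampling2D(2)(x)                             # decoder
    out = layers.Conv2D(1, 3, activation='sigmoid', padding='same')(x)
    model = Model(inp, out)
    model.compile(optimizer='adam', loss='mse')
    return model

# Training pairs noisy radiographs (inputs) with clean ones (targets), e.g.:
#   model.fit(noisy_images, clean_images, epochs=50, batch_size=16)
```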

Journal ArticleDOI
TL;DR: A novel method to solve the challenging task of detecting median filtering in JPEG compressed images, which uses a two-dimensional autoregressive (2D-AR) model to characterize the MFR, AFR and GFR separately and further combines the 2D-AR coefficients of these three residuals into a set of features.
Abstract: Median filtering, being an order statistic filtering, has been widely used in image denoising and recently also in image anti-forensics and anti-steganalysis. In the past few years, several methods have been developed for median filtering detection. However, it is still a challenging task to detect median filtering in JPEG compressed images. In this paper, we propose a novel method to solve this challenging task. We first generate median filtered residual (MFR), average filtered residual (AFR) and Gaussian filtered residual (GFR) by calculating the differences between an original image and its filtered images. Then, we propose to use two-dimensional autoregressive (2D-AR) model to characterize MFR, AFR and GFR separately, and further combine the 2D-AR coefficients of these three residuals into a set of features. Finally, the extracted feature set is fed into a support vector machine classifier for training and detection. Extensive experiments have demonstrated that compared with existing methods, the proposed one can achieve a considerable improvement in detecting median filtering in heavily compressed images.
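The filtered-residual step can be sketched as follows: each residual is simply the difference between the image and a filtered copy of it. The window sizes are assumptions; in the paper, a 2D-AR model is then fitted to each residual and its coefficients form the feature vector fed to the SVM.

```python
# Median, average and Gaussian filtered residuals (window sizes are assumptions).
import numpy as np
from scipy.ndimage import median_filter, uniform_filter, gaussian_filter

def filtered_residuals(img, size=3, sigma=1.0):
    img = img.astype(float)
    mfr = img - median_filter(img, size=size)      # median filtered residual (MFR)
    afr = img - uniform_filter(img, size=size)     # average filtered residual (AFR)
    gfr = img - gaussian_filter(img, sigma=sigma)  # Gaussian filtered residual (GFR)
    return mfr, afr, gfr
```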

Proceedings ArticleDOI
20 Apr 2018
TL;DR: The experimental results show that the proposed CNN model can effectively remove Gaussian noise and improve the performance of traditional image filtering methods significantly.
Abstract: In digital image processing, filtering noise to reconstruct a high quality image is an important work for further image processing such as object segmentation, detection, recognition and tracking, etc. In this paper, we will use a CNN model in deep learning for image denoising. Compared with traditional image denoising methods such as average filtering, Wiener filtering and median filtering, the advantage of using this CNN model is that the parameters of this model can be optimized through network training; whereas in traditional image denoising, the parameters of these algorithms are fixed and cannot be adjusted during the filtering, namely, lack of adaptivity. In this paper, we design and implement the denoising method based on a linear CNN model. Our experimental results show that the proposed CNN model can effectively remove Gaussian noise and improve the performance of traditional image filtering methods significantly.

Journal ArticleDOI
TL;DR: In this paper, an improved median filter and a novel magnitude bandpass filter are proposed to suppress the sampling noise caused by EMI without any extra hardware cost, and the design tradeoffs among noises filter capability, delay effect and the computation time have been discussed for proposed filters.
Abstract: Silicon carbide (SiC) power devices are beneficial to converters in terms of size reduction and efficiency increase. Nevertheless, the fast switching of SiC devices results in a more serious electromagnetic interference (EMI) noise issue. Optical fiber based isolation provides a reliable solution to block the EMI noise from the power circuit to the control circuit, but with an additional cost and size penalty, especially for multilevel converters. This letter aims at a digital filter based solution to suppress the sampling noise caused by EMI without any extra hardware cost. An improved median filter and a novel magnitude bandpass filter are proposed. The design tradeoffs among noise filtering capability, delay effect and computation time have been discussed for the proposed filters. The anti-EMI-noise function of the proposed filters has been experimentally verified in a 60-kW five-level SiC inverter.
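As a rough illustration of why a median filter suits this problem, the sketch below applies a short median filter to synthetic ADC samples corrupted by isolated EMI-like spikes; the sample rate, test signal, and window length are all assumptions, not the letter's improved design.

```python
# Median filtering of spiky ADC samples: isolated EMI spikes are rejected at the
# cost of a small group delay (all signal parameters below are illustrative).
import numpy as np
from scipy.signal import medfilt

fs = 20_000                                     # assumed sample rate
t = np.arange(0, 0.02, 1 / fs)
clean = np.sin(2 * np.pi * 50 * t)              # 50 Hz quantity being sampled
samples = clean.copy()
spikes = np.random.default_rng(0).choice(samples.size, 20, replace=False)
samples[spikes] += np.random.default_rng(1).uniform(-2, 2, 20)   # EMI-like spikes

filtered = medfilt(samples, kernel_size=5)      # 5-tap median filter
print(float(np.max(np.abs(filtered - clean))))  # residual error after filtering
```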

Journal ArticleDOI
TL;DR: This work explains the various kinds of noise present in ultrasound medical images and the filters used for noise removal, and compares the performance of the different filters based on their PSNR, MSE, and RMSE values.

Journal ArticleDOI
TL;DR: A new proposal based on an ensemble of noise filters is advanced, with the goal of not only accurately filtering the mislabeled instances but also correcting them when possible; it is able to deliver a quality training instance set that overcomes the limitations of existing techniques, both in terms of classification accuracy and properly treated instances.
Abstract: Obtaining data in the real world is subject to imperfections and the appearance of noise is a common consequence of such flaws. In classification, class noise will deteriorate the performance of a classifier, as it may severely mislead the model building. Among the strategies emerged to deal with class noise, the most popular is that of filtering. However, instance filtering can be harmful as it may eliminate more examples than necessary or produce loss of information. An ideal option would be relabeling the noisy instances, avoiding losing data, but instance correcting is harder to achieve and may lead to wrong information being introduced in the dataset. For this reason, we advance a new proposal based on an ensemble of noise filters with the goal not only of accurately filtering the mislabeled instances, but also correcting them when possible. A noise score is also applied to support the filtering and relabeling process. The proposal, named CNC-NOS (Class Noise Cleaner with Noise Scoring), is compared against state-of-the-art noise filters and correctors, showing that it is able to deliver a quality training instance set that overcomes the limitations of such techniques, both in terms of classification accuracy and properly treated instances.
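A rough sketch of the generic filter-or-relabel idea (not CNC-NOS itself, which also applies a noise score) is shown below: several classifiers vote through out-of-fold predictions, instances whose given label every voter contradicts are relabeled when the voters agree on an alternative, and removed otherwise.

```python
# Filter-or-relabel sketch with an assumed ensemble of three sklearn classifiers.
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

def clean_labels(X, y):
    """X: 2-D feature array, y: 1-D integer label array."""
    voters = [DecisionTreeClassifier(random_state=0), GaussianNB(), KNeighborsClassifier()]
    preds = np.array([cross_val_predict(m, X, y, cv=5) for m in voters])
    disagree = (preds != y).all(axis=0)                  # every voter contradicts the label
    consensus = (preds[0] == preds[1]) & (preds[1] == preds[2])
    relabel = disagree & consensus                       # correctable: voters agree on a class
    remove = disagree & ~consensus                       # uncorrectable: filter out
    y_clean = y.copy()
    y_clean[relabel] = preds[0][relabel]
    keep = ~remove
    return X[keep], y_clean[keep]
```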

Journal ArticleDOI
TL;DR: A fast adaptive and selective mean filter is presented to remove salt and pepper noise effectively from images corrupted with higher noise densities, achieving better results than many existing state-of-the-art algorithms at all noise densities.
Abstract: A fast adaptive and selective mean filter is presented to remove salt and pepper noise effectively from images corrupted with higher noise densities. The algorithm achieves better results in terms of visual quality and in terms of peak signal-to-noise ratio, mean absolute error, mean structural similarity index measure, image enhancement factor, and edge preservation ratio than many existing state-of-the-art algorithms at all noise densities. Adaptive filters that use variable window size produce better restoration of salt and pepper noise at higher noise densities than filters that use fixed window size, but they consume more time. This makes them practically impossible to implement in digital image acquisition devices. Hence, reducing the execution time of adaptive filters is vital. The proposed algorithm consumes around 90% less time for lower noise densities and 50% less time for higher noise densities than the adaptive weighted mean filter, one of the best available adaptive filters in the literature for high-density salt and pepper noise removal.
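The adaptive, selective behaviour can be sketched as follows: for each pixel assumed noisy (value 0 or 255) the window grows until it contains noise-free neighbours, whose mean replaces the pixel. The growth limit below is illustrative, and the sketch omits the speed optimizations that are the paper's main contribution.

```python
# Adaptive, selective mean filtering sketch for salt-and-pepper noise.
import numpy as np

def adaptive_selective_mean(img, max_half=7):
    out = img.astype(float).copy()
    h, w = img.shape
    for y, x in zip(*np.nonzero((img == 0) | (img == 255))):   # assumed noisy pixels
        for half in range(1, max_half + 1):                    # grow the window
            win = img[max(0, y - half):y + half + 1, max(0, x - half):x + half + 1]
            clean = win[(win != 0) & (win != 255)]              # selective: noise-free only
            if clean.size:
                out[y, x] = clean.mean()
                break
    return out.astype(img.dtype)
```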

Journal ArticleDOI
TL;DR: The proposed method enables the detection of most ship targets from X-band SAR images with a reduced number of false detections, by generating two optimal input neurons that strengthen ship targets and mitigate noise effects through image processing techniques.
Abstract: In this paper, an automatic ship detection method using the artificial neural network (ANN) and support vector machine (SVM) from X-band SAR satellite images is proposed. When using machine learning techniques, the most important points to consider are (i) defining the proper input neurons and (ii) selecting the correct training data. We focused on generating two optimal input data neurons that (i) strengthened ship targets and (ii) mitigated noise effects by image processing techniques, including median filtering, multi-looking, etc. The median filter and multi-look operations were used to reduce the background noise, and the median filter operation was also used to remove ships in an image in order to maximize the difference between the pixel values of ships and the sea. Through the root-mean-square difference calculation, most ship targets, even including small ships, were emphasized in the images. We tested the performance of the proposed method using X-band high-resolution SAR images including COSMO-SkyMed, KOMPSAT-5, and TerraSAR-X images. An intensity difference map and a texture difference map were extracted from the X-band SAR single-look complex (SLC) images, and then, the maps were used as input neurons for the ANN and SVM machine learning techniques. Finally, we created ship-probability maps through the machine learning techniques. To validate the ANN and SVM results, optimal threshold values were obtained by using the statistical approach and then used to identify ships from the ship-probability maps. Consequently, the level of recall achieved was greater than 90% in most cases. This means that the proposed method enables the detection of most ship targets from X-band SAR images with a reduced number of false detections from negative effects.
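One of the input-generation ideas described above, using a large median filter to remove ships and taking the difference against the original intensity, can be sketched as follows; the window size and normalization are assumptions, not the paper's settings.

```python
# Median-filter-based ship enhancement sketch: the large window estimates the sea
# background with ships removed, so ships appear as strong residuals.
import numpy as np
from scipy.ndimage import median_filter

def ship_difference_map(intensity, size=21):
    """intensity: 2-D SAR intensity image (linear scale); `size` is an assumption."""
    sea = median_filter(intensity.astype(float), size=size)  # ships largely removed
    diff = intensity - sea                                    # ships stand out as positive residuals
    rms = np.sqrt(np.mean(diff ** 2)) + 1e-12                 # scene-level RMS difference
    return diff / rms                                         # normalized difference map
```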

Journal ArticleDOI
TL;DR: The aim of this study was to develop an efficient filter for positioning data measured from dairy cows with a UWB-based indoor positioning system in a free stall barn, using a heuristic jump filter combined with a median filter and an extended Kalman filter.

Journal ArticleDOI
TL;DR: A new algorithm for automatic live FED using a radial basis function support vector machine (RBF-SVM) is proposed; the Haar wavelet transform is used for feature extraction and the RBF-SVM for classification, and the proposed algorithm outperforms previous algorithms.
Abstract: Facial expression detection (FED) and extraction play a key role in face recognition. In this research, we propose a new algorithm for automatic live FED using a radial basis function support vector machine (RBF-SVM): the Haar wavelet transform is used for feature extraction and the RBF-SVM for classification. Edges of the facial image are detected by a genetic algorithm and fuzzy C-means. The experiments use the CK+ and JAFFE databases for facial expression, and the FEI, LFW-a, CMU+MIT and our own databases for the face detection process. In this algorithm, the face is detected by the fdlibmex technique, whose limitations we address using contrast enhancement. In the pre-processing stage, median filtering is applied to remove noise from the image, which improves the feature extraction process. Finding an image from its components is a typical task in pattern recognition. The detection rate reaches approximately 100% for expression recognition, and the system also reports precision and recall values. Compared with previous algorithms, the proposed algorithm performs better.

Proceedings ArticleDOI
01 Oct 2018
TL;DR: A multi-label convolutional neural network approach to determine the number of speakers when using a single microphone which is more challenging than when using multiple microphones is presented.
Abstract: This paper presents a multi-label convolutional neural network approach to determine the number of speakers when using a single microphone, which is more challenging than when using multiple microphones. Spectrograms of windowed noisy speech signals for 1 talker, 2 talkers and 3+ talkers are used as inputs to a multi-label convolutional neural network. The architecture of the developed multi-label convolutional neural network is discussed and it is shown that this network with median filtering can achieve an overall accuracy of about 81% for the noisy speech dataset examined.
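The median-filtering post-processing step can be illustrated directly: frame-wise class predictions are smoothed so that isolated misclassified frames are suppressed. The window length below is an assumption.

```python
# Smoothing frame-wise speaker-count predictions with a 1-D median filter.
import numpy as np
from scipy.signal import medfilt

frame_preds = np.array([1, 1, 3, 1, 1, 2, 2, 1, 2, 2, 2, 3, 2])   # per-frame classes
smoothed = medfilt(frame_preds.astype(float), kernel_size=5).astype(int)
print(smoothed)   # the isolated 3s and the stray 1 are smoothed away
```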

Journal ArticleDOI
Abstract: Terrestrial laser scanning (TLS) provides a rapid remote sensing technique to model 3D objects but can also be used to assess the surface condition of structures. In this study, an effective image processing technique is proposed for crack detection on images extracted from the octree structure of TLS data. To efficiently utilize TLS for the surface condition assessment of large structures, a process was constructed to compress the original scanned data based on the octree structure. The point cloud data obtained by TLS was converted into voxel data, and further converted into an octree data structure, which significantly reduced the data size but minimized the loss of resolution to detect cracks on the surface. The compressed data was then used to detect cracks on the surface using a combination of image processing algorithms. The crack detection procedure involved the following main steps: (1) classification of an image into three categories (i.e., background, structural joints and sediments, and surface) using K-means clustering according to color similarity, (2) deletion of non-crack parts on the surface using improved subtraction combined with median filtering and K-means clustering results, (3) detection of major crack objects on the surface based on Otsu’s binarization method, and (4) highlighting crack objects by morphological operations. The proposed technique was validated on a spillway wall of a concrete dam structure in South Korea. The scanned data was compressed up to 50% of the original scanned data, while showing good performance in detecting cracks with various shapes.
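A hedged sketch of the main segmentation steps (K-means colour clustering, Otsu binarization, morphological clean-up) is given below; the cluster count, the assumption that the largest cluster is the wall surface, and the structuring-element and object sizes are placeholders.

```python
# Sketch of the crack-segmentation steps with assumed parameters.
import numpy as np
from sklearn.cluster import KMeans
from skimage.filters import threshold_otsu
from skimage.morphology import closing, square, remove_small_objects

def detect_cracks(rgb, gray, n_clusters=3):
    """rgb: HxWx3 array, gray: HxW float array of the same scene."""
    # 1) cluster pixels by colour (background / joints and sediments / surface)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(
        rgb.reshape(-1, 3)).reshape(gray.shape)
    surface = labels == np.bincount(labels.ravel()).argmax()  # assume largest cluster = surface
    # 2) Otsu threshold on the surface pixels to pick out dark crack candidates
    t = threshold_otsu(gray[surface])
    cracks = surface & (gray < t)
    # 3) morphological operations to join fragments and drop small blobs
    cracks = closing(cracks, square(3))
    return remove_small_objects(cracks, min_size=50)
```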

Book ChapterDOI
TL;DR: The obtained results show that using the present method the watermark image can be extracted properly even when the watermarked image is under various attacks like rotation, motion blur, Gaussian noise, gamma correction, rescaling, cropping, Gaussian blur, contrast adjustment, histogram equalization, etc.
Abstract: A robust digital image watermarking method based on Discrete Wavelet Transform (DWT) and Singular Value Decomposition (SVD) is proposed in the present work. In this method, first, the original image of size 256 × 256 is DWT decomposed to the third level using the Haar wavelet, providing the four sub-bands LL3, LH3, HL3, and HH3. After that, SVD is applied on these sub-bands to get the diagonal matrices of singular values. The watermark image is then embedded in these singular values of the four sub-bands. The proposed algorithm is simulated using MATLAB v. 2013, and the results show that the PSNR value obtained is 84.25 for scale factors in the range 0.1–0.11. The PSNR value obtained for the current work is better compared to the previous approaches. Furthermore, the obtained results also show that using the present method the watermark image can be extracted properly even when the watermarked image is under various attacks like rotation, motion blur, Gaussian noise, gamma correction, rescaling, cropping, Gaussian blur, contrast adjustment, histogram equalization, etc.
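The embedding idea can be sketched as below: a third-level Haar DWT, SVD of each level-3 sub-band, and a perturbation of the singular values. For simplicity the sketch embeds a short 1-D watermark vector rather than a watermark image, and the scale factor is illustrative.

```python
# DWT-SVD embedding sketch: perturb the singular values of the level-3 sub-bands.
import numpy as np
import pywt

def embed(host, wm, alpha=0.05):
    """host: 256x256 float array; wm: 1-D numpy array with len(wm) <= 32."""
    coeffs = pywt.wavedec2(host.astype(float), 'haar', level=3)
    bands = [coeffs[0]] + list(coeffs[1])            # LL3, LH3, HL3, HH3
    marked = []
    for band in bands:
        u, s, vt = np.linalg.svd(band, full_matrices=False)
        s[:len(wm)] += alpha * wm                    # embed into the singular values
        marked.append(u @ np.diag(s) @ vt)
    coeffs[0], coeffs[1] = marked[0], tuple(marked[1:])
    return pywt.waverec2(coeffs, 'haar')             # watermarked image
```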

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed an effective non-uniformity correction (NUC) method to remove strip noise without loss of fine image details in uncooled long-wave infrared imaging systems.
Abstract: In uncooled long-wave infrared (LWIR) imaging systems, non-uniformity of the amplifier in the readout circuit will generate significant noise in captured infrared images. This type of noise, if not eliminated, may manifest as vertical and horizontal strips in the raw image, and human observers are particularly sensitive to these types of image artifacts. In this paper we propose an effective non-uniformity correction (NUC) method to remove strip noise without loss of fine image details. This multi-scale destriping method consists of two consecutive steps. Firstly, wavelet-based image decomposition is applied to separate the original input image into three individual scale levels: large, medium and small scales. In each scale level, the extracted vertical image component contains strip noise and vertically oriented image textures. Secondly, a novel multi-scale 1D guided filter is proposed to further separate strip noise from image textures in each individual scale level. More specifically, in the small scale level, we choose a small filtering window for the guided filter to eliminate strip noise. On the contrary, a large filtering window is used to better preserve image details from blurring in the large scale level. Our proposed algorithm is systematically evaluated using real-captured infrared images, and the quantitative comparison results with the state-of-the-art destriping algorithms demonstrate that our proposed method can better remove the strip noise without blurring fine image details.
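The 1-D guided filter that the destriping step builds on can be sketched in its self-guided, box-filter form; the radius and regularization eps below are generic choices, and the surrounding multi-scale wavelet decomposition is omitted.

```python
# Self-guided 1-D guided filter (edge-preserving smoothing along one axis).
import numpy as np
from scipy.ndimage import uniform_filter1d

def guided_filter_1d(p, radius, eps, axis=1):
    """p: 2-D float array; filtering runs along `axis` (columns by default)."""
    size = 2 * radius + 1
    mean_p = uniform_filter1d(p, size, axis=axis)          # local mean
    mean_pp = uniform_filter1d(p * p, size, axis=axis)
    var_p = mean_pp - mean_p ** 2                          # local variance
    a = var_p / (var_p + eps)                              # per-sample linear coefficients
    b = (1 - a) * mean_p
    a = uniform_filter1d(a, size, axis=axis)               # average coefficients over windows
    b = uniform_filter1d(b, size, axis=axis)
    return a * p + b

# A small radius smooths aggressively (stripe removal); a larger radius with the
# same eps preserves more detail, matching the per-scale windows described above.
```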