
Showing papers on "Median filter" published in 2019


Posted Content
TL;DR: A general framework for denoising high-dimensional measurements is proposed which requires no prior on the signal, no estimate of the noise, and no clean training data, and which allows $\mathcal{J}$-invariant versions of any parameterised denoising algorithm to be calibrated, from the single hyperparameter of a median filter to the millions of weights of a deep neural network.
Abstract: We propose a general framework for denoising high-dimensional measurements which requires no prior on the signal, no estimate of the noise, and no clean training data. The only assumption is that the noise exhibits statistical independence across different dimensions of the measurement, while the true signal exhibits some correlation. For a broad class of functions ("$\mathcal{J}$-invariant"), it is then possible to estimate the performance of a denoiser from noisy data alone. This allows us to calibrate $\mathcal{J}$-invariant versions of any parameterised denoising algorithm, from the single hyperparameter of a median filter to the millions of weights of a deep neural network. We demonstrate this on natural image and microscopy data, where we exploit noise independence between pixels, and on single-cell gene expression data, where we exploit independence between detections of individual molecules. This framework generalizes recent work on training neural nets from noisy images and on cross-validation for matrix factorization.
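
To make the calibration idea concrete, here is a minimal sketch (not the authors' code) of a $\mathcal{J}$-invariant median filter in Python: a "donut" median whose footprint excludes the center pixel is $\mathcal{J}$-invariant, so its mean squared error against the noisy image itself can be used to select the window size from noisy data alone.

    import numpy as np
    from scipy.ndimage import median_filter

    def donut_median(noisy, size):
        # Footprint excludes the center pixel, making the filter J-invariant.
        footprint = np.ones((size, size), dtype=bool)
        footprint[size // 2, size // 2] = False
        return median_filter(noisy, footprint=footprint)

    def self_supervised_loss(noisy, size):
        # MSE against the noisy input estimates the true denoising loss
        # up to a constant (the variance of the noise).
        return np.mean((donut_median(noisy, size) - noisy) ** 2)

    noisy = np.random.rand(128, 128)  # stand-in for a real noisy image
    best = min(range(3, 12, 2), key=lambda s: self_supervised_loss(noisy, s))
    denoised = donut_median(noisy, best)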

267 citations


Proceedings Article
24 May 2019
TL;DR: In this article, the authors propose a general framework for denoising high-dimensional measurements which requires no prior on the signal, no estimate of the noise, and no clean training data.

158 citations


Journal ArticleDOI
TL;DR: A method for rapid detection of rice disease based on FCM-KM and Faster R-CNN fusion is proposed to address various problems with rice disease images, such as noise, blurred image edges, large background interference, and low detection accuracy.
Abstract: In this paper, a method for rapid detection of rice disease based on FCM-KM and Faster R-CNN fusion is proposed to address various problems with rice disease images, such as noise, blurred image edges, large background interference, and low detection accuracy. First, the method uses a two-dimensional filtering mask combined with a weighted multilevel median filter (2DFM-AMMF) for noise reduction, and a faster two-dimensional Otsu threshold segmentation algorithm (Faster 2D-Otsu) to reduce the interference of the complex background with the detection of the target blade in the image. Then the dynamic-population firefly algorithm, based on chaos theory together with the maximum-minimum distance algorithm, is applied to optimize the K-Means clustering algorithm (FCM-KM), determining the optimal number of clusters k while addressing the algorithm's tendency to fall into local optima. FCM-KM analysis is then combined with the Faster R-CNN algorithm for the identification of rice diseases, determining the different sizes of the Faster R-CNN target frames. As revealed by the application results on 3010 images, the accuracy and time required for detection of rice blast, bacterial blight, and blight were 96.71%/0.65 s, 97.53%/0.82 s, and 98.26%/0.53 s, respectively, indicating clearly that the method detects rice diseases more effectively and improves the identification accuracy of the Faster R-CNN algorithm, while reducing the time required for identification.

137 citations


Journal ArticleDOI
TL;DR: A novel focus region detection method is presented, which uses a guided filter to refine the rough focus maps obtained by a mean filter and a difference operator; the initial decision map is then optimized, again using the guided filter, to generate the final decision map.
Abstract: Being an efficient method of information fusion, multi-focus image fusion has attracted increasing interest in image processing and computer vision. This paper proposes a multi-focus image fusion method based on focus region detection using a mean filter and a guided filter. Firstly, a novel focus region detection method is presented, which uses a guided filter to refine the rough focus maps obtained by a mean filter and a difference operator. Then, an initial decision map is obtained via the pixel-wise maximum rule and optimized, by using the guided filter again, to generate the final decision map. Finally, the fused image is obtained by the pixel-wise weighted-averaging rule with the final decision map. Experimental results demonstrate that the novel focus region detection method is more robust to different types of noise and more computationally efficient than other focus measures. Furthermore, the proposed fusion method runs efficiently and outperforms some state-of-the-art approaches in both visual effect and objective evaluation.
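
As an illustration of the mechanics described above, the following sketch implements the pipeline for two grayscale source images using box filters; the guided filter here is the standard He et al. formulation, and all parameter values are illustrative rather than the authors' settings.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def guided_filter(guide, src, r=7, eps=1e-3):
        # Classic guided filter built entirely from box (mean) filters.
        w = 2 * r + 1
        m_i, m_p = uniform_filter(guide, w), uniform_filter(src, w)
        cov = uniform_filter(guide * src, w) - m_i * m_p
        var = uniform_filter(guide * guide, w) - m_i ** 2
        a = cov / (var + eps)
        b = m_p - a * m_i
        return uniform_filter(a, w) * guide + uniform_filter(b, w)

    def fuse(img_a, img_b, r=7):
        # Rough focus maps: the difference between an image and its
        # mean-filtered version responds strongly in focused regions.
        fm_a = guided_filter(img_a, np.abs(img_a - uniform_filter(img_a, 2 * r + 1)))
        fm_b = guided_filter(img_b, np.abs(img_b - uniform_filter(img_b, 2 * r + 1)))
        decision = (fm_a >= fm_b).astype(float)       # pixel-wise maximum rule
        weight = np.clip(guided_filter(img_a, decision), 0, 1)  # refine again
        return weight * img_a + (1 - weight) * img_b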

109 citations


Journal ArticleDOI
TL;DR: An Iterative Mean Filter (IMF) is proposed to eliminate salt-and-pepper noise using the mean of the gray values of noise-free pixels in a fixed-size window; it outperforms other state-of-the-art methods.
Abstract: We propose an Iterative Mean Filter (IMF) to eliminate salt-and-pepper noise. IMF uses the mean of the gray values of noise-free pixels in a fixed-size window. Unlike other nonlinear filters, IMF does not enlarge the window size, since a large window reduces the accuracy of noise removal; IMF only uses a window of size $3\times3$. This helps IMF evaluate a new gray value for the center pixel more precisely. To process high-density noise effectively, we propose an iterative procedure for IMF. In the experiments, we operationalize Peak Signal-to-Noise Ratio (PSNR), Visual Information Fidelity, Image Enhancement Factor, Structural Similarity (SSIM), and Multiscale Structural Similarity to assess image quality. Furthermore, we compare the denoising results of IMF with those of other state-of-the-art methods, and a comprehensive comparison of execution time is also provided. The quantitative results by PSNR and SSIM showed that IMF outperforms methods such as the Based-on Pixel Density Filter (BPDF), Decision-Based Algorithm (DBA), Modified Decision-Based Untrimmed Median Filter (MDBUTMF), Noise Adaptive Fuzzy Switching Median Filter (NAFSMF), Adaptive Weighted Mean Filter (AWMF), Different Applied Median Filter (DAMF), and Adaptive Type-2 Fuzzy Filter (FDS): for the IMAGESTEST dataset - BPDF (25.36/0.756), DBA (28.72/0.8426), MDBUTMF (25.93/0.8426), NAFSMF (29.32/0.8735), AWMF (32.25/0.9177), DAMF (31.65/0.9154), FDS (27.98/0.8338), and IMF (33.67/0.9252); and for the BSDS dataset - BPDF (24.95/0.7469), DBA (26.84/0.8061), MDBUTMF (26.25/0.7732), NAFSMF (27.26/0.8191), AWMF (28.89/0.8672), DAMF (29.11/0.8667), FDS (26.85/0.8095), and IMF (30.04/0.8753).
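
The core update is simple enough to sketch in a few lines; the version below assumes the usual convention that salt-and-pepper pixels take the extreme values 0 and 255 (the paper's exact noise-pixel test may differ), and it is written for clarity rather than speed.

    import numpy as np

    def iterative_mean_filter(img, max_iter=20):
        out = img.astype(float)
        noisy = (img == 0) | (img == 255)        # assumed noise mask
        for _ in range(max_iter):
            if not noisy.any():
                break
            padded = np.pad(out, 1, mode='reflect')
            ok_pad = np.pad(~noisy, 1, mode='constant', constant_values=False)
            for i, j in zip(*np.nonzero(noisy)):
                win = padded[i:i + 3, j:j + 3]   # fixed 3x3 window
                ok = ok_pad[i:i + 3, j:j + 3]    # noise-free neighbours only
                if ok.any():
                    out[i, j] = win[ok].mean()   # mean of noise-free pixels
                    noisy[i, j] = False          # pixel is restored
        return out.astype(np.uint8)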

77 citations


Journal ArticleDOI
TL;DR: An improved segmentation approach based on the watershed algorithm, neutrosophic sets (NS), and the fast fuzzy c-means clustering algorithm (FFCM) is proposed for CT liver tumor segmentation; results show that the employed neutrosophic-sets approach is accurate, less time consuming, less sensitive to noise, and performs better on non-uniform CT images.

75 citations


Journal ArticleDOI
TL;DR: An expedient image segmentation algorithm for medical images is explored to curtail the physicians' effort in interpreting computed tomography (CT) scan images, and it was shown that the adaptive median filter is the most suitable for medical CT images.
Abstract: The objective of this paper is to explore an expedient image segmentation algorithm for medical images to curtail the physicians' effort in interpreting computed tomography (CT) scan images. Modern medical imaging modalities generate large images that are extremely difficult to analyze manually. The performance of segmentation algorithms depends on their exactitude and convergence time, so there is a compelling necessity to explore and implement new evolutionary algorithms to solve the problems associated with medical image segmentation. Lung cancer is the most frequently diagnosed cancer among men across the world, and its early detection points towards apposite treatment that can save human lives. CT is one of the simplest medical imaging methods for diagnosing lung cancer. In the present study, the performance of five algorithms, namely, k-means clustering, k-median clustering, particle swarm optimization, inertia-weighted particle swarm optimization, and guaranteed-convergence particle swarm optimization (GCPSO), was implemented and analyzed for extracting the tumor from the lung image. The performance of median, adaptive median, and average filters in the preprocessing stage was compared, and it was shown that the adaptive median filter is the most suitable for medical CT images. Furthermore, the image contrast is enhanced by using adaptive histogram equalization. The preprocessed image with improved quality is then subjected to these algorithms. The practical results were verified for 20 sample lung images using MATLAB, and it was observed that GCPSO has the highest accuracy, 95.89%.
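
For reference, the adaptive median filter singled out by the study is the classic textbook algorithm: grow the window until the median is not an impulse, then replace the pixel only if it is itself an impulse. A minimal sketch (not the authors' code):

    import numpy as np

    def adaptive_median(img, s_max=7):
        out = img.astype(float)
        pad = s_max // 2
        padded = np.pad(img, pad, mode='reflect')
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                for s in range(3, s_max + 1, 2):     # grow window: 3, 5, 7, ...
                    r = s // 2
                    win = padded[i + pad - r:i + pad + r + 1,
                                 j + pad - r:j + pad + r + 1]
                    z_min, z_med, z_max = win.min(), np.median(win), win.max()
                    if z_min < z_med < z_max:        # median is not an impulse
                        if not (z_min < img[i, j] < z_max):
                            out[i, j] = z_med        # replace impulse pixel
                        break                        # otherwise keep the pixel
                else:
                    out[i, j] = z_med                # window hit s_max
        return out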

62 citations


Journal ArticleDOI
TL;DR: Experimental results prove the superiority of the proposed technique over existing state-of-the-art methods in terms of both subjective and objective evaluation.

55 citations


Journal ArticleDOI
TL;DR: Synthetic and real data examples show that the structure-oriented space-varying median filter can significantly improve signal preservation for curved events in seismic data.
Abstract: In seismic data processing, the median filter is usually applied along the structural direction of seismic data in order to attenuate erratic or spike-like noise. The performance of a structure-oriented median filter depends strongly on the accuracy of the local slope estimated from the noisy data. When the local slope contains significant error, which is usually the case for noisy data, the structure-oriented median filter will still cause severe damage to useful energy. We propose a type of structure-oriented median filter that can effectively attenuate spike-like noise even when the local slope is not accurately estimated, which we call the structure-oriented space-varying median filter. It can adaptively squeeze and stretch the window length of the median filter when applied in the locally flattened dimension of the input seismic data, in order to deal with the dipping events caused by inaccurate slope estimation. We show the key differences among different types of median filters in detail and demonstrate the principle of the structure-oriented space-varying median filter method. We apply the method to remove the spike-like blending noise arising from simultaneous-source acquisition. Synthetic and real data examples show that the structure-oriented space-varying median filter can significantly improve signal preservation for curved events in the seismic data. It can also be easily embedded into an iterative deblending procedure based on the shaping regularization framework and can help obtain much improved deblending performance.
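
A greatly simplified sketch of the structure-oriented idea is shown below: each trace's neighbours are shifted along a (possibly inaccurate) local slope before a median is taken across them. The space-varying window adaptation that is the paper's actual contribution is reduced here to a fixed window, so this is only a starting point, not the proposed method.

    import numpy as np

    def structure_oriented_median(data, slope, half=2):
        # data: (n_t, n_x) seismic section; slope[x]: dip in samples per trace.
        n_t, n_x = data.shape
        out = np.empty_like(data)
        for x in range(n_x):
            stack = []
            for k in range(-half, half + 1):
                xx = np.clip(x + k, 0, n_x - 1)
                shift = int(round(k * slope[x]))   # flatten along the structure
                stack.append(np.roll(data[:, xx], -shift))  # wraps at edges
            out[:, x] = np.median(np.stack(stack), axis=0)
        return out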

50 citations


Journal ArticleDOI
TL;DR: Results show that the Adaptive Riesz Mean Filter outperforms several state-of-the-art filters, and the need for further research is discussed.
Abstract: In this study, we propose a new method, the Adaptive Riesz Mean Filter (ARmF), which operationalizes pixel similarity for salt-and-pepper noise (SPN) removal. We then compare the results of ARmF, A New Adaptive Weighted Mean Filter (AWMF), Different Applied Median Filter (DAMF), Noise Adaptive Fuzzy Switching Median Filter (NAFSMF), Based on Pixel Density Filter (BPDF), Modified Decision-Based Unsymmetric Trimmed Median Filter (MDBUTMF), and Decision-Based Algorithm (DBA), using Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM), Image Enhancement Factor (IEF), and Visual Information Fidelity (VIF), for 20 traditional test images (Lena, Cameraman, Barbara, Baboon, Peppers, Living Room, Lake, Plane, Hill, Pirate, Boat, House, Bridge, Elaine, Flintstones, Flower, Parrot, Dark-Haired Woman, Blonde Woman, and Einstein), 40 test images in the TESTIMAGES Database, and 200 RGB test images from the UC Berkeley Dataset, with noise densities ranging from 10% to 90%. Moreover, we compare the running times of these algorithms. The results show that ARmF outperforms the methods mentioned above. We finally discuss the need for further research.
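
A sketch of the kind of evaluation loop used in such comparisons, built on scikit-image's metrics; the filter implementations themselves are not reproduced here, so the dictionary entries are placeholders.

    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity
    from skimage.util import random_noise

    def evaluate(filters, clean, densities=(0.1, 0.5, 0.9)):
        # filters: dict mapping a name to a denoising function img -> img
        for d in densities:
            noisy = (random_noise(clean, mode='s&p', amount=d) * 255).astype(np.uint8)
            for name, f in filters.items():
                restored = f(noisy)
                psnr = peak_signal_noise_ratio(clean, restored)
                ssim = structural_similarity(clean, restored)
                print(f'{name} @ {d:.0%} noise: PSNR={psnr:.2f}, SSIM={ssim:.4f}')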

45 citations


Journal ArticleDOI
TL;DR: An effective hybrid clustering algorithm combined with morphological operations is proposed for segmenting brain tumors in this paper, and the proposed algorithm performs better in terms of accuracy, sensitivity, specificity, and recall than other current segmentation algorithms.
Abstract: Inference of tumor and edema areas from brain magnetic resonance imaging (MRI) data remains challenging owing to the complex structure of brain tumors, blurred boundaries, and external factors such as noise. To alleviate noise sensitivity and improve the stability of segmentation, an effective hybrid clustering algorithm combined with morphological operations is proposed for segmenting brain tumors in this paper. The main contributions of the paper are as follows. First, adaptive Wiener filtering is utilized for denoising, and morphological operations are used for removing non-brain tissue, effectively reducing the method's sensitivity to noise. Second, K-means++ clustering is combined with the Gaussian kernel-based fuzzy C-means algorithm to segment images; this clustering not only improves the algorithm's stability but also reduces its sensitivity to the clustering parameters. Finally, the extracted tumor images are postprocessed using morphological operations and median filtering to obtain accurate representations of brain tumors. In addition, the proposed algorithm was compared with other current segmentation algorithms. The results show that the proposed algorithm performs better in terms of accuracy, sensitivity, specificity, and recall.

Journal ArticleDOI
TL;DR: An approach based on mathematical morphological filtering and K-means clustering is presented for SAR image change detection; a comparison with other algorithms shows that the proposed algorithm can decrease the detection time and improve the detection result.
Abstract: Synthetic aperture radar (SAR) images have been applied in disaster monitoring and environmental monitoring. With the objective of reducing the effect of noise on SAR image change detection, this paper presents an approach based on mathematical morphological filtering and K-means clustering. First, the multiplicative noise in the two SAR images is transformed into additive noise by a logarithmic transformation. Second, the two multitemporal SAR images are denoised by morphological filtering. Third, the mean-ratio operator and the subtraction operator are used to obtain two difference images, and median filtering is applied to a simple combination of the two. Since an accurate statistical model for the difference image cannot be easily established, the results of change detection are clustered using the K-means algorithm. A comparison of the experimental approach with other algorithms shows that the proposed algorithm can decrease the detection time and improve the detection result.
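
A compact sketch of this pipeline for two co-registered grayscale SAR images is given below; structuring-element sizes, the combination rule, and the clustering settings are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import grey_opening, grey_closing, median_filter, uniform_filter
    from scipy.cluster.vq import kmeans2

    def sar_change_map(img1, img2, eps=1e-6):
        # 1) Log transform: multiplicative speckle becomes additive noise.
        l1, l2 = np.log(img1 + eps), np.log(img2 + eps)
        # 2) Morphological filtering (opening then closing) as denoising.
        l1 = grey_closing(grey_opening(l1, size=3), size=3)
        l2 = grey_closing(grey_opening(l2, size=3), size=3)
        # 3) Two difference images: subtraction and mean-ratio operators.
        d_sub = np.abs(l1 - l2)
        mu1, mu2 = uniform_filter(np.exp(l1), 3), uniform_filter(np.exp(l2), 3)
        d_rat = 1.0 - np.minimum(mu1, mu2) / (np.maximum(mu1, mu2) + eps)
        # Simple combination of the two, followed by median filtering.
        d = median_filter(d_sub / (d_sub.max() + eps) + d_rat, size=3)
        # 4) K-means with two clusters: changed vs. unchanged pixels.
        _, labels = kmeans2(d.reshape(-1, 1), 2, minit='++', seed=0)
        return labels.reshape(d.shape)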

Journal ArticleDOI
TL;DR: An all-weather, real-time and automatic flow measurement system using single near infrared (NIR)-imaging video camera is developed, which successfully overcomes the limitation of water line detection with current visible light (VIS) systems in clear water and low velocity conditions.

Journal ArticleDOI
TL;DR: Experiments show the superiority of the proposed NR IQA algorithm over existing state-of-the-art full-, reduced-, and no-reference IQA methods, in terms of both prediction accuracy and computational complexity.
Abstract: With multitudes of image processing applications, image quality assessment (IQA) has become a prerequisite for obtaining maximally distinctive statistics from images. Despite the widespread research in this domain over several years, existing IQA algorithms have a number of key limitations concerning different image distortion types and the algorithms' computational efficiency. Images that are synthesized using depth image-based rendering have applications in various disciplines, such as free-viewpoint video, which enables synthesis of novel realistic images in a referenceless environment. In the literature, very few no-reference (NR) quality assessment metrics for three-dimensional (3-D) synthesized images have been proposed, and most of them are computationally expensive, which makes it difficult for them to be deployed in real-time applications. In this paper, we treat the geometrically distorted pixels as outliers in 3-D synthesized images. This assumption is validated using the three-$\sigma$ rule-based robust outlyingness ratio. We propose a novel fast and accurate blind IQA metric for 3-D synthesized images using nonlinear median filtering, since median filtering has the capability of identifying and removing outliers. The advantages of the proposed algorithm are twofold. First, it uses a simple technique, i.e., median filtering, to capture the level of geometric and structural distortions (to some extent). Second, the proposed algorithm has higher computational efficiency. Experiments show the superiority of the proposed NR IQA algorithm over existing state-of-the-art full-, reduced-, and no-reference IQA methods, in terms of both prediction accuracy and computational complexity.
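
The central intuition is easy to demonstrate: pixels that deviate strongly from their median-filtered surroundings are flagged as geometric-distortion outliers, and the image is scored by their prevalence. The sketch below uses a MAD-based three-sigma rule; the actual metric's pooling and validation are more involved.

    import numpy as np
    from scipy.ndimage import median_filter

    def outlier_score(img, size=5):
        x = img.astype(float)
        residual = x - median_filter(x, size=size)   # median removes outliers
        # Robust three-sigma rule via the median absolute deviation (MAD).
        mad = np.median(np.abs(residual - np.median(residual)))
        sigma = 1.4826 * mad                         # MAD-to-std for Gaussians
        return (np.abs(residual) > 3.0 * sigma).mean()  # outlier ratio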

Journal ArticleDOI
03 May 2019-Symmetry
TL;DR: In this paper, a method is proposed for calculating the dynamic background region in a video and removing the false positives that occur due to the dynamic background and frame drops at slow speeds.
Abstract: In this paper, we propose a method for calculating the dynamic background region in a video and removing the false positives that occur due to the dynamic background and frame drops at slow speeds. This calls for an efficient algorithm whose performance is robust, including in processing speed. The foreground is separated from the background by comparing the similarities between false positives and the foreground. To improve the processing speed, the median filter was optimized for binary images. The proposed method was evaluated on the CDnet 2012/2014 dataset and achieved a precision of 76.68%, an FPR of 0.90%, an FNR of 18.02%, and an F-measure of 75.35%. The average ranking across categories is 14.36, which is superior to that of the background subtraction methods. The proposed method runs at 45 fps (CPU) and 150 fps (GPU) at 320 × 240 resolution. We therefore expect that the proposed method can be applied to currently commercialized CCTV without any hardware upgrades.
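
The binary-image median speed-up mentioned above rests on a standard observation: on a 0/1 image, the median of a window is simply the majority vote, which a single box filter computes without any sorting. A minimal sketch, assuming foreground pixels equal 1:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def binary_median(mask, size=5):
        # Window mean > 0.5 exactly when more than half of its pixels are 1.
        return uniform_filter(mask.astype(float), size=size) > 0.5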

Journal ArticleDOI
TL;DR: A simple unsupervised method based on a Gabor wavelet and a Multiscale Line Detector is proposed for retinal vessel segmentation, with results comparable to state-of-the-art methods, albeit with a simpler approach.
Abstract: Eye and systemic diseases are known to manifest themselves in the retinal vasculature, and segmentation of retinal vessels is one of the important steps in retinal image analysis. A simple unsupervised method based on a Gabor wavelet and a Multiscale Line Detector is proposed for retinal vessel segmentation. Vessels are enhanced by linear superposition of the first-scale Gabor wavelet image and the complemented green channel. The Multiscale Line Detector is used to segment the blood vessels. Finally, a simple post-processing scheme based on median filtering is deployed to remove false positives. The proposed scheme was evaluated on the publicly available DRIVE, STARE, and HRF datasets, obtaining accuracies of 0.9470, 0.9472, and 0.9559, and sensitivities of 0.7421, 0.8004, and 0.7207, respectively. These results are comparable to the state-of-the-art methods, albeit with a simpler approach.

Journal ArticleDOI
Wang Xiao, You Zhou, Minglei Shu, Yinglong Wang, Anming Dong
TL;DR: A convex optimization method is presented, which combines linear time-invariant filtering with sparsity for the BW correction and denoising of ECG signals; simulations show its advantages over wavelet and median filters.
Abstract: Reducing the influence of both baseline wander (BW) and noise in the electrocardiogram (ECG) is very important for further analysis and diagnosis of heart disease. This paper presents a convex optimization method that combines linear time-invariant filtering with sparsity for the BW correction and denoising of ECG signals. The BW signals are modeled as low-pass signals, while the ECG signals are modeled as a sequence of sparse signals with sparse derivatives. To reflect the positive polarity of the ECG peaks, an asymmetric penalty function and a symmetric penalty function are applied to the original ECG signals and their difference signals, respectively. A banded matrix is used to represent the optimization problem, making the iterative optimization method more computationally efficient, requiring less memory, and allowing application to longer data sequences. Moreover, an iterative majorization-minimization algorithm is employed to guarantee the convergence of the proposed method regardless of its initialization. The proposed method is evaluated on ECG signals from the MIT-BIH Arrhythmia database. The simulation results show the advantages of the proposed method compared with wavelet and median filters.
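
To give a flavour of the formulation, the sketch below writes down one plausible form of such a cost: a least-squares data term, an asymmetric penalty on the signal favouring positive peaks, and a smoothed symmetric penalty on its first difference, with the difference operator stored as a sparse banded matrix. Only the cost is shown; the majorization-minimization solver and the low-pass BW model are omitted, and all parameter names are illustrative.

    import numpy as np
    from scipy.sparse import diags

    def cost(x, y, lam0=0.1, lam1=1.0, r=5.0, eps=1e-8):
        # Sparse banded first-difference matrix: (D @ x)[i] = x[i+1] - x[i].
        n = len(y)
        D = diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1], shape=(n - 1, n))
        # Asymmetric penalty: negative excursions cost r times more,
        # reflecting the positive polarity of the ECG peaks.
        asym = np.where(x >= 0, x, -r * x)
        sym = np.sqrt((D @ x) ** 2 + eps)   # smooth surrogate for |D x|
        return 0.5 * np.sum((y - x) ** 2) + lam0 * asym.sum() + lam1 * sym.sum()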

Journal ArticleDOI
TL;DR: A novel method for horizon detection that combines a multi-scale approach and a convolutional neural network and is the only one capable of detecting the horizon at high speed with high accuracy, which is attractive for practical applications.
Abstract: This paper proposes a novel method for horizon detection that combines a multi-scale approach and a convolutional neural network (CNN). The ability to detect the horizon is the first step toward situational awareness of autonomous ships, which have recently attracted interest, and greatly affects the performance of subsequent steps and that of the overall system. Since typical approaches for horizon detection mainly use edge information, two challenging issues need to be overcome: non-stability of edge detection and complex maritime scenes. The proposed method first detects line features by combining edge information from the various scales, reducing the computational time while mitigating the non-stability of edge detection. Subsequently, the CNN is used to verify the edge pixels belonging to the horizon in order to process complex maritime scenes that contain line features similar to the horizon and changes in the sea state. Finally, linear curve fitting along with median filtering is iteratively used to estimate the horizon line accurately. We compared the performance of the proposed method with state-of-the-art methods using the largest publicly available database. The experimental results showed that the accuracy with which the proposed method can identify the horizon is superior to that of state-of-the-art methods. Our method has a median positional error of less than 1.7 pixels from the center of the horizon and a median angular error of approximately $0.1^{\circ}$. Further, our results showed that our method is the only one capable of detecting the horizon at high speed with high accuracy, which is attractive for practical applications.

Journal ArticleDOI
TL;DR: This paper proposes a single image haze removal algorithm that shows a marked improvement on the color attenuation prior-based method, and shows its superior performance to other state-of-the-art methods in terms of both subjective visual quality and quantitative metrics.
Abstract: This paper proposes a single-image haze removal algorithm that shows a marked improvement over the color attenuation prior-based method. Through a vast number of experiments on a wide variety of images, it is discovered that there are problems in the color attenuation prior, such as color distortion and background noise, which arise because the prior does not hold true in all circumstances. Successful resolution of these problems using the proposed algorithm shows its superior performance to other state-of-the-art methods in terms of both subjective visual quality and quantitative metrics, on both synthetic and natural hazy image datasets. The proposed algorithm is also computationally friendly, owing to the use of an efficient quad-decomposition algorithm for atmospheric light estimation and a simple modified hybrid median filter for depth map refinement.
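
For reference, a plain hybrid median filter of the kind mentioned for depth-map refinement takes the median of three values: the median over the "+"-shaped (cross) neighbours, the median over the "x"-shaped (diagonal) neighbours, and the centre pixel. A minimal sketch; the paper's specific modification is not reproduced.

    import numpy as np
    from scipy.ndimage import median_filter

    def hybrid_median(img, size=5):
        cross = np.zeros((size, size), dtype=bool)
        cross[size // 2, :] = True               # horizontal arm
        cross[:, size // 2] = True               # vertical arm
        diag = np.eye(size, dtype=bool) | np.fliplr(np.eye(size, dtype=bool))
        m_cross = median_filter(img, footprint=cross)
        m_diag = median_filter(img, footprint=diag)
        # Median of the two directional medians and the original pixel.
        return np.median(np.stack([m_cross, m_diag, img]), axis=0)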

Journal ArticleDOI
TL;DR: A three-level hybrid model based on the median filter, empirical mode decomposition (EMD), classification and regression tree (CART), autoregression (AR) and exponential weighted moving average (EWMA) methods called MF-EMD-CART-AR-EWMA to detect outliers in sensor data is proposed.
Abstract: The intelligent environment monitoring network, as the foundation of ecosystem research, has rapidly developed with the ever-growing Internet of Things (IoT). IoT-networked sensors deployed to monitor ecosystems generate copious sensor data characterized by nonstationarity and nonlinearity, such that outlier detection remains a source of concern. Most outlier detection models involve hypothesis tests based on preset outlier threshold values, whereas signal decomposition can describe the stationary and nonstationary relationships in sensor data. Therefore, this paper proposes a three-level hybrid model based on the median filter (MF), empirical mode decomposition (EMD), classification and regression tree (CART), autoregression (AR), and exponentially weighted moving average (EWMA) methods, called MF-EMD-CART-AR-EWMA, to detect outliers in sensor data. The first-level performance is compared to that of the Butterworth filter, FIR filter, moving average filter, wavelet filter, and Wiener filter. The second-level prediction performance is compared to support vector regression (SVR), K-nearest neighbor (KNN), CART, ensemble EMD with CART and AR (EEMD-CART-AR), and complementary ensemble EMD with CART and AR (CEEMD-CART-AR) methods. Finally, EWMA is compared to Cumulative Sum (CUSUM) and Shewhart control charts. The proposed hybrid model was evaluated with a real dataset from the hydrometeorological observation network in the Heihe River Basin, yielding experimental results with better generalization ability and higher accuracy than the compared models, and providing extremely effective detection of minor outliers in predicted values. This paper provides valuable insight and a promising reference for outlier detection involving sensor data and presents a new perspective for detecting outliers.
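
The EWMA stage at the third level is a standard control chart, sketched below on the residuals between observed and predicted values; the smoothing constant and control-limit width are the usual illustrative defaults, not the paper's tuned values.

    import numpy as np

    def ewma_outliers(residuals, lam=0.2, L=3.0):
        r = np.asarray(residuals, dtype=float)
        mu, sigma = r.mean(), r.std()
        z = np.empty_like(r)
        z[0] = mu
        for t in range(1, len(r)):
            z[t] = lam * r[t] + (1 - lam) * z[t - 1]  # exponential smoothing
        # Time-varying control limits of the EWMA chart.
        k = np.arange(1, len(r) + 1)
        width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * k)))
        return np.abs(z - mu) > width                 # flagged outliers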

Posted Content
TL;DR: An automated pipeline for processing multi-view satellite images into 3D digital surface models (DSM) is presented; it performs automated geo-referencing and generates high-quality densely matched point clouds, and a novel approach is developed that fuses multiple depth maps derived by stereo matching to generate high-quality 3D maps.
Abstract: This paper presents an automated pipeline for processing multi-view satellite images into 3D digital surface models (DSM). The proposed pipeline performs automated geo-referencing and generates high-quality densely matched point clouds. In particular, a novel approach is developed that fuses multiple depth maps derived by stereo matching to generate high-quality 3D maps. By learning critical configurations of stereo pairs from sample LiDAR data, we rank the image pairs based on the proximity of their results to the sample data. Multiple depth maps derived from individual image pairs are fused with an adaptive 3D median filter that considers the image spectral similarities. We demonstrate that the proposed adaptive median filter generally delivers better results than a normal median filter, achieving an accuracy improvement of 0.36 meters RMSE in the best case. Results and analysis are presented in detail.

Journal ArticleDOI
TL;DR: Experimental results show that the proposed method outperforms other existing color image watermarking methods and can resist attacks such as JPEG compression, salt & pepper noise, median filtering, scaling, blurring, and low-pass filtering.
Abstract: In order to protect the copyright of color images, a novel robust color image watermarking method using the correlations of the RGB channels is presented. The three RGB channels of a color image are strongly correlated, these correlations are stable under various image attacks, and thus they can be mined to embed the watermark robustly. In order to preserve the RGB correlations and chrominance perception, the color image is treated as a third-order tensor, and Tucker decomposition is employed to operate on it. First, Tucker decomposition is used to generate the first feature image, which contains most of the image energy and the correlations between the three channels. Then, the first feature image is divided into non-overlapping blocks, and the singular value decomposition (SVD) is used to decompose each block and compute its left-singular matrix. Finally, the stable coefficient relationship of the left-singular matrix is modified to embed the watermark and obtain robustness. Experimental results show that the proposed method outperforms other existing color image watermarking methods and can resist attacks such as JPEG compression, salt & pepper noise, median filtering, scaling, blurring, and low-pass filtering.

Journal ArticleDOI
TL;DR: A new approach based on convolutional neural networks (CNNs) is proposed, in which distinguishable frequency-domain features are fed into a conventional CNN model to identify the template parameters of various types of spatial smoothing operations, such as average, Gaussian, and median filtering.
Abstract: The increasing prevalence of digital technology brings great convenience to human life, while also presenting problems and challenges. Relying on easy-to-use image editing tools, malicious manipulations such as image forgery already threaten the authenticity of information, especially electronic evidence in crimes. As a result, digital forensics is attracting more and more attention from researchers. Since some general post-operations, like the widely used smoothing filters, can affect the reliability of forensic methods in various ways, it is also important to detect them; furthermore, determining the detailed filtering parameters helps to recover the tampering history of an image. To deal with this problem, we propose a new approach based on convolutional neural networks (CNNs). By adding a transform layer, distinguishable frequency-domain features are obtained and fed into a conventional CNN model to identify the template parameters of various types of spatial smoothing operations, such as average, Gaussian, and median filtering. Experimental results on a composite database show that putting the images directly into the conventional CNN model without the transformation does not work well, and that our method achieves better performance than other applicable related methods, especially in the scenarios of small image size and JPEG compression.

Journal ArticleDOI
01 Apr 2019-Optik
TL;DR: Experimental results show that the proposed watermarking approach is robust to both geometric distortions and general signal processing attacks and outperforms state-of-the-art watermarking methods.

Journal ArticleDOI
TL;DR: In this article, the spectral correlation between wavelengths was used to distinguish between the levels of noise present in different image planes of the data cube, and 2-D non-subsampled shearlet transform (NSST) coefficients obtained from each image plane were used to perform spatial and spectral de-noising.
Abstract: Hyperspectral imaging (HSI) has become an essential tool for exploration of different spatially-resolved properties of materials in analytical chemistry. However, due to various technical factors such as detector sensitivity, choice of light source, and experimental conditions, the recorded data contain noise. The presence of noise in the data limits the potential of different data processing tasks such as classification and can even make them ineffective. Therefore, reduction/removal of noise from the data is a useful step to improve data modelling. In the present work, the potential of a wavelength-specific shearlet-based image noise reduction method was utilised for automatic de-noising of close-range HS images. The shearlet transform is a special type of composite wavelet transform that utilises the shearing properties of images. The method first utilises the spectral correlation between wavelengths to distinguish between the levels of noise present in different image planes of the data cube. Based on the level of noise present, the method adapts the use of the 2-D non-subsampled shearlet transform (NSST) coefficients obtained from each image plane to perform the spatial and spectral de-noising. Furthermore, the method was compared with two commonly used pixel-based spectral de-noising techniques, Savitzky-Golay (SAVGOL) smoothing and median filtering. The methods were compared using simulated data, with added Gaussian noise and with added Gaussian-plus-spike noise, and real HSI data. As an application, the methods were tested to determine the efficacy of a visible-near infrared (VNIR) HSI camera in performing non-destructive automatic classification of six commercial tea products. De-noising with the shearlet-based method resulted in a visual improvement in the quality of the noisy image planes and the spectra of the simulated and real HSI. The spectral correlation was highest with the shearlet-based method. The peak signal-to-noise ratio (PSNR) obtained using the shearlet-based method was higher than that for SAVGOL smoothing and median filtering. There was a clear improvement in the classification accuracy of the SVM models for both the simulated and real HSI data that had been de-noised using the shearlet-based method. The presented method is a promising technique for automatic de-noising of close-range HS images, especially when the amount of noise present is high and spans consecutive wavelengths.
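
The two pixel-based baselines are one-liners in SciPy, applied along the wavelength axis of the hyperspectral cube; the window and polynomial settings below are illustrative.

    import numpy as np
    from scipy.signal import savgol_filter, medfilt

    def baseline_denoise(cube, window=11, poly=3):
        # cube: (rows, cols, wavelengths) hyperspectral data
        sg = savgol_filter(cube, window_length=window, polyorder=poly, axis=-1)
        med = np.apply_along_axis(lambda s: medfilt(s, kernel_size=window),
                                  -1, cube)
        return sg, med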

Journal ArticleDOI
TL;DR: The proposed denoising method achieved better results than other methods in experiments, providing an important tool for the diagnosis of medical conditions.
Abstract: In order to overcome the image blur and edge loss that occur in the process of collecting and transmitting medical images, a denoising method for medical images based on the discrete wavelet transform (DWT) coupled with a modified median filter is proposed. The method is composed of four modules: image acquisition, image storage, image processing, and image reconstruction. Image acquisition obtains the medical image, which contains Gaussian noise and impulse noise. Image storage includes the preservation of the data and parameters of the original and processed images. In the third module, the medical image is decomposed into four sub-bands (LL, HL, LH, HH) by wavelet decomposition, where LL is the low-frequency component and LH, HL, and HH are the horizontal, vertical, and diagonal high-frequency components, respectively. An improved wavelet threshold is used to process the high-frequency coefficients while the low-frequency coefficients are retained, and modified median filtering is performed on the three high-frequency sub-bands after the wavelet threshold processing. The last module is image reconstruction, which obtains the denoised image by wavelet reconstruction. The advantage of this method is that it combines the strengths of the median filter and the wavelet transform to improve the denoising effect, rather than simply concatenating the two methods. By coupling DWT with improved median filtering of the coefficients, it is highly practical for high-precision medical images containing complex noise. The experimental results of the proposed algorithm are compared with those of the median filter, wavelet transform, contourlet, DT-CWT, etc. According to the visual evaluation indices PSNR and SNR and Canny edge detection, PSNR and SNR increase by 10%–15% on low-noise images and by 2%–6% on high-noise images. The proposed algorithm achieved better results than the other methods, providing an important tool for the diagnosis of medical conditions.
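
A minimal sketch of the coupling, using PyWavelets, is shown below: one level of 2-D DWT, thresholding plus median filtering of the three high-frequency sub-bands, then reconstruction. The paper's improved threshold rule is replaced here by the common universal threshold, so this is an approximation of the idea rather than the exact method.

    import numpy as np
    import pywt
    from scipy.ndimage import median_filter

    def dwt_median_denoise(img, wavelet='db4'):
        LL, (LH, HL, HH) = pywt.dwt2(img.astype(float), wavelet)
        sigma = np.median(np.abs(HH)) / 0.6745        # noise level from HH
        thr = sigma * np.sqrt(2 * np.log(img.size))   # universal threshold
        bands = [pywt.threshold(b, thr, mode='soft') for b in (LH, HL, HH)]
        bands = [median_filter(b, size=3) for b in bands]  # median step
        return pywt.idwt2((LL, tuple(bands)), wavelet)     # keep LL intact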

Journal ArticleDOI
TL;DR: A dedicated method for uterine-motion quantification by B-mode transvaginal ultrasound is proposed, and promising results motivate an extended validation in the context of fertilization procedures.
Abstract: Fertility problems are nowadays being paralleled by important advances in assisted reproductive technologies, yet the success rate of these technologies remains low. There is evidence that fertilization outcome is affected by uterine motion, but solutions for quantitative analysis of uterine motion are lacking. This work proposes a dedicated method for uterine-motion quantification by B-mode transvaginal ultrasound. Motion analysis is implemented by speckle tracking based on block matching after speckle-size regularization, with the sum of absolute differences as the adopted matching metric. Prior to the analysis, dedicated singular value decomposition (SVD) filtering is implemented to enhance the uterine motion over noise, clutter, and uncorrelated motion induced by neighboring organs and probe movements. SVD and block matching are first optimized in a dedicated ex vivo setup. Robustness to noise and speckle decorrelation is improved by median filtering of the tracking coordinates from surrounding blocks, and speckle tracking is further accelerated by a diamond search. The method's feasibility was tested in vivo in a longitudinal study of nine women, aimed at discriminating between four selected phases of the menstrual cycle known to show different uterine behavior. Each woman was scanned in each phase for 4 min; four sites on the uterine fundus were tracked over time to extract strain and distance signals along the longitudinal and transversal directions of the uterus. Several features were extracted from these signals; among them, median frequency and contraction frequency showed significant differences between active and quiet phases. These promising results motivate an extended validation in the context of fertilization procedures.
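
Block matching with a sum-of-absolute-differences (SAD) metric is the computational core; the exhaustive search in this sketch stands in for the faster diamond search, and the SVD clutter filtering is omitted.

    import numpy as np

    def track_block(prev, curr, y, x, block=16, search=8):
        # Find the displacement of the block at (y, x) from prev to curr.
        ref = prev[y:y + block, x:x + block].astype(float)
        best, best_dy, best_dx = np.inf, 0, 0
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                yy, xx = y + dy, x + dx
                if (yy < 0 or xx < 0 or yy + block > curr.shape[0]
                        or xx + block > curr.shape[1]):
                    continue
                cand = curr[yy:yy + block, xx:xx + block].astype(float)
                sad = np.abs(ref - cand).sum()  # sum of absolute differences
                if sad < best:
                    best, best_dy, best_dx = sad, dy, dx
        return best_dy, best_dx

The displacements returned for neighbouring blocks can then be median filtered, as described above, to suppress spurious matches.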

Journal ArticleDOI
TL;DR: A novel sub-image approach is proposed for extremely fast and highly accurate detection of duplicated forged objects in colour images; it exhibits high robustness against different attacks such as additive white Gaussian noise, JPEG compression, scaling, and rotation.
Abstract: Most existing copy-move forgery detection (CMFD) methods utilise a time-consuming overlapped block-based approach. Here, a novel sub-image approach is proposed for extremely fast and highly accurate detection of duplicated forged objects in colour images. The proposed approach consists of a few steps. The input colour images are converted into the hue-saturation-value (HSV) colour model. Then, the edges of all objects in the forged image are detected using the Sobel operator. A morphological opening operator and a median filter are used to remove unnecessary small objects. The boundaries of the duplicated objects are accurately detected, and a bounding rectangle is drawn around each detected object to form a sub-image. The features of this sub-image are extracted by using the quaternion polar complex exponential transform moments (QPCETMs) and their invariants to rotation, scaling, and translation. Finally, the duplicated regions are matched by calculating the Euclidean distances and the correlation between the feature vectors. Experiments are performed using different types of duplicated regions. The results of the proposed method are much more accurate than those of existing methods. The proposed method also exhibits high robustness against different attacks such as additive white Gaussian noise, JPEG compression, scaling, and rotation.

Journal ArticleDOI
TL;DR: A new median filtering detection scheme based on combined features of the difference image (CFDI) achieves superior performance on uncompressed image datasets and better performance than state-of-the-art methods, especially for strong JPEG compression and low-resolution images.
Abstract: Median filtering is a widely used method for denoising and smoothing regions of an image, and it has drawn much attention from researchers in image forensics. A new median filtering detection scheme based on combined features of the difference image (CFDI) is proposed in this paper. In the proposed scheme, the combined features consist of the joint conditional probability density functions (JCPDFs) of the first-order and second-order difference images (DIs); principal component analysis (PCA) is used to reduce the dimensionality of the JCPDFs, and thus the final features are obtained for a given threshold. A large number of experiments on single and compound databases show that the proposed scheme achieves superior performance on uncompressed image datasets and better performance than state-of-the-art methods, especially for strong JPEG compression and low-resolution images.
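
A minimal sketch of the feature pipeline: first- and second-order horizontal difference images, joint histograms of adjacent difference pairs (truncated to a threshold T) as estimates of the JCPDFs, then PCA over many images. The bin and threshold conventions here are assumptions.

    import numpy as np

    def cfdi_features(img, T=3):
        d1 = np.diff(img.astype(int), axis=1)       # first-order difference
        d2 = np.diff(d1, axis=1)                    # second-order difference
        edges = np.arange(-T - 0.5, T + 1.0)        # bins centred on integers
        feats = []
        for d in (d1, d2):
            a = np.clip(d[:, :-1], -T, T).ravel()   # adjacent pairs
            b = np.clip(d[:, 1:], -T, T).ravel()
            hist, _, _ = np.histogram2d(a, b, bins=[edges, edges])
            feats.append((hist / hist.sum()).ravel())  # joint PDF estimate
        return np.concatenate(feats)

    # Dimensionality reduction over a training set would then follow, e.g.
    # sklearn.decomposition.PCA(n_components=30) on the stacked feature rows.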

Journal ArticleDOI
TL;DR: This paper proposes a new robust MF forensic method based on a modified convolutional neural network (CNN) that outperforms the state-of-the-art methods in both JPEG compressed and small-sized MF image detection.
Abstract: Median filtering (MF) is frequently applied to conceal the traces of forgery and can therefore provide indirect forensic evidence of tampering when investigating composite images. The existing MF forensic methods, however, ignore how JPEG compression affects median-filtered images, resulting in heavy performance degradation when detecting filtered images stored in the JPEG format. In this paper, we propose a new robust MF forensic method based on a modified convolutional neural network (CNN). First, relying on an analysis of how JPEG compression influences median-filtered images, we effectively suppress the interference using image deblocking. Second, the fingerprints left by MF are highlighted via filtered-residual fusion. These two functions are fulfilled with a deblocking layer and a fused filtered residual (FFR) layer. Finally, the output of the FFR layer becomes the input for extracting multiple features for further classification using a tailor-made CNN. Extensive experimental results show that the proposed method outperforms the state-of-the-art methods in both JPEG-compressed and small-sized MF image detection.
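
The fingerprint that the FFR layer amplifies can be previewed outside a network: the residual between an image and its median-filtered version. A minimal sketch, stacking residuals from several kernel sizes as would-be input channels; this illustrates the idea only and is not the proposed architecture.

    import numpy as np
    from scipy.ndimage import median_filter

    def mf_residual(img, size=3):
        x = img.astype(float)
        return x - median_filter(x, size=size)   # median filtering residual

    def fused_filtered_residual(img, sizes=(3, 5)):
        # Residuals from several median kernel sizes, stacked as channels.
        return np.stack([mf_residual(img, s) for s in sizes])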