
Showing papers on "Median filter published in 2017"


Journal ArticleDOI
TL;DR: This letter describes the realization of a sub-shot-noise wide field microscope based on spatially multi-mode non-classical photon number correlations in twin beams, achieving the best sensitivity per incident photon reported in absorption microscopy.
Abstract: Recently, several proof of principle experiments have demonstrated the advantages of quantum technologies over classical schemes. The present challenge is to surpass the limits of proof of principle demonstrations to approach real applications. This letter presents such an achievement in the field of quantum enhanced imaging. In particular, we describe the realization of a sub-shot-noise wide field microscope based on spatially multi-mode non-classical photon number correlations in twin beams. The microscope produces real-time images of 8000 pixels at full resolution, for a 500 μm² field of view, with noise reduced to 80% of the shot noise level (for each pixel), which is suitable for absorption imaging of complex structures. By fast post-elaboration, specifically applying a quantum enhanced median filter, the noise can be further reduced (to <30% of the shot noise level) by setting a trade-off with the resolution, thus achieving the best sensitivity per incident photon reported in absorption microscopy.

133 citations


Journal ArticleDOI
TL;DR: An effective method using an adaptive median filter is proposed to remove unwrapping errors while preserving step-heights; both simulations and experiments demonstrate its effectiveness.
Abstract: Phase-shifting profilometry combined with Gray-code pattern projection has been widely used for 3D measurement. In this technique, a phase-shifting algorithm is used to calculate the wrapped phase, and a set of Gray-code binary patterns is used to determine the unwrapped phase. In real measurements, the captured Gray-code patterns are no longer binary, resulting in phase unwrapping errors at a large number of erroneous pixels. Although this problem has been addressed and well resolved by a few methods, it remains challenging when a measured object has step-heights and the captured patterns contain invalid pixels. To effectively remove unwrapping errors and simultaneously preserve step-heights, an effective method using an adaptive median filter is proposed in this paper. Both simulations and experiments demonstrate its effectiveness.

115 citations


Journal ArticleDOI
TL;DR: A novel approach for removing noise from multiple reflections based on an adaptive randomized-order empirical mode decomposition (EMD) framework; the EMD-based smoothing method helps preserve the flattened signals, without the need for exact flattening, and preserves the amplitude variation much better.
Abstract: We propose a novel approach for removing noise from multiple reflections based on an adaptive randomized-order empirical mode decomposition (EMD) framework. We first flatten the primary reflections in the common midpoint gather using the automatically picked normal moveout velocities that correspond to the primary reflections and then randomly permutate all the traces. Next, we remove the spatially distributed random spikes that correspond to the multiple reflections using the EMD-based smoothing approach that is implemented in the $f-x$ domain. The trace randomization approach can make the spatially coherent multiple reflections random along the space direction and can decrease the coherency of near-offset multiple reflections. The EMD-based smoothing method is superior to the median filter and the prediction error filter in that it can help preserve the flattened signals better, without the need for exact flattening, and can preserve the amplitude variation much better. In addition, EMD is a fully adaptive algorithm and the parameterization for EMD-based smoothing can be very convenient.

77 citations


Journal ArticleDOI
TL;DR: An adaptive image restoration algorithm based on 2-D bilateral filtering has been proposed to enhance the signal-to-noise ratio (SNR) of the intrusion location for a phase-sensitive optical time domain reflectometry system, and it has the potential to precisely extract the intrusion location from a harsh environment with strong background noise.
Abstract: An adaptive image restoration algorithm based on 2-D bilateral filtering has been proposed to enhance the signal-to-noise ratio (SNR) of the intrusion location for a phase-sensitive optical time domain reflectometry (Ф-OTDR) system. By converting the spatial and time information of the Ф-OTDR traces into a 2-D image, the proposed 2-D bilateral filtering algorithm can smooth the noise and preserve the useful signal efficiently. To simplify the algorithm, a Lorentz spatial function is adopted to replace the original Gaussian function, which has higher practicability. Furthermore, an adaptive parameter setting method is developed according to the relation between the optimal gray level standard deviation and the noise standard deviation, which is much faster and more robust for different types of signals. In the experiment, the SNR of the location information has been improved by over 14 dB without spatial resolution loss for a signal with an original SNR of 6.43 dB in a 27.6 km sensing fiber. The proposed method has the potential to precisely extract the intrusion location from a harsh environment with strong background noise.
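As a rough illustration of the filtering step described above, the following sketch applies a 2-D bilateral filter whose spatial weight is Lorentzian rather than Gaussian. It is a minimal NumPy implementation with assumed parameter names (half, gamma, sigma_r); the paper's adaptive parameter setting is not reproduced.

```python
import numpy as np

def bilateral_lorentz(img, half=3, gamma=2.0, sigma_r=0.1):
    """2-D bilateral filter with a Lorentzian spatial kernel (sketch).

    img     : 2-D float array (e.g. Ф-OTDR traces stacked as space x time)
    half    : half-width of the square filter window (assumed)
    gamma   : width parameter of the Lorentzian spatial weight (assumed)
    sigma_r : range (intensity) standard deviation (assumed)
    """
    pad = np.pad(img, half, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    # Lorentzian spatial weight replaces the usual Gaussian kernel
    spatial = 1.0 / (1.0 + (ys ** 2 + xs ** 2) / gamma ** 2)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = pad[i:i + 2 * half + 1, j:j + 2 * half + 1]
            # range weight: penalize intensity differences from the center pixel
            rng = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r ** 2))
            w = spatial * rng
            out[i, j] = np.sum(w * patch) / np.sum(w)
    return out
```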

77 citations


Journal ArticleDOI
TL;DR: It is observed from the experiments that the proposed filter outperforms some of the existing noise removal techniques not only at low density impulse noise but also at high-density impulse noise.
Abstract: In this study, a combination of adaptive vector median filter (VMF) and weighted mean filter is proposed for removal of high-density impulse noise from colour images. In the proposed filtering scheme, the noisy and non-noisy pixels are classified based on the non-causal linear prediction error. For a noisy pixel, the adaptive VMF is processed over the pixel where the window size is adapted based on the availability of good pixels. Whereas, a non-noisy pixel is substituted with the weighted mean of the good pixels of the processing window. The experiments have been carried out on a large database for different classes of images, and the performance is measured in terms of peak signal-to-noise ratio, mean squared error, structural similarity and feature similarity index. It is observed from the experiments that the proposed filter outperforms (~1.5 to 6 dB improvement) some of the existing noise removal techniques not only at low density impulse noise but also at high-density impulse noise.
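The two building blocks mentioned above, the vector median over a window and the weighted mean of the good pixels, can be sketched in a few lines of NumPy. The weighting of the good pixels here is an assumption; the paper derives it from the non-causal linear prediction error.

```python
import numpy as np

def vector_median(window_pixels):
    """Vector median of a set of colour vectors (N x 3 array): the pixel whose
    summed Euclidean distance to all other pixels in the window is smallest."""
    dists = np.linalg.norm(window_pixels[:, None, :] - window_pixels[None, :, :], axis=2)
    return window_pixels[np.argmin(dists.sum(axis=1))]

def weighted_mean_of_good(window_pixels, good_mask, weights=None):
    """Weighted mean of the non-noisy ('good') pixels in the window.
    Uniform weights are used here as an assumption."""
    good = window_pixels[good_mask]
    w = np.ones(len(good)) if weights is None else np.asarray(weights, dtype=float)
    return (w[:, None] * good).sum(axis=0) / w.sum()
```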

63 citations


Journal ArticleDOI
TL;DR: The median filter performs better for removing salt-and-pepper noise and Poisson noise for images in gray scale, the Wiener filter performs better for removing speckle and Gaussian noise, and the Gaussian filter for blurred noise, as suggested by the experimental results.
Abstract: Noise removal techniques have become an essential practice in medical imaging applications for the study of anatomical structure and image processing of MRI medical images. To address these issues, many de-noising algorithms have been developed, such as the Wiener filter, Gaussian filter, median filter, etc. In this research, work is done with only three of the above-mentioned filters, which have been successfully used in medical imaging. The noises most commonly affecting medical MRI images are salt-and-pepper, speckle, Gaussian and Poisson noise. The medical images taken for comparison include MRI images, in gray scale and RGB. The performances of these algorithms are examined for various noise types, namely salt-and-pepper, Poisson, speckle, blurred and Gaussian noise. The evaluation of these algorithms is done by measures of the image file size, histogram and clarity scale of the images. The median filter performs better for removing salt-and-pepper noise and Poisson noise for images in gray scale, the Wiener filter performs better for removing speckle and Gaussian noise, and the Gaussian filter for blurred noise, as suggested by the experimental results.
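For reference, the three filters compared in this study are all available in SciPy; the snippet below applies them to a 2-D grayscale image. The window sizes and sigma are illustrative assumptions, not the settings used in the paper.

```python
from scipy import ndimage, signal

def compare_filters(noisy, median_size=3, gaussian_sigma=1.0, wiener_size=5):
    """Apply the three filters compared in the paper to a 2-D grayscale image.
    Window sizes and sigma are illustrative assumptions."""
    return {
        "median":   ndimage.median_filter(noisy, size=median_size),       # salt-and-pepper / Poisson
        "gaussian": ndimage.gaussian_filter(noisy, sigma=gaussian_sigma), # blur-like degradation
        "wiener":   signal.wiener(noisy, mysize=wiener_size),             # speckle / Gaussian noise
    }
```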

62 citations


Journal ArticleDOI
TL;DR: In this paper, an image acquisition and processing system was developed to extract projected area, perimeter, and roundness features of mangos from images acquired using an XGA format color camera of 8-bit gray levels under fluorescent lighting.

61 citations


Journal ArticleDOI
TL;DR: The use of state-of-the-art patch-based denoising methods for additive noise reduction is investigated, and fast patch similarity measurements produce fast patch-based image denoising methods.
Abstract: Digital images are captured using sensors during the data acquisition phase, where they are often contaminated by noise (an undesired random signal). Such noise can also be produced during transmission or by poor-quality lossy image compression. Reducing the noise and enhancing the images are considered the central process to all other digital image processing tasks. The improvement in the performance of image denoising methods would contribute greatly to the results of other image processing techniques. Patch-based denoising methods have recently emerged as the state-of-the-art denoising approaches for various additive noise levels. In this work, the use of the state-of-the-art patch-based denoising methods for additive noise reduction is investigated. Various types of image datasets are addressed to conduct this study. We first explain the type of noise in digital images and discuss various image denoising approaches, with a focus on patch-based denoising methods. Then, we experimentally evaluate both quantitatively and qualitatively the patch-based denoising methods. The patch-based image denoising methods are analyzed in terms of quality and computational time. Despite the sophistication of patch-based image denoising approaches, most patch-based image denoising methods outperform the rest. Fast patch similarity measurements produce fast patch-based image denoising methods. Patch-based image denoising approaches can effectively reduce noise and enhance images. The patch-based image denoising approach is the state-of-the-art image denoising approach.
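A representative patch-based denoiser of the kind surveyed here is non-local means; a minimal grayscale sketch using scikit-image follows. The patch size, search distance, and filtering strength are assumed values, not those evaluated in the study.

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

def patch_based_denoise(noisy):
    """Non-local means on a 2-D grayscale image as one example of a
    patch-based denoiser; parameter values are assumptions."""
    sigma = float(np.mean(estimate_sigma(noisy)))        # rough additive-noise level
    return denoise_nl_means(noisy, patch_size=7, patch_distance=11,
                            h=1.15 * sigma, fast_mode=True, sigma=sigma)
```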

57 citations


Journal ArticleDOI
TL;DR: A Bayesian maximum a posteriori (MAP) framework is formulated to optimize the NLF estimation, and a method for image splicing detection is developed according to noise level inconsistency in image blocks taken from different origins.
Abstract: In a spliced image, areas from different origins contain different noise features, which may be exploited as evidence for forgery detection. In this paper, we propose a noise level evaluation method for digital photos, and use the method to detect image splicing. Unlike most noise-based forensic techniques in which an AWGN model is assumed, the noise distribution used in the present work is intensity-dependent. This model can be described with a noise level function (NLF) that better fits the actual noise characteristics. NLF reveals variation in the standard deviation of noise with respect to image intensity. In contrast to denoising problems, noise in forensic applications is generally weak and content-related, and estimation of noise characteristics must be done in small areas. By exploring the relationship between NLF and the camera response function (CRF), we fit the NLF curve under the CRF constraints. We then formulate a Bayesian maximum a posteriori (MAP) framework to optimize the NLF estimation, and develop a method for image splicing detection according to noise level inconsistency in image blocks taken from different origins. Experimental results are presented to show the effectiveness of the proposed method.

54 citations


Journal ArticleDOI
TL;DR: Experimental results show better performance, with low computation time, over existing OD segmentation methods for automatic segmentation of the Optic Disk in retinal images.

54 citations


Proceedings ArticleDOI
01 Jun 2017
TL;DR: Experimental results showed that the approach is capable of recognizing fish species, which provides an effective way for solving recognition tasks in small sample size situations.
Abstract: Underwater target recognition is a challenging task due to the unrestricted environment of the ocean. With large datasets, deep learning methods have been applied with great success to the image recognition of objects in the air. However, it has been observed that deep neural networks (DNNs) easily suffer from overfitting with small samples. Underwater image acquisition always requires much manpower and is costly, which makes it difficult to obtain enough sample images for training DNNs. Besides, images captured by underwater cameras are usually deteriorated by noise. Taking live fish recognition as an example, we proposed a framework for underwater image recognition in small sample size situations. First, a novel improved median filter was utilized to suppress noise in fish images. Then, a convolutional neural network was employed and pre-trained with images from the world's largest image recognition database, ImageNet. Finally, preprocessed fish images were used to fine-tune the pre-trained neural network and test the classification performance. Experimental results showed that the approach is capable of recognizing fish species, which provides an effective way of solving recognition tasks in small sample size situations.
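A compact sketch of the two main ingredients, median-filter preprocessing and fine-tuning an ImageNet-pretrained network, is given below. A plain median filter and a ResNet-18 from a recent torchvision release are used as stand-ins; the paper's improved median filter and exact network are not specified here.

```python
from scipy.ndimage import median_filter
import torchvision
from torch import nn

def suppress_noise(image):
    """Noise suppression: a plain 3x3 median filter stands in for the paper's
    improved median filter, which is not specified here."""
    return median_filter(image, size=3)

def build_finetune_model(num_species):
    """Fine-tuning sketch: freeze an ImageNet-pretrained backbone and replace
    the classifier head (ResNet-18 is an assumed choice, not the paper's network)."""
    model = torchvision.models.resnet18(weights="IMAGENET1K_V1")  # recent torchvision API
    for p in model.parameters():
        p.requires_grad = False          # keep pretrained features fixed
    model.fc = nn.Linear(model.fc.in_features, num_species)       # new output layer
    return model
```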

Journal ArticleDOI
TL;DR: IEMD method increases the discrimination ability of these features as compared to the EMD method and the adaptively fast ensemble empirical mode decomposition (AFEEMD) method.
Abstract: In this paper, a simple technique with improved empirical mode decomposition (IEMD) in conjunction with four different features is used for the analysis of amyotrophic lateral sclerosis (ALS) and normal EMG signals. EMG signals contain noise from various sources, such as electronic instruments, movement artifacts and electrical instruments. The empirical mode decomposition (EMD) method followed by a median filter (MF) has been employed to remove the impulsive noise from the intrinsic mode function (IMF) components generated through EMD. The filtered IMF components are summed together to generate a new signal. The EMD process is further applied to the new EMG signal to generate improved IMFs; this is called the improved EMD method. In the IEMD algorithm, for the first time, a new technique is proposed to choose the window size of the median filter. The features, namely the amplitude modulation bandwidth ($B_{AM}$), frequency modulation bandwidth ($B_{FM}$), spectral moment of the power spectral density ($SM_{PSD}$), and first derivative of the instantaneous frequency ($MFD_{IF}$), extracted from the improved IMFs are used to discriminate between ALS and normal EMG signals. Finally, it is observed that the IEMD method increases the discrimination ability of these features as compared to the EMD method and the adaptively fast ensemble empirical mode decomposition (AFEEMD) method.
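The IEMD pipeline described above (EMD, median filtering of the IMFs, reconstruction, second EMD) can be sketched as follows, assuming the PyEMD package and a fixed median-filter window; the paper's adaptive window selection is not reproduced.

```python
import numpy as np
from scipy.signal import medfilt
from PyEMD import EMD   # assumes the PyEMD package is installed

def improved_emd(emg, mf_window=5):
    """IEMD sketch: decompose, median-filter each IMF, rebuild the signal,
    then decompose again. The window size is fixed here, whereas the paper
    proposes a technique to choose it."""
    imfs = EMD().emd(np.asarray(emg, dtype=float))
    filtered = np.array([medfilt(imf, kernel_size=mf_window) for imf in imfs])
    rebuilt = filtered.sum(axis=0)
    return EMD().emd(rebuilt)        # "improved" IMFs
```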

Journal ArticleDOI
01 Nov 2017
TL;DR: This paper proposes a system to detect MAs in colored fundus images using a deep convolutional neural network with a reinforcement sample learning strategy, and the results are encouraging.
Abstract: Microaneurysms (MAs) are known as early signs of diabetic retinopathy and appear as red lesions in color fundus images. Detection of MAs in fundus images needs highly skilled physicians or eye angiography. Eye angiography is an invasive and expensive procedure. Therefore, an automatic detection system to identify the MA locations in fundus images is in demand. In this paper, we propose a system to detect the MAs in colored fundus images. The proposed method is composed of three stages. In the first stage, a series of pre-processing steps are used to make the input images more convenient for MA detection. To this end, green channel decomposition, Gaussian filtering, median filtering, background determination, and subtraction operations are applied to the input colored fundus images. After pre-processing, a candidate MA extraction procedure is applied to detect potential regions. A five-step procedure is adopted to get the potential MA locations. Finally, a deep convolutional neural network (DCNN) with a reinforcement sample learning strategy is used to train the proposed system. The DCNN is trained with color image patches which are collected from ground-truth MA locations and non-MA locations. We conducted extensive experiments on the ROC dataset to evaluate our proposal. The results are encouraging.
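A minimal sketch of the pre-processing stage (green-channel decomposition, median and Gaussian filtering, background estimation and subtraction) is shown below; kernel sizes are illustrative assumptions rather than the paper's values.

```python
from scipy.ndimage import gaussian_filter, median_filter

def preprocess_fundus(rgb):
    """Pre-processing sketch: green channel, median + Gaussian filtering,
    large-window background estimation and subtraction (sizes are assumptions)."""
    green = rgb[..., 1].astype(float)                  # green channel shows lesions best
    smoothed = gaussian_filter(median_filter(green, size=3), sigma=1.0)
    background = median_filter(smoothed, size=25)      # coarse background estimate
    return smoothed - background                       # candidate MAs stand out here
```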

Journal ArticleDOI
TL;DR: A novel noise level estimation approach for natural images is proposed by jointly exploiting the piecewise stationarity and a regular property of the kurtosis in bandpass domains, using a K-means-based algorithm to adaptively partition an image into a series of non-overlapping regions.
Abstract: Noise level estimation is crucial in many image processing applications, such as blind image denoising. In this paper, we propose a novel noise level estimation approach for natural images by jointly exploiting the piecewise stationarity and a regular property of the kurtosis in bandpass domains. We design a $K$-means-based algorithm to adaptively partition an image into a series of non-overlapping regions, each of whose clean versions is assumed to be associated with a constant, but unknown kurtosis throughout scales. The noise level estimation is then cast into a problem to optimally fit this new kurtosis model. In addition, we develop a rectification scheme to further reduce the estimation bias through noise injection mechanism. Extensive experimental results show that our method can reliably estimate the noise level for a variety of noise types, and outperforms some state-of-the-art techniques, especially for non-Gaussian noises.

Journal ArticleDOI
TL;DR: The LaBGen method, which emerged as the best one during the Scene Background Modeling and Initialization workshop organized in 2015, is described extensively and the stability of the predicted background image over time with respect to the chosen background subtraction algorithm is studied.

Journal ArticleDOI
TL;DR: An adaptive method is proposed which increases the window size according to the amount of impulsive noise, the adaptive dynamically weighted median filter (ADWMF), which works better than existing methods for images with both low and high density of impulsive noise.
Abstract: A new impulsive noise removal filter, the adaptive dynamically weighted median filter (ADWMF), is proposed. A popular method for removing impulsive noise is the median filter, whereas the weighted median filter and center weighted median filter have also been investigated. ADWMF is based on the weighted median filter. In ADWMF, instead of fixed weights, the weights of the filter are dynamically assigned using the results of noise detection. A simple and efficient noise detection method is also used to detect noise candidates and dynamically assign zero or small weights to the noise candidates in the window. This paper proposes an adaptive method which increases the window size according to the amount of impulsive noise. Simulation results show that the ADWMF works better than existing methods for images with both low and high density of impulsive noise.
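The core of the ADWMF idea, a weighted median whose weights are suppressed for detected noise candidates, can be sketched as follows. The zero/small weighting rule here is an assumption standing in for the paper's exact weight assignment.

```python
import numpy as np

def weighted_median(values, weights):
    """Weighted median: the value at which the cumulative weight reaches half the total."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cum = np.cumsum(w)
    return v[np.searchsorted(cum, 0.5 * cum[-1])]

def adwmf_window(window, noise_mask, noise_weight=0.0):
    """ADWMF on a single window (sketch): pixels flagged as noise candidates get
    zero (or small) weight before the weighted median is taken. The weighting
    rule is an assumption, not the paper's exact assignment."""
    weights = np.where(noise_mask.ravel(), noise_weight, 1.0)
    return weighted_median(window.ravel().astype(float), weights)
```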

Journal ArticleDOI
TL;DR: A novel, efficient, and fast-performing vision-based system for traffic flow monitoring that follows a robust adaptive background segmentation strategy based on the Approximated Median Filter technique, which detects pixels corresponding to moving objects.
Abstract: In this paper a novel, efficient, and fast-performing vision-based system for traffic flow monitoring is presented. Using standard traffic surveillance cameras and effectively applying simple techniques, the proposed method can produce accurate results on vehicle counting in different challenging situations, such as low-resolution videos, rainy scenes, and situations of stop-and-go traffic. Due to the simplicity of the proposed algorithm, the system is able to manage multiple video streams simultaneously in real time. The method follows a robust adaptive background segmentation strategy based on the Approximated Median Filter technique, which detects pixels corresponding to moving objects. Experimental results show that the proposed method can achieve sufficient accuracy and reliability while showing high performance rates, outperforming other state-of-the-art methods. Tests have proved that the system is able to work with up to 50 standard-resolution cameras at the same time in a standard computer, producing satisfactory results.
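The Approximated Median Filter background model referred to above has a standard one-line update: the background estimate is nudged towards each new frame by a fixed step, which makes it converge to the temporal median. A generic sketch (step size and threshold are assumptions) follows.

```python
import numpy as np

def update_background(background, frame, step=1):
    """Approximated Median Filter update: move the background estimate one step
    towards the current frame, so it converges to the temporal median
    (integer grayscale arrays assumed)."""
    background = background + step * (frame > background)
    background = background - step * (frame < background)
    return background

def moving_mask(background, frame, threshold=30):
    """Pixels far from the background estimate are flagged as moving objects."""
    return np.abs(frame.astype(int) - background.astype(int)) > threshold
```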

Journal ArticleDOI
TL;DR: This paper first enhances the image with the help of a median filter, Gaussian filter and un-sharp masking, then applies morphological operations like erosion and dilation, uses entropy based segmentation to find the region of interest, and finally uses KNN and SVM classification techniques for the analysis of kidney stone images.
Abstract: Kidney stone detection is one of the sensitive topics nowadays. There are various problems associated with this topic, such as the low resolution of the image, the similarity of kidney stones, and the prediction of stones in a new kidney image. Ultrasound images have low contrast, and it is difficult to detect and extract the region of interest. Therefore, the image has to go through preprocessing, which normally involves image enhancement. The aim behind this operation is to find the best quality, so that identification becomes easier. Medical imaging is one of the fundamental imaging modalities, because it is used in a highly sensitive field, the medical field, and it must be accurate. In this paper, we first proceed with the enhancement of the image with the help of a median filter, Gaussian filter and un-sharp masking. After that we use morphological operations like erosion and dilation, then entropy based segmentation is used to find the region of interest, and finally we use KNN and SVM classification techniques for the analysis of kidney stone images.
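A hedged sketch of the enhancement and morphology steps (median filter, Gaussian filter, unsharp masking, erosion and dilation) is given below; the kernel sizes and sharpening amount are illustrative, and the entropy-based segmentation and KNN/SVM classification stages are omitted.

```python
from scipy.ndimage import median_filter, gaussian_filter, grey_erosion, grey_dilation

def enhance_ultrasound(img, amount=1.5):
    """Enhancement sketch: median filtering, Gaussian smoothing and unsharp
    masking, followed by a simple erosion/dilation clean-up
    (kernel sizes and the sharpening amount are assumptions)."""
    img = median_filter(img.astype(float), size=3)
    blurred = gaussian_filter(img, sigma=2.0)
    sharpened = img + amount * (img - blurred)          # unsharp masking
    cleaned = grey_dilation(grey_erosion(sharpened, size=3), size=3)
    return cleaned
```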

Journal ArticleDOI
TL;DR: A hierarchical level set algorithm is introduced which is fast and precise for multiregion segmentation of synthetic aperture radar (SAR) images; it performs curve regularization with a nonparametric median filter instead of the curvature formulation, and hence reduces the computation time.
Abstract: An efficient strategy of image processing algorithms to deal with the speckle noise is to incorporate data knowledge and models into them. In this letter, we introduce a hierarchical level set algorithm, which is fast and precise for multiregion segmentation of synthetic aperture radar (SAR) images. Our algorithm performs curve regularization with a nonparametric median filter instead of using the curvature formulation, and hence it reduces the computation time. The proposed algorithm also replaces the front propagation derivatives by morphological operations, and finally, the arithmetic-geometric distance measures the contrast between regions and controls the hierarchical segmentation. We conducted experiments on synthetic and real SAR images modeled by the $\mathcal {G}_{I}^{0}$ distribution. The performance evaluation of the proposed algorithm and two related methods comprises the computation time and measures based on segmentation accuracy and stochastic distance. Overall, our segmentation algorithm performed faster and more precisely on both synthetic and real SAR images.

Proceedings ArticleDOI
01 Dec 2017
TL;DR: In this paper, a performance evaluation of MRI image de-noising techniques is provided, and a new method is proposed which modifies the existing median filter by adding features.
Abstract: For the study of anatomical structure and image processing of MRI medical images, noise removal techniques have become an important practice in medical imaging applications. In medical image processing, precise images need to be obtained to get accurate observations for the given application. The goal of any de-noising technique is to remove noise from an image, which is the first step in any image processing. The noise removal method should be applied in a watchful manner, otherwise artefacts can be introduced which may blur the image. In this paper, a performance evaluation of MRI image de-noising techniques is provided. The techniques used are namely the median and Gaussian filter, Max filter [11], Min filter [11], and Arithmetic Mean filter [8]. All the above filters are applied on MRI brain and spinal cord images and the results are noted. A new method is proposed which modifies the existing median filter by adding features. The experimental result of the proposed method is then analyzed with the other three image filtering algorithms. The output image efficiency is measured by statistical parameters like root mean square error (RMSE), signal-to-noise ratio (SNR), and peak signal-to-noise ratio (PSNR).
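The output-quality measures named above (RMSE, SNR, PSNR) can be computed as follows; the 8-bit peak value is an assumption for PSNR.

```python
import numpy as np

def rmse(reference, filtered):
    return np.sqrt(np.mean((reference.astype(float) - filtered.astype(float)) ** 2))

def psnr(reference, filtered, peak=255.0):
    """Peak signal-to-noise ratio in dB (8-bit peak assumed unless overridden)."""
    return 20.0 * np.log10(peak / rmse(reference, filtered))

def snr(reference, filtered):
    """Signal-to-noise ratio of the filtered image relative to the reference, in dB."""
    residual = reference.astype(float) - filtered.astype(float)
    return 10.0 * np.log10(np.sum(reference.astype(float) ** 2) / np.sum(residual ** 2))
```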

Journal ArticleDOI
TL;DR: This paper proposes a local difference descriptor with two feature sets to reveal the traces of median filtering, which is more reliable than prior methods to detect tampering involving local median filtering.
Abstract: As a content-preserved image manipulation, the median filtering approach has received extensive attention from forensic analyzers. In this paper, we propose a local difference descriptor with two feature sets to reveal the traces of median filtering. The first set of features are fused rotation-invariant uniform local binary patterns (LBP), which can quantify the occurrence statistics of micro-features in an image. The second feature set is extracted from the pixel difference matrix (PDM), which can better describe the changes in pixel values introduced by median filtering. To validate the effectiveness of the proposed approach, we compare it with the state-of-the-art median filtering detectors in the cases of JPEG compression and low resolution. Experimental results show that our approach outperforms existing detectors. Moreover, our approach is more reliable than prior methods at detecting tampering involving local median filtering. Highlights: A local difference descriptor for median filtering detection is proposed. The occurrence statistics of certain micro-features have discrimination capability. The distribution of micro-features is estimated by the histogram of LBP. Local pixel differences can better describe how pixel values change. Joint probability is suitable to describe the behavior of local difference pairs.
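The first feature set can be obtained directly with scikit-image's rotation-invariant uniform LBP; the pixel-difference part below is only a simplified stand-in for the paper's PDM construction, and the parameter choices are assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, radius=1, points=8):
    """Rotation-invariant uniform LBP histogram (the first feature set);
    radius and number of sampling points are common defaults, assumed here."""
    lbp = local_binary_pattern(gray, P=points, R=radius, method="uniform")
    n_bins = int(lbp.max()) + 1
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def pixel_difference_histograms(gray, clip=10):
    """Simplified stand-in for the PDM feature set: histograms of horizontal and
    vertical first-order pixel differences."""
    dh = np.diff(gray.astype(int), axis=1).ravel()
    dv = np.diff(gray.astype(int), axis=0).ravel()
    bins = np.arange(-clip, clip + 2)
    return np.concatenate([np.histogram(dh, bins=bins, density=True)[0],
                           np.histogram(dv, bins=bins, density=True)[0]])
```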

Journal ArticleDOI
TL;DR: In this paper, a novel method combining a median filter and fractional order calculus can be used for automatic filtering of electrocardiography artifacts in the surface electromyography signal envelopes recorded in trunk muscles.

Journal ArticleDOI
TL;DR: A fault-tolerant implementation of the median filter is presented and studied in-depth, and Experimental results show that the technique detects enough corrupted pixels in an image to prevent 91% of the corrupted images from being erroneously sent to the next image processing operation.
Abstract: In digital image processing systems, the acquisition stage may capture impulsive noise along with the image. This physical phenomenon is commonly referred to as “salt-and-pepper” noise. The median filter is a nonlinear image processing operation used to remove this impulsive noise from images. This digital filter can be implemented in hardware to speed up the algorithm. However, an SRAM-based field-programmable gate array implementation of this filter is then susceptible to configuration memory bit flips induced by single event upsets, so a protection technique is needed for critical applications in which the proper filter operation must be ensured. In this paper, a fault-tolerant implementation of the median filter is presented and studied in depth. Our protection technique checks whether the median output is within a dynamic range created with the remaining nonmedian outputs. An output error signal is activated if a corrupted image pixel is detected; then a partial or complete reconfiguration can be performed to remove the configuration memory error. Experimental results show that our technique detects enough corrupted pixels in an image to prevent 91% of the corrupted images from being erroneously sent to the next image processing operation. This high error detection rate is achieved while introducing only 35% additional resource overhead.
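The protection check itself is simple to express in software terms: given the hardware median output and the remaining non-median outputs, flag an error when the median falls outside their range. A minimal sketch:

```python
def check_median_output(median_out, nonmedian_outs):
    """Fault-tolerance check (sketch): a corrupted configuration can produce a
    median output outside the dynamic range of the remaining non-median outputs;
    in that case an error flag is raised so the device can be reconfigured."""
    lo, hi = min(nonmedian_outs), max(nonmedian_outs)
    return not (lo <= median_out <= hi)   # True means "corrupted pixel detected"
```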

Journal ArticleDOI
TL;DR: A new insight into MF capabilities based on the optimal breakdown value (BV) of the median is offered, and it is shown that the BV-based versions of two of the most popular MF algorithms outperform their corresponding standard versions.
Abstract: Median filtering (MF) is a canonical image processing operation truly useful in many practical applications. The most appealing feature of MF is its resistance to noise and errors in data, but because the method requires window values to be sorted it is computationally expensive. In this work, a new insight into MF capabilities based on the optimal breakdown value (BV) of the median is offered, and it is also shown that the BV-based versions of two of the most popular MF algorithms outperform their corresponding standard versions. A general framework for both the theoretical analysis and comparison of MF algorithms is presented in the process, which will hopefully contribute to a better understanding of the many subtle features of MF. The introduced ideas are experimentally tested by using real and synthetic images.

Journal ArticleDOI
TL;DR: In this article, a robust f-x projection filtering scheme for simultaneous erratic noise and Gaussian random noise attenuation is proposed, where the estimation of the prediction error filter and the additive noise sequence are performed in an alternating fashion.
Abstract: Linear prediction filters are an effective tool for reducing random noise from seismic records. Unfortunately, the ability of prediction filters to enhance seismic records deteriorates when the data are contaminated by erratic noise. Erratic noise in this article designates non-Gaussian noise that consists of large isolated events with known or unknown distribution. We propose a robust f-x projection filtering scheme for simultaneous erratic noise and Gaussian random noise attenuation. Instead of adopting the 2-norm, as commonly used in the conventional design of f-x filters, we utilize the hybrid 1/2-norm to penalize the energy of the additive noise. The estimation of the prediction error filter and the additive noise sequence are performed in an alternating fashion. First, the additive noise sequence is fixed, and the prediction error filter is estimated via the least-squares solution of a system of linear equations. Then, the prediction error filter is fixed, and the additive noise sequence is estimated through a cost function containing a hybrid 1/2-norm that prevents erratic noise from influencing the final solution. In other words, we proposed and designed a robust M-estimate of a special autoregressive moving-average model in the f-x domain. Synthetic and field data examples are used to evaluate the performance of the proposed algorithm.

Journal ArticleDOI
TL;DR: PAI adapts the image to the perception of the human visual system and thereby increases the quality of the image; its effect on image enhancement is benchmarked against morphological image sharpening and high-boost filtering.
Abstract: The perceptual adaptation of the image (PAI) is introduced by inspiration from the Chevreul–Mach Bands (CMB) visual phenomenon. By boosting the CMB assisting illusory effect on boundaries of the regions, PAI adapts the image to the perception of the human visual system and thereby increases the quality of the image. PAI is proposed for application to standard images or the output of any image processing technique. For the implementation of PAI on the image, an algorithm of morphological filters (MFs) is presented, which geometrically adds the model of the CMB effect. Numerical evaluation by improvement ratios of four no-reference image quality assessment (NR-IQA) indexes confirms PAI performance, which can also be noticeably observed in visual comparisons. Furthermore, PAI is applied as a postprocessing block for classical morphological filtering, weighted morphological filtering, and median morphological filtering in the cancelation of salt and pepper, Gaussian, and speckle noise from MRI images, where the above specified NR-IQA indexes validate it. The PAI effect on image enhancement is benchmarked against morphological image sharpening and high-boost filtering.

Journal ArticleDOI
TL;DR: The aim is to develop a telemedicine framework for wound diagnosis by improving interaction between health experts, patients, and tele-medical agents who belong to rural/urban areas and are involved in the provision of care, in order to resolve delayed treatment.
Abstract: This paper presents an outline of methods that have been proposed for the analysis of chronic wound images. This paper indicates the details of four different ulcerous cases, provides a good treatment policy, enhances the quality of the patient's life, improves evidence-based clinical outcomes and suggests best possible issues. This paper investigates efficient filtering techniques for chronic wound image pre-processing under a tele-wound network. The aim of this work is to accurately assess the healing status of chronic wounds with improved image processing techniques by proper filtering. Efficient filtering techniques help to reduce the noise in wound images. The simulation results are presented by comparing different parameters. Performance parameters are peak signal to noise ratio (PSNR), mean square error (MSE), signal to noise ratio and mean absolute error. Results show that adaptive median filtering provides better performance, with respect to high PSNR and reduced MSE between the original and filtered image. This work proposes the particle swarm optimization (PSO) method for segmentation of wound areas via suitable color space selection. The PSO algorithm in the Db channel provided good accuracy (98.93%) for chronic wound segmentation. The proposed linear discriminant analysis classifier provides 98% overall tissue prediction accuracy. The aim is to develop a telemedicine framework for wound diagnosis by improving interaction between health experts, patients, and tele-medical agents who belong to rural/urban areas and are involved in the provision of care, in order to resolve delayed treatment.

Journal ArticleDOI
TL;DR: The proposed denoising procedure works without thresholds for the localisation of noise, as well as for the stopping criterion of the algorithm, and a proposition which states a constructive structural property of the wavelet tree with respect to a defined seminorm has been proven for a special technical case.
Abstract: This paper deals with noise detection and a threshold-free on-line denoising procedure for discrete scanning probe microscopy (SPM) surface images using wavelets. In this sense, the proposed denoising procedure works without thresholds for the localisation of noise, as well as for the stopping criterion of the algorithm. In particular, a proposition which states a constructive structural property of the wavelet tree with respect to a defined seminorm has been proven for a special technical case. Using orthogonal wavelets, it is possible to obtain an efficient localisation of noise and, as a consequence, a denoising of the measured signal. An on-line denoising algorithm, which is based upon the discrete wavelet transform (DWT), is proposed to detect unavoidable measurement noise in the acquired data. With the help of a seminorm, the noise of a signal is defined as an incoherent part of a measured signal, and it is possible to rearrange the wavelet basis so as to illuminate the differences between its coherent and incoherent parts. In effect, the procedure looks for the subspaces consisting of wavelet packets characterised either by small or opposing components in the wavelet domain. Using real measurements, the effectiveness of the proposed denoising algorithm is validated and compared with Gaussian FIR and median filters. The proposed method was built using the free wavelet toolboxes from the WaveLab 850 library of Stanford University (USA).

Journal ArticleDOI
TL;DR: An optimized and robust digital image watermarking technique based on the lifting wavelet transform (LWT) and firefly algorithm is proposed, and experimental results showed its good imperceptibility and high robustness.
Abstract: In this paper, an optimized and robust digital image watermarking technique based on the lifting wavelet transform (LWT) and the firefly algorithm is proposed. LWT is a newer and faster generation of earlier wavelet transforms, and the firefly algorithm is an efficient optimization algorithm. In the current technique, the base image is decomposed by LWT into 4 sub-bands, and the first sub-band is separated into non-overlapping blocks. The blocks are then sorted in descending order based on the standard deviation of each block. Selecting suitable blocks for the embedding process is an optimization problem, due to the existence of a trade-off between imperceptibility and robustness: the firefly algorithm is used to solve this trade-off, since selecting the primary blocks yields high robustness and low imperceptibility and vice versa. To improve security, an Arnold transform is applied to the watermark, and the resulting scrambled image bits are used as a condition for the embedding process. The proposed technique is evaluated against a variety of attacks such as additive noise, average filter, median filter, sharpening filter and some other geometric and non-geometric attacks, and experimental results showed its good imperceptibility and high robustness.
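The Arnold-transform scrambling step used for security can be sketched as below for a square watermark; the iteration count acts like a key and is an assumed parameter.

```python
import numpy as np

def arnold_transform(img, iterations=1):
    """Arnold (cat map) scrambling of a square watermark image; the iteration
    count acts like a key and is an assumed parameter."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "Arnold transform needs a square image"
    out = img.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out
```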

Journal ArticleDOI
TL;DR: The proposed method achieves a higher-quality denoising and restoration ratio for noisy images than existing methods, in terms of peak signal-to-noise ratio (PSNR) and second-derivative-like measure of enhancement (SDME).
Abstract: In this paper, we have proposed adaptive methods for image restoration in which the input images are affected by noise, which is removed by a fuzzy based median filter (FMF). The noise-removed images from the FMF still need to be restored with high quality. To restore the images, an APSO (adaptive particle swarm optimization) based Richardson-Lucy (R-L) algorithm is utilized. With both the FMF and APSO-RL methods, the denoising and restoration of the image are performed efficiently. The performance of the image denoising and restoration technique is evaluated by comparing the result of the proposed technique with existing denoising filters and the GA and PSO methods. The comparison result shows a higher-quality denoising and restoration ratio for the noisy images than the existing methods, in terms of peak signal-to-noise ratio (PSNR) and second-derivative-like measure of enhancement (SDME).
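A hedged sketch of the two-stage pipeline follows, with a plain median filter standing in for the fuzzy median filter and a fixed-iteration Richardson-Lucy (from scikit-image) standing in for the APSO-tuned version; the Gaussian PSF is an assumption.

```python
import numpy as np
from scipy.ndimage import median_filter
from skimage.restoration import richardson_lucy

def denoise_and_restore(noisy, iterations=30):
    """Two-stage sketch: median filtering (standing in for the fuzzy median filter)
    followed by Richardson-Lucy restoration with a small assumed Gaussian PSF;
    the paper tunes R-L with APSO instead of a fixed iteration count."""
    x = np.arange(-2, 3)
    g = np.exp(-x ** 2 / 2.0)
    psf = np.outer(g, g)
    psf /= psf.sum()
    denoised = median_filter(noisy.astype(float), size=3)
    scaled = denoised / max(denoised.max(), 1e-12)       # R-L expects values in [0, 1]
    return richardson_lucy(scaled, psf, iterations)
```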