
Showing papers on "Histogram equalization published in 2016"


Journal ArticleDOI
TL;DR: This paper presents a secure multiple watermarking method based on the discrete wavelet transform (DWT), discrete cosine transform (DCT), and singular value decomposition (SVD); the technique is found to be robust against the Checkmark attacks.
Abstract: This paper presents a secure multiple watermarking method based on the discrete wavelet transform (DWT), discrete cosine transform (DCT), and singular value decomposition (SVD). For identity authentication, the proposed method uses a medical image as the image watermark, and the personal and medical record of the patient as the text watermark. In the embedding process, the cover medical image is decomposed up to the second level of DWT coefficients. The low-frequency band (LL) of the host medical image is transformed by DCT and SVD. The watermark medical image is also transformed by DCT and SVD. The singular values of the watermark image are embedded in the singular values of the host image. Furthermore, the text watermark is embedded in the second-level high-frequency band (HH) of the host image. To enhance the security of the text watermark, encryption is applied to its ASCII representation before embedding. Results are obtained by varying the gain factor, the size of the text watermark, and the medical image modality. Experimental results illustrate that the proposed method withstands a variety of signal processing attacks such as JPEG compression, Gaussian noise, salt-and-pepper noise, and histogram equalization. The performance of the proposed technique is also evaluated using the benchmark software Checkmark, and the technique is found to be robust against Checkmark attacks such as Collage, Trimmed Mean, Hard and Soft Thresholding, Wavelet Compression, Mid Point, Projective, and Wrap.
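The core SVD step of such schemes, embedding the watermark's singular values into the host's singular values with a gain factor, can be sketched as follows. This is a minimal numpy illustration of the additive rule S' = S + k·Sw only; the DWT/DCT stages and the encrypted text watermark described in the abstract are omitted, and the function names and the gain value are illustrative, not from the paper.

```python
import numpy as np

def embed_singular_values(host, watermark, gain=0.05):
    """Embed the watermark's singular values into the host's (SVD step only)."""
    Uh, Sh, Vh = np.linalg.svd(host, full_matrices=False)
    Sw = np.linalg.svd(watermark, compute_uv=False)
    S_marked = Sh + gain * Sw            # additive rule S' = S + k * Sw
    return Uh @ np.diag(S_marked) @ Vh

def extract_singular_values(marked, host, gain=0.05):
    """Recover an estimate of the watermark's singular values (non-blind)."""
    Sm = np.linalg.svd(marked, compute_uv=False)
    Sh = np.linalg.svd(host, compute_uv=False)
    return (Sm - Sh) / gain

rng = np.random.default_rng(0)
host = rng.random((8, 8))
wm = rng.random((8, 8))
marked = embed_singular_values(host, wm, gain=0.05)
```

Because both singular-value sequences are sorted in descending order, their weighted sum is too, so extraction recovers the watermark's singular values up to numerical error.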

155 citations


Journal ArticleDOI
01 May 2016
TL;DR: A novel fuzzy logic and histogram based algorithm called Fuzzy Clipped Contrast-Limited Adaptive Histogram Equalization (FC-CLAHE) is proposed for enhancing the local contrast of digital mammograms; it produces better results than several state-of-the-art algorithms.
Abstract: A novel fuzzy logic and histogram based algorithm called Fuzzy Clipped Contrast-Limited Adaptive Histogram Equalization (FC-CLAHE) is proposed for enhancing the local contrast of digital mammograms. A digital mammographic image uses a narrow range of gray levels. The contrast of a mammographic image distinguishes its diagnostic features, such as masses and microcalcifications, from the surrounding breast tissue. Thus, contrast enhancement and brightness preservation of digital mammograms are very important for early detection and further diagnosis of breast cancer. The limitation of existing contrast enhancement and brightness preserving techniques is that they limit the amplification of contrast by clipping the histogram at a predefined clip limit. This clip limit is crisp and invariant to the mammogram data, causing all pixels inside the window region of the mammogram to be equally affected, so these algorithms are not well suited for real-time diagnosis of breast cancer. In this paper, we propose a fuzzy logic and histogram based clipping algorithm, FC-CLAHE, which automates the selection of a clip limit relevant to the mammogram and enhances the local contrast of digital mammograms. The fuzzy inference system designed to automate the clip-limit selection requires a limited number of control parameters, and the fuzzy rules make the clip limit flexible and responsive to the mammogram data without human intervention. Experiments are conducted using 322 digital mammograms from the MIAS database.
The performance of the proposed technique is compared with various histogram equalization methods using image quality measures such as the Contrast Improvement Index (CII), Discrete Entropy (DE), Absolute Mean Brightness Coefficient (AMBC), and Peak Signal-to-Noise Ratio (PSNR). Experimental results show that the proposed FC-CLAHE algorithm produces better results than several state-of-the-art algorithms.
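The crisp clipping step that FC-CLAHE replaces with a fuzzy-inferred limit can be sketched in a few lines: each histogram bin is capped at the clip limit and the clipped excess is redistributed uniformly. A minimal numpy sketch (single-pass redistribution; real CLAHE implementations iterate and work per tile):

```python
import numpy as np

def clip_and_redistribute(hist, clip_limit):
    """Cap each histogram bin at clip_limit and spread the excess
    uniformly over all bins, preserving the total pixel count."""
    hist = np.asarray(hist, dtype=float)
    excess = np.maximum(hist - clip_limit, 0.0).sum()
    clipped = np.minimum(hist, clip_limit)
    return clipped + excess / hist.size
```

For example, `clip_and_redistribute([10, 0, 5, 1], 4)` yields `[5.75, 1.75, 5.75, 2.75]`: the 7 clipped counts are shared equally, so the histogram mass (16) is unchanged while the dominant bin is flattened.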

98 citations


Journal ArticleDOI
TL;DR: The novel idea presented in this paper is to suppress the impact of pixels in non-textured areas and to exploit texture features for the computation of histogram in the process of HE.
Abstract: This paper presents two novel contrast enhancement approaches using texture regions-based histogram equalization (HE). In HE-based contrast enhancement methods, the enhanced image often contains undesirable artefacts because an excessive number of pixels in non-textured areas heavily bias the histogram. The novel idea presented in this paper is to suppress the impact of pixels in non-textured areas and to exploit texture features in the computation of the histogram for HE. The first algorithm, named Dominant Orientation-based Texture Histogram Equalization (DOTHE), constructs the histogram of the image using only those image patches having dominant orientation. DOTHE categorizes image patches into smooth, dominant, or non-dominant orientation patches by using the image variance and the singular value decomposition algorithm, and utilizes only dominant orientation patches in the HE process. The second method, termed Edge-based Texture Histogram Equalization, calculates significant ed...

93 citations




Journal ArticleDOI
TL;DR: A novel fuzzy color difference histogram (FCDH) is proposed by using fuzzy c-means (FCM) clustering and exploiting the CDH, which reduces the large dimensionality of the histogram bins in the computation and lessens the effect of intensity variation generated due to the fake motion or change in illumination of the background.
Abstract: Detection of moving objects in the presence of complex scenes such as dynamic backgrounds (e.g., swaying vegetation, ripples in water, a spouting fountain), illumination variation, and camouflage is a very challenging task. In this context, we propose a robust background subtraction technique with three contributions. First, we present the use of the color difference histogram (CDH) in the background subtraction algorithm, measuring the color difference between a pixel and its neighbors in a small local neighborhood. The use of the CDH reduces false detections due to the non-stationary background, illumination variation, and camouflage. Second, the color difference is fuzzified with a Gaussian membership function. Finally, a novel fuzzy color difference histogram (FCDH) is proposed by applying fuzzy c-means (FCM) clustering to the CDH. The use of the FCM clustering algorithm in the CDH reduces the large dimensionality of the histogram bins and also lessens the effect of intensity variation generated by fake motion or changes in background illumination. The proposed algorithm is tested on various complex scenes from benchmark publicly available video sequences, and it exhibits better performance than state-of-the-art background subtraction techniques in terms of classification accuracy metrics such as MCC and PCC.
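The fuzzification step, mapping raw color differences through a Gaussian membership function before histogramming, can be sketched as below. This is a toy grayscale illustration; the `sigma` spread, bin count, and 3×3 neighborhood are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def gaussian_membership(diff, sigma=10.0):
    """Fuzzify differences with a Gaussian membership function:
    small differences map near 1, large differences near 0."""
    diff = np.asarray(diff, dtype=float)
    return np.exp(-(diff ** 2) / (2.0 * sigma ** 2))

def color_difference_histogram(pixel_block, bins=16, sigma=10.0):
    """Toy CDH for one 3x3 grayscale block: histogram of fuzzified
    absolute differences between the centre pixel and its 8 neighbours."""
    centre = pixel_block[1, 1]
    neigh = np.delete(pixel_block.ravel(), 4)      # drop the centre pixel
    memberships = gaussian_membership(np.abs(neigh - centre), sigma)
    hist, _ = np.histogram(memberships, bins=bins, range=(0.0, 1.0))
    return hist
```

A perfectly uniform block yields all memberships equal to 1, so the whole mass lands in the top bin, which is what makes static background regions easy to recognize.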

77 citations


Journal ArticleDOI
TL;DR: Quantitative and qualitative results obtained from experiments on a wide variety of natural scene images demonstrate the effectiveness of the proposed approach over other methods at reducing artefacts while increasing image contrast and colourfulness.

76 citations


Journal ArticleDOI
TL;DR: The main objective of the proposed FRVM classification is to accurately predict the type of leaf from the given input leaf images; it achieved accuracy, sensitivity, and specificity of 99.87%, 99.5%, and 99.9%, respectively, improving on values reported in the literature.

65 citations


Journal ArticleDOI
TL;DR: Results showed that the proposed colour image enhancement technique is able to recover the largest amount of information compared with other current approaches, and provides satisfactory performance in terms of image contrast and sharpness.
Abstract: The histogram equalization process is a simple yet efficient image contrast enhancement technique that generally produces satisfactory results. However, due to its design limitations, output images often experience a loss of fine details or contain unwanted viewing artefacts. One reason for such imperfection is the failure of some techniques to fully utilize the allowable intensity range in conveying the information captured from a scene. The colour image enhancement technique introduced in this work aims at maximizing the information content within an image, whilst minimizing viewing artefacts and loss of details. This is achieved by weighting the input image and the interim equalized image recursively until the allowed intensity range is maximally covered. The proper weighting factor is determined optimally using the efficient golden section search algorithm. Experiments were conducted on a large number of images captured under natural indoor and outdoor environments. Results s...
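The golden section search the abstract relies on is a standard derivative-free optimizer for a unimodal function on an interval; a sketch of the maximization variant is below. The objective the paper optimizes (an information-content measure of the weighted image) is not reproduced here; the quadratic in the usage note is only a stand-in.

```python
import math

def golden_section_max(f, lo, hi, tol=1e-6):
    """Golden-section search for the maximiser of a unimodal f on [lo, hi].
    Each iteration shrinks the bracket by the inverse golden ratio."""
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0        # ~0.618
    a, b = lo, hi
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) > f(d):                           # maximum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                                     # maximum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2.0
```

For example, `golden_section_max(lambda x: -(x - 0.3) ** 2, 0.0, 1.0)` converges to 0.3; in the paper's setting `x` would be the weighting factor between the input and equalized images.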

65 citations


Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed image enhancement with the AGCCH method can perform well in brightness preservation, contrast enhancement, and detail preservation, and it is superior to previous state-of-the-art methods.

54 citations


Journal ArticleDOI
TL;DR: Experimental results prove that the proposed RDH-CE work is effective in improving image contrast, reducing visual image distortion, and increasing embedding capacity.
Abstract: Existing image-based reversible data hiding (RDH) methods tend to focus on increasing embedding capacity, but few consider keeping or improving visual image quality. Wu et al. proposed a new RDH method with contrast enhancement (RDH-CE) by pair-wisely expanding the histogram toward its lower and upper ends. RDH-CE is especially valuable in exploiting the details of poorly illustrated images, for which the visibility of image details is more important than just keeping PSNR high. However, obvious visual distortion appears when the embedding level gets high, and embedding capacity is relatively low when the embedding level is small. In this paper, Wu et al.'s work is improved from three perspectives, namely image contrast enhancement, visual distortion reduction, and embedding capacity increment. The image contrast is improved by making the histogram shifting process adaptive to the histogram distribution characteristics; the visual distortion is reduced by cutting off half the modification range of pixels induced in histogram pre-shifting; and the embedding capacity is increased by exploiting the pixel value ordering technique at the early stage of data embedment. Experimental results prove that the proposed work is effective in improving image contrast, reducing visual image distortion, and increasing embedding capacity. Highlights: Adaptive histogram shifting improves the image contrast for RDH methods. Bidirectional histogram pre-shifting alleviates visual image distortion. The pixel value ordering technique increases the embedding capacity. An alternative way can better evaluate the marked image quality.
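The histogram-shifting primitive that all of these RDH schemes build on can be shown with a minimal one-sided sketch: pixels above the peak bin are shifted up by one to vacate the adjacent bin, then each peak pixel carries one message bit. This is only the core idea, not the paper's pair-wise adaptive method; it also ignores overflow at the top of the intensity range, and the function names are illustrative.

```python
import numpy as np

def embed_bits(pixels, bits):
    """One-sided histogram shifting: shift values above the peak up by 1,
    then map each peak pixel to peak + bit. Returns (marked, peak)."""
    pixels = np.asarray(pixels, dtype=int)
    peak = np.bincount(pixels).argmax()
    out = np.where(pixels > peak, pixels + 1, pixels)
    carriers = np.flatnonzero(pixels == peak)
    assert len(bits) <= len(carriers), "not enough embedding capacity"
    for idx, bit in zip(carriers, bits):
        out[idx] = peak + bit
    return out, peak

def extract_bits(marked, peak, n_bits):
    """Read bits from the peak / peak+1 bins and undo the shift."""
    marked = np.asarray(marked, dtype=int)
    carriers = np.flatnonzero((marked == peak) | (marked == peak + 1))
    bits = [int(marked[i] == peak + 1) for i in carriers[:n_bits]]
    restored = np.where(marked > peak, marked - 1, marked)
    return bits, restored
```

Extraction is exact because shifted non-carrier pixels land at `peak + 2` or above, so the `peak`/`peak + 1` bins hold only message bits; repeating the process on successive peaks is what produces the equalization effect the abstract describes.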

51 citations


Journal ArticleDOI
TL;DR: An improved and simple approach for enhancement of dark and low-contrast satellite images, based on a knee function and gamma correction using the discrete wavelet transform with singular value decomposition (DWT–SVD), is proposed for feature quality enhancement.
Abstract: In this paper, an improved and simple approach for enhancement of dark and low-contrast satellite images, based on a knee function and gamma correction using the discrete wavelet transform with singular value decomposition (DWT–SVD), is proposed for feature quality enhancement. In addition, this method can also process high-resolution dark or very low-contrast images, and offers the best enhanced result by tuning the gamma parameter. The technique decomposes the input image into four frequency subbands using DWT, estimates the singular value matrix of the low–low (LL) subband image, and then computes the knee transfer function using gamma correction for further improvement of the LL component. Afterward, the processed LL band undergoes IDWT together with the unprocessed LH, HL, and HH subbands to generate the enhanced image. Although various histogram equalization approaches have been proposed in the literature, they tend to degrade overall image quality by exhibiting saturation artifacts in both low- and high-intensity regions. The proposed algorithm overcomes this problem using the knee function and gamma correction. Experimental results show that the proposed algorithm enhances the overall contrast and visibility of local details better than existing techniques.
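The gamma-correction building block the method tunes is simple to state in code. This sketch applies plain gamma correction to an 8-bit image; the paper combines it with a knee transfer function on the LL subband, which is not reproduced here, and the default gamma value is illustrative.

```python
import numpy as np

def gamma_correct(image, gamma=0.6):
    """Plain gamma correction on a [0, 255] image: normalize, raise to
    the power gamma, rescale. gamma < 1 brightens dark images; gamma > 1
    darkens bright ones."""
    norm = np.asarray(image, dtype=float) / 255.0
    return np.clip(255.0 * norm ** gamma, 0, 255).astype(np.uint8)
```

Black and white are fixed points of the curve, while mid-tones move: with `gamma=0.5` an input of 64 maps to 127, which is why gamma < 1 lifts dark satellite imagery.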

Proceedings ArticleDOI
23 Mar 2016
TL;DR: Computerized digital image processing needs efficient MRI images with less noise and improved contrast; histogram-based enhancement techniques are compared for improving the contrast of MRI images.
Abstract: Computerized digital image processing needs efficient MRI images with less noise and improved contrast. This work examines different histogram-based enhancement techniques, analyzes histogram equalization on magnetic resonance imaging (MRI) data, and calculates the metric parameters of the histogram techniques. Image enhancement is a procedure of changing or adjusting an image to make it more suitable for certain applications; it is used to improve the contrast ratio and brightness of an image, remove noise, and make content easier to identify. Magnetic resonance imaging is a medical technology that provides detailed information about human brain soft tissue, cancer, stroke, and various other diseases, helping doctors identify diseases easily. Because MRI has a very low contrast ratio, histogram equalization techniques are used to improve the contrast of MRI images. Histogram Equalization, Local Histogram Equalization, Adaptive Histogram Equalization, and Contrast Limited Adaptive Histogram Equalization techniques are compared.
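The baseline against which all the variants in that comparison are measured is classic global histogram equalization: map each gray level through the normalized cumulative histogram. A self-contained numpy sketch:

```python
import numpy as np

def equalize_hist(image, levels=256):
    """Global histogram equalization for an 8-bit image via the CDF."""
    image = np.asarray(image, dtype=np.uint8)
    hist = np.bincount(image.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                      # first non-empty bin
    # classic mapping: (cdf - cdf_min) / (N - cdf_min), scaled to [0, L-1]
    lut = np.round((cdf - cdf_min) / max(image.size - cdf_min, 1) * (levels - 1))
    lut = np.clip(lut, 0, levels - 1).astype(np.uint8)
    return lut[image]
```

On a two-level image such as `[[50, 50], [100, 100]]` the mapping stretches the occupied levels to the full range, producing `[[0, 0], [255, 255]]`; local and contrast-limited variants apply the same idea per region.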

Journal ArticleDOI
TL;DR: The obtained results show that the proposed approach outperforms traditional ear image contrast enhancement techniques, and increases the amount of detail in the ear image, and consequently improves the recognition rate.
Abstract: An ear biometric system based on artificial bees for ear image contrast enhancement is proposed. In the feature extraction stage, the scale invariant feature transform is used. The proposed approach was tested on three databases of ear biometrics: IIT Delhi, USTB 1, and USTB 2. The obtained results proved the superiority of the proposed approach in two of the three test databases. Ear recognition is a new biometric technology that competes with well-known biometric modalities such as fingerprint, face, and iris. However, this modality suffers from common image acquisition problems, such as changes in illumination, poor contrast, noise, and pose variation. Using 3D ear models reduces rotation, scale variation, and translation-related problems, but they are computationally expensive. This paper presents a new architecture of ear biometrics that aims at solving the acquisition problems of 2D ear images. The proposed system uses a new ear image contrast enhancement approach based on the gray-level mapping technique, with an artificial bee colony (ABC) algorithm as the optimizer. This technique yields better-contrasted 2D ear images. In the feature extraction stage, the scale invariant feature transform (SIFT) is used. For the matching phase, the Euclidean distance is adopted. The proposed approach was tested on three reference ear image databases, IIT Delhi, USTB 1, and USTB 2, and compared with traditional ear image contrast enhancement approaches: histogram equalization (HE) and contrast limited adaptive histogram equalization (CLAHE). The obtained results show that the proposed approach outperforms traditional ear image contrast enhancement techniques and increases the amount of detail in the ear image, consequently improving the recognition rate.

Journal ArticleDOI
01 Feb 2016-Optik
TL;DR: Experiments results show that the proposed method is able to enhance contrast of all type of color images without much affecting its visual and color information.

Proceedings ArticleDOI
01 Aug 2016
TL;DR: A baseline convolutional neural network (CNN) structure and image preprocessing methodology for improving facial expression recognition are presented; 20 different CNN models were trained and the performance of each network was verified with test images from five different databases.
Abstract: We present a baseline convolutional neural network (CNN) structure and image preprocessing methodology to improve facial expression recognition algorithms using CNNs. To analyze the most efficient network structure, we investigated four network structures that are known to show good performance in facial expression recognition. Moreover, we also investigated the effect of input image preprocessing methods. Five types of data input (raw, histogram equalization, isotropic smoothing, diffusion-based normalization, difference of Gaussian) were tested, and the accuracy was compared. We trained 20 different CNN models (4 networks × 5 data input types) and verified the performance of each network with test images from five different databases. The experimental results showed that a three-layer structure consisting of a simple convolutional and a max pooling layer with histogram equalization image input was the most efficient. We describe the detailed training procedure and analyze the result of the test accuracy based on considerable observation.


Journal ArticleDOI
TL;DR: The authors propose a new method of underwater image restoration and enhancement inspired by the dark channel prior from the image dehazing field: by estimating and rectifying the bright channel image, estimating the atmospheric light, and estimating and refining the transmittance image, underwater images are restored.
Abstract: This paper proposes a new method of underwater image restoration and enhancement inspired by the dark channel prior from the image dehazing field. First, we propose a bright channel prior for the underwater environment: by estimating and rectifying the bright channel image, estimating the atmospheric light, and estimating and refining the transmittance image, underwater images are restored. Second, to rectify color distortion, the restored images are equalized using a deduced histogram equalization. Experimental results show that the proposed method can effectively enhance the quality of underwater images.
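The bright channel is the dual of the dark channel used in dehazing: the per-pixel maximum over the color channels and a local patch. A minimal numpy sketch with edge padding (the patch size and the explicit double loop are illustrative; the atmospheric-light and transmittance steps from the abstract are omitted):

```python
import numpy as np

def bright_channel(image, patch=3):
    """Bright channel of an H x W x 3 image: max over R,G,B, then max
    over a patch x patch window around each pixel (edges padded)."""
    chan_max = np.asarray(image, dtype=float).max(axis=2)  # max over channels
    r = patch // 2
    padded = np.pad(chan_max, r, mode="edge")
    h, w = chan_max.shape
    out = np.empty_like(chan_max)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].max()
    return out
```

Because the window maximum propagates bright values, a single bright pixel dominates its whole neighborhood, which is exactly the behavior the prior exploits to estimate illumination.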

Proceedings ArticleDOI
01 Oct 2016
TL;DR: CryptoImg as mentioned in this paper is a modular privacy preserving image processing over encrypted images, which allows the users to delegate their image processing operations to remote servers without any privacy concerns, such as image adjustment, spatial filtering, edge sharpening, histogram equalization and others.
Abstract: Cloud computing services provide a scalable solution for the storage and processing of images and multimedia files. However, concerns about privacy risks prevent users from sharing their personal images with third-party services. In this paper, we describe the design and implementation of CryptoImg, a library of modular privacy preserving image processing operations over encrypted images. By using homomorphic encryption, CryptoImg allows the users to delegate their image processing operations to remote servers without any privacy concerns. Currently, CryptoImg supports a subset of the most frequently used image processing operations such as image adjustment, spatial filtering, edge sharpening, histogram equalization and others. We implemented our library as an extension to the popular computer vision library OpenCV. CryptoImg can be used from either mobile or desktop clients. Our experimental results demonstrate that CryptoImg is efficient while performing operations over encrypted images with negligible error and reasonable time overheads on the supported platforms.

Journal ArticleDOI
TL;DR: The proposed ARCS scheme is used for determining the ideal reference color for MM and for color image segmentation applications; the threshold determination reacts with less sensitivity to the context variations of the images tested.
Abstract: This paper proposes a novel automatic reference color selection (ARCS) scheme for the adaptive mathematical morphology (MM) method, specifically designed for color image segmentation applications. Its main advantages of being intuitive and simple have, over the past decade, contributed to the growing popularity of binary and gray-scale MM processing. However, the MM process typically neglects the details of reference color determination. Applying other ordering methods, which select only black as the reference color for sorting pixels, results in the problem that the scope of the distance measurement is not optimal. The proposed ARCS scheme is used for determining the ideal reference color for MM and for color image segmentation applications. In addition, we use both a 1D histogram-based modeling scheme binning from 3D color spaces, such as red–green–blue and hue–saturation–intensity, and 2D color models, such as (H, S), (Cb, Cr), and (I, By). According to the results of the quartile analysis, the threshold determination reacts with less sensitivity to the context variations of the images tested. The experiments focused on color-based image segmentation using the proposed ARCS scheme for color MM processing through a bottom–up scenario. To evaluate the system, four quantitative indices were utilized for an ARCS comparison using advanced segmentation methods. Cross validation with different system parameters and a comparison of the morphological gradient operation with different color models are also presented.

Journal ArticleDOI
Yi Li1, Yunfeng Zhang1, Aihui Geng1, Lihua Cao1, Juan Chen1 
TL;DR: The novel algorithm optimizes and improves a visible-image haze removal method, adapting it to the characteristics of fuzzy infrared images, and proposes a sectional plateau histogram equalization method that is capable of background suppression.
Abstract: Infrared images are fuzzy due to the special imaging technology of infrared sensors. To achieve contrast enhancement and obtain clear edge details from a fuzzy infrared image, we propose an efficient enhancement method based on an atmospheric scattering model and histogram equalization. The novel algorithm optimizes and improves a visible-image haze removal method, adapting it to the characteristics of fuzzy infrared images. First, an average filtering operation is applied to obtain a coarse estimate of the transmission rate. Then we obtain the defuzzed image through a self-adaptive transmission rate calculated from the statistics of the original infrared image. Finally, to deal with the low lighting of the defuzzed image, we propose a sectional plateau histogram equalization method that is capable of background suppression. Experimental results show that the performance and efficiency of the proposed algorithm are pleasing compared with four other algorithms in both subjective observation and objective quantitative evaluation. In addition, the proposed algorithm is competent to enhance infrared images for different applications under different circumstances.
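Plateau histogram equalization differs from plain equalization only in that the histogram is clipped at a plateau value before the CDF is built, which stops the dominant background bins of an infrared image from monopolizing the output range. A single-plateau numpy sketch (the paper's sectional variant chooses plateau values per histogram section; here one global value is used for illustration):

```python
import numpy as np

def plateau_equalize(image, plateau, levels=256):
    """Histogram equalization with the histogram clipped at `plateau`,
    suppressing dominant (background) bins before the CDF mapping."""
    image = np.asarray(image, dtype=np.uint8)
    hist = np.minimum(np.bincount(image.ravel(), minlength=levels), plateau)
    cdf = hist.cumsum().astype(float)
    lut = np.round(cdf / cdf[-1] * (levels - 1)).astype(np.uint8)
    return lut[image]
```

On an image where 90% of pixels share one background level, plain equalization would push that level to ~230, crushing the foreground; clipping the background bin at the plateau leaves most of the output range for the remaining levels.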

Journal ArticleDOI
TL;DR: Suitability of the proposed RGB YCbCr Processing method is validated by real-time implementation during the testing of the Autonomous Underwater Vehicle (AUV-150) developed indigenously by CSIR-CMERI.
Abstract: An RGB YCbCr Processing method (RYPro) is proposed for underwater images commonly suffering from low contrast and poor color quality. The degradation in image quality may be attributed to absorption and backscattering of light by suspended underwater particles. Moreover, as the depth increases, different colors are absorbed by the surrounding medium depending on the wavelengths. In particular, blue/green color is dominant in the underwater ambience which is known as color cast. For further processing of the image, enhancement remains an essential preprocessing operation. Color equalization is a widely adopted approach for underwater image enhancement. Traditional methods normally involve blind color equalization for enhancing the image under test. In the present work, processing sequence of the proposed method includes noise removal using linear and non-linear filters followed by adaptive contrast correction in the RGB and YCbCr color planes. Performance of the proposed method is evaluated and compared with three golden methods, namely, Gray World (GW), White Patch (WP), Adobe Photoshop Equalization (APE) and a recently developed method entitled “Unsupervised Color Correction Method (UCM)”. In view of its simplicity and computational ease, the proposed method is recommended for real-time applications. Suitability of the proposed method is validated by real-time implementation during the testing of the Autonomous Underwater Vehicle (AUV-150) developed indigenously by CSIR-CMERI.
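One of the baselines named above, Gray World (GW), is easy to state: scale each color channel so its mean matches the mean of all channels, which counters the blue/green cast of underwater scenes. A minimal numpy sketch of that baseline (not the proposed RYPro pipeline):

```python
import numpy as np

def gray_world(image):
    """Gray World colour correction: per-channel gain so that every
    channel mean equals the global mean, clipped back to 8-bit range."""
    img = np.asarray(image, dtype=float)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gain = channel_means.mean() / channel_means
    return np.clip(img * gain, 0, 255).astype(np.uint8)
```

A uniform image with channel values (100, 200, 50) is mapped to equal channel values near their mean, illustrating how the cast-dominant channel is attenuated while the absorbed channels are boosted.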

03 Oct 2016
TL;DR: A novel reversible data hiding (RDH) algorithm for digital images is proposed; evaluation results reveal that visual quality can be preserved after a large number of message bits have been embedded into the contrast-enhanced images, better than three specific MATLAB functions used for image contrast enhancement.
Abstract: Reversible data hiding (RDH) has been intensively studied in the signal processing community. To judge the performance of an RDH algorithm, hiding rate and marked image quality are important metrics. There is a trade-off between them because increasing the hiding rate often causes more distortion in the image content. To measure the distortion, the peak signal-to-noise ratio (PSNR) of the marked image is often calculated. In this letter, a novel RDH algorithm is proposed for digital images. Instead of trying to keep the PSNR value high, the proposed algorithm improves the contrast of the image to enhance its visual quality. The two highest bins in the histogram are selected for data embedding, so that histogram equalization can be performed by repeating the process. Side information is embedded along with the message bits into the host image so that the original image is completely recoverable. To the best of our knowledge, it is the first algorithm that achieves image contrast enhancement through data hiding. The proposed algorithm was evaluated on two sets of images to demonstrate its efficiency. The evaluation results reveal that visual quality can be preserved even after a large number of message bits have been embedded into the contrast-enhanced images, better than three specific MATLAB functions used for image contrast enhancement.

Journal ArticleDOI
TL;DR: A new gray-level (edge-preserving) image enhancement method based on the harmony search algorithm (HSA) is proposed; HSA is a recently introduced population-based algorithm inspired by the musical improvisation process in which a group of musicians play the pitches of their instruments seeking a pleasing harmony.
Abstract: For decades, image enhancement has been considered one of the most important aspects of computer science because of its influence on a number of fields, including but not limited to the medical, security, banking, and financial sectors. In this paper, a new gray-level (edge-preserving) image enhancement method based on the harmony search algorithm (HSA) is proposed. HSA is a recently introduced population-based algorithm inspired by the musical improvisation process in which a group of musicians play the pitches of their instruments seeking a pleasing harmony. HSA has been successfully applied to a wide variety of optimization problems. To evaluate the proposed HSA-based image enhancement method, 14 standard images from the literature are used. For comparative evaluation, the results for the 14 enhanced images produced by HSA are compared with two classical image enhancement methods (i.e., the Histogram Equalization algorithm and the Image Adjacent algorithm...

Journal ArticleDOI
TL;DR: The proposed SURF-based algorithm is compared with scale-invariant feature transform, histogram of oriented gradients, maximally stable extremal regions and DAISY and shows that the proposed algorithm is robust to different image variations and gives the highest recognition accuracy.
Abstract: Iris recognition system is one of the biometric systems in which the development is growing rapidly. In this paper, speeded up robust features (SURFs) are used for detecting and describing iris keypoints. For feature matching, simple fusion rules are applied at different levels. Contrast-limited adaptive histogram equalization (CLAHE) is applied on the normalized image and is compared with histogram equalization (HE) and adaptive histogram equalization (AHE). The aim is to find the best enhancement technique with SURF and to verify the necessity of iris image enhancement. The recognition accuracy in each case is calculated. Experimental results demonstrate that CLAHE is a crucial enhancement step for SURF-based iris recognition. More keypoints can be extracted with enhancement using CLAHE compared to HE and AHE. This alleviates the problem of feature loss and increases the recognition accuracy. The accuracies of recognition using left and right iris images are 99 and 99.5 %, respectively. Fusion of local distances and choosing suitable fusion rules affect the recognition accuracy, noticeably. The proposed SURF-based algorithm is compared with scale-invariant feature transform, histogram of oriented gradients, maximally stable extremal regions and DAISY. Results show that the proposed algorithm is robust to different image variations and gives the highest recognition accuracy.

Patent
17 Aug 2016
TL;DR: In this paper, a cascaded convolutional neural network based human face occlusion detection method was proposed, which consists of the following steps of 1) obtaining a video frame image, 2) performing normalization processing on the image, and copying and storing two images; 3) graying the image 1 and performing histogram equalization on the brightness imbalance image; 4) performing human head detection in a multi-scale sliding window form by adopting a three-level cascaded network, and storing window coordinates and sizes which meet the conditions; 5) performing clustering analysis
Abstract: The invention discloses a human face occlusion detection method based on cascaded convolutional neural networks. The method comprises the following steps: 1) obtaining a video frame image; 2) performing normalization processing on the image, then copying and storing two images; 3) converting image 1 to grayscale and applying histogram equalization when its brightness is unbalanced; 4) performing human head detection in a multi-scale sliding-window manner with a three-level cascaded network, and storing the coordinates and sizes of windows that meet the conditions; 5) performing cluster analysis on the window coordinates to obtain the target window position; 6) according to the obtained data, cropping the human head region from image 2 and performing normalization and brightness adjustment; and 7) performing eye and mouth detection in the multi-scale sliding-window manner with a two-level eye/mouth cascaded network applied regionally; if a set condition is not met, the eyes/mouth are judged to be occluded and an alarm is triggered. The method is robust to illumination and posture, suitable for various occlusion types, and achieves relatively high detection precision.

Journal ArticleDOI
TL;DR: The results show that the algorithm can significantly improve the visual impression of the image, the average gradient and information entropy are significantly improved and the running time is shortened.
Abstract: To address the characteristics of remote sensing images, namely low contrast, weak edges, and poorly resolved textural information, an image enhancement method that combines the nonsubsampled shearlet transform (NSST) and guided filtering is presented. First, histogram equalization is applied to the remote sensing image. Second, the image is decomposed into a low-frequency component and several high-frequency components by NSST. Then, a linear stretch is applied to the coefficients of the low-frequency component to improve the contrast of the original image; a threshold method is used to suppress noise in the high-frequency components, after which guided filtering is applied to them to improve detail and edge-gradient retention. Finally, the enhanced image is reconstructed by applying the inverse NSST to the processed low- and high-frequency components. The results show that the algorithm can significantly improve the visual impression of the image. Compared with algorithms proposed in recent years, the average gradient and information entropy are significantly improved and the running time is shortened.
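The linear stretch applied to the low-frequency NSST coefficients can be sketched as a simple range remapping (a generic sketch, not the authors' exact implementation):

```python
def linear_stretch(coeffs, new_min=0.0, new_max=255.0):
    """Linearly map coefficients from [min, max] onto [new_min, new_max]."""
    lo, hi = min(coeffs), max(coeffs)
    if hi == lo:                     # flat band: nothing to stretch
        return [new_min] * len(coeffs)
    scale = (new_max - new_min) / (hi - lo)
    return [new_min + (c - lo) * scale for c in coeffs]

low_band = [60.0, 80.0, 100.0, 120.0]   # narrow-range low-frequency band
print(linear_stretch(low_band))
```

Stretching only the low-frequency band raises global contrast while leaving the detail-carrying high-frequency bands to the thresholding and guided-filtering stages.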

Proceedings ArticleDOI
03 Mar 2016
TL;DR: The Histogram Equalization (HE) method is studied; the most important and highly cited HE techniques are reviewed, and their advantages, limitations and applications are discussed.
Abstract: Multimedia devices, mobile phones and other palmtop devices are an integral part of our day-to-day lives. Image and video processing plays a key role in modern information and communication technologies. Images captured by cameras need enhancement before they are used in various applications. Noise removal, contrast improvement, and brightness adjustment are common operations performed on raw images after capture. Researchers have proposed a variety of image enhancement techniques using numerical methods, statistics, and signal processing. In this paper, the Histogram Equalization (HE) method is studied. HE is an important image enhancement scheme, and many variations of HE have been proposed. We study the most important and highly cited HE techniques in this work and discuss their advantages, limitations and applications.
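The classical HE scheme that the surveyed variants build on maps each gray level through the image's normalized cumulative distribution function. A minimal sketch on a flat list of 8-bit pixels (names illustrative):

```python
def equalize_histogram(pixels, levels=256):
    """Global histogram equalization via the cumulative distribution function."""
    n = len(pixels)
    # Build the histogram of gray levels.
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution; cdf_min skips empty leading bins.
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    # Look-up table mapping each level through the normalized CDF.
    denom = max(n - cdf_min, 1)
    lut = [round((cdf[g] - cdf_min) / denom * (levels - 1)) for g in range(levels)]
    return [lut[p] for p in pixels]

# A low-contrast image concentrated in [100, 103] spreads across [0, 255].
flat = [100, 100, 101, 101, 102, 102, 103, 103]
print(equalize_histogram(flat))
```

The HE variants the survey covers (brightness-preserving, local, clipped) mostly differ in how this CDF mapping is constrained or localized.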

Journal ArticleDOI
01 Jan 2016
TL;DR: Results clearly indicate that color histogram techniques give better precision and recall in content-based image retrieval.
Abstract: In this paper, we present content-based image retrieval (CBIR) using the color histogram. Humans tend to differentiate images based on color, so color features are widely used in CBIR. The color histogram is the most common representation of color features, although it cannot entirely characterize an image. The results clearly indicate that color histogram techniques give better precision and recall.
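A common way to score color-histogram similarity in CBIR is histogram intersection over a quantized RGB histogram; the sketch below is a generic illustration of that idea, not the paper's exact algorithm, and all names are invented for the example:

```python
def color_histogram(pixels, bins=4):
    """Quantize each RGB channel into `bins` bins and count joint occurrences."""
    step = 256 // bins
    hist = [0] * (bins ** 3)
    for r, g, b in pixels:
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        hist[idx] += 1
    total = len(pixels)
    return [h / total for h in hist]   # normalize so images of any size compare

def intersection(h1, h2):
    """Histogram intersection: 1.0 means identical color distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

query = [(250, 10, 10)] * 8 + [(10, 250, 10)] * 2   # mostly red image
red   = [(240, 20, 20)] * 10                        # all-red image
blue  = [(10, 10, 240)] * 10                        # all-blue image
hq, hr, hb = (color_histogram(x) for x in (query, red, blue))
print(intersection(hq, hr), intersection(hq, hb))
```

Retrieval then ranks the database by this score against the query histogram; the red image scores far above the blue one here, as expected.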

Journal ArticleDOI
TL;DR: The authors found that ICHE not only improved performance compared with un-normalized images, but in most cases showed improvement compared with previous methods for correcting batch effects in the literature.
Abstract: Purpose: With the advent of digital slide scanning technologies and the potential proliferation of large repositories of digital pathology images, many research studies can leverage these data for biomedical discovery and to develop clinical applications. However, quantitative analysis of digital pathology images is impeded by batch effects generated by varied staining protocols and staining conditions of pathological slides. Methods: To overcome this problem, this paper proposes a novel, fully automated stain normalization method to reduce batch effects and thus aid research in digital pathology applications. The method, intensity centering and histogram equalization (ICHE), normalizes a diverse set of pathology images by first scaling the centroids of the intensity histograms to a common point and then applying a modified version of contrast-limited adaptive histogram equalization. Normalization was performed on two datasets of digitized hematoxylin and eosin (H&E) slides of different tissue slices from the same lung tumor, and one immunohistochemistry dataset of digitized slides created by restaining one of the H&E datasets. Results: The ICHE method was evaluated based on image intensity values, quantitative features, and the effect on downstream applications, such as computer-aided diagnosis. For comparison, three methods from the literature were reimplemented and evaluated using the same criteria. The authors found that ICHE not only improved performance compared with un-normalized images, but in most cases also showed improvement compared with previous methods for correcting batch effects in the literature. Conclusions: ICHE may be a useful preprocessing step in a digital pathology image processing pipeline.
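The "intensity centering" half of ICHE moves each image's histogram so its centroid lands at a common reference point, which is what reduces the batch effect before equalization. A simplified sketch (the authors' exact centering may differ; the shift-to-mean approach and all names here are illustrative assumptions):

```python
def center_intensity(pixels, target=128, levels=256):
    """Shift gray levels so the mean (histogram centroid) lands on `target`."""
    shift = target - round(sum(pixels) / len(pixels))
    # Clamp to the valid gray-level range after shifting.
    return [min(max(p + shift, 0), levels - 1) for p in pixels]

batch_a = [40, 50, 60]      # slide from a dark-staining batch
batch_b = [180, 190, 200]   # slide from a bright-staining batch
print(center_intensity(batch_a), center_intensity(batch_b))
```

Both slides end up with the same centered intensities, so the subsequent CLAHE-style equalization operates on comparable inputs across batches.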

Journal ArticleDOI
TL;DR: An algorithm combining an adaptive median filter and a bilateral filter is proposed that suppresses mixed noise containing Gaussian and impulsive noise while preserving important structures in the images, and enhances image contrast using gray-level morphology and contrast-limited adaptive histogram equalization.
Abstract: X-ray images play a very important role in medical diagnosis. To help doctors diagnose disease, several algorithms for enhancing X-ray images have been proposed over the past decades. However, enhancement can also amplify noise or distort the image, which is unfavorable for diagnosis. Therefore, appropriate techniques for noise suppression and contrast enhancement are necessary. This paper proposes an algorithm comprising two-stage filtering and contrast enhancement for X-ray images. By using an adaptive median filter and a bilateral filter, our method is able to suppress mixed noise containing Gaussian and impulsive noise while preserving important structures (e.g., edges) in the images. Afterwards, the contrast of the image is enhanced using gray-level morphology and contrast-limited adaptive histogram equalization (CLAHE). In the experiments, we evaluate the performance of noise removal and contrast enhancement separately, with quantitative indexes and visual results. For the mixed-noise case, our method achieves an average PSNR of 39.89 dB and an average SSIM of 0.9449; for contrast enhancement, our method enhances more detailed structures (e.g., edges, textures) than CLAHE.
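The PSNR figure quoted above is the standard peak signal-to-noise ratio between the clean reference and the filtered image; as a quick reference it can be computed as follows (generic sketch, not the paper's evaluation code):

```python
import math

def psnr(reference, processed, peak=255):
    """Peak signal-to-noise ratio in dB between two equally sized images."""
    mse = sum((a - b) ** 2 for a, b in zip(reference, processed)) / len(reference)
    if mse == 0:
        return float("inf")          # identical images
    return 10 * math.log10(peak ** 2 / mse)

clean = [10, 50, 90, 130]
noisy = [12, 48, 90, 131]
print(round(psnr(clean, noisy), 2))
```

Higher is better: the paper's 39.89 dB average indicates a very small residual mean-squared error after the two-stage filtering.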