
Showing papers on "Histogram equalization" published in 2020


Posted Content
TL;DR: An approach with highly reliable and comparable performance can boost fast and robust COVID-19 detection using chest X-ray images; visualization shows that the reliability of network performance improves significantly for segmented lung images.
Abstract: The use of computer-aided diagnosis for the reliable and fast detection of coronavirus disease (COVID-19) has become a necessity to prevent the spread of the virus during the pandemic and to ease the burden on the medical infrastructure. Chest X-ray (CXR) imaging has several advantages over other imaging techniques, as it is cheap, easily accessible, fast and portable. This paper explores the effect of various popular image enhancement techniques and reports the effect of each on detection performance. We have compiled the largest X-ray dataset, called COVQU-20, consisting of 18,479 normal, non-COVID lung opacity and COVID-19 CXR images. To the best of our knowledge, this is the largest public COVID-positive database. Ground-glass opacity is the common symptom reported in COVID-19 pneumonia patients, so a mixture of 3616 COVID-19, 6012 non-COVID lung opacity, and 8851 normal chest X-ray images was used to create this dataset. Five different image enhancement techniques were used to improve COVID-19 detection accuracy: histogram equalization, contrast limited adaptive histogram equalization, image complement, gamma correction, and the Balance Contrast Enhancement Technique. Six different Convolutional Neural Networks (CNNs) were investigated in this study. The gamma correction technique outperforms the other enhancement techniques in detecting COVID-19 from standard and segmented lung CXR images. The accuracy, precision, sensitivity, F1-score, and specificity in the detection of COVID-19 with gamma correction on CXR images were 96.29%, 96.28%, 96.29%, 96.28% and 96.27%, respectively. For segmented lung images, the accuracy, precision, sensitivity, F1-score, and specificity were 95.11%, 94.55%, 94.56%, 94.53% and 95.59%, respectively. The proposed approach, with its very high and consistent performance, can boost fast and robust COVID-19 detection using chest X-ray images.

336 citations
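
As an illustration of the enhancement techniques this paper compares, the following is a minimal sketch of four of the five (BCET omitted) using OpenCV and NumPy; the CLAHE clip limit, gamma value, and input path are illustrative assumptions, not the paper's settings.

```python
import cv2
import numpy as np

def enhance_variants(gray):
    """Return enhanced versions of an 8-bit grayscale CXR image."""
    he = cv2.equalizeHist(gray)                          # global histogram equalization
    clahe = cv2.createCLAHE(clipLimit=2.0,
                            tileGridSize=(8, 8)).apply(gray)  # CLAHE
    complement = 255 - gray                              # image complement (negative)
    gamma = 0.7                                          # assumed value; <1 brightens
    lut = np.array([(i / 255.0) ** gamma * 255
                    for i in range(256)], dtype=np.uint8)
    gamma_corr = cv2.LUT(gray, lut)                      # gamma correction via lookup table
    return {"he": he, "clahe": clahe,
            "complement": complement, "gamma": gamma_corr}

img = cv2.imread("cxr.png", cv2.IMREAD_GRAYSCALE)        # hypothetical input path
variants = enhance_variants(img)
```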


Journal ArticleDOI
TL;DR: It is demonstrated that adding two image preprocessing steps and generating a pseudo-color image play an important role in developing a deep learning CAD scheme for chest X-ray images to improve accuracy in detecting COVID-19 infected pneumonia.

260 citations


Journal ArticleDOI
TL;DR: The hybrid method combines image processing and deep learning for improved results, and the introduced method proves efficient and successful at diagnosing diabetic retinopathy from retinal fundus images.
Abstract: The objective of this study is to propose an alternative, hybrid solution method for diagnosing diabetic retinopathy from retinal fundus images. In detail, the hybrid method is based on using both image processing and deep learning for improved results. In medical image processing, reliable diabetic retinopathy detection from digital fundus images is known as an open problem for which alternative solutions need to be developed. In this context, manual interpretation of retinal fundus images requires a great deal of work, expertise, and processing time. So, doctors need support from imaging and computer vision systems, and the next step is widely associated with the use of intelligent diagnosis systems. The solution method proposed in this study employs image processing with histogram equalization and the contrast limited adaptive histogram equalization technique. Next, the diagnosis is performed by classification with a convolutional neural network. The method was validated using 400 retinal fundus images from the MESSIDOR database, and average values for different performance evaluation parameters were obtained: accuracy 97%, sensitivity (recall) 94%, specificity 98%, precision 94%, F-score 94%, and G-mean 95%. In addition to those results, a general comparison with some previous studies has also shown that the introduced method is efficient and successful enough at diagnosing diabetic retinopathy from retinal fundus images. By employing the related image processing techniques and deep learning for diagnosing diabetic retinopathy, the proposed method and the research results are valuable contributions to the associated literature.

165 citations


Journal ArticleDOI
TL;DR: Experimental results demonstrate that the suggested watermarking technique achieves high robustness against attacks in comparison to other schemes for medical images, verifying its robustness to various attacks while maintaining imperceptibility, security, and compression ratio.

160 citations


Journal ArticleDOI
TL;DR: A new classification of the main techniques of low-light image enhancement developed over the past decades is presented, dividing them into seven categories: gray transformation methods, histogram equalization methods, Retinex methods, frequency-domain methods, image fusion methods, defogging model methods and machine learning methods.
Abstract: Images captured under poor illumination conditions often exhibit characteristics such as low brightness, low contrast, a narrow gray range, and color distortion, as well as considerable noise, which seriously affect the subjective visual effect on human eyes and greatly limit the performance of various machine vision systems. The role of low-light image enhancement is to improve the visual effect of such images for the benefit of subsequent processing. This paper reviews the main techniques of low-light image enhancement developed over the past decades. First, we present a new classification of these algorithms, dividing them into seven categories: gray transformation methods, histogram equalization methods, Retinex methods, frequency-domain methods, image fusion methods, defogging model methods and machine learning methods. Then, all the categories of methods, including subcategories, are introduced in accordance with their principles and characteristics. In addition, various quality evaluation methods for enhanced images are detailed, and comparisons of different algorithms are discussed. Finally, the current research progress is summarized, and future research directions are suggested.

138 citations


Journal ArticleDOI
TL;DR: A novel method to segment the breast tumor via semantic classification and merging patches and achieved competitive results compared to conventional methods in terms of TP and FP, and produced good approximations to the hand-labelled tumor contours.

135 citations


Journal ArticleDOI
TL;DR: This work proposes a two-branch network to compensate for the globally distorted color and locally reduced contrast, respectively, and designs a compressed-histogram equalization to complement the data-driven deep learning, in which the parameters are fixed after training.
Abstract: Due to light absorption and scattering, captured underwater images usually contain severe color distortion and contrast reduction. To address these problems, we combine the merits of deep learning and conventional image enhancement technology to improve underwater image quality. We first propose a two-branch network to compensate for the globally distorted color and the locally reduced contrast, respectively. Adopting this global–local network greatly eases the learning problem, so that it can be handled by a lightweight network architecture. To cope with the complex and changeable underwater environment, we then design a compressed-histogram equalization to complement the data-driven deep learning, in which the parameters are fixed after training. The proposed compression strategy is able to generate vivid results without introducing over-enhancement or extra computing burden. Experiments demonstrate that our method significantly outperforms several state-of-the-art methods in both qualitative and quantitative evaluations.

110 citations
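
The paper's exact compression strategy is not spelled out in the abstract, so the sketch below shows a generic clipped-histogram equalization in the same spirit: histogram peaks are clipped (compressed) and the excess mass redistributed before the CDF is built, which limits over-enhancement. The clip fraction is an assumption.

```python
import numpy as np

def clipped_hist_equalize(gray, clip_frac=0.02):
    """Equalize an 8-bit image after clipping bins above clip_frac of all pixels."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    clip = clip_frac * gray.size                     # per-bin plateau (assumed fraction)
    excess = np.maximum(hist - clip, 0).sum()        # mass removed from the peaks
    hist = np.minimum(hist, clip) + excess / 256.0   # redistribute clipped mass uniformly
    cdf = np.cumsum(hist)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    lut = np.round(255 * cdf).astype(np.uint8)
    return lut[gray]
```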


Journal ArticleDOI
01 Mar 2020
TL;DR: A new method using the Local Binary Pattern (LBP) algorithm combined with advanced image processing techniques such as Contrast Adjustment, Bilateral Filter, Histogram Equalization and Image Blending is presented to address some of the issues hampering face recognition accuracy, improving the LBP codes and thus the accuracy of the overall face recognition system.
Abstract: Face Recognition is a computer application that is capable of detecting, tracking, identifying or verifying human faces from an image or video captured using a digital camera. Although a lot of progress has been made in the domain of face detection and recognition for security, identification and attendance purposes, there are still issues hindering progress toward reaching or surpassing human-level accuracy. These issues are variations in human facial appearance, such as varying lighting conditions, noise in face images, scale, pose, etc. This research paper presents a new method using the Local Binary Pattern (LBP) algorithm combined with advanced image processing techniques such as Contrast Adjustment, Bilateral Filter, Histogram Equalization and Image Blending to address some of the issues hampering face recognition accuracy, so as to improve the LBP codes and thus the accuracy of the overall face recognition system. Our experimental results show that our method is accurate, reliable and robust for a face recognition system that can be practically implemented in a real-life environment as an automatic attendance management system.

102 citations
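
For reference, here is a minimal sketch of the preprocessing-plus-LBP idea, with scikit-image's `local_binary_pattern` standing in for the paper's LBP variant; the bilateral-filter settings and LBP (P, R) parameters are assumptions.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_descriptor(gray):
    """Preprocess an 8-bit grayscale face crop and return its LBP histogram."""
    smoothed = cv2.bilateralFilter(gray, d=9, sigmaColor=75, sigmaSpace=75)  # denoise, keep edges
    equalized = cv2.equalizeHist(smoothed)               # lighting normalization
    lbp = local_binary_pattern(equalized, P=8, R=1, method="uniform")
    # Histogram of the uniform LBP codes (values 0..9 for P=8) is the descriptor
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return hist
```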


Journal ArticleDOI
TL;DR: A hybrid feature optimization algorithm along with a deep learning classifier is proposed to improve the performance of LULC classification, helping to predict wildlife habitat, deteriorating environmental quality, haphazard elements, etc.
Abstract: Land-use and land-cover (LULC) classification using remote sensing imagery plays a vital role in many environment modeling and land-use inventories. In this study, a hybrid feature optimization algorithm along with a deep learning classifier is proposed to improve the performance of LULC classification, helping to predict wildlife habitat, deteriorating environmental quality, haphazard elements, etc. LULC classification is assessed using the Sat 4, Sat 6 and Eurosat datasets. After the selection of remote-sensing images, normalization and histogram equalization methods are used to improve the quality of the images. Then, a hybrid optimization is accomplished by using the local Gabor binary pattern histogram sequence (LGBPHS), the histogram of oriented gradients (HOG) and Haralick texture features for feature extraction from the selected images. The benefits of this hybrid optimization are a high discriminative power and invariance to color and grayscale images. Next, a human group-based particle swarm optimization (PSO) algorithm is applied to select the optimal features, whose benefits are a fast convergence rate and ease of implementation. After selecting the optimal feature values, a long short-term memory (LSTM) network is utilized to classify the LULC classes. Experimental results showed that the human group-based PSO algorithm with an LSTM classifier effectively differentiates the LULC classes in terms of classification accuracy, recall and precision. A maximum improvement of 6.03% on Sat 4 and 7.17% on Sat 6 in LULC classification is reached when the proposed human group-based PSO with LSTM is compared to individual LSTM, PSO with LSTM, and Human Group Optimization (HGO) with LSTM. Moreover, an improvement of 2.56% in accuracy is achieved when the proposed method is compared to the existing models GoogleNet, Visual Geometry Group (VGG), AlexNet and ConvNet.

91 citations
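
As a sketch of two of the three hand-crafted feature families named above (HOG and Haralick texture; LGBPHS omitted), the snippet below uses scikit-image; all parameter choices are assumptions, since the abstract does not specify them.

```python
import numpy as np
from skimage.feature import hog, graycomatrix, graycoprops  # scikit-image >= 0.19 names

def handcrafted_features(gray):
    """gray: 2-D uint8 image. Returns a concatenated HOG + Haralick-style vector."""
    hog_vec = hog(gray, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2))                  # gradient-orientation features
    glcm = graycomatrix(gray, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    haralick = np.array([graycoprops(glcm, prop)[0, 0]     # a few co-occurrence statistics
                         for prop in ("contrast", "homogeneity",
                                      "energy", "correlation")])
    return np.concatenate([hog_vec, haralick])
```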


Journal ArticleDOI
TL;DR: Overall security analysis and experimental results show that the proposed chaos-based color multiple-image encryption technique achieves confidentiality and has resistance against classical attacks.

70 citations


Journal ArticleDOI
TL;DR: The hypothesis is presented that image quality, enhanced at the pre-processing stage, can play a significant role in improving the classification performance of any statistical approach.

Journal ArticleDOI
TL;DR: The results show that the proposed framework has superior performance compared to all existing methods, both qualitatively and quantitatively, in terms of contrast, information content, edge details, and structural similarity.

Journal ArticleDOI
TL;DR: A fully automated design employing optimal deep learning features, selected using Enhanced Crow Search and Differential Evolution, is proposed for classifying gastrointestinal infections, achieving significant improvement over preceding techniques and other neural network architectures.
Abstract: A fully automated design is proposed in this work employing optimal deep learning features for classifying gastrointestinal infections. Here, three prominent infections (ulcer, bleeding, polyp) and a healthy class are considered as class labels. In the initial stage, the contrast is improved by fusing bi-directional histogram equalization with top-hat filtering output. The resulting fusion images are then passed to a ResNet101 pre-trained model and trained once again using deep transfer learning. However, there are challenges involved in extracting deep learning features, including irrelevant information and redundancy. To mitigate this problem, we took advantage of two metaheuristic algorithms, Enhanced Crow Search and Differential Evolution. These algorithms are run in parallel to obtain optimal feature vectors. Following this, a maximum-correlation-based fusion approach is applied to fuse the optimal vectors from the previous step into an enhanced vector. This final vector is given as input to an Extreme Learning Machine (ELM) classifier for final classification. The proposed method is evaluated on a combined database. It accomplishes an accuracy of 99.46%, which shows significant improvement over preceding techniques and other neural network architectures.
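
A minimal sketch of the contrast-improvement step follows, with plain global histogram equalization standing in for the paper's bi-directional variant; the fusion weights and structuring-element size are assumptions.

```python
import cv2

def tophat_he_fusion(gray):
    """Fuse a histogram-equalized image with a top-hat filtering output."""
    equalized = cv2.equalizeHist(gray)                   # global contrast stretch
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel)  # bright local details
    # Weighted fusion of the two enhancement outputs (weights assumed)
    return cv2.addWeighted(equalized, 0.7, tophat, 0.3, 0)
```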

Journal ArticleDOI
TL;DR: A Modified Histogram Equalization based on Fuzzy Improved Particle Swarm Optimization (FIPSO) is proposed for dynamic histogram equalization, resolving this problem through image contrast enhancement; the results demonstrate that the proposed equalization technique attains the highest performance against existing techniques in terms of brightness and contrast.

Journal ArticleDOI
Cao Haijie, Liu Ning, Xu ji, Peng Jie, Liu Yuxin 
TL;DR: An adaptive inverse histogram equalization algorithm is proposed that significantly improves the visual effect of images with different gray-level distributions and enhances the details of different areas of the image to different degrees.
Abstract: In infrared images, when traditional histogram equalization processes the image, detail pixels are easily swamped by background pixels, resulting in images that are too bright or too dark. To address this situation, an adaptive inverse histogram equalization algorithm is proposed in this paper. The algorithm enhances image details through inverse statistics, adaptive threshold selection and segmentation mapping. Compared with the traditional histogram equalization algorithm, the inverse histogram equalization algorithm significantly improves the visual effect of images with different gray-level distributions and enhances the details of different areas of the image to different degrees. Moreover, while achieving better image processing results, the algorithm can still guarantee real-time performance and high efficiency by optimizing its calculation methods, and it is suitable for porting to FPGA hardware.

Journal ArticleDOI
TL;DR: Experimental results on four real hyperspectral data sets demonstrate that, compared with seven state-of-the-art visualization methods, the proposed approach performs best in improving image contrast and preserving details.

Journal ArticleDOI
01 Dec 2020-Optik
TL;DR: A novel particle swarm optimized texture-based histogram equalization (PSOTHE) technique is proposed to enhance the contrast of MRI brain images; the results show the superiority of the proposed method over other existing methods.

Journal ArticleDOI
01 Dec 2020
TL;DR: A comprehensive survey of different contrast enhancement techniques, exclusively in the spatial domain, is presented and compared; it illustrates that brightness preservation, entropy preservation, structural information loss, etc. must be taken into account during contrast enhancement.
Abstract: Image enhancement is essential for many image processing applications. The objective of image enhancement is to reveal hidden information that is not available to the observer due to the low and poor contrast present during image acquisition. The quality of the image can be raised by increasing the contrast. Contrast enhancement has found prominent application in various fields such as medical and satellite imaging systems, owing to the better visibility of features it provides. In this paper, a comprehensive survey of different contrast enhancement techniques, exclusively in the spatial domain, is presented and the techniques are compared. The survey illustrates that brightness preservation, entropy preservation, structural information loss, etc. must be taken into account during contrast enhancement. To validate the algorithms in both qualitative and quantitative terms, researchers have used various performance measures. Among all the performance measures, entropy is the benchmark for evaluating the algorithms. Different databases are used by researchers to analyse the performance of their algorithms, with the USC-SIPI database being the most prominent in usage.

Journal ArticleDOI
TL;DR: PRATIT with the usage of histogram equalization during image preprocessing and data augmentation surpasses the state-of-the-art results and achieves a testing accuracy of 78.52%.
Abstract: Emotions are spontaneous feelings that are accompanied by fluctuations in facial muscles, which lead to facial expressions. Categorization of these facial expressions as one of the seven basic emotions - happy, sad, anger, disgust, fear, surprise, and neutral - is the intention behind emotion recognition. This is a difficult problem because of the complexity of human expressions, but it is gaining immense popularity due to its vast number of applications, such as predicting behavior. Using deeper architectures has enabled researchers to achieve state-of-the-art performance in emotion recognition. Motivated by the aforementioned discussion, in this paper we propose a model named PRATIT, used for facial expression recognition, that uses specific image preprocessing steps and a Convolutional Neural Network (CNN) model. In PRATIT, preprocessing techniques such as grayscaling, cropping, resizing, and histogram equalization have been used to handle variations in the images. CNNs achieve better accuracy with larger datasets, but there are no freely accessible datasets with adequate data for emotion recognition with deep architectures. Therefore, to handle the aforementioned issue, we have applied data augmentation in PRATIT, which helps in further fine-tuning the model for performance improvement. The paper presents the effects of histogram equalization and data augmentation on the performance of the model. PRATIT, with the usage of histogram equalization during image preprocessing and data augmentation, surpasses the state-of-the-art results and achieves a testing accuracy of 78.52%.
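
The following sketch illustrates PRATIT-style preprocessing and augmentation in TensorFlow/Keras; the crop box, target size, and augmentation ranges are illustrative assumptions, not values from the paper.

```python
import cv2
import tensorflow as tf

def preprocess(img_bgr, box=(0, 0, 200, 200), size=(48, 48)):
    """Grayscale, crop, resize and equalize one face image (box is assumed)."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)   # grayscaling
    x, y, w, h = box
    face = gray[y:y + h, x:x + w]                      # cropping (face region)
    face = cv2.resize(face, size)                      # resizing
    return cv2.equalizeHist(face)                      # histogram equalization

# Data augmentation applied to batches during training (ranges assumed)
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomZoom(0.1),
])
```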

Journal ArticleDOI
TL;DR: A new fuzzy clustering based subhistogram scheme using the discrete cosine transform (DCT) for contrast enhancement is proposed, which reveals not only clearer features along with contrast enhancement, but also a remarkably more natural look in the images.
Abstract: Histogram equalization is a well-known method for enhancing contrast and image features. However, in some cases it causes over-enhancement and hence destroys the natural appearance of the image. Therefore, in this article, a new fuzzy clustering based subhistogram scheme using the discrete cosine transform (DCT) for contrast enhancement is proposed. To preserve the distinctive appearance of the image, the histogram is divided and separate histogram equalization is performed on each subhistogram. How to divide the histogram and how to calculate the number of parts for the division are the major problems that directly affect the quality of the output image. The proposed fuzzy-DCT scheme includes automatic calculation of the number of parts into which the histogram is divided. The histogram division is based on a density function, and the histogram separation is computed in such a way that each main peak falls into a different segment. The proposed scheme consists of four stages. The first stage is the automatic calculation of the number of clusters for the image brightness levels. The second stage is the clustering of brightness levels by the fuzzy c-means clustering method, utilizing the given transfer function of histogram equalization. In the third stage, contrast enhancement is computed on each individual cluster separately. In the final stage, the DCT is applied to the resulting image of the third stage for better contrast and brightness preservation. The simulation results of the proposed scheme reveal not only clearer features along with contrast enhancement, but also a remarkably more natural look in the images.
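
A compact sketch of the fuzzy-clustering/subhistogram idea follows: gray levels are clustered with a small histogram-weighted fuzzy c-means, the histogram is split at the cluster boundaries, and each sub-histogram is equalized into its own gray-level range. The DCT stage and the automatic choice of cluster count are omitted; K, m, and the iteration count are assumptions.

```python
import numpy as np

def fcm_subhist_equalize(gray, K=3, m=2.0, iters=50):
    levels = np.arange(256, dtype=np.float64)
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    centers = np.linspace(0, 255, K)
    for _ in range(iters):                          # histogram-weighted fuzzy c-means
        d = np.abs(levels[None, :] - centers[:, None]) + 1e-9
        u = 1.0 / d ** (2 / (m - 1))
        u /= u.sum(axis=0, keepdims=True)           # memberships, shape (K, 256)
        w = (u ** m) * hist[None, :]
        centers = (w * levels[None, :]).sum(1) / w.sum(1)
    labels = u.argmax(axis=0)                       # hard split of the gray levels
    lut = np.zeros(256, dtype=np.uint8)
    for k in np.unique(labels):
        idx = np.where(labels == k)[0]              # contiguous gray-level segment
        cdf = np.cumsum(hist[idx])
        if cdf[-1] == 0:
            continue
        # Equalize each cluster within its own gray-level span
        lut[idx] = (idx.min() + (idx.max() - idx.min())
                    * cdf / cdf[-1]).astype(np.uint8)
    return lut[gray]
```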

Journal ArticleDOI
01 Feb 2020
TL;DR: A novel optimized brightness-preserving histogram equalization approach is introduced that preserves the mean brightness and improves the contrast of low-contrast images using the cuckoo search algorithm, outperforming other state-of-the-art methods in terms of both objective and subjective quality evaluation.
Abstract: This paper introduces a novel optimized brightness-preserving histogram equalization approach to preserve the mean brightness and improve the contrast of low-contrast images using the cuckoo search algorithm. The traditional histogram equalization scheme induces extreme enhancement and brightness change, resulting in an abnormal appearance. The proposed method utilizes plateau limits to modify the histogram of the image. In this method, the histogram is divided into two sub-histograms, on which histogram statistics are exploited to obtain the plateau limits. The sub-histograms are equalized and modified based on the plateau limits calculated by the cuckoo search optimization technique. To demonstrate the effectiveness of the proposed method, a comparison with different histogram processing techniques is presented. The proposed method outperforms other state-of-the-art methods in terms of both objective and subjective quality evaluation.
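
A minimal sketch of this pipeline follows, with one fixed plateau limit per sub-histogram in place of the cuckoo-search-optimized limits: the histogram is split at the mean brightness, each half is clipped at its plateau, and the halves are equalized into their own output ranges.

```python
import numpy as np

def bbhe_plateau(gray, plateau_frac=1.0):
    """Brightness-preserving bi-histogram equalization with fixed plateau clipping."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    mean = int(round(gray.mean()))                 # split point preserves mean brightness
    lut = np.zeros(256, dtype=np.uint8)
    for lo, hi in ((0, mean), (mean + 1, 255)):
        if lo > hi:
            continue
        sub = hist[lo:hi + 1]
        if sub.sum() == 0:
            continue
        plateau = plateau_frac * sub[sub > 0].mean()   # stand-in for the optimized limit
        sub = np.minimum(sub, plateau)                 # clip the sub-histogram peaks
        cdf = np.cumsum(sub)
        # Equalize each half into its own output range [lo, hi]
        lut[lo:hi + 1] = (lo + (hi - lo) * cdf / cdf[-1]).astype(np.uint8)
    return lut[gray]
```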

Journal ArticleDOI
TL;DR: Experimental results show that the proposed logarithmic histogram modification technique preserves the natural appearance of the image and yields better perceptual quality as compared to the state-of-the-art techniques.
Abstract: In this paper, a new logarithmic histogram modification technique for image contrast enhancement with naturalness preservation is proposed. The traditional histogram equalization scheme usually causes extreme contrast enhancement, which results in an unnatural look and artifacts. The proposed technique first enhances the contrast of the image globally through an addition- and logarithmic-law-based modification scheme; thereafter, the local details of the image are emphasized through coefficient scaling directly in the compressed domain using the discrete cosine transform. The proposed method can enhance the image contrast uniformly with fewer parameters without losing its basic features. Experimental results show that the proposed method preserves the natural appearance of the image and yields better perceptual quality compared to the state-of-the-art techniques.

Journal ArticleDOI
TL;DR: This article introduces a novel optimally selected plateau limit (PL)-based histogram modification framework that preserves the brightness and improves the contrast of an image effectively without introducing absurd visual deterioration, unnatural contrast effects, and structural artifacts.
Abstract: This article introduces a novel optimally selected plateau limit (PL)-based histogram modification framework. This approach preserves the brightness and improves the contrast of an image effectively without introducing absurd visual deterioration, unnatural contrast effects, or structural artifacts. It also handles weak illumination situations, such as backlighting effects and nonuniform illumination of images, without introducing any undesirable artifacts. The proposed method, based on subhistogram and clipping operations, utilizes the PLs to modify the histogram of the image before applying the histogram equalization approach. The salp swarm algorithm (SSA)-based optimization technique is incorporated to compute the optimal PLs, or adaptive weighted limits. To prove the efficiency of the proposed algorithm, a comparative study is conducted with well-known histogram-based processing techniques and state-of-the-art methods in the literature. Furthermore, several well-recognized evaluation parameters are considered to compare the proposed framework with other existing methods.

Journal ArticleDOI
TL;DR: Experimental results show that the proposed algorithm can control the brightness of traffic sign images, accurately extract image regions of interest, and complete the automatic recognition of traffic signs.
Abstract: Because of the hierarchical significance of traffic sign images, traditional methods do not effectively control and extract the brightness and features of layered images. Therefore, an automatic recognition algorithm for traffic signs based on a convolutional neural network is proposed in this paper. First, the histogram equalization method is used to pre-process the traffic sign images, enhancing the details of the images and improving their contrast. Then, the traffic sign images are recognized by a convolutional neural network, and the large-scale structure of the information in the traffic sign images is obtained by using a hierarchical significance detection method based on graphical models. Next, the areas of interest in the traffic sign images are extracted by using the hierarchical significance model. Finally, the Softmax classifier is selected to classify the input feature images to realize the automatic recognition of traffic signs. Experimental results show that the proposed algorithm can control the brightness of traffic sign images, accurately extract image regions of interest, and complete the automatic recognition of traffic signs.

Journal ArticleDOI
TL;DR: This paper focuses on detecting and identifying three major defects in concrete structure, including delamination, air void and moisture through characterizing their reflection signal’s polarity and image shape patterns through GPR data processing algorithms.

Proceedings ArticleDOI
01 Jul 2020
TL;DR: A convolutional neural network architecture with different training strategies is proposed for detecting pneumonia on CXRs and distinguishing its bacterial and viral subforms; the proposed ensemble model increased the representation of inflammatory patterns from bacteria and viruses with few epochs needed to train the deep CNNs.
Abstract: Pneumonia is one of the leading causes of childhood mortality worldwide. Chest X-ray (CXR) can aid the diagnosis of pneumonia, but in the case of low-contrast images, it is important to include computational tools to aid specialists. Deep learning is an alternative because it can identify patterns automatically, even in low-resolution images. We propose herein a convolutional neural network (CNN) architecture with different training strategies towards detecting pneumonia on CXRs and distinguishing its bacterial and viral subforms. We also evaluated different image pre-processing methods to improve the classification. This study used CXRs from pediatric patients from a public pneumonia CXR dataset. The pre-processing methods evaluated were image cropping and histogram equalization. To classify the images, we adopted the VGG16 CNN and replaced its fully-connected layers with a customized multilayer perceptron. With this architecture, we proposed and evaluated four different training strategies: original CXR image (baseline), chest-cavity-cropped image (A), histogram-equalized segmented image (B), and an ensemble of strategies A and B (C). The performance was assessed by the area under the ROC curve (AUC) with 95% confidence interval (CI), accuracy, sensitivity, specificity, and F1-score. The ensemble model C yielded the highest performances: AUC of 0.97 (CI: 0.96–0.99) for classifying pneumonia vs. normal, and AUC of 0.91 (CI: 0.88–0.94) for classifying bacterial vs. viral cases. All models that used pre-processed images showed higher AUC than the baseline, which used the original CXR image. Image cropping and histogram equalization removed irrelevant information from the exam, enhanced contrast, and made it possible to identify fine CXR texture details. The proposed ensemble model increased the representation of inflammatory patterns from bacteria and viruses with few epochs needed to train the deep CNNs. Clinical relevance: deep learning can identify complex radiographic patterns in low-contrast images due to pneumonia and distinguish its bacterial and viral subforms. The correlation of imaging with lab results could accelerate the adoption of complementary exams to confirm the disease's cause.
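
A sketch of the classification architecture described above (ImageNet-pretrained VGG16 base with its fully-connected layers replaced by a custom MLP) in Keras; the hidden sizes, dropout rate, and input shape are assumptions.

```python
import tensorflow as tf

# Pretrained convolutional base; the original fully-connected top is dropped
base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=(224, 224, 3))
base.trainable = False                      # train only the new MLP head first

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation="relu"),   # customized MLP (size assumed)
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # pneumonia vs. normal head
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
```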

Journal ArticleDOI
TL;DR: Visual and analytical results on various test images affirm that the proposed algorithm outperforms all other existing algorithms and provides a clear path to analyse the fine details and infected portions effectively.
Abstract: Contrast enhancement methods are used to reduce image noise and increase the contrast of structures of interest. In medical images where the distinction between normal and abnormal tissue is subtle, accurate interpretation may become difficult if noise levels are relatively high. To provide accurate interpretation and a clearer image for the observer with reduced noise levels, a novel adaptive fuzzy gray-level difference histogram equalization algorithm is proposed. First, the gray-level differences of an input image are calculated using binary similar patterns. Then, the gray-level differences are fuzzified in order to handle the uncertainties present in the input image. Following the fuzzification, a fuzzy gray-level difference clip limit is computed to curb insignificant contrast enhancement. Finally, the fuzzy clipped histogram is equalized to obtain the contrast-enhanced MR medical image. The proposed algorithm is analysed both visually and analytically to evaluate its performance against other existing algorithms. Visual and analytical results on various test images affirm that the proposed algorithm outperforms all other existing algorithms and provides a clear path to analyse the fine details and infected portions effectively.

Journal ArticleDOI
TL;DR: A novel image contrast enhancement technique is presented that uses exposure-based energy curve equalization (ECE) with a plateau limit, in a primary departure from the usual histogram equalization process for contrast enhancement.
Abstract: This paper presents a novel image contrast enhancement technique that uses exposure-based energy curve equalization (ECE) with a plateau limit. In a primary departure from the usual histogram equalization process for contrast enhancement, the proposed approach uses an energy curve instead. The energy curve is computed based on a modified Hopfield neural network architecture, which incorporates spatial context information. The computed energy curve is clipped with a plateau limit set to the average of the energy curve. An exposure threshold is computed and used to divide the clipped energy curve. The two resulting energy curves are equalized independently, and the final enhanced image is generated by integrating the images obtained by transforming the equalized energy curves. The performance of the proposed method is evaluated on a variety of low-contrast images. The subjective and objective evaluations of the proposed method are compared with various histogram equalization (HE) based methods and other state-of-the-art methods to exemplify its effectiveness.

Journal ArticleDOI
14 Oct 2020
TL;DR: This paper presents a fully automated neural framework for real-time melanoma detection, where a low-dimensional, computationally inexpensive but highly discriminative descriptor for skin lesions is derived from local patterns of Gabor-based entropic features.
Abstract: The American Cancer Society has recently stated that malignant melanoma is the most serious type of skin cancer, and it is almost 100% curable if detected and treated early. In this paper, we present a fully automated neural framework for real-time melanoma detection, where a low-dimensional, computationally inexpensive but highly discriminative descriptor for skin lesions is derived from local patterns of Gabor-based entropic features. The input skin image is first preprocessed by filtering and histogram equalization to reduce noise and enhance image quality. Automatic thresholding by the optimized formula of Otsu's method is used for segmenting out lesion regions from the surrounding healthy skin regions. Then, an extensive set of optimized Gabor-based features is computed to characterize the segmented skin lesions. Finally, the normalized features are fed into a trained multilevel neural network to classify each pigmented skin lesion in a given dermoscopic image as benign or melanoma. The proposed detection methodology is successfully tested and validated on the public PH2 benchmark dataset using 5-fold cross-validation, achieving 97.5%, 100% and 96.87% in terms of accuracy, sensitivity and specificity, respectively, which demonstrates competitive performance compared with several recent state-of-the-art methods.
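
A sketch of the segmentation and feature steps follows: Otsu thresholding separates the lesion, and Shannon entropy of Gabor filter-bank responses forms the descriptor. The frequencies, orientations, darker-than-skin assumption, and the specific entropy computation are assumptions about the paper's Gabor-based entropic features.

```python
import numpy as np
from skimage.filters import threshold_otsu, gabor

def gabor_entropy_features(gray):
    """Return a 12-D entropic Gabor descriptor for the Otsu-segmented lesion."""
    mask = gray < threshold_otsu(gray)          # assume lesions are darker than skin
    feats = []
    for freq in (0.1, 0.2, 0.4):                # assumed filter-bank frequencies
        for theta in np.linspace(0, np.pi, 4, endpoint=False):
            real, _ = gabor(gray, frequency=freq, theta=theta)
            resp = np.abs(real[mask])           # responses inside the lesion only
            p, _ = np.histogram(resp, bins=32, density=True)
            p = p[p > 0] / p[p > 0].sum()
            feats.append(-(p * np.log2(p)).sum())   # Shannon entropy of responses
    return np.array(feats)
```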

Book ChapterDOI
01 Jan 2020
TL;DR: This paper compares different histogram equalization techniques and identifies the best among them, i.e., the one whose enhanced output retains the maximum information about the disease.
Abstract: Magnetic Resonance Imaging (MRI) is a medical imaging technique used for analyzing and diagnosing diseases such as cancer or a brain tumor. In order to analyze these diseases, physicians require good-contrast scanned images obtained from MRI for better treatment, as such images contain the maximum information about the disease. MRI images are low-contrast images, which makes diagnosis difficult; hence, better localization of image pixels is required. Histogram equalization techniques help to enhance the image so that it gives improved visual quality and a well-defined problem. The contrast and brightness are enhanced in such a way that the image does not lose its original information and the brightness is preserved. In this paper, we compare the different equalization techniques, which are critically studied and elaborated. Various parameters are calculated and tabulated, and finally the best technique among them is identified.