Image Enhancement using ESRGAN for CNN based X-Ray Classification
14 Dec 2022, pp. 1965-1969
TL;DR: In this paper, a Super-Resolution GAN (SRGAN) is used to super-resolve the fine textures of the image by upscaling it, and ESRGAN is then used to enhance the images further.
Abstract: Artificial intelligence models command a tremendous amount of computational power, performing a variety of complex mathematical calculations and recognizing objects. Over the past six to seven years, the computing power used by record-breaking AI models has doubled every few months. One way in which these models learn and progress is through deep learning. Deep learning lets machines learn without direct supervision, granting them the ability to recognize speech, translate, and even make data-driven decisions; the approach is inspired by the architecture of the human brain and how we learn. An important deep learning setting in which machines are trained on unlabeled data is called unsupervised learning, and Generative Adversarial Networks are a powerful class of neural networks used for it. For image-quality improvement, the Super-Resolution GAN (SRGAN), proposed by researchers at Twitter, plays a key role: its aim is to super-resolve the fine textures of an image by upscaling it. To enhance images further, ESRGAN is used. As the name suggests, ESRGAN (Enhanced SRGAN) builds on SRGAN and extends it with additional components.
TL;DR: In this paper, the authors present an approach for the segmentation and classification of brain tumors using an Entropy- and CLAHE (Contrast Limited Adaptive Histogram Equalization)-based Intuitionistic Fuzzy Method with Deep Learning.
Abstract: The inner area of the human brain is where abnormal brain cells gather when they become a mass. These masses are known as brain tumors and, depending on their location and size, can produce a wide range of symptoms. Accurate segmentation and classification of brain tumors are critical for effective diagnosis and treatment planning. In this paper, we present a novel approach to the segmentation and classification of brain tumors using an Entropy- and CLAHE (Contrast Limited Adaptive Histogram Equalization)-based Intuitionistic Fuzzy Method with Deep Learning, a technique that combines several image processing and machine learning algorithms to enhance image quality. By applying entropy-based techniques to an image, we can identify and highlight its most significant features or patterns. Our study provides a thorough evaluation of the proposed technique and compares its performance against other methods, showing its effectiveness and potential for real-world applications. Our method separates tumor regions from healthy tissue and provides accurate results in comparison with traditional methods. The results of this study demonstrate the potential of this approach to improve the diagnosis and treatment of brain tumors and provide a foundation for future research in this field. The proposed technique holds significant promise for improving the prognosis and quality of life of patients with brain tumors.
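The entropy and CLAHE components this abstract names can be illustrated with a minimal NumPy sketch. Note the simplification: global histogram equalization stands in for CLAHE, which additionally operates on local tiles with a clip limit to bound contrast amplification; the synthetic low-contrast image is purely illustrative.

```python
import numpy as np

def shannon_entropy(img):
    """Shannon entropy (bits/pixel) of an 8-bit grayscale image, the kind
    of entropy measure used to quantify how much detail an image carries."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def hist_equalize(img):
    """Global histogram equalization via the cumulative distribution.
    CLAHE refines this by equalizing local tiles with a clip limit;
    this global variant is a simplified stand-in for illustration."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255)
    return lut.astype(np.uint8)[img]

# Low-contrast synthetic "scan": intensities squeezed into [100, 140].
rng = np.random.default_rng(0)
img = rng.integers(100, 141, size=(64, 64), dtype=np.uint8)
eq = hist_equalize(img)
print(img.min(), img.max())  # narrow input range
print(eq.min(), eq.max())    # stretched to the full 0-255 range
```

The equalized image uses the full intensity range, which is what makes low-contrast tissue boundaries easier for a downstream segmenter to pick up.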
08 Sep 2018
TL;DR: ESRGAN, presented in this paper, improves the perceptual loss by using features before activation, which provides stronger supervision for brightness consistency and texture recovery; it won first place in the PIRM2018-SR Challenge (region 3).
Abstract: The Super-Resolution Generative Adversarial Network (SRGAN) is a seminal work that is capable of generating realistic textures during single image super-resolution. However, the hallucinated details are often accompanied by unpleasant artifacts. To further enhance the visual quality, we thoroughly study three key components of SRGAN: network architecture, adversarial loss, and perceptual loss, and improve each of them to derive an Enhanced SRGAN (ESRGAN). In particular, we introduce the Residual-in-Residual Dense Block (RRDB) without batch normalization as the basic network building unit. Moreover, we borrow the idea from relativistic GAN to let the discriminator predict relative realness instead of the absolute value. Finally, we improve the perceptual loss by using the features before activation, which provides stronger supervision for brightness consistency and texture recovery. Benefiting from these improvements, the proposed ESRGAN achieves consistently better visual quality, with more realistic and natural textures, than SRGAN, and won first place in the PIRM2018-SR Challenge (region 3) with the best perceptual index. The code is available at https://github.com/xinntao/ESRGAN.
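The relativistic-GAN idea borrowed here can be sketched numerically: instead of judging each image as absolutely real or fake, the discriminator asks whether real images score higher than fake ones on average. The toy NumPy snippet below illustrates the relativistic average discriminator loss on hand-picked critic scores; it is an illustration of the concept, not the authors' training code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ra_discriminator_loss(c_real, c_fake):
    """Relativistic average discriminator loss (as used in ESRGAN-style
    training): each real score is compared to the mean fake score, and
    vice versa. c_real / c_fake are raw (pre-sigmoid) critic outputs."""
    d_real = sigmoid(c_real - c_fake.mean())  # real relative to avg fake
    d_fake = sigmoid(c_fake - c_real.mean())  # fake relative to avg real
    return float(-(np.log(d_real).mean() + np.log(1.0 - d_fake).mean()))

# Toy critic outputs: real images score higher than fakes, so the
# relative ordering is correct and the loss is small.
c_real = np.array([2.0, 1.5, 2.5])
c_fake = np.array([-1.0, -0.5, -1.5])
print(ra_discriminator_loss(c_real, c_fake))
# Swapping the roles (fakes "look more real") yields a much larger loss.
print(ra_discriminator_loss(c_fake, c_real))
```

The generator is trained against the mirrored objective, so it benefits from gradients on both real and generated samples rather than on generated samples alone.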
TL;DR: This paper investigates how to apply a convolutional neural network (CNN)-based algorithm to a chest X-ray dataset to classify pneumonia, and shows that data augmentation is generally an effective way for all three evaluated algorithms to improve performance.
Abstract: Medical image classification plays an essential role in clinical treatment and teaching tasks. However, traditional methods have reached a performance ceiling, and using them requires much time and effort for extracting and selecting classification features. The deep neural network is an emerging machine learning method that has proven its potential for different classification tasks; notably, the convolutional neural network dominates with the best results on varying image classification tasks. However, medical image datasets are hard to collect because labeling them requires a great deal of professional expertise. Therefore, this paper investigates how to apply a convolutional neural network (CNN)-based algorithm to a chest X-ray dataset to classify pneumonia. Three techniques are evaluated through experiments: a linear support vector machine classifier with local rotation- and orientation-free features, transfer learning on two convolutional neural network models (Visual Geometry Group, i.e., VGG16, and InceptionV3), and a capsule network trained from scratch. Data augmentation is applied as a preprocessing step to all three methods. The experimental results show that data augmentation is generally an effective way for all three algorithms to improve performance. Also, transfer learning is a more useful classification method on a small dataset than a support vector machine with oriented FAST and rotated BRIEF (ORB) features or a capsule network. In transfer learning, retraining specific features on the new target dataset is essential to improve performance, and the second most important factor is a network complexity that matches the scale of the dataset.
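The data augmentation that this study finds effective for all three methods can be sketched with simple geometric transforms. This is a hypothetical minimal NumPy version for illustration; the paper's actual augmentation pipeline is not specified here, and real pipelines typically also jitter brightness, contrast, and zoom.

```python
import numpy as np

def augment(img, rng):
    """One random geometric augmentation of a 2-D image: horizontal flip,
    a random 90-degree rotation, and a small circular shift (np.roll is
    used as a cheap stand-in for a zero-padded translation)."""
    if rng.random() < 0.5:
        img = img[:, ::-1]                      # random horizontal flip
    img = np.rot90(img, k=rng.integers(0, 4))   # rotation by 0/90/180/270 deg
    dy, dx = rng.integers(-3, 4, size=2)        # small translation offsets
    img = np.roll(img, (dy, dx), axis=(0, 1))   # circular shift
    return img

rng = np.random.default_rng(42)
x = np.arange(64 * 64, dtype=np.float32).reshape(64, 64)  # stand-in X-ray
batch = [augment(x, rng) for _ in range(8)]               # 8 augmented views
print(len(batch), batch[0].shape)
```

Each call produces a differently transformed copy of the same underlying image, effectively multiplying a small labeled dataset, which is exactly why augmentation helps most when expert-labeled medical images are scarce.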
TL;DR: The main idea is to collect all possible COVID-19 images that exist as of the writing of this research and to use a GAN to generate more images, helping detect the virus from the available X-ray images with the highest possible accuracy.
Abstract: The coronavirus (COVID-19) pandemic is putting healthcare systems across the world under unprecedented and increasing pressure, according to the World Health Organization (WHO). With advances in computer algorithms, and especially artificial intelligence, detecting this type of virus in its early stages will support fast recovery and help relieve the pressure on healthcare systems. In this paper, a GAN with deep transfer learning for coronavirus detection in chest X-ray images is presented. The lack of COVID-19 datasets, especially of chest X-ray images, is the main motivation of this study. The main idea is to collect all possible COVID-19 images that exist as of the writing of this research and to use a GAN to generate more images, helping detect the virus from the available X-ray images with the highest possible accuracy. The dataset used in this research was collected from different sources and is available for researchers to download and use. The collected dataset contains 307 images across four classes: COVID-19, normal, pneumonia bacterial, and pneumonia virus. Three deep transfer models are selected for investigation: AlexNet, GoogLeNet, and ResNet18. These models are chosen because their architectures contain a small number of layers, which reduces the complexity, memory consumption, and execution time of the proposed model. Three scenarios are tested in the paper: the first includes four classes from the dataset, the second includes three classes, and the third includes two classes. All scenarios include the COVID-19 class, as detecting it is the main target of this research.
In the first scenario, GoogLeNet is selected as the main deep transfer model, achieving 80.6% testing accuracy. In the second scenario, AlexNet is selected as the main deep transfer model, achieving 85.2% testing accuracy, while in the third scenario, which includes two classes (COVID-19 and normal), GoogLeNet is selected as the main deep transfer model, achieving 100% testing accuracy and 99.9% validation accuracy. All performance measurements support the results obtained in this research.
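The testing-accuracy figures above (and the precision/sensitivity/specificity/F1 numbers reported by related studies in this list) follow the standard confusion-matrix definitions. A generic textbook computation with toy labels, not the paper's evaluation code:

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Standard confusion-matrix metrics for a binary task
    (here, positive class = COVID-19, negative = normal)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = int(((y_true == 1) & (y_pred == 1)).sum())  # true positives
    tn = int(((y_true == 0) & (y_pred == 0)).sum())  # true negatives
    fp = int(((y_true == 0) & (y_pred == 1)).sum())  # false positives
    fn = int(((y_true == 1) & (y_pred == 0)).sum())  # false negatives
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)  # a.k.a. recall
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "precision": precision,
        "sensitivity": sensitivity,
        "specificity": tn / (tn + fp),
        "f1": 2 * precision * sensitivity / (precision + sensitivity),
    }

# Toy predictions for a two-class (COVID-19 vs. normal) scenario.
m = binary_metrics([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1])
print(m)
```

With these toy labels (one missed positive, one false alarm out of six cases), every metric comes out to 2/3, which is why papers report the full set: on imbalanced medical data the metrics can diverge sharply even when accuracy looks high.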
18 Jun 2018
TL;DR: An unpaired learning method for image enhancement based on the framework of two-way generative adversarial networks (GANs), with several modifications that significantly improve the stability of GAN training for this application.
Abstract: This paper proposes an unpaired learning method for image enhancement. Given a set of photographs with the desired characteristics, the proposed method learns a photo enhancer which transforms an input image into an enhanced image with those characteristics. The method is based on the framework of two-way generative adversarial networks (GANs) with several improvements. First, we augment the U-Net with global features and show that it is more effective; this global U-Net acts as the generator in our GAN model. Second, we improve the Wasserstein GAN (WGAN) with an adaptive weighting scheme, with which training converges faster and better and is less sensitive to parameters than WGAN-GP. Finally, we propose individual batch normalization layers for the generators in two-way GANs, which helps the generators better adapt to their own input distributions. Altogether, these changes significantly improve the stability of GAN training for our application. Both quantitative and visual results show that the proposed method is effective for enhancing images.
TL;DR: An approach with very reliable and comparable performance that will boost fast and robust COVID-19 detection using chest X-ray images; the reliability of network performance is significantly improved for segmented lung images, as observed using a visualization technique.
Abstract: The use of computer-aided diagnosis for reliable and fast detection of coronavirus disease (COVID-19) has become a necessity to prevent the spread of the virus during the pandemic and to ease the burden on medical infrastructure. Chest X-ray (CXR) imaging has several advantages over other imaging techniques, as it is cheap, easily accessible, fast, and portable. This paper explores the effect of various popular image enhancement techniques on detection performance. We have compiled the largest X-ray dataset, called COVQU-20, consisting of 18,479 normal, non-COVID lung opacity, and COVID-19 CXR images; to the best of our knowledge, this is the largest public COVID-positive database. Ground-glass opacity is the symptom most commonly reported in COVID-19 pneumonia patients, and a mixture of 3616 COVID-19, 6012 non-COVID lung opacity, and 8851 normal chest X-ray images was used to create this dataset. Five different image enhancement techniques (histogram equalization, contrast limited adaptive histogram equalization, image complement, gamma correction, and the Balance Contrast Enhancement Technique) were used to improve COVID-19 detection accuracy, and six different Convolutional Neural Networks (CNNs) were investigated. The gamma correction technique outperforms the other enhancement techniques in detecting COVID-19 from both standard and segmented lung CXR images. The accuracy, precision, sensitivity, F1-score, and specificity for COVID-19 detection with gamma correction on CXR images were 96.29%, 96.28%, 96.29%, 96.28%, and 96.27%, respectively; for segmented lung images they were 95.11%, 94.55%, 94.56%, 94.53%, and 95.59%, respectively. The proposed approach, with very high and comparable performance, will boost fast and robust COVID-19 detection using chest X-ray images.
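The gamma correction that this study found most effective is a simple per-pixel power-law mapping, out = 255 * (in/255)^gamma. A minimal NumPy sketch (the gamma values below are illustrative, not those tuned in the paper):

```python
import numpy as np

def gamma_correct(img, gamma):
    """Gamma correction on an 8-bit image via a 256-entry lookup table.
    gamma < 1 brightens dark regions (useful for underexposed CXRs);
    gamma > 1 darkens them; gamma = 1 leaves the image unchanged."""
    lut = (255.0 * (np.arange(256) / 255.0) ** gamma).round().astype(np.uint8)
    return lut[img]

img = np.full((4, 4), 64, dtype=np.uint8)  # a uniformly dark patch
print(gamma_correct(img, 0.5)[0, 0])       # -> 128 (brightened)
print(gamma_correct(img, 2.0)[0, 0])       # -> 16  (darkened)
print(gamma_correct(img, 1.0)[0, 0])       # -> 64  (identity)
```

Because the mapping is nonlinear, it lifts faint soft-tissue detail in dark regions without saturating already-bright bone, which is a plausible reason it edges out global contrast stretching on CXR images.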