Journal ArticleDOI

Anomaly detection of fundus images

01 Dec 2020-Vol. 1716, Iss: 1, pp 012044
TL;DR: In this article, a method for early detection of glaucoma from retinal images using a deep neural network (NN) is presented, where a CNN is used for feature extraction from the fundus image and a fully connected feed-forward NN (FFNN) is used to determine the level of glaucoma. The accuracy is compared among different architectures.
Abstract: Research states that at least 2.2 billion people worldwide have a vision impairment or blindness. There are many causes of blindness; the leading ones include cataract, age-related macular degeneration, glaucoma, diabetic retinopathy, corneal opacity, and trachoma. Among these, glaucoma is one of the main causes of blindness. Glaucoma is an asymptomatic disease that causes irreversible vision loss. This paper presents a method for early detection of glaucoma from retinal images using a deep Neural Network (NN). Among retinal imaging modalities, fundus images are widely accepted. In the deep learning approach, a Convolutional Neural Network (CNN) is used for feature extraction from the fundus image, and a fully connected feed-forward NN (FFNN) is used to determine the level of glaucoma. For comparison, typical image processing algorithms are also used for feature extraction from fundus images, with classification by the FFNN. The accuracy is compared among the different architectures. TensorFlow and the Python language are used for this research.
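The abstract's pipeline ends in a fully connected feed-forward classifier over CNN-extracted features. As a rough illustration of that final stage only, here is a minimal pure-Python sketch of an FFNN forward pass with a softmax output over assumed severity levels; the layer sizes, weights, and class labels are hypothetical, not from the paper, and the CNN feature extractor is omitted entirely:

```python
import math

def dense(x, W, b):
    """Fully connected layer: out[j] = sum_i W[j][i] * x[i] + b[j]."""
    return [sum(wji * xi for wji, xi in zip(row, x)) + bj
            for row, bj in zip(W, b)]

def relu(v):
    return [max(0.0, z) for z in v]

def softmax(v):
    m = max(v)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in v]
    s = sum(exps)
    return [e / s for e in exps]

def classify(features, W1, b1, W2, b2):
    """Hypothetical FFNN head: one hidden ReLU layer, then a softmax
    over assumed glaucoma levels (e.g. normal / suspect / glaucoma)."""
    hidden = relu(dense(features, W1, b1))
    return softmax(dense(hidden, W2, b2))
```

In practice the weights would come from training (e.g. in TensorFlow, as the paper uses); here any weights passed in are placeholders, and the input `features` stands in for a CNN feature vector.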
References
Journal ArticleDOI
TL;DR: A novel deep learning-based interactive segmentation framework is proposed by incorporating CNNs into a bounding-box and scribble-based segmentation pipeline, together with a weighted loss function that accounts for network- and interaction-based uncertainty during fine-tuning.
Abstract: Convolutional neural networks (CNNs) have achieved state-of-the-art performance for automatic medical image segmentation. However, they have not demonstrated sufficiently accurate and robust results for clinical use. In addition, they are limited by the lack of image-specific adaptation and the lack of generalizability to previously unseen object classes (a.k.a. zero-shot learning). To address these problems, we propose a novel deep learning-based interactive segmentation framework by incorporating CNNs into a bounding-box and scribble-based segmentation pipeline. We propose image-specific fine-tuning to make a CNN model adaptive to a specific test image, which can be either unsupervised (without additional user interactions) or supervised (with additional scribbles). We also propose a weighted loss function considering network- and interaction-based uncertainty for the fine-tuning. We applied this framework to two applications: 2-D segmentation of multiple organs from fetal magnetic resonance (MR) slices, where only two types of these organs were annotated for training; and 3-D segmentation of brain tumor core (excluding edema) and whole brain tumor (including edema) from different MR sequences, where only the tumor core in one MR sequence was annotated for training. Experimental results show that: 1) our model is more robust in segmenting previously unseen objects than state-of-the-art CNNs; 2) image-specific fine-tuning with the proposed weighted loss function significantly improves segmentation accuracy; and 3) our method leads to accurate results with fewer user interactions and less user time than traditional interactive segmentation methods.

582 citations
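The fine-tuning loss in the abstract above weights each pixel by network uncertainty and by user interactions. A minimal sketch of one plausible reading, assuming a binary cross-entropy base loss, a fixed high weight for user-scribbled pixels, and entropy-based weights elsewhere — all specific choices here are assumptions for illustration, not the paper's exact formulation:

```python
import math

def pixel_weight(p, scribbled, w_scribble=5.0):
    """Scribbled pixels get a fixed high weight (the user's label is
    trusted); unscribbled pixels are weighted by the network's own
    uncertainty, taken here as the binary entropy of its prediction p."""
    if scribbled:
        return w_scribble
    eps = 1e-12
    return -(p * math.log(p + eps) + (1 - p) * math.log(1 - p + eps))

def weighted_bce(probs, labels, scribbles):
    """Weighted mean of per-pixel binary cross-entropy terms."""
    eps = 1e-12
    total, wsum = 0.0, 0.0
    for p, y, s in zip(probs, labels, scribbles):
        w = pixel_weight(p, s)
        ce = -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))
        total += w * ce
        wsum += w
    return total / max(wsum, eps)
```

Confidently correct pixels contribute little; confidently wrong pixels, and any pixel the user corrects with a scribble, dominate the update.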

Journal ArticleDOI
TL;DR: The proposed AG-CNN approach significantly advances the state of the art in glaucoma detection; the features are also visualized as the localized pathological area, which is further incorporated into the AG-CNN structure to enhance glaucoma detection performance.
Abstract: Glaucoma is one of the leading causes of irreversible vision loss. Many approaches have recently been proposed for automatic glaucoma detection based on fundus images. However, none of the existing approaches can efficiently remove the high redundancy in fundus images for glaucoma detection, which may reduce the reliability and accuracy of glaucoma detection. To avoid this disadvantage, this paper proposes an attention-based convolutional neural network (CNN) for glaucoma detection, called AG-CNN. Specifically, we first establish a large-scale attention-based glaucoma (LAG) database, which includes 11,760 fundus images labeled as either positive glaucoma (4,878) or negative glaucoma (6,882). Among the 11,760 fundus images, attention maps for 5,824 images are further obtained from ophthalmologists through a simulated eye-tracking experiment. Then, a new structure of AG-CNN is designed, including an attention prediction subnet, a pathological area localization subnet, and a glaucoma classification subnet. The attention maps are predicted in the attention prediction subnet to highlight the salient regions for glaucoma detection, trained in a weakly supervised manner. In contrast to other attention-based CNN methods, the features are also visualized as the localized pathological area, which is further incorporated into our AG-CNN structure to enhance glaucoma detection performance. Finally, experimental results from testing on our LAG database and another public glaucoma database show that the proposed AG-CNN approach significantly advances the state of the art in glaucoma detection.

148 citations
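AG-CNN's attention maps highlight salient regions so that downstream features emphasize them. The core reweighting step can be illustrated as an element-wise product of a feature map with an attention map in [0, 1]; this sketch is a deliberate simplification (the real attention maps are predicted by a subnet across channels and scales, which is omitted here):

```python
def apply_attention(features, attention):
    """Element-wise reweighting of a 2-D feature map by an attention map
    with values in [0, 1]: salient regions keep their activations,
    redundant regions are suppressed toward zero."""
    return [[f * a for f, a in zip(frow, arow)]
            for frow, arow in zip(features, attention)]
```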

Journal ArticleDOI
TL;DR: The proposed JointRCNN model outperforms state-of-the-art methods on both the optic disc and cup segmentation task and the glaucoma detection task, and is promising for use in glaucoma screening.
Abstract: Objective: The purpose of this paper is to propose a novel algorithm for joint optic disc and cup segmentation, which aids glaucoma detection. Methods: By assuming the shapes of the cup and disc regions to be elliptical, we propose an end-to-end region-based convolutional neural network for joint optic disc and cup segmentation (referred to as JointRCNN). Atrous convolution is introduced to boost the performance of the feature extraction module. In JointRCNN, a disc proposal network (DPN) and a cup proposal network (CPN) are proposed to generate bounding box proposals for the optic disc and cup, respectively. Given the prior knowledge that the optic cup is located within the optic disc, a disc attention module is proposed to connect the DPN and CPN, where a suitable bounding box of the optic disc is first selected and then propagated forward as the basis for optic cup detection in our proposed network. After obtaining the disc and cup regions, which are the inscribed ellipses of the corresponding detected bounding boxes, the vertical cup-to-disc ratio is computed and used as an indicator for glaucoma detection. Results: Comprehensive experiments clearly show that our JointRCNN model outperforms state-of-the-art methods on both the optic disc and cup segmentation task and the glaucoma detection task. Conclusion: Joint optic disc and cup segmentation, which utilizes the connection between the optic disc and cup, could improve the performance of optic disc and cup segmentation. Significance: The proposed method improves the accuracy of glaucoma detection. It is promising for use in glaucoma screening.

85 citations
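JointRCNN takes the disc and cup regions to be the inscribed ellipses of the detected bounding boxes and then computes the vertical cup-to-disc ratio. Since the vertical diameter of an axis-aligned inscribed ellipse equals the box height, the ratio reduces to a ratio of box heights. A minimal sketch — the function name and the (x_min, y_min, x_max, y_max) box format are assumptions, not from the paper:

```python
def vertical_cdr(disc_box, cup_box):
    """Vertical cup-to-disc ratio from axis-aligned bounding boxes given
    as (x_min, y_min, x_max, y_max). The inscribed ellipse of such a box
    has vertical diameter equal to the box height, so the ratio is just
    cup height over disc height."""
    disc_h = disc_box[3] - disc_box[1]
    cup_h = cup_box[3] - cup_box[1]
    return cup_h / disc_h
```

A larger ratio indicates more cupping of the optic nerve head; the decision threshold for flagging glaucoma is a clinical choice outside this sketch.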

Proceedings ArticleDOI
29 Jul 2010
TL;DR: The component analysis method and region-of-interest (ROI) based segmentation are used for the detection of the disc, and an active contour is used to plot the boundary accurately.
Abstract: Glaucoma is a disease characterized by elevated intraocular pressure (IOP). This increased IOP damages the optic nerve axons at the back of the eye, with eventual deterioration of vision. The cup-to-disc ratio (CDR) is a key indicator for the detection of glaucoma. Existing approaches determined the CDR using manual threshold analysis, which is fairly time-consuming. This paper proposes two methods to extract the disc automatically: the component analysis method and region-of-interest (ROI) based segmentation. For the cup, the component analysis method is used. An active contour is then used to plot the boundary accurately. The method has been tested on numerous image data sets from Madurai Eye Care Centre, Coimbatore.

54 citations
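The disc extraction above starts from locating the optic disc, which in fundus images is typically the brightest region. As a crude, hypothetical stand-in for that localization step, one can threshold the intensity image and take the bounding box of the bright pixels; the paper's actual component analysis and active-contour refinement are not reproduced here:

```python
def bright_roi(image, thresh):
    """Return the bounding box (x_min, y_min, x_max, y_max) of all pixels
    brighter than `thresh` in a 2-D grayscale image (list of rows), or
    None if no pixel exceeds the threshold. A toy proxy for intensity-based
    optic disc localization."""
    coords = [(x, y) for y, row in enumerate(image)
                     for x, v in enumerate(row) if v > thresh]
    if not coords:
        return None
    xs = [x for x, _ in coords]
    ys = [y for _, y in coords]
    return (min(xs), min(ys), max(xs), max(ys))
```

A real implementation would additionally reject small bright artifacts (exudates, reflections) before refining the disc boundary.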