Generating Fundus Fluorescence Angiography Images from Structure Fundus Images Using Generative Adversarial Networks

29 Jan 2020 - pp 424-439
TL;DR: A conditional generative adversarial network (GAN)-based method is proposed to directly learn the mapping between structure fundus images and fundus fluorescence angiography images; local saliency maps, which define each pixel's importance, are used to define a novel saliency loss in the GAN cost function.
Abstract: Fluorescein angiography can provide a map of retinal vascular structure and function and is commonly used in ophthalmic diagnosis; however, this imaging modality may pose risks to patients. To help physicians reduce the potential risks of diagnosis, an image translation method is adopted. In this work, we propose a conditional generative adversarial network (GAN)-based method to directly learn the mapping between structure fundus images and fundus fluorescence angiography images. Moreover, local saliency maps, which define each pixel's importance, are used to define a novel saliency loss in the GAN cost function. This facilitates more accurate learning of small-vessel and fluorescein-leakage features.
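The listing does not spell out the exact form of the saliency loss; a minimal sketch of one plausible formulation, in which the saliency map simply re-weights a per-pixel L1 term inside the usual conditional-GAN generator objective, is given below (PyTorch; the tensor shapes, the weighting scheme, and the value of lam are assumptions, not the paper's published definition):

```python
import torch
import torch.nn.functional as F

def saliency_weighted_l1(fake_fa, real_fa, saliency):
    """L1 reconstruction loss weighted per pixel by a saliency map.

    fake_fa, real_fa: (N, 1, H, W) generated / ground-truth FA images.
    saliency:         (N, 1, H, W) per-pixel importance weights, e.g.
                      emphasising small vessels and leakage regions.
    """
    return (saliency * (fake_fa - real_fa).abs()).mean()

def generator_loss(d_fake_logits, fake_fa, real_fa, saliency, lam=100.0):
    # Non-saturating conditional-GAN adversarial term plus the
    # saliency-weighted reconstruction term (lam is an assumed weight).
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    return adv + lam * saliency_weighted_l1(fake_fa, real_fa, saliency)
```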


Citations
Book ChapterDOI
Wanyue Li, Yi He, Wen Kong, Jing Wang, Guohua Deng, Yiwei Chen, Guohua Shi
27 Sep 2021
TL;DR: The authors propose "SequenceGAN", a sequential generative adversarial network (GAN) that generates FA sequences of critical phases from a structure fundus image, with a feature-space loss applied so that the generated FA sequences have a better visual effect.
Abstract: Fundus fluorescein angiography (FA) is an indispensable procedure for investigating the integrity of the retinal vasculature. Fluorescein angiograms progress through five phases: pre-arterial, arterial, arteriovenous, venous, and late, and each phase can be an important diagnostic basis for retina-related disease. However, the FA imaging technique may pose risks to patients. To help physicians reduce the potential risks of diagnosis, we propose "SequenceGAN", a novel sequential generative adversarial network that generates FA sequences of critical phases from a structure fundus image. Moreover, a feature-space loss is applied to ensure the generated FA sequences have a better visual effect. The proposed method was qualitatively and quantitatively compared with existing FA image generation methods and image translation methods. The experimental results indicate that the proposed model performs better at generating the retinal vasculature, leakage structures, and the characteristics of each angiogram phase, and thus shows potential value for application in clinical diagnosis.
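The paper names a "feature-space loss" but the listing gives no formula; a common realisation is a perceptual loss computed on activations of a fixed pretrained network. A sketch under that assumption (PyTorch; the choice of VGG-16 and the truncation point are illustrative only):

```python
import torch
from torchvision import models

# Fixed ImageNet-pretrained VGG-16, truncated at an intermediate layer,
# used only as a feature extractor (never updated during training).
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features[:16].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def feature_space_loss(fake_fa, real_fa):
    """L1 distance between deep features of generated and real FA frames.

    Inputs are (N, 3, H, W); single-channel angiograms would first be
    repeated to three channels (an assumption of this sketch).
    """
    return (vgg(fake_fa) - vgg(real_fa)).abs().mean()
```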

2 citations

Book ChapterDOI
Wanyue Li, Yi He, Jing Wang, Wen Kong, Yiwei Chen, Guohua Shi
04 Oct 2020
TL;DR: The proposed unsupervised learning-based fluorescein leakage detection method achieves higher accuracy in leakage detection and can process an image within 1 s, which has great potential significance for clinical diagnosis.
Abstract: Detecting high-intensity retinal leakage in fundus fluorescein angiography (FA) images is a key step in diagnosing and treating retina-related diseases. In this study, we propose an unsupervised learning-based fluorescein leakage detection method that produces leakage detection results without the need for manual annotation. In this method, a model is trained to generate a normal-looking FA image from the input abnormal FA image; the leakage is then detected by computing the difference between the abnormal image and the generated normal image. The proposed method was validated on publicly available datasets and compared qualitatively and quantitatively with state-of-the-art leakage detection methods. The comparison results indicate that the proposed method achieves higher accuracy in leakage detection and can process an image within a very short time (1 s), which has great potential significance for clinical diagnosis.
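The detection step described above, subtracting the generated normal-looking FA image from the abnormal input and keeping large positive residuals, can be sketched in a few lines (NumPy; the threshold value and the absence of post-processing are assumptions of this sketch):

```python
import numpy as np

def detect_leakage(abnormal_fa, generated_normal_fa, thresh=0.15):
    """Leakage mask from the difference of abnormal and generated-normal FA.

    Both inputs are float arrays in [0, 1] with identical shape.
    Leakage appears as hyper-fluorescence, so only positive residuals
    (abnormal brighter than the generated normal image) are kept.
    """
    residual = np.clip(abnormal_fa - generated_normal_fa, 0.0, None)
    return residual > thresh  # boolean leakage mask
```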

1 citation

Journal ArticleDOI
TL;DR: The authors propose a network that generates multi-frame high-resolution fundus fluorescein angiography (FA) images; by combining supervised and unsupervised learning, it achieves better quantitative and qualitative results than either method alone.
Abstract: Fundus fluorescein angiography (FA) can be used to diagnose fundus diseases by observing dynamic fluorescein changes that reflect vascular circulation in the fundus. As FA may pose a risk to patients, generative adversarial networks have been used to convert retinal fundus images into fluorescein angiography images. However, the available methods focus on generating FA images of a single phase, and the resolution of the generated FA images is low, making them unsuitable for accurately diagnosing fundus diseases. We propose a network that generates multi-frame high-resolution FA images. This network consists of a low-resolution GAN (LrGAN) and a high-resolution GAN (HrGAN): LrGAN generates low-resolution, full-size FA images with global intensity information, and HrGAN takes the FA images generated by LrGAN as input to generate multi-frame high-resolution FA patches. Finally, the FA patches are merged into full-size FA images. Our approach combines supervised and unsupervised learning methods and achieves better quantitative and qualitative results than using either method alone. Structural similarity (SSIM), normalized cross-correlation (NCC), and peak signal-to-noise ratio (PSNR) were used as quantitative metrics to evaluate the performance of the proposed method. The experimental results show that our method achieves better quantitative results, with a structural similarity of 0.7126, a normalized cross-correlation of 0.6799, and a peak signal-to-noise ratio of 15.77. In addition, ablation experiments demonstrate that using a shared encoder and a residual channel attention module in HrGAN helps the generation of high-resolution images. Overall, our method performs better at generating retinal vessel details and leakage structures across multiple critical phases, showing promising clinical diagnostic value.
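For reference, the three quantitative metrics the abstract reports can be computed as follows (scikit-image for SSIM and PSNR, NCC written out directly; the data_range of 1.0 assumes images normalised to [0, 1]):

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def ncc(a, b):
    """Normalized cross-correlation between two same-shape float images."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def evaluate(generated, reference):
    """SSIM / NCC / PSNR between a generated FA image and its reference."""
    return {
        "SSIM": structural_similarity(reference, generated, data_range=1.0),
        "NCC": ncc(reference, generated),
        "PSNR": peak_signal_noise_ratio(reference, generated, data_range=1.0),
    }
```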
TL;DR: In this article, a DL-based framework is proposed to automatically segment the foveal avascular zone (FAZ) in challenging FA scans from clinical routine; the approach mimics the workflow of retinal experts by using additional RV labels as guidance during training.
Abstract: In clinical routine, ophthalmologists frequently analyze the shape and size of the foveal avascular zone (FAZ) to detect and monitor retinal diseases. To extract those parameters, the contours of the FAZ need to be segmented, which is normally achieved by analyzing the retinal vasculature (RV) around the macula in fluorescein angiograms (FA). Computer-aided segmentation methods based on deep learning (DL) can automate this task. However, current approaches for segmenting the FAZ are often tailored to a specific dataset or require manual initialization. Furthermore, they do not take the variability and challenges of clinical FA into account, which are often of low quality and difficult to analyze. In this paper, we propose a DL-based framework to automatically segment the FAZ in challenging FA scans from clinical routine. Our approach mimics the workflow of retinal experts by using additional RV labels as guidance during training. Hence, our model is able to produce RV segmentations simultaneously. We minimize the annotation work by using a multi-modal approach that leverages already available public datasets of color fundus pictures (CFPs) and their respective manual RV labels. Our experimental evaluation on two datasets with FA from 1) clinical routine and 2) large multicenter clinical trials shows that adding weak RV labels as guidance during training significantly improves FAZ segmentation with respect to using only manual FAZ annotations.
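The guidance mechanism, training a single network to predict both FAZ and RV masks so that the weak vessel labels steer the FAZ head, could be expressed as a two-head loss like the sketch below (PyTorch; the head names, loss choice, and rv_weight are assumptions, not the authors' published configuration):

```python
import torch
import torch.nn.functional as F

def multitask_loss(faz_logits, rv_logits, faz_gt, rv_gt, rv_weight=0.5):
    """Joint loss for FAZ segmentation guided by weak RV labels.

    faz_logits, rv_logits: (N, 1, H, W) outputs of two decoder heads
                           sharing one encoder.
    faz_gt, rv_gt:         binary ground-truth masks (float tensors)
                           of the same shape.
    """
    faz_term = F.binary_cross_entropy_with_logits(faz_logits, faz_gt)
    rv_term = F.binary_cross_entropy_with_logits(rv_logits, rv_gt)
    return faz_term + rv_weight * rv_term  # RV acts as an auxiliary guide
```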
Proceedings ArticleDOI
14 Oct 2022
TL;DR: The experimental results show that, among the evaluated methods, the proposed algorithm provides the highest resolution and quality when synthesizing fluorescein angiography images from fundus structure images.
Abstract: Although fundus fluorescein angiography is an imaging modality that supports ophthalmic diagnosis, it requires the intravenous injection of a harmful fluorescein dye. We propose synthesizing fluorescein angiography images from fundus structure images to avoid the injection. Specifically, we automatically synthesize high-resolution fundus fluorescein angiography images through an algorithm that integrates a generative adversarial network with image stitching and enhancement. By evaluating the peak signal-to-noise ratio and structural similarity index of the proposed algorithm, pix2pix, and CycleGAN, we confirmed the superior performance of our proposal. To further validate the proposed algorithm, we compared the fundus fluorescein angiography images synthesized by our algorithm, pix2pix, and CycleGAN. The experimental results show that our algorithm provides the highest resolution and quality among the evaluated methods in synthesizing fluorescein angiography images from fundus structure images.
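The stitching stage, merging generated high-resolution patches back into a full-size angiogram, might be realised by accumulating overlapping patches and averaging the overlaps, as in this sketch (the patch layout and uniform blending are assumptions; the paper may use a different enhancement and blending scheme):

```python
import numpy as np

def stitch_patches(patches, coords, out_shape):
    """Merge overlapping patches into one image by averaging overlaps.

    patches: list of (h, w) float arrays (generated FA patches).
    coords:  list of (top, left) offsets for each patch.
    """
    out = np.zeros(out_shape, dtype=np.float64)
    weight = np.zeros(out_shape, dtype=np.float64)
    for patch, (top, left) in zip(patches, coords):
        h, w = patch.shape
        out[top:top + h, left:left + w] += patch
        weight[top:top + h, left:left + w] += 1.0
    return out / np.maximum(weight, 1.0)  # average where patches overlap
```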
References
Proceedings Article
01 Jan 2015
TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
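The architectural idea in this abstract, stacking small 3x3 convolutions to reach 16-19 weight layers, is easy to see in code. A minimal PyTorch sketch of the repeating block and the 13-convolution feature extractor of the 16-layer configuration (schematic, not the published model definition):

```python
import torch.nn as nn

def vgg_block(in_ch, out_ch, num_convs):
    """A stack of 3x3 convolutions followed by 2x2 max-pooling:
    the repeating unit from which the 16-19 layer networks are built."""
    layers = []
    for i in range(num_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch,
                             kernel_size=3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
    return nn.Sequential(*layers)

# 16-layer configuration: 2+2+3+3+3 = 13 conv layers (+3 FC layers = 16).
features = nn.Sequential(
    vgg_block(3, 64, 2), vgg_block(64, 128, 2), vgg_block(128, 256, 3),
    vgg_block(256, 512, 3), vgg_block(512, 512, 3))
```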

49,914 citations

Posted Content
TL;DR: Conditional adversarial networks are investigated as a general-purpose solution to image-to-image translation problems; they can synthesize photos from label maps, reconstruct objects from edge maps, and colorize images, among other tasks.
Abstract: We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.
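In the pix2pix instantiation of this idea, the learned adversarial loss is conditioned on the input image and combined with an L1 term. A schematic of the two objectives (PyTorch; D is assumed to return logits for an (input, output) pair, and lam=100 follows the published default):

```python
import torch
import torch.nn.functional as F

def pix2pix_g_loss(D, x, y_real, y_fake, lam=100.0):
    """Generator: fool D on the (input, generated) pair and stay close
    to the ground truth in L1."""
    logits = D(x, y_fake)
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    return adv + lam * F.l1_loss(y_fake, y_real)

def pix2pix_d_loss(D, x, y_real, y_fake):
    """Discriminator: classify real pairs as 1 and generated pairs as 0."""
    real_logits = D(x, y_real)
    fake_logits = D(x, y_fake.detach())  # no gradient into the generator
    real = F.binary_cross_entropy_with_logits(
        real_logits, torch.ones_like(real_logits))
    fake = F.binary_cross_entropy_with_logits(
        fake_logits, torch.zeros_like(fake_logits))
    return 0.5 * (real + fake)
```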

11,127 citations

Journal Article

3,940 citations


"Generating Fundus Fluorescence Angi..." refers methods in this paper


  • ...One of the methods is CycleGAN (Zhu et al., 2017), which is one of the most popular unsupervised image translation methods, and has been applied to FFA image synthesis task in (Schiffers et al., 2018)....


  • ...In terms of retina image synthesis, Schiffers et al. (Schiffers et al., 2018) proposed a method based on CycleGAN (Zhu et al., 2017) to generate FFA images from retinal fundus images....


Proceedings ArticleDOI
20 Sep 1999
TL;DR: A non-parametric method for texture synthesis that aims at preserving as much local structure as possible and produces good results for a wide variety of synthetic and real-world textures.
Abstract: A non-parametric method for texture synthesis is proposed. The texture synthesis process grows a new image outward from an initial seed, one pixel at a time. A Markov random field model is assumed, and the conditional distribution of a pixel given all its neighbors synthesized so far is estimated by querying the sample image and finding all similar neighborhoods. The degree of randomness is controlled by a single perceptually intuitive parameter. The method aims at preserving as much local structure as possible and produces good results for a wide variety of synthetic and real-world textures.
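The core matching step of this algorithm, comparing the partially synthesised neighbourhood of the next pixel against every neighbourhood in the sample texture and sampling among the near-best matches, can be sketched as below (a simplified single-channel version; the window size and the tolerance eps stand in for the perceptually intuitive parameter the abstract mentions):

```python
import numpy as np

def synthesize_pixel(sample, window, mask, eps=0.1):
    """One Efros-Leung step.

    sample: (h, w) example texture.
    window: (k, k) neighbourhood around the pixel being synthesised.
    mask:   (k, k) binary array, 1 where `window` is already filled
            (assumed to contain at least one known pixel).
    Returns a value drawn from the best-matching sample neighbourhoods.
    """
    k = window.shape[0]
    h, w = sample.shape
    dists = []
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            patch = sample[i:i + k, j:j + k]
            # Distance over known pixels only, normalised by their count.
            d = (((patch - window) ** 2) * mask).sum() / mask.sum()
            dists.append((d, patch[k // 2, k // 2]))
    dists.sort(key=lambda t: t[0])
    best = dists[0][0]
    # Sample uniformly among all neighbourhoods within tolerance of the best.
    candidates = [v for d, v in dists if d <= best * (1 + eps)]
    return candidates[np.random.randint(len(candidates))]
```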

2,972 citations


"Generating Fundus Fluorescence Angi..." refers background in this paper

  • ...The idea of image-to-image translation can go back to Hertzmann’s “Image Analogies” work (Hertzmann et al., 2001), which employs a non-parametric texture model (Efros and Leung, 1999) on a single input-output training image pair....


Proceedings ArticleDOI
01 Aug 2001
TL;DR: This paper describes a new framework for processing images by example, called “image analogies,” based on a simple multi-scale autoregression, inspired primarily by recent results in texture synthesis.
Abstract: This paper describes a new framework for processing images by example, called “image analogies.” The framework involves two stages: a design phase, in which a pair of images, with one image purported to be a “filtered” version of the other, is presented as “training data”; and an application phase, in which the learned filter is applied to some new target image in order to create an “analogous” filtered result. Image analogies are based on a simple multi-scale autoregression, inspired primarily by recent results in texture synthesis. By choosing different types of source image pairs as input, the framework supports a wide variety of “image filter” effects, including traditional image filters, such as blurring or embossing; improved texture synthesis, in which some textures are synthesized with higher quality than by previous approaches; super-resolution, in which a higher-resolution image is inferred from a low-resolution source; texture transfer, in which images are “texturized” with some arbitrary source texture; artistic filters, in which various drawing and painting styles are synthesized based on scanned real-world examples; and texture-by-numbers, in which realistic scenes, composed of a variety of textures, are created using a simple painting interface.
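Stripped of the multi-scale pyramid and coherence term, the application phase reduces to a nearest-neighbour search: for each target pixel, find the source pixel whose A-neighbourhood best matches the B-neighbourhood, and copy the corresponding A' value. A bare-bones sketch of that idea (single scale, brute force; much simplified relative to the paper):

```python
import numpy as np

def analogy_pixel(A, A_prime, B, i, j, k=5):
    """Return the A' value whose surrounding A-neighbourhood best matches
    the B-neighbourhood around (i, j). Single scale, no coherence term.

    A, A_prime: (h, w) source pair ("unfiltered" / "filtered").
    B:          (h2, w2) new target image; (i, j) must lie at least
                k // 2 pixels from the border (an assumption).
    """
    r = k // 2
    target = B[i - r:i + r + 1, j - r:j + r + 1]
    best, best_val = np.inf, 0.0
    for p in range(r, A.shape[0] - r):
        for q in range(r, A.shape[1] - r):
            d = ((A[p - r:p + r + 1, q - r:q + r + 1] - target) ** 2).sum()
            if d < best:
                best, best_val = d, A_prime[p, q]
    return best_val
```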

1,794 citations


"Generating Fundus Fluorescence Angi..." refers background in this paper

  • ...The idea of image-to-image translation can go back to Hertzmann’s “Image Analogies” work (Hertzmann et al., 2001), which employs a non-parametric texture model (Efros and Leung, 1999) on a single input-output training image pair....
