HAL Id: hal-03185137
https://hal.archives-ouvertes.fr/hal-03185137
Submitted on 30 Mar 2021
Realistic synthesis of brain tumor resection ultrasound images with a generative adversarial network

To cite this version:
Mélanie Donnez, François-Xavier Carton, Florian Le Lann, Emmanuel de Schlichting, Matthieu Chabanas. Realistic synthesis of brain tumor resection ultrasound images with a generative adversarial network. SPIE Medical Imaging, Feb 2021, Online Only, France. pp.84, 10.1117/12.2581911. hal-03185137

Realistic synthesis of brain tumor resection ultrasound images with a Generative Adversarial Network

Mélanie Donnez^a, François-Xavier Carton^a,b, Florian Le Lann^c, Emmanuel De Schlichting^c, and Matthieu Chabanas^a

^a University of Grenoble Alpes, CNRS, Grenoble-INP, TIMC-IMAG; Grenoble, France
^b Dept. of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, USA
^c Grenoble Alpes University Hospital, Department of Neurosurgery; Grenoble, France
ABSTRACT
The simulation of realistic ultrasound (US) images has many applications in image-guided surgery, such as image registration, data augmentation, or educational purposes. In this paper we simulated intraoperative US images of the brain after tumor resection surgery. In a first stage, a Generative Adversarial Network generated an US image with resection from a resection cavity map. While the cavity texture can be realistic, surrounding structures are usually not anatomically coherent. Thus, a second stage blended the generated cavity texture into a real patient-specific US image acquired before resection. A validation study on 68 images of 21 cases showed that three raters correctly identified 64% of all images. In particular, two neurosurgeons correctly labelled only 56% and 53% of the simulated images, which indicates that these synthesized images are hardly distinguishable from real post-resection US images.
Keywords: Intraoperative Ultrasound, Image synthesis, Generative Adversarial Networks, Neurosurgery.
1. INTRODUCTION
Intraoperative ultrasound (iUS) is commonly used during brain tumor resection to localize tissue and ensure that the tumor removal is as complete as possible. These iUS images can be used as is in neuronavigation systems, or can be registered with preoperative Magnetic Resonance (MR) images to compensate for brain-shift deformation.[1]
The simulation of realistic iUS images, especially after tumor resection, is valuable in several applications such as surgical planning or medical training, to shorten the learning curve of young surgeons.[2] Another potential application is the registration of preoperative MR images with intraoperative iUS images at the end of tumor resection, to verify that tumor tissue was optimally removed. Instead of multimodal registration, which is always a challenging task, one strategy is to simulate US images from the preoperative MR, then perform monomodal registration between these simulated US and the actual iUS images.[3] However, this monomodal registration is still hindered by the fact that the resection cavity is visible in the real iUS images only. Simulating US images with a resection cavity that follows the tumor contours delineated in the MR is expected to significantly simplify this registration problem. Finally, another targeted application is data augmentation to train deep neural networks. We recently proposed such a network to automatically segment resection cavities in iUS images.[4] Realistically simulating a large number of resection patterns would make it possible to enlarge the training set and thus improve the robustness of the segmentation network.
Many authors have proposed to simulate the physics of ultrasonic wave propagation and backscattering in biological tissue,[3,5,6] typically to generate US images from CT or MR volumes. More recently, several groups used deep neural networks to simulate intravascular or kidney US images.[2,7] These works especially used Generative Adversarial Networks[8] (GANs), which consist of two neural networks: a generator learns to produce realistic ultrasound images, while a discriminator simultaneously learns to discriminate between real images and those simulated by the generator. These networks are typically trained with real US images along with segmented tissue-maps of anatomical structures. Other tissue-maps can then be input into the trained GAN to synthesize the corresponding US image.
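To make this adversarial setup concrete, the following is a minimal sketch of one pix2pix-style conditional GAN training step in PyTorch. The `generator` and `discriminator` networks, the optimizers, and the L1 weight are placeholders for illustration; this is not the authors' implementation.

```python
# Minimal pix2pix-style training step (illustrative sketch, not the paper's code).
import torch
import torch.nn.functional as F

def train_step(generator, discriminator, opt_g, opt_d,
               cavity_map, real_ius, lambda_l1=100.0):
    """One adversarial update: D learns to separate real from generated
    (cavity map, image) pairs; G learns to fool D while staying close to
    the real image through an L1 term."""
    fake_ius = generator(cavity_map)

    # Discriminator update: real pairs -> 1, generated pairs -> 0.
    opt_d.zero_grad()
    d_real = discriminator(torch.cat([cavity_map, real_ius], dim=1))
    d_fake = discriminator(torch.cat([cavity_map, fake_ius.detach()], dim=1))
    loss_d = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    loss_d.backward()
    opt_d.step()

    # Generator update: fool the discriminator, plus the L1 reconstruction term.
    opt_g.zero_grad()
    d_fake = discriminator(torch.cat([cavity_map, fake_ius], dim=1))
    loss_g = (F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
              + lambda_l1 * F.l1_loss(fake_ius, real_ius))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```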

In many contexts, the same anatomical structures are visible in all images (for instance the kidney, spleen, and bones in Pigeau et al.[2]), which differ only due to anatomical variations, pathology, and image characteristics. A major challenge in our context is that intraoperative US images of the brain are extremely different from patient to patient: due to the variation of tumor locations, no common structures can be reliably found in the iUS images. Sulci are almost always present, but their number and shape also depend on the imaged area. For these reasons, learning the relations between iUS images and tissue-maps alone seemed extremely challenging.
The proposed method simulates post-resection iUS images of the brain from two inputs: a patient-specific
iUS image acquired before resection and a resection cavity map. This is realized in two stages: 1) a GAN
generates a pseudo iUS image from a resection cavity map. While the cavity texture can be realistic, surrounding
structures usually do not correspond to coherent anatomical features; 2) the GAN-generated cavity texture is
then merged into a real iUS image acquired before resection to simulate the final image. A validation study was
carried out with three raters, including two neurosurgeons, to evaluate whether they could distinguish simulated
images from real ones. In this study, we used cavity maps segmented from real post-resection iUS images, so
that the synthesized images can be directly compared with these real images.
2. MATERIALS AND METHODS
2.1 Dataset
We used intraoperative ultrasound images of patients with low-grade gliomas from the public RESECT database.[9] 21 of the 23 cases of the database could be processed in this study, with two 3D volumes per case: iUS_before and iUS_after, acquired before and after tumor resection, respectively. Several major differences can be observed between these two volumes:
- the US probe position changed between the two acquisitions, yielding different fields of view, image textures (intensities, speckle, noise), and potentially different reconstruction artefacts;
- the brain-shift phenomenon induces tissue deformation during surgery, due to several factors such as gravity, loss of cerebrospinal fluid, or drugs. Thus, anatomical structures in the two volumes do not exactly match;
- in the iUS_after volume, the tumor was removed and a resection cavity is visible instead. This cavity can induce additional tissue deformation as well as artifacts in the images (typically a hyper-intense area at the bottom of the cavity and ultrasound shadows). The cavity borders can also appear brighter due to bleeding.
To compensate for part of the brain-shift, the iUS_before and iUS_after volumes were non-rigidly registered with thin plate splines between the homologous landmarks available in the RESECT database. Overall, the mean distance between landmarks (i.e. the fiducial registration error) was reduced from 3.55 ± 1.76 mm to practically zero. 1857 2D images containing a tumor were extracted from these 3D volumes and later used in this study.
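As an illustration of this step, a landmark-based thin plate spline warp can be written compactly with SciPy. The sketch below assumes 2D images, (row, column) landmark coordinates, and backward mapping; it is a plausible reconstruction under these assumptions, not the code used by the authors (who registered 3D volumes).

```python
# Illustrative thin plate spline registration from homologous landmarks
# (a 2D sketch under assumed conventions, not the authors' implementation).
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.ndimage import map_coordinates

def tps_warp(moving, landmarks_fixed, landmarks_moving):
    """Warp `moving` so that `landmarks_moving` map onto `landmarks_fixed`.
    Landmarks are (n, 2) float arrays of (row, col) coordinates."""
    # Backward mapping: for each fixed-image pixel, the TPS predicts where
    # to sample in the moving image, interpolated from the landmark pairs.
    tps = RBFInterpolator(landmarks_fixed, landmarks_moving,
                          kernel='thin_plate_spline')
    h, w = moving.shape
    grid = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing='ij'),
                    axis=-1).reshape(-1, 2).astype(float)
    coords = tps(grid).reshape(h, w, 2)
    return map_coordinates(moving, [coords[..., 0], coords[..., 1]], order=1)
```

The fiducial registration error is then simply the mean Euclidean distance between the warped and fixed landmark positions.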
To simulate complete tumor resections, we used the real resection cavity masks segmented from the iUS_after images.[4] Examples of registered iUS_before and iUS_after images, as well as resection cavity masks, are available in figure 3.
2.2 Stage I: GAN simulation of a resection cavity image
The pix2pix conditional GAN developed by Isola et al.[8] was used, as pictured in figure 1. The generator simulates an US image with resection from a resection cavity map, while the discriminator compares this image to the actual iUS_after image to determine whether its input is real or simulated. 15 of the 21 cases (1253 of the 1857 2D images) were used for training, plus 1 case (108 images) for validation. The remaining images were kept for testing.
After this first stage, the GAN-generated image contains a realistic resection cavity. However, its surroundings look like an iUS image whose structures (sulci, ventricles, etc.) may not be anatomically coherent.

Figure 1. Stage I: GAN generation of an iUS with resection from a cavity map and real after-resection iUS images.
2.3 Stage II: simulated image in a patient’s context
The next stage was to blend the GAN-generated cavity image into a real iUS_before image, in order to get the cavity within a patient's anatomical features and thus simulate a realistic iUS image after resection.
Intensities of the two images to blend may differ significantly. Therefore, a first step was to normalize the GAN-generated resection image by matching its histogram to that of the iUS_before image. Histograms were computed on a region of interest (ROI) centered on the cavity, the ROI's size being twice the size of the cavity.
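A minimal sketch of this normalization step, using scikit-image's match_histograms, is given below; the exact ROI construction (a box of twice the cavity's bounding-box extent) is our reading of the description above, not a detail confirmed by the paper.

```python
# Illustrative ROI-based histogram normalization (sketch, with an assumed
# ROI construction: a box of twice the cavity's bounding-box extent).
import numpy as np
from skimage.exposure import match_histograms

def normalize_to_patient(gan_image, ius_before, cavity_mask):
    """Match the GAN image's histogram to iUS_before inside an ROI
    centered on the cavity, whose size is twice the cavity size."""
    ys, xs = np.nonzero(cavity_mask)
    cy, cx = ys.mean(), xs.mean()
    half_h, half_w = ys.max() - ys.min(), xs.max() - xs.min()  # ROI = 2x cavity extent
    y0, y1 = max(int(cy - half_h), 0), min(int(cy + half_h), gan_image.shape[0])
    x0, x1 = max(int(cx - half_w), 0), min(int(cx + half_w), gan_image.shape[1])
    out = gan_image.astype(float).copy()
    out[y0:y1, x0:x1] = match_histograms(gan_image[y0:y1, x0:x1],
                                         ius_before[y0:y1, x0:x1])
    return out
```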
After this normalization, pixels of the simulated image were assigned as follows (a sketch of this blending is given after the list):
- if a pixel is located inside the cavity map, its intensity is taken from the GAN-generated resection image;
- if a pixel is outside the cavity map but at a distance d less than half the cavity size, its intensity is linearly interpolated between the two images with a weighting factor depending on d (the closer source image has more influence);
- if a pixel is outside this blending area, its intensity is taken from the iUS_before image.
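These three rules amount to a distance-weighted alpha blend. A minimal sketch, assuming a binary cavity mask and taking the cavity's equivalent radius as its "size" (an assumption, since the paper does not define this measure):

```python
# Illustrative distance-weighted blending of the GAN cavity into iUS_before
# (sketch; the 'cavity size' measure is an assumption).
import numpy as np
from scipy.ndimage import distance_transform_edt

def blend_cavity(gan_image, ius_before, cavity_mask):
    """Inside the cavity: GAN texture; in a band of width half the cavity
    size around it: linear blend; beyond that: the original iUS_before."""
    inside = cavity_mask.astype(bool)
    d = distance_transform_edt(~inside)          # distance to the cavity, 0 inside
    band = 0.5 * np.sqrt(inside.sum() / np.pi)   # half the cavity's equivalent radius
    w = np.clip(1.0 - d / band, 0.0, 1.0)        # GAN weight: 1 inside, 0 past the band
    return w * gan_image + (1.0 - w) * ius_before
```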
Figure 2. Stage II: merging of the GAN-generated resection cavity image into a real before-resection iUS image.
2.4 Validation
Three raters participated in the validation study: a medical imaging expert (rater 1) and two neurosurgeons who routinely use intraoperative US during tumor resection. Although none of the raters was involved in the methodological developments of this paper, raters 1 and 2 had already worked on the RESECT database to manually segment resection cavities in iUS images.[4] Despite this bias, these two raters were still included in the validation study, as they had worked on the images more than a year earlier and had never seen any simulated iUS image.
The first validation test was a form of Visual Turing Test, as used in Pigeau et al.,[2] to evaluate how well a person can differentiate between real and synthesized images, thus determining how convincing the synthesized images are. The dataset consisted of 68 2D images, 34 simulated and 34 real, selected from all cases and presented in a random order. Each rater had to label each image as simulated or real, and to rate their confidence in this choice on a scale of 1 to 5.
In a second test, images of 10 cases were presented with: 1) the initial iUS_before image acquired before resection; 2) a resection cavity mask (segmented from the fourth image); 3) the image simulated from these inputs; and 4) the corresponding real iUS_after image acquired after resection. Examples of these images are shown in figure 3. Note that the simulated and real iUS images after resection always differ, because the simulation was based on the image acquired before resection. The goal here was simply to determine qualitatively whether the simulated images were plausible, and which aspects were correctly simulated or could be improved.
3. RESULTS AND DISCUSSION
Examples of GAN-generated resection images and final simulated images are shown in figures 1 and 3, respectively. Results of the differentiation study are presented in table 1.
If real and simulated images were easy to distinguish, 100% of the images would be correctly labelled. Conversely, a score of 50% would mean that there is no way to discriminate real from synthesized images, suggesting the simulated images appear realistic. Overall, 64% of the images were correctly labelled. Significant differences can be observed among raters: rater 1, who had extensively segmented structures manually on the RESECT images, could distinguish almost 80% of all images. However, only 56% and 53% of the simulated images were labelled as such by the two neurosurgeons, which is an excellent result. Cases that were used in the training phase of the GAN were more difficult to discriminate than test cases, which suggests the cavity appearance of new cases is slightly less realistic.
Table 1. Percentages of correctly labelled images.

                             Rater 1           Rater 2         Rater 3         All raters
                             (imaging expert)  (neurosurgeon)  (neurosurgeon)
Simulated images             0.79              0.56            0.53            0.63
  train cases (GAN stage)    0.78              0.48            0.54            0.59
  test cases (GAN stage)     0.82              0.72            0.52            0.69
Real images                  0.76              0.79            0.41            0.65
Overall                      0.78              0.68            0.47            0.64
Confidence score             2.74 ± 1.2        3.37 ± 0.8      3.32 ± 1.34     3.14 ± 1.2
The qualitative analysis of 10 cases revealed that all raters were globally very satisfied with the simulated images. Several limits were nevertheless identified, mostly that the resection cavity margins can appear blurry when the texture of the GAN-generated cavity significantly differs from the iUS_before texture. Also, images may lack variability, since several features that can be observed in real images are never synthesized, such as high-intensity areas from blood clots on the resection cavity borders, or deep shadows below the cavity.
4. CONCLUSION
We simulated post-resection iUS images by generating an image of a resection cavity with a GAN, then merging this image into a real patient-specific iUS image acquired before resection. Despite blurriness in some cases, validation results showed that most simulated images were hardly distinguishable from real images.
A future development will be to include the iUS_before image as a GAN input, along with the cavity map, to generate resection images whose intensity, texture, and speckle better match the patient image characteristics. We also plan to evaluate the simulation of iUS images from additional anatomical maps of sulci, ventricles, or the tumor, and directly from preoperative MR images.