
Ultrasound tissue classification: a review

Abstract: Ultrasound imaging is the most widespread medical imaging modality for creating images of the human body in clinical practice. Tissue classification in ultrasound has been established as one of the most active research areas, driven by many important clinical applications. In this paper, we present a survey on ultrasound tissue classification, focusing on recent advances in this area. We start with a brief review of the main clinical applications. We then introduce the traditional approaches, where the existing research on feature extraction and classifier design is reviewed. As deep learning approaches become popular for medical image analysis, the recent deep learning methods for tissue classification are also introduced. We briefly discuss the FDA-cleared techniques being used clinically. We conclude with a discussion of the challenges and future research directions.

Summary

Introduction

  • Different medical imaging modalities are available and widely used nowadays in clinical practice to create images of the human body, such as computed tomography (CT), magnetic resonance imaging (MR), positron emission tomography (PET), and ultrasound (US).
  • Driven by the unmet clinical needs to distinguish different tissue types (e.g., healthy versus diseased) in US, tissue characterization and classification have received much attention in recent years [1], [2].
  • Unfortunately in practice most extracted features have low discriminative power.
  • This paper attempts to provide a review on ultrasound tissue classification, particularly focusing on recent advances in this area.

II. CLINICAL APPLICATIONS

  • Medical ultrasound has a very broad range of clinical uses.
  • Ultrasound tissue classification can be applied in many clinical fields; for instance, tissue classification plays an important role in ultrasound-based cancer diagnosis, e.g., by classifying tissue regions as benign or malignant.
  • Here the authors briefly introduce the main clinical applications that have received the most attention in the recent literature.

A. Cardiology

  • The primary aim of non-invasive cardiac imaging is to provide information on the diagnosis and severity of underlying cardiac conditions [7].
  • Echocardiography (ultrasound imaging of the heart) is the most common cardiac imaging procedure performed in clinical practice, due to its portability, low cost, and patient acceptance.


  • Echocardiography has also been one of the driving application areas of medical ultrasound [8].
  • Cardiac imaging techniques characterise the underlying tissue directly, by assessing a signal from the tissue itself, or indirectly, by inferring tissue characteristics from global or regional function [7].
  • One challenge for ablation therapy is the monitoring of the temperature rise and the extent of the ablated region in the tissue.
  • In the area of atherosclerosis, intravascular ultrasound (IVUS), a catheter-based imaging technique, has evolved into a valuable technique for diagnosis and intervention in coronary disease, by providing more precise measurements of intimal thickness and vulnerable plaques.
  • Tissue classification in IVUS images can automatically predict vulnerable plaques as well as quantify the amount of the different tissues; many approaches have been proposed in literature [31]–[36], which will be discussed in detail in the following sections.

C. Breast Cancer

  • Breast cancer is the most common cancer in women globally, and early detection is the key to reducing the death rate.
  • To improve cancer detection in dense breasts, personalized breast cancer screening with ultrasound has been proposed for women with dense breasts and women with elevated risk factors for developing breast cancer.
  • Furthermore, BUS is more convenient and safer for clinical use [39], and it can be used as an alternative screening device to mammography for women with harmful mutations in either BRCA1 (breast cancer gene type 1) or BRCA2, since no radiation is involved.
  • Because the imaging procedure is standardized, it is possible to perform temporal analysis on prior and current exams.
  • Since ABUS images are acquired in a standardized way, many researchers have paid considerable attention to tumor classification [45]–[49] and cancer detection [50]–[57] using various techniques.

D. Prostate Cancer

  • Prostate cancer is the second most common cancer worldwide and the most common malignancy in men [62].
  • It is due to the abnormal and uncontrolled cell mutation and replication in the prostate gland.


  • As with any cancer, early detection and treatment are vital for good survival rates in prostate cancer.
  • By analyzing the characteristics of tissue regions, tissue classification in TRUS provides a malignancy map to guide biopsy, which can reduce the number of unnecessary biopsies.
  • Later different types of features were considered together with more sophisticated classifiers [67]– [69], for instance, textural features and spectral parameters extracted from RF data were employed in [67].
  • It has been shown that the combination of CDUS and gray-scale ultrasound can detect a greater number of prostate cancers than gray-scale ultrasound alone [74].
  • The classification of lesions in ultrasound liver images usually depends heavily on the characteristics of the lesions, including internal echo, morphology, edge, echogenicity, and posterior echo enhancement.

III. FEATURE EXTRACTION

  • The traditional approaches to ultrasound tissue classification start with extracting discriminative features from the ultrasound signal or image.
  • Here the authors group them into three categories.
  • By identifying spatial variations in pixel intensities and quantifying them into numerical features, texture analysis has been widely adopted, and different kinds of texture features are available in the literature: Gabor filter responses, derivatives of Gaussian filters, wavelet transforms, co-occurrence matrices, Local Binary Patterns, fractal spaces, Markov random fields, and so on.
  • In [87], four types of texture features, GLCM, LBP, Gabor filters and the shading of the polar image, are extracted to form a feature vector of 68 dimensions for IVUS tissue classification.
  • The second-order co-occurrence features are often supplemented by first order intensity distribution statistics, such as mean, variance, skew and kurtosis.


  • Thus, for the same type of tissue, different system parameters might lead to different texture patterns.
  • For this purpose, in [87], the raw RF signals were exploited to reproduce ultrasound images with a unique and well controlled set of imaging parameters.
  • In addition to the above texture features, the morphological features are also often considered for tissue classification in ultrasound.
  • In contrast, the morphological parameters are less dependent on system parameters and acquisition characteristics, thus more consistent compared to texture features.
  • Due to aggressiveness of the growth, malignant tumours tend to have an irregular shape while benign tumours tend to have a spherical or oval shape.
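The co-occurrence features discussed above can be made concrete with a short NumPy sketch (illustrative only; the 8-level quantization and the single-pixel offset are arbitrary choices, not taken from any surveyed paper). It computes a normalized GLCM for one offset, a few Haralick-style statistics from it, and the first-order intensity statistics often used alongside them:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy).

    The image is quantized to `levels` gray levels; the matrix is
    normalized so its entries sum to 1.
    """
    q = np.minimum((img.astype(float) / 256.0 * levels).astype(int), levels - 1)
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

def haralick_features(m):
    """Contrast, homogeneity and energy from a normalized GLCM."""
    i, j = np.indices(m.shape)
    contrast = np.sum(m * (i - j) ** 2)
    homogeneity = np.sum(m / (1.0 + np.abs(i - j)))
    energy = np.sum(m ** 2)
    return contrast, homogeneity, energy

def intensity_stats(img):
    """First-order statistics: mean, variance proxy (std), skew, kurtosis."""
    x = img.astype(float).ravel()
    mu, sd = x.mean(), x.std()
    skew = np.mean(((x - mu) / sd) ** 3)
    kurt = np.mean(((x - mu) / sd) ** 4)
    return mu, sd, skew, kurt
```

In practice several offsets and angles are pooled, and the resulting values are concatenated with other descriptors into one feature vector, as in the 68-dimensional vector of [87].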

B. RF-signal-based approaches

  • The US image formation process, i.e. from raw RF signals to US images, introduces a certain number of approximations (such as envelope detection and log compression), thus a certain amount of information is lost.
  • Following the seminal work of Lizzi et al. [103], [130], spectral parameters of local RF signals are the most often used features, based on the hypothesis that different tissue types behave differently in the frequency domain.
  • Spectral features are calculated based on the estimated local power spectrum of RF signals, which can be computed by using the Fourier transform or by the AutoRegressive (AR) model.
  • Similarly, in [31], the plaque tissues are classified by comparing the full spectrum of a tissue sample to the ones in the training database.
  • The Rayleigh distribution is also widely used to describe homogeneous areas in US images.
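As a minimal illustration of the Rayleigh model for homogeneous speckle, the scale parameter of a region can be estimated from its envelope samples by maximum likelihood (a sketch; real pipelines fit the model per local window and may prefer richer models such as the Nakagami distribution):

```python
import numpy as np

def rayleigh_scale_mle(envelope):
    """Maximum-likelihood Rayleigh scale for envelope samples.

    For fully developed speckle the envelope amplitude x follows
    p(x) = (x / s^2) exp(-x^2 / (2 s^2)); the MLE is s^2 = mean(x^2) / 2.
    """
    x = np.asarray(envelope, dtype=float)
    return np.sqrt(np.mean(x ** 2) / 2.0)
```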


  • Beyond the feature categories discussed above, other features such as scatterer size and the speed of sound can also be considered.
  • It is known that the ultrasound wave propagates with different speed in tissue with different density.
  • Given the relationship between the acoustic impedance and the tissue density, the relative acoustic impedance can be used as a parameter for tissue characterization [35].
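The Lizzi-style spectral parameters mentioned above (slope, intercept, and midband fit of the local power spectrum) can be sketched as follows. This is illustrative NumPy code: the analysis band is invented, and a real system would calibrate the spectrum against a reference phantom before fitting.

```python
import numpy as np

def spectral_parameters(rf_segment, fs, band=(2e6, 10e6)):
    """Spectral slope, intercept and midband fit of a local RF segment.

    A Hann-windowed FFT estimates the local power spectrum; a straight
    line is then fitted to the log-spectrum (dB) inside the analysis band.
    """
    x = np.asarray(rf_segment, dtype=float) * np.hanning(len(rf_segment))
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    sel = (freqs >= band[0]) & (freqs <= band[1])
    db = 10.0 * np.log10(spec[sel] + 1e-12)
    slope, intercept = np.polyfit(freqs[sel], db, 1)
    midband = slope * 0.5 * (band[0] + band[1]) + intercept
    return slope, intercept, midband
```

An autoregressive (AR) spectral estimate can replace the FFT step for short segments, as noted above.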

C. Feature Combination and Selection

  • Individual features usually have limited discriminative power.
  • Escalera et al. [32] considered texture features (GLCM, LBP, Gabor, and shadow), RF-based features (full spectrum, two global spectral features), and slope-based features [104] for tissue classification in IVUS.
  • In [88], the texture and spectral features in combination with the RF time series features result in the best performance.
  • In another study [42], the GLCM features are combined with the Nakagami parametric image for breast tumour detection.
  • Feature selection approaches can be categorized into two classes: filter and wrapper.
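A minimal sketch of the filter approach, which scores each feature independently of any classifier (the Fisher-score criterion used here is one common choice, not a method from a specific surveyed paper):

```python
import numpy as np

def fisher_scores(X, y):
    """Filter-style feature ranking: between-class variance over
    within-class variance, computed per feature."""
    X, y = np.asarray(X, float), np.asarray(y)
    mu = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - mu) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)

def select_k_best(X, y, k):
    """Keep the k highest-scoring features (indices, sorted)."""
    order = np.argsort(fisher_scores(X, y))[::-1]
    return np.sort(order[:k])
```

A wrapper approach would instead search feature subsets by cross-validating the actual classifier, which is more expensive but accounts for feature interactions.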

IV. CLASSIFIER DESIGN

  • After feature extraction, the next step is to design a classifier to automatically label tissue types based on the features.
  • Support Vector Machine — SVM [141] is a discriminant method based on statistical learning theory.
  • In [110], after dimensionality reduction of full spectrum, SVM classification was performed for separating cancer from normal prostate tissue.
  • In [31], the tissue classifier contains a bank of tissue detector arrays, e.g., 10 for each tissue type.
  • In this respect, their method shares similarities with random forest algorithms.


  • The k-NN classifier can capture complex boundaries among different classes, but it is computationally expensive.
  • In [78], the k-NN, neural network, and Bayes classifier were utilized to classify liver tissues.
  • In [108], with spectrum analysis of RF signals, four non-linear classifiers were trained for prostate tissue classification: multi-layer-perceptron neural networks, logitboost algorithms, SVM, and stacked restricted Boltzmann machines.
  • By embedding ECOC in the potential functions for Discriminative Random Fields (DRF), Ciompi presented in [35] a multi-class classification technique called ECOC-DRF for IVUS tissue classification.
  • Therefore, different classifiers can potentially offer complementary information, and the combination of various classifiers could improve the performance.
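As a concrete sketch of one of the classifiers above, here is a minimal k-NN in NumPy (toy code, not any surveyed system). It also makes the cost remark concrete: every test sample pays for a distance to every training sample.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=5):
    """Minimal k-nearest-neighbour classifier (Euclidean distance).

    Flexible decision boundaries, but O(n_train) distance computations
    per test sample, as noted above.
    """
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    votes = y_train[nearest]
    # Majority vote among the k neighbours of each test sample.
    return np.array([np.bincount(v).argmax() for v in votes])
```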

V. DEEP LEARNING APPROACHES

  • Nowadays deep learning has become popular as a self-taught approach in which features are computed automatically instead of being manually designed.
  • Both CNN and FCN have been applied extensively in ultrasound.
  • In many medical image classification tasks, the amount of labeled data available for training is limited.
  • A common remedy is transfer learning: early layers are copied from a network pre-trained on a large dataset, while the remaining layers of the new network are initialized randomly and trained according to the new task [183].
  • Recurrent Neural Networks (RNN) — Recurrent neural networks [195] are often used to process a sequence of data.
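The transfer-learning recipe above, reusing early layers as a frozen feature extractor and training only the new layers, can be illustrated with a toy NumPy model. Everything here is a stand-in: the "pretrained" weights are random (in real transfer learning they come from a network trained on a large source dataset), and the new head is fit by ridge regression rather than gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for pretrained weights; these stay frozen during adaptation.
W_frozen = rng.standard_normal((2, 32))

def features(X):
    """Frozen feature extractor: one hidden layer with ReLU."""
    return np.maximum(X @ W_frozen, 0.0)

def train_head(X, y, reg=1e-3):
    """Train only the new task head (ridge regression on frozen features)."""
    F = features(X)
    return np.linalg.solve(F.T @ F + reg * np.eye(F.shape[1]), F.T @ y)

def predict(X, w):
    """Binary decision from the trained head's score."""
    return (features(X) @ w > 0.5).astype(int)
```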

VI. FDA-CLEARED MACHINE LEARNING ALGORITHMS

  • Outside the research regime, FDA-cleared tissue classification algorithms and products are being adopted in the clinic to benefit patients in real practice.
  • In breast imaging, lesion classification products by Koios Medical [212] for AI-based clinical decision support in 2D ultrasound have received clearance.
  • In cardiac imaging, the FDA recently approved the first AI-guided medical imaging tool, by Caption Health [214], for use in cardiac ultrasound, which is driven by tissue-classification algorithms.
  • The tool can automatically capture video clips and save the best clip acquired from a particular view for review.
  • In prostate imaging, Focal Healthcare provides an FDA-cleared solution [216] for MR-guided ultrasound biopsy for identifying cancerous tissue, which helps urologists perform fusion biopsy procedures more efficiently and accurately.

VII. DISCUSSIONS

  • Compared to techniques in computed tomography (CT), magnetic resonance (MR), and X-ray imaging, tissue classification techniques in ultrasound are less widely applied.
  • These factors lead to variation in the appearance of ultrasound images, and the robustness of tissue classification depends on the similarity between the training data used to develop the techniques and the test data encountered in clinical practice.
  • To alleviate this problem, the best approach is to improve the standardization of imaging, via technology or guidelines, to make the procedure less user-dependent and the image appearance consistent.
  • For prostate cancer, the fusion of TRUS and MRI for biopsy guidance may offer improved results by combining the strengths of the two imaging modalities [218].
  • In [111], ovarian tissue features extracted from photoacoustic spectral data, beam envelopes, and co-registered ultrasound and photoacoustic images are used to characterize cancerous vs. normal tissue with an SVM classifier.
  • One of the critical issues in tissue classification research is the creation of a reliable data set with ground truth.
  • Large databases are necessary for training powerful classifiers and validating the trained classifiers.
  • Currently, ground truth is mainly obtained from expert annotations or histopathological analysis, which is a slow and complex process, leading to limited annotated data.
  • In [33], [96], an approach was presented to enhance the in vitro training data set by selectively including examples from in vivo data for plaque characterization.

VIII. CONCLUSIONS

  • This paper has presented a survey on ultrasound tissue classification, particularly focusing on recent development in this area.


Aberystwyth University
Ultrasound tissue classification
Shan, Caifeng; Tan, Tao; Han, Jungong; Huang, Di
Published in: Artificial Intelligence Review
Publication date: 2020
Citation for published version (APA): Shan, C., Tan, T., Han, J., & Huang, D. (2020). Ultrasound tissue classification: A review. Artificial Intelligence Review.

Ultrasound Tissue Classification: A Review
Caifeng Shan, Tao Tan, Jungong Han, Di Huang
Abstract: Ultrasound (US) imaging is the most widespread medical imaging modality for creating images of the human body in clinical practice. Tissue classification in ultrasound has been established as one of the most active research areas, driven by many important clinical applications. In this paper, we present a survey on ultrasound tissue classification, focusing on recent advances in this area. We start with a brief review of the main clinical applications. We then introduce the traditional approaches, where the existing research on feature extraction and classifier design is reviewed. As deep learning approaches become popular for medical image analysis, the recent deep learning methods for tissue classification are also introduced. We briefly discuss the FDA-cleared techniques being used clinically. We conclude with a discussion of the challenges and future research directions.
Index Terms: Tissue Classification, Tissue Characterization, Machine Learning, Deep Learning, Ultrasound Image Analysis
I. INTRODUCTION
Different medical imaging modalities are available and widely used
nowadays in clinical practice to create images of the human body,
such as computed tomography (CT), magnetic resonance imaging
(MR), positron emission tomography (PET), and ultrasound (US).
Among those, US imaging is the most widespread modality for
visualizing human soft tissue, because of its advantages compared to
others: cheap, harmless (no ionizing radiations), allowing real-time
feedback, convenient to operate, well established technology present
in all places, and so on. On the other hand, because of the limited
field of view, shadows, speckle noise, and other artifacts in the US
images, the interpretation of US images is sometimes difficult.
One main target of US image (signal) analysis is tissue classifica-
tion. Ultrasound tissue classification is to analyze the characteristics
of the US data and their correlation to the pathological state of
tissue, and design a classifier to distinguish the US data into different
tissue types (or states). Tissue classification in ultrasound has many
important applications, such as cancer diagnosis (in prostate, breast,
liver, etc.) and cardiovascular disease diagnosis and intervention.
Driven by the unmet clinical needs to distinguish different tissue
types (e.g., healthy versus diseased) in US, tissue characterization
and classification have received much attention in recent years [1],
[2].
Traditionally the process of tissue classification can be divided
into two main steps: 1) feature extraction: in this step, ultrasound
modelling and signal analysis techniques are used to extract char-
acteristic features from US image (signal) that can describe and
hopefully differentiate different tissue types [1]. These characteristics
may not be visible to human eyes, and it is necessary to examine
the ultrasound data beyond visual inspection. If the features are discriminative, different
C. Shan is with College of Electrical Engineering and Automation,
Shandong University of Science and Technology, Qingdao 266590,
China
T. Tan is with Eindhoven University of Technology, Eindhoven 5600
MB, The Netherlands
J. Han is with Department of Computer Science, Aberystwyth Univer-
sity, SY23 3DB, UK
D. Huang is with Beijing Advanced Innovation Center for Big Data and
Brain Computing, Beihang University, Beijing 100191, China
C. Shan (caifeng.shan@gmail.com) is the corresponding author.
tissue types will be represented as separate clusters in the feature
space. Unfortunately in practice most extracted features have low
discriminative power. Therefore, in order to discriminate different
tissue types, a proper classifier is needed. This is step 2): classifier
design, which aims to define the optimal decision boundary in the
feature space to separate different tissue types. The classifier can
be trained by applying machine learning and pattern recognition
techniques on the data with ground truth. Different techniques have
been exploited for ultrasound tissue classification.
Since 2012, deep learning algorithms such as convolutional neural
networks (CNNs) [3] have become a powerful tool for automatically
classifying pixels (patches) in images. A CNN usually contains several
pairs of convolution and pooling layers, which behave
as feature extractors. The intermediate outputs of these layers are
fully connected to a multi-layer perceptron neural network for the
classification task. New techniques including dropout [4], batch
normalization [5] and residual blocks [6] were proposed to improve
performance of neural networks. Deep learning approaches have been
extensively exploited for medical image analysis, including tissue
classification in ultrasound.
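The convolution-and-pooling pipeline described above can be made concrete with a toy NumPy forward pass (illustrative only; no particular published architecture is implied, and the weights would normally be learned by backpropagation):

```python
import numpy as np

def conv2d(img, kernel):
    """'Valid' 2-D convolution: one feature-extraction layer."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(img[y:y + kh, x:x + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling: downsamples the feature map."""
    h, w = (fmap.shape[0] // size) * size, (fmap.shape[1] // size) * size
    f = fmap[:h, :w].reshape(h // size, size, w // size, size)
    return f.max(axis=(1, 3))

def forward(patch, kernel, weights, bias):
    """conv -> ReLU -> pool -> fully connected class scores."""
    f = max_pool(np.maximum(conv2d(patch, kernel), 0.0))
    return f.ravel() @ weights + bias
```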
Ultrasound tissue classification is difficult, because the interaction
between biological tissue (an inhomogeneous medium) and acoustic
wave is very hard to model [1]. Although the quality of ultrasound
images, in terms of signal-to-noise and contrast-to-noise ratios, has
been improved substantially in recent years, it remains a challenging
task to classify different tissue types in US images. This paper
attempts to provide a review on ultrasound tissue classification,
particularly focusing on recent advances in this area. The work on
tissue characterization prior to 2009 was reviewed in the articles [1],
[2].
The paper is organized as follows. We start with a brief review
on the main clinical applications in Section II. We then introduce
the existing research on feature extraction and classifier design in
Section III and Section IV respectively. The recent deep learning
approaches for tissue classification are discussed in Section V.
Section VI introduces the FDA-cleared machine learning algorithms
being used clinically. Section VII discusses the challenges and
research directions in future, and finally Section VIII concludes the
paper.
II. CLINICAL APPLICATIONS
Medical ultrasound has a very broad range of clinical uses. Ultra-
sound tissue classification can be applied in many clinical fields, for
instance, tissue classification plays an important role in ultrasound-
based cancer diagnosis, e.g., by classifying the tissue regions as
benign or malignant. Here we briefly introduce the main clinical
applications that have received the most attention in the recent
literature.
A. Cardiology
The primary aim of non-invasive cardiac imaging is to provide
information on the diagnosis and severity of underlying cardiac
conditions [7]. Echocardiography (ultrasound imaging of the heart)
is the most common cardiac imaging procedure performed in clinical
practice, due to its portability, low cost, and patient acceptance.

Echocardiography has also been one of the driving application areas
of medical ultrasound [8]. Cardiac imaging techniques characterise
the underlying tissue directly, by assessing a signal from the tissue
itself, or indirectly, by inferring tissue characteristics from global
or regional function [7]. Although Cardiac Magnetic Resonance
(CMR) imaging currently is the most investigated modality for tissue
characterisation, this technology is difficult to scale up for routine use.
On the other hand, echocardiography remains the primary imaging
tool for most patients, so it represents an attractive alternative for
cardiac tissue characterization.
Traditionally, integrated backscatter, which measures the ultrasonic
reflectivity of the region of interest, was a major focus of tissue
characterisation research [9]. However, backscatter has limited ability
to reflect fibrosis in those with lower levels of myocardial fibrosis,
such as coronary artery disease [10]. In [11], the frequency content
from echocardiography and spectral analysis techniques were inves-
tigated for differentiating three different cardiac tissue types (cardiac
adipose tissue, myocardium, and blood). Recently texture features
of myocardium have been extracted from still ultrasound images
for tissue characterization [12]. Speckle tracking echocardiography
(STE) is a technique used to assess myocardial deformation at both
segmental and global levels. Since distinct myocardial pathologies
affect deformation differently, information about the underlying tissue
can be inferred by STE. While other modalities such as CMR assess
tissue characteristics through changes in the acquired myocardial tis-
sue images, STE deformation parameters assess the impact of under-
lying pathology on tissue function. The available studies correlating
STE deformation parameters with underlying tissue characteristics are
reviewed in [7]. The most commonly used deformation measurements
are those of strain (the change in length compared to initial length),
strain rate (strain divided by time), and rotation (or twist). Machine
learning and deep learning methods have been explored with STE
features for tissue classification [13], [14].
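The deformation measures defined above reduce to simple arithmetic on a tracked segment length; the numbers below are made up for illustration:

```python
# Lagrangian strain and strain rate from a tracked segment length.
def strain(l, l0):
    """Strain: change in length relative to the initial length."""
    return (l - l0) / l0

def strain_rate(l, l0, dt):
    """Strain rate: strain divided by the time over which it developed."""
    return strain(l, l0) / dt

# Example: a 10 mm myocardial segment shortening to 8.5 mm over 0.3 s
# gives a strain of -0.15 (15% shortening) and a strain rate of -0.5 / s.
s = strain(8.5, 10.0)
sr = strain_rate(8.5, 10.0, 0.3)
```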
In recent years ultrasound has been increasingly used for image-
guided cardiac interventions or therapy [16]–[18]. Cardiac ablation,
to create a set of transmural lesions in cardiac tissue, has been the
main therapy method for atrial fibrillation. In thermal ablation, the
target tissue is coagulated by transferring heat to the target area.
Radio-frequency (RF) ablation and the lesion created by RF ablation
are illustrated in Figure 1. One challenge for ablation therapy is the
monitoring of the temperature rise and the extent of the ablated
region in the tissue. US-based techniques for evaluating heat-induced
lesions have been studied recently [15], [19], [20]. In an earlier
work [21], both the textural and spectral features were considered
for analyzing the ablated regions in the tissue. Methods have also
been developed to estimate the temperature increase during the
ablation process by detecting the change of sound speed, attenuation
coefficient, and backscattering. In [19], US imaging is employed to
investigate the temporal evolution and spatial extent of the lesion
created by the HIFU (high intensity focused ultrasound) ablation. It
is shown that spectral analysis of RF signals, which is related to the
physical scatter properties, can potentially be used for monitoring the
evolution of HIFU lesions. Imani et al. [20], [22] proposed to classify
ablated tissue using RF time series features. The acoustical coefficient
of nonlinearity is estimated in [23] for discriminating tissues during
cardiac ablation therapy.
B. Vascular Disease
Cardiovascular diseases are responsible for a third of all deaths
in women worldwide and more than a half in men [24]. Ultrasound
imaging has become one of the most important modalities used in
the assessment of vascular diseases. Virtually all peripheral arterial
and venous structures can be visualized with the duplex ultrasound
(DU), which currently is the main diagnostic modality used in deep
venous thrombosis and carotid disease [25]. For carotid artery disease,
the carotid plaque is traditionally judged according to the degree
of stenosis and a 70% or greater diameter loss is an indication for
surgery. Later it was recognized that not only must the degree of
stenosis be evaluated, but also the carotid plaque instability, as it
is an important determinant of stroke risk [26]. The DU allows for
the study of plaque constituents; for instance, fibrotic tissue, which
renders the plaque more stable, has different brightness from lipids.
Traditionally the carotid plaque composition is determined based on
the pixel brightness, i.e., using a threshold-based method. In [26],
a multi-scale descriptor was used for pixel-level tissue classification
in DU images. Similarly, various image features were considered in
[27], [28] for plaque classification. A recent review on carotid artery
ultrasound image analysis can be found in [29].
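A threshold-based composition map of the kind described above can be sketched as follows. The brightness cut-offs are invented for this illustration; clinical methods calibrate them, e.g., against reference structures such as blood and adventitia.

```python
import numpy as np

# Illustrative brightness thresholds (hypothetical values, not clinical).
THRESHOLDS = [(40, "lipid"), (110, "fibrous")]   # anything brighter: "calcified"

def classify_pixels(img):
    """Threshold-based plaque composition map from pixel brightness."""
    labels = np.full(img.shape, "calcified", dtype=object)
    # Apply thresholds from highest to lowest so darker classes win.
    for upper, name in reversed(THRESHOLDS):
        labels[img < upper] = name
    return labels
```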
In the area of atherosclerosis, intravascular ultrasound (IVUS), a
catheter-based imaging technique, has evolved into a valuable technique
for diagnosis and intervention in coronary disease, by providing
more precise measurements of intimal thickness and vulnerable
plaques. In contrast, angiography, the traditionally used gold-standard
in the imaging of vascular morphology, can only depict contrast
agent filled lumen and not the vessel wall [25]. IVUS provides
images of vascular structure by scanning inside the vessel, which
are acquired by a mechanically rotated transducer or a multi-element
transducer array. For the latter case, an array of transducers is
disposed around the probe, with the synchronized emission of US
waves. The imaging plane, perpendicular to the long axis of the
catheter, provides a 360 degree image of the vessel. An example
IVUS image is shown in Figure 2. By converting the A-lines from
polar to cartesian coordinates, the full circumference of the vessel
wall can be visualized. Therefore, all components of the vessel are
visualized: the cross sectional luminal size, shape and vessel wall, as
well as the various layers of the wall. IVUS has been used to guide
the intervention by analyzing the vessel condition, e.g., assessing
the atherosclerotic plaque amount and composition. IVUS can also
be used to check the vessel condition after the intervention, and to
monitor the status of disease over the time.
In IVUS images, calcifications are demonstrated as hyperechoic
areas, whereas hemorrhage or fat deposition inside an atheromatous
plaque is hypoechoic. Subsequently, the plaque can be classified
as lipid, calcified or fibrous, according to its acoustic properties
[30]. Tissue classification in IVUS images can automatically predict
vulnerable plaques as well as quantify the amount of the different
tissues; many approaches have been proposed in literature [31]–[36],
which will be discussed in detail in the following sections.
C. Breast Cancer
Breast cancer is the most common cancer in women globally,
and early detection is the key to reduce the death rate. Women
with dense breasts have a risk of breast cancer four to six times
higher than that of women with no or little dense tissue [37]. To
improve cancer detection in dense breasts, personalized breast cancer
screening with ultrasound has been proposed for women with dense
breasts and women with elevated risk factors for developing breast
cancer. Supplemental breast ultrasound (BUS) cancer screening can
detect small, early stage invasive cancers that appear to be occult
on mammograms due to breast density [38]. Furthermore, BUS is
more convenient and safer for clinical use [39] and it can be used
as an alternative screening device to mammography for women with
harmful mutations in either BRCA1 (breast cancer gene type1) or
BRCA2 since no radiation is involved. Berg et al. [40] found that the

Fig. 1. (Left) RF ablation catheter in the heart; (Right) An example of lesion created by RF ablation [15], where the red line indicates the change
in US image upon energy delivery.
Fig. 2. (Left) Standard cross-sectional IVUS image in cartesian coordinates, and (Right) its corresponding polar representation: ρ represents the
depth in the tissue and θ the position (angle) in the rotation of the probe [35].
supplemental yield of using ultrasound together with mammography
was 4.2 cancers per 1000 women screened. Kaplan et al. [41]
found 6 (0.36%) extra cancers among 1862 mammographically negative
patients with dense breasts. However handheld ultrasound breast
cancer screening is operator dependent, hard to reproduce, time
consuming and relatively expensive as a screening procedure when
performed by radiologists.
Tissue classification in BUS aims to distinguish benign masses
(cysts and fibroadenomas) and malignant cancerous masses. Various
approaches have been investigated to detect the suspicious masses in
B-mode BUS images [42]–[44], where the challenge is to characterize
the textured appearance and geometry of a tumor relative to normal
tissue. A survey on cancer detection and classification in BUS images
can be found in [39]. Recently, the automated 3D breast ultrasound
(ABUS) system was proposed as a novel modality to overcome the
drawbacks of traditional 2D ultrasound. Fig. 3 shows one example
of an ABUS image. It usually involves a heavy compression from
a membrane on the breast. With a swipe of a wide linear or curved
transducer, a number of transversal images are generated and
reconstructed into a 3D volume. Different from 2D ultrasound, ABUS
offers the possibility of visualizing spiculation patterns associated
with malignancy on coronal planes. Because of the standardized imaging
procedure, it is possible to perform temporal analysis on prior and
current exams. Since ABUS images are acquired in a standardized way,
many researchers have paid considerable attention to tumor classification [45]–[49]
and cancer detection [50]–[57] using various techniques. Currently,
the GE Invenia ABUS system is FDA-cleared for screening purposes [58],
while the Siemens ABVS system is FDA-cleared for diagnostic use
[59].
Besides ABUS systems, which use reflected echoes, transmission
ultrasound is also employed for cancer detection and diagnosis by
FDA-cleared systems such as the SoftVue System [60] from Delphinus
Medical Technologies and QTscan [61] from QT Ultrasound. These
systems usually employ a rotating ultrasound transducer to capture
details of the tissue of the uncompressed breast with the patient in
the prone position. Images are generated using both reflection and
transmission modalities. The additional tissue attributes of sound
speed and attenuation can enrich the diagnostic information for tissue
classification.
D. Prostate Cancer
Prostate cancer is the second most common cancer worldwide and
the most common malignancy in men [62]. It arises from abnormal
and uncontrolled cell mutation and replication in the prostate gland.

Fig. 3. An example of a lesion in orthogonal planes of an automated
3D breast ultrasound volume [54].
As with any cancer, early detection and treatment are vital for good
survival rates of prostate cancer. Although multi-parametric MRI
has been increasingly used for prostate cancer diagnosis, transrectal
ultrasound (TRUS) is widely used for evaluation of the prostate,
because of its advantages of low costs, good availability, and ability
to visualize the prostate in real time [63]. TRUS has been used for the
early detection and active surveillance of prostatic cancer. Currently,
serum prostate-specific antigen (PSA) and Digital Rectal Examination
(DRE) are used for screening. If either study is abnormal, TRUS-
guided biopsy is performed for diagnostic confirmation [64].
By analyzing the characteristics of the tissue regions, tissue clas-
sification in TRUS provides a malignancy map to guide biopsy,
which can reduce the number of unnecessary biopsies. However,
the prostate regions in TRUS images are characterized by a weak
texture, speckle, short gray scale ranges, and shadow regions [8];
detection and delineation of prostate pathology in TRUS is difficult
due to the heterogeneous and multi-focal nature of prostatic lesions.
Tissue classification has been extensively studied for prostate
cancer detection [65]. Early work on TRUS tissue classification
discriminated rectangular regions around biopsy needle insertion
points using textural features [66]. Later, different types of features
were considered together with more sophisticated classifiers [67]–
[69], for instance, textural features and spectral parameters extracted
from RF data were employed in [67]. Morphologic features and mul-
tiresolution textural features were also used for malignancy detection
[69]. Non-linear Higher Order Spectra (HOS) features and Discrete
Wavelet Transform (DWT) coefficients were considered in [70]. It
has been shown that combining features extracted from RF analysis
of ultrasound signals and texture would result in better classification
[65]. HistoScanning [71]–[73] is a commercially-available ultrasound
tissue characterisation technique that has shown encouraging results
in the detection of prostate cancer.
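To make the spectral features mentioned above concrete, the sketch below computes Lizzi-style spectral parameters (spectral slope, zero-frequency intercept, and midband fit) by fitting a line to the power spectrum of a gated RF segment. This is only an illustration: the sampling rate, analysis bandwidth, and window are assumed values, and a practical system would first calibrate the spectrum against a reference phantom.

```python
# Hedged sketch of Lizzi-style spectral parameters from a gated RF segment.
# Sampling rate, bandwidth, and the toy pulse below are illustrative assumptions.
import numpy as np

def spectral_parameters(rf_segment, fs=40e6, band=(3e6, 9e6)):
    """Return (slope in dB/Hz, intercept in dB, midband fit in dB)."""
    # Window the gated segment to reduce spectral leakage.
    windowed = rf_segment * np.hanning(rf_segment.size)
    freqs = np.fft.rfftfreq(rf_segment.size, d=1.0 / fs)
    power_db = 10.0 * np.log10(np.abs(np.fft.rfft(windowed)) ** 2 + 1e-20)
    # Linear fit over the usable bandwidth only.
    mask = (freqs >= band[0]) & (freqs <= band[1])
    slope, intercept = np.polyfit(freqs[mask], power_db[mask], 1)
    # Midband fit: value of the fitted line at the band's center frequency.
    midband_fit = slope * (band[0] + band[1]) / 2 + intercept
    return slope, intercept, midband_fit

# Toy usage on a simulated 5 MHz Gaussian echo burst.
t = np.arange(512) / 40e6
rf = np.sin(2 * np.pi * 5e6 * t) * np.exp(-((t - 6.4e-6) ** 2) / (2 * (1e-6) ** 2))
slope, intercept, mbf = spectral_parameters(rf)
```

In classification studies these three parameters are typically computed per ROI and fed to a classifier alongside texture features.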
In addition to gray-scale ultrasound, color Doppler ultrasound
(CDUS) has also been used for the evaluation of prostate cancer
by detecting the increased perfusion compared with surrounding
prostate tissue [63]. It has been shown that the combination of
CDUS and gray-scale ultrasound can detect a greater number
of prostate cancers than gray-scale ultrasound alone [74]. New
ultrasound techniques such as contrast-enhanced ultrasound (CEUS)
and ultrasound elastography [74], [75], have been developed to
improve the detection of prostate cancer. MRI-ultrasound fusion is
another novel imaging technique that is being used to guide prostate
biopsy, where the target lesion is marked on the prebiopsy MRI and
fused to the real-time TRUS images [76].
The previous two subsections have discussed two main application
areas in cancer diagnosis, breast and prostate. Ultrasound tissue
classification has also been investigated for diagnosis of other cancer
types, for instance, early detection of liver tumour [77]–[80]. Focal
liver lesions such as cysts and tumours are concentrated within a quite
small area of the tissue and are difficult to identify. The classification
of lesions in ultrasound liver image usually depends heavily on the
characteristics of the lesions including internal echo, morphology,
edge, echogenicity, and posterior echo enhancement. In [77], texture
features are used to distinguish malignant and benign liver tumours.
Automatic classification of thyroid tissue into benign and malignant
types using US has been investigated in [81]–[83]; a review on thyroid
cancer tissue characterization can be found in [84].
III. FEATURE EXTRACTION
The traditional approaches to ultrasound tissue classification start
with extracting discriminative features from the ultrasound signal or
image. In the US image formation process [1], the radio-frequency
(RF) signal acquired by the US transducer undergoes filtering,
envelope detection, log compression, and post-processing to finally
give a grayscale representation (often called an A-line). The
grayscale signal is then interpolated and rasterized to give a B-mode
or M-mode image for display.
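The formation steps above (filtering, envelope detection, log compression) can be sketched as follows. This is a simplified illustration rather than the processing chain of any particular system; the sampling rate, band edges, dynamic range, and the simulated RF data are all assumed values.

```python
# Simplified B-mode formation sketch: band-pass filter, Hilbert-transform
# envelope detection, and log compression. All parameters are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def rf_to_bmode(rf, fs=40e6, f_lo=2e6, f_hi=10e6, dynamic_range_db=60.0):
    """Convert RF A-lines (samples x lines) to a log-compressed B-mode image."""
    # 1. Band-pass filter around an assumed transducer bandwidth.
    b, a = butter(4, [f_lo / (fs / 2), f_hi / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, rf, axis=0)
    # 2. Envelope detection: magnitude of the analytic signal.
    envelope = np.abs(hilbert(filtered, axis=0))
    # 3. Log compression to a fixed dynamic range, mapped to [0, 1].
    env_db = 20.0 * np.log10(envelope / envelope.max() + 1e-12)
    return np.clip((env_db + dynamic_range_db) / dynamic_range_db, 0.0, 1.0)

# Toy usage: simulated RF lines (random scatterers modulated at 5 MHz).
rng = np.random.default_rng(0)
t = np.arange(2048) / 40e6
rf = rng.standard_normal((2048, 64)) * np.sin(2 * np.pi * 5e6 * t)[:, None]
img = rf_to_bmode(rf)
```

Scan conversion (interpolation and rasterization to the display geometry) would follow as a separate step.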
A large number of features of different nature have been investigated
in the literature for tissue characterization and classification (as
summarized in Table I). Here we group them into three categories.
The first category relies on tissue appearance in US images, where
texture and morphology are usually analyzed. In the second category,
the RF signal or envelope-detected data is analyzed. Spectral analysis
is often performed to extract characteristics about the behavior
(property) of different tissue types in the frequency domain. The last
category is to fuse and combine different types of features.
A. Image-based approaches
Before extracting features from US images, image pre-processing
and image segmentation are usually performed. Pre-processing
consists of speckle reduction and image enhancement (see [39] for
details), while segmentation partitions the image into regions of
interest (ROIs) for further analysis [8].
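As one illustration of the speckle-reduction step, a basic adaptive Lee filter is sketched below. The window size and the crude global noise-variance estimate are assumptions for demonstration, not choices made in the surveyed papers, which use a variety of despeckling methods.

```python
# Minimal Lee-filter sketch for speckle reduction: smooth homogeneous
# regions strongly, preserve edges where local variance is high.
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, win=5):
    """Adaptive Lee filter with an assumed global noise-variance estimate."""
    mean = uniform_filter(img, win)                 # local mean
    sq_mean = uniform_filter(img ** 2, win)
    var = np.maximum(sq_mean - mean ** 2, 0.0)      # local variance
    noise_var = var.mean()                          # crude global noise estimate
    gain = var / (var + noise_var + 1e-12)          # ~0 in flat areas, ~1 at edges
    return mean + gain * (img - mean)

# Toy usage: a flat region corrupted by additive noise.
noisy = np.ones((64, 64)) + 0.3 * np.random.default_rng(1).standard_normal((64, 64))
smoothed = lee_filter(noisy)
```

In flat regions the gain approaches a constant and the output follows the local mean; near edges the local variance dominates and the original pixel is largely preserved.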
Texture features: Feature extraction from US images aims at
describing the appearance of tissue. Most studies have focused on
textural properties of speckle which represent the macroscopic ap-
pearance of scattering generated by tissue micro-structures [121]. By
identifying spatial variations in pixel intensities and quantifying them
into numerical features, texture analysis has been widely adopted and
different kinds of texture features are available in the literature: Gabor
filter responses, derivatives of Gaussian filters, wavelet transform, co-
occurrence matrices, Local Binary Patterns, fractal spaces, Markov
random fields, and so on. Here we list the texture features that have
been widely used for US tissue classification:
Grey Level Co-occurrence Matrix (GLCM) [122]: GLCM is
a well-known statistical tool for extracting texture information
from images. It measures how often different combinations of
pixel intensities occur in an image. GLCM can also be defined
as an estimate of the joint probability density function of gray-level pairs at a given pixel displacement.

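A minimal sketch of how a GLCM can be computed for a single displacement is given below; the quantization into eight gray levels and the (1, 0) pixel offset are illustrative choices, and real studies typically aggregate several offsets and angles.

```python
# Minimal GLCM sketch: count co-occurring gray-level pairs at one displacement
# and normalize into a joint probability estimate. Parameters are illustrative.
import numpy as np

def glcm(img, levels=8, dx=1, dy=0, normalize=True):
    """Co-occurrence matrix of an image with intensities in [0, 1]."""
    q = np.minimum((img * levels).astype(int), levels - 1)  # quantize
    h, w = q.shape
    # Paired views of the image shifted by (dy, dx).
    a = q[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    b = q[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1)                 # accumulate counts
    if normalize:
        m /= m.sum()                                        # joint probability
    return m

# Toy usage: GLCM of a random patch, plus one classic Haralick statistic.
img = np.random.default_rng(2).random((32, 32))
p = glcm(img)
contrast = np.sum(p * (np.subtract.outer(np.arange(8), np.arange(8)) ** 2))
```

Scalar statistics such as contrast, energy, homogeneity, and correlation are then derived from the matrix and used as the actual texture features.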
Citations
Journal ArticleDOI
TL;DR: The proposed method analyzes the gynaecological ultrasound images to identify suspicious objects or cases with health consequences for women to validate the output of modern computerized and automated technologies.

21 citations


Cites background from "Ultrasound tissue classification: a..."

  • ...Ultrasound tissue classification has been identified as one of the most active fields of clinical study, powered by several major clinical applications (Shan et al., 2020)....


  • ...Ultrasound image detection is a medical diagnosis method that utilizes dispersed or reflected ultrasound echo data to identify lesion regions in the human body, depending on the variation in acoustic impedance of various human tissues (Shan et al., 2020)....


Journal ArticleDOI
TL;DR: In this article , a handheld microwave-induced thermoacoustic imaging (MTAI) system equipped with a compact impedance matching microwave-sono and an ergonomically designed probe was presented and evaluated.
Abstract: Microwave-induced thermoacoustic imaging (MTAI) is a promising alternative for breast tumor detection due to its deep imaging depth, high resolution, and minimal biological hazards. However, due to the bulky size and complicated system configuration of conventional benchtop MTAI, it is limited to imaging various anatomical sites and its application in different clinical scenarios. In this study, a handheld MTAI system equipped with a compact impedance matching microwave-sono and an ergonomically designed probe was presented and evaluated. The probe integrates a flexible coaxial cable for microwave delivery, a miniaturized microwave antenna, a linear transducer array, and wedge-shaped polystyrene blocks for efficient acoustic coupling, achieving microwave illumination and ultrasonic detection coaxially, and enabling high signal-to-noise ratio (SNR). Phantom experiments demonstrated that the maximum imaging depth is 5 cm (SNR = 8 dB), and the lateral and axial resolutions are 1.5 mm and 0.9 mm, respectively. Finally, three healthy female volunteers of different ages were subjected to breast thermoacoustic tomography and ultrasound imaging. The results showed that the h-MTAI data are correlated with the data of ultrasound imaging, indicating the safety and effectiveness of the system. Thus, the proposed h-MTAI system might contribute to breast tumor screening.

8 citations

Journal ArticleDOI
10 Mar 2021-PLOS ONE
TL;DR: In this article, a frequency division denoising algorithm combining transform domain and spatial domain is proposed to solve the problem of speckle noise in ultrasound images, which leads to defects, such as low resolution, poor contrast, spots, and shadows, which affect the accuracy of physician analysis and diagnosis.
Abstract: Ultrasound imaging has developed into an indispensable imaging technology in medical diagnosis and treatment applications due to its unique advantages, such as safety, affordability, and convenience. With the development of data information acquisition technology, ultrasound imaging is increasingly susceptible to speckle noise, which leads to defects, such as low resolution, poor contrast, spots, and shadows, which affect the accuracy of physician analysis and diagnosis. To solve this problem, we proposed a frequency division denoising algorithm combining transform domain and spatial domain. First, the ultrasound image was decomposed into a series of sub-modal images using 2D variational mode decomposition (2D-VMD), and adaptively determined 2D-VMD parameter K value based on visual information fidelity (VIF) criterion. Then, an anisotropic diffusion filter was used to denoise low-frequency sub-modal images, and a 3D block matching algorithm (BM3D) was used to reduce noise for high-frequency images with high noise. Finally, each sub-modal image was reconstructed after processing to obtain the denoised ultrasound image. In the comparative experiments of synthetic, simulation, and real images, the performance of this method was quantitatively evaluated. Various results show that the ability of this algorithm in denoising and maintaining structural details is significantly better than that of other algorithms.

3 citations

Journal ArticleDOI
TL;DR: In this article , the photoacoustic spectroscopy (PAS) was proposed as a biomedical hybrid imaging modality based on the use of laser-generated ultrasound due to the photo-acoustic effect.
Abstract: PHOTOACOUSTIC imaging (PAI), or optoacoustic imaging, is a biomedical hybrid imaging modality based on the use of laser-generated ultrasound due to the photoacoustic effect [1-2]. The photoacoustic effect as a physical phenomenon was reported in 1880 by A. G. Bell [3-6]. The reason is the energy exchange process which transforms the absorbed light energy into kinetic energy, which in turn results in a temperature rise thus a pressure wave or sound [6]. Measuring the sound at different wavelengths produced the origin of the photoacoustic spectroscopy (PAS); also called optoacoustic

2 citations

Journal ArticleDOI
TL;DR: In this paper , an analog RSM with 3D ion transport channels was proposed to provide unprecedented high reliability and robustness, and a neural network was designed to perform the biological tissue classification task using the ultrasound signals.
Abstract: Reversible metal‐filamentary mechanism has been widely investigated to design an analog resistive switching memory (RSM) for neuromorphic hardware‐implementation. However, uncontrollable filament‐formation, inducing its reliability issues, has been a fundamental challenge. Here, an analog RSM with 3D ion transport channels that can provide unprecedentedly high reliability and robustness is demonstrated. This architecture is realized by a laser‐assisted photo‐thermochemical process, compatible with the back‐end‐of‐line process and even applicable to a flexible format. These superior characteristics also lead to the proposal of a practical adaptive learning rule for hardware neural networks that can significantly simplify the voltage pulse application methodology even with high computing accuracy. A neural network, which can perform the biological tissue classification task using the ultrasound signals, is designed, and the simulation results confirm that this practical adaptive learning rule is efficient enough to classify these weak and complicated signals with high accuracy (97%). Furthermore, the proposed RSM can work as a diffusive‐memristor at the opposite voltage polarity, exhibiting extremely stable threshold switching characteristics. In this mode, several crucial operations in biological nervous systems, such as Ca2+ dynamics and nonlinear integrate‐and‐fire functions of neurons, are successfully emulated. This reconfigurability is also exceedingly beneficial for decreasing the complexity of systems—requiring both drift‐ and diffusive‐memristors.
References
Proceedings ArticleDOI
27 Jun 2016
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

123,388 citations

Journal ArticleDOI
01 Oct 2001
TL;DR: Internal estimates monitor error, strength, and correlation and these are used to show the response to increasing the number of features used in the forest, and are also applicable to regression.
Abstract: Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the forest becomes large. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation between them. Using a random selection of features to split each node yields error rates that compare favorably to Adaboost (Y. Freund & R. Schapire, Machine Learning: Proceedings of the Thirteenth International conference, aaa, 148–156), but are more robust with respect to noise. Internal estimates monitor error, strength, and correlation and these are used to show the response to increasing the number of features used in the splitting. Internal estimates are also used to measure variable importance. These ideas are also applicable to regression.

79,257 citations

Proceedings Article
03 Dec 2012
TL;DR: A large, deep convolutional neural network achieved state-of-the-art image classification performance; as discussed by the authors, it consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
Abstract: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.

73,978 citations

Book ChapterDOI
05 Oct 2015
TL;DR: Ronneberger et al. proposed a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently, which can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks.
Abstract: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net .

49,590 citations

Journal Article
TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
Abstract: Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different "thinned" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.

33,597 citations

Frequently Asked Questions (14)
Q1. What contributions have the authors mentioned in the paper "Ultrasound tissue classification: a review" ?

In this paper, the authors present a survey on ultrasound tissue classification, focusing on recent advances in this area. The authors start with a brief review of the main clinical applications. The authors then introduce the traditional approaches, where the existing research on feature extraction and classifier design is reviewed. As deep learning approaches become popular for medical image analysis, the recent deep learning methods for tissue classification are also introduced. The authors briefly discuss the FDA-cleared techniques being used clinically. The authors conclude with a discussion of the challenges and future research directions.

The features extracted from RF signals are not subject to machine dependent processing, subsampling, interpolation, quantization and even operator-dependent settings [35]. 

Echocardiography (ultrasound imaging of the heart) is the most common cardiac imaging procedure performed in clinical practice, due to its portability, low cost, and patient acceptance. 

Grey Level Co-occurrence Matrix (GLCM) [122]: GLCM is a well-known statistical tool for extracting texture information from images. 

Convolutional Neural Networks (CNN) — Convolutional neural network is the most popular and successful deep learning architecture. 

To differentiate different cardiac tissue types, thirteen spectral parameters were computed from the power spectrum of the RF data in three different bandwidth ranges [11]. 

Considering the RF signals may not be available for spectral analysis, the work [68] proposes to extract spectral features from TRUS images, where each ROI is first scanned to form 1-D signals and then spectral features are extracted. 

It is shown that spectral analysis of RF signals, which is related to the physical scatter properties, can potentially be used for monitoring the evolution of HIFU lesions. 

Due to the large variations in US images (signals), most features have low discriminative power and it is often difficult to find a boundary separating different tissue types in the feature space. 

In clinical practice, to improve the visualization of certain tissue, the physicians often change the US imaging parameters such as contrast, depth and gain. 

Following the seminal work of Lizzi et al. [103], [130], spectral parameters of local RF signals are the most often used features, based on the hypothesis that different tissue types behave differently in the frequency domain. 

Methods have also been developed to estimate the temperature increase during the ablation process by detecting the change of sound speed, attenuation coefficient, and backscattering. 

In an earlier work [21], both the textural and spectral features were considered for analyzing the ablated regions in the tissue. 

To alleviate this problem, the best approach is to improve the standardization of imaging, via technology or guidelines, so that the procedure becomes less user-dependent and the images more consistent in appearance.