Proceedings ArticleDOI

Segmentation of Vascular Regions in Ultrasound Images: A Deep Learning Approach

27 May 2018-pp 1-5

TL;DR: A pipelined network comprising a convolutional neural network followed by unsupervised clustering is proposed to perform vessel segmentation in liver ultrasound images, motivated by the tremendous success of CNNs in object detection and localization.

Abstract: Vascular region segmentation in ultrasound images is necessary for applications such as automatic registration and surgical navigation. In this paper, a pipelined network comprising a convolutional neural network (CNN) followed by unsupervised clustering is proposed to perform vessel segmentation in liver ultrasound images. The work is motivated by the tremendous success of CNNs in object detection and localization. The CNN is trained to localize vascular regions, which are subsequently segmented by the clustering stage. The proposed network achieves 99.14% pixel accuracy and 69.62% mean region intersection over union on 132 images, outperforming some existing methods.
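The two reported metrics can be sketched as follows. This is a generic implementation of pixel accuracy and mean region intersection over union for integer label maps, not the authors' evaluation code; the exact averaging protocol used in the paper is assumed.

```python
import numpy as np

def pixel_accuracy(pred, gt):
    """Fraction of pixels whose predicted label matches the ground truth."""
    return np.mean(pred == gt)

def mean_iou(pred, gt, num_classes=2):
    """Mean intersection-over-union, averaged over classes that appear
    in either the prediction or the ground truth."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```

For a binary vessel/background task, `num_classes=2` averages the IoU of the vessel class and the background class, which is why a mean IoU of ~70% can coexist with ~99% pixel accuracy when the vessel class is small.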



Citations
Proceedings ArticleDOI
01 Nov 2019
TL;DR: This work represents a first successful step towards the automated identification of the vessel lumen in carotid artery ultrasound images, and an important step in creating a system that can independently evaluate carotid ultrasounds.
Abstract: Carotid ultrasound is a screening modality used by physicians to direct treatment in the prevention of ischemic stroke in high-risk patients. It is a time intensive process that requires highly trained technicians and physicians. Evaluation of a carotid ultrasound requires identification of the vessel wall, lumen, and plaque of the carotid artery. Automated machine learning methods for these tasks are highly limited. We propose and evaluate here single- and multi-path convolutional U-Net architectures for lumen identification from ultrasound images. We obtained de-identified images under IRB approval from 98 patients. We isolated just the internal carotid artery ultrasound images for these patients giving us a total of 302 images. We manually segmented the vessel lumen, which we use as ground truth to develop and validate our model. With a basic convolutional U-Net we obtained a 10-fold cross-validation accuracy of 95%. We also evaluated a dual-path U-Net where we modified the original image and used it as a synthetic modality but we found no improvement in accuracy. We found that the sample size made a considerable difference and thus expect our accuracy to rise as we add more training samples to the model. Our work here represents a first successful step towards the automated identification of the vessel lumen in carotid artery ultrasound images and is an important first step in creating a system that can independently evaluate carotid ultrasounds.
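The 10-fold cross-validation protocol described above can be sketched as below. The fold-assignment helper is an illustrative assumption, not the authors' code; in practice folds for medical images are often stratified per patient rather than per image.

```python
import numpy as np

def kfold_indices(n_samples, k=10, seed=0):
    """Shuffle sample indices and split them into k near-equal,
    disjoint folds; each fold serves once as the held-out test set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    return np.array_split(idx, k)
```

With 302 images and `k=10`, each fold holds 30 or 31 images; the reported 95% accuracy would be the mean over the ten held-out folds.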

9 citations

Journal ArticleDOI
TL;DR: The experiment shows that the compression of 86.2% can be achieved using the threshold of two intensity levels and the compressed image can be reconstructed with the PSNR of 45.87 dB.
Abstract: A superpixel based on-chip compression is proposed in this paper. Pixels are compared in the spatial domain and pixels with similar characteristics are grouped to form superpixels. Only one pixel corresponding to each superpixel is read to achieve the compression. The on-chip compression circuit is designed and simulated in UMC 180 nm CMOS technology. For 70% compression, the proposed design results in about 33% power saving. The reconstruction of the compressed image is performed off-chip using bilinear interpolation. Further, two enhancement approaches are developed to improve the output image quality. The first approach is based on wavelet decomposition whereas the second approach uses a deep convolutional neural network. The proposed reconstruction technique takes two orders of magnitude less time than the state-of-the-art technique. On average, it results in a peak signal-to-noise ratio (PSNR) of 30.999 dB and a structural similarity index measure (SSIM) of 0.9088 for 70% compression in natural images. The best values observed from the existing approaches for the two metrics are 28.634 dB and 0.8115, respectively. Further, the proposed technique is found useful for thermal image compression and reconstruction. The experiment shows that a compression of 86.2% can be achieved using a threshold of two intensity levels, and the compressed image can be reconstructed with a PSNR of 45.87 dB.

7 citations


Additional excerpts

  • ...connected layers of neurons [26], [27]....


Book ChapterDOI
04 Oct 2020
TL;DR: A workflow consisting of multi-class segmentation combined with selective non-rigid registration is developed, using a reduced 3D U-Net for segmentation followed by non-rigid coherent point drift (CPD) registration, achieving sufficient accuracy for integration in computer assisted liver surgery.
Abstract: Accurate hepatic vessel segmentation and registration using ultrasound (US) can contribute to beneficial navigation during hepatic surgery. However, it is challenging due to noise and speckle in US imaging and liver deformation. Therefore, a workflow is developed using a reduced 3D U-Net for segmentation, followed by non-rigid coherent point drift (CPD) registration. By means of electromagnetically tracked US, 61 3D volumes were acquired during surgery. Dice scores of 0.77, 0.65 and 0.66 were achieved for segmentation of all vasculature, hepatic vein and portal vein respectively. This compares to inter-observer variabilities of 0.85, 0.88 and 0.74 respectively. Target registration error at a tumor lesion of interest was lower (7.1 mm) when registration is performed either on the hepatic or the portal vein, compared to using all vasculature (8.9 mm). Using clinical data, we developed a workflow consisting of multi-class segmentation combined with selective non-rigid registration that leads to sufficient accuracy for integration in computer assisted liver surgery.
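The Dice scores reported above follow the usual overlap definition for binary masks; a generic sketch (the authors' per-class evaluation details are assumed):

```python
import numpy as np

def dice_score(pred, gt):
    """Dice coefficient 2*|A∩B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom
```

Comparing the achieved Dice scores (0.77, 0.65, 0.66) against the inter-observer variabilities (0.85, 0.88, 0.74) gives a sense of how close the network is to the practical ceiling set by human annotators.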

7 citations

Posted Content
TL;DR: A reduced-filter 3D U-Net implementation automatically detects hepatic vasculature in 3D US volumes, acquired either with a 3D probe or stacked from electromagnetically tracked 2D images, comparing promisingly to literature and inter-observer performance.
Abstract: Accurate hepatic vessel segmentation on ultrasound (US) images can be an important tool in the planning and execution of surgery, however proves to be a challenging task due to noise and speckle. Our method comprises a reduced filter 3D U-Net implementation to automatically detect hepatic vasculature in 3D US volumes. A comparison is made between volumes acquired with a 3D probe and stacked 2D US images based on electromagnetic tracking. Experiments are conducted on 67 scans, where 45 are used in training, 12 in validation and 10 in testing. This network architecture yields Dice scores of 0.740 and 0.781 for 3D and stacked 2D volumes respectively, comparing promising to literature and inter-observer performance (Dice = 0.879).

5 citations

Journal ArticleDOI
TL;DR: This paper presents a survey on ultrasound tissue classification, focusing on recent advances in this area, and introduces the traditional approaches and the recent deep learning methods for tissue classification.
Abstract: Ultrasound imaging is the most widespread medical imaging modality for creating images of the human body in clinical practice. Tissue classification in ultrasound has been established as one of the most active research areas, driven by many important clinical applications. In this paper, we present a survey on ultrasound tissue classification, focusing on recent advances in this area. We start with a brief review of the main clinical applications. We then introduce the traditional approaches, reviewing the existing research on feature extraction and classifier design. As deep learning approaches have become popular for medical image analysis, the recent deep learning methods for tissue classification are also introduced. We briefly discuss the FDA-cleared techniques being used clinically. We conclude with a discussion of the challenges and future research directions.

3 citations


References
Proceedings ArticleDOI
27 Jun 2016
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
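The residual reformulation above can be shown in miniature. This is a fully-connected analogue of a residual block, assuming matching input/output shapes so the identity shortcut needs no projection; it is an illustration of y = F(x) + x, not the ResNet conv-BN-ReLU stack itself.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """The stacked layers learn the residual F(x) = W2·relu(W1·x);
    the identity shortcut adds x back before the final nonlinearity,
    so the block outputs relu(F(x) + x)."""
    f = w2 @ relu(w1 @ x)  # residual function F(x)
    return relu(f + x)     # identity shortcut, then activation
```

With zero weights the block reduces to relu(x): learning the residual makes the identity mapping trivially representable, which is what eases optimization of very deep stacks.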

93,356 citations

Journal ArticleDOI
28 May 2015-Nature
TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.

33,931 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.

29,453 citations

Proceedings ArticleDOI
23 Jun 2014
TL;DR: R-CNN as discussed by the authors combines CNNs with bottom-up region proposals to localize and segment objects, and when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost.
Abstract: Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.

15,107 citations

Posted Content
TL;DR: This paper proposes a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%.
Abstract: Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012---achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also compare R-CNN to OverFeat, a recently proposed sliding-window detector based on a similar CNN architecture. We find that R-CNN outperforms OverFeat by a large margin on the 200-class ILSVRC2013 detection dataset. Source code for the complete system is available at this http URL.

13,081 citations


"Segmentation of Vascular Regions in..." refers background in this paper

  • ...Convolutional neural networks (CNNs) have become an ideal choice for high level vision tasks, for example, object detection [1], [2], [3], [4]....
