Journal Article DOI: 10.1080/21681163.2020.1818628

Automatic segmentation of brain tumour in MR images using an enhanced deep learning approach

04 Mar 2021 - Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization (Informa UK Limited) - Vol. 9, Iss. 2, pp. 121-130
Abstract: The presented manuscript proposes a fully automatic deep learning method to quantify the tumour region in brain Magnetic Resonance images as the accurate diagnosis of brain tumour region is necessa...


Topics: Image segmentation (58%), Deep learning (53%)
Citations

10 results found


Open access
01 Jan 2006

2,669 Citations


Open access Journal Article DOI: 10.1155/2021/6695108
Abstract: One of the main requirements of tumor extraction is the correct annotation and segmentation of tumor boundaries. For this purpose, we present a threefold deep learning architecture. First, classifiers are implemented with a deep convolutional neural network (CNN); second, a region-based convolutional neural network (R-CNN) is applied to the classified images to localize the tumor regions of interest. In the third and final stage, the concentrated tumor boundary is contoured for the segmentation process using the Chan-Vese segmentation algorithm. As typical edge-detection algorithms based on gradients of pixel intensity tend to fail in medical image segmentation, an active contour algorithm defined with a level set function is proposed; specifically, the Chan-Vese algorithm was applied to detect the tumor boundaries for the segmentation process. To evaluate the performance of the overall system, the Dice Score, Rand Index (RI), Variation of Information (VOI), Global Consistency Error (GCE), Boundary Displacement Error (BDE), Mean Absolute Error (MAE), and Peak Signal to Noise Ratio (PSNR) were calculated by comparing the segmented boundary area (the final output of the proposed architecture) against the demarcations of the subject specialists (the gold standard). The overall performance of the proposed architecture for both glioma and meningioma segmentation shows an average Dice Score of 0.92 (with RI of 0.9936, VOI of 0.0301, GCE of 0.004, BDE of 2.099, PSNR of 77.076, and MAE of 52.946), pointing to the high reliability of the proposed architecture.


Topics: Image segmentation (63%), Segmentation (55%), Convolutional neural network (51%)

6 Citations
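The final stage described in the abstract above is a Chan-Vese level-set segmentation whose output is scored against expert annotations with, among other metrics, the Dice overlap. The sketch below illustrates just those two pieces in Python, assuming scikit-image's chan_vese implementation and a synthetic image in place of the paper's MR data; the CNN and R-CNN stages are not shown, and every name and parameter value here is illustrative rather than taken from the paper.

```python
# Minimal sketch: Dice overlap between a predicted tumour mask and an expert
# annotation, plus an off-the-shelf Chan-Vese active-contour segmentation.
import numpy as np
from skimage.segmentation import chan_vese

def dice_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice = 2|A intersect B| / (|A| + |B|) for two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Toy example: segment a synthetic bright "tumour" blob and score the result.
yy, xx = np.mgrid[0:128, 0:128]
truth_mask = (yy - 64) ** 2 + (xx - 64) ** 2 < 20 ** 2               # ground-truth mask
image = truth_mask.astype(float) + 0.1 * np.random.randn(128, 128)   # noisy MR-like slice

pred_mask = chan_vese(image, mu=0.25, lambda1=1, lambda2=1)          # level-set segmentation
if dice_score(pred_mask, truth_mask) < 0.5:
    pred_mask = ~pred_mask     # the level-set sign is arbitrary; flip if it labelled the background

print("Dice:", dice_score(pred_mask, truth_mask))
```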


Journal Article DOI: 10.1080/02564602.2021.1937349
Abstract: The segmentation of cardiac MR images requires extensive attention, as it needs a high level of care and analysis for the diagnosis of the affected part. The advent of deep learning technology has paved...


Topics: Feature (computer vision) (63%), Segmentation (50%)

3 Citations


Journal Article DOI: 10.1080/02564602.2021.1955760
Abstract: Cardiovascular diseases are the leading cause of death worldwide. Timely and accurate detection of disease is required to reduce the load on the healthcare system and the number of deaths. For this, accurate and f...


3 Citations


Journal Article DOI: 10.1080/21681163.2021.1944914
Sumit Tripathi, Neeraj Sharma (1 institution)
Abstract: This paper proposes a dual path deep convolution network based on discriminative learning for denoising MR images. The noise in MR images causes problems in identifying the regions of interest. The...


2 Citations


References

34 results found


Open access Proceedings Article DOI: 10.1109/CVPR.2016.90
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun (1 institution)
27 Jun 2016
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers, 8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to the ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.


Topics: Deep learning (53%), Residual (53%), Convolutional neural network (53%)

93,356 Citations
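The core idea in the abstract above is to learn a residual function F(x) and output F(x) + x through an identity shortcut. A minimal sketch of one such basic residual block, written in PyTorch (the framework choice is an assumption, not the authors' original implementation):

```python
# Minimal residual block: two 3x3 convolutions form F(x); the input is added back.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = self.relu(self.bn1(self.conv1(x)))
        residual = self.bn2(self.conv2(residual))
        return self.relu(residual + x)   # identity shortcut: the block only has to learn F(x)

block = ResidualBlock(64)
print(block(torch.randn(1, 64, 32, 32)).shape)   # torch.Size([1, 64, 32, 32])
```

Because the shortcut carries the input unchanged, stacking many such blocks does not make the optimization target harder than the identity mapping, which is the intuition behind training very deep networks.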


Open access Proceedings Article
03 Dec 2012
Abstract: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.


Topics: Convolutional neural network (61%), Deep learning (59%), Dropout (neural networks) (54%)

73,871 Citations
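The "dropout" regularization mentioned in the abstract above randomly zeroes activations during training so the fully-connected layers cannot co-adapt. A minimal PyTorch sketch of a classifier head of that style; the layer sizes and dropout probability are illustrative stand-ins, not a claim about the exact published configuration:

```python
# Fully-connected classifier head with dropout applied to the first two FC layers.
import torch
import torch.nn as nn

classifier = nn.Sequential(
    nn.Flatten(),
    nn.Dropout(p=0.5),              # randomly zero half of the activations during training
    nn.Linear(256 * 6 * 6, 4096),
    nn.ReLU(inplace=True),
    nn.Dropout(p=0.5),
    nn.Linear(4096, 4096),
    nn.ReLU(inplace=True),
    nn.Linear(4096, 1000),          # 1000-way output; softmax is applied inside the loss
)

features = torch.randn(8, 256, 6, 6)                 # stand-in for convolutional feature maps
logits = classifier(features)                        # shape: (8, 1000)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 1000, (8,)))
```

At evaluation time (model.eval()), nn.Dropout becomes a no-op, which is the standard way the technique is applied at test time.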


Open access Proceedings Article
Karen Simonyan, Andrew Zisserman (1 institution)
01 Jan 2015
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.


49,857 Citations
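The argument in the abstract above for very small (3x3) filters can be made concrete with a quick parameter count: two stacked 3x3 convolutions cover the same 5x5 receptive field as a single 5x5 convolution but with fewer weights and an extra non-linearity in between. A short Python sketch of that arithmetic, with an illustrative channel count:

```python
# Why stacks of 3x3 filters are economical: parameter count for C -> C channels.
C = 256  # illustrative channel count, not a value from the paper

params_one_5x5 = 5 * 5 * C * C          # one 5x5 convolution
params_two_3x3 = 2 * (3 * 3 * C * C)    # two stacked 3x3 convolutions, same receptive field

print(params_one_5x5, params_two_3x3)   # 1638400 vs 1179648: roughly 28% fewer weights
```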


Journal Article DOI: 10.1109/5.726791
Yann LeCun, Léon Bottou, Yoshua Bengio, +3 more (5 institutions)
01 Jan 1998
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient-based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.


Topics: Neocognitron (64%), Intelligent character recognition (64%), Artificial neural network (60%)

34,930 Citations
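The convolutional networks for handwritten digit recognition described in the abstract above alternate small convolutions with pooling and finish with fully-connected layers. A minimal LeNet-5-style sketch in PyTorch, as a modern re-rendering rather than the original 1998 implementation:

```python
# LeNet-5-style network: conv -> pool -> conv -> pool -> three fully-connected layers.
import torch
import torch.nn as nn

lenet = nn.Sequential(
    nn.Conv2d(1, 6, kernel_size=5, padding=2),  # 1x28x28 -> 6x28x28
    nn.Tanh(),
    nn.AvgPool2d(2),                            # -> 6x14x14
    nn.Conv2d(6, 16, kernel_size=5),            # -> 16x10x10
    nn.Tanh(),
    nn.AvgPool2d(2),                            # -> 16x5x5
    nn.Flatten(),
    nn.Linear(16 * 5 * 5, 120),
    nn.Tanh(),
    nn.Linear(120, 84),
    nn.Linear(84, 10),                          # 10 digit classes
)

digits = torch.randn(4, 1, 28, 28)              # stand-in for MNIST-sized images
print(lenet(digits).shape)                      # torch.Size([4, 10])
```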


Open access Proceedings Article DOI: 10.1109/CVPR.2015.7298594
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, +5 more (3 institutions)
07 Jun 2015
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.


29,453 Citations
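The multi-scale processing intuition in the abstract above is realized by Inception modules: parallel 1x1, 3x3 and 5x5 convolutions plus a pooling branch, concatenated along the channel dimension, with 1x1 "bottleneck" convolutions keeping the compute budget constant. A minimal PyTorch sketch of such a module; the branch widths are illustrative and not GoogLeNet's exact published configuration:

```python
# Inception-style module: four parallel branches concatenated along channels.
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    def __init__(self, in_ch, c1x1, c3x3_reduce, c3x3, c5x5_reduce, c5x5, pool_proj):
        super().__init__()
        self.b1 = nn.Sequential(nn.Conv2d(in_ch, c1x1, 1), nn.ReLU(inplace=True))
        self.b2 = nn.Sequential(                       # 1x1 bottleneck before the 3x3 conv
            nn.Conv2d(in_ch, c3x3_reduce, 1), nn.ReLU(inplace=True),
            nn.Conv2d(c3x3_reduce, c3x3, 3, padding=1), nn.ReLU(inplace=True))
        self.b3 = nn.Sequential(                       # 1x1 bottleneck before the 5x5 conv
            nn.Conv2d(in_ch, c5x5_reduce, 1), nn.ReLU(inplace=True),
            nn.Conv2d(c5x5_reduce, c5x5, 5, padding=2), nn.ReLU(inplace=True))
        self.b4 = nn.Sequential(                       # pooling branch with 1x1 projection
            nn.MaxPool2d(3, stride=1, padding=1),
            nn.Conv2d(in_ch, pool_proj, 1), nn.ReLU(inplace=True))

    def forward(self, x):
        return torch.cat([self.b1(x), self.b2(x), self.b3(x), self.b4(x)], dim=1)

m = InceptionModule(192, 64, 96, 128, 16, 32, 32)     # illustrative branch widths
print(m(torch.randn(1, 192, 28, 28)).shape)           # torch.Size([1, 256, 28, 28])
```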


Performance Metrics
No. of citations received by the paper in previous years:

Year    Citations
2021    9
2006    1