Journal ArticleDOI

Comparative assessment of CNN architectures for classification of breast FNAC images.

TLDR
A comparative assessment of the models gives a new dimension to FNAC study, with the fine-tuned GoogLeNet-V3 achieving a highly satisfactory accuracy of 96.25%.
Abstract
Fine needle aspiration cytology (FNAC) entails using a narrow-gauge (25-22 G) needle to collect a sample of a lesion for microscopic examination. It allows a minimally invasive, rapid diagnosis of tissue but does not preserve its histological architecture. FNAC is commonly used for the diagnosis of breast cancer, with traditional practice based on the subjective visual assessment of breast cytopathology cell samples under a microscope to evaluate the state of various cytological features. Consequently, maintaining consistency and reproducibility of findings is challenging. However, the advent of digital imaging and computational aids to diagnosis can improve diagnostic accuracy and reduce the effective workload of pathologists. This paper presents a comparison of several fine-tuned, transfer-learned classification approaches based on deep convolutional neural networks (CNNs) for the diagnosis of the cell samples. The proposed approach has been tested using the VGG16, VGG19, ResNet-50 and GoogLeNet-V3 (aka Inception V3) CNN architectures on a dataset of 212 images (99 benign and 113 malignant), later augmented and cleansed to 2120 images (990 benign and 1130 malignant), where each network was trained on images from 80% of the cell samples and tested on the rest. This comparative assessment of the models gives a new dimension to FNAC study, with the fine-tuned GoogLeNet-V3 achieving a highly satisfactory accuracy of 96.25%.
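The fine-tuned transfer-learning setup described in the abstract can be sketched roughly as follows in tf.keras. This is a minimal illustration, not the authors' exact pipeline; the directory layout (fnac/{benign,malignant}), image size, optimizer, learning rate and epoch count are all assumptions.

# Minimal sketch of fine-tuning an ImageNet-pretrained Inception V3 on the
# augmented FNAC images; paths and hyperparameters are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (299, 299)  # Inception V3's native input resolution

# Augmented images assumed to live in fnac/benign/*.png and fnac/malignant/*.png
train_ds = tf.keras.utils.image_dataset_from_directory(
    "fnac", validation_split=0.2, subset="training", seed=42,
    image_size=IMG_SIZE, batch_size=32, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "fnac", validation_split=0.2, subset="validation", seed=42,
    image_size=IMG_SIZE, batch_size=32, label_mode="binary")

# Pretrained GoogLeNet-V3 (Inception V3) backbone without its ImageNet classifier head
base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet",
    input_shape=IMG_SIZE + (3,), pooling="avg")
base.trainable = True  # fine-tune the convolutional layers as well

model = models.Sequential([
    layers.Input(shape=IMG_SIZE + (3,)),
    layers.Rescaling(1.0 / 127.5, offset=-1),  # Inception V3 expects inputs in [-1, 1]
    base,
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),     # benign vs malignant
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=20)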


Citations
Journal ArticleDOI

Distracted driver detection by combining in-vehicle and image data using deep learning

TL;DR: This work proposes to integrate in-vehicle sensor data into a vision-based distracted driver detection model to improve the generalization ability of the system, and shows that integrating sensor data into image-based driver detection significantly increases overall performance with both fusion techniques.
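As a rough illustration of the fusion idea summarized in this citation, the sketch below combines CNN image features with a small vector of in-vehicle signals via feature-level concatenation; the backbone, input shapes, signal count and class count are assumptions, not details from the cited paper.

# Minimal sketch of feature-level fusion of camera frames and in-vehicle sensor data.
import tensorflow as tf
from tensorflow.keras import layers, Model

img_in = layers.Input(shape=(224, 224, 3), name="camera_frame")
x = tf.keras.applications.mobilenet_v2.preprocess_input(img_in)
img_feats = tf.keras.applications.MobileNetV2(
    include_top=False, weights="imagenet", pooling="avg")(x)

sensor_in = layers.Input(shape=(10,), name="vehicle_signals")  # e.g. speed, steering angle
sensor_feats = layers.Dense(32, activation="relu")(sensor_in)

fused = layers.Concatenate()([img_feats, sensor_feats])  # concatenate the two feature vectors
fused = layers.Dense(64, activation="relu")(fused)
out = layers.Dense(10, activation="softmax")(fused)      # assumed 10 distraction classes

model = Model(inputs=[img_in, sensor_in], outputs=out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])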
Journal ArticleDOI

Reviewing Machine Learning and Image Processing Based Decision-Making Systems for Breast Cancer Imaging

TL;DR: In this paper, the authors conduct a Structured Literature Review (SLR) of the use of Machine Learning (ML) and Image Processing (IP) techniques for breast cancer imaging.
Journal ArticleDOI

Deep hybrid architectures for binary classification of medical breast cancer images

TL;DR: In this paper, the authors developed and evaluated twenty-eight hybrid architectures combining seven recent deep learning techniques for feature extraction (DenseNet 201, Inception V3, Inception ResNet V2, MobileNet V1, ResNet 50, VGG16, and VGG19) for binary classification of breast pathological images over the BreakHis and FNAC datasets.
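One such hybrid architecture (pretrained CNN as a fixed feature extractor feeding a classical classifier) can be sketched as follows; the choice of DenseNet-201 plus an RBF SVM, the image size, and the placeholder data are assumptions for illustration only.

# Minimal sketch of a CNN-feature + SVM hybrid classifier.
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

extractor = tf.keras.applications.DenseNet201(
    include_top=False, weights="imagenet", pooling="avg")

def extract_features(images):
    # images: float array of shape (n, 224, 224, 3) with pixel values in [0, 255]
    x = tf.keras.applications.densenet.preprocess_input(images)
    return extractor.predict(x, verbose=0)

# Placeholder arrays standing in for real breast-image batches and labels
X_train = np.random.rand(8, 224, 224, 3) * 255
y_train = np.array([0, 1] * 4)
X_test = np.random.rand(4, 224, 224, 3) * 255
y_test = np.array([0, 1] * 2)

clf = SVC(kernel="rbf", C=1.0)           # SVM head on top of DenseNet-201 features
clf.fit(extract_features(X_train), y_train)
print("test accuracy:", clf.score(extract_features(X_test), y_test))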
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won 1st place in the ILSVRC 2015 classification task.
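The core building block of that framework, an identity-shortcut residual block, might look roughly like the sketch below in tf.keras; the filter count and input shape are illustrative, not taken from the paper's specific configurations.

# Minimal sketch of a residual block: the layers learn F(x) and the output is F(x) + x.
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters=64):
    shortcut = x                                        # identity shortcut connection
    y = layers.Conv2D(filters, 3, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.Add()([y, shortcut])                     # add the shortcut back in
    return layers.ReLU()(y)

inp = layers.Input(shape=(56, 56, 64))
out = residual_block(inp)
model = tf.keras.Model(inp, out)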
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting, using an architecture with very small (3x3) convolution filters, and shows that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
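The design principle summarized here (deep stacks of 3x3 convolutions separated by 2x2 max-pooling) can be sketched along the lines of the 16-layer configuration; layer widths follow the commonly cited VGG16 layout, but this is a rough reconstruction rather than the paper's exact model definition.

# Minimal sketch of a VGG-style network built from stacked 3x3 convolutions.
import tensorflow as tf
from tensorflow.keras import layers, models

def vgg_block(num_convs, filters):
    block = models.Sequential()
    for _ in range(num_convs):          # two stacked 3x3 convs cover a 5x5 receptive field
        block.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
    block.add(layers.MaxPooling2D(2))
    return block

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    vgg_block(2, 64), vgg_block(2, 128), vgg_block(3, 256),
    vgg_block(3, 512), vgg_block(3, 512),
    layers.Flatten(),
    layers.Dense(4096, activation="relu"),
    layers.Dense(4096, activation="relu"),
    layers.Dense(1000, activation="softmax"),   # ImageNet's 1000 classes
])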
Proceedings ArticleDOI

Going deeper with convolutions

TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
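A single Inception module, the repeated unit of that architecture, runs 1x1, 3x3 and 5x5 convolutions plus pooling in parallel and concatenates the results along the channel axis. The sketch below uses the filter counts of GoogLeNet's first ("3a") module as reported in the paper; everything else about the surrounding network is omitted.

# Minimal sketch of one Inception module with 1x1 "bottleneck" reductions.
import tensorflow as tf
from tensorflow.keras import layers

def inception_module(x, f1, f3_reduce, f3, f5_reduce, f5, pool_proj):
    b1 = layers.Conv2D(f1, 1, padding="same", activation="relu")(x)
    b2 = layers.Conv2D(f3_reduce, 1, padding="same", activation="relu")(x)  # 1x1 reduction
    b2 = layers.Conv2D(f3, 3, padding="same", activation="relu")(b2)
    b3 = layers.Conv2D(f5_reduce, 1, padding="same", activation="relu")(x)  # 1x1 reduction
    b3 = layers.Conv2D(f5, 5, padding="same", activation="relu")(b3)
    b4 = layers.MaxPooling2D(3, strides=1, padding="same")(x)
    b4 = layers.Conv2D(pool_proj, 1, padding="same", activation="relu")(b4)
    return layers.Concatenate()([b1, b2, b3, b4])   # stack branch outputs channel-wise

inp = layers.Input(shape=(28, 28, 192))
out = inception_module(inp, 64, 96, 128, 16, 32, 32)   # GoogLeNet's "3a" configuration
model = tf.keras.Model(inp, out)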
Proceedings ArticleDOI

Rethinking the Inception Architecture for Computer Vision

TL;DR: In this article, the authors explore ways to scale up networks so that the added computation is used as efficiently as possible, through suitably factorized convolutions and aggressive regularization.
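The two factorizations discussed in that paper (replacing a 5x5 convolution with two stacked 3x3s, and an nxn convolution with a 1xn followed by an nx1) can be sketched as below; filter counts, n=7 and the input shape are illustrative assumptions.

# Minimal sketch of the convolution factorizations behind Inception V3.
import tensorflow as tf
from tensorflow.keras import layers

def factorized_5x5(x, filters):
    # Two 3x3 convs match a 5x5 receptive field with fewer weights (2*9 vs 25 per filter pair).
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(y)

def factorized_nxn(x, filters, n=7):
    # Asymmetric factorization: 1xn followed by nx1 instead of a full nxn convolution.
    y = layers.Conv2D(filters, (1, n), padding="same", activation="relu")(x)
    return layers.Conv2D(filters, (n, 1), padding="same", activation="relu")(y)

inp = layers.Input(shape=(17, 17, 192))
out = factorized_nxn(factorized_5x5(inp, 192), 192)
model = tf.keras.Model(inp, out)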