scispace - formally typeset

Deepika Kumar

Researcher at Bharati Vidyapeeth's College of Engineering

Publications - 24
Citations - 427

Deepika Kumar is an academic researcher from Bharati Vidyapeeth's College of Engineering. The author has contributed to research in the topics of computer science and convolutional neural networks. The author has an h-index of 5 and has co-authored 16 publications receiving 103 citations. Previous affiliations of Deepika Kumar include GD Goenka University.

Papers
Journal ArticleDOI

An ensemble approach for classification and prediction of diabetes mellitus using soft voting classifier

TL;DR: The proposed ensemble soft voting classifier performs binary classification using an ensemble of three machine learning algorithms: random forest, logistic regression, and Naive Bayes.
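The soft-voting mechanism the abstract describes can be sketched in plain Python. This is a minimal illustration, not the authors' implementation: the probability vectors below are hypothetical stand-ins for the outputs of trained random forest, logistic regression, and Naive Bayes models.

```python
# Soft voting: average the predicted class probabilities of several
# classifiers and pick the class with the highest mean probability.

def soft_vote(prob_lists):
    """prob_lists: one [P(class 0), P(class 1)] pair per classifier.

    Returns the winning class index and the averaged probabilities.
    """
    n = len(prob_lists)
    mean = [sum(p[i] for p in prob_lists) / n for i in range(2)]
    return max(range(2), key=lambda i: mean[i]), mean

# Hypothetical example: two of the three classifiers lean toward
# class 1 (diabetic), so the averaged vote selects class 1.
label, mean = soft_vote([[0.40, 0.60],   # random forest (stand-in)
                         [0.55, 0.45],   # logistic regression (stand-in)
                         [0.30, 0.70]])  # Naive Bayes (stand-in)
# label == 1; mean ≈ [0.417, 0.583]
```

Averaging probabilities (soft voting) rather than counting hard class votes lets a confident classifier outweigh two marginal ones, which is the usual motivation for this scheme.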
Journal ArticleDOI

Detecting Pneumonia Using Convolutions and Dynamic Capsule Routing for Chest X-ray Images

TL;DR: A combination of convolutions and capsules is used to obtain two models that outperform all previously proposed models, detecting pneumonia from chest X-ray (CXR) images with test accuracies of 95.33% and 95.90%, respectively.
Journal ArticleDOI

Automatic Detection of White Blood Cancer From Bone Marrow Microscopic Images Using Convolutional Neural Networks

TL;DR: This study indicates that the DCNN model's performance is close to that of established CNN architectures, with far fewer parameters and less computation time on the retrieved dataset; thus, the model can be used effectively as a tool for determining the type of cancer in the bone marrow.
Journal ArticleDOI

Fake News Classification using transformer-based enhanced LSTM and BERT

TL;DR: In this paper, the authors proposed a model for fake news classification based on news titles, following a content-based classification approach, which uses a BERT model with its outputs connected to an LSTM layer.
Journal ArticleDOI

Classification of Fake News by Fine-tuning Deep Bidirectional Transformers based Language Model

TL;DR: This paper demonstrates that, even with minimal text pre-processing, the fine-tuned BERT model is robust enough to perform well on the downstream task of classifying news articles.