Ansh Mittal
Researcher at Bharati Vidyapeeth's College of Engineering
Publications - 6
Citations - 117
Ansh Mittal is an academic researcher from Bharati Vidyapeeth's College of Engineering. The author has contributed to research in topics: Computer science & Convolutional neural network. The author has an h-index of 3 and has co-authored 3 publications receiving 61 citations.
Papers
Journal ArticleDOI
Detecting Pneumonia Using Convolutions and Dynamic Capsule Routing for Chest X-ray Images
Ansh Mittal, Deepika Kumar, Mamta Mittal, Tanzila Saba, Ibrahim Abunadi, Amjad Rehman, Sudipta Roy +6 more
TL;DR: A combination of convolutions and capsules is used to obtain two models that outperform all previously proposed models and detect pneumonia from chest X-ray (CXR) images with test accuracies of 95.33% and 95.90%, respectively.
Journal ArticleDOI
Data augmentation based morphological classification of galaxies using deep convolutional neural network
TL;DR: An implementation accentuating the use of deep learning algorithms with various data augmentation techniques and different activation functions, named daMCOGCNN (data augmentation-based MOrphological Classifier Galaxy Using Convolutional Neural Networks), is proposed for the morphological classification of galaxies.
Journal ArticleDOI
AiCNNs (Artificially-integrated Convolutional Neural Networks) for Brain Tumor Prediction
Ansh Mittal, Deepika Kumar +1 more
TL;DR: A model named Artificially-integrated Convolutional Neural Networks (AiCNNs) is proposed that accurately classifies brain MRI scans into three classes of brain tumor, as well as negative diagnoses.
Journal ArticleDOI
On Multi-Agent Deep Deterministic Policy Gradients and their Explainability for SMARTS Environment
Ansh Mittal, Aditya Malte +1 more
TL;DR: In this article, the authors discuss two approaches, MAPPOI and MADDPG, which are on-policy and off-policy RL approaches, respectively, for cooperative multi-agent learning.
Journal ArticleDOI
SAVCHOI: Detecting Suspicious Activities using Dense Video Captioning with Human Object Interactions
TL;DR: This work modifies a pre-existing approach for this task by leveraging the Human-Object Interaction model for the visual features in the Bi-Modal Transformer for the Dense Video Captioning task on the ActivityNet Captions dataset, and observes that this formulation for Dense Captioning performs better than other discussed BMT-based approaches.