
Abdul Waheed

Researcher at Maharaja Agrasen Institute of Technology

Publications - 7
Citations - 666

Abdul Waheed is an academic researcher from Maharaja Agrasen Institute of Technology. The author has contributed to research in topics: Computer science & Deep learning. The author has an h-index of 2 and has co-authored 6 publications receiving 259 citations.

Papers
Journal Article

CovidGAN: Data Augmentation Using Auxiliary Classifier GAN for Improved Covid-19 Detection

TL;DR: This research presents a method to generate synthetic chest X-ray (CXR) images by developing an Auxiliary Classifier Generative Adversarial Network (ACGAN)-based model called CovidGAN, and demonstrates that the synthetic images produced by this model can be utilized to enhance the performance of a CNN for COVID-19 detection.
Journal Article

An optimized dense convolutional neural network model for disease recognition and classification in corn leaf

TL;DR: This study indicates that the performance of the optimized DenseNet model is close to that of established CNN architectures, with far fewer parameters and lower computation time.
Posted Content

Speaker and Time-aware Joint Contextual Learning for Dialogue-act Classification in Counselling Conversations.

TL;DR: In this article, the authors proposed a transformer-based architecture with novel speaker- and time-aware contextual learning for dialogue-act classification in counseling conversations, which achieved state-of-the-art performance on the HOPE dataset.
Posted Content

BloomNet: A Robust Transformer based model for Bloom's Learning Outcome Classification.

TL;DR: This paper proposed a transformer-based model named BloomNet that captures linguistic as well as semantic information to classify course learning outcomes (CLOs), and compared BloomNet with a diverse set of basic as well as strong baselines.
Journal Article

GPTAraEval: A Comprehensive Evaluation of ChatGPT on Arabic NLP

TL;DR: The authors evaluated ChatGPT on 32 diverse natural language understanding and generation tasks over more than 60 different datasets and found that, despite its success on English benchmarks, ChatGPT with few-shot (in-context) learning is consistently outperformed by much smaller dedicated models finetuned on Arabic.