Author

Praveen Kumar Reddy Maddikunta

Bio: Praveen Kumar Reddy Maddikunta is an academic researcher from VIT University. The author has contributed to research topics including Computer science and Wireless sensor networks, has an h-index of 12, and has co-authored 15 publications receiving 639 citations.

Papers
Journal ArticleDOI
TL;DR: A deep neural network (DNN) is used to develop an effective and efficient IDS in the IoMT environment to classify and predict unforeseen cyberattacks; it performs better than existing machine learning approaches, with an increase in accuracy and a decrease in time complexity.

243 citations

Journal ArticleDOI
TL;DR: A hybrid principal component analysis (PCA)-firefly based machine learning model to classify intrusion detection system (IDS) datasets; experimental results confirm that the proposed model performs better than existing machine learning models.
Abstract: The enormous popularity of the internet across all spheres of human life has introduced various risks of malicious attacks in the network. Malicious activities can proliferate effortlessly over the network, which has led to the emergence of intrusion detection systems. The attack patterns are also dynamic, which necessitates efficient classification and prediction of cyber attacks. In this paper, we propose a hybrid principal component analysis (PCA)-firefly based machine learning model to classify intrusion detection system (IDS) datasets. The dataset used in the study is collected from Kaggle. The model first performs One-Hot encoding to transform the IDS datasets. The hybrid PCA-firefly algorithm is then used for dimensionality reduction. The XGBoost algorithm is applied to the reduced dataset for classification. A comprehensive evaluation of the model against state-of-the-art machine learning approaches justifies the superiority of the proposed approach. The experimental results confirm that the proposed model performs better than existing machine learning models.
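The pipeline this abstract describes (one-hot encoding, dimensionality reduction, then boosted-tree classification) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation: the firefly metaheuristic refinement step is omitted (plain PCA stands in for the hybrid PCA-firefly reduction), and scikit-learn's GradientBoostingClassifier stands in for XGBoost.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an IDS dataset: numeric features plus one
# categorical column (e.g. protocol type) needing one-hot encoding.
X_num, y = make_classification(n_samples=600, n_features=20, random_state=0)
proto = np.random.default_rng(0).choice(["tcp", "udp", "icmp"], size=600)
# Manual one-hot encoding of the categorical column.
X_cat = (proto[:, None] == np.array(["tcp", "udp", "icmp"])).astype(float)
X = np.hstack([X_num, X_cat])

# Dimensionality reduction (the paper further refines the reduced
# feature set with a firefly metaheuristic on top of PCA).
X_red = PCA(n_components=10, random_state=0).fit_transform(X)

# Boosted-tree classification on the reduced dataset.
X_tr, X_te, y_tr, y_te = train_test_split(X_red, y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

On a real IDS dataset the one-hot step would cover all categorical columns, and the number of retained components would be chosen from the explained-variance curve rather than fixed in advance.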

226 citations

Journal ArticleDOI
TL;DR: The proposed model is evaluated against the prevalent machine learning models and the results justify the superiority of the proposed model in terms of Accuracy, Precision, Recall, Sensitivity and Specificity.
Abstract: Diabetic Retinopathy is a major cause of vision loss and blindness affecting millions of people across the globe. Although there are established screening methods for detecting the disease - fluorescein angiography and optical coherence tomography - in the majority of cases patients remain unaware and fail to undergo such tests at an appropriate time. Early detection of the disease plays an extremely important role in preventing the vision loss that results when diabetes mellitus remains untreated for a prolonged period. Various machine learning and deep learning approaches have been applied to diabetic retinopathy datasets for classification and prediction of the disease, but the majority of them have neglected data pre-processing and dimensionality reduction, leading to biased results. The dataset used in the present study is a diabetic retinopathy dataset collected from the UCI machine learning repository. Initially, the raw dataset is normalized using the StandardScaler technique, and then Principal Component Analysis (PCA) is used to extract the most significant features in the dataset. Further, the Firefly algorithm is applied for dimensionality reduction. This reduced dataset is fed into a Deep Neural Network model for classification. The results generated by the model are evaluated against the prevalent machine learning models, and they justify the superiority of the proposed model in terms of Accuracy, Precision, Recall, Sensitivity and Specificity.
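The preprocessing chain the abstract outlines (standardization, PCA, then a neural classifier) looks roughly like the sketch below. Synthetic data stands in for the UCI diabetic retinopathy set, the Firefly reduction step is omitted, and scikit-learn's MLPClassifier stands in for the deep network.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the 19-feature UCI diabetic retinopathy data.
X, y = make_classification(n_samples=500, n_features=19, random_state=1)

pipe = make_pipeline(
    StandardScaler(),     # normalize each feature to zero mean, unit variance
    PCA(n_components=8),  # keep the most significant components
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=1),
)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
pipe.fit(X_tr, y_tr)
accuracy = pipe.score(X_te, y_te)
```

Wrapping the steps in a single pipeline ensures the scaler and PCA are fitted only on the training split, avoiding the data leakage the abstract warns about when pre-processing is neglected.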

214 citations

Journal ArticleDOI
TL;DR: An attempt has been made to explore the types of sensors suitable for smart farming, potential requirements and challenges for operating UAVs in smart agriculture, and the future applications of using UAV's in smart farming.
Abstract: In the next few years, smart farming will reach every nook of the world. The prospects of using unmanned aerial vehicles (UAVs) for smart farming are immense. However, the cost and the ease of controlling UAVs might play an important role in motivating farmers to use them in farming. Mostly, UAVs are controlled by remote controllers using radio waves. Several other technologies, such as Wi-Fi or ZigBee, are also used for controlling UAVs. Smart Bluetooth (also referred to as Bluetooth Low Energy) is a wireless technology used to transfer data over short distances. Smart Bluetooth is cheaper than other technologies and has the advantage of being available on every smart phone. Farmers could use any smart phone to operate their UAVs along with Bluetooth Smart enabled agricultural sensors in the future. However, certain requirements and challenges need to be addressed before UAVs can be operated for smart agriculture-related applications. Hence, in this article, an attempt has been made to explore the types of sensors suitable for smart farming and the potential requirements and challenges for operating UAVs in smart agriculture. We have also identified future applications of UAVs in smart farming.

201 citations

Journal ArticleDOI
TL;DR: The present study uses a principal component analysis (PCA)-based deep neural network model with the Grey Wolf Optimization (GWO) algorithm to classify the extracted features of a diabetic retinopathy dataset and shows that the proposed model offers better performance than traditional machine learning algorithms.
Abstract: Diabetic retinopathy is a prominent cause of blindness among elderly people and has become a global medical problem over the last few decades. There are several scientific and medical approaches to screen and detect this disease, but most detection is done using retinal fundus imaging. The present study uses a principal component analysis (PCA)-based deep neural network model with the Grey Wolf Optimization (GWO) algorithm to classify the extracted features of a diabetic retinopathy dataset. The use of GWO enables choosing optimal parameters for training the DNN model. The steps involved in this paper include standardization of the diabetic retinopathy dataset using the StandardScaler normalization method, followed by dimensionality reduction using PCA, then selection of optimal hyperparameters by GWO, and finally training of the dataset using a DNN model. The proposed model is evaluated based on the performance measures, namely accuracy, recall, sensitivity and specificity. The model is further compared with the traditional machine learning algorithms - support vector machine (SVM), Naive Bayes Classifier, Decision Tree and XGBoost. The results show that the proposed model offers better performance than the aforementioned algorithms.
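The training loop the abstract enumerates (StandardScaler normalization, PCA, hyperparameter selection, then DNN training) can be sketched as below. This is an illustration only: RandomizedSearchCV stands in for the Grey Wolf Optimization step, MLPClassifier for the DNN, and the search space is hypothetical.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import RandomizedSearchCV
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the diabetic retinopathy dataset.
X, y = make_classification(n_samples=400, n_features=19, random_state=2)

pipe = Pipeline([
    ("scale", StandardScaler()),        # step 1: standardization
    ("pca", PCA(n_components=8)),       # step 2: dimensionality reduction
    ("dnn", MLPClassifier(max_iter=800, random_state=2)),  # step 4: DNN
])

# Step 3: hyperparameter selection. Hypothetical search space; the
# paper tunes these with Grey Wolf Optimization, not random sampling.
params = {"dnn__hidden_layer_sizes": [(16,), (32, 16), (64, 32)],
          "dnn__alpha": [1e-4, 1e-3, 1e-2]}
search = RandomizedSearchCV(pipe, params, n_iter=4, cv=3, random_state=2)
search.fit(X, y)
best_score = search.best_score_
```

Any population-based metaheuristic (GWO, firefly, random search) slots into the same role here: propose candidate hyperparameters, score them by cross-validated accuracy, keep the best.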

151 citations


Cited by
Journal ArticleDOI
TL;DR: Two prominent dimensionality reduction techniques, Linear Discriminant Analysis (LDA) and Principal Component Analysis (PCA), are investigated on four popular Machine Learning (ML) algorithms using the publicly available Cardiotocography dataset from the University of California, Irvine Machine Learning Repository, showing that PCA outperforms LDA in all the measures.
Abstract: Due to digitization, a huge volume of data is being generated across several sectors such as healthcare, production, sales, IoT devices, the Web, and organizations. Machine learning algorithms are used to uncover patterns among the attributes of this data, so they can be used to make predictions that help medical practitioners and people at the managerial level make executive decisions. Not all the attributes in the generated datasets are important for training machine learning algorithms: some attributes might be irrelevant and some might not affect the outcome of the prediction. Ignoring or removing these irrelevant or less important attributes reduces the burden on machine learning algorithms. In this work, two prominent dimensionality reduction techniques, Linear Discriminant Analysis (LDA) and Principal Component Analysis (PCA), are investigated on four popular Machine Learning (ML) algorithms - Decision Tree Induction, Support Vector Machine (SVM), Naive Bayes Classifier and Random Forest Classifier - using the publicly available Cardiotocography (CTG) dataset from the University of California, Irvine Machine Learning Repository. The experimental results show that PCA outperforms LDA in all the measures. Also, the performance of the Decision Tree and Random Forest classifiers is not much affected by using PCA and LDA. To further analyze the performance of PCA and LDA, the experimentation is carried out on Diabetic Retinopathy (DR) and Intrusion Detection System (IDS) datasets. The results show that ML algorithms with PCA produce better results when the dimensionality of the datasets is high. When the dimensionality of the datasets is low, it is observed that ML algorithms without dimensionality reduction yield better results.
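The PCA-versus-LDA comparison the abstract describes can be sketched as follows, using synthetic data as a stand-in for the 21-feature CTG set and a single Decision Tree rather than the paper's four classifiers. One structural detail worth noting: LDA is supervised and can produce at most (number of classes - 1) components, so both reducers are capped at two components here for a fair comparison.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Synthetic 3-class stand-in for the 21-feature CTG dataset.
X, y = make_classification(n_samples=600, n_features=21, n_informative=8,
                           n_classes=3, random_state=3)

# Same downstream classifier, two different reducers; pipelines ensure
# each reducer is refit inside every cross-validation fold.
clf = DecisionTreeClassifier(random_state=3)
pca_acc = cross_val_score(
    make_pipeline(PCA(n_components=2), clf), X, y, cv=5).mean()
lda_acc = cross_val_score(
    make_pipeline(LinearDiscriminantAnalysis(n_components=2), clf),
    X, y, cv=5).mean()
```

Repeating the same loop over the other three classifiers, and over the DR and IDS datasets, reproduces the shape of the paper's experiment; which reducer wins depends on the data, as the abstract's high- versus low-dimensionality observation reflects.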

414 citations

Journal ArticleDOI
TL;DR: This paper aims to provide a survey-based tutorial on potential applications and supporting technologies of Industry 5.0 from the perspective of different industry practitioners and researchers.

314 citations

Journal ArticleDOI
TL;DR: An overview of deep learning and its applications to healthcare found in the last decade is provided and three use cases in China, Korea, and Canada are presented to show deep learning applications for COVID-19 medical image processing.

282 citations

Journal ArticleDOI
07 Apr 2021
TL;DR: In this paper, the authors provide a comprehensive survey of the current developments towards 6G, elaborate on the requirements necessary to realize 6G applications, summarize lessons learned from state-of-the-art research, and discuss technical challenges that would shed new light on future research directions towards 6G.
Abstract: Emerging applications such as the Internet of Everything, Holographic Telepresence, collaborative robots, and space and deep-sea tourism are already highlighting the limitations of existing fifth-generation (5G) mobile networks. These limitations concern data rate, latency, reliability, availability, processing, connection density and global coverage, spanning ground, underwater and space. The sixth generation (6G) of mobile networks is expected to burgeon in the coming decade to address these limitations. The development of the 6G vision, applications, technologies and standards has already become a popular research theme in academia and industry. In this paper, we provide a comprehensive survey of the current developments towards 6G. We highlight the societal and technological trends that initiate the drive towards 6G. Emerging applications that realize the demands raised by these driving trends are discussed subsequently. We also elaborate on the requirements necessary to realize 6G applications. Then we present the key enabling technologies in detail. We also outline current research projects and activities, including standardization efforts, towards the development of 6G. Finally, we summarize lessons learned from state-of-the-art research and discuss technical challenges that would shed new light on future research directions towards 6G.

273 citations

Journal ArticleDOI
TL;DR: This paper provides a comprehensive literature review of the security issues and problems that impact the deployment of blockchain systems in smart cities, and presents a detailed discussion of several key factors for the convergence of Blockchain and AI technologies that will help form a sustainable smart society.

261 citations