Author

Aswathy Ravikumar

Bio: Aswathy Ravikumar is an academic researcher from VIT University. The author has contributed to research in the topics of Computer science and Deep learning. The author has an h-index of 4 and has co-authored 18 publications receiving 84 citations. Previous affiliations of Aswathy Ravikumar include the College of Engineering, Trivandrum.

Papers
Journal ArticleDOI
TL;DR: This paper presents the amalgam KNN and ANFIS algorithms, where ANFIS combines the features of an adaptive neural network and a fuzzy inference system, and aims to provide higher classification accuracy than existing approaches.
Abstract: Diabetes mellitus, or simply diabetes, is a disease caused by an increased level of blood glucose. The various traditional methods available for diagnosing diabetes are based on physical and chemical tests, and these methods can have errors due to different uncertainties. A number of data mining algorithms have been designed to overcome these uncertainties. Among them, the amalgam KNN and ANFIS algorithms provide higher classification accuracy than existing approaches. The main data mining algorithms discussed in this paper are the EM algorithm, the KNN algorithm, the K-means algorithm, the amalgam KNN algorithm and the ANFIS algorithm. The EM (expectation-maximization) algorithm is used for sampling, to determine and maximize the expectation over successive iteration cycles. The KNN algorithm classifies objects by predicting their labels from the closest training examples in the feature space. The K-means algorithm follows a partitioning method based on some input parameters over a dataset of n objects. Amalgam combines the features of both KNN and K-means with some additional processing. ANFIS is the Adaptive Neuro-Fuzzy Inference System, which combines the features of an adaptive neural network and a fuzzy inference system. The dataset chosen for classification and experimental simulation is the Pima Indian Diabetes dataset from the University of California, Irvine (UCI) Repository of Machine Learning Databases. Keywords: Data mining, Diabetes, EM algorithm, KNN algorithm, K-means algorithm, amalgam KNN algorithm, ANFIS algorithm
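A minimal sketch of the K-means plus KNN idea behind the amalgam classifier on the Pima data, assuming a local copy of the UCI CSV; the file name and the exact filtering rule are assumptions, and the paper's precise amalgam processing is not reproduced here:

```python
# Hedged sketch: K-means pre-filtering followed by KNN classification on the
# Pima Indians Diabetes data. The CSV path and column layout are assumptions.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

# Assumed local copy of the UCI Pima dataset: 8 feature columns + binary label.
data = pd.read_csv("pima-indians-diabetes.csv", header=None)
X, y = data.iloc[:, :8].values, data.iloc[:, 8].values

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# Stage 1: cluster the training data and drop points whose label disagrees with
# their cluster's majority label (a simple stand-in for K-means-based filtering).
km = KMeans(n_clusters=2, n_init=10, random_state=42).fit(X_train)
keep = np.ones(len(y_train), dtype=bool)
for c in range(2):
    mask = km.labels_ == c
    majority = np.bincount(y_train[mask]).argmax()
    keep[mask & (y_train != majority)] = False

# Stage 2: KNN on the filtered training set.
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train[keep], y_train[keep])
print("test accuracy:", knn.score(X_test, y_test))
```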

57 citations

Journal ArticleDOI
03 Mar 2022-PeerJ
TL;DR: The convolutional neural network used for various image applications was studied and its acceleration on platforms such as the CPU, GPU and TPU was carried out; the network structure and the computing power and characteristics of each platform were analyzed and summarized.
Abstract: Background: In deep learning, the most significant breakthroughs in image recognition, object detection and language processing have come from the Convolutional Neural Network (CNN). With the rapid growth of data and neural networks, the performance of DNN algorithms depends on the computational power and storage capacity of the devices. Methods: In this paper, the convolutional neural network used for various image applications was studied and its acceleration on various platforms such as the CPU, GPU and TPU was carried out. The neural network structure and the computing power and characteristics of the GPU and TPU were analyzed and summarized, and their effect on accelerating the tasks is explained. A cross-platform comparison of the CNN was done using three image applications: face mask detection (object detection/computer vision), virus detection in plants (image classification, agriculture sector), and pneumonia detection from X-ray images (image classification, medical field). Results: The CNN implementation was done and a comprehensive comparison was made across the platforms to identify the performance, throughput, bottlenecks, and training time. The layer-wise execution of the CNN on the GPU and TPU is explained with a layer-wise analysis, and the impact of the fully connected layer and the convolutional layer on the network is analyzed. The challenges faced during the acceleration process are discussed and future work is identified.
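A hedged sketch of how one small Keras CNN can be timed under whichever accelerator the runtime exposes; the MNIST stand-in model is illustrative only, not the paper's face-mask, plant-virus or pneumonia networks:

```python
# Hedged sketch: train the same small CNN under the available strategy
# (TPU if the runtime exposes one, otherwise CPU/GPU) and time one epoch.
import time
import tensorflow as tf

try:
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.TPUStrategy(resolver)
except (ValueError, tf.errors.NotFoundError):
    strategy = tf.distribute.get_strategy()  # default: CPU or single GPU

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

start = time.time()
model.fit(x_train, y_train, batch_size=128, epochs=1, verbose=2)
print("seconds per epoch on", strategy.__class__.__name__, ":", time.time() - start)
```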

12 citations

Journal ArticleDOI
TL;DR: Four metaheuristic algorithms, namely the ant colony optimization algorithm, the firefly algorithm, the bat algorithm and the cuckoo search algorithm, were used as the basis for comparison.
Abstract: There are various metaheuristic algorithms which can be used to solve optimization problems efficiently. Among these, nature-inspired optimization algorithms are attractive because of their better results. In this paper, four metaheuristic algorithms, the ant colony optimization algorithm, the firefly algorithm, the bat algorithm and the cuckoo search algorithm, were used as the basis for comparison. The ant colony optimization algorithm is based on the interactions of social insects such as ants. The firefly algorithm is influenced by the flashing behavior of swarming fireflies. Cuckoo search uses the brood parasitism of cuckoo species, and the bat algorithm is inspired by the echolocation of foraging bats. Keywords: ant colony optimization algorithm, firefly algorithm, bat algorithm, cuckoo search algorithm.
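A minimal sketch of one of these methods, a firefly algorithm minimizing the sphere test function, just to illustrate the attraction-to-brighter-neighbours idea; the parameter values and test function are arbitrary choices, not the paper's experimental setup:

```python
# Hedged sketch: bare-bones firefly algorithm on the sphere function (minimization).
import numpy as np

def sphere(x):
    return np.sum(x ** 2)

def firefly(obj, dim=2, n=20, iters=100, alpha=0.2, beta0=1.0, gamma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, size=(n, dim))
    light = np.array([obj(p) for p in pos])          # lower objective = brighter
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if light[j] < light[i]:              # j is brighter, so i moves toward j
                    r2 = np.sum((pos[i] - pos[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    pos[i] += beta * (pos[j] - pos[i]) + alpha * rng.normal(size=dim)
                    light[i] = obj(pos[i])
        alpha *= 0.97                                # slowly damp the random walk
    best = np.argmin(light)
    return pos[best], light[best]

best_x, best_f = firefly(sphere)
print("best point:", best_x, "objective:", best_f)
```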

11 citations

Proceedings ArticleDOI
08 Apr 2021
TL;DR: In this article, a U-Net-based architecture for segmentation of the tumor region in histopathological images is proposed; it builds on a fully convolutional network whose design is updated and expanded to operate with fewer training images and to create more accurate segmentations.
Abstract: Cancer is the second leading cause of death worldwide; on average, one in six deaths is due to cancer. Breast cancer occurs more often in women than in men. Signs of breast cancer include a breast lump and differences in the form or texture of the nipples or breasts. Its therapy depends on the cancer stage. Early detection of cancer will reduce the death risk for patients. The paper's target is to detect the breast cancer area. To give accurate treatment to patients, symptoms should be observed properly, and an automatic prediction system is needed that will classify the tumor as benign or malignant. A general convolutional neural network focuses on the classification of images, where the input is an image and the output is a single label, but in biomedical cases it must not only discern whether a disease occurs but also locate the region of abnormality. U-Net is devoted to solving this problem. This research work proposes a U-Net-based architecture for segmentation of the tumor region in histopathological images. The network is based on a fully convolutional network, and its design is updated and expanded to operate with fewer training images and to create more accurate segmentations. The proposed method gives an overall accuracy of 94.2% with a very small dataset.
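A hedged sketch of a compact U-Net-style encoder/decoder for binary tumour-region masks; the depth, filter counts and 256x256 input size are assumptions, not the paper's exact architecture:

```python
# Hedged sketch: small U-Net-like model in Keras for per-pixel binary segmentation.
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def small_unet(input_shape=(256, 256, 3)):
    inputs = layers.Input(input_shape)

    # Contracting path: capture context.
    c1 = conv_block(inputs, 32)
    p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64)
    p2 = layers.MaxPooling2D()(c2)

    b = conv_block(p2, 128)                          # bottleneck

    # Expanding path: recover resolution, reuse encoder features via skip connections.
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
    c3 = conv_block(layers.concatenate([u2, c2]), 64)
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c3)
    c4 = conv_block(layers.concatenate([u1, c1]), 32)

    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)   # per-pixel tumour mask
    return tf.keras.Model(inputs, outputs)

model = small_unet()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```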

10 citations

Journal ArticleDOI
TL;DR: Improved data mining-based models are proposed for variable filtering and for prediction of graft status and survival period in renal transplantation, using patient profile information available prior to transplantation.
Abstract: Renal transplantation has become the treatment of choice for most patients with end-stage renal disease. Recent advances in renal transplantation, notably the matching of the Major Histocompatibility Complex (MHC) and improved immunosuppressants, have improved short-term and long-term graft survival rates. In light of these developments, optimization of kidney transplant outcomes is paramount to further augment the graft survival time and the quality of life of the patient. An intuitive understanding of the post-transplantation interaction mechanisms involving graft and host is intricate, and on account of this, prognosis of planned organ transplantation outcomes is an involved problem. Consequently, machine learning approaches based on donor and recipient data are indispensable for improved prognosis of graft outcomes. This study proposes improved data mining-based models for variable filtering and for prediction of graft status and survival period in renal transplantation, using the patient profile information available prior to the transplantation.
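A hedged sketch of a generic variable-filtering-then-prediction pipeline of the kind described; the file name "transplant_records.csv", the column names and the random forest model are placeholders, not the paper's actual variables or models:

```python
# Hedged sketch: filter-style feature selection followed by graft-status prediction.
# Assumes a hypothetical numeric table of pre-transplant patient profiles.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

data = pd.read_csv("transplant_records.csv")          # hypothetical pre-transplant profiles
X = data.drop(columns=["graft_status"])               # hypothetical label column
y = data["graft_status"]

pipe = Pipeline([
    ("filter", SelectKBest(mutual_info_classif, k=10)),   # variable filtering step
    ("model", RandomForestClassifier(n_estimators=200, random_state=42)),
])

scores = cross_val_score(pipe, X, y, cv=5, scoring="accuracy")
print("5-fold accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```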

7 citations


Cited by
Posted Content
TL;DR: This paper defines and explores proofs of retrievability (PORs); a POR scheme enables an archive or back-up service to produce a concise proof that a user can retrieve a target file F, that is, that the archive retains and reliably transmits file data sufficient for the user to recover F in its entirety.
Abstract: In this paper, we define and explore proofs of retrievability (PORs). A POR scheme enables an archive or back-up service (prover) to produce a concise proof that a user (verifier) can retrieve a target file F, that is, that the archive retains and reliably transmits file data sufficient for the user to recover F in its entirety. A POR may be viewed as a kind of cryptographic proof of knowledge (POK), but one specially designed to handle a large file (or bitstring) F. We explore POR protocols here in which the communication costs, number of memory accesses for the prover, and storage requirements of the user (verifier) are small parameters essentially independent of the length of F. In addition to proposing new, practical POR constructions, we explore implementation considerations and optimizations that bear on previously explored, related schemes. In a POR, unlike a POK, neither the prover nor the verifier need actually have knowledge of F. PORs give rise to a new and unusual security definition whose formulation is another contribution of our work. We view PORs as an important tool for semi-trusted online archives. Existing cryptographic techniques help users ensure the privacy and integrity of files they retrieve. It is also natural, however, for users to want to verify that archives do not delete or modify files prior to retrieval. The goal of a POR is to accomplish these checks without users having to download the files themselves. A POR can also provide quality-of-service guarantees, i.e., show that a file is retrievable within a certain time bound.
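A heavily hedged sketch of the challenge-response shape only: the verifier keeps a short key plus MACs for a few secret block positions and audits by spot-checking them. It omits the erasure coding and hidden sentinels of the full construction, so it illustrates the idea of auditing without downloading the file rather than the paper's actual scheme:

```python
# Hedged sketch: bare-bones spot-checking in the spirit of a POR audit.
import hashlib, hmac, os, secrets

BLOCK = 4096

def split_blocks(data):
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

def tag(key, index, block):
    return hmac.new(key, index.to_bytes(8, "big") + block, hashlib.sha256).digest()

# Setup (verifier/user): keep a short key plus tags for a few secret positions.
key = secrets.token_bytes(32)
file_data = os.urandom(10 * BLOCK)                  # stand-in for the file F
blocks = split_blocks(file_data)
checked = secrets.SystemRandom().sample(range(len(blocks)), k=3)
stored = {i: tag(key, i, blocks[i]) for i in checked}
# In the real construction the file is encrypted and sentinel positions are
# hidden, so the prover cannot tell which blocks will ever be checked.

# The archive (prover) stores the outsourced blocks.
archive = blocks

# Audit: ask the prover for the checked blocks and verify their MACs.
responses = {i: archive[i] for i in checked}        # prover's reply
ok = all(hmac.compare_digest(tag(key, i, blk), stored[i])
         for i, blk in responses.items())
print("audit passed:", ok)
```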

1,783 citations

Journal ArticleDOI
01 Jun 2021
TL;DR: The proposed ensemble soft-voting classifier performs binary classification using an ensemble of three machine learning algorithms, viz. random forest, logistic regression, and Naive Bayes.
Abstract: Diabetes is a dreadful disease identified by escalated levels of glucose in the blood. Machine learning algorithms help in the identification and prediction of diabetes at an early stage. The main objective of this study is to predict diabetes mellitus with better accuracy using an ensemble of machine learning algorithms. The Pima Indians Diabetes dataset has been considered for experimentation; it gathers details of patients with and without diabetes. The proposed ensemble soft-voting classifier gives binary classification and uses an ensemble of three machine learning algorithms, viz. random forest, logistic regression, and Naive Bayes, for the classification. Empirical evaluation of the proposed methodology has been conducted against state-of-the-art methodologies and base classifiers such as AdaBoost, Logistic Regression, Support Vector Machine, Random Forest, Naive Bayes, Bagging, GradientBoost, XGBoost and CatBoost, taking accuracy, precision, recall and F1-score as the evaluation criteria. The proposed ensemble approach gives the highest accuracy, precision, recall, and F1-score, with values of 79.04%, 73.48%, 71.45% and 80.6% respectively, on the PIMA diabetes dataset. Further, the efficiency of the proposed methodology has also been compared and analysed on a breast cancer dataset, where the proposed ensemble soft-voting classifier gives 97.02% accuracy.
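A hedged sketch of the soft-voting ensemble described above, wired up with scikit-learn; the CSV path, preprocessing and default hyperparameters are assumptions:

```python
# Hedged sketch: soft-voting ensemble of random forest, logistic regression
# and Naive Bayes on the Pima Indians Diabetes data.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = pd.read_csv("pima-indians-diabetes.csv", header=None)   # assumed local copy
X, y = data.iloc[:, :8], data.iloc[:, 8]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=42)),
        ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
        ("nb", GaussianNB()),
    ],
    voting="soft",                    # average predicted class probabilities
)
ensemble.fit(X_train, y_train)
print("test accuracy:", ensemble.score(X_test, y_test))
```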

141 citations

Proceedings ArticleDOI
01 Dec 2015
TL;DR: A decision support system is proposed that uses the AdaBoost algorithm with a decision stump as the base classifier, whose classification accuracy is greater than that of Support Vector Machine, Naive Bayes and Decision Tree.
Abstract: Diabetes is a disease caused by an elevated level of sugar concentration in the blood. Various computerized information systems have been designed using diverse classifiers for predicting and diagnosing diabetes, and selecting appropriate classifiers clearly increases the accuracy and efficiency of the system. Here a decision support system is proposed that uses the AdaBoost algorithm with a decision stump as the base classifier for classification. Additionally, Support Vector Machine, Naive Bayes and Decision Tree are also implemented as base classifiers for the AdaBoost algorithm for accuracy verification. The accuracy obtained for the AdaBoost algorithm with a decision stump as the base classifier is 80.72%, which is greater than that of Support Vector Machine, Naive Bayes and Decision Tree.
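A hedged sketch of AdaBoost with a depth-1 decision tree ("decision stump") as base classifier, compared against SVM, Naive Bayes and a full decision tree; the dataset path and cross-validation setup are assumptions rather than the paper's exact protocol:

```python
# Hedged sketch: AdaBoost + decision stump vs. standalone SVM / NB / decision tree.
import pandas as pd
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

data = pd.read_csv("pima-indians-diabetes.csv", header=None)   # assumed local copy
X, y = data.iloc[:, :8], data.iloc[:, 8]

models = {
    # scikit-learn >= 1.2 uses `estimator=`; older versions use `base_estimator=`.
    "AdaBoost + stump": AdaBoostClassifier(
        estimator=DecisionTreeClassifier(max_depth=1), n_estimators=100),
    "SVM": SVC(),
    "Naive Bayes": GaussianNB(),
    "Decision tree": DecisionTreeClassifier(),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=10, scoring="accuracy").mean()
    print(f"{name}: {acc:.3f}")
```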

106 citations

Book ChapterDOI
01 Jan 2018
TL;DR: J48 and Naive Bayesian techniques are used for the early detection of diabetes, and a model is proposed and elaborated so that medical practitioners can explore and better understand the discovered rules.
Abstract: The diabetes mellitus disease (DMD), commonly referred to as diabetes, is a significant public health problem. Predicting the disease at an early stage can save valuable human resources. Voluminous datasets are available in various medical data repositories in the form of clinical patient records and pathological test reports, which can be used in real-world applications to disclose hidden knowledge. Various data mining (DM) methods can be applied to these datasets, stored in data warehouses, for predicting DMD. The aim of this research is to predict diabetes using DM techniques such as classification and clustering, of which classification is one of the most suitable methods for predicting diabetes. In this study, J48 and Naive Bayesian techniques are used for the early detection of diabetes. This research will help to propose a quicker and more efficient technique for diagnosis of the disease, leading to timely and proper treatment of patients. We have also proposed a model and elaborated it step by step, in order to help medical practitioners explore and better understand the discovered rules. The study also shows the algorithm applied to a dataset collected from a college medical hospital as well as from an online repository. In the end, the article also outlines how an intelligent diagnostic system works. A clinical trial of the proposed method involving local patients is still continuing and requires longer research and experimentation.
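A hedged sketch of the two classifiers named above, approximated in Python: Weka's J48 (C4.5) has no exact scikit-learn equivalent, so an entropy-based tree stands in for it, and the dataset path and label column are hypothetical:

```python
# Hedged sketch: J48-style decision tree (entropy criterion) and Naive Bayes
# evaluated with 10-fold cross-validation on a hypothetical clinical dataset.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

data = pd.read_csv("diabetes_records.csv")                    # hypothetical dataset
X, y = data.drop(columns=["diabetic"]), data["diabetic"]      # hypothetical label column

for name, model in [("J48-style tree", DecisionTreeClassifier(criterion="entropy")),
                    ("Naive Bayes", GaussianNB())]:
    acc = cross_val_score(model, X, y, cv=10, scoring="accuracy").mean()
    print(f"{name}: {acc:.3f}")
```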

78 citations

Journal Article
TL;DR: This paper proposes the first privacy-preserving mechanism that allows public auditing on shared data stored in the cloud, and exploits ring signatures to compute the verification information needed to audit the integrity of shared data.
Abstract: We believe that sharing data among multiple users is perhaps one of the most engaging features that motivates cloud storage. A unique problem introduced during public auditing of shared data in the cloud is how to preserve identity privacy from the TPA, because the identities of signers on shared data may indicate that a particular user in the group, or a particular block in the shared data, is a more valuable target than others. With cloud storage services, it is commonplace for data to be not only stored in the cloud but also shared across multiple users. However, public auditing of such shared data, while preserving identity privacy, remains an open challenge. In this paper, we propose the first privacy-preserving mechanism that allows public auditing on shared data stored in the cloud. In particular, we exploit ring signatures to compute the verification information needed to audit the integrity of shared data. With our mechanism, the identity of the signer on each block in shared data is kept private from a third-party auditor (TPA), who is still able to publicly verify the integrity of shared data without retrieving the entire file. Our experimental results demonstrate the effectiveness and efficiency of the proposed mechanism when auditing shared data; during public auditing, the content of private data belonging to a personal user is not disclosed to the third-party auditor.
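A heavily hedged sketch of the block-level audit shape only: the actual mechanism uses ring signatures so the TPA cannot tell which group member signed each block, whereas here ordinary Ed25519 signatures stand in (no identity privacy), purely to show a TPA verifying sampled blocks without downloading the whole file:

```python
# Hedged sketch: per-block signatures checked by a third-party auditor on a
# random sample of blocks. Ordinary signatures replace ring signatures here.
import os, secrets
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

BLOCK = 4096
signer = Ed25519PrivateKey.generate()
public_key = signer.public_key()                      # known to the TPA

# Data owner: split the shared data into blocks and sign each one.
data = os.urandom(8 * BLOCK)                          # stand-in for shared data
blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
tags = [signer.sign(i.to_bytes(8, "big") + b) for i, b in enumerate(blocks)]
# Blocks and tags are uploaded to the cloud server.

# TPA audit: challenge a few random block indices and verify block/tag pairs.
challenge = secrets.SystemRandom().sample(range(len(blocks)), k=2)
ok = True
for i in challenge:
    block, tag = blocks[i], tags[i]                   # returned by the cloud server
    try:
        public_key.verify(tag, i.to_bytes(8, "big") + block)
    except InvalidSignature:
        ok = False
print("audit passed:", ok)
```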

72 citations