Author

L.M. Jenila Livingston

Bio: L.M. Jenila Livingston is an academic researcher from VIT University. The author has contributed to research in the topics of Big data and Cloud computing. The author has an h-index of 4, has co-authored 35 publications, and has received 55 citations.

Papers
Journal ArticleDOI
TL;DR: This paper discusses SSL and TLS architectures, presents a survey of attacks against SSL/TLS, and highlights the factors that influence those attacks.
Abstract: The boom of the internet and web technologies has brought the whole world under a single roof. Transferring information electronically makes security an important aspect to deal with. In IP networks, SSL/TLS is the protocol that works on top of the transport layer to secure application traffic and provide end-to-end secure communication. A security hole in these protocols makes the communication channel vulnerable to eavesdropping and later modification of information. This paper discusses SSL and TLS architectures and presents a survey of attacks against SSL/TLS. It also highlights the factors that influence those attacks.
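Many of the surveyed attacks (such as protocol downgrades) succeed only when an endpoint still accepts a legacy protocol version or a weak cipher suite. As a minimal illustration, not taken from the paper, the following Python sketch uses the standard-library ssl module to pin a minimum TLS version and inspect what a server actually negotiates (the host name is a placeholder):

```python
# Minimal sketch: inspect the TLS version and cipher suite negotiated
# with a server, using only Python's standard library. The host name
# is an arbitrary example, not taken from the paper.
import socket
import ssl

def inspect_tls(host: str, port: int = 443) -> None:
    context = ssl.create_default_context()
    # Refuse legacy protocol versions that downgrade attacks target.
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print("Negotiated version:", tls.version())    # e.g. 'TLSv1.3'
            print("Cipher suite:", tls.cipher()[0])

if __name__ == "__main__":
    inspect_tls("example.com")
```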

22 citations

Journal ArticleDOI
01 Jan 2021
TL;DR: Four CNN models (ResNet50, DenseNet169, VGG16, and AlexNet), pretrained on ImageNet, are used to extract features from images and feed them into a classifier that predicts the type of waste and its corresponding category.
Abstract: Waste segregation is one of the primary challenges to recycling systems in major cities in our country. In India, 62 million tons of garbage is generated annually, of which 5.6 million tons is plastic waste; about 60 percent of this plastic is recycled every year. In addition, 11.9 million tons of the 43 million tons of solid waste produced is recycled. Though the numbers sound good, a serious problem in the recycling industry is segregating waste before recycling or other waste treatment processes. At present in India, waste is not segregated when collected from households, so a great deal of workforce and effort is needed to separate it. Moreover, people working in this industry are prone to various infections caused by toxic materials present in the waste. The idea is therefore to decrease human intervention and make the waste segregation process more productive. The proposed work aims to build an image classifier that identifies the object and detects the type of waste material using a Convolutional Neural Network. In this work, four CNN models, ResNet50, DenseNet169, VGG16, and AlexNet, pretrained on ImageNet, are used to extract features from images and feed them into a classifier that predicts the waste type and its corresponding category. The experimental results showed that DenseNet169 significantly outperformed the other three models, with ResNet50 performing closest to DenseNet169.
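The feature-extraction setup described, a CNN pretrained on ImageNet with its classification head removed, feeding a separate classifier, can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the use of torchvision's DenseNet169 weights, a scikit-learn LogisticRegression head, and the class names in the comments are all assumptions:

```python
# Sketch of the transfer-learning setup the paper describes: a CNN
# pretrained on ImageNet extracts features, and a lightweight classifier
# predicts the waste category. The classifier head and class names are
# assumptions for illustration.
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.linear_model import LogisticRegression

# DenseNet169 pretrained on ImageNet, truncated to its feature extractor.
backbone = models.densenet169(weights=models.DenseNet169_Weights.IMAGENET1K_V1)
backbone.classifier = torch.nn.Identity()   # drop the 1000-way ImageNet head
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(images):
    """images: list of PIL images -> (N, 1664) feature matrix."""
    batch = torch.stack([preprocess(img) for img in images])
    return backbone(batch).numpy()

# Fit a simple classifier head on the extracted features.
# X_train: PIL images of waste items; y_train: hypothetical labels
# such as "plastic", "metal", "paper".
# clf = LogisticRegression(max_iter=1000).fit(extract_features(X_train), y_train)
# pred = clf.predict(extract_features(X_test))
```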

20 citations

Journal ArticleDOI
01 Dec 2020
TL;DR: This paper discusses a medical chatbot that uses a machine learning algorithm to predict diseases accurately and Natural Language Processing to achieve a conversational style.
Abstract: Chatbots are used extensively to check one's state of health at any time. It is the same as going to a doctor and having medication prescribed. This paper discusses a medical chatbot that uses a machine learning algorithm to predict diseases accurately. Many machine learning algorithms can be used to predict disease; a Support Vector Machine is primarily used here to achieve precise prediction and boost the efficiency of the model. The system uses Natural Language Processing to achieve a conversational chatting style. Using this approach, people can spend less time in hospitals and receive low-cost or cost-free services.
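A minimal sketch of the described pipeline, assuming a TF-IDF vectorizer as the NLP step and scikit-learn's SVC as the Support Vector Machine; the tiny symptom/diagnosis training set is fabricated purely for illustration:

```python
# Minimal sketch: free-text symptoms are vectorized (a basic NLP step)
# and an SVM predicts the likely disease. The tiny training set is
# fabricated for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Hypothetical (symptom description, diagnosis) pairs.
texts = [
    "fever headache body ache chills",
    "sneezing runny nose sore throat",
    "chest pain shortness of breath",
    "fever headache chills sweating",
]
labels = ["flu", "common cold", "cardiac", "flu"]

model = make_pipeline(TfidfVectorizer(), SVC(kernel="linear", probability=True))
model.fit(texts, labels)

user_message = "I have a fever and a bad headache"
print(model.predict([user_message])[0])            # e.g. 'flu'
print(model.predict_proba([user_message]).max())   # confidence score
```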

14 citations

Journal ArticleDOI
TL;DR: A brand new implementation architecture is provided for developers who are interested in building virtual laboratories for performing network security experiments as part of an academic course or for research purposes.
Abstract: Virtual laboratories have become an economical aid in educational institutions. Thanks to emerging technologies such as cloud computing and rapid development in mobile operating systems and applications, virtual laboratories are becoming popular in educational as well as business organizations. Time and place are major constraints on a learning system; with the facilities provided by m-learning technology, learners and researchers are able to perform their learning tasks efficiently and in a well-organized way. In this paper, a brand new implementation architecture is provided for developers who are interested in building virtual laboratories. Mobile phones are used as the front-end GUI. XEN Cloud Platform (XCP) and OpenStack, both open source, are used to create the virtual laboratories, and with Java APIs application developers can create virtual laboratories for Android phones. The paper focuses mainly on network security experiments, which require highly available, reliable, flexible, reconfigurable, and isolated laboratories, whether for an academic course or for research purposes. By creating a large cluster of Xen servers, we are able to run multiple VMs that can act as a router, switch, host, DHCP server, FTP server, firewall, and so on, and using OpenStack virtualization we can create and configure a network as required, with an Android phone as the user interface.
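The provisioning step behind such a lab can be sketched with the openstacksdk Python client (the paper's own front end uses Java APIs on Android; the cloud name, image ID, and flavor ID below are placeholders for whatever the deployment provides):

```python
# Sketch of the provisioning step behind such a lab: create an isolated
# network and boot a VM on OpenStack that can play a role such as a DHCP
# or FTP server. Cloud name, image, and flavor are placeholders.
import openstack

conn = openstack.connect(cloud="lab-cloud")  # credentials from clouds.yaml

# An isolated per-student network keeps security experiments contained.
net = conn.network.create_network(name="netsec-lab-net")
conn.network.create_subnet(
    network_id=net.id, name="netsec-lab-subnet",
    ip_version=4, cidr="10.10.0.0/24",
)

# Boot a VM attached only to the lab network.
server = conn.compute.create_server(
    name="dhcp-server-vm",
    image_id="IMAGE_UUID",     # placeholder
    flavor_id="FLAVOR_UUID",   # placeholder
    networks=[{"uuid": net.id}],
)
conn.compute.wait_for_server(server)
print("Lab VM ready:", server.name)
```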

9 citations


Cited by
Journal ArticleDOI
TL;DR: This work presents details on how virtual remote laboratories are implemented for a subject in a post-degree program at the authors' university, together with an evaluation of the system assessing its quality along three different concepts, namely perceived usefulness, perceived ease of use, and perceived interaction.
Abstract: The use of practical laboratories is key in engineering education, providing students with the resources needed to acquire practical skills. This is especially true in distance education, where no physical interaction between lecturers and students takes place, so virtual or remote laboratories must be used. UNED has developed a system, based on cloud computing and virtualization concepts, to create and manage virtual remote laboratories, aimed at improving the way practical exercises are conducted. These Virtual Remote Laboratories (VRLabs) combine features of traditional virtual and remote laboratories but with clear differences from them: among others, VRLabs do not necessarily access real physical devices, yet they are not based on simulations either. Each student is provided with a virtualization-based virtual remote laboratory that he or she accesses through the Internet and uses to complete practical assignments. We present details on how these laboratories are implemented for a subject in a post-degree program at our university. Furthermore, we present an evaluation of the system as used in that subject, assessing its quality along three different concepts, namely perceived usefulness, perceived ease of use, and perceived interaction. This evaluation is twofold: first, we conducted a survey of the students of the subject, and second, we conducted another survey of the subject's teaching team, both performed for the 2012-2013 and 2013-2014 academic years.

38 citations

Journal ArticleDOI
07 Apr 2021
TL;DR: A classification system is proposed to accurately distinguish between human users and chatbots based on the measurements obtained from the study, and its improved efficiency is demonstrated by testing and comparison with existing schemes.
Abstract: Internet users are widely threatened by the abuse and manipulation of automated chat service programs known as chatbots, which popular chat networks exploit to distribute malware and spam. This paper surveys a commercial chat network with a series of measurements, using a set of 15 chatbots ranging from simple to advanced. Compared to bot behavior, human behavior is far more complex. Based on the measurements obtained from the study, a classification system is proposed to accurately distinguish between human users and chatbots. A Naïve Bayes classifier and an entropy classifier are used for classification, detecting chatbots with improved efficiency and accuracy: the speed of the Naïve Bayes classifier and the accuracy of the entropy classifier complement each other in the detection process. The improved efficiency of the proposed system is demonstrated by testing and comparison with existing schemes.
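One way to combine the two detectors, a fast text classifier plus an entropy measure over behavioural regularity, is sketched below. The toy training messages, the choice of message size as the behavioural feature, and the entropy threshold are all illustrative assumptions, not the paper's exact design:

```python
# Sketch of two complementary detectors: a fast Naive Bayes classifier
# over message text, and an entropy measure over behavioural features
# (here, message sizes); low entropy suggests the regularity of a bot.
import math
from collections import Counter
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def shannon_entropy(values):
    """Entropy (bits) of the empirical distribution of `values`."""
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Fabricated training messages, for illustration only.
msgs = ["buy cheap meds now", "click this link free",
        "how was your day", "see you at lunch"]
labels = ["bot", "bot", "human", "human"]
text_clf = make_pipeline(CountVectorizer(), MultinomialNB()).fit(msgs, labels)

def classify(session_msgs):
    sizes = [len(m) for m in session_msgs]
    if shannon_entropy(sizes) < 1.0:           # hypothetical threshold
        return "bot"                           # highly regular behaviour
    votes = text_clf.predict(session_msgs)
    return Counter(votes).most_common(1)[0][0]

print(classify(["buy cheap meds now", "click this link free"]))
```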

34 citations

Journal ArticleDOI
TL;DR: Wang et al. propose a correlation graph-based approach for personalized and compatible Web API recommendation in mobile app development, addressing the difficulty of selecting compatible APIs from the large volume available in Web communities.
Abstract: Using Web APIs registered in service-sharing communities for mobile app development can not only reduce development time and cost but also fully reuse state-of-the-art research outcomes across broad domains, ensuring up-to-date app development and applications. However, the large volume of APIs available in Web communities, as well as their differences, makes API selection difficult when compatibility, preferred partial APIs, and expected API functions, which are often highly varied, must all be considered. Accordingly, how to recommend a set of functionally satisfactory and compatibility-optimal APIs based on the app developer's multiple functional expectations and pre-chosen partial APIs is a significant challenge for successful app development. To address this challenge, we first construct a Web API correlation graph that incorporates the functional descriptions and compatibility information of Web APIs, and then propose a correlation graph-based approach for personalized and compatible Web API recommendation in mobile app development. Finally, through extensive experiments on a real dataset crawled from Web API websites, we demonstrate the feasibility of our proposed recommendation approach.
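The core idea, ranking candidate APIs by functional match plus graph-encoded compatibility with the developer's pre-chosen APIs, can be sketched as follows; the graph, keyword sets, and additive scoring are illustrative assumptions rather than the paper's exact model:

```python
# Sketch of the correlation-graph idea: nodes are Web APIs, edges carry
# compatibility weights, and candidates are ranked by functional match
# plus compatibility with pre-chosen APIs. All data here is illustrative.
import networkx as nx

g = nx.Graph()
g.add_edge("maps-api", "weather-api", compat=0.9)
g.add_edge("maps-api", "payments-api", compat=0.4)
g.add_edge("weather-api", "alerts-api", compat=0.8)

functions = {                      # hypothetical functional keywords
    "weather-api": {"weather", "forecast"},
    "payments-api": {"payment", "checkout"},
    "alerts-api": {"weather", "notification"},
}

def recommend(chosen, required_keywords, top_k=2):
    scores = {}
    for api, kws in functions.items():
        if api in chosen:
            continue
        functional = len(kws & required_keywords) / len(required_keywords)
        compat = sum(g[c][api]["compat"] for c in chosen if g.has_edge(c, api))
        scores[api] = functional + compat      # simple additive score
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

print(recommend(chosen={"maps-api"}, required_keywords={"weather", "notification"}))
```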

23 citations

Journal ArticleDOI
TL;DR: Accountable and Transparent TLS Certificate Management (ATCM), an alternate Public-Key Infrastructure with verifiable trusted parties, is proposed; it can handle the CA hierarchy, introduces an improved revocation system and revocation policy, and is shown to be feasible for practical use.
Abstract: The current Transport Layer Security (TLS) Public-Key Infrastructure (PKI) is a vast and complex system; it consists of the processes, policies, and entities responsible for a secure certificate management process. Among these entities, the Certificate Authority (CA) is the central and most trusted one. However, recent compromises of CAs have created a desire for other secure and transparent alternative approaches. To distribute trust and mitigate the threats and security issues of the current PKI, publicly verifiable log-based approaches have been proposed. These schemes, however, still suffer from vulnerabilities and inefficiency owing to improperly specified monitoring, unsuitable data structures, and extra latency. We propose Accountable and Transparent TLS Certificate Management (ATCM), an alternate PKI with verifiable trusted parties that makes the certificate management phases, namely certificate issuance, registration, revocation, and validation, publicly verifiable. It also guarantees strong security by preventing man-in-the-middle (MitM) attacks as long as at least one of the entities taking part in protocol signing and verification is trusted. ATCM can handle the CA hierarchy and introduces an improved revocation system and revocation policy. We have compared our performance results with state-of-the-art log-based protocols; the results and evaluations show that ATCM is feasible for practical use. Moreover, we have formally verified the core security properties of our proposed protocol using the Tamarin Prover.
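The "publicly verifiable" building block that log-based schemes rely on is an append-only log whose entire state can be summarized by a single hash. The toy Merkle log below illustrates that idea only; it is a simplification for intuition, not the ATCM protocol itself:

```python
# Toy append-only Merkle log illustrating the "publicly verifiable"
# building block behind log-based PKI schemes. A simplification for
# intuition, not the ATCM protocol.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root hash over leaf hashes (duplicating the last leaf on odd levels)."""
    level = leaves[:]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

log = []  # append-only list of certificate records

def append_record(record: str) -> bytes:
    log.append(h(record.encode()))
    return merkle_root(log)   # new root: monitors can check consistency

r1 = append_record("issue: example.com, CA=SomeCA")
r2 = append_record("revoke: example.com, CA=SomeCA")
print(r1.hex() != r2.hex())   # every operation changes the published root
```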

22 citations

Proceedings ArticleDOI
09 Mar 2020
TL;DR: This research collects all the vulnerabilities associated with a data analytics framework implemented with MongoDB on Linux containers, using a vulnerability analysis testbed with seven different analysis tools, and discovers and analyzes the root causes of fifteen vulnerabilities.
Abstract: A vulnerability management system is a disciplined, programmatic approach to discovering and mitigating vulnerabilities in a system. While securing systems against data exploitation and theft, vulnerability management works as a cyclical practice of identifying, assessing, prioritizing, remediating, and mitigating security weaknesses. In this approach, root cause analysis is conducted to find solutions to problem areas in policy, process, and standards, including configuration standards. Three major reasons make vulnerability assessment and management a vital part of IT risk management: (1) persistent threats, as attacks exploiting security vulnerabilities for financial gain and criminal agendas continue to dominate headlines; (2) regulations, as many government and industry regulations, such as the Health Insurance Portability and Accountability Act (HIPAA) and Sarbanes-Oxley (SOX), mandate rigorous vulnerability management practices; and (3) risk management, as mature organizations treat vulnerability assessment and management as a key risk management component [1]. Thus, as opposed to a reactive, technology-oriented approach, a well-organized and well-executed vulnerability management system is proactive and business-oriented. This research first collects all the vulnerabilities associated with a data analytics framework implemented with MongoDB on Linux Containers (LXCs), using a vulnerability analysis testbed with seven different analysis tools. It then prioritizes all the vulnerabilities as "Low", "Medium", or "High" according to their severity level, and discovers and analyzes the root causes of fifteen vulnerabilities of differing severities. Finally, for each root cause, the research proposes security techniques to avoid or mitigate those vulnerabilities in the current system.
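The prioritization step, bucketing findings into "Low", "Medium", and "High" by severity, reduces to a simple mapping. In the sketch below, the CVSS-style cut-offs and the sample findings are illustrative assumptions:

```python
# Sketch of the prioritization step: bucket discovered vulnerabilities
# into Low / Medium / High by severity score. Cut-offs follow the common
# CVSS convention; the findings are fabricated placeholders.
findings = [  # (vulnerability id, severity score 0-10), placeholders
    ("VULN-0001", 9.1),
    ("VULN-0002", 5.4),
    ("VULN-0003", 2.2),
]

def bucket(score: float) -> str:
    if score >= 7.0:
        return "High"
    if score >= 4.0:
        return "Medium"
    return "Low"

# Highest-severity findings are remediated first.
prioritized = sorted(findings, key=lambda f: f[1], reverse=True)
for vuln_id, score in prioritized:
    print(f"{bucket(score):6s} {score:4.1f}  {vuln_id}")
```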

20 citations