SciSpace
Author

Parmeet Kaur

Bio: Parmeet Kaur is an academic researcher from the Jaypee Institute of Information Technology. The author has contributed to research in the topics of cloud computing and fault tolerance, has an h-index of 5, and has co-authored 40 publications receiving 204 citations.

[Chart: papers published per year]

Papers
Journal ArticleDOI
TL;DR: An augmented Shuffled Frog Leaping Algorithm (ASFLA)-based technique for resource provisioning and workflow scheduling in the Infrastructure as a Service (IaaS) cloud environment is presented; it outperforms Particle Swarm Optimization and the original SFLA.

107 citations

Journal ArticleDOI
TL;DR: A comprehensive overview of fault-tolerance issues in cloud computing is presented, emphasizing the significant concepts, architectural details, and state-of-the-art techniques and methods.

84 citations

Journal ArticleDOI
TL;DR: A comprehensive survey is presented of previous research on techniques for ensuring the privacy of patient data, including demographic data, diagnosis codes, and data containing both.

23 citations

Journal ArticleDOI
TL;DR: A data visualization and prediction tool in which HBase, an open-source, distributed, non-relational database, stores data on IPL (Indian Premier League) cricket matches and players and is used to visualize players' past performance.

18 citations

Proceedings ArticleDOI
01 Aug 2019
TL;DR: The results suggest that it is possible to train machine learning models in order to predict the region and country of terrorist attack if certain parameters are known.
Abstract: The objective of this work is to predict the region and country of a terrorist attack using machine learning approaches. The work has been carried out on the Global Terrorism Database (GTD), an open database containing a list of terrorist incidents from 1970 to 2017. Six machine learning algorithms have been applied to a selected set of features from the dataset, achieving an accuracy of up to 82%. The results suggest that it is possible to train machine learning models to predict the region and country of a terrorist attack if certain parameters are known. It is postulated that the work can be used to enhance security against terrorist attacks worldwide.
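The pipeline described in this abstract — encoding selected features of each GTD record and fitting a classifier to predict the region — can be sketched minimally as follows. The feature triples and records below are hypothetical illustrations, not the actual GTD schema, and a simple 1-nearest-neighbour rule stands in for the six algorithms evaluated in the paper.

```python
# Toy sketch: predict the region of an attack from numeric features.
# The features (year, attack_type_code, weapon_code) and the records
# are hypothetical stand-ins for preprocessed GTD rows.

def nearest_neighbor_predict(train, query):
    """Return the label of the training record whose feature vector
    is closest to `query` (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(train, key=lambda rec: dist(rec[0], query))
    return label

# (features, region) pairs
train = [
    ((1995, 2, 5), "South Asia"),
    ((2001, 3, 6), "North America"),
    ((2014, 2, 5), "South Asia"),
    ((2016, 1, 6), "Western Europe"),
]

print(nearest_neighbor_predict(train, (2013, 2, 5)))  # closest to the 2014 South Asia row
```

In practice the paper's feature selection and the choice among six algorithms do the heavy lifting; this sketch only shows the train-then-predict shape of the task.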

13 citations


Cited by
Journal ArticleDOI
TL;DR: It is argued that the rapid rise of IoT and Big Data has produced a mass of disorganized knowledge, and that a lack of widely shared understanding and adoption has led research to evolve along multiple, yet inconsistent, paths.

202 citations

Journal ArticleDOI
16 Jan 2020
TL;DR: The proposed Improved WOA for Cloud task scheduling (IWC) has better convergence speed and accuracy in searching for the optimal task scheduling plans, compared to the current metaheuristic algorithms, and can also achieve better performance on system resource utilization.
Abstract: Task scheduling in cloud computing can directly affect the resource usage and operational cost of a system. To improve the efficiency of task execution in a cloud, various metaheuristic algorithms, as well as their variations, have been proposed to optimize the scheduling. In this article, for the first time, we apply the recent whale optimization algorithm (WOA) metaheuristic to cloud task scheduling with a multiobjective optimization model, aiming to improve the performance of a cloud system with given computing resources. On that basis, we propose an advanced approach called Improved WOA for Cloud task scheduling (IWC) to further improve the optimal-solution search capability of the WOA-based method. We present the detailed implementation of IWC, and our simulation-based experiments show that the proposed IWC has better convergence speed and accuracy in searching for optimal task scheduling plans than current metaheuristic algorithms. Moreover, it also achieves better system resource utilization, for both small- and large-scale tasks.
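The optimization problem this abstract targets — mapping tasks to VMs so that the finish time of the busiest VM (the makespan) is minimized — can be sketched with a generic stochastic search. This is not the authors' IWC: a simple accept-if-better random-move search stands in for the whale-inspired position updates, and the task lengths and VM speeds are hypothetical.

```python
import random

def makespan(assignment, task_lengths, vm_speeds):
    """Finish time of the busiest VM: each VM's load is the sum of
    its tasks' lengths divided by its speed."""
    loads = [0.0] * len(vm_speeds)
    for task, vm in enumerate(assignment):
        loads[vm] += task_lengths[task] / vm_speeds[vm]
    return max(loads)

def schedule(task_lengths, vm_speeds, iterations=2000, seed=0):
    """Search for a low-makespan assignment by repeatedly moving one
    random task to a random VM and keeping the move if it helps."""
    rng = random.Random(seed)
    best = [rng.randrange(len(vm_speeds)) for _ in task_lengths]
    best_cost = makespan(best, task_lengths, vm_speeds)
    for _ in range(iterations):
        cand = best[:]
        cand[rng.randrange(len(cand))] = rng.randrange(len(vm_speeds))
        cost = makespan(cand, task_lengths, vm_speeds)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best, best_cost

tasks = [8, 4, 6, 2, 9, 5]   # task lengths (hypothetical units)
vms = [1.0, 2.0]             # VM speeds (hypothetical; VM 1 is twice as fast)
assignment, cost = schedule(tasks, vms)
print(round(cost, 2))
```

WOA-style methods replace the blind random move with population-based position updates that balance exploration and exploitation, but the objective function and solution encoding are the same shape as here.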

125 citations

Journal ArticleDOI
TL;DR: A state-of-the-art review of issues and challenges associated with existing load-balancing techniques for researchers to develop more effective algorithms is presented.
Abstract: With the growth in computing technologies, cloud computing has added a new paradigm to user services that allows accessing Information Technology services on the basis of pay-per-use at any time and any location. Owing to flexibility in cloud services, numerous organizations are shifting their business to the cloud and service providers are establishing more data centers to provide services to users. However, it is essential to provide cost-effective execution of tasks and proper utilization of resources. Several techniques have been reported in the literature to improve performance and resource use based on load balancing, task scheduling, resource management, quality of service, and workload management. Load balancing in the cloud allows data centers to avoid overloading/underloading in virtual machines, which itself is a challenge in the field of cloud computing. Therefore, it becomes a necessity for developers and researchers to design and implement a suitable load balancer for parallel and distributed cloud environments. This survey presents a state-of-the-art review of issues and challenges associated with existing load-balancing techniques for researchers to develop more effective algorithms.
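One of the classic policies this kind of survey covers — greedy assignment of each incoming task to the currently least-loaded VM, to avoid the overloading/underloading the abstract mentions — can be sketched as follows; the task lengths and VM count are illustrative.

```python
import heapq

def least_loaded_assign(task_lengths, num_vms):
    """Assign each task, in arrival order, to the VM with the
    smallest accumulated load; returns the VM index per task."""
    heap = [(0, vm) for vm in range(num_vms)]  # (load, vm id), min-heap
    heapq.heapify(heap)
    placement = []
    for length in task_lengths:
        load, vm = heapq.heappop(heap)   # least-loaded VM right now
        placement.append(vm)
        heapq.heappush(heap, (load + length, vm))
    return placement

print(least_loaded_assign([5, 3, 4, 2, 6], 2))  # [0, 1, 1, 0, 0]
```

Real cloud load balancers layer dynamic load reporting, heterogeneity, and migration on top of this greedy core, which is where the open challenges surveyed in the paper arise.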

120 citations

Journal ArticleDOI
TL;DR: A review of the development of system-call-based HIDS and future research trends is provided, namely, the reduction of the false-positive rate, the improvement of detection efficiency, and the enhancement of collaborative security.
Abstract: In a contemporary data center, Linux applications often generate a large quantity of real-time system call traces, which are not suitable for traditional host-based intrusion detection systems deployed on every single host. Training data mining models with system calls on a single host that has static computing and storage capacity is time-consuming, and intermediate datasets are not capable of being efficiently handled. It is cumbersome for the maintenance and updating of host-based intrusion detection systems (HIDS) installed on every physical or virtual host, and comprehensive system call analysis can hardly be performed to detect complex and distributed attacks among multiple hosts. Considering these limitations of current system-call-based HIDS, in this article, we provide a review of the development of system-call-based HIDS and future research trends. Algorithms and techniques relevant to system-call-based HIDS are investigated, including feature extraction methods and various data mining algorithms. The HIDS dataset issues are discussed, including currently available datasets with system calls and approaches for researchers to generate new datasets. The application of system-call-based HIDS on current embedded systems is studied, and related works are investigated. Finally, future research trends are forecast regarding three aspects, namely, the reduction of the false-positive rate, the improvement of detection efficiency, and the enhancement of collaborative security.

92 citations

Journal ArticleDOI
TL;DR: A new mobile application that automatically classifies insect pests with a cloud-based deep-learning solution is introduced to support specialists and farmers in the pest-recognition task.
Abstract: Agricultural pests cause between 20 and 40 percent loss of global crop production every year, as reported by the Food and Agriculture Organization (FAO). Smart agriculture therefore presents the best option for farmers to apply artificial intelligence techniques, integrated with modern information and communication technology, against these harmful insect pests and thereby increase crop productivity. Hence, this article introduces a new mobile application that automatically classifies pests using a deep-learning solution, supporting specialists and farmers. The developed application utilizes a faster region-based convolutional neural network (Faster R-CNN) to accomplish the recognition of insect pests based on cloud computing. Furthermore, a database of recommended pesticides is linked with the detected crop pests to guide farmers. The study has been successfully validated on five groups of pests: Aphids, Cicadellidae, Flax Budworm, Flea Beetles, and Red Spider. The proposed Faster R-CNN achieved a recognition accuracy of 99.0% on all tested pest images. Moreover, this deep learning method outperforms previous recognition methods, i.e., the Single Shot Multi-Box Detector (SSD) MobileNet and traditional back-propagation (BP) neural networks. The main prospect of this study is to deploy the developed application for online recognition of agricultural pests both in open fields, such as large farms, and in greenhouses for specific crops.

89 citations