Author

Harshpreet Singh

Bio: Harshpreet Singh is an academic researcher from Lovely Professional University. The author has contributed to research in topics: Cloud computing & Scalability. The author has an h-index of 7, co-authored 18 publications receiving 217 citations. Previous affiliations of Harshpreet Singh include Punjabi University & Newcastle University.

Papers
Journal ArticleDOI
TL;DR: An algorithm using an Artificial Neural Network for fault detection is proposed, which overcomes the gaps of previously implemented algorithms and provides a fault-tolerant model.
Abstract: With the immense growth of the internet and its users, cloud computing, with its incredible possibilities in ease of use, quality of service and on-demand services, has become a promising computing platform for both business and non-business computing customers. It is an adoptable technology as it provides integration of software and resources which are dynamically scalable. The dynamic environment of the cloud results in various unexpected faults and failures. The ability of a system to react gracefully to an unexpected hardware or software malfunction is known as fault tolerance. In order to achieve robustness and dependability in cloud computing, failures should be assessed and handled effectively. Various fault detection methods and architectural models have been proposed to increase the fault tolerance ability of the cloud. The objective of this paper is to propose an algorithm using an Artificial Neural Network for fault detection which will overcome the gaps of previously implemented algorithms and provide a fault-tolerant model.
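As a rough illustration of the idea described above, the sketch below trains a small feed-forward neural network to flag cloud nodes as faulty or healthy from monitored metrics. The feature set (CPU load, memory usage, response time) and the synthetic training data are assumptions made for illustration; they are not the paper's actual features or dataset.

```python
# Minimal sketch (not the paper's exact method): a small feed-forward
# network that classifies cloud nodes as faulty or healthy from
# monitored metrics. Features and training data are illustrative only.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)

# Hypothetical per-node metrics: [cpu_load, mem_usage, response_time_ms]
healthy = rng.normal(loc=[0.4, 0.5, 120], scale=[0.1, 0.1, 20], size=(200, 3))
faulty = rng.normal(loc=[0.95, 0.9, 900], scale=[0.05, 0.05, 150], size=(200, 3))

X = np.vstack([healthy, faulty])
y = np.array([0] * 200 + [1] * 200)  # 0 = healthy, 1 = faulty

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)

# Classify a newly monitored node
print(clf.predict([[0.97, 0.92, 850]]))  # -> [1], i.e. likely faulty
```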

98 citations

Journal ArticleDOI
TL;DR: This paper aims to provide a better understanding of fault tolerance techniques used in cloud environments along with some existing models, and further compares them on various parameters.
Abstract: Cloud computing is the result of the evolution of on-demand service in the computing paradigms of large-scale distributed computing. It is an adoptable technology as it provides integration of software and resources which are dynamically scalable. These systems are more or less prone to failure. Fault tolerance assesses the ability of a system to respond gracefully to an unexpected hardware or software failure. In order to achieve robustness and dependability in cloud computing, failures should be assessed and handled effectively. This paper aims to provide a better understanding of fault tolerance techniques used in cloud environments along with some existing models, and further compares them on various parameters.

64 citations

Journal ArticleDOI
TL;DR: A model which provides fault tolerance, named Replication and Resubmission based Adaptive Decision for Fault Tolerance in Real-Time Cloud computing (RRADFTRC), is proposed along with results; in the proposed model, the system endures faults and makes adaptive decisions on the basis of proper resource allocation of tasks with a new style of approach in the real-time cloud environment.
Abstract: Cloud computing, an adoptable technology, is the outcome of the evolution of on-demand service in the computing paradigm of large-scale distributed computing. With the rising demands and benefits of cloud computing infrastructure, society can take advantage of intensive computing capability services and the scalable, virtualized environment of cloud computing to carry out real-time tasks executed on a remote cloud computing node. The indeterminate latency and minimal control over the computing node affect the reliability factor. Therefore, there is a rising requirement for fault tolerance to achieve reliability in the real-time cloud infrastructure. In this paper, a model which provides fault tolerance, named "Replication and Resubmission based Adaptive Decision for Fault Tolerance in Real-Time Cloud computing (RRADFTRC)", is proposed along with results. In the proposed model, the system endures faults and makes adaptive decisions on the basis of proper resource allocation of tasks with a new style of approach in the real-time cloud environment.
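The sketch below illustrates one plausible reading of a replication-plus-resubmission strategy: a task is dispatched to the least-loaded replica nodes, and if every replica fails it is resubmitted with an adaptively widened replica set. The node model, simulated failures, and decision rule are assumptions for illustration, not the RRADFTRC algorithm as published.

```python
# Simplified sketch of replication plus resubmission with an adaptive
# decision; all details here are illustrative assumptions.
import random

def run_on_node(task, node):
    """Pretend to execute a task; each node fails with its own probability."""
    return random.random() > node["fail_prob"]

def execute_with_fault_tolerance(task, nodes, replicas=2, max_resubmits=3):
    # Replicate: dispatch the task to the least-loaded nodes first.
    candidates = sorted(nodes, key=lambda n: n["load"])
    for attempt in range(max_resubmits):
        chosen = candidates[:replicas]
        results = [run_on_node(task, n) for n in chosen]
        if any(results):
            return f"task {task} succeeded on attempt {attempt + 1}"
        # All replicas failed: resubmit, adaptively widening the replica set.
        replicas = min(replicas + 1, len(candidates))
    return f"task {task} failed after {max_resubmits} resubmissions"

nodes = [{"id": i, "load": random.random(), "fail_prob": 0.3} for i in range(5)]
print(execute_with_fault_tolerance("T1", nodes))
```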

17 citations

Proceedings ArticleDOI
08 May 2014
TL;DR: A reengineered Feature Driven Reuse Development (FDRD) process model is proposed which integrates the reuse concept with the feature driven development process model and improves the productivity of the organization and the quality of the produced product.
Abstract: As business requirements change rapidly, the need for rapid development of economically feasible software also increases. New software development techniques and models are emerging to solve the problems of rapidly changing requirements. Agile methodology is one approach to fulfilling current business requirements, as it is flexible enough to adapt to change at any phase of development. Feature Driven Development (FDD) is an agile process model based on feature development, adopted by many organizations. A limitation of agile processes is their inability to reuse components that are developed through agile processes. Adopting reuse is a challenging task, but it can be introduced at an initial level by integrating it with various development processes. Reuse-oriented development of software is considered one of the most efficient techniques to improve software quality, as it increases productivity and reduces development effort and cost. This paper proposes a reengineered Feature Driven Reuse Development (FDRD) process model which integrates the reuse concept with the feature driven development process model. The model improves the productivity of the organization and the quality of the produced product.

16 citations


Cited by
More filters
Journal ArticleDOI
TL;DR: This survey will help the industry and research community synthesize and identify the requirements for Fog computing and present some open issues, which will determine the future research direction for the Fog computing paradigm.
Abstract: Emerging technologies such as the Internet of Things (IoT) require latency-aware computation for real-time application processing. In IoT environments, connected things generate a huge amount of data, which are generally referred to as big data. Data generated from IoT devices are generally processed in a cloud infrastructure because of the on-demand services and scalability features of the cloud computing paradigm. However, processing IoT application requests on the cloud exclusively is not an efficient solution for some IoT applications, especially time-sensitive ones. To address this issue, Fog computing, which resides in between cloud and IoT devices, was proposed. In general, in the Fog computing environment, IoT devices are connected to Fog devices. These Fog devices are located in close proximity to users and are responsible for intermediate computation and storage. One of the key challenges in running IoT applications in a Fog computing environment is resource allocation and task scheduling. Fog computing research is still in its infancy, and taxonomy-based investigation into the requirements of Fog infrastructure, platform, and applications mapped to current research is still required. This survey will help the industry and research community synthesize and identify the requirements for Fog computing. This paper starts with an overview of Fog computing in which the definition of Fog computing, research trends, and the technical differences between Fog and cloud are reviewed. Then, we investigate numerous proposed Fog computing architectures and describe the components of these architectures in detail. From this, the role of each component will be defined, which will help in the deployment of Fog computing. Next, a taxonomy of Fog computing is proposed by considering the requirements of the Fog computing paradigm. We also discuss existing research works and gaps in resource allocation and scheduling, fault tolerance, simulation tools, and Fog-based microservices. Finally, by addressing the limitations of current research works, we present some open issues, which will determine the future research direction for the Fog computing paradigm.
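To make the resource-allocation challenge concrete, the sketch below shows a simple latency-aware placement rule that routes deadline-sensitive IoT tasks to a nearby fog node and everything else to the cloud. The latency figures and the decision rule are illustrative assumptions, not results or recommendations from the survey.

```python
# Illustrative sketch only: a latency-aware fog-vs-cloud placement rule.
# The RTT values and the decision logic are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    deadline_ms: float   # maximum tolerable response time
    work_ms: float       # estimated processing time

FOG_RTT_MS = 10      # assumed round-trip latency to a nearby fog node
CLOUD_RTT_MS = 120   # assumed round-trip latency to a remote cloud

def place(task: Task) -> str:
    fog_total = FOG_RTT_MS + task.work_ms
    cloud_total = CLOUD_RTT_MS + task.work_ms
    if cloud_total <= task.deadline_ms:
        return "cloud"   # cloud meets the deadline; keep fog capacity free
    if fog_total <= task.deadline_ms:
        return "fog"     # only the nearby fog node can meet the deadline
    return "reject"      # neither tier can satisfy the deadline

print(place(Task("sensor-alert", deadline_ms=50, work_ms=20)))        # fog
print(place(Task("batch-analytics", deadline_ms=500, work_ms=200)))   # cloud
```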

376 citations

Journal ArticleDOI
TL;DR: This survey presents a comprehensive overview of the security issues for different factors affecting cloud computing, encompasses the requirements for better security management, and suggests a 3-tier security architecture.

340 citations

Journal ArticleDOI
TL;DR: It is pointed out that the integration of the FC and IoE paradigms may give rise to opportunities for new applications in the realms of the IoE, Smart City, Industry 4.0, and Big Data Streaming while introducing new open issues.
Abstract: Fog computing (FC) and Internet of Everything (IoE) are two emerging technological paradigms that, to date, have been considered stand-alone. However, because of their complementary features, we expect that their integration can foster a number of computing and network-intensive pervasive applications under the incoming realm of the future Internet. Motivated by this consideration, the goal of this position paper is fivefold. First, we review the technological attributes and platforms proposed in the current literature for the stand-alone FC and IoE paradigms. Second, by leveraging some use cases as illustrative examples, we point out that the integration of the FC and IoE paradigms may give rise to opportunities for new applications in the realms of the IoE, Smart City, Industry 4.0, and Big Data Streaming, while introducing new open issues. Third, we propose a novel technological paradigm, the Fog of Everything (FoE) paradigm, that integrates FC and IoE, and then we detail the main building blocks and services of the corresponding technological platform and protocol stack. Fourth, as a proof-of-concept, we present the simulated energy-delay performance of a small-scale FoE prototype, namely, the V-FoE prototype. Afterward, we compare the obtained performance with the corresponding one of a benchmark technological platform, e.g., the V-D2D one, which exploits only device-to-device links to establish inter-thing “ad hoc” communication. Last, we point out the position of the proposed FoE paradigm over a spectrum of seemingly related recent research projects.

267 citations

Journal ArticleDOI
TL;DR: This study contributes towards identifying a unified taxonomy for security requirements, threats, vulnerabilities and countermeasures to carry out the proposed end-to-end mapping and highlights security challenges in other related areas like trust based security models, cloud-enabled applications of Big Data, Internet of Things, Software Defined Network (SDN) and Network Function Virtualization (NFV).

152 citations

Book ChapterDOI
01 Jan 1988

151 citations