scispace - formally typeset
Author

Rinkaj Goyal

Bio: Rinkaj Goyal is an academic researcher from Guru Gobind Singh Indraprastha University. The author has contributed to research in topics: Cloud computing & Software quality. The author has an h-index of 6 and has co-authored 31 publications receiving 204 citations.

Papers
Journal ArticleDOI
TL;DR: This study identifies a unified taxonomy of security requirements, threats, vulnerabilities, and countermeasures to carry out the proposed end-to-end mapping, and highlights security challenges in related areas such as trust-based security models, cloud-enabled Big Data applications, the Internet of Things, Software-Defined Networking (SDN), and Network Function Virtualization (NFV).

152 citations

Journal ArticleDOI
TL;DR: The hypothesis that the performance of KNN regression remains largely unaffected by an increasing number of interacting predictors, while simultaneously outperforming the widely used multiple linear regression (MLR), is empirically established and validated.
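The robustness of KNN regression to interacting predictors can be illustrated with a from-scratch k-nearest-neighbours sketch. This is a generic illustration, not the paper's setup: the toy interaction data, the value of k, and the distance metric are all assumptions.

```python
import math

def knn_predict(train_X, train_y, query, k=3):
    """Predict a value for `query` as the mean target of its k nearest
    training points under Euclidean distance."""
    dists = sorted((math.dist(x, query), y) for x, y in zip(train_X, train_y))
    return sum(y for _, y in dists[:k]) / k

# Toy data with a pure interaction effect: y = x0 * x1, which a plain
# linear model (no interaction term) cannot represent.
train_X = [(a, b) for a in range(5) for b in range(5)]
train_y = [a * b for a, b in train_X]

print(knn_predict(train_X, train_y, (2.1, 2.9), k=3))
```

Because KNN averages over local neighbourhoods rather than fitting global coefficients, it picks up the interaction without it ever being specified, which is the intuition behind the study's comparison with MLR.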

44 citations

Journal ArticleDOI
09 Sep 2019-Symmetry
TL;DR: This study implements an elitist Genetic Algorithm with an improved fitness function that exposes maximum faults while minimizing the cost of testing by generating less complex, asymmetric test cases, together with an iterative elimination of redundant test cases.
Abstract: Manual test case generation is an exhaustive and time-consuming process. However, automated test data generation may reduce the effort and assist in creating an adequate test suite embracing predefined goals. The quality of a test suite depends on its fault-finding behavior. Mutants have been widely accepted for simulating artificial faults that behave similarly to realistic ones for test data generation. In prior studies, the use of search-based techniques has been extensively reported to enhance the quality of test suites. Symmetry, however, can have a detrimental impact on the dynamics of a search-based algorithm, whose performance strongly depends on the evolving population breaking the "symmetry" of the search space. This study implements an elitist Genetic Algorithm (GA) with an improved fitness function to expose maximum faults while also minimizing the cost of testing by generating less complex and asymmetric test cases. It uses a selective mutation strategy to create low-cost artificial faults that result in fewer redundant and equivalent mutants. During evolution, reproduction operator selection is repeatedly guided by the traces of test execution and mutant detection, which decide whether to diversify or intensify the previous population of test cases. An iterative elimination of redundant test cases further minimizes the size of the test suite. This study uses 14 Java programs of significant size to validate the efficacy of the proposed approach in comparison to initial random tests and a widely used evolutionary framework in academia, namely EvoSuite. Empirically, our approach is found to be more stable, with significant improvement in the test case efficiency of the optimized test suite.
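The elitist GA loop described above can be sketched in miniature. Everything below is a generic illustration, not the paper's implementation: the bit-string chromosomes, the toy fitness standing in for mutant detection, and all parameter values are assumptions.

```python
import random

random.seed(1)

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]  # stand-in for the faults to expose

def fitness(chromosome):
    """Toy fitness: count of positions matching TARGET (a stand-in for
    the number of mutants a test case kills)."""
    return sum(c == t for c, t in zip(chromosome, TARGET))

def evolve(pop_size=20, generations=50, elite=2, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        next_pop = [list(c) for c in pop[:elite]]   # elitism: best survive intact
        while len(next_pop) < pop_size:
            a, b = random.sample(pop[:pop_size // 2], 2)  # select from fitter half
            cut = random.randrange(1, len(TARGET))
            child = a[:cut] + b[cut:]                     # one-point crossover
            child = [1 - g if random.random() < p_mut else g for g in child]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

Elitism guarantees the best fitness never decreases between generations; the paper's contribution lies in what the fitness function rewards (fault exposure and low test complexity), which the toy matching score above only gestures at.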

23 citations

Journal ArticleDOI
TL;DR: A conceptual security model, ADOC, is proposed to facilitate the adoption of DevSecOps in business processes that capitalize on OSS over the cloud, enabling businesses to deliver security-ready applications and services to market with accelerated velocity and sustainable agility in a cost-effective way.

22 citations

Journal ArticleDOI
15 Dec 2021
TL;DR: This study considers a set of topological features of the network for training the machine learning classifiers and contributes four community-based features to the proposed mechanism.
Abstract: The emergence of complex real-world networks has put forth a plethora of information about different domains. Link prediction is an emerging research problem that utilizes information from networks to find future relationships between nodes. The structure of real-world networks varies from having homogeneous relationships to having multiple associations. Homogeneous relationships are modeled by single-layer networks, while multiplex networks represent multiple associations. This study proposes a solution for finding future links in single-layer and multiplex networks by using supervised machine learning techniques. This study considers a set of topological features of the network for training the machine learning classifiers. The training and testing data set construction framework devised in this work helps in evaluating the proposed method on different networks. This study also contributes four community-based features to the proposed mechanism.
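The kind of topological features such classifiers train on can be sketched for a toy graph. The example network below and the particular feature choices (common neighbours, Jaccard coefficient, Adamic-Adar) are standard link-prediction heuristics used here as generic illustrations, not the paper's feature set.

```python
import math
from itertools import combinations

# Toy undirected graph as symmetric adjacency sets (hypothetical example)
graph = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"a", "c"},
}

def common_neighbours(u, v):
    return len(graph[u] & graph[v])

def jaccard(u, v):
    union = graph[u] | graph[v]
    return len(graph[u] & graph[v]) / len(union) if union else 0.0

def adamic_adar(u, v):
    # Shared neighbours with low degree count for more.
    return sum(1 / math.log(len(graph[w]))
               for w in graph[u] & graph[v] if len(graph[w]) > 1)

# Score every non-adjacent pair: higher scores suggest likelier future links;
# these per-pair feature vectors are what a supervised classifier would consume.
for u, v in combinations(graph, 2):
    if v not in graph[u]:
        print(u, v, common_neighbours(u, v), round(jaccard(u, v), 3))
```

In a supervised setting such as the one the abstract describes, each candidate pair becomes a feature vector and the label records whether the link appears in a later snapshot of the network.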

15 citations


Cited by
01 Jan 2012

3,692 citations

Posted Content
TL;DR: This paper defines and explores proofs of retrievability (PORs), a POR scheme that enables an archive or back-up service to produce a concise proof that a user can retrieve a target file F, that is, that the archive retains and reliably transmits file data sufficient for the user to recover F in its entirety.
Abstract: In this paper, we define and explore proofs of retrievability (PORs). A POR scheme enables an archive or back-up service (prover) to produce a concise proof that a user (verifier) can retrieve a target file F, that is, that the archive retains and reliably transmits file data sufficient for the user to recover F in its entirety. A POR may be viewed as a kind of cryptographic proof of knowledge (POK), but one specially designed to handle a large file (or bitstring) F. We explore POR protocols here in which the communication costs, number of memory accesses for the prover, and storage requirements of the user (verifier) are small parameters essentially independent of the length of F. In addition to proposing new, practical POR constructions, we explore implementation considerations and optimizations that bear on previously explored, related schemes. In a POR, unlike a POK, neither the prover nor the verifier need actually have knowledge of F. PORs give rise to a new and unusual security definition whose formulation is another contribution of our work. We view PORs as an important tool for semi-trusted online archives. Existing cryptographic techniques help users ensure the privacy and integrity of files they retrieve. It is also natural, however, for users to want to verify that archives do not delete or modify files prior to retrieval. The goal of a POR is to accomplish these checks without users having to download the files themselves. A POR can also provide quality-of-service guarantees, i.e., show that a file is retrievable within a certain time bound.
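The challenge-response intuition behind a POR can be sketched with a toy spot-checking protocol. This is a heavy simplification, not the paper's construction: it fetches whole blocks rather than the compact, communication-efficient proofs real PORs provide, and the key, block size, sample count, and seed are all assumptions.

```python
import hmac
import hashlib
import os
import random

BLOCK = 64

def split_blocks(data):
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

def precompute_tags(key, data, n_checks=4, seed=7):
    """Verifier, before upload: MAC a few randomly chosen blocks and keep
    only the (index, tag) pairs -- not the file itself."""
    rng = random.Random(seed)
    blocks = split_blocks(data)
    return {i: hmac.new(key, blocks[i], hashlib.sha256).digest()
            for i in rng.sample(range(len(blocks)), n_checks)}

def verify(key, tags, fetch_block):
    """Challenge the archive for each remembered block and re-check its MAC."""
    return all(hmac.new(key, fetch_block(i), hashlib.sha256).digest() == tag
               for i, tag in tags.items())

key = b"verifier-only-secret"        # assumption: key never leaves the verifier
original = os.urandom(BLOCK * 16)
tags = precompute_tags(key, original)

honest = lambda i: split_blocks(original)[i]                   # file retained
corrupt = lambda i: split_blocks(b"\x00" * len(original))[i]   # file lost

print(verify(key, tags, honest), verify(key, tags, corrupt))
```

An archive that silently dropped or altered the file fails a random spot-check with probability growing in the number of challenged blocks, which is the detection guarantee that the paper's sentinel-based constructions formalize and make bandwidth-efficient.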

1,783 citations

01 Jun 2008
TL;DR: This chapter discusses designing and developing agent-based models, building the Collectivities model step by step, and reporting on advances in agent-based modeling.
Abstract: Series Editor's Introduction. Preface. Acknowledgments.
1. The Idea of Agent-Based Modeling: 1.1 Agent-Based Modeling. 1.2 Some Examples. 1.3 The Features of Agent-Based Modeling. 1.4 Other Related Modeling Approaches.
2. Agents, Environments, and Timescales: 2.1 Agents. 2.2 Environments. 2.3 Randomness. 2.4 Time.
3. Using Agent-Based Models in Social Science Research: 3.1 An Example of Developing an Agent-Based Model. 3.2 Verification: Getting Rid of the Bugs. 3.3 Validation. 3.4 Techniques for Validation. 3.5 Summary.
4. Designing and Developing Agent-Based Models: 4.1 Modeling Toolkits, Libraries, Languages, Frameworks, and Environments. 4.2 Using NetLogo to Build Models. 4.3 Building the Collectivities Model Step by Step. 4.4 Planning an Agent-Based Model Project. 4.5 Reporting Agent-Based Model Research. 4.6 Summary.
5. Advances in Agent-Based Modeling: 5.1 Geographical Information Systems. 5.2 Learning. 5.3 Simulating Language.
Resources. Glossary. References. Index. About the Author.
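A minimal agent-based model in the spirit of the book's examples might look like the toy "voter model" below. This is a generic illustration only; the ring topology, agent count, step count, and update rule are assumptions, not taken from the book.

```python
import random

random.seed(0)

# Minimal agent-based model: each step, one randomly chosen agent copies
# the opinion of a random neighbour on a ring. Local copying is the only
# rule, yet the population drifts toward consensus over time.
N, STEPS = 30, 200
opinions = [random.choice([0, 1]) for _ in range(N)]

for _ in range(STEPS):
    i = random.randrange(N)
    neighbour = (i + random.choice([-1, 1])) % N   # ring topology
    opinions[i] = opinions[neighbour]

print(sum(opinions), "of", N, "agents hold opinion 1")
```

The point the book develops at length is that such macro-level patterns (consensus, clustering, segregation) emerge from micro-level agent rules, and that verification and validation of the simulation are as important as writing it.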

473 citations

Journal ArticleDOI
TL;DR: The purpose of this paper is to identify and discuss the main issues involved in the complex process of IoT-based investigations, particularly all legal, privacy and cloud security challenges, as well as some promising cross-cutting data reduction and forensics intelligence techniques.
Abstract: Today is the era of the Internet of Things (IoT). The recent advances in hardware and information technology have accelerated the deployment of billions of interconnected, smart and adaptive devices in critical infrastructures like health, transportation, environmental control, and home automation. Transferring data over a network without requiring any kind of human-to-computer or human-to-human interaction, brings reliability and convenience to consumers, but also opens a new world of opportunity for intruders, and introduces a whole set of unique and complicated questions to the field of Digital Forensics. Although IoT data could be a rich source of evidence, forensics professionals cope with diverse problems, starting from the huge variety of IoT devices and non-standard formats, to the multi-tenant cloud infrastructure and the resulting multi-jurisdictional litigations. A further challenge is the end-to-end encryption which represents a trade-off between users’ right to privacy and the success of the forensics investigation. Due to its volatile nature, digital evidence has to be acquired and analyzed using validated tools and techniques that ensure the maintenance of the Chain of Custody. Therefore, the purpose of this paper is to identify and discuss the main issues involved in the complex process of IoT-based investigations, particularly all legal, privacy and cloud security challenges. Furthermore, this work provides an overview of the past and current theoretical models in the digital forensics science. Special attention is paid to frameworks that aim to extract data in a privacy-preserving manner or secure the evidence integrity using decentralized blockchain-based solutions. In addition, the present paper addresses the ongoing Forensics-as-a-Service (FaaS) paradigm, as well as some promising cross-cutting data reduction and forensics intelligence techniques. 
Finally, several other research trends and open issues are presented, with emphasis on the need for proactive Forensics Readiness strategies and generally agreed-upon standards.

440 citations