Author
Rinkaj Goyal
Bio: Rinkaj Goyal is an academic researcher at Guru Gobind Singh Indraprastha University. He has contributed to research on topics including cloud computing and population dynamics, has an h-index of 6, and has co-authored 31 publications receiving 204 citations.
Papers
[...]
TL;DR: This study contributes towards identifying a unified taxonomy of security requirements, threats, vulnerabilities, and countermeasures to carry out the proposed end-to-end mapping, and highlights security challenges in related areas such as trust-based security models, cloud-enabled applications of Big Data, the Internet of Things (IoT), Software-Defined Networking (SDN), and Network Function Virtualization (NFV).
Abstract: The world is witnessing phenomenal growth in cloud-enabled services, which is expected to continue with further technological innovation. However, the associated security and privacy challenges inhibit widespread adoption and therefore require further exploration. Researchers from academia, industry, and standards organizations have proposed potential solutions to these challenges in previously published studies. The narrative review presented in this survey, however, provides an integrationist end-to-end mapping of cloud security requirements, identified threats, known vulnerabilities, and recommended countermeasures, which does not appear to have been presented in one place before. Additionally, this study contributes towards identifying a unified taxonomy of security requirements, threats, vulnerabilities, and countermeasures to carry out the proposed end-to-end mapping. Further, it highlights security challenges in related areas such as trust-based security models, cloud-enabled applications of Big Data, the Internet of Things (IoT), Software-Defined Networking (SDN), and Network Function Virtualization (NFV).
78 citations
[...]
TL;DR: This study empirically establishes and validates the hypothesis that the performance of KNN regression remains largely unaffected by an increasing number of interacting predictors while simultaneously outperforming the widely used multiple linear regression (MLR).
Abstract: Accurate fault prediction is an indispensable, indeed critical, activity in software engineering. In fault prediction model development, combining metrics significantly improves the predictive capability of the model, but it also raises the issues of handling an increased number of predictors and of the nonlinearity that emerges from complex interactions among metrics. Ordinary least squares (OLS) based parametric regression techniques cannot effectively model such nonlinearity with a large number of predictors, because the global parametric function to fit the data is not known beforehand. In our previous studies [1–3], we showed the impact of interaction in the combined-metrics approach to fault prediction and statistically established the corresponding increase in the predictive accuracy of the model with interaction. In this study we use K-Nearest Neighbor (KNN) regression, a nonparametric regression technique otherwise well known for classification tasks in the data mining community. Through the results derived here, we empirically establish and validate the hypothesis that the performance of KNN regression remains largely unaffected by an increasing number of interacting predictors while providing superior performance over the widely used multiple linear regression (MLR).
29 citations
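The core contrast above can be illustrated with a minimal Python/scikit-learn sketch: a purely additive linear model cannot capture a response driven by multiplicative interactions of the predictors, while KNN regression, being nonparametric, can. The synthetic data, model parameters, and scoring choice below are illustrative assumptions, not the paper's experimental setup (which used software metrics data).

# Minimal sketch: KNN regression vs. MLR when the response is driven by
# predictor interactions. Synthetic data; all choices are assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))  # five synthetic predictors (stand-ins for metrics)
# Response built from interactions between predictors, the regime the
# hypothesis concerns; an additive linear model cannot represent it.
y = X[:, 0] * X[:, 1] + 0.5 * X[:, 2] * X[:, 3] + rng.normal(scale=0.1, size=2000)

for name, model in [("MLR", LinearRegression()),
                    ("KNN", KNeighborsRegressor(n_neighbors=10))]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {r2:.3f}")
# Expected shape of the result: MLR scores near zero, because products of
# independent centred predictors are linearly unpredictable, while KNN
# recovers a substantial share of the variance.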
[...]
TL;DR: This study implements an elitist Genetic Algorithm with an improved fitness function that exposes maximum faults while minimizing the cost of testing by generating less complex, asymmetric test cases, together with an iterative elimination of redundant test cases.
Abstract: Manual test case generation is an exhaustive and time-consuming process. Automated test data generation, however, may reduce the effort and assist in creating an adequate test suite that meets predefined goals. The quality of a test suite depends on its fault-finding behavior. Mutants have been widely accepted for simulating artificial faults that behave similarly to realistic ones in test data generation. Prior studies have extensively reported the use of search-based techniques to enhance the quality of test suites. Symmetry, however, can have a detrimental impact on the dynamics of a search-based algorithm, whose performance strongly depends on the evolving population breaking the “symmetry” of the search space. This study implements an elitist Genetic Algorithm (GA) with an improved fitness function to expose maximum faults while also minimizing the cost of testing by generating less complex and asymmetric test cases. It uses a selective mutation strategy to create low-cost artificial faults that result in fewer redundant and equivalent mutants. During evolution, reproduction operator selection is repeatedly guided by traces of test execution and mutant detection, which decide whether to diversify or intensify the previous population of test cases. An iterative elimination of redundant test cases further minimizes the size of the test suite. This study uses 14 Java programs of significant size to validate the efficacy of the proposed approach against initial random tests and Evosuite, an evolutionary framework widely used in academia. Empirically, our approach is found to be more stable, with a significant improvement in the test case efficiency of the optimized test suite.
14 citations
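To make the search-based idea concrete, here is a minimal sketch of an elitist GA for test-suite optimization over a precomputed mutant kill matrix. The kill matrix, cost values, fitness weights, and GA parameters are synthetic assumptions for demonstration; the paper's actual fitness function, selective mutation strategy, and execution-trace guidance are richer.

# Toy elitist GA: evolve subsets of a test pool to kill many mutants cheaply.
import random

random.seed(1)
N_TESTS, N_MUTANTS, POP, GENS = 30, 40, 20, 50

# Synthetic kill matrix: kills[t][m] is True if test t kills mutant m.
kills = [[random.random() < 0.1 for _ in range(N_MUTANTS)] for _ in range(N_TESTS)]
cost = [random.randint(1, 5) for _ in range(N_TESTS)]  # proxy for test complexity

def fitness(mask):
    """Mutants killed minus a small penalty for total test cost (toy weights)."""
    killed = {m for t, on in enumerate(mask) if on
              for m in range(N_MUTANTS) if kills[t][m]}
    return len(killed) - 0.05 * sum(c for c, on in zip(cost, mask) if on)

def crossover(a, b):
    cut = random.randrange(1, N_TESTS)  # single-point crossover
    return a[:cut] + b[cut:]

def flip_bits(mask, rate=0.05):
    """GA mutation operator: randomly toggle test membership."""
    return [bit ^ (random.random() < rate) for bit in mask]

pop = [[random.random() < 0.3 for _ in range(N_TESTS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:2]                      # elitism: best suites survive intact
    parents = pop[:POP // 2]
    children = [flip_bits(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(elite))]
    pop = elite + children

best = max(pop, key=fitness)
print("best fitness:", round(fitness(best), 2), "suite size:", sum(best))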
[...]
TL;DR: This study contributes towards the development of an efficient predictive model involving interactions among predictor variables, with a reduced set of influential terms obtained by applying stepwise regression.
Abstract: Fault prediction is a pre-eminent area of empirical software engineering that has witnessed a huge surge over the last couple of decades. In the development of a fault prediction model, combining metrics yields better explanatory power. Since the metrics used in combination are often correlated and do not have a purely additive effect, the impact of one metric on another, i.e. interaction, should be taken into account. Considering interaction effects in regression-based fault prediction models is uncommon in software engineering; two-term and three-term interactions, however, are analyzed in detail in the social and behavioral sciences. Interactions beyond three terms are scarce, because interaction effects at such a high order are difficult to interpret. In our earlier findings (Softw Qual Prof 15(3):15-23) we statistically established the pertinence of considering interaction between metrics, resulting in a considerable improvement in the explanatory power of the corresponding predictive model. In that approach, however, the number of variables involved in fault prediction also increases with interaction. Furthermore, the interacting variables do not contribute equally to the prediction capability of the model. This study contributes towards the development of an efficient predictive model involving interactions among predictor variables, with a reduced set of influential terms obtained by applying stepwise regression.
13 citations
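The following sketch illustrates the general idea of forward stepwise selection over main effects and two-way interaction terms, using adjusted R^2 as the stopping criterion. The synthetic data and the particular criterion are assumptions for illustration, not the paper's exact procedure.

# Toy forward stepwise selection: keep only influential terms, including
# interactions, stopping when adjusted R^2 no longer improves.
import numpy as np
from itertools import combinations
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n, k = 200, 4
X = rng.normal(size=(n, k))                  # four synthetic software metrics
y = 2 * X[:, 0] + X[:, 1] * X[:, 2] + rng.normal(scale=0.2, size=n)

# Candidate terms: main effects plus all two-way interactions (4 + 6 = 10).
terms = {f"x{i}": X[:, i] for i in range(k)}
terms.update({f"x{i}*x{j}": X[:, i] * X[:, j] for i, j in combinations(range(k), 2)})

def adj_r2(cols):
    Z = np.column_stack([terms[c] for c in cols])
    r2 = LinearRegression().fit(Z, y).score(Z, y)
    return 1 - (1 - r2) * (n - 1) / (n - len(cols) - 1)

selected, best = [], -np.inf
while True:
    remaining = [c for c in terms if c not in selected]
    if not remaining:
        break
    cand = max(remaining, key=lambda c: adj_r2(selected + [c]))
    score = adj_r2(selected + [cand])
    if score <= best:                        # stop when no term improves the fit
        break
    selected.append(cand)
    best = score
print("retained terms:", selected)           # expect x0 and x1*x2 to dominate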
[...]
TL;DR: Simulation results validate a critical proportion of committed individuals as a plausible basis for ideological shifts in societies and delineate the role of evangelism through social and non-social methods in propagating views.
Abstract: Opinions continuously evolve in society. While conservative ideas may be replaced by new ones, some views remain immutable. Opinion formation and innovation diffusion have attracted considerable attention in the last decade owing to their widespread applicability across diverse domains of science and technology. We analyse scenarios in which interactions at the micro level result in changes of opinion at the macro level in a population of predefined ideological groups. We use the Bass model, otherwise well known for describing innovation diffusion, to compute adoption probabilities of three opinion states: zealots, extremists, and moderates. Thereafter, we employ cellular automata to explore the emergence of opinions through local and overlapping interactions between agents (people). The NetLogo environment has been used to develop an agent-based model simulating different ideological scenarios. Simulation results validate a critical proportion of committed individuals as a plausible basis for ideological shifts in societies. The analysis elucidates the role of moderates in the population and the emergence of varying opinions. The results further delineate the role of evangelism, through social and non-social methods, in propagating views. The results obtained from these simulations endorse the conclusions reported in previous studies regarding the role of a critical zealot population and the preponderance of non-social influence. We, however, use a two-phase opinion model with different experimental settings. Additionally, we examine global observables, such as the entropy of the system, to reveal common patterns of adoption in views and the evenness of the population after reaching a consensus.
8 citations
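A minimal Python sketch of the two ingredients described above: a Bass-style adoption probability (innovation coefficient p plus imitation coefficient q weighted by the adopting neighbourhood fraction) applied on a cellular-automaton grid seeded with immutable zealots. The paper's model was built in NetLogo with three opinion states; this two-state reduction and all parameter values are illustrative assumptions.

# Toy Bass-diffusion cellular automaton with committed (zealot) agents.
import numpy as np

rng = np.random.default_rng(0)
N, p, q, steps = 50, 0.01, 0.3, 100          # grid size, innovation, imitation
state = np.zeros((N, N), dtype=int)          # 0 = moderate, 1 = adopted opinion
zealots = rng.random((N, N)) < 0.05          # committed agents, never change
state[zealots] = 1

for _ in range(steps):
    # Fraction of von Neumann neighbours (torus wrap) holding the opinion.
    nb = sum(np.roll(state, s, axis=a) for s in (-1, 1) for a in (0, 1)) / 4.0
    adopt = rng.random((N, N)) < (p + q * nb)    # Bass adoption probability
    # Adoption is monotone here, as in classic innovation diffusion.
    state = np.where(zealots | adopt | (state == 1), 1, state)

print("final adoption fraction:", state.mean())

Varying the zealot proportion in this sketch is one way to probe the critical-committed-minority effect the abstract describes: below a threshold seeding, diffusion stays driven mostly by the non-social term p.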
Cited by
[...]
01 Jan 2012
3,352 citations
Posted Content
[...]
TL;DR: This paper defines and explores proofs of retrievability (PORs); a POR scheme enables an archive or back-up service to produce a concise proof that a user can retrieve a target file F, that is, that the archive retains and reliably transmits file data sufficient for the user to recover F in its entirety.
Abstract: In this paper, we define and explore proofs of retrievability (PORs). A POR scheme enables an archive or back-up service (prover) to produce a concise proof that a user (verifier) can retrieve a target file F, that is, that the archive retains and reliably transmits file data sufficient for the user to recover F in its entirety. A POR may be viewed as a kind of cryptographic proof of knowledge (POK), but one specially designed to handle a large file (or bitstring) F. We explore POR protocols here in which the communication costs, number of memory accesses for the prover, and storage requirements of the user (verifier) are small parameters essentially independent of the length of F. In addition to proposing new, practical POR constructions, we explore implementation considerations and optimizations that bear on previously explored, related schemes. In a POR, unlike a POK, neither the prover nor the verifier need actually have knowledge of F. PORs give rise to a new and unusual security definition whose formulation is another contribution of our work. We view PORs as an important tool for semi-trusted online archives. Existing cryptographic techniques help users ensure the privacy and integrity of files they retrieve. It is also natural, however, for users to want to verify that archives do not delete or modify files prior to retrieval. The goal of a POR is to accomplish these checks without users having to download the files themselves. A POR can also provide quality-of-service guarantees, i.e., show that a file is retrievable within a certain time bound.
1,655 citations
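As rough intuition for the challenge-response shape of a POR (not the construction in the paper), the toy sketch below has a verifier spot-check randomly chosen blocks against locally stored MAC tags. Real PORs avoid per-block verifier state and add encryption, block permutation, hidden sentinels, and error-correcting codes; every name and parameter here is a simplification for illustration.

# Toy spot-checking audit, illustrating why plain sampling is not enough.
import hmac, hashlib, os, random

random.seed(0)
KEY = os.urandom(16)
BLOCK = 64

def tag(index, block):
    """MAC binding a block to its position in the file."""
    return hmac.new(KEY, index.to_bytes(4, "big") + block, hashlib.sha256).digest()

# Verifier: compute per-block tags before uploading (a real POR would not
# store state linear in the file size; this is the toy simplification).
file_blocks = [os.urandom(BLOCK) for _ in range(1000)]
tags = {i: tag(i, b) for i, b in enumerate(file_blocks)}

def audit(prover_blocks, n_challenges=20):
    """Challenge the archive on a few random positions."""
    for i in random.sample(range(len(prover_blocks)), n_challenges):
        if not hmac.compare_digest(tag(i, prover_blocks[i]), tags[i]):
            return False
    return True

print("honest archive passes:", audit(file_blocks))
corrupted = list(file_blocks)
corrupted[3] = os.urandom(BLOCK)             # damage a single block
print("corrupted archive passes:", audit(corrupted))  # very likely still True

The second audit usually still passes, since 20 challenges rarely hit the one damaged block in a thousand; this is exactly why POR constructions layer error-correcting codes on top of spot checks, so that any corruption large enough to prevent recovery of F is also large enough to be caught.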
[...]
01 Jun 2008
TL;DR: This book discusses designing and developing agent-based models, builds the Collectivities model step by step, and reports on advances in agent-based modeling.
Abstract (book contents): Series Editor's Introduction; Preface; Acknowledgments
1. The Idea of Agent-Based Modeling: 1.1 Agent-Based Modeling; 1.2 Some Examples; 1.3 The Features of Agent-Based Modeling; 1.4 Other Related Modeling Approaches
2. Agents, Environments, and Timescales: 2.1 Agents; 2.2 Environments; 2.3 Randomness; 2.4 Time
3. Using Agent-Based Models in Social Science Research: 3.1 An Example of Developing an Agent-Based Model; 3.2 Verification: Getting Rid of the Bugs; 3.3 Validation; 3.4 Techniques for Validation; 3.5 Summary
4. Designing and Developing Agent-Based Models: 4.1 Modeling Toolkits, Libraries, Languages, Frameworks, and Environments; 4.2 Using NetLogo to Build Models; 4.3 Building the Collectivities Model Step by Step; 4.4 Planning an Agent-Based Model Project; 4.5 Reporting Agent-Based Model Research; 4.6 Summary
5. Advances in Agent-Based Modeling: 5.1 Geographical Information Systems; 5.2 Learning; 5.3 Simulating Language
Resources; Glossary; References; Index; About the Author
473 citations
Journal Article
[...]
TL;DR: The DBLP Computer Science Bibliography of the University of Trier has grown from a small, specialized collection of bibliographic information into a major part of the infrastructure for scientific communication used by thousands of computer scientists.
Abstract: Publications are essential for scientific communication. Access to publications is provided by conventional libraries, digital libraries operated by learned societies or commercial publishers, and a huge number of web sites maintained by the scientists themselves or their institutions. Comprehensive meta-indices for this increasing number of information sources are missing for most areas of science. The DBLP Computer Science Bibliography of the University of Trier has grown from a very specialized small collection of bibliographic information to a major part of the infrastructure used by thousands of computer scientists. This short paper first reports the history of DBLP and sketches the very simple software behind the service. The most time-consuming task for the maintainers of DBLP may be viewed as a special instance of the authority control problem: how to normalize different spellings of person names. The third section of the paper discusses some details of this problem, which might be an interesting research issue for the information retrieval community.
356 citations
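To illustrate the authority-control problem mentioned in the abstract, here is a small sketch of a name-matching heuristic: accent folding plus surname and initial comparison. This is an assumption for illustration only, not DBLP's actual procedure, and the second example shows the kind of false positive that makes the problem genuinely hard.

# Toy heuristic for deciding whether two spellings may denote the same person.
import unicodedata

def fold(name):
    """Lowercase and strip accents: 'Müller' -> 'muller'."""
    nfkd = unicodedata.normalize("NFKD", name.lower())
    return "".join(c for c in nfkd if not unicodedata.combining(c))

def same_person(a, b):
    """Match if surnames agree and given names agree up to initials."""
    fa, fb = fold(a).split(), fold(b).split()
    if fa[-1] != fb[-1]:                     # compare surnames
        return False
    return all(x[0] == y[0] for x, y in zip(fa[:-1], fb[:-1]))

print(same_person("Michael Müller", "M. Muller"))   # True
print(same_person("Michael Ley", "Moshe Ley"))      # True: the heuristic's limit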