Author

Sona Malhotra

Bio: Sona Malhotra is an academic researcher from Kurukshetra University. The author has contributed to research in topics including Steganography and Software quality, has an h-index of 5, and has co-authored 15 publications receiving 66 citations.

Papers
Proceedings ArticleDOI
28 Aug 2014
TL;DR: The proposed MOPSO approach is compared with other prioritization techniques, namely No Ordering, Reverse Ordering and Random Ordering, by calculating the Average Percentage of Faults Detected (APFD) for each technique; the results show that the proposed approach outperforms all of them.
Abstract: The goal of regression testing is to validate modified software. Due to resource and time constraints, it becomes necessary to minimize existing test suites by eliminating redundant test cases and prioritizing the rest. This paper proposes a 3-phase approach to test case prioritization. In the first phase, redundant test cases are removed by a simple matrix operation. In the second phase, test cases are selected from the test suite so that the selected set is the minimal one covering all faults at the minimum execution time; for this phase we use multi-objective particle swarm optimization (MOPSO), which optimizes fault coverage and execution time. In the third phase, we assign priorities to the test cases obtained from the second phase. Priority is the ratio of fault coverage to execution time: the higher the ratio, the higher the priority. Test cases not selected in phase 2 are appended to the test suite in sequential order. We have also performed an experimental analysis based on maximum fault coverage and minimum execution time. The proposed MOPSO approach is compared with other prioritization techniques, namely No Ordering, Reverse Ordering and Random Ordering, by calculating the Average Percentage of Faults Detected (APFD) for each technique, and the proposed approach outperforms all of them.
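As a rough illustration of the third phase, the Python sketch below (with hypothetical names and data layout, not taken from the paper) orders the phase-2 selection by the fault-coverage-to-execution-time ratio and appends the unselected test cases in their original order.

# Hypothetical sketch of the phase-3 ordering described above; names and
# data layout are illustrative, not the paper's.

def prioritize(selected, remaining, faults_covered, exec_time):
    """selected/remaining: lists of test-case ids;
    faults_covered[t]: number of faults test t covers;
    exec_time[t]: execution time of test t."""
    ranked = sorted(
        selected,
        key=lambda t: faults_covered[t] / exec_time[t],
        reverse=True,  # higher ratio -> higher priority
    )
    return ranked + list(remaining)  # unselected tests appended in order

suite = prioritize(
    selected=["t3", "t1", "t5"],
    remaining=["t2", "t4"],
    faults_covered={"t1": 4, "t3": 6, "t5": 2},
    exec_time={"t1": 2.0, "t3": 1.5, "t5": 2.5},
)
print(suite)  # ['t3', 't1', 't5', 't2', 't4']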

27 citations

Journal ArticleDOI
TL;DR: This paper presents an approach to prioritize regression test cases based on three factors, namely rate of fault detection, percentage of faults detected, and risk detection ability; the proposed approach outperforms all compared approaches.
Abstract: The main aim of regression testing is to test modified software during the maintenance phase. It is an expensive activity, and it assures that modifications made to the software are correct. The simplest strategy for regression testing is to re-run all test cases in a test suite, but due to resource and time limitations this is inefficient in practice. It is therefore necessary to find techniques that increase the effectiveness of regression testing by arranging the test cases of a test suite according to some objective criteria. Test case prioritization aims to arrange test cases so that higher-priority test cases execute earlier than lower-priority ones according to some performance criterion. This paper presents an approach to prioritize regression test cases based on three factors: rate of fault detection [6], percentage of faults detected, and risk detection ability. The proposed approach is compared with different prioritization techniques, namely no prioritization, reverse prioritization and random prioritization, and also with the previous work of Kavitha et al. [6], using the APFD (Average Percentage of Faults Detected) metric. The results show that the proposed approach outperforms all the approaches mentioned above.
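Both papers above evaluate orderings with the standard APFD metric: APFD = 1 - (TF1 + ... + TFm)/(n*m) + 1/(2n), where TFi is the 1-based position of the first test case exposing fault i, n is the number of test cases and m the number of faults. A minimal Python sketch follows; the fault-to-tests mapping is our own illustrative data.

# Minimal APFD computation; assumes every fault is exposed by at least
# one test in the ordering.

def apfd(ordering, detects):
    """ordering: list of test ids in prioritized order;
    detects: dict mapping fault id -> set of test ids exposing it."""
    n, m = len(ordering), len(detects)
    position = {t: i + 1 for i, t in enumerate(ordering)}  # 1-based
    tf_sum = sum(
        min(position[t] for t in tests if t in position)
        for tests in detects.values()
    )
    return 1 - tf_sum / (n * m) + 1 / (2 * n)

faults = {"f1": {"t2", "t3"}, "f2": {"t1"}, "f3": {"t3"}}
print(apfd(["t3", "t1", "t2"], faults))  # earlier detection -> higher APFD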

13 citations

Proceedings ArticleDOI
17 Apr 2014
TL;DR: The technique resolves various problems inherent in traditional substitution techniques and improves data hiding capacity while remaining robust to both intentional and unintentional attacks.
Abstract: In this paper, a robust substitution technique to implement audio steganography is proposed. The technique resolves various problems inherent in traditional substitution techniques. It improves data hiding capacity while remaining robust to both intentional and unintentional attacks.
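The abstract does not spell out the substitution scheme, so the following is only a baseline sketch of classic LSB substitution on PCM audio samples, the kind of traditional technique the paper claims to improve on; payload framing, key-based sample selection and the robustness measures are omitted.

# Classic LSB substitution over integer PCM samples (illustrative baseline,
# not the paper's method).

def embed_lsb(samples, bits):
    """Replace the least significant bit of each sample with a payload bit."""
    out = list(samples)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set it to the bit
    return out

def extract_lsb(samples, n_bits):
    return [s & 1 for s in samples[:n_bits]]

cover = [1024, -2048, 513, 700]       # illustrative 16-bit PCM samples
stego = embed_lsb(cover, [1, 0, 1, 1])
print(extract_lsb(stego, 4))          # [1, 0, 1, 1]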

9 citations

Journal Article
TL;DR: It is suggested that for a strategic solution, the hub and spoke / centralised architecture is the more likely choice.
Abstract: Data warehousing is a collection of decision support technologies aimed at enabling the knowledge worker to make better and faster decisions. A data warehouse is a subject-oriented, integrated, time-varying, non-volatile collection of data that is used primarily in organizational decision making. A data warehouse supports on-line analytical processing, the functional and performance requirements of which are quite different from those of the on-line transaction processing applications traditionally supported by operational databases. In this paper, the author suggests that for a strategic solution, the hub and spoke / centralised architecture is the more likely choice.

9 citations

Journal Article
TL;DR: In this article, the authors provide a review of research related to Vehicular Ad Hoc Networks and propose solutions for related issues and challenges, including network architecture, VANET protocols, routing algorithms, and security.
Abstract: A Vehicular Ad Hoc Network (VANET) is a special kind of wireless ad hoc network, characterized by high node mobility and fast topology changes. Vehicular networks can provide a wide variety of services, ranging from safety-related warning systems to improved navigation mechanisms as well as information and entertainment applications. These features make routing and other services more challenging and introduce vulnerabilities in network services. The problems include network architecture, VANET protocols, routing algorithms, and security issues. In this paper, we provide a review of research related to Vehicular Ad Hoc Networks and propose solutions for related issues and challenges.

8 citations


Cited by
Book
01 Jan 1975
TL;DR: The major change in the second edition of this book is the addition of a new chapter on probabilistic retrieval, which I think is one of the most interesting and active areas of research in information retrieval.
Abstract: The major change in the second edition of this book is the addition of a new chapter on probabilistic retrieval. This chapter has been included because I think this is one of the most interesting and active areas of research in information retrieval. There are still many problems to be solved, so I hope that this particular chapter will be of some help to those who want to advance the state of knowledge in this area. All the other chapters have been updated by including some of the more recent work on the topics covered. In preparing this new edition I have benefited from discussions with Bruce Croft. The material of this book is aimed at advanced undergraduate information (or computer) science students, postgraduate library science students, and research workers in the field of IR. Some of the chapters, particularly Chapter 6, make simple use of a little advanced mathematics. However, the necessary mathematical tools can be easily mastered from numerous mathematical texts that now exist and, in any case, references have been given where the mathematics occur. I had to face the problem of balancing clarity of exposition with density of references. I was tempted to give large numbers of references but was afraid they would have destroyed the continuity of the text. I have tried to steer a middle course and not compete with the Annual Review of Information Science and Technology. Normally one is encouraged to cite only works that have been published in some readily accessible form, such as a book or periodical. Unfortunately, much of the interesting work in IR is contained in technical reports and Ph.D. theses. For example, most of the work done on the SMART system at Cornell is available only in reports. Luckily many of these are now available through the National Technical Information Service (U.S.) and University Microfilms (U.K.). I have not avoided using these sources although if the same material is accessible more readily in some other form I have given it preference. I should like to acknowledge my considerable debt to many people and institutions that have helped me. Let me say first that they are responsible for many of the ideas in this book but that only I wish to be held responsible. My greatest debt is to Karen Sparck Jones who taught me to research information retrieval as an experimental science. Nick Jardine and Robin …

822 citations

Journal ArticleDOI
01 Jan 2015
TL;DR: A geometry-based sparse coverage protocol GeoCover is proposed, which aims to consider the geometrical attributes of road networks, movement patterns of vehicles and resource limitations, and provides a buffering operation to suit different types of road topology.
Abstract: Vehicular ad hoc networks have emerged as a promising research area. Designing a realistic coverage protocol for RSU deployment in vehicular networks presents a challenge due to different service areas, assorted mobility patterns, and resource constraints. In order to resolve these problems, this paper proposes a geometry-based sparse coverage protocol GeoCover, which aims to consider the geometrical attributes of road networks, movement patterns of vehicles and resource limitations. By taking the dimensions of road segments into account, GeoCover provides a buffering operation to suit different types of road topology. By discovering hotspots from trace files, GeoCover is able to depict the mobility patterns and to discover the most valuable road area to be covered. To solve the resource-constrained coverage problem, we provide two variants of sparse coverage which take into consideration budget constraints and quality constraints, respectively. The coverage problem is resolved by both genetic algorithm and greedy algorithm. The simulation results verify that our coverage protocol is reliable and scalable for urban vehicular networks.
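The abstract says the resource-constrained coverage problem is solved by both a genetic and a greedy algorithm. Below is a hedged Python sketch of a standard greedy pass for the budget-constrained variant, picking the candidate RSU site with the best newly-covered-area-per-cost ratio until the budget is exhausted; site names, costs and coverage sets are illustrative, not GeoCover's actual data model.

# Greedy budget-constrained coverage (standard technique, illustrative data).

def greedy_coverage(sites, cost, covers, budget):
    """sites: candidate RSU locations; covers[s]: set of road segments
    site s covers; cost[s]: deployment cost of s."""
    chosen, covered, spent = [], set(), 0.0
    while True:
        best, best_ratio = None, 0.0
        for s in sites:
            if s in chosen or spent + cost[s] > budget:
                continue
            gain = len(covers[s] - covered)  # newly covered segments only
            if gain / cost[s] > best_ratio:
                best, best_ratio = s, gain / cost[s]
        if best is None:
            return chosen, covered
        chosen.append(best)
        covered |= covers[best]
        spent += cost[best]

sites = ["a", "b", "c"]
cost = {"a": 2.0, "b": 1.0, "c": 1.5}
covers = {"a": {1, 2, 3}, "b": {3, 4}, "c": {1, 4, 5}}
print(greedy_coverage(sites, cost, covers, budget=3.0))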

37 citations

Proceedings ArticleDOI
03 Nov 2013
TL;DR: A geometry-based coverage strategy to handle the deployment problem over urban scenarios is proposed, taking the shape and area of road segments into account, which suits different kinds of road topology and effectively solves the maximum coverage problem.
Abstract: Vehicular ad hoc networks have emerged as a promising field in wireless networking research. Unlike traditional wireless sensor networks, vehicular networks demand more consideration due to their assorted road topology, the high mobility of vehicles and the irregularly placed feasible regions of deployment. As one of the most complex issues in vehicular networks, coverage strategy has been researched extensively, especially in complex urban scenarios. However, most existing coverage approaches are based on an ideal traffic map consisting of straight lines and nodes. These simplifications misrepresent real road networks. In order to provide more realistic vehicular network deployment, this paper proposes a geometry-based coverage strategy to handle the deployment problem in urban scenarios. By taking the shape and area of road segments into account, our scheme suits different kinds of road topology and effectively solves the maximum coverage problem. To evaluate its effectiveness, we compare this coverage strategy with the α-coverage algorithm. The simulation results verify that the geometry-based coverage strategy achieves a higher coverage ratio and a lower drop rate than α-coverage under the same constraints. The results also show that deploying Road Side Units (RSUs) in regions with high traffic flow covers the majority of communication, so that fewer RSUs can provide better communication performance.

34 citations

Journal ArticleDOI
TL;DR: The Test Case Prioritization (TCP) technique using the Firefly Algorithm with a similarity distance model performed as well as or better than existing works in terms of APFD and execution time, indicating that the Firefly Algorithm is a promising competitor for TCP applications.
Abstract: Software testing is a vital and complex part of the software development life cycle. Optimization of software testing remains a major challenge, as prioritization of test cases is still unsatisfactory in terms of Average Percentage of Faults Detected (APFD) and execution time; this is attributed to the large search space over orderings of test cases. In this paper, we propose an approach to prioritize test cases optimally using the Firefly Algorithm. To optimize the ordering of test cases, we applied the Firefly Algorithm with a fitness function defined using a similarity distance model. Experiments were carried out on three benchmark programs with test suites extracted from the Software-artifact Infrastructure Repository (SIR). Our Test Case Prioritization (TCP) technique using the Firefly Algorithm with the similarity distance model performed as well as or better than existing works in terms of APFD and execution time. Overall, the APFD results indicate that the Firefly Algorithm is a promising competitor for TCP applications.
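The exact similarity distance model is not given in the abstract; a minimal sketch, assuming a Jaccard-style distance between test cases' coverage sets, might look like the following. An ordering scores higher when consecutive tests are dissimilar, so diverse tests run early; this illustrates the kind of fitness a permutation-based Firefly Algorithm would maximize, not the paper's exact model.

# Hypothetical similarity-distance fitness for an ordering of test cases.

def distance(a, b):
    """Jaccard distance between two coverage sets (1 = fully dissimilar)."""
    union = a | b
    return 1.0 if not union else 1.0 - len(a & b) / len(union)

def fitness(ordering, coverage):
    """Sum of distances between consecutive tests; higher = more diverse."""
    return sum(
        distance(coverage[ordering[i]], coverage[ordering[i + 1]])
        for i in range(len(ordering) - 1)
    )

coverage = {"t1": {1, 2}, "t2": {2, 3}, "t3": {4}}
print(fitness(["t1", "t3", "t2"], coverage))  # diverse ordering scores higher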

30 citations

Proceedings Article
20 Nov 2012
TL;DR: The principal objective of the study was to investigate empirically data warehouse (DW) architectures; more specifically, the types of architectures and a number of factors believed to influence their selection were explored.
Abstract: In this abridged version, which summarizes the authors' study, we present some of our findings for consideration. The principal objective of the study was to investigate empirically data warehouse (DW) architectures; more specifically, the types of architectures and a number of factors believed to influence their selection were explored. A questionnaire survey targeting information systems managers was used to collect data from 150 Polish companies about the respondents' firms, the architecture they use, and the factors that influence its selection. The findings of the study give practical insights into the DW field in Poland.

22 citations