
Showing papers by "S. M. K. Quadri published in 2012"


Dissertation
01 Jan 2012
TL;DR: To establish a useful theory of testing, existing and novel testing techniques need to be evaluated not only for effectiveness and efficiency but also for their ability to enhance software reliability.
Abstract: We have a great number of software testing techniques at our disposal for testing a software product. Although the utilization of these techniques is growing, our quantitative and qualitative knowledge of their relative merits is still very inadequate. The choice of a software testing technique influences both process and product quality, so it is imperative to find a testing technique which is effective as well as efficient. However, it is not sufficient to compare testing techniques only on their fault-detecting ability; they should also be evaluated to check which among them enhances reliability most. To establish a useful theory of testing, we need to evaluate existing and novel testing techniques not only for effectiveness and efficiency but also for their ability to enhance software reliability.

25 citations


Proceedings ArticleDOI
01 Jun 2012
TL;DR: This paper presents an in-depth analysis of all metrics, models and measurements used in software reliability, and concludes that no metric or model can be used in all situations.
Abstract: Reliability is always important in all systems, but sometimes it is more important than other quality attributes. The software reliability engineering (SRE) approach focuses on comprehensive techniques for developing reliable software and for properly assessing and improving reliability. Reliability metrics, models and measurements form an essential part of the software reliability engineering process. We should apply appropriate metrics, models and measurement techniques in SRE to produce reliable software, as no single metric or model can be used in all situations. We should therefore have profound knowledge of the metrics, models and measurement process before applying them in SRE. In this paper, we present an in-depth analysis of the metrics, models and measurements used in software reliability.
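As a concrete illustration of the kind of model such a survey covers, the sketch below evaluates the classic Goel-Okumoto software reliability growth model; this model is chosen here only as a well-known example and is not singled out by the paper.

```python
# Illustrative example of one classic software reliability growth model, the
# Goel-Okumoto NHPP model: mu(t) = a * (1 - exp(-b * t)), where a is the expected
# total number of faults and b is the per-fault detection rate. The paper surveys
# many models; this one is shown only as a concrete instance.

import math

def expected_failures(t, a, b):
    """Expected cumulative failures observed by test time t."""
    return a * (1.0 - math.exp(-b * t))

def failure_intensity(t, a, b):
    """Instantaneous failure rate at time t (derivative of the mean value function)."""
    return a * b * math.exp(-b * t)

# With an estimated 100 total faults and a detection rate of 0.05 per hour:
print(expected_failures(40, a=100, b=0.05))   # ~86.5 failures expected by t = 40
print(failure_intensity(40, a=100, b=0.05))   # ~0.68 failures/hour at t = 40
```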

21 citations


Journal ArticleDOI
TL;DR: A framework for security evaluation at the design and architectural phase of system development is presented for component-based software design, and security metrics are derived for the three main pillars of security, confidentiality, integrity and availability, based on component composition, dependency and inter-component data/information flow.
Abstract: Evaluating the security of software systems is a complex problem for the research community due to the multifaceted and complex operational environments of the systems involved. Many efforts toward secure system development methodologies, such as Microsoft's secSDLC, have been made, but attempts to define a measurement scale on which security can be measured have met with little success. With the shift in the nature of software development from standalone applications to distributed environments, where a number of potential adversaries and threats are present, security has to be outlined and incorporated at the architectural level of the system, and so there is a need to evaluate and measure the level of security achieved. In this paper we present a framework for security evaluation at the design and architectural phase of system development. We have outlined the security objectives based on the security requirements of the system and analyzed the behavior of various software architecture styles. Component-based development (CBD) is an important and widely used model for developing new large-scale software because of benefits such as increased reuse, reduced time to market and lower cost. Our emphasis is therefore on CBD: we propose a framework for the security evaluation of component-based software designs and derive security metrics for the three main pillars of security, confidentiality, integrity and availability, based on component composition, dependency and inter-component data/information flow. The proposed framework and derived metrics are flexible, in that the system developer can adapt the metrics to the situation, and they are applicable both during the development phases and after development.
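The abstract does not give the metric formulas themselves; the following is a minimal sketch, assuming a simple flow-weighted average over inter-component data flows, of how design-level scores for the three pillars might be aggregated. The component names, weights and ratings are illustrative assumptions, not the authors' actual metrics.

```python
# Illustrative sketch only: each inter-component data flow is assumed to carry a
# rating (0.0-1.0) for how well it protects each security pillar, and the design-level
# score per pillar is the flow-weighted average of those ratings.

from dataclasses import dataclass

@dataclass
class Flow:
    source: str             # producing component
    target: str             # consuming component
    weight: float           # relative importance of this data flow
    confidentiality: float  # 0.0 (unprotected) .. 1.0 (fully protected)
    integrity: float
    availability: float

def design_security_scores(flows):
    """Aggregate per-pillar scores across all inter-component flows."""
    total = sum(f.weight for f in flows)
    return {
        "confidentiality": sum(f.weight * f.confidentiality for f in flows) / total,
        "integrity": sum(f.weight * f.integrity for f in flows) / total,
        "availability": sum(f.weight * f.availability for f in flows) / total,
    }

flows = [
    Flow("AuthService", "OrderService", weight=2.0,
         confidentiality=0.9, integrity=0.8, availability=0.7),
    Flow("OrderService", "ReportModule", weight=1.0,
         confidentiality=0.5, integrity=0.9, availability=0.9),
]
print(design_security_scores(flows))
```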

16 citations


Proceedings ArticleDOI
01 Jun 2012
TL;DR: Results indicate that restFS can avoid 28-98% of the block overwrites which existing overwriting techniques necessarily perform, and can reduce the number of write commands issued to disk by 88%.
Abstract: After deletion, data recovery is trivial and can be performed even by novice hackers. Secure deletion of data can be achieved by overwriting a file's metadata and user data during its deletion. We propose restFS, a reliable and efficient stackable file system, to provide the reliability and efficiency lacking in existing transparent per-file secure data deletion file system extensions. The design of restFS is compatible with any file system that exports a file's block allocation map to the VFS, and it is currently implemented for the EXT2 file system. Instead of overwriting at the file level, as existing techniques do, restFS overwrites at the block level for reliability and efficiency. We evaluated the efficiency of restFS using the Postmark benchmark; the results indicate that restFS can avoid 28–98% of the block overwrites which existing overwriting techniques necessarily perform. In addition, it can reduce the number of write commands issued to disk by 88%.
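restFS itself is a kernel-level stackable file system that works on the file's block allocation map; the sketch below only mimics the overwrite-before-delete idea from user space, overwriting a file block by block before unlinking it. The block size and file path are assumptions for illustration.

```python
# User-space illustration of overwrite-before-delete, not the restFS implementation:
# the file's contents are overwritten one block at a time, flushed to disk, and only
# then unlinked, so the data blocks cannot be trivially recovered afterwards.

import os

BLOCK_SIZE = 4096  # assumed block size; a real implementation would query the file system

def secure_delete(path):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for offset in range(0, size, BLOCK_SIZE):
            f.seek(offset)
            f.write(b"\x00" * min(BLOCK_SIZE, size - offset))  # overwrite one block
        f.flush()
        os.fsync(f.fileno())   # force the overwrites to reach the disk
    os.remove(path)            # unlink only after the data has been destroyed

# secure_delete("/tmp/secret.txt")
```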

14 citations


Journal Article
TL;DR: This paper discusses the importance of integrating BI with KM and provides a framework for doing so, with the aim of enriching knowledge with information that allows managers to make effective decisions and achieve organizational objectives.
Abstract: The rapid advancement in Information and Communication Technology is driving a revolutionary change in the way organizations do business. The fast-growing capabilities of both generating and collecting data have created an imperative need for new techniques and tools that can intelligently and automatically transform the processed data into valuable information and knowledge for effective decision making. Business intelligence (BI) plays an important role in extracting valuable information and discovering hidden patterns in internal as well as external sources of data. The main purpose of BI is to enrich knowledge with information that allows managers to make effective decisions to achieve organizational objectives. However, the majority of organizational knowledge is in unstructured form or in the minds of employees. On the other hand, Knowledge Management (KM) encompasses both tacit and explicit knowledge and enhances the organization's performance by providing collaborative tools to learn, create and share knowledge within the organization. Therefore, it is imperative for organizations to integrate BI with KM. The purpose of this paper is to discuss the importance of integrating BI with KM and to provide a framework for integrating the two. Keywords: Business Intelligence (BI), Knowledge Management (KM), Scorecard, Dashboard, ETL, Data Mining, OLAP, Tacit Knowledge, Explicit Knowledge

11 citations


Journal ArticleDOI
TL;DR: A data warehouse is a relational database that is designed for query and analysis rather than for transaction processing, and usually contains historical data derived from transaction data, but it can include data from other sources.
Abstract: A data warehouse is a relational database that is designed for query and analysis rather than for transaction processing. It usually contains historical data derived from transaction data, but it can include data from other sources. In addition to a relational database, a data warehouse environment includes an extraction, transportation, transformation and loading (ETL) solution, an online analytical processing (OLAP) engine, client analysis tools, and other applications that manage the process of gathering data and delivering it to business users [2][10]. This was the case at the University of Kashmir (UOK) Examination Department, where such a project brought together various attributes of the Examination System, including Conduct, Secrecy, Transit, Tabulation, Accounts and other related data sources, into an integrated data warehouse.
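The abstract mentions an ETL solution feeding the warehouse. The following is a minimal sketch of an extract-transform-load step, assuming a CSV source export and a SQLite target table; the file names, columns and engine are illustrative assumptions and not the system described in the paper.

```python
# Minimal ETL sketch. The source file, table layout and target engine (SQLite here)
# are illustrative assumptions, not the UOK examination warehouse itself.

import csv
import sqlite3

def extract(path):
    """Read rows from a source export file."""
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

def transform(row):
    """Normalize field names and types before loading."""
    return (row["roll_no"].strip(), row["subject"].strip().upper(), int(row["marks"]))

def load(rows, db_path="warehouse.db"):
    """Append the transformed rows to the warehouse fact table."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS results (roll_no TEXT, subject TEXT, marks INTEGER)")
    con.executemany("INSERT INTO results VALUES (?, ?, ?)", rows)
    con.commit()
    con.close()

# load(transform(r) for r in extract("tabulation_export.csv"))
```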

9 citations


Journal ArticleDOI
TL;DR: This approach allows us to create a new text file and delete the existing one, modify the wrapper, make modifications later and manage data retrieval in a simple unified style, and it is flexible enough to incorporate a variety of data models and query capabilities over various protocols.
Abstract: Information retrieval from heterogeneous information systems is necessary but challenging, as data is stored and represented in different data models in different information systems. Integrating information from heterogeneous data sources into a single data source faces the major challenge of information transformation: data arrives in different formats, and the constraints applied during transformation make data integration for the purpose of integrating information systems costly. This paper introduces the idea of information integration, based on search criteria, from heterogeneous data sources into a single data source. Every element of an information source, such as an entity, field or relation, is mapped to a component of a new single text source that is created every time the heterogeneous information systems are searched, and the result is saved into a new text file. This approach allows us to create a new text file and delete the existing one, modify the wrapper, make later modifications and manage data retrieval in a simple unified style. The architecture is flexible enough to incorporate a variety of data models and query capabilities over various protocols. It is possible to select logically related information from all available legacy data sources.
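A minimal sketch of the "single text source" idea described above: results matching a search term are pulled from two differently structured sources and written, field by field, into one newly created text file. The source layouts, field names and output format are assumptions for illustration only.

```python
# Illustrative sketch: each search recreates a unified text file whose lines are built
# from the fields of matching records across heterogeneous sources.

import sqlite3

def search_sources(term, out_path="search_result.txt"):
    hits = []

    # Source 1: an in-memory record store (stand-in for a legacy system).
    legacy = [{"id": 1, "name": "Quadri", "dept": "CS"}]
    hits += [rec for rec in legacy if term.lower() in str(rec).lower()]

    # Source 2: a relational table.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE staff (id INTEGER, name TEXT, dept TEXT)")
    con.execute("INSERT INTO staff VALUES (2, 'Quadri', 'IT')")
    cur = con.execute("SELECT id, name, dept FROM staff WHERE name LIKE ?", (f"%{term}%",))
    hits += [dict(zip(("id", "name", "dept"), row)) for row in cur]

    # Recreate the single text source on every search.
    with open(out_path, "w") as out:
        for rec in hits:
            out.write("|".join(f"{k}={v}" for k, v in rec.items()) + "\n")

search_sources("Quadri")
```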

5 citations


30 May 2012
TL;DR: The research highlights the fact that kernel-mode file system development is difficult, bug-prone, time-consuming and exhausting, even with the source code at one's disposal, and highlights the existence of a user-mode alternative which is easy, reliable and portable.
Abstract: Purpose: One of the most significant and attractive features of Open Source Software (OSS), other than its cost, is its open source code. It is available in both flavours, system and application, and can be customized and ported as per the requirements of the end user. As most system software runs in the kernel mode of the operating system and system programmers constitute only a small fraction of programmers, code customization of open source system software is rarely realized in practice. In this paper, the authors present file system development as a case of kernel-mode system software development and argue that customizing the open source code available for file systems is not preferred. To support the argument, the authors discuss the various challenges that a developer faces in this process. Furthermore, the authors look into user-mode file system development as a possible solution and discuss the architecture, advantages and limitations of the most popular and widely used framework, File system in User-Space (FUSE). Finally, the authors conclude that the user-mode alternative for file system development and/or extension supersedes kernel-mode development. Design/Methodology/Approach: The broad domain, complexity, irregularity and limitations of the kernel development environment are used as the basis of the argument. Moreover, the existence of rich and capable user-mode file system development frameworks is used to supplement it. Findings: The research highlights the fact that kernel-mode file system development is difficult, bug-prone, time-consuming and exhausting, even with the source code at one's disposal. Furthermore, it highlights the existence of a user-mode alternative which is easy, reliable and portable. Research Implications: The research considers file system development as a case of kernel-mode development. Fortunately, in this case, there is a choice of user-mode alternatives; however, the argument cannot be generalised to those kernel modules for which there is no user-mode alternative. Furthermore, the authors did not take into consideration the benefits of extending file systems in kernel mode. Originality/Value: The research stresses that having open source code is not enough to make a choice when we cannot use it in a reliable and productive manner. Keywords: Open Source Software (OSS); Open Source System Software; Source Code; File System; Kernel Mode; User Mode; File system in User-Space (FUSE). Paper Type: Argumentative
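To give a sense of how small a user-mode file system can be, the sketch below is a minimal read-only "hello world" file system written against the third-party Python binding fusepy (an assumption for illustration; the paper discusses FUSE in general and does not prescribe a language or binding).

```python
# Minimal read-only FUSE file system using the third-party fusepy binding
# (pip install fusepy). Only a sketch of user-mode development, not the paper's code.

import errno
import stat
import sys
from fuse import FUSE, FuseOSError, Operations  # provided by fusepy

CONTENT = b"hello from user space\n"

class HelloFS(Operations):
    def getattr(self, path, fh=None):
        if path == "/":
            return {"st_mode": stat.S_IFDIR | 0o755, "st_nlink": 2}
        if path == "/hello":
            return {"st_mode": stat.S_IFREG | 0o444, "st_nlink": 1,
                    "st_size": len(CONTENT)}
        raise FuseOSError(errno.ENOENT)

    def readdir(self, path, fh):
        return [".", "..", "hello"]

    def read(self, path, size, offset, fh):
        return CONTENT[offset:offset + size]

if __name__ == "__main__":
    # Mount with: python hellofs.py /mnt/hello
    FUSE(HelloFS(), sys.argv[1], foreground=True)
```

The entire file system runs as an ordinary process: a crash only kills the process, not the kernel, which is the reliability advantage the paper's argument rests on.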

4 citations


Journal ArticleDOI
TL;DR: This paper introduces the GENERIC SEARCH PRINCIPLE: a solution making use of a knowledge base, wherein users of the organization, irrespective of their technical ability, data source knowledge and location, can search heterogeneous data sources, including the organization's legacy data sources, and retrieve information, also taking into consideration user attributes such as location, work profile and designation so as to make the search more relevant and the results more precise.
Abstract: Data retrieval is still a pervasive challenge in applications that need to query across multiple autonomous and heterogeneous data sources. There is a decent amount of standardization as far as the World-Wide Web is concerned, and Google is the universal access tool for searching and locating the sources of information a user requires. However, there is still no such tool that can be implemented at the enterprise level, where there is a multitude of data sources and organization users still face difficulty in accessing data that is available on the organization's intranet rather than on the WWW. To access such data, users within the organization need to know a great deal, including the data's location and access techniques, while data consistency and redundancy remain beyond the scope of common organizational users. This paper introduces the GENERIC SEARCH PRINCIPLE: a solution making use of a knowledge base, wherein users of the organization, irrespective of their technical ability, data source knowledge and location, can search heterogeneous data sources, including the organization's legacy data sources, and retrieve information. The solution also takes into consideration user attributes such as location, work profile and designation, so as to make the search more relevant and the results more precise.
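A minimal sketch of the knowledge-base lookup the abstract describes: a registry records where each kind of data lives and how to query it, so the user never needs to know the source, and user attributes can be used to filter or rank the results. All source names and handlers below are assumptions, not the paper's implementation.

```python
# Illustrative knowledge-base-driven search dispatcher: a registry maps topics to data
# sources and their access functions so a user can search without knowing the source.

def search_hr(term):        # stand-in for a legacy HR database query
    return [f"HR record matching '{term}'"]

def search_intranet(term):  # stand-in for an intranet document index
    return [f"intranet page mentioning '{term}'"]

KNOWLEDGE_BASE = {
    "employee": {"handler": search_hr, "location": "hr-db.internal"},
    "policy":   {"handler": search_intranet, "location": "docs.internal"},
}

def generic_search(term, topic, user_profile):
    source = KNOWLEDGE_BASE[topic]
    results = source["handler"](term)
    # User attributes (location, work profile, designation) could filter or rank results.
    if user_profile.get("role") == "clerk":
        results = results[:10]  # e.g. cap results for non-specialist users
    return results

print(generic_search("leave rules", "policy", {"location": "Srinagar", "role": "clerk"}))
```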

3 citations


Journal Article
TL;DR: Methods of data transformation at the application level, without the need to modify the underlying structure, are introduced so that stored data can be presented to the user in his/her desired format.
Abstract: With the advent of computerization, the primary goal of organizations across the globe was the automation of their systems. This resulted in a massive collection of data reflecting each organization's business logic and processes, with little thought given to the integration of applications and data. What was once a blessing became a huge problem: data all over the organization became difficult to manage, and inconsistency of data led to the creation of teams meant not for development but for data management. Many organizations have started reinvesting in data management in the form of data warehouses, yet organizations across the globe are still not stressing user needs and demands, focusing only on the integration of heterogeneous data sources with the goal of making data centralized and consistent by creating a warehouse. The 21st-century user not only needs data but needs refined and cleaned data, presented in his/her desired format. While data integration is paramount, the need of the hour is to store data in such a flexible manner that the user can be provided information in his/her desired format. In this paper we introduce methods of data transformation at the application level, without the need to modify the underlying structure.
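A minimal sketch of application-level transformation, assuming the stored rows stay untouched while an adapter renders them in whatever format the requesting user prefers; the formats and field names are illustrative assumptions.

```python
# Sketch of application-level transformation: the stored rows never change, but a
# presentation adapter renders them in the user's desired format on request.

import csv
import io
import json

ROWS = [{"roll_no": "101", "subject": "MATHS", "marks": 87}]  # data as stored

def present(rows, fmt="json"):
    if fmt == "json":
        return json.dumps(rows, indent=2)
    if fmt == "csv":
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)
        return buf.getvalue()
    raise ValueError(f"unsupported format: {fmt}")

print(present(ROWS, "csv"))   # one user prefers CSV
print(present(ROWS, "json"))  # another prefers JSON; the underlying data is unchanged
```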

3 citations


01 Jan 2012
TL;DR: A novel experiment is presented which compares three defect detection techniques for reliability and preliminary results suggest that testing techniques differ in terms of their ability to reduce risk in the software.
Abstract: One of the major goals of software testing is to increase the reliability of the software. As pointed out by many studies, fault detection does not necessarily increase the reliability of the software when all failures are considered equivalent to one another. Accordingly, we need to evaluate software testing techniques to check which technique is more suitable and effective in terms of increasing confidence by reducing risk (by detecting and isolating the faults which affect reliability most). Here we present a novel experiment which compares three defect detection techniques for reliability. Preliminary results suggest that testing techniques differ in terms of their ability to reduce risk in the software.

Journal ArticleDOI
TL;DR: Some of the theoretical developments that have influenced research in NLP are described and automatic abstracting and information retrieval in natural language processing applications is discussed.
Abstract: Natural Language Processing (NLP) is the field of computer science concerned with interfacing computer representations of information with the natural languages used by humans. It examines the use of computers in understanding and manipulating natural language text and speech. The main aim of researchers in this field is to collect the necessary details about how natural languages are used and understood by humans, and to use these details to develop tools for making computers understand and manipulate natural languages to perform the desired tasks. In this paper we describe some of the theoretical developments that have influenced research in NLP. We also discuss automatic abstracting and information retrieval in natural language processing applications. We conclude with a discussion on Natural Language Interfaces, NLP software and future research in NLP.

30 May 2012
TL;DR: A mix of strengths and weaknesses makes it hard to pronounce open source the panacea; however, open source does have a very promising prospect, having spectacularly managed to carve out a "mainstream" role in just over a few decades.
Abstract: Purpose: This paper reviews open source software systems (OSSS) and open source software engineering with reference to their strengths, weaknesses and prospects. Though it is not possible to pronounce either of the two software engineering processes the better one, the paper outlines the areas where the open source methodology holds an edge over conventional closed source software engineering. The weaknesses which tilt the balance the other way are then also highlighted. Design/Methodology/Approach: The study is based on works carried out earlier by scholars regarding the potentialities and shortcomings of OSSS. Findings: The mix of strengths and weaknesses makes it hard to pronounce open source the panacea. However, open source does have a very promising prospect; owing to its radical approach to established software engineering principles, it has spectacularly managed to carve out a "mainstream" role, and that in just over a few decades. Keywords: Open Source Software (OSS); Open Source Development Paradigm; Software Engineering; Open Source Software Engineering.

01 Jan 2012
TL;DR: The design of a data warehouse for heterogeneous data sources is discussed, using a University Examination System as an example, along with the various challenges faced by the team in implementing it.
Abstract: Data warehousing systems enable enterprise managers to acquire and integrate information from heterogeneous sources and to query very large databases efficiently. Building a data warehouse requires adopting design and implementation techniques completely different from those underlying information systems (1). The issue of information integration is a vital part of any data warehouse, and the design challenge increases with the variety of heterogeneous data sources present in the system. These data sources have varying conceptual models and semantic heterogeneity, which seems to be an unavoidable burden in data integration. In this paper we discuss the design of a data warehouse for heterogeneous data sources, using a University Examination System as an example. The paper also discusses the various challenges that the team faced in implementing it.