
Showing papers in "Information & Software Technology in 1998"


Journal ArticleDOI
TL;DR: An evaluation of six different methods for prioritizing software requirements for a telephony system found the analytic hierarchy process to be the most promising method, although it may be problematic to scale up.
Abstract: This article describes an evaluation of six different methods for prioritizing software requirements. Based on the quality requirements for a telephony system, the authors individually used all six methods on separate occasions to prioritize the requirements. The methods were then characterized according to a number of criteria from a user's perspective. We found the analytic hierarchy process to be the most promising method, although it may be problematic to scale up. In an industrial follow-up study we used the analytic hierarchy process to further investigate its applicability. We found that the process is demanding but worth the effort because of its ability to provide reliable results, promote knowledge transfer and create consensus among project members.
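
A rough sketch of the pairwise-comparison step at the heart of the analytic hierarchy process may help illustrate why the method is demanding yet systematic. The requirement names, judgements and the geometric-mean approximation of the priority vector below are illustrative assumptions, not data from the study.

```python
# Minimal sketch of AHP prioritization (geometric-mean approximation of the
# principal eigenvector). Requirement names and judgements are hypothetical.
import math

requirements = ["call forwarding", "low latency", "billing accuracy"]

# Pairwise comparison matrix: entry [i][j] says how much more important
# requirement i is than requirement j on Saaty's 1-9 scale.
comparisons = [
    [1.0, 3.0, 1.0 / 5.0],
    [1.0 / 3.0, 1.0, 1.0 / 7.0],
    [5.0, 7.0, 1.0],
]

# Geometric mean of each row, then normalise to obtain priority weights.
row_means = [math.prod(row) ** (1.0 / len(row)) for row in comparisons]
total = sum(row_means)
weights = [m / total for m in row_means]

for req, w in sorted(zip(requirements, weights), key=lambda p: -p[1]):
    print(f"{req}: {w:.2f}")
```

With n requirements the method needs n(n-1)/2 pairwise judgements, which is the scale-up concern the authors mention.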

462 citations


Journal ArticleDOI
TL;DR: How a number of program-analysis problems can be solved by transforming them to graph-reachability problems is described, and relationships between graph reachability and other approaches to program analysis are described.
Abstract: This paper describes how a number of program-analysis problems can be solved by transforming them to graph-reachability problems. Some of the program-analysis problems that are amenable to this treatment include program slicing, certain dataflow-analysis problems, one version of the problem of approximating the possible “shapes” that heap-allocated structures in a program can take on, and flow-insensitive points-to analysis. Relationships between graph reachability and other approaches to program analysis are described. Some techniques that go beyond pure graph reachability are also discussed.
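
As a hedged illustration of the reachability framing, the sketch below computes a backward slice as backward reachability over a small, hypothetical program dependence graph; the statements and dependences are invented, and the paper itself covers a much wider range of analyses.

```python
# Sketch: a backward slice as backward reachability over a program dependence
# graph. The graph below is a hypothetical example; edges point from a
# statement to the statements it depends on (data or control dependences).
from collections import deque

dependences = {
    "s4: print(total)": ["s3: total = total + x", "s1: total = 0"],
    "s3: total = total + x": ["s1: total = 0", "s2: x = read()"],
    "s5: print(count)": ["s0: count = 0"],
}

def backward_slice(criterion):
    """All statements reachable from the criterion along dependence edges."""
    seen, work = {criterion}, deque([criterion])
    while work:
        node = work.popleft()
        for dep in dependences.get(node, []):
            if dep not in seen:
                seen.add(dep)
                work.append(dep)
    return seen

print(backward_slice("s4: print(total)"))  # excludes the count-related statements
```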

297 citations


Journal ArticleDOI
TL;DR: It is shown how slices deriving from other statement-deletion-based slicing models can be defined as conditioned slices, which is used to formally define a partial ordering relation between slicing models and to build a classification framework.
Abstract: Slicing is a technique to decompose programs based on the analysis of the control and data flow. In Weiser's original definition, a slice consists of any subset of program statements preserving the behaviour of the original program with respect to a program point and a subset of the program variables (slicing criterion), for any execution path. We present conditioned slicing, a general slicing model based on statement deletion. A conditioned slice consists of a subset of program statements which preserves the behaviour of the original program with respect to a slicing criterion for a given set of execution paths. The set of initial states of the program that characterise these paths is specified in the form of a first-order logic formula on the input variables. We also show how slices deriving from other statement-deletion-based slicing models can be defined as conditioned slices. This is used to formally define a partial ordering relation between slicing models and to build a classification framework.
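
The hypothetical fragment below illustrates the idea (it is an illustration of the model, not an example from the paper): under the first-order condition x > 0 on the input, statements that can only execute for other inputs may be deleted while the behaviour at the slicing criterion is preserved.

```python
# Illustration of conditioned slicing on a hypothetical function.
def classify(x):
    if x > 0:
        y = x * 2        # relevant when the condition x > 0 holds
    else:
        y = -x           # only reachable when x <= 0
    return y

# One possible conditioned slice with respect to (return statement, {y}) under
# the first-order condition "x > 0": the else-branch can never execute for the
# characterised inputs, so statement deletion leaves:
def classify_conditioned(x):
    y = x * 2
    return y

print(classify(3), classify_conditioned(3))  # same behaviour for x > 0
```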

222 citations


Journal ArticleDOI
TL;DR: It is suggested that combining study results is unlikely to solve all the problems encountered in empirical software engineering studies, but some of the infrastructure and controls used by medical researchers to improve the quality of their empirical studies would be useful in the field of software engineering.
Abstract: In this paper we investigate the techniques used in medical research to combine results from independent empirical studies of a particular phenomenon: meta-analysis and vote-counting. We use an example to illustrate the benefits and limitations of each technique and to indicate the criteria that should be used to guide your choice of technique. Meta-analysis is appropriate for homogeneous studies when raw data or quantitative summary information, e.g. correlation coefficient, are available. It can also be used for heterogeneous studies where the cause of the heterogeneity is due to well-understood partitions in the subject population. In other circumstances, meta-analysis is usually invalid. Although intuitively appealing, vote-counting has a number of serious limitations and should usually be avoided. We suggest that combining study results is unlikely to solve all the problems encountered in empirical software engineering studies, but some of the infrastructure and controls used by medical researchers to improve the quality of their empirical studies would be useful in the field of software engineering.
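
As a hedged, worked illustration of the meta-analytic case the authors describe (homogeneous studies reporting correlation coefficients), the snippet below pools invented correlations with Fisher's z transformation; the figures are not data from the paper.

```python
# Hedged worked example: pooling correlation coefficients from homogeneous
# studies with Fisher's z transformation. The study data are invented.
import math

studies = [  # (correlation r, sample size n)
    (0.45, 30),
    (0.52, 25),
    (0.38, 40),
]

# Transform each r to z = arctanh(r); weight each study by n - 3.
weighted_z = sum((n - 3) * math.atanh(r) for r, n in studies)
total_weight = sum(n - 3 for _, n in studies)
pooled_r = math.tanh(weighted_z / total_weight)

print(f"pooled correlation ≈ {pooled_r:.2f}")
```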

170 citations


Journal ArticleDOI
TL;DR: A heuristic towards the optimization of test suite reduction is proposed; a constructed test suite contains redundancy when some of its proper subsets can still satisfy the same testing objective.
Abstract: A testing objective has to be defined when testing a program. A test suite is then constructed to satisfy the testing objective. The constructed test suite contains redundancy when some of its proper subsets can still satisfy the same testing objective. Since the cost of executing test cases and maintaining a test suite for regression testing may be high, the problem of test suite reduction arises. This paper proposes a heuristic towards the optimization of a test suite.
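
The sketch below shows a generic greedy reduction of the set-cover flavour such heuristics typically take; it is an illustration of the problem under invented coverage data, not the specific heuristic proposed in the paper.

```python
# Sketch of a greedy test-suite reduction (set-cover style). This is a generic
# greedy strategy in the spirit of such heuristics, not necessarily the exact
# heuristic proposed in the paper. The coverage data are hypothetical.
suite = {
    "t1": {"r1", "r2"},
    "t2": {"r2", "r3", "r4"},
    "t3": {"r1"},
    "t4": {"r4", "r5"},
}

def reduce_suite(suite):
    uncovered = set().union(*suite.values())
    selected = []
    while uncovered:
        # Pick the test case satisfying the most still-unsatisfied requirements.
        best = max(suite, key=lambda t: len(suite[t] & uncovered))
        selected.append(best)
        uncovered -= suite[best]
    return selected

print(reduce_suite(suite))  # e.g. ['t2', 't1', 't4'] covers r1..r5
```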

168 citations


Journal ArticleDOI
TL;DR: The goal of regression testing is to ensure that bug fixes and new functionality do not adversely affect the correct functionality inherited from the original program.
Abstract: Software maintainers are faced with the task of regression testing: retesting a program after a modification. The goal of regression testing is to ensure that bug fixes and new functionality do not adversely affect the correct functionality inherited from the original program. Regression testing often involves running a large program on a large number of test cases; thus, it can be expensive in terms of both human and machine time. Many approaches for reducing the cost of regression testing have been proposed. Those that make use of program slicing are surveyed.

144 citations


Journal ArticleDOI
TL;DR: There is an urgent need for further empirical work to study the impact of the inheritance mechanism using more subjects and in an industrial setting.
Abstract: For some years software engineering researchers have been advocating object-oriented (OO) methods as a powerful approach to overcome many of the difficulties associated with software development. A central concept within OO is the use of the inheritance mechanism, primarily with a view to enhancing opportunities for reuse. This paper reviews the evidence we have concerning the use and the impact of inheritance within OO software. A striking result is how little the mechanism is used in practice. The paper then goes on to describe an experiment conducted at Bournemouth University to investigate the impact of class inheritance upon the maintenance of C++ software. The experiment was a replication of an experiment previously conducted at the University of Strathclyde. Two versions of a bibliographic database program were used. One used inheritance and the other a flat structure. It was found that subjects took significantly longer to make the same change on the program with inheritance; however, their changes were more compact. Interestingly, the Strathclyde group found that the time to make changes for the inheritance version was reduced. We conclude that there is an urgent need for further empirical work to study the impact of the inheritance mechanism using more subjects and in an industrial setting.

107 citations


Journal ArticleDOI
TL;DR: This paper presents a transformation called tuck for restructuring programs by decomposing large functions into small functions: the statements in a wedge are separated from the rest of the code and folded into a new function.
Abstract: Changing the internal structure of a program without changing its behavior is called restructuring. This paper presents a transformation called tuck for restructuring programs by decomposing large functions into small functions. Tuck consists of three steps: Wedge, Split, and Fold. A wedge, a subset of statements in a slice, contains computations that are related and that may create a meaningful function. The statements in a wedge are split from the rest of the code and folded into a new function. A call to the new function is placed in the now restructured function. That tuck does not alter the behavior of the original function follows from the semantics-preserving properties of a slice.
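
A small, hypothetical before/after example may make the wedge-split-fold idea concrete; it mimics the effect of tuck on an invented function rather than reproducing the paper's transformation machinery.

```python
# Before: one large function mixing two computations (hypothetical example).
def report(values):
    total = 0
    for v in values:
        total += v
    mean = total / len(values)
    print("mean:", mean)
    print("count:", len(values))

# After "tucking" the wedge that computes the mean: the wedge's statements are
# split from the rest and folded into a new function, and a call replaces them.
def compute_mean(values):
    total = 0
    for v in values:
        total += v
    return total / len(values)

def report_restructured(values):
    print("mean:", compute_mean(values))
    print("count:", len(values))

report([1, 2, 3])
report_restructured([1, 2, 3])  # same observable behaviour
```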

99 citations


Journal ArticleDOI
TL;DR: This paper presents a classification of existing dynamic slicing methods, discusses the algorithms used to compute dynamic slices, and compares the existing methods of dynamic slice computation.
Abstract: A dynamic program slice is that part of a program that “affects” the computation of a variable of interest during program execution on a specific program input. Dynamic program slicing refers to a collection of program slicing methods that are based on program execution and may significantly reduce the size of a program slice because run-time information, collected during program execution, is used to compute program slices. Dynamic program slicing was originally proposed only for program debugging, but its application has been extended to program comprehension, software testing, and software maintenance. Different types of dynamic program slices, together with algorithms to compute them, have been proposed in the literature. In this paper we present a classification of existing dynamic slicing methods and discuss the algorithms to compute dynamic slices. In the second part of the paper, we compare the existing methods of dynamic slice computation.
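
The invented fragment below illustrates why run-time information can shrink a slice: for a concrete input only one branch executes, so the dynamic slice omits statements a static slice must keep.

```python
# Hypothetical example of why a dynamic slice can be smaller than a static one.
def f(a):
    if a > 0:      # s1
        x = 2      # s2
    else:
        x = 3      # s3
    return x       # slicing criterion: the value of x here

# A static slice on x at the return must keep both branches: {s1, s2, s3}.
# The dynamic slice for the execution with input a = 5 keeps only {s1, s2},
# because s3 never executed on this input.
print(f(5))
```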

87 citations


Journal ArticleDOI
TL;DR: A framework for business process modelling that provides guidance about appropriate approaches at different points in the modelling programme without prescribing particular notations is introduced, described in terms of three iterative and generic categories or phases: Capture, Analysis and Presentation.
Abstract: Business process modelling is an area of work that is increasingly used in conjunction with software development. For example, many development methods note the importance of strategic or business modelling, typically as a prerequisite to analysis. In addition, Systems Engineering for Business Process Change suggests the need to model the business process in maintaining and evolving existing (legacy) systems. In order to model business processes, one needs to consider what notations are most suitable, and what methods to adopt. However, the most appropriate notation typically depends on a number of contextual issues, the purpose of the modelling, the audience for the models and so on. Furthermore, this context changes with the progress of the modelling. Hence, the modeller needs guidance about appropriate approaches at different points in the modelling programme. This paper introduces a framework for business process modelling that provides such guidance without prescribing particular notations. This is achieved by describing business process modelling in terms of three iterative and generic categories or phases: Capture, Analysis and Presentation. The paper shows how different kinds of notational approaches can be used within these categories, discussing the choices available to the modeller. The CAP framework is generally applicable, and is illustrated both by a simple theoretical example, and by examples from industrial business process modelling.

75 citations


Journal ArticleDOI
TL;DR: There is surprisingly little research on the actual use of systems development methods, and the use that has been reported is quite conservative, with old methods from the 1970s still dominating.
Abstract: There is surprisingly little research on the actual use of systems development methods. The general finding of earlier studies is, however, that the use of methods is weak and, as far as they are used, the use is not literal. The use of methods has also been found to be quite conservative, with old methods from the 1970s still dominating. The present paper makes two contributions to this ongoing discussion on the use of systems development methods. Firstly, it raises a number of conceptual problems related to their use. These conceptual problems help to devise theoretically more justified research on method use. Secondly, the paper provides evidence that the repertoire of systems development methods may be undergoing a considerable change. Results of a recent survey of method use among CASE user organisations in Finland indicate that object-oriented methods have become the dominant systems development approach in this population. Even though the sample may be biased towards more innovative organisations, the results also show that organisations are not necessarily stuck in old practices of systems development, and give hope that object-oriented methods will gradually achieve wider acceptance. At the same time they indicate considerable differences in the effective use of object-oriented methods. Overall, the paper underlines the need for future research on the usage of systems development methods.

Journal ArticleDOI
TL;DR: The aim of the study is to provide application guidelines for choosing the most appropriate heuristic for test suite reduction, based on the results of a simulation study of four existing heuristics.
Abstract: In order to test a program, software testers must first define a testing objective. From this a test suite is then constructed. In fact, some subsets of the constructed test suite may still satisfy the same testing objective. Such subsets are referred to as representative sets of the test suite. Finding the optimal representative sets in the test suite reduction problem is known to be NP-complete. Different heuristics, including G, GE, GRE and H, have been proposed by different researchers. However, it has been demonstrated that none of these four heuristics is guaranteed to deliver smaller representative sets than the others. This paper presents the results of a simulation study of these four heuristics. The aim of the study is to provide application guidelines for choosing the most appropriate heuristic for test suite reduction.
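
The sketch below gives a flavour of such a simulation: it generates random coverage relations and counts how often a simple greedy heuristic misses the optimal representative set found by brute force. The setup, including the greedy stand-in, is an assumption for illustration and is not the experimental design or the G/GE/GRE/H heuristics of the paper.

```python
# Sketch of the kind of simulation such a study performs: generate random
# test-requirement coverage relations and compare the size of a heuristic's
# representative set against the optimum found by brute force.
import itertools, random

def random_suite(num_tests=8, num_reqs=6, p=0.4):
    suite = {f"t{i}": {f"r{j}" for j in range(num_reqs) if random.random() < p}
             for i in range(num_tests)}
    reqs = set().union(*suite.values())
    return suite, reqs

def greedy_size(suite, reqs):
    uncovered, size = set(reqs), 0
    while uncovered:
        best = max(suite, key=lambda t: len(suite[t] & uncovered))
        uncovered -= suite[best]
        size += 1
    return size

def optimal_size(suite, reqs):
    for k in range(1, len(suite) + 1):
        for combo in itertools.combinations(suite, k):
            if set().union(*(suite[t] for t in combo)) >= reqs:
                return k

random.seed(0)
worse = 0
for _ in range(200):
    suite, reqs = random_suite()
    if not reqs:
        continue
    if greedy_size(suite, reqs) > optimal_size(suite, reqs):
        worse += 1
print("runs where the greedy heuristic missed the optimum:", worse)
```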

Journal ArticleDOI
TL;DR: This paper first states why it is believed the conceptual space of software evaluation is so broad, then develops some basic principles that provide structure and boundaries to this conceptual space.
Abstract: The genesis of this paper lies in our observation that there are almost as many perspectives on the topic of software evaluation as there are evaluation techniques. Is this diversity an inherent characteristic of software evaluation itself, or is it reflective of the confusion found in an immature discipline? We believe that both are substantially true. In this paper we first state why we believe the conceptual space of software evaluation is so broad. We then develop some basic principles that provide structure and boundaries to this conceptual space. Although these principles apply to the evaluation of commercial-off-the-shelf software in particular, we believe they have relevance to the topic of software evaluation in general.

Journal ArticleDOI
TL;DR: The empirical results from this study indicate that the structural properties of the software that influence the fault content are established before the coding phase, and that both design and code metrics are correlated with the number of faults.
Abstract: Software metrics play an important role in measuring the quality of software. It is desirable to predict the quality of software as early as possible, and hence metrics have to be collected early as well. This raises a number of questions that have not been fully answered. In this paper we discuss prediction of fault content and try to answer what types of metrics should be collected, to what extent design metrics can be used for prediction, and to what degree prediction accuracy can be improved if code metrics are included. Based on a data set collected from a real project, we found that both design and code metrics are correlated with the number of faults. When the metrics are used to build prediction models of the number of faults, the design metrics are as good as the code metrics, and little improvement can be achieved if both design metrics and code metrics are used to model the relationship between the number of faults and the software metrics. The empirical results from this study indicate that the structural properties of the software that influence the fault content are established before the coding phase.
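
As a hedged illustration of the kind of model involved, the snippet below fits a one-variable least-squares model of fault counts against a design metric; the metric, the data points and the model form are invented and far simpler than the models built in the study.

```python
# Hedged sketch: fitting a simple least-squares model that predicts the number
# of faults in a module from one design metric. The data points are invented.
design_metric = [4, 7, 10, 13, 20]   # e.g. an invented inter-module coupling count
faults = [1, 2, 3, 3, 6]

n = len(design_metric)
mean_x = sum(design_metric) / n
mean_y = sum(faults) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(design_metric, faults)) \
        / sum((x - mean_x) ** 2 for x in design_metric)
intercept = mean_y - slope * mean_x

print(f"faults ≈ {intercept:.2f} + {slope:.2f} * design_metric")
```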

Journal ArticleDOI
TL;DR: This paper reviews the original slice-based cohesion measures defined to measure functional cohesion in the procedural paradigm as well as the derivative work aimed at measuring cohesion in other paradigms and situations.
Abstract: The basis for measuring many attributes in the physical world, such as size and mass, is fairly obvious when compared to the measurement of software attributes. Software has a very complex structure, and this makes it difficult to define meaningful measures that actually quantify attributes of interest. Program slices provide an abstraction that can be used to define important software attributes that can serve as a basis for measurement. We have successfully used program slices to define objective, meaningful, and valid measures of cohesion. Previously, cohesion was viewed as an attribute that could not be objectively measured; cohesion assessment relied on subjective evaluations. In this paper we review the original slice-based cohesion measures defined to measure functional cohesion in the procedural paradigm as well as the derivative work aimed at measuring cohesion in other paradigms and situations. By viewing software products at differing levels of abstraction or granularity, it is possible to define measures which are available at different points in the software life cycle and/or suitable for varying purposes.
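
The sketch below computes measures in the spirit of the slice-based cohesion family (tightness, coverage, overlap) from slices represented as statement sets; the slices are hypothetical and the exact definitions in the paper may differ in detail.

```python
# Hedged sketch of slice-based cohesion measures: slices are sets of statement
# numbers, one slice per output variable of the module. The slices below are
# hypothetical for a 10-statement module.
module_length = 10
slices = {
    "total":   {1, 2, 3, 4, 5, 9},
    "average": {1, 2, 3, 4, 5, 6, 9, 10},
}

intersection = set.intersection(*slices.values())

tightness = len(intersection) / module_length
coverage = sum(len(s) for s in slices.values()) / (len(slices) * module_length)
overlap = sum(len(intersection) / len(s) for s in slices.values()) / len(slices)

print(f"tightness={tightness:.2f} coverage={coverage:.2f} overlap={overlap:.2f}")
```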

Journal ArticleDOI
TL;DR: RolEnact is described: a process-modelling notation used to provide enactable models of process instances which can be understood both by process consultants and process users, whilst retaining the ability to generate enactable process scenarios.
Abstract: This paper describes RolEnact: a process-modelling notation used to provide enactable models of process instances. The paper shows how RolEnact models may be produced which are equivalent to role activity diagrams (RADs). This allows the modeller to describe processes in a notation (RADs); which can be understood both by process consultants and process users, whilst retaining the ability to generate enactable process scenarios.

Journal ArticleDOI
Christof Ebert1
TL;DR: This paper shows how requirement management for nonfunctional requirements is put into practice, thus integrating and exemplifying different key elements of requirement management, namely specification, conflict resolution, requirement tracing and progress tracking.
Abstract: Sound requirement management is the single most relevant technique to ensure that the need to provide customer-oriented solutions in a defined timeframe with limited budget can be matched with future safe evolution of system families. The last objective is especially closely related to nonfunctional requirements. We will show in this paper how requirement management for nonfunctional requirements is put into practice, thus integrating and exemplifying different key elements of requirement management, namely specification, conflict resolution, requirement tracing and progress tracking. Various practical examples are provided from the application of these techniques to development of telecommunication systems within Alcatel. The aspects of maintainability or reliability are naturally discussed as such systems are built on re-used components.

Journal ArticleDOI
TL;DR: It is shown how to combine program slicing and constraint solving in order to obtain better slice accuracy and reveal manipulations of the so-called calibration path in the VALSOFT slicing system.
Abstract: We show how to combine program slicing and constraint solving in order to obtain better slice accuracy. The method is used in the VALSOFT slicing system. One particular application is the validation of computer-controlled measurement systems; VALSOFT will be used by the Physikalisch-Technische Bundesanstalt for verification of legally required calibration standards. The article describes the VALSOFT slicing system. In particular, we describe how to generate and simplify path conditions based on program slices. A case study shows that the technique can indeed increase slice precision and reveal manipulations of the so-called calibration path.

Journal ArticleDOI
John Field1, Frank Tip1
TL;DR: This paper shows that general and semantically well-founded notions of slicing and dependence can be derived in a simple, uniform way from term rewriting systems (TRSs) and yield a method for automatically deriving certain minimal equational theorems on open terms as a consequence of deriving a single theorem about a closed term.
Abstract: Program slicing is a useful technique for debugging, testing, and analyzing programs. A program slice consists of the parts of a program that (potentially) affect the values computed at some point of interest. With rare exceptions, program slices have hitherto been computed and defined in ad-hoc and language-specific ways. The principal contribution of this paper is to show that general and semantically well-founded notions of slicing and dependence can be derived in a simple, uniform way from term rewriting systems (TRSs). Our slicing technique is applicable to any language whose semantics is specified in TRS form. Moreover, we show that our method admits an efficient implementation. Viewed more abstractly, our techniques yield a method for automatically deriving certain minimal equational theorems on open terms as a consequence of deriving a single theorem about a closed term. Our techniques can thus be used to augment the capabilities of equational theorem proving systems.

Journal ArticleDOI
TL;DR: The objective of the study is to evaluate the needed formalism to improve effort estimation and to study different approaches to record and reuse experiences from effort planning in software projects.
Abstract: This paper outlines a four-step effort estimation study and focuses on the first and second steps. The four steps are formulated to successively introduce a more formal effort experience base. The objective of the study is to evaluate the formalism needed to improve effort estimation and to study different approaches to record and reuse experiences from effort planning in software projects. In the first step (including seven projects), the objective is to compare estimation of effort based on a rough figure (indicating approximate size of the projects) with an informal experience base. The objective of the second step is reuse of experiences from an effort experience base in which the outcomes of seven previous projects were stored. Seven new projects are planned based on the previous experiences. After project completion, the outcomes are compared with the initial plans, and the data from six of the seven new projects are used to plan the seventh. It is clear from the studies that effort estimation is difficult and that the mean estimation error is in the range of 14%-19% independent of the approach used. Further, it is concluded that the best estimates are obtained when the projects use the previous experience and complement this information with their own thoughts and opinions. Finally, it is concluded that data collection is not enough in itself: the data collected must be processed, i.e. interpreted, generalized and synthesized into a reusable form.
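
For orientation, a mean estimation error of the kind reported can be computed as the average magnitude of relative error over the projects; the figures below are invented and merely show the arithmetic.

```python
# Hedged sketch: computing a mean relative estimation error of the kind the
# study reports. The estimates and actuals below are invented.
projects = [  # (estimated effort, actual effort) in person-hours
    (400, 470),
    (250, 220),
    (800, 950),
    (120, 145),
]

errors = [abs(actual - estimate) / actual for estimate, actual in projects]
mean_error = sum(errors) / len(errors)
print(f"mean estimation error: {mean_error:.0%}")
```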

Journal ArticleDOI
TL;DR: A method is demonstrated by which case-based reasoning can be applied to the business software development process as a mechanism for addressing the productivity and quality problems currently afflicting the corporate software development field.
Abstract: This paper, supported by a commercial case-based reasoning tool, demonstrates a method by which case-based reasoning can be applied to the business software development process. Requirements definition, effort estimation, software design, troubleshooting, and maintenance processes are discussed in terms of their candidacy for CBR technology. Proper planning for an adequate support infrastructure is stressed, as well as clear expectation setting through ongoing training. CBR is explored as a mechanism for addressing the productivity and quality problems currently afflicting the corporate software development field.
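
A minimal sketch of the retrieval step of CBR, applied here to effort estimation, may clarify the mechanism; the case base, features and distance measure are invented, and a production tool would add adaptation, normalisation and case retention.

```python
# Hedged sketch of the retrieval step of case-based reasoning for effort
# estimation: find the past project most similar to a new one and reuse its
# recorded effort as a starting estimate. The case base is invented.
cases = [
    {"size_kloc": 12, "team": 4, "domain_match": 1, "effort_pm": 20},
    {"size_kloc": 30, "team": 8, "domain_match": 0, "effort_pm": 65},
    {"size_kloc": 18, "team": 5, "domain_match": 1, "effort_pm": 31},
]

def distance(case, query):
    # Squared distance over the query features (features would normally be normalised).
    return sum((case[k] - query[k]) ** 2 for k in query)

new_project = {"size_kloc": 16, "team": 5, "domain_match": 1}
best = min(cases, key=lambda c: distance(c, new_project))
print("closest case suggests roughly", best["effort_pm"], "person-months")
```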

Journal ArticleDOI
TL;DR: This paper reports on work undertaken by three UK universities to provide students with the opportunity to experience group working across multiple sites using low cost tools to support distributed cooperative working.
Abstract: Distributed group working of software engineering teams is increasingly evident in the 'real world'. Tools to support such working are at present limited to general purpose groupware involving video, audio, chat, shared whiteboards and shared workspaces. Within software engineering education, group tasks have an established role in the curriculum. However, in general, groups are local to a particular university or institution and are composed of students who have a significant shared history (in terms of technical background and social interaction) and who are able to meet face-to-face on a regular basis. This paper reports on work undertaken by three UK universities to provide students with the opportunity to experience group working across multiple sites using low cost tools to support distributed cooperative working.

Journal ArticleDOI
TL;DR: Statistical analysis reveals that method use is dependent on the type of organisation and varies between mature and novice organisations; interesting differences are highlighted.
Abstract: The practices of software development methods and techniques have been widely reported in the information systems (IS) literature. However, the vast majority of the literature describes solely the use of well advocated methods and techniques, while few studies are available that investigated the influence of organisational attributes on method adoption. Moreover, experiences of method use within US organisations dominate the IS literature, while reports from the Asian region are non-existent. Based on these rationales, a study was undertaken to examine the method adoption pattern of the public and private sector organisations in Brunei Darussalam. Out of 100 organisations, 36 (36%) participated in the survey. Two thirds (67%) of the participating organisations reported adoption of a systematic approach to software development by embracing a method. Even though this appears satisfactory for a newly established small country like Brunei, the use of individual methods, particularly well known ones like structured methods, is less than expected. Statistical analysis reveals that method use is dependent on the type of organisation and varies between mature and novice organisations. The implications of these findings are discussed. These findings are also compared with those of US, UK and Australian studies, and interesting differences are highlighted.

Journal ArticleDOI
TL;DR: This premise is examined via a survey of system developers and system users about their perceptions of the frequency of system development problems, which indicates that users and developers of information systems perceive certain problems at different levels of occurrence.
Abstract: According to the expectation failure theory, information system failures can occur during development or during system use and may be viewed differently by various stakeholder groups. This premise is examined via a survey of system developers and system users about their perceptions of the frequency of system development problems. The data indicate that users and developers of information systems perceive certain problems at different levels of occurrence.

Journal ArticleDOI
TL;DR: This paper proposes the use of group-oriented models to describe steps of co-operative work which cannot be represented using a workflow model, and proposes several improvements which consist of an organisational dimension to the models, and an information flow diagram.
Abstract: The main focus of this paper is demonstrating a methodology for capturing and designing co-operative work processes. We present a method, appropriate for group work analysis, and particularly well suited for the analysis and design of workflow applications. With this end in view, we have chosen the OSSAD (Office Support System Analysis and Design) method as a foundation for a specific method for cooperative applications analysis and design. We propose several improvements which consist of an organisational dimension to the models, and an information flow diagram. Within this framework, we propose the use of group-oriented models to describe steps of co-operative work which cannot be represented using a workflow model.

Journal ArticleDOI
TL;DR: The consideration of software quality is broadened from the implementation-oriented perspective in the individual- and organization-oriented directions; the main hypothesis is that these ‘soft’ aspects give us a deeper co-understanding of software quality.
Abstract: The major problem in understanding quality concepts has been explained by the characteristic that people see quality quite differently from different perspectives. In the present paper we borrow this explanation for our starting point and broaden the consideration of software quality from the implementation-oriented perspective in the individual- and organization-oriented directions. Our main hypothesis is that these ‘soft’ aspects give us a deeper co-understanding of software quality. We proceed with three hermeneutic cycles such that after an introduction to five perspectives we analyse them through Kolb's experiential learning theory and finally deepen the analysis through the team- and organization-oriented theory of Nonaka and Takeuchi.

Journal ArticleDOI
TL;DR: An approach is presented to integrate multiple methods through the use of a meta-model; semantic equivalence between method components is used to establish a uniformity between individual methods, which reduces the complexity associated with method integration.
Abstract: Software development methods, sometimes called methodologies, have been widely embraced and practiced due to the quality of the systems achieved using such systematic approaches. However, the need to develop more and more complex systems is stretching the individual methods beyond their limits. It is therefore believed that an approach employing multiple methods would be a more effective way to tackle the current software development method crisis. This paper presents an approach to integrate multiple methods through the use of a meta-model. The contribution of the approach lies in two aspects. The first is the use of semantic equivalence between method components to establish a uniformity between individual methods. Such a uniformity reduces the complexity associated with method integration. The second is the incorporation of procedural information, namely tasks and their order, into the meta-model. The meta-model does not, however, compromise the expressive power provided by individual methods, since method-specific information is also captured. The effectiveness of the approach has been successfully demonstrated using a CASE tool developed using the meta-model to integrate object diagrams, state transition diagrams and data flow diagrams.

Journal ArticleDOI
TL;DR: An overview of the current status of idea processors is provided, pointing out their roots in brainstorming techniques, and how idea processors work, their nature, and their typical architecture are discussed.
Abstract: Idea processors, as a kind of software widely used in the business world, have not received much attention from academia. In this paper we provide an overview of the current status of idea processors. We start from the foundations of idea processors, pointing out their roots in brainstorming techniques. By examining several experimental systems and commercial products, we further discuss how idea processors work, their nature, and their typical architecture. We also summarize some research work related to idea processors, as well as relationships between idea processors and studies of computational creativity in artificial intelligence. Other related issues, such as group decision support systems and evaluation methods, are also briefly examined.

Journal ArticleDOI
TL;DR: Combining program slicing with algorithmic debugging improves the search method by eliminating many irrelevant questions to the user during bug localization.
Abstract: Debugging has always been a costly part of software development and software maintenance, which makes it important to find methods and tools to support this activity. Algorithmic program debugging is an interactive process where the debugging system acquires knowledge about the intended behavior of the debugged program and uses this knowledge to localize errors semi-automatically. This knowledge is a set of partial specifications collected by the debugging system through a number of questions to the user. Although in theory the specifications about the intended program behavior can be stored in advance, this is still an error-prone task in practice. Thus, knowledge collection during debugging is a necessity. A major drawback of this method is the large number of user interactions during bug localization. An important improvement would be to supply the debugging system with some information which can reduce this number. This is achieved by combining program slicing with algorithmic debugging. Program slicing improves the search method by eliminating many irrelevant questions to the user during bug localization.
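
A toy sketch of the combination may help: the debugger walks an execution tree asking about the correctness of intermediate results, but skips nodes outside the slice of the erroneous output. The execution tree, the slice and the scripted 'user' oracle are invented stand-ins for the interactive system described.

```python
# Hedged sketch of slicing-assisted algorithmic debugging: the debugger asks
# the user about intermediate results top-down, but skips calls that are not
# in the slice of the erroneous output.
execution_tree = {
    "main() = wrong": ["parse() = ok", "compute() = wrong", "log() = ok"],
    "compute() = wrong": ["scale() = wrong", "round() = ok"],
}
slice_of_output = {"main() = wrong", "compute() = wrong", "scale() = wrong"}

def user_says_correct(node):          # stand-in for the interactive questions
    return "wrong" not in node

def locate_bug(node):
    for child in execution_tree.get(node, []):
        if child not in slice_of_output:
            continue                   # slicing removes irrelevant questions
        if not user_says_correct(child):
            return locate_bug(child)
    return node                        # no erroneous child left: bug is here

print("suspected buggy unit:", locate_bug("main() = wrong"))
```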

Journal ArticleDOI
TL;DR: This paper attempts to organise the available tools into a number of categories, according to their information acquisition and retrieval methods, with the intention of exposing the strengths and weaknesses of the various approaches.
Abstract: Search Engines and Classified Directories have become essential tools for locating information on the World Wide Web. A consequence of increasing demand, as the volume of information on the Web has expanded, has been a vast growth in the number of tools available. Each one claims to be more comprehensive, more accurate and more intuitive to use than the last. This paper attempts to organise the available tools into a number of categories, according to their information acquisition and retrieval methods, with the intention of exposing the strengths and weaknesses of the various approaches. The importance and implications of Information Retrieval (IR) techniques are discussed. A description of the evolution of automated tools enables an insight into the aims of recent and future implementations.