
Showing papers in "Information & Software Technology in 1997"


Journal ArticleDOI
TL;DR: The evaluation of information systems (IS) investments has been a recognized problem area for the last three decades but has recently gained renewed interest from both management and academics, as discussed by the authors; consequently, there is a great call for methods and techniques that can help in evaluating IS investments at the proposal stage.
Abstract: The evaluation of information systems (IS) investments has been a recognized problem area for the last three decades, but has recently gained renewed interest from both management and academics. IS investments constitute a large and increasing portion of the capital expenditures of many organizations. However, it is difficult to evaluate the contribution of an IS investment to the goals pursued. Consequently, there is a great call for methods and techniques that can be of help in evaluating IS investments at the proposal stage. The contribution of the paper to the problem area is twofold. First, the different concepts which are used in evaluation are discussed and more narrowly defined. When speaking about IS investments, concepts are used that originate from different disciplines. In many cases there is not much agreement on the precise meaning of the different concepts used. However, a common language is a prerequisite for successful communication between the different organizational stakeholders in evaluation. In addition to this, the paper reviews the current methods and puts them into a frame of reference. All too often new methods and guidelines for investment evaluation are introduced without building on the extensive body of knowledge that is already incorporated in the available methods. Four basic approaches are discerned: the financial approach, the multi-criteria approach, the ratio approach and the portfolio approach. These approaches are subsequently compared on a number of characteristics, on the basis of methods that serve as examples for the different approaches. The paper concludes with suggestions on how to improve evaluation practice and recommendations for future research.

218 citations
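
To make the financial approach concrete, here is a minimal sketch of its best-known instrument, net present value (NPV), applied to an invented IS investment proposal; the cash flows and discount rate are illustrative assumptions, not figures from the paper.

```python
# Hedged sketch of the financial approach: score an IS investment by NPV.
def npv(cash_flows, discount_rate):
    """Net present value of yearly cash flows; cash_flows[0] is the
    initial (usually negative) outlay at year 0."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical proposal: 100k outlay, five years of 30k benefits, 10% rate.
proposal = [-100_000, 30_000, 30_000, 30_000, 30_000, 30_000]
print(f"NPV = {npv(proposal, 0.10):,.0f}")  # positive NPV favours acceptance
```

A multi-criteria or portfolio approach would instead weigh such a financial score against non-financial criteria across the whole set of proposals.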


Journal ArticleDOI
TL;DR: A 12-model classification scheme for performing experimentation within the software development domain is discussed and evaluated to determine how well the computer science community is succeeding at validating its theories, and how computer science compares to other scientific disciplines.
Abstract: Although experimentation is an accepted approach toward scientific validation in most scientific disciplines, it has only recently gained acceptance within the software development community. In this paper we discuss a 12-model classification scheme for performing experimentation within the software development domain. We evaluate over 600 published papers in the computer science literature and over one hundred papers from other scientific disciplines in order to determine: (1) how well the computer science community is succeeding at validating its theories, and (2) how computer science compares to other scientific disciplines.

209 citations


Journal ArticleDOI
TL;DR: The use of regression analysis to derive predictive equations for software metrics has recently been complemented by increasing numbers of studies using non-traditional methods, such as neural networks, fuzzy logic models, case-based reasoning systems, and regression trees.
Abstract: The use of regression analysis to derive predictive equations for software metrics has recently been complemented by increasing numbers of studies using non-traditional methods, such as neural networks, fuzzy logic models, case-based reasoning systems, and regression trees. There has also been an increasing level of sophistication in the regression-based techniques used, including robust regression methods, factor analysis, and more effective validation procedures. This paper examines the implications of using these methods and provides some recommendations as to when they may be appropriate. A comparison of the various techniques is also made in terms of their modelling capabilities with specific reference to software metrics.

201 citations
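
To illustrate the contrast the paper examines, the sketch below fits both an ordinary least-squares model and a regression tree to synthetic size/effort data and scores them with the mean magnitude of relative error (MMRE); the data-generating process and hyperparameters are assumptions made for this example, not anything from the paper.

```python
# Traditional regression vs. a regression tree on synthetic effort data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
size = rng.uniform(10, 100, 200)                      # project size, e.g. KLOC
effort = 3.0 * size ** 1.1 + rng.normal(0, 10, 200)   # noisy nonlinear relation
X, y = size.reshape(-1, 1), effort

for model in (LinearRegression(), DecisionTreeRegressor(max_depth=4)):
    model.fit(X[:150], y[:150])                       # train on 150 projects
    pred = model.predict(X[150:])                     # evaluate on held-out 50
    mmre = np.mean(np.abs(pred - y[150:]) / y[150:])  # mean magnitude of relative error
    print(type(model).__name__, f"MMRE = {mmre:.2f}")
```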


Journal ArticleDOI
TL;DR: This paper develops, defines, illustrates and assesses diversity measures, voting strategies for diversity exploitation, and interactions between the two, and introduces inductive programming techniques, particularly neural computing, as a cost-effective route to the practical use of multiversion systems outside the demanding requirements of safety-critical systems.
Abstract: The topic of this paper is the exploitation of diversity to enhance computer system reliability. It is well established that a diverse system composed of multiple alternative versions is more reliable than any single version alone, and this knowledge has occasionally been exploited in safety-critical applications. However, it is not clear what underlies this property, nor how the available diversity in a collection of versions is best exploited. We develop, define, illustrate and assess diversity measures, voting strategies for diversity exploitation, and interactions between the two. We take the view that a proper understanding of such issues is required if multiversion software engineering is to be elevated from the current “try it and see” procedure to a systematic technology. In addition, we introduce inductive programming techniques, particularly neural computing, as a cost-effective route to the practical use of multiversion systems outside the demanding requirements of safety-critical systems, i.e. in general software engineering.

166 citations
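
The core mechanism under study, majority voting over multiple versions, fits in a few lines. The simulation below assumes versions fail independently, which is precisely the optimistic assumption that work on diversity measures scrutinises; the failure probability is hypothetical.

```python
# Reliability of a 3-version system under majority voting (independence assumed).
import random

def majority_vote(outputs):
    """Return the output produced by the largest number of versions."""
    return max(set(outputs), key=outputs.count)

def simulate(p_fail=0.05, versions=3, demands=100_000):
    failures = 0
    for _ in range(demands):
        outputs = ["ok" if random.random() > p_fail else "bad"
                   for _ in range(versions)]
        if majority_vote(outputs) != "ok":
            failures += 1
    return failures / demands

# Analytically, P(2 or more of 3 fail) = 3(0.05^2)(0.95) + 0.05^3 ~= 0.0073.
print(f"single version: {0.05:.4f}   voted system: {simulate():.4f}")
```

Correlated failures would erode this gain, which is why measuring the diversity actually present in a set of versions matters.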


Journal ArticleDOI
TL;DR: Assessment of back-propagation neural network models for effort estimation produced encouraging results, with the networks able to estimate development effort within 25% of actual effort more than 75% of the time for one large commercial data set.
Abstract: Accurate software development effort estimation is important for effective project management. Research studies indicate that effort estimation is a complex issue and results have in general not been encouraging. Artificial Neural Networks are recognised for their ability to provide good results when dealing with problems where there are complex relationships between inputs and outputs, and where the input data is distorted by high noise levels. This paper reports on the assessment of back-propagation neural network models for effort estimation. The models were tested on simulated data as well as actual data of commercial projects. This project data had large productivity variations, noise and missing data values, which enabled model evaluation under typical software development conditions. The results were encouraging, with the networks showing an ability to estimate development effort within 25% of actual effort more than 75% of the time for one large commercial data set.

139 citations
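
The accuracy criterion quoted above is commonly written PRED(25): the proportion of estimates falling within 25% of actual effort. A small sketch with invented project data:

```python
# PRED(l): fraction of estimates within l*100% of actual effort.
def pred(estimates, actuals, level=0.25):
    hits = sum(abs(e - a) / a <= level for e, a in zip(estimates, actuals))
    return hits / len(actuals)

actual   = [120, 45, 300, 80, 150]   # hypothetical actual effort (person-months)
estimate = [110, 60, 290, 95, 130]   # corresponding model estimates
print(f"PRED(25) = {pred(estimate, actual):.2f}")  # 0.80 here; the paper reports > 0.75
```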


Journal ArticleDOI
TL;DR: This work assesses and compares various compiler testing techniques with respect to some selected criteria and also proposes some new research directions in compiler testing of modern programming languages.
Abstract: Software testing is an important and critical phase of the application software development life cycle. Testing is a time-consuming and costly stage that requires a high degree of ingenuity. In the development of safety-critical and dependable computer software such as language compilers and real-time embedded software, testing activities consume about 50% of the project time. In this work we address the area of compiler testing. The aim of compiler testing is to verify that the compiler implementation conforms to its specifications, that is, that it generates object code that faithfully corresponds to the language semantics and syntax as specified in the language documentation. A compiler should be carefully verified before its release, since it will be used by many users. Finding an optimal and complete test suite that can be used in the testing process is often an arduous task. Various methods have been proposed for the generation of compiler test cases. Many papers have been published on testing compilers, most of which address classical programming languages. In this paper, we assess and compare various compiler testing techniques with respect to some selected criteria and also propose some new research directions in compiler testing of modern programming languages.

84 citations
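
One classical family of test-case generation methods in this literature derives random sentences from the language grammar. The toy expression grammar below is purely illustrative; a real generator targets the full grammar and pairs each sentence with an expected outcome.

```python
# Grammar-based generation of syntactically valid compiler test inputs.
import random

GRAMMAR = {
    "expr":   [["term", "+", "expr"], ["term"]],
    "term":   [["factor", "*", "term"], ["factor"]],
    "factor": [["(", "expr", ")"], ["NUM"]],
}

def generate(symbol="expr", depth=0):
    if symbol == "NUM":
        return str(random.randint(0, 9))
    if symbol not in GRAMMAR:
        return symbol                      # terminal such as '+', '(' or ')'
    # force the shortest production once deep, so generation terminates
    options = GRAMMAR[symbol] if depth < 6 else [GRAMMAR[symbol][-1]]
    return "".join(generate(s, depth + 1) for s in random.choice(options))

for _ in range(3):
    print(generate())   # each sentence is fed to the compiler under test
```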


Journal ArticleDOI
TL;DR: Experiments using a CMAC, or Albus perceptron, show that improved accuracy of prediction in software cost modelling is possible compared with conventional regression analysis.
Abstract: Machine learning techniques such as neural networks, rule induction, genetic algorithms and case-based reasoning are finding applications in a wide variety of fields such as computer vision, econometrics and medicine, where human abilities have proven to be superior to those of computers. Such techniques hold the promise of being able to make sense of a variety of inputs of different types, in producing an output. Software cost modelling has always appeared to be a rather hit-or-miss business where statistical methods frequently result in low accuracy of prediction. Some experiments using a neural network — a CMAC or Albus perceptron — have been conducted, highlighting some of the problems that arise when machine learning techniques are applied to software cost modelling. These experiments show that, compared with conventional regression analysis, improved accuracy of prediction is possible.

76 citations
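
For readers unfamiliar with the learner used in these experiments: a CMAC, or Albus perceptron, maps an input onto one active cell in each of several overlapping coarse tilings and predicts with the sum of the selected weights, trained by a simple LMS rule. The single-input sketch below uses guessed hyperparameters and a toy target function, not the authors' configuration.

```python
# Minimal one-dimensional CMAC with LMS training.
import numpy as np

class CMAC:
    def __init__(self, tilings=8, tiles=32, lo=0.0, hi=100.0, lr=0.1):
        self.tilings, self.tiles, self.lo, self.hi, self.lr = tilings, tiles, lo, hi, lr
        self.w = np.zeros((tilings, tiles + 1))       # one weight table per tiling

    def _active(self, x):
        width = (self.hi - self.lo) / self.tiles
        for t in range(self.tilings):
            offset = width * t / self.tilings         # each tiling is shifted
            yield t, int((x - self.lo + offset) / width)

    def predict(self, x):
        return sum(self.w[t, i] for t, i in self._active(x))

    def train(self, x, target):
        err = target - self.predict(x)
        for t, i in self._active(x):                  # LMS update shared by tilings
            self.w[t, i] += self.lr * err / self.tilings

cmac = CMAC()
for _ in range(2000):                                 # learn effort = 5 * size (toy)
    size = np.random.uniform(0, 100)
    cmac.train(size, 5 * size)
print(round(cmac.predict(50.0), 1))                   # approximately 250
```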


Journal ArticleDOI
TL;DR: This paper introduces statistical power analysis, attempts to demonstrate the potential difficulties of applying it to the design of Software Engineering experiments, and concludes with a discussion of what the authors believe is the most viable method of incorporating the evaluation of statistical power within the experimental design process.
Abstract: Recently we have witnessed a welcomed increase in the amount of empirical evaluation of Software Engineering methods and concepts. It is hoped that this increase will lead to establishing Software Engineering as a well-defined subject with a sound scientifically proven underpinning rather than a topic based upon unsubstantiated theories and personal belief. For this to happen the empirical work must be of the highest standard. Unfortunately producing meaningful empirical evaluations is a highly hazardous activity, full of uncertainties and often unseen difficulties. Any researcher can overlook or neglect a seemingly innocuous factor, which in fact invalidates all of the work. More serious is that large sections of the community can overlook essential experimental design guidelines, which bring into question the validity of much of the work undertaken to date. In this paper, the authors address one such factor — Statistical Power Analysis. It is believed, and will be demonstrated, that any body of research undertaken without considering statistical power as a fundamental design parameter is potentially fatally flawed. Unfortunately the authors are unaware of much Software Engineering research which takes this parameter into account. In addition to introducing Statistical Power, the paper will attempt to demonstrate the potential difficulties of applying it to the design of Software Engineering experiments and concludes with a discussion of what the authors believe is the most viable method of incorporating the evaluation of statistical power within the experimental design process.

76 citations
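
To make statistical power concrete, the snippet below uses statsmodels (a tooling choice for this sketch, not the authors' procedure) to ask how many subjects per group a two-sample t-test needs to reach the conventional 80% power at alpha = 0.05.

```python
# Sample sizes required for 80% power at various standardized effect sizes.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for effect in (0.2, 0.5, 0.8):          # Cohen's small / medium / large effects
    n = analysis.solve_power(effect_size=effect, alpha=0.05, power=0.8)
    print(f"effect size {effect}: about {n:.0f} subjects per group")
# Small effects demand hundreds of subjects, one reason many software
# engineering experiments turn out to be underpowered.
```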


Journal ArticleDOI
TL;DR: In this paper, the authors introduce Perspective-Based Reading (PBR) for code documents, a systematic technique to support individual defect detection, which is embodied within perspective-based algorithmic scenarios.
Abstract: Despite dramatic changes in software development in the two decades since the term software engineering was coined, software quality deficiencies and cost overruns continue to afflict the software industry. Inspections, developed at IBM by Fagan in the early 1970s [1], can be used to improve upon these problems because they allow the detection and removal of defects after each phase of the software development process. But, in most published inspection processes, individuals performing defect detection are not systematically supported; there, defect detection depends heavily upon factors like chance or experience. Further, there is an ongoing debate in the literature whether or not defect detection is more effective when performed as a group activity and hence should be conducted in meetings [5,11,13,14]. In this article we introduce Perspective-Based Reading (PBR) for code documents, a systematic technique to support individual defect detection. PBR offers guidance to individual inspectors for defect detection. This guidance is embodied within perspective-based algorithmic scenarios, which make individual defect detection independent of experience. To test this assumption, we tailored and introduced PBR in the inspection process at Robert Bosch GmbH. We conducted two training sessions in the form of a 2 × 3 fractional-factorial experiment in which 11 professional software developers reviewed code documents from three different perspectives. The experimental results are: (1) Perspective-Based Reading and the type of document have an influence on individual defect detection, (2) multi-individual inspection meetings were not very useful for detecting defects, (3) the overlap of detected defects among inspectors using different perspectives is low, and (4) there are no significant differences with respect to defect detection between inspectors having experience in the programming language and/or the application domain and those that do not.

74 citations
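
Finding (3), the low overlap between perspectives, is straightforward to quantify once each inspector's defect set is known. The sketch below uses hypothetical defect IDs and the Jaccard coefficient as one plausible overlap measure; neither comes from the paper.

```python
# Pairwise overlap of defects found under different reading perspectives.
def jaccard(a, b):
    return len(a & b) / len(a | b)

analyst = {1, 2, 5, 9}      # defect IDs found reading as an analyst (hypothetical)
tester  = {2, 3, 7, 8}      # ... as a tester
user    = {4, 5, 7, 10}     # ... as a user

pairs = {"analyst/tester": (analyst, tester),
         "analyst/user":   (analyst, user),
         "tester/user":    (tester, user)}
for name, (x, y) in pairs.items():
    print(f"{name}: overlap = {jaccard(x, y):.2f}")   # low pairwise overlap
print("union of all perspectives:", sorted(analyst | tester | user))
```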


Journal ArticleDOI
TL;DR: This report describes an empirical study comparing three defect detection techniques: (a) code reading by stepwise abstraction, (b) functional testing using equivalence partitioning and boundary value analysis, and (c) structural testing using branch coverage.
Abstract: This report describes an empirical study comparing three defect detection techniques: (a) code reading by stepwise abstraction, (b) functional testing using equivalence partitioning and boundary value analysis, and (c) structural testing using branch coverage. It is a replication of a study that has been carried out at least four times previously over the last 20 years. This study used 47 student subjects to apply the techniques to small C programs in a fractional factorial experimental design. The major findings of the study are: (a) that the individual techniques are of broadly similar effectiveness in terms of observing failures and finding faults, (b) that the relative effectiveness of the techniques depends on the nature of the program and its faults, and (c) that these techniques are consistently much more effective when used in combination with each other. These results contribute to a growing body of empirical evidence that supports generally held beliefs about the effectiveness of defect detection techniques in software engineering.

65 citations
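
As a reminder of what technique (b) involves, the sketch below generates classic boundary-value test inputs for an assumed integer range; the specification is invented for illustration.

```python
# Boundary value analysis: probe inputs at and around the edges of a range.
def boundary_values(lo, hi):
    """Six-point selection around the edges of the valid range [lo, hi]."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

# Hypothetical specification: a percentage input is valid in [0, 100].
for value in boundary_values(0, 100):
    expected = "accept" if 0 <= value <= 100 else "reject"
    print(f"input {value:4d} -> expect {expected}")
```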


Journal ArticleDOI
TL;DR: In this article, the authors provide models for evaluating the business value of information systems quality, defined as the net value of an information system for the user organisation, which is affected by the cost of planning, developing, maintaining and using the system, and by the benefits achieved through systems use.
Abstract: This article provides models for evaluating the business value of IS quality. Business quality is defined as the net value of an information system for the user organisation. Thus, it is affected by the cost of planning, developing, maintaining and using the system, and by the benefits achieved through systems use. The article identifies ways IS work quality and IS user quality can influence the business quality of information systems. However, it is acknowledged that a high quality information system is not alone sufficient. To ensure business quality, integrative processes between IS professionals and business managers/analysts are needed. The article assists practising managers in evaluating and improving the business quality of their information systems. For information systems research, the article provides models for explaining and measuring the business value of information systems quality.

Journal ArticleDOI
TL;DR: This paper provides essential principles for the process of formalisation in the context of modelling techniques as well as a number of small but realistic formalisation case studies.
Abstract: Although the need for formalisation of modelling techniques is generally recognised, not much literature is devoted to the actual process involved. This is comparable to the situation in mathematics, where the focus is on proofs but not on the process of proving. This paper attempts to fill this lacuna and provides essential principles for the process of formalisation in the context of modelling techniques, as well as a number of small but realistic formalisation case studies.

Journal ArticleDOI
TL;DR: The paper concludes with a discussion of two polar perspectives taken from the history of ethnographic studies which the authors feel are useful in framing the modern-day positioning of ethnography and IS development, as well as highlighting some of the tensions involved in contemporary IS development.
Abstract: Ethnography is a style of qualitative research method which has been discussed in a number of ways in association with the development of software systems. In this paper we identify a number of distinct uses to which ethnography and ethnographies have been put in information systems development: particularly the distinction between ethnographies for IS development and ethnographies of IS development. We also survey some of the recent proposals for integrating ethnographic practice into information systems work: ethnography within IS development. Our main aim in constructing this taxonomy is to serve as a useful framework for positioning our own ideas on the relationship between ethnography and IS development, derived largely from our own ongoing research work in the area of software prototyping. The paper concludes with a discussion of two polar perspectives taken from the history of ethnographic studies which we feel are useful in framing the modern-day positioning of ethnography and IS development, as well as highlighting some of the tensions involved in contemporary IS development.

Journal ArticleDOI
TL;DR: It is argued that, although the importance of the perceived product quality is recognised world-wide, there does not exist a rigorous method for measuring customer perception of product quality, and a method is presented to measure not only end-users' perception of the product, but also company employees' perceptions of the quality of the internal deliverables produced within the company.
Abstract: This paper presents a method for measuring customers' perception of software quality. We argue that, although the importance of perceived product quality is recognised world-wide, no rigorous method exists for measuring customer perception of product quality. The method presented here is expanded to measure not only end-users' perception of the product, but also company employees' perception of the quality of the internal deliverables produced within the company. Additionally, we present examples of the method's application on a set of projects, in parallel with internal measurements using a set of commonly used product metrics. Subsequently, we compare measurement results derived from customer perception measurements with results derived from internal measurements, and we discuss the advantages and disadvantages of each method.


Journal ArticleDOI
TL;DR: A number of alternative interview-based effort estimation methods are presented and an experiment in which software engineers were asked to use different methods to estimate the actual effort it would take to perform a number of tasks is presented.
Abstract: Effort estimation is difficult in general, and in software development it becomes even more complicated if the software process is changed. In this paper a number of alternative interview-based effort estimation methods are presented. The main focus of the paper is an experiment in which software engineers were asked to use different methods to estimate the actual effort it would take to perform a number of tasks. The results from the subjective data are compared with the actual outcome from performing the tasks.

Journal ArticleDOI
TL;DR: Though the three perspectives are not mutually exclusive, as they all address IT utilisation in organisations, they do emphasise different activities, and the methods available for controlling and improving information systems quality are restricted to those commonly used in the three different fields.
Abstract: The differences between software quality and information systems quality are discussed from three different viewpoints—managerial, organisational, and engineering—to assist the development of quality improvement programs. The article uses interviews with open-ended questions, with three experienced researchers in the information technology field, each representing one of the three viewpoints. It explores the management strategies for IS quality and software quality, discusses how the three distinct ontological and epistemological bases affect the views on quality from the three viewpoints, and questions the relevance of quantitative assessment of software quality to information systems quality. After considering quality improvement as a process reflecting, and influenced by, the interaction between customer and supplier, we outline strategies for organisational change to achieve continuous improvement of information systems quality. The article concludes that though the three perspectives are not mutually exclusive, as they all address IT utilisation in organisations, they do emphasise different activities, and the methods available for controlling and improving information systems quality are restricted to those commonly used in the three different fields. More flexible methods are needed to facilitate successful quality improvement programs.

Journal ArticleDOI
TL;DR: The results suggest that projects with different characteristics not only complete the RCA process performing a different number of iterations, but also allocate their resources in different ways.
Abstract: In the light of certain important gaps observed in the management of the requirements capture and analysis (RCA) process, a study was conducted to investigate the presence of significant differences between projects developed by different people and organisations. This paper presents the results of a survey of 107 projects developed from 74 different organisations, in which an attempt is made to identify the relationship between developers, project characteristics and different aspects of the RCA process. The results suggest that projects with different characteristics not only complete the RCA process performing a different number of iterations, but also allocate their resources in different ways. It is also suggested that these differences might occur due to differences in the level of some important factors (concerning team members' and users' attitude and project management) between projects with different characteristics.

Journal ArticleDOI
TL;DR: This paper presents a formal model of phase containment metrics, motivated by the key software development objective that a defect should not escape the phase in which it was introduced, and demonstrates how the phase containment metrics lead to software quality and process improvements.
Abstract: The development of high-quality software is an essential business activity in many organizations. Improving software quality requires the effective use of a software development process with well-defined phases of development and a metrics program to define and verify product and process quality. A key objective in software development is phase containment of defects. It is accepted knowledge that identifying and correcting defects as close to their source as possible produces higher quality software with enhanced development productivity. The essential goal is that a defect should not escape the phase in which it is introduced. This paper presents a formal model of phase containment metrics. We report on the implementation of phase containment metrics in a real software development project. Data tables and charts are proposed as effective means of collecting and reporting the metrics. We demonstrate how the phase containment metrics lead to software quality and process improvements. Future research directions are presented in the conclusion.
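
A hedged sketch of the central quantity (the paper's formal model is more detailed): phase containment effectiveness is the share of defects caught in the same phase that introduced them. The defect counts below are hypothetical.

```python
# Phase containment: defects found in their phase of origin / all introduced there.
counts = {   # counts[introduced_phase][detected_phase], invented numbers
    "requirements": {"requirements": 12, "design": 4, "code": 3, "test": 2},
    "design":       {"design": 20, "code": 6, "test": 4},
    "code":         {"code": 35, "test": 15},
}

for phase, detected in counts.items():
    introduced = sum(detected.values())
    contained = detected.get(phase, 0)
    print(f"{phase:>12}: containment = {contained / introduced:.0%} "
          f"({introduced - contained} defects escaped)")
```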

Journal ArticleDOI
TL;DR: The paper suggests that providing a mapping between these notations gains the accessibility of a well-understood user-facing modelling paradigm (RADs) whilst retaining the formality of CSP.
Abstract: This paper examines two modelling paradigms, namely Hoare's Communicating Sequential Processes (CSP) and a subset of Role Activity Diagrams (RADs) and shows how they can be combined to give a new approach to process modelling. We examine the two notations by reference to processes from two different business domains. For each domain, we transform a RAD model (by way of methodical mapping) to arrive at an equivalent formal CSP model. The latter is then explored using a stepper, which allows for process simulation by executing the model. The paper suggests that by providing a mapping between these notations we gain the accessibility of a well understood user-facing modelling paradigm, (RADs), whilst retaining the formality of CSP. This provides us not only with the advantages of understandable user-facing models, for process elicitation and presentation, but also gives us the ability to experiment with (by process simulation) the effects of process change.
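
To show in miniature what exploring a model with a stepper might look like (an assumed encoding, not the authors' CSP tooling): a process can be represented as a map from state to the events it can engage in, and a step executes one enabled event.

```python
# A toy process stepper in the spirit of simulating a RAD-derived model.
PROCESS = {   # hypothetical order-handling role: state -> [(event, next_state)]
    "idle":       [("receive_order", "checking")],
    "checking":   [("approve", "fulfilling"), ("reject", "idle")],
    "fulfilling": [("dispatch", "idle")],
}

def step(state, trace):
    event, next_state = PROCESS[state][0]   # a real stepper lets the user choose
    trace.append(event)
    return next_state

state, trace = "idle", []
for _ in range(4):
    state = step(state, trace)
print(" -> ".join(trace))  # receive_order -> approve -> dispatch -> receive_order
```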

Journal ArticleDOI
TL;DR: A survey carried out in order to ascertain the current state of practice amongst developers of multimedia systems identifies some of the features which distinguish multimedia system development projects from more conventional software developments.
Abstract: A better understanding of the distinguishing characteristics of different forms of system development is vital if we are to tailor recommendations regarding good practice to the specific needs of particular projects. This paper presents the findings of a survey carried out in order to ascertain the current state of practice amongst developers of multimedia systems. The survey involved conducting structured interviews with professional multimedia developers from seven different organizations, and analysing feedback from postal questionnaires completed by developers working in 16 further organizations. The findings are used here to identify some of the features which distinguish multimedia system development projects from more conventional software developments.

Journal ArticleDOI
TL;DR: Use quality, system success and user satisfaction models based on traditional information systems are discussed, and useful features from service marketing models are suggested for inclusion in quality models for mass information systems like the web.
Abstract: Mass information systems like the World Wide Web (WWW) are creating new challenges for information system designers. Users are becoming increasingly aware of which quality aspects of WWW systems appeal to their user style, as well as which WWW sites are of high quality and which are to be avoided. Learning to attract WWW visitors' attention and make them revisit a site is an important economic issue in electronic commerce and web surfing. Practices for designing traditional transaction systems, intended for specific groups of users, might not be valid any longer when designing for unspecified users who usually choose services voluntarily. Use quality, system success and user satisfaction models based on traditional information systems are discussed. Useful features from service marketing models are suggested for inclusion in quality models for mass information systems like the web. Satisfaction measures from service marketing should be taken into account, since an information system, and especially a mass information system like the WWW, provides information services for the user.

Journal ArticleDOI
TL;DR: A CASE tool is described which addresses the problems of defining business needs and the IT systems required to support them and its two arenas of use with a case study and an example.
Abstract: A CASE tool is described which addresses the problems of defining business needs and the IT systems required to support them. The tool is based on Critical Success Factors (CSF) analysis for use within IT planning and requirements determination, and has three basic uses: (1) a database of information concerning organizational factors gathered during CSF analysis, linking business and IT needs, (2) a source of reports showing interrelationships between the organizational factors, (3) a ‘what-if’ facility that allows the analyst to vary the priorities of organizational factors and to study the effects on the priorities of business factors such as business units and IT-related factors such as applications. We describe the tool and its two arenas of use with a case study and an example.

Journal ArticleDOI
TL;DR: Information systems quality is achieved substantially through a system's subsequent maintenance after it enters production, and IS maintenance is fundamentally an activity of quality achievement and maintenance.
Abstract: Information systems quality is most commonly thought of as achieved primarily through a system's original design and development. In fact, it is achieved substantially through the system's subsequent maintenance after it enters production. IS maintenance is, in other words, fundamentally an activity of quality achievement and maintenance. The term ‘quality’ may be associated with four definitions: quality is excellence, quality is value, quality is conformance to specifications, and quality is meeting and/or exceeding customers' expectations. IS maintenance is basic to achieving quality in each definitional case. We explain why, drawing from our own empirical research as well as that of others.

Journal ArticleDOI
TL;DR: This paper presents a short report on the invited lecture given by Dr. H.D. Rombach at the EASE-97 conference, where he demonstrated how a series of experimental activities were used to take a reading technique from initial conception into widespread use.
Abstract: This paper presents a short report on the invited lecture given by Dr. H.D. Rombach at the EASE-97 conference. In this lecture Dr. Rombach described his view of the importance of experimentation to the introduction of new techniques and methods. He demonstrated how a series of experimental activities was used to take a reading technique (i.e. reading by stepwise abstraction) from initial conception into widespread use. In providing this report, I will use a selection of the information from the slides presented by Dr. Rombach, linked by my own summaries. As such, any misrepresentations of Dr. Rombach's views are my mistakes.

Journal ArticleDOI
TL;DR: There is a need and a demand for better quality practice that can be attained through cooperation between practitioners and researchers and some suggestions for bridging the gap are presented.
Abstract: Investing in quality was popular in the early 1990s. Several approaches were developed, but it seems that none of them provides a solution that is generally accepted and adequately detailed for both scientific and practical purposes within the IS field. We claim that most quality approaches concentrate too much on the technical and control oriented aspects of managing quality thus causing unsatisfactory results. There is a need and a demand for better quality practice that can be attained through cooperation between practitioners and researchers. This paper discusses these challenges to IS quality and presents some suggestions for bridging the gap.

Journal ArticleDOI
TL;DR: The proposed model provides a new way to improve the software process by enhancing the developer's cognitive skill, drawing on a category-based representation and a set of problem-solving control strategies.
Abstract: A software process is a problem-solving process with human cognitive characteristics. This paper presents a cognitive-based problem-solving framework consisting of a problem-solving cognitive space, a category-based representation and a set of problem-solving control strategies. As an application of the framework, a cognitive-based software process model is proposed to unify the software process and the developer's cognition. The proposed model provides a new way to improve the software process by enhancing the developer's cognitive skill. The development process of management information systems (MIS) has been used to demonstrate the proposed model.

Journal ArticleDOI
TL;DR: The results of the analysis confirm the assertion that, within the automated environment class, specification size indicators (that may be automatically and objectively derived) are strongly related to process effort requirements.
Abstract: Advances in software process technology have rendered some existing methods of size assessment and effort estimation inapplicable. The use of automation in the software process, however, provides an opportunity for the development of more appropriate software size-based effort estimation models. A specification-based size assessment method has therefore been developed and tested in relation to process effort on a preliminary set of systems. The results of the analysis confirm the assertion that, within the automated environment class, specification size indicators (that may be automatically and objectively derived) are strongly related to process effort requirements.

Journal ArticleDOI
TL;DR: The DEVICE method re-uses the primitives of active OODB systems without introducing low-level data structures, and provides an infrastructure for the integration of all database rule paradigms into a single knowledge base system.
Abstract: This paper describes DEVICE, a technique for the smooth integration of production rules into an active Object-Oriented Database (OODB) system that provides Event-Condition-Action (ECA) rules only. The emphasis is on the compilation of rule conditions into a discrimination network for incremental matching at run-time. The network consists of primitive, logical and complex events that save information about partial matching of condition elements, as in the RETE algorithm, and trigger the one ECA rule that corresponds to the production rule. The DEVICE method re-uses the primitives of active OODB systems without introducing low-level data structures, and provides an infrastructure for the integration of all database rule paradigms into a single knowledge base system.
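
A toy reduction of the compilation idea (not DEVICE's actual data structures): a production rule's two-part condition becomes a logical event node that remembers partial matches from each input, RETE-style, and fires the corresponding ECA action once both sides have matched.

```python
# A two-input logical event node with memory of partial matches.
class LogicalEvent:
    def __init__(self, action):
        self.left, self.right, self.action = {}, {}, action

    def signal_left(self, key, data):       # e.g. an order event for customer key
        self.left[key] = data
        if key in self.right:
            self.action(data, self.right[key])

    def signal_right(self, key, data):      # e.g. a customer-status event
        self.right[key] = data
        if key in self.left:
            self.action(self.left[key], data)

# Hypothetical rule: big order AND gold customer => grant a discount.
rule = LogicalEvent(lambda order, cust: print(f"discount order {order['id']}"))
rule.signal_left("c42", {"id": 7, "total": 1500})   # partial match stored...
rule.signal_right("c42", {"status": "gold"})        # ...match completes, rule fires
```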

Journal ArticleDOI
TL;DR: This paper supplements their studies by proposing a methodology to construct classification trees systematically from given sets of classifications and their associated classes, via the notion of the classification-hierarchy table.
Abstract: The classification-tree method developed by Grochtmann and coworkers is a black-box testing technique to assist test engineers in constructing test cases systematically from the functional specifications, via the construction of classification trees. This paper supplements their studies by proposing a methodology to construct classification trees systematically from given sets of classifications and their associated classes, via the notion of the classification-hierarchy table. The intuition behind the classification-hierarchy table is to capture the hierarchical relationship for every pair of distinct classifications, from which classification trees can be constructed.
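
A minimal sketch of the downstream use of such trees (classifications and classes invented for this example): once the leaf classes of each classification are fixed, candidate test cases are the combinations taking one class per classification.

```python
# Enumerate test-case combinations from a classification tree's leaf classes.
from itertools import product

classifications = {   # hypothetical tree for a file-transfer function
    "file_size":   ["empty", "small", "huge"],
    "connection":  ["lan", "dialup"],
    "permissions": ["read_only", "read_write"],
}

test_cases = list(product(*classifications.values()))
print(len(test_cases), "combinations, e.g.:")       # 3 * 2 * 2 = 12
for case in test_cases[:3]:
    print(dict(zip(classifications, case)))
```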