
Showing papers in "Information & Software Technology in 1990"



Journal ArticleDOI
TL;DR: Attempts to use software complexity metrics in quantitative modelling have been frustrated by a limited understanding of precisely what is being measured, particularly in predictive models; the paper investigates basic issues in the modelling process, including shared variance among metrics and the possible relationship between complexity metrics and measures of program quality.
Abstract: The use of software complexity metrics in the determination of software quality has met with limited success. Many metrics measure similar aspects of program differences. Some lack a sound theoretical foundation. Attempts to use these metrics in quantitative modelling scenarios have been frustrated by a lack of understanding of the precise nature of exactly what is being measured. This is particularly true in the application of these metrics to predictive models. The paper investigates some basic issues associated with the modelling process, including problems of shared variance among metrics and the possible relationship between complexity metrics and measures of program quality. The modelling techniques are applied to a sample data set to explore the differences between modelling techniques with raw complexity metrics and complexity metrics that have been simplified through factor analysis. The ultimate objective is to provide the foundation for the use of complexity metrics in predictive models. This, in turn, will permit the effective use of these measures in the management of complex software projects.
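
As an illustration of the factor-analytic simplification the abstract describes, the following minimal Python sketch (metric names and data are hypothetical, not from the paper) derives a single shared-variance factor from correlated complexity metrics and regresses a quality measure on it:

```python
# Illustrative sketch (not the paper's data): reducing shared variance among
# complexity metrics with a principal-factor decomposition before regression.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical module measurements: LOC, cyclomatic complexity, Halstead volume.
# All three are driven by a latent "size" factor, so they are collinear.
size = rng.gamma(4.0, 50.0, 200)
metrics = np.column_stack([
    size + rng.normal(0, 10, 200),       # LOC
    size / 20 + rng.normal(0, 1, 200),   # v(G)
    size * 8 + rng.normal(0, 100, 200),  # Halstead V
])
defects = size / 40 + rng.normal(0, 1, 200)  # hypothetical quality measure

# Standardize, then factor the correlation matrix.
z = (metrics - metrics.mean(0)) / metrics.std(0)
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(z, rowvar=False))
factor_score = z @ eigvecs[:, -1]        # score on the dominant factor

# Regress defects on one factor instead of three collinear raw metrics.
slope, intercept = np.polyfit(factor_score, defects, 1)
print("variance explained by first factor:", eigvals[-1] / eigvals.sum())
print("defects ~ %.3f * factor + %.3f" % (slope, intercept))
```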

96 citations


Journal ArticleDOI
TL;DR: A definition and conceptual framework of software reuse representations is proposed that relates these methods to each other and to the software life-cycle.
Abstract: Many methods for representing software components for reuse have been proposed. These include traditional library and information science methods, knowledge-based methods, and hypertext. The paper surveys and categorizes these methods, and discusses systems in which they have been used. A definition and conceptual framework of software reuse representations is proposed that relates these methods to each other and to the software life-cycle.

91 citations


Journal ArticleDOI
TL;DR: A software cost-estimation model developed specifically to deal with decisions about software reuse is described; reuse shows significant potential as a technique for cutting costs and helping to free developers for work on the more complex or novel components of a software system.
Abstract: A major reason for increasing the level of reuse in the software development process is to lower costs. By replacing expensive development activities with the adoption of relatively inexpensive reused parts, software reuse has significant potential as a technique for cutting costs and helping to free developers for work on the more complex or novel components of a software system. Unfortunately, the level of reuse currently in place in the software industry is limited by an inability to predict how and where resources should be invested. The paper describes a software cost-estimation model developed specifically to deal with decisions about software reuse.
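
A minimal sketch of the kind of trade-off such a model quantifies; the constants and component data below are illustrative assumptions, not the paper's model:

```python
# Hypothetical comparison: building each component new vs adopting a
# reusable part at a fraction of new-development cost.
COST_PER_KLOC_NEW = 10.0      # assumed effort units per KLOC developed
RELATIVE_COST_OF_REUSE = 0.2  # assumed adoption cost as fraction of new

components = [
    # (name, size in KLOC, reusable part available?)
    ("parser", 4.0, True),
    ("scheduler", 6.5, False),
    ("report-writer", 3.2, True),
]

def component_cost(size_kloc, reuse):
    base = size_kloc * COST_PER_KLOC_NEW
    return base * RELATIVE_COST_OF_REUSE if reuse else base

total = sum(component_cost(s, r) for _, s, r in components)
baseline = sum(s * COST_PER_KLOC_NEW for _, s, _ in components)
print(f"estimated cost {total:.1f} vs all-new {baseline:.1f} "
      f"(saving {baseline - total:.1f})")
```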

61 citations


Journal ArticleDOI
TL;DR: It is shown that multiversion testing involving more than four versions offers rapidly diminishing returns in terms of failure-detection effectiveness, unless additional versions reduce the span of correlated failures.
Abstract: Software testing based on program execution may consume considerable development effort. In this context the testing of two or more functionally equivalent versions of a program against each other appeals because it offers a potentially simple way of checking for the correctness of the program responses to a large number of test cases. One name for the technique is back-to-back testing. The approach can be cost-effective for new software, as well as for testing software after major modifications. The paper reviews empirical information on back-to-back testing and analyses the strategy. A simple model is used to explore its efficiency and cost-effectiveness. The key parameters are the degree and nature of the interversion failure correlation, the per failure identification and correction effort required during single-version program testing, and the shape of single and multiversion reliability growth functions. The advantages and deficiencies of the technique are discussed and some suggestions given about its practical use.
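
A minimal sketch of back-to-back testing; both versions here are hypothetical, and one is deliberately seeded with a defect:

```python
# Two functionally equivalent versions are run against the same generated
# test cases; any disagreement flags a case for manual diagnosis.
import random

def version_a(xs):                 # reference implementation
    return sorted(xs)

def version_b(xs):                 # independently written version
    out = list(xs)
    for i in range(len(out)):      # naive exchange sort
        for j in range(i + 1, len(out)):
            if out[j] < out[i]:
                out[i], out[j] = out[j], out[i]
    return out[:-1] if len(out) > 5 else out   # seeded defect

random.seed(1)
discrepancies = []
for _ in range(1000):
    data = [random.randint(-100, 100) for _ in range(random.randint(0, 10))]
    if version_a(data) != version_b(data):
        discrepancies.append(data)

# Agreement on all cases raises confidence, but, as the paper stresses,
# correlated failures can let identical wrong answers pass unnoticed.
print(f"{len(discrepancies)} of 1000 generated cases disagreed")
```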

47 citations


Journal ArticleDOI
TL;DR: A method of software quality control based on software metrics is described and applied to design metrics to illustrate how they can be used constructively during the software production process.
Abstract: The paper describes a method of software quality control based on the use of software metrics. The method is applied to software design metrics to illustrate how design metrics can be used constructively during the software production process. The various types of design metrics and how they can be used to support module (procedure) quality-control are discussed. This involves adapting conventional quality control methods such as control charts to the realities of software, by using: ‘robust’ summary statistics to construct ranges of acceptable metric values; scatterplots to detect modules with unusual combinations of metric values; and different types of metrics to help identify the underlying reasons for a module having unacceptable (anomalous) metric values. The approach is illustrated with examples of metrics from a number of existing software products.
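
A minimal sketch of the 'robust range' idea the abstract describes; the metric values are hypothetical and the quartile computation is deliberately crude:

```python
# Flag modules whose design metrics fall outside median +/- k * IQR,
# a control-chart-style range built from robust summary statistics.
import statistics

modules = {
    # module: (fan_in, fan_out, length)
    "open_file": (3, 2, 40), "parse_hdr": (5, 4, 120), "fmt_row": (2, 1, 35),
    "dispatch": (18, 14, 60), "log_msg": (4, 1, 25), "init_db": (6, 9, 310),
}

def robust_range(values, k=1.5):
    values = sorted(values)
    q1 = values[len(values) // 4]          # crude quartiles, fine for a sketch
    q3 = values[(3 * len(values)) // 4]
    med = statistics.median(values)
    return med - k * (q3 - q1), med + k * (q3 - q1)

for idx, name in enumerate(["fan-in", "fan-out", "length"]):
    lo, hi = robust_range([m[idx] for m in modules.values()])
    for mod, vals in modules.items():
        if not lo <= vals[idx] <= hi:
            print(f"{mod}: anomalous {name} = {vals[idx]} "
                  f"(acceptable range {lo:.1f}..{hi:.1f})")
```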

40 citations


Journal ArticleDOI
Juhani Iivari
TL;DR: The purpose of the two-part article is to develop its theoretical basis, to introduce a hierarchical spiral model based on three levels of modelling for an information system (IS) and software (SW) product, and to develop the idea of the spiral model as an integrative framework that can incorporate more specific and detailed IS and SW development approaches.
Abstract: Boehm has recently suggested a spiral model for software development and enhancement, which has achieved considerable recognition. The purpose of the two-part article is to contribute to the spiral model in three respects: to develop its theoretical basis, to introduce a hierarchical spiral model based on three levels of modelling for an information system (IS) and software (SW) product, and to develop the idea of the spiral model as an integrative framework that can incorporate more specific and detailed IS and SW development approaches. Part 1 introduces the sociocybernetic metamodel for IS/SW design and the hierarchical model for an IS/SW product as a theoretical background for the hierarchical spiral model to be introduced in Part 2.

40 citations


Journal ArticleDOI
TL;DR: It is concluded that no proof was found that the models can be used for estimating projects at an early stage of system development; therefore, only limited confidence should be placed in estimates that are obtained with a model only.

Abstract: The use of a model is one way to estimate a software development project. The paper describes an experiment in which a number of automated versions of estimating models were tested. During the experiment, experienced project leaders were asked to make a number of estimates for a project that had actually been carried out. On the basis of the differences found between the estimates and reality, it is concluded that no proof was found that the models can be used for estimating projects at an early stage of system development. Therefore, only limited confidence should be placed in estimates that are obtained with a model only.
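
A minimal sketch of how such an experiment can be scored, using magnitude of relative error (MRE); all figures below are hypothetical:

```python
# Compare each estimate against the actual effort of the completed project.
ACTUAL_EFFORT = 94.0   # hypothetical person-months actually expended

estimates = {          # hypothetical model outputs and expert judgements
    "model_A": 41.0, "model_B": 180.0, "model_C": 75.0,
    "leader_1": 110.0, "leader_2": 88.0,
}

for source, est in estimates.items():
    mre = abs(est - ACTUAL_EFFORT) / ACTUAL_EFFORT
    print(f"{source:9s} estimate {est:6.1f}  MRE {mre:5.2f}")

# A common acceptance rule is MRE <= 0.25; consistently large MREs across
# models are what motivates caution about model-only estimates.
```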

36 citations


Journal ArticleDOI
Juhani Iivari
TL;DR: The hierarchical spiral model for IS and SW development is described as an integrative framework that includes the evolution dynamics of IS/SW products, defining their operational life-cycles at the three levels of modelling, the main phase dynamics of the IS/ SW design process, and the nonlinear structure of the main phases intertwining the three Levels of modelling in a specific way.
Abstract: Part 1 of the two-part paper on the hierarchical spiral model introduced its theoretical background, consisting of the sociocybernetic metamodel for information system (IS) and software (SW) design and a three-level model for the IS/SW product. Part 2 describes the hierarchical spiral model for IS and SW development in detail as an integrative framework that includes the evolution dynamics of IS/SW products, defining their operational life-cycles at the three levels of modelling, the main phase dynamics of the IS/SW design process, defining the main phases required for the generation of each new operational life-cycle, learning dynamics, suggesting that each main phase consists of successive subphases that may be repeated during IS/SW design, depending on the learning process, and the nonlinear structure of the main phases intertwining the three levels of modelling in a specific way.

34 citations


Journal ArticleDOI
TL;DR: A goal-directed approach is described as a mechanism for integrating early metrics into software development environments to improve software quality.
Abstract: Software quality is becoming a topic of increasing importance. While there is widespread acceptance of the desirability of quality, there are few proven techniques for obtaining it. Moreover, those techniques that are in existence rely heavily on exercising the finished product and the imposition of improved quality whenever defects are observed. However, software quality is primarily an early life-cycle issue — acceptance testing is too late to make a significant impact on many software quality characteristics. Metrics, particularly design metrics, provide an important means to assess quality at earlier stages of the software life-cycle. Some design metrics are presented, together with confirmatory empirical evidence of their efficacy. A goal-directed approach is described, as a mechanism for integrating early metrics into software development environments, to improve software quality.
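
For concreteness, a sketch of one well-known early design metric, the Henry-Kafura information-flow measure (the paper's own metrics may differ); the module data are hypothetical:

```python
# Information flow: length * (fan_in * fan_out)^2, computable from a
# design-level call/data-flow graph before any code exists to test.
design = {
    # module: (fan_in, fan_out, estimated length)
    "read_input": (1, 3, 50),
    "validate": (3, 2, 80),
    "compute": (2, 1, 200),
    "report": (4, 4, 120),
}

scores = {m: ln * (fi * fo) ** 2 for m, (fi, fo, ln) in design.items()}
for module, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{module:11s} info-flow = {score}")
# High scores single out modules for early review, well before
# acceptance testing can reveal their problems.
```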

32 citations


Journal ArticleDOI
F. J. Redmill, B. Kitchenham
TL;DR: Inadequate management and a lack of attention to quality, rather than software-specific technical problems, are argued to be the main causes of software projects being late, over budget, and out of specification.

Abstract: Projects in which software is developed are notorious for being late, over budget, and out of specification. Typically, project managers point to a number of technical problems, most of which are specific to software development, as reasons for this. The paper attempts to show, however, that inadequate management and a lack of attention to quality are the main causes of the trouble.

Journal ArticleDOI
TL;DR: The various definitions of prototyping are explored to determine its advantages and disadvantages, and a systematic methodology is presented for incorporating the prototyping process into an organization's existing system development process.

Abstract: Prototyping has become a popular alternative to traditional systems development methodologies. The paper explores the various definitions of prototyping to determine its advantages and disadvantages and to present a systematic methodology for incorporating the prototyping process into the existing system development process within an organization. In addition, one negative and one positive case study of prototyping within industrial settings are included.

Journal ArticleDOI
TL;DR: A claim about the nature of infeasible paths in programs is advanced, with significant statistical evidence to support its validity; the claim's characteristics permit the definition of an easily derived metric that should provide a good predictor of the likely feasibility of program paths.
Abstract: The paper advances a claim about the nature of infeasible paths in programs and provides significant statistical evidence to support its validity. The characteristics of the claim permit the definition of an easily derived metric that, in view of the results of the statistical analysis presented, should provide a good predictor of the likely feasibility of program paths.
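
A sketch of the kind of easily derived path metric the abstract motivates; the specific definition used here, a simple predicate count, is my assumption and not necessarily the paper's claim:

```python
# Score each program path by the number of branch predicates it must
# satisfy simultaneously; longer predicate chains are more likely to
# contain a contradictory combination and hence be infeasible.
paths = {
    # path id: branch predicates taken along the path
    "p1": ["x > 0"],
    "p2": ["x > 0", "y < 10"],
    "p3": ["x > 0", "x < 0", "y < 10", "z == 1", "w != 2"],
}

for pid, predicates in paths.items():
    score = len(predicates)
    verdict = "likely infeasible" if score >= 4 else "probably feasible"
    print(f"{pid}: {score} predicates -> {verdict}")
# p3 illustrates the intuition: it requires both x > 0 and x < 0,
# so no input can exercise it.
```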

Journal ArticleDOI
TL;DR: This paper focuses on the development of intelligent databases that tightly couple database and object-oriented technologies with artificial intelligence, information retrieval, and multi-media data-manipulation techniques.
Abstract: Object orientation provides a more direct and natural representation of real-world problems. Object-oriented programming techniques allow the development of extensible and reusable modules. The object-oriented concepts are abstract data typing, inheritance, and object identity. Combining object-oriented concepts with database capabilities such as persistence, transactions, concurrency, query, etc. results in powerful systems called object-oriented databases. Object-oriented databases have become the dominant post-relational database management system and are a necessary evolutionary step towards the more powerful intelligent databases. Intelligent databases tightly couple database and object-oriented technologies with artificial intelligence, information retrieval, and multi-media data-manipulation techniques.
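
A minimal Python sketch of the three object-oriented concepts the abstract names; the database capabilities (persistence, transactions, concurrency, query) are omitted:

```python
# Abstract data typing, inheritance, and object identity in miniature.
class Account:                       # abstract data type: state + operations
    def __init__(self, owner, balance=0.0):
        self._balance = balance      # internal state hidden behind methods
        self.owner = owner
    def deposit(self, amount):
        self._balance += amount
    def balance(self):
        return self._balance

class SavingsAccount(Account):       # inheritance: extends the base type
    def add_interest(self, rate):
        self.deposit(self.balance() * rate)

a = SavingsAccount("ada", 100.0)
b = SavingsAccount("ada", 100.0)
a.add_interest(0.05)
# Object identity: a and b stay distinct objects even when their
# attribute values are equal.
print(a is b, a.balance(), b.balance())   # False 105.0 100.0
```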

Journal ArticleDOI
TL;DR: A general view of quality and reliability as the difference between what is expected and what is experienced is expanded on with respect to the requirement specification stages of software development.
Abstract: A general view of quality and reliability as the difference between what is expected and what is experienced is expanded on with respect to the requirement specification stages of software development. The dual nature of software requirements specifications, as a solution to the problem of what the final software will be and do, and as a starting point for further system development, is formulated. This distinction is used to structure the types of metric that will be required. A distinction is drawn between those requirements that can be formalized, using for example a formal development method such as VDM, and those that cannot, either because they have inherently informal expectations or because they are not articulated. For the formalized requirements, metrics based on classical measurement theory are recommended. Where the requirements are informal, a comprehensive checklist procedure is suggested.

Journal ArticleDOI
G. Tate
TL;DR: The economics of prototyping are examined in some detail from two complementary points of view, risk management and productivity; it is suggested that unadjusted function points could provide some quantitative basis for budgeting for user involvement in prototyping.
Abstract: The characteristics and types of prototyping are described, together with reasons for building software prototypes. The economics of prototyping are examined in some detail from two different and complementary points of view, namely, risk management and productivity. It is suggested that unadjusted function points could provide some quantitative basis for budgeting for user involvement in prototyping. A variety of prototyping methods are described, together with some analysis of their applicability and references to a few recent examples of their practical application. The study is balanced by a brief examination of the down-side of prototyping. Life-cycle issues in prototyping are complemented by needs/capabilities analysis. Finally, some comments are made on the future of prototyping, in particular in relation to CASE (computer-aided software engineering).
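
A minimal sketch of an unadjusted function point (UFP) count using the commonly published average complexity weights; the counts and the budgeting constant are hypothetical:

```python
# Unadjusted function points: weighted sum of counted function types.
WEIGHTS = {                          # standard average-complexity weights
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

prototype_scope = {                  # hypothetical counts for a prototype
    "external_inputs": 12,
    "external_outputs": 8,
    "external_inquiries": 6,
    "internal_logical_files": 4,
    "external_interface_files": 2,
}

ufp = sum(WEIGHTS[k] * n for k, n in prototype_scope.items())
print("unadjusted function points:", ufp)

# A budget for user involvement could then be set in proportion to UFP,
# e.g. user_hours = k * ufp for some locally calibrated constant k.
K_USER_HOURS_PER_UFP = 0.5           # hypothetical calibration
print("budgeted user hours:", K_USER_HOURS_PER_UFP * ufp)
```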

Journal ArticleDOI
TL;DR: A description of the functional specifications of a program (syntax and semantics) using a formalism based on two-level grammars is proposed, and the economic advantages are evaluated in industrial experience.
Abstract: The paper deals with the automation of quality control activities in software development. It proposes the description of the functional specifications of a program (syntax and semantics) using a formalism based on two-level grammars. Some tools, which can generate automatically both tests and related expected results and can provide a measure of the ‘functional coverage’, are presented. Limitations are examined and discussed. Real applications are shown, evaluating the economic advantages in industrial experiences.
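
A toy sketch of grammar-based test generation; the paper's two-level grammar formalism also generates the expected results, which this single-level simplification (my own) does not attempt:

```python
# Derive test inputs from a small expression grammar.
import random

GRAMMAR = {
    "<expr>": [["<num>"], ["<expr>", "+", "<num>"], ["<expr>", "*", "<num>"]],
    "<num>": [["0"], ["7"], ["42"]],
}

def derive(symbol, rng, depth=0):
    if symbol not in GRAMMAR:
        return symbol                          # terminal: emit as-is
    # Force the first (terminating) production once the derivation is deep.
    options = GRAMMAR[symbol] if depth < 3 else GRAMMAR[symbol][:1]
    return "".join(derive(s, rng, depth + 1) for s in rng.choice(options))

rng = random.Random(3)
tests = {derive("<expr>", rng) for _ in range(20)}
for t in sorted(tests):
    print(t, "=", eval(t))   # the oracle here is Python's own evaluator
```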

Journal ArticleDOI
TL;DR: A feature of the project, which contributed significantly to its ultimate success, was the high level of user involvement in the development process, including project management, requirements analysis, documentation, education, liaison, and some implementation.
Abstract: The planning, development, and implementation of an administrative information system for the New Zealand Correspondence School is described. This was a brand new computer system for users with little or no previous experience of computing. The constraints on the project were such that conventional waterfall development using a third-generation language was impracticable. There were, moreover, a number of significant project risks, including requirements uncertainty, a tight time-schedule, staffing constraints, and possible problems of data quality and user acceptance. The case study describes how risk management was used to determine development strategies, including the selection of an appropriate life-cycle model, development methodology, and fourth-generation language development environment. A feature of the project, which contributed significantly to its ultimate success, was the high level of user involvement in the development process, including project management, requirements analysis, documentation, education, liaison, and some implementation.

Journal ArticleDOI
Darrel C. Ince
TL;DR: The paper outlines the nature of software quality metrics, places them in historical context, describes how they might be used, and gives an example of how metrics can be employed on a software project.
Abstract: Software quality metrics are numerical measures that are used to quantify some aspect of a software product. The paper, which forms an introduction to the remaining metrics papers in this issue, is part tutorial and part review. It outlines the nature of such metrics, places them in historical context, and describes how they might be used. The paper finishes with a description of an example of how metrics can be employed on a software project.

Journal ArticleDOI
D. New, Paul D. Amer
TL;DR: It is expected that the GROPE tool will assist the original protocol specifier in the design and debugging process, promote faster understanding of a protocol by those using it for the first time, and facilitate development of effective test scenarios.
Abstract: GROPE (Graphical Representation Of Protocols in Estelle) is a tool for graphically animating the dynamic execution of an Estelle formal specification. Developed in Smalltalk-80 and based on a Sun 3/110 workstation, GROPE is a window-based system that pictorially represents a protocol's architecture, animates the firing of transitions and the exchange of interactions between modules, graphically displays a module's extended finite state machine and the changing of states, etc. It is expected that the GROPE tool will assist the original protocol specifier in the design and debugging process, promote faster understanding of a protocol by those using it for the first time, and facilitate development of effective test scenarios.

Journal ArticleDOI
TL;DR: An integrated approach to software quality, reliability, and safety is described that is termed ‘software quality engineering’, which encompasses the three levels of quality assurance technology generally recognised in manufacturing, namely, product inspection, process control, and design improvement.
Abstract: An integrated approach to software quality, reliability, and safety is described that is termed ‘software quality engineering’. It encompasses the three levels of quality assurance technology generally recognised in manufacturing, namely, product inspection, process control, and design improvement. Most software organizations today operate at the product-inspection level of technology (or lower). Achieving higher levels of quality assurance technology depends on establishing effective measurement, control, and process improvement mechanisms within the software enterprise.

Journal ArticleDOI
TL;DR: Modelling temporal data under each approach, together with the respective data-manipulation operations, is covered, along with the advantages and expressive power of each approach, the classification of temporal databases, and the handling of retroactive and postactive changes.
Abstract: The paper discusses modelling temporal variation of data in the context of the relational data model. This is done by employing time-stamps, which leads to two possible approaches: adding time-stamps to tuples and using first normal-form (1NF) relations, and adding time-stamps to attributes and using non-first-normal-form (N1NF) relations. Modelling temporal data according to each approach with their respective data-manipulation operations are covered. This is followed by a discussion that includes the advantages and expressive power of each approach, the classification of temporal databases, and the handling of retroactive and postactive changes.
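
A minimal sketch contrasting the two time-stamping approaches; the schema and data are illustrative, not from the paper:

```python
# 1NF tuple time-stamping: each row carries its own validity interval.
emp_1nf = [
    # (name, dept, salary, valid_from, valid_to)
    ("lee", "sales", 30_000, "1985-01", "1987-06"),
    ("lee", "sales", 34_000, "1987-06", "1990-01"),
    ("lee", "r&d",   34_000, "1990-01", "now"),
]

# N1NF attribute time-stamping: one row per entity; each attribute holds
# a set of (value, from, to) triples, so the relation is not in 1NF.
emp_n1nf = {
    "lee": {
        "dept":   [("sales", "1985-01", "1990-01"), ("r&d", "1990-01", "now")],
        "salary": [(30_000, "1985-01", "1987-06"), (34_000, "1987-06", "now")],
    }
}

def value_at(history, instant):
    """Return the attribute value valid at a given instant."""
    for value, start, end in history:
        if start <= instant and (end == "now" or instant < end):
            return value

print(value_at(emp_n1nf["lee"]["dept"], "1988-03"))    # sales
print(value_at(emp_n1nf["lee"]["salary"], "1988-03"))  # 34000
```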

Journal ArticleDOI
TL;DR: The paper provides a taxonomy of methods and approaches for each of these issues, showing the extent to which various researchers have recently resolved the subissues deemed to be essential for temporal database systems to be feasible.
Abstract: A study of real-world database applications that include time-related queries, and an examination of various approaches to easing the development of such applications, leads to the identification of three major issues to be addressed in temporal database systems. These are the representation of time at a conceptual level, the design of a usable and effective means of querying temporal data, and the specification of physical access strategies and methods to ensure the efficiency of access to (bulky) temporal data. The paper provides a taxonomy of methods and approaches for each of these issues, showing the extent to which various researchers have recently resolved the subissues deemed to be essential for temporal database systems to be feasible.

Journal ArticleDOI
TL;DR: The article surveys and compares the basic approaches that address issues of imprecision and incompleteness in relational databases.
Abstract: Most database systems are designed under assumptions of precision and completeness of both the data they store and the requests to retrieve data, even though in reality these assumptions are often false. In recent years, considerable attention has been given to issues of imprecision and incompleteness in databases. The article surveys and compares the basic approaches that address these issues in relational databases.

Journal ArticleDOI
TL;DR: The paper reviews hypertext systems and discusses in some detail the fundamental features of hypertext technology, which are compared with other information management systems.
Abstract: Hypertext is data maintained as a network of interconnected discrete blocks of information. Hypertext systems are computer systems used to create and maintain hypertext databases and provide mechanisms for users to access the information. While the ideas behind hypertext are not new, the technology to make it effective is. The paper reviews hypertext systems and discusses in some detail the fundamental features of hypertext technology. Hypertext systems are compared with other information management systems, and some of the top-level design and implementation issues are introduced. The advantages and disadvantages of hypertext are discussed from the applications perspective and some issues relating to current research and future developments of the technology are presented.
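
A minimal sketch of the core hypertext data model the abstract describes: discrete blocks of information connected into a navigable network, with a traversal over the links:

```python
# Nodes hold content; directed links form the network a reader browses.
from collections import deque

nodes = {
    "intro": "Hypertext is non-linear text...",
    "links": "A link connects an anchor in one node to another node...",
    "history": "Memex and NLS prefigured hypertext...",
}
links = {                          # node -> nodes it references
    "intro": ["links", "history"],
    "links": ["intro"],
    "history": [],
}

def reachable(start):
    """Breadth-first traversal: which nodes can a reader browse to?"""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in links[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(reachable("intro"))    # {'intro', 'links', 'history'}
```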

Journal ArticleDOI
TL;DR: A model to determine the optimal time for releasing new versions of a given package was built and validated using data received from several software houses (on different packages), demonstrating the range of conceivable results of the model.

Abstract: Software quality management is the whole process of managing for quality software production. It involves (among other functions) the continual process of updating and enhancing a given software product by releasing new versions. These releases provide the customer with improved and error-free versions. The process continues throughout the software life-cycle. The paper builds a model to determine the optimal time for releasing new versions of a given package. Four cost factors were formulated to provide the total cost of the next release as a function of the release time. The model was validated using data received from several software houses (on different packages). For each package, the optimal time for the next release (given the history data of the current release) was obtained, demonstrating the range of conceivable results of the model.
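
A sketch of the model's general shape; the four cost functions and all constants below are illustrative assumptions, not the paper's calibrated factors:

```python
# Total cost of the next release as a function of release time t,
# minimized by simple search over candidate months.
import math

def total_cost(t_months):
    fixing = 400 * math.exp(-0.15 * t_months)  # residual-fault fixing cost
    support = 12 * t_months                    # supporting the old release
    lost_value = 5 * t_months                  # cost of withholding improvements
    release = 150                              # fixed cost of shipping a release
    return fixing + support + lost_value + release

best = min(range(1, 37), key=total_cost)       # search 1..36 months
print(f"optimal release at t = {best} months, cost = {total_cost(best):.0f}")
# The decreasing fixing cost and the increasing waiting costs trade off
# to give an interior optimum (here around month 8).
```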

Journal ArticleDOI
TL;DR: In this article, the authors present a number of criteria that can be used by the MIS manager for evaluation and selection of a CASE system for organizational use, based on a set of criteria.
Abstract: A large number of computer-aided software engineering (CASE) tools are now commercially available. These tools claim a varying range of facilities. It is a management information system (MIS) manager's responsibility to sort through these claimed facilities and choose a tool that can adequately meet system modelling needs. In the absence of any standards, the analysis preceding this choice is not easy. The paper analyses CASE facilities in detail and generates a number of criteria that can be used by the MIS manager for evaluation and selection of a CASE system for organizational use.
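
A minimal sketch of criteria-based selection; the criteria, weights, and tool names below are illustrative, not the paper's list:

```python
# Weighted scoring of candidate CASE tools against evaluation criteria.
criteria = {              # criterion: weight (importance to the organization)
    "diagram_support": 5,
    "consistency_checking": 4,
    "data_dictionary": 4,
    "code_generation": 2,
    "multi_user_support": 3,
}

tools = {                 # each candidate's score per criterion, 0..10
    "CASE-X": {"diagram_support": 9, "consistency_checking": 6,
               "data_dictionary": 7, "code_generation": 3,
               "multi_user_support": 4},
    "CASE-Y": {"diagram_support": 7, "consistency_checking": 8,
               "data_dictionary": 8, "code_generation": 6,
               "multi_user_support": 7},
}

for tool, scores in tools.items():
    total = sum(criteria[c] * scores[c] for c in criteria)
    print(f"{tool}: weighted score {total}")
```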

Journal ArticleDOI
A. Schill
TL;DR: The paper surveys the distributed application support area, focusing especially on distributed programming and configuration techniques, the use of object-oriented techniques, and development support; several existing approaches are classified and compared.
Abstract: Great advances have been achieved in distributed systems during the last decade. In particular, a rapid technological deployment has taken place in communication networks and protocols. On top of basic communication protocols, several higher-level communication facilities have been developed to support distributed applications. From the application point of view, there is a growing demand for distributed programming. Important example areas are computer-integrated manufacturing and office automation. The paper surveys the distributed application support area, especially focusing on distributed programming and configuration techniques, use of object-oriented techniques, and development support. Several existing approaches are classified and compared with each other. Important communication approaches covered are extended message passing and remote procedure call facilities, as well as new distributed object-oriented interaction mechanisms and multiparty communication support. Important distributed configuration management issues include placement and structure support for distributed applications, dynamic configuration change support, and associated configuration languages. Details are derived from foreign system developments as well as from personal experiences with distributed systems. As an integration effort, the architecture of an object-oriented environment for distributed application development support is outlined. The paper is for researchers interested in further work in the area of distributed systems and software developers interested in a survey of existing mechanisms to support their development work.

Journal ArticleDOI
TL;DR: An extended attribute-grammar (AG) interpreter is presented, which exhibits both inexact reasoning and full theorem-proving capabilities; the proposed inexact reasoning method is wide enough to easily incorporate many inexact reasoning schemes.
Abstract: An extended attribute-grammar (AG) interpreter is presented, which exhibits both inexact reasoning and full theorem-proving capabilities. Software engineering applications may thus gain the extra benefit of an inexact processing mechanism. The proposed tool allows the properties of AGs, full theorem-proving, and inexact representation and processing to be combined. Procedural and declarative characteristics may be expressed in the AG language, with additional calls to user-defined semantic functions written in the host language. The inexact reasoning method proposed is wide enough to easily incorporate many inexact reasoning schemes.

Journal ArticleDOI
TL;DR: The PC-based ADISSA-supporting tools provide an automated environment that enables the analyst/designer to draw hierarchical dataflow diagrams and check their correctness, to design the transactions of the systems, the interface — a menu-tree — and the database schema, and to maintain an integrated data dictionary.
Abstract: ADISSA methodology for information systems analysis and design covers in a unified way the stages of functional analysis, transactions (process) design, interface design, database schema design, input-output design, and structured prototyping. The PC-based ADISSA-supporting tools provide an automated environment that enables the analyst/designer (who works according to the methodology) to draw hierarchical dataflow diagrams and check their correctness, to design the transactions of the systems, the interface — a menu-tree — and the database schema, and to maintain an integrated data dictionary.