
Showing papers in "Information & Software Technology in 1992"


Journal ArticleDOI
TL;DR: An overview of the state of the art of software cost estimation (SCE), addressing what software project management can expect from SCE models, how accurate estimates made with such models are, and the pros and cons of cost estimation models.
Abstract: The paper gives an overview of the state of the art of software cost estimation (SCE). The main questions to be answered in the paper are: (1) What are the reasons for overruns of budgets and planned durations? (2) What are the prerequisites for estimating? (3) How can software development effort be estimated? (4) What can software project management expect from SCE models, how accurate are estimations made using these kinds of models, and what are the pros and cons of cost estimation models?

374 citations


Journal ArticleDOI
TL;DR: The results indicate that the assumption that there is a nonlinear relationship between size and effort is not supported, but the assumption of a nonlinear relationship between effort and duration is.
Abstract: The paper reviews some of the assumptions built into conventional cost models and identifies whether or not there is empirical evidence to support these assumptions. The results indicate that the assumption that there is a nonlinear relationship between size and effort is not supported, but the assumption of a nonlinear relationship between effort and duration is. Second, the assumption that a large number of subjective productivity adjustment factors is necessary is not supported. In addition, it also appears that a large number of size adjustment factors are unnecessary. Third, the assumption that staff experience and/or staff capability are the most significant cost drivers (after allowing for the effect of size) is not supported by the data available to the MERMAID project, but neither can it be confirmed from analysis of the COCOMO data set. Finally, the assumption that compression of schedule decreases productivity was not supported. In fact, none of the models of schedule compression currently included in existing cost models was supported by the data.

132 citations
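
The size-effort assumption under test can be checked on any local data set by fitting effort = a × size^b on a log-log scale and inspecting the exponent. The sketch below is a generic illustration with invented project figures; it is not the MERMAID or COCOMO analysis itself.

```python
# Illustrative check of the size-effort assumption: fit effort = a * size^b
# by ordinary least squares on log-transformed data and inspect the exponent b.
# The project data below are invented.
import math

# (size in KLOC, effort in person-months) -- hypothetical projects
projects = [(10, 24), (25, 55), (50, 110), (80, 190), (120, 260), (200, 450)]

xs = [math.log(size) for size, _ in projects]
ys = [math.log(effort) for _, effort in projects]
n = len(projects)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# slope of the log-log regression line is the exponent b
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
sxx = sum((x - mean_x) ** 2 for x in xs)
b = sxy / sxx
a = math.exp(mean_y - b * mean_x)

print(f"effort ~= {a:.2f} * size^{b:.2f}")
# b close to 1.0 is consistent with the paper's finding that a nonlinear
# size-effort relationship is not supported; b noticeably above 1.0 would
# support the diseconomy-of-scale assumption built into many cost models.
```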


Journal ArticleDOI
TL;DR: A concurrent path model is presented to model the execution behaviour of concurrent programs, and the potential reliability of path analysis testing for concurrent programs is assessed.
Abstract: Path analysis testing is a widely used approach to program testing. However, the conventional path analysis testing method is designed specifically for sequential program testing; it is inapplicable to concurrent program testing because of the existence of multi-loci of control and task synchronizations. A path analysis approach to concurrent program testing is proposed. A concurrent path model is presented to model the execution behaviour of concurrent programs. In the model, an execution of a concurrent program is seen as involving a concurrent path (which is comprised of the paths of all concurrent tasks), and the tasks' synchronizations are modelled as a concurrent route to traverse the concurrent path involved in the execution. Accordingly, testing is a process to examine the correctness of each concurrent route along all concurrent paths of concurrent programs. Examples are given to demonstrate the effectiveness of path analysis testing for concurrent programs and some practical issues of path analysis testing, namely, test path selection, test generation, and test execution, are discussed. Moreover, the errors of concurrent programs are classified into three classes: domain errors, computation errors, and missing path errors, similar to the error classification for sequential programs. Based on the error classification, the potential reliability of path analysis testing for concurrent programs is assessed.

55 citations
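
To make the concurrent path and concurrent route vocabulary concrete, the toy sketch below enumerates concurrent paths as one path per task and routes as interleavings of their steps. The tasks, paths, and step names are invented, and real synchronization constraints are ignored, so this is only a minimal illustration of the model's terminology, not the paper's method.

```python
# Toy illustration: a "concurrent path" pairs one path from each task, and a
# "route" is one possible interleaving of the steps on that concurrent path.
from itertools import product

task_paths = {
    "T1": [["a1", "a2"], ["a1", "a3"]],   # two alternative paths in task T1
    "T2": [["b1", "b2"]],                 # single path in task T2
}

def interleavings(p, q):
    """All order-preserving interleavings of two step sequences."""
    if not p:
        return [list(q)]
    if not q:
        return [list(p)]
    return [[p[0]] + rest for rest in interleavings(p[1:], q)] + \
           [[q[0]] + rest for rest in interleavings(p, q[1:])]

# every concurrent path is one choice of path per task
for t1_path, t2_path in product(task_paths["T1"], task_paths["T2"]):
    print("concurrent path:", (t1_path, t2_path))
    for route in interleavings(t1_path, t2_path):
        print("   route:", route)
```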


Journal ArticleDOI
TL;DR: The formalization of techniques in the context of information system development methodologies is discussed, and the Predicator Model is presented as an extended example.
Abstract: The formalization of techniques in the context of information system development methodologies is discussed. When such methodologies are developed, the primary goal is applicability. After the methodology has proved itself in practice, it will be applied in more sophisticated situations, pushing it to its limits. In those cases, informal definitions are known to be inappropriate. Some typical problems are considered. After that, a procedure for proper formalization is described. The Predicator Model is presented as an extended example. Finally, some experiences with this approach to formalization are discussed.

50 citations


Journal ArticleDOI
TL;DR: An environment is presented that integrates the tasks of translating a source program to machine instructions for a proposed architecture, imitating the execution of these instructions, and collecting measurements; the environment facilitates experimentation with a proposed architecture and a compiler.
Abstract: Often a computer architecture is designed and implemented without determining whether its associated compilers will actually use all of the architecture's features. A more effective machine can result when the interactions between an architecture and a compiler are addressed. This paper presents an environment that integrates the tasks of translating a source program to machine instructions for a proposed architecture, imitating the execution of these instructions and collecting measurements. The environment, which is easily retargeted and quickly collects detailed measurements, facilitates experimentation with a proposed architecture and a compiler.

37 citations


Journal ArticleDOI
TL;DR: Current European and other research projects in reuse and reverse engineering are reviewed from the viewpoint of domain knowledge and how it is used to understand the application domain.
Abstract: While much progress has been made in software reverse engineering and reuse, significant problems remain. Reverse engineering methods predominantly address the code level, and for full effect the purpose for which the software was built, the application domain, should be understood. Reuse methods focus on library organization and on standards for component production, with much interest in object-oriented methods, but similarly to reuse software effectively it is necessary to understand the application domain, so that it is possible to choose the appropriate parts, organize these effectively in libraries, and deploy the library components in the solution to new problems. Research is now moving towards domain analysis, and current European and other research projects in reuse and reverse engineering are reviewed from the viewpoint of domain knowledge and how it is used. Other management, social, and economic issues also remain to be solved.

29 citations


Journal ArticleDOI
TL;DR: The paper applies principles of scientific measurement to the basic entities failure, fault, and change, and defines a set of attributes measured on orthogonal scales, so that measurements of different attributes may be made independently.
Abstract: Measurement is essential to any programme of quality improvement. The manufacturer of a system must be able to measure its dependability. This comprises attributes such as reliability, safety and security, which are external attributes of the system and are measured indirectly by analysis of direct measurements extracted from raw data. An essential part of the collection of raw data is the measurement of failures during system trial and operation. Some will be due to the physical failure of hardware components, others to design faults in hardware, software, or the user interface. Where a design fault is diagnosed, a corrective change is usually made to remove it, and improve dependability. Measurement of dependability requires that the basic entities failure, fault, and change, are measured in a rigorous, consistent, yet practical way. The paper applies principles of scientific measurement to these entities, and defines a set of attributes measured on orthogonal scales, so that measurements of different attributes may be made independently. The proposed scheme is completely general, but can be adapted to any real measurement programme.

24 citations


Journal ArticleDOI
TL;DR: The thesis of the paper is that measurement activities should focus on management purposes and should take into account the diverse concerns of managers at different levels, and identify productivity measures appropriate for such situations.
Abstract: The measurement of productivity is seen variously as the key to successful software project estimation, to improvement of the efficiency and effectiveness of information system development and maintenance, and to demonstration of the performance of the information technology (IT) function within the business. The thesis of the paper is that measurement activities should focus on management purposes and should take into account the diverse concerns of managers at different levels. Even the term ‘productivity’ has varying interpretations, depending on management level, so that single measures used in isolation are inappropriate and can be counterproductive, especially in the realm of software development. At the project management level, productivity concerns are often centred on efficiency. For example, when estimating the effort needed to produce a system, assumptions must be made about the level of efficiency that will be achieved—ideally based on data from past projects. The paper identifies productivity measures appropriate for such situations, and the impact of each on the project management process. A higher-level view of software productivity is concerned with effectiveness: how much useful function is delivered to users by the applications delivery function? Here the focus is on the utility of delivered products, rather than on the amount of software which needs to be written to deliver the products. Management is concerned to establish an environment that maximizes output in terms of usable function. Hence the use of function points as a size measure derived from a user view of the system, and the use of function points per man-month as a productivity measure. However, to provide a more complete view of the effectiveness of the information system function, development effort alone is an insufficient representation of the input to the system: due account also needs to be taken of the costs of using the system, which could include equipment costs as well as user effort. Ultimately, businesses are concerned with the bottom-line contribution of IT. While lower-level measures of efficiency and effectiveness are important in achieving objectives at project and functional levels, higher-level views of the value of IT to the business, compared with the associated costs, are of concern to senior management. However, traditional return on investment and cost-benefit analyses have been found to be inadequate when applied to IT and difficult to relate to the lower-level measures of efficiency and effectiveness. Whatever the management level of concern, measuring productivity is, in itself, insufficient: the set of measurements used must address key performance indicators—anything else is waste, or pure history.

22 citations
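
The efficiency versus effectiveness distinction can be illustrated with a small worked calculation of function points per person-month, first against development effort alone and then with usage costs folded in. All figures below are invented for illustration.

```python
# Worked arithmetic for the productivity views discussed above:
# function points delivered per person-month, with and without usage costs.
delivered_function_points = 320
development_effort_pm = 40        # person-months of development effort
usage_cost_pm_equivalent = 12     # user effort + equipment, as person-month equivalents

project_productivity = delivered_function_points / development_effort_pm
fuller_view = delivered_function_points / (development_effort_pm + usage_cost_pm_equivalent)

print(f"FP per person-month (development only): {project_productivity:.1f}")
print(f"FP per person-month (development + usage costs): {fuller_view:.1f}")
```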


Journal ArticleDOI
TL;DR: The paper describes a method to translate from a nonrelational to a relational schema and uses reverse engineering to extract entities and relationships into an extended entity-relationship model from the semantics of a hierarchical or network schema.
Abstract: Large organizations have many databases and database management systems (DBMSs). Within these organizations, there is an increasing need for data to be converted from one DBMS to another and for the data from different DBMSs to be integrated. The paper describes a method to translate from a nonrelational to a relational schema. This translation is becoming increasingly popular because relational database technology is proving more user-friendly and adaptable. The methodology uses reverse engineering to extract entities and relationships into an extended entity-relationship model from the semantics of a hierarchical or network schema. The translation requires the help of a database administrator and experienced users. The logical equivalence of the translated relational schema with the hierarchical or network schema is validated by verifying the preservation of the functional and inclusion dependencies in the schemas. A reverse translation to recover the original hierarchical or network schema is also used to validate the translation. Examples are used to illustrate the procedures.

18 citations
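
One elementary step of such a translation, turning hierarchical parent-child record types into relational tables in which each child carries its parent's key as a foreign key, can be sketched as follows. The segment and field names are invented, and the sketch omits the dependency validation and DBA input the method relies on.

```python
# Minimal sketch of hierarchical-to-relational translation: each child record
# type becomes a table that inherits its parent's key as a foreign key.
hierarchy = {
    "DEPARTMENT": {"key": "dept_no", "fields": ["dept_no", "dept_name"], "parent": None},
    "EMPLOYEE":   {"key": "emp_no",  "fields": ["emp_no", "emp_name"],  "parent": "DEPARTMENT"},
    "DEPENDANT":  {"key": "dep_id",  "fields": ["dep_id", "dep_name"],  "parent": "EMPLOYEE"},
}

def to_relational(schema):
    tables = {}
    for name, segment in schema.items():
        columns = list(segment["fields"])
        foreign_keys = []
        if segment["parent"]:
            parent_key = schema[segment["parent"]]["key"]
            columns.append(parent_key)                 # inherit the parent's key
            foreign_keys.append((parent_key, segment["parent"]))
        tables[name] = {"columns": columns,
                        "primary_key": segment["key"],
                        "foreign_keys": foreign_keys}
    return tables

for table, spec in to_relational(hierarchy).items():
    print(table, spec)
```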


Journal ArticleDOI
TL;DR: The approach adopted for software project estimation within the Telecommunications Systems Group (TSG) of GPT, UK, is described; the material associated with this estimation strategy forms part of a general software metrics initiative known as the PRISM programme.
Abstract: The paper describes the approach adopted for software project estimation within the Telecommunications Systems Group, TSG, of GPT, UK. The material associated with this estimation strategy forms part of a general software metrics initiative known as the PRISM programme. PRISM stands for ‘PRocess Improvement Support via Measurement’ and is part of TSG's continual process improvement strategy. GPT is an international organization of some 24 000 employees worldwide, manufacturing and marketing major telecommunications products that include the System X and DCO exchanges. This involves a significant investment in the production of real-time software. TSG accounts for approximately 50% of the engineering effort within GPT.

17 citations


Journal ArticleDOI
TL;DR: The authors motivate their terms of comparison, characterize three broad approaches to deductive object-oriented databases and introduce the notion of language convergence to help in the characterization of some shortcomings that have been perceived in them.
Abstract: The paper is concerned with the problem of combining deductive and object-oriented features to produce a deductive object-oriented database system which is comparable to those currently available under the relational view of data modelling not only in its functionality but also in the techniques employed in its construction and use. Under this assumption, the kinds of issues that have to be tackled for a similar research strategy to produce comparable results are highlighted. The authors motivate their terms of comparison, characterize three broad approaches to deductive object-oriented databases and introduce the notion of language convergence to help in the characterization of some shortcomings that have been perceived in them. Three proposals that have come to light in the past three years are looked into in some detail, in so far as they exemplify some of the positions in the space of choices defined. The main contribution of the paper is towards a characterization of the language convergence property of deductive database languages which has a key role in addressing critiques of the deductive and object-oriented database research enterprise. A basic familiarity with notions from deductive databases and from object-oriented databases is assumed.

Journal ArticleDOI
TL;DR: An empirical study of software maintenance in a system software department in 1989 and 1990 showed no relation between the phase of error occurrence and the solution time; an explanation is the gap between the methods as they are supposed to be applied and reality.
Abstract: The paper describes an empirical study of software maintenance that was carried out in a system software department in 1989 and 1990. The study focused on error occurrence and fault detection. Over 400 problem reports were studied. The study showed some unexpected results. It showed, for example, no relation between the phase of error occurrence and the solution time. An explanation is the gap between the methods as they are supposed to be applied and reality. Assessment of the size of the gap is one of the contributions of this kind of empirical study.

Journal ArticleDOI
TL;DR: The Mark II Function Point method was used to predict the cost of a number of projects at the Inland Revenue's Information Technology; the results give some cause for optimism in the use of the function point model that was used.
Abstract: The paper describes the results of an experiment in software costing. The Mark II Function Point method was used to predict the cost of a number of projects at the Inland Revenue's Information Technology. The results were compared with individual managers' estimates and the actual expenditure. The results give some cause for optimism in the use of the function point model that was used.
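
For readers unfamiliar with the sizing step, a hedged sketch of a Mark II Function Point calculation is shown below. The weights are the commonly quoted industry-average values; the logical transactions and the productivity rate are invented, and the paper's own data and calibration are not reproduced.

```python
# Hedged sketch of a Mark II Function Point size and cost prediction.
# Weights are the commonly quoted industry-average values (0.58 input,
# 1.66 entity reference, 0.26 output); the transactions are invented.
W_INPUT, W_ENTITY, W_OUTPUT = 0.58, 1.66, 0.26

# each logical transaction: (name, input data element types, entity references, output data element types)
transactions = [
    ("register taxpayer", 12, 3, 5),
    ("amend assessment",   8, 4, 6),
    ("print statement",    2, 5, 20),
]

size = sum(W_INPUT * ni + W_ENTITY * ne + W_OUTPUT * no
           for _, ni, ne, no in transactions)
print(f"Mark II function point size: {size:.1f}")

# a crude cost prediction then applies a productivity rate taken from past projects
fp_per_person_month = 9.0          # assumed historical rate, purely illustrative
print(f"Predicted effort: {size / fp_per_person_month:.1f} person-months")
```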

Journal ArticleDOI
TL;DR: An overview of a model for the representation of both raw data and macro data is given, and a set of transformations on macro operations (similar to those in relational algebra), which can be used for optimizing queries in DS-DBMSs, is introduced.
Abstract: Recently, there has been a growing interest in statistical database (SDB) research. When SDBs are dispersed among computing facilities at various sites (e.g., in health-care networks) an additional dimension is added to the already difficult problems faced by the SDB designer. A distributed statistical database management system (DS-DBMS) consists of micro data (i.e., raw data) and macro data (i.e., aggregated objects called summary tables), which can be considered essentially as aggregated views of the raw data in a special format. The first part of the paper gives an overview of a model for the representation of both raw data (micro data) and summary tables (macro data). The model is an extension of the relational model (so that existing distributed database systems can be exploited). Most of the first part is devoted to defining operations on macro data sets. Based on these operations, a set of equivalent relational operations is described, as one of the main objectives in defining the micro and macro data sets, and the operations on them, has been to use as much as possible the capabilities that are already offered by most relational DBMSs. The second part of the paper deals with one of the important aspects of performance in a DS-DBMS, namely, the efficient processing of queries. This is heavily influenced by the performance of query optimizers. However, to provide query optimization in a DS-DBMS, special issues are raised that manifest themselves in different scenarios. Some of the important issues and problems raised are discussed and solutions proposed. In addition, a set of transformations on macro operations (similar to those in relational algebra) are introduced, which can be used for optimizing queries in DS-DBMSs.
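
The micro/macro distinction can be illustrated with a small sketch: raw records are aggregated into a summary table, and a roll-up (one example of a macro operation) re-aggregates that table over fewer category attributes. The attributes and counts below are invented, and the sketch ignores distribution entirely.

```python
# Micro data are raw records; a summary table (macro data) is an aggregation
# over category attributes; a roll-up macro operation drops a category
# attribute by re-aggregating.
from collections import defaultdict

# micro data: (region, year, diagnosis), one record per patient
micro = [
    ("north", 1991, "flu"), ("north", 1991, "flu"), ("north", 1992, "asthma"),
    ("south", 1991, "flu"), ("south", 1992, "flu"), ("south", 1992, "asthma"),
]

def summarize(records, category_indices):
    """Aggregate micro records into a summary table keyed by the chosen categories."""
    table = defaultdict(int)
    for rec in records:
        table[tuple(rec[i] for i in category_indices)] += 1
    return dict(table)

# macro data: patient counts by (region, year)
by_region_year = summarize(micro, (0, 1))

def roll_up(summary, keep_positions):
    """Macro operation: drop category attributes by summing over them."""
    rolled = defaultdict(int)
    for key, cnt in summary.items():
        rolled[tuple(key[i] for i in keep_positions)] += cnt
    return dict(rolled)

print("by region and year:", by_region_year)
print("rolled up to region:", roll_up(by_region_year, (0,)))
```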

Journal ArticleDOI
TL;DR: It is demonstrated, by means of a simple design metric example, that the application of concepts from the area of software process modelling to product metrics can help overcome many of these deficiencies and result in quantitative process models that have potential for the design and construction of software tools and environments.
Abstract: The paper reviews developments in the arena of software engineering product metrics, with special reference to system architecture metrics. Some of the weaknesses of current approaches are examined, in particular the very weak notion of process embodied by a product metric. It is argued that the consequence of this oversight is uncertainty in the application and interpretation of metrics. This in turn has led to a slow uptake of product metrics by the software industry. The paper then demonstrates, by means of a simple design metric example, that the application of concepts from the area of software process modelling to product metrics can help overcome many of these deficiencies; it also results in quantitative process models that have potential for the design and construction of software tools and environments.

Journal ArticleDOI
TL;DR: The paper portrays how conceptual modelling as applied to database systems can move relatively painlessly into the domain of object orientation, positioning the techniques firmly in the history of semantic data modelling.
Abstract: The main aim of the paper is to portray how conceptual modelling as applied to database systems can move relatively painlessly into the domain of object orientation. It discusses how object-oriented analysis, an approach primarily directed at the building of applications in procedural or object-oriented languages, is equally relevant to the development of database systems. The intention is to position the techniques firmly in the history of semantic data modelling.

Journal ArticleDOI
TL;DR: Results of experiments in knowledge engineering with scientific texts by the application of the ARISTA method demonstrate the feasibility of deductive question-answering and explanation generation directly from texts involving mainly causal reasoning.
Abstract: The paper presents results of experiments in knowledge engineering with scientific texts by the application of the ARISTA method. ARISTA stands for Automatic Representation Independent Syllogistic Text Analysis. This method uses natural language text as a knowledge base in contrast with the methods followed by the prevailing approach, which rely on the translation of texts into some knowledge representation formalism. The experiments demonstrate the feasibility of deductive question-answering and explanation generation directly from texts involving mainly causal reasoning. Illustrative examples of the operation of a prototype based on the ARISTA method and implemented in Prolog are presented.

Journal ArticleDOI
TL;DR: Critical issues of graphical user interface development for object-oriented database systems are discussed; new database technology based on powerful data models poses new challenges and new opportunities in user interface design.
Abstract: Graphical user interfaces have become very popular for database systems because they increase the usability of these applications. The functionality and ease-of-use of the graphical user interface, however, depend on the expressiveness and complexity of the underlying data model. With the advent of new database technology based on powerful data models like the object-oriented data model, new challenges and new opportunities are posed in user interface design. In the paper critical issues of graphical user interface development for object-oriented database systems are discussed.

Journal ArticleDOI
TL;DR: Four design methods that are of current interest in real-time software development are compared and the relative strengths and weaknesses of each method are presented.
Abstract: Four design methods that are of current interest in real-time software development are compared. The comparison presents the relative strengths and weaknesses of each method, with additional information on graphic notation and the recommended sequence of steps involved in the use of each method. The methods selected for comparison are Structured Design for Real-Time Systems, object-oriented design, PAMELA (Process Abstraction Method for Embedded Large Applications), and SCR (Software Cost Reduction project from the Naval Research Laboratory).

Journal ArticleDOI
TL;DR: The usefulness of the evaluation, the inadequacies of system development practice it implies, and how to incorporate HF evaluation into an improved system development practice are considered.
Abstract: A human factors (HF) evaluation, carried out as part of the development of a set of computer-aided software engineering (CASE) tools, is presented and is used as an example of the processes and products of typical HF evaluation practice. The role of HF evaluation as a part of software quality assurance is identified, and typical current practice of HF evaluation is characterized. The details of the particular evaluation are then reported. First, its processes are described; these are determined by relating features of the system under development to the desired focus, actual context, and possible methods of the evaluation. Then the products of the evaluation are described; these products or outcomes are formulated as the user-computer interaction difficulties that were identified, grouped into three types (termed task, presentation, and device difficulties). The characteristics of each type of difficulty are discussed, in terms of their ease of identification, their generality across application domains, the HF knowledge that they draw on, and their relationship to redesign. The conclusion considers the usefulness of the evaluation, the inadequacies of system development practice it implies, and how to incorporate HF evaluation into an improved system development practice.

Journal ArticleDOI
TL;DR: The paper describes the use of the formal language Z to specify the InterSect system, a prototype hypertext system designed to meet the requirements of complex documentation environments that differs from conventional hypertext systems in that its nodes can behave like records in a database, as well as participating in normal hypertext links.
Abstract: InterSect is a prototype hypertext system designed to meet the requirements of complex documentation environments. It differs from conventional hypertext systems in that its nodes can behave like records in a database, as well as participating in normal hypertext links. This helps to overcome some of the problems, such as getting lost in hyperspace, exhibited by first-generation hypertext systems. The object-oriented database DAMOKLES is used in the prototype. The paper describes the use of the formal language Z to specify the InterSect system.

Journal ArticleDOI
TL;DR: The role of TARDIS during specification and high-level design of a software system with stringent real-time goals is illustrated.
Abstract: TARDIS is a generic framework that prescribes mechanisms needed during software development to cater for 'nonfunctional' system requirements. These requirements have been treated in an ad hoc manner in the past, but the increasing importance of safety-critical embedded systems demands more rigorous approaches. The TARDIS framework mandates the inclusion of nonfunctional goals during initial requirements specification and subsequent adherence to these requirements through all phases of software development. The role of TARDIS during specification and high-level design of a software system with stringent real-time goals is illustrated.

Journal ArticleDOI
TL;DR: A new approach, based on a multilanguage framework and leading to easier and more effective automation of protocol design, is presented.
Abstract: The paper deals with the problem of communication software design carried out by a synthesis approach. After a discussion of design methodologies, the major approaches to automated protocol design are surveyed, focusing on their features and on the adequacy of the adopted formal description techniques. A new approach, based on a multilanguage framework and leading to easier and more effective automation of protocol design, is presented.

Journal ArticleDOI
TL;DR: The paper reports the results of an experiment conducted to explore the significance of team coordination in the process of software development.
Abstract: The problem of how to coordinate teams or organizations involved in the development of software has acquired ever increasing attention. The paper reports the results of an experiment conducted to explore the significance of team coordination in the process of software development. Three small teams were organized to develop software given the same goal, and the effects of the coordination within each team were observed and compared.

Journal ArticleDOI
TL;DR: It is illustrated how heuristic solutions can be embodied in a model-based DSS and how the standard decision support literature, although intuitively appealing, provides little practical assistance in system construction or classification.
Abstract: The paper presents a case study of the development of an expert decision support system which uses simple heuristic methods for fast determination of routes for simultaneous signals in a transmission network of limited capacity. It illustrates how heuristic solutions can be embodied in a model-based DSS and how the standard decision support literature, although intuitively appealing, provides little practical assistance in system construction or classification.
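
As an illustration of the kind of heuristic such a DSS might embody (not the system's actual algorithm), the sketch below routes signals one at a time along a shortest path restricted to links with spare capacity, reserving capacity as it goes. The network, capacities, and demands are invented.

```python
# Greedy illustrative heuristic: route each signal along a shortest path that
# uses only links with spare capacity, then reserve that capacity.
import heapq
from itertools import count

links = {                                   # undirected link -> remaining capacity (signals)
    frozenset(("A", "B")): 2,
    frozenset(("B", "C")): 1,
    frozenset(("A", "D")): 1,
    frozenset(("D", "C")): 2,
}
tie = count()                               # tie-breaker so the heap never compares paths

def shortest_available_path(src, dst):
    """Dijkstra over links that still have spare capacity (unit link costs)."""
    frontier, seen = [(0, next(tie), src, [])], set()
    while frontier:
        cost, _, node, path = heapq.heappop(frontier)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for link, cap in links.items():
            if node in link and cap > 0:
                nxt = next(iter(link - {node}))
                heapq.heappush(frontier, (cost + 1, next(tie), nxt, path + [link]))
    return None

for signal in ("s1", "s2", "s3"):           # three simultaneous signals A -> C
    path = shortest_available_path("A", "C")
    if path is None:
        print(f"{signal}: no route with spare capacity")
        continue
    for link in path:
        links[link] -= 1                    # reserve capacity on each link used
    print(f"{signal}: routed via {[tuple(sorted(link)) for link in path]}")
```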

Journal ArticleDOI
TL;DR: This paper attempts to place testing within the context of quality assurance and also tries to predict trends in testing for the next decade.
Abstract: Software testing has, for many years, been regarded as something of a Cinderella subject. The paper attempts to place testing within the context of quality assurance and also tries to predict trends in testing for the next decade. A major component in the growth of testing technology should be the increased use of testing tools.

Journal ArticleDOI
TL;DR: Progress in theoretical, experimental and observational approaches is described together with factors driving the evolution of software measurements, which include development of standards, large-scale research projects and success stories from industry.
Abstract: The development in the field of software measurements is outlined to identify the state of the art available to practitioners. Progress in theoretical, experimental and observational approaches is described together with factors driving the evolution. These factors include development of standards, large-scale research projects and success stories from industry. Recent examples are described of how software measures have been applied to support the management of the software development process. The examples are concerned with the establishment of company norms for, and use of anomalies among, measurement values, and the successful implementation of a large-scale measurement programme.
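
The company-norm idea can be illustrated in a few lines: derive a norm from past measurement values and flag new values that deviate strongly from it. The measure, figures, and threshold below are invented and are not taken from the programmes the paper reports.

```python
# Establish a norm (mean and standard deviation) for a measure from past
# projects, then flag new values that deviate strongly from it.
from statistics import mean, stdev

# defect density (defects per KLOC) of completed projects -- invented
history = {"P1": 2.1, "P2": 1.8, "P3": 2.4, "P4": 2.0, "P5": 1.9}

norm, spread = mean(history.values()), stdev(history.values())
print(f"company norm: {norm:.2f} ± {spread:.2f} defects/KLOC")

new_projects = {"P6": 2.2, "P7": 4.0}
for name, value in new_projects.items():
    z = (value - norm) / spread
    status = "anomaly - investigate" if abs(z) > 2 else "within norm"
    print(f"{name}: {value} defects/KLOC (z = {z:+.1f}) -> {status}")
```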

Journal ArticleDOI
TL;DR: An approach to heterogeneous distributed database management system (DBMS) design is described, and a prototype implementation based on this approach called the multiple database access system (MDAS) is presented.
Abstract: An approach to heterogeneous distributed database management system (DBMS) design is described, and a prototype implementation based on this approach, called the multiple database access system (MDAS), is presented. This system acts as a front end to multiple local DBMSs, which continue to perform all data processing. The current MDAS implementation involves two different commercial microcomputer-based local DBMSs residing on separate machines, which are joined by a serial connection. The MDAS services queries on an integrated view of semantically related databases exhibiting a range of schema and data conflicts.

Journal ArticleDOI
TL;DR: This paper presents a novel approach to automated test data generation that is based on actual execution of the program under test, a run-time scheduler, function-minimization methods, and dynamic dataflow analysis.
Abstract: Test data generation in program testing is the process of identifying a set of test data that satisfies given testing criteria. One of the major problems in dynamic testing of distributed software is reproducible program execution. Repeated executions of a distributed program with the same test data may result in execution of different program paths. Unlike for sequential programs, a test case for a distributed program must contain more than input data; it must also provide appropriate choices for nondeterministic selections. The paper presents a novel approach to automated test data generation that is based on actual execution of the program under test, a run-time scheduler, function-minimization methods, and dynamic dataflow analysis. Test data are developed for the program using actual values of the input variables. When the program is executed, the program execution flow is monitored. If during program execution an undesirable execution flow is observed (e.g., the ‘actual’ path does not correspond to the selected path), then function-minimization search algorithms are used to locate automatically the values of input variables for which the selected path is traversed. In addition, dynamic data-flow analysis is used to determine those input variables responsible for the undesirable program behaviour; this can lead to significant speed-up of the test data generation process.
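
The function-minimization idea can be illustrated for the simplest sequential, single-branch case: treat the distance of a branch predicate from being satisfied as an objective and search for an input that drives it to zero. The program under test and the search procedure below are invented; the paper's run-time scheduler and dynamic data-flow analysis for distributed programs are not reproduced.

```python
# Branch-distance minimization: search for an input value that makes the
# desired branch predicate true in a small program under test.

def program_under_test(x):
    """Returns which branch was taken; we want the 'then' branch."""
    return "then" if x * x - 10 * x + 21 < 0 else "else"

def branch_distance(x):
    """0 when the desired branch is taken, otherwise how far the predicate is from being true."""
    value = x * x - 10 * x + 21
    return 0 if value < 0 else value + 1

def minimise(start, max_steps=100):
    """Crude neighbourhood search: repeatedly move to a neighbour with smaller branch distance."""
    x = start
    for _ in range(max_steps):
        if branch_distance(x) == 0:
            return x
        x = min((x - 1, x + 1, x), key=branch_distance)
    return None

seed = 40                                   # arbitrary initial test datum
found = minimise(seed)
print(f"input {found}: branch taken = {program_under_test(found)}")
```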

Journal ArticleDOI
TL;DR: Outlier analysis techniques are applied to identify three classes of problem design component, based on an empirical study of 62 modules, and augmenting the more traditional approach of a single structure metric with an additional perspective, that of module size, considerably enhances the ability of design metrics to isolate problem components.
Abstract: Design structure measures are an example of a class of metrics that may be derived early on in a software project; they are useful numeric indicators of design weaknesses — weaknesses which, if uncorrected, lead to problems of implementation, reliability, and maintainability. Unfortunately, these metrics suffer from certain limitations. In particular, they are limited in their ability to model system architecture due to the fact that they are insensitive to component size. Thus architectures that trade structural complexity for large components by electing to comprise a small number of extremely large modules will not be adequately modelled. The paper has two concerns. First, there is the problem of adequately measuring component size at design time. Various existing metrics are evaluated and found to be deficient. Consequently, a new, more flexible approach, based on the traceability from system requirements to design components, is proposed. Second, there is the issue of multidimensional modelling, in this case structure and size. Outlier analysis techniques are applied to identify three classes of problem design component, based on an empirical study of 62 modules. The results suggest that augmenting the more traditional approach of a single structure metric with an additional perspective, that of module size, considerably enhances the ability of design metrics to isolate problem components.