
Showing papers in "Information & Software Technology in 1993"


Journal ArticleDOI
TL;DR: A method for measuring modifications to database schemata and their consequences by using a thesaurus tool is presented, and measurements of the evolution of a large-scale database application currently running in several hospitals in the UK are presented and interpreted.
Abstract: Achieving correct changes is the dominant activity in the application software industry. Modification of database schemata is one kind of change which may have severe consequences for database applications. The paper presents a method for measuring modifications to database schemata and their consequences by using a thesaurus tool. Measurements of the evolution of a large-scale database application currently running in several hospitals in the UK are presented and interpreted. The kind of measurements provided by this in-depth study is useful input to the design of change management tools.

153 citations


Journal ArticleDOI
TL;DR: SPICE (Software Process Improvement and Capability Determination) is a project, run under the International Standards Group for Software Engineering, towards an international standard for software process management; it aims to build on the best features of existing software assessment methods.
Abstract: In June 1991, the International Standards Group for Software Engineering approved a study period to investigate the need and requirements for a standard for software process management. A new international work item was subsequently raised. The resulting project is named SPICE (Software Process Improvement and Capability Determination). The project aims to build on the best features of existing software assessment methods.

151 citations


Journal ArticleDOI
TL;DR: The subjective judgement of experts is incorporated in a regression model of the metrics derived from experimental data, which ensures that the metric system is pragmatic and flexible for the software industry.
Abstract: The paper presents a new set of metrics for object-oriented design. The metrics measure the complexity of a class in an object-oriented design and include operation complexity, operation argument complexity, attribute complexity, operation coupling, class coupling, cohesion, class hierarchy, and reuse. An experiment is conducted to build the metric system. The approach is to derive a regression model of the metrics based on the experimental data. Moreover, the subjective judgement of experts is incorporated in the regression model. This ensures that the metric system is pragmatic and flexible for the software industry.

77 citations


Journal ArticleDOI
TL;DR: A brief review of the program testing technique known as ‘mutation testing’ is provided, current research directions in this area are outlined, and it is suggested that this method may be one way to achieve the high reliability necessary in critical software.
Abstract: The aim of the paper is to provide a brief review of the program testing technique known as ‘mutation testing’ and outline current research directions in this area. Mutation testing is an example of what is sometimes called an error-based testing technique. In other words, it involves the construction of test data designed to uncover specific errors or classes of errors. A large number of simple changes (mutations) are made to a program, one at a time. Test data then has to be found which distinguishes the mutated versions from the original version. Although the idea was proposed more than a decade ago, it is in some ways still a ‘new’ technique. Originally it was seen by many as costly and somewhat bizarre. However, several variants of the basic method have evolved and these, possibly in conjunction with more efficient techniques for applying the method, can help reduce the cost. Also, by guaranteeing the absence of particular errors, it may be one way to achieve the high reliability necessary in critical software. A further advantage of mutation testing is its universal applicability to all programming languages.

60 citations
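The core loop the abstract describes, making one small change at a time and then looking for test data that distinguishes each mutant from the original, can be sketched as follows. The program under test and the mutation operators here are invented for illustration.

```python
# Sketch of mutation testing: a test set is adequate for these mutants
# only if some input makes every mutant behave differently ("kills" it).

def original(a, b):
    return a + b

# Hypothetical mutants, each a single small change to the '+' operator.
mutants = [
    lambda a, b: a - b,
    lambda a, b: a * b,
    lambda a, b: max(a, b),
]

def kills(test_inputs, mutant):
    """True if some test input distinguishes the mutant from the original."""
    return any(mutant(a, b) != original(a, b) for a, b in test_inputs)

weak_tests = [(0, 0)]             # every mutant agrees with 0 + 0, so none die
strong_tests = [(0, 0), (2, 3)]   # 2 - 3, 2 * 3 and max(2, 3) all differ from 5

print([kills(weak_tests, m) for m in mutants])    # [False, False, False]
print([kills(strong_tests, m) for m in mutants])  # [True, True, True]
```

Finding inputs that kill every mutant forces the test data to exercise the program closely; the cost the abstract mentions comes from generating and running very many such mutants on real programs.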


Journal ArticleDOI
TL;DR: The impression is that QFD has features which help the participants to understand each other's requirements, a prerequisite of producing ISs which take into consideration the different quality requirements of the parties involved.
Abstract: In this paper the use of Quality Function Deployment (QFD) as a tool to improve software quality is examined. For the concept of quality the authors have decided to use the definition suggested by the International Standardization Organization (ISO). QFD is a management technique aimed at facilitating companywide quality control which has proven valuable in the fields of manufacturing and service production. Because several issues are common to these fields and information systems (IS) development, it is thought to be a worthwhile effort to apply QFD to IS development as well. We try to make our approach somewhat more realistic by applying it in a hypothetical manner to an actual case, and the benefits and drawbacks of the technique learned during the process are reported. Our impression is that QFD has features which help the participants to understand each other's requirements, a prerequisite of producing ISs which take into consideration the different quality requirements of the parties involved.

47 citations


Journal ArticleDOI
TL;DR: This proposed methodology representation model and the corresponding MEET-tool are intended to build a so-called computer-aided methodology engineering tool in order to support methods specification and further development.
Abstract: The paper introduces an approach to a structured and disciplined specification of methods in the area of information systems development (ISD), especially in software engineering and project management. In particular, we focus on the underlying model to specify such ISD knowledge. This proposed methodology representation model and the corresponding MEET-tool are intended to build a so-called computer-aided methodology engineering tool in order to support methods specification and further development. The paper is based on the research work that compares different ISD methods in the ‘Information Management 2000’ research program at the Institute for Information Management at the University of St Gallen. Several information systems development methods used in practice have been completely analysed by the participating industrial partners.

42 citations


Journal ArticleDOI
TL;DR: A new model of the data life-cycle is presented, needed to clarify activities involving data, from its creation through use, and to establish the relationships of these activities to one another.
Abstract: The purpose of the paper is to present a new model of the data life-cycle. Such a model is needed to clarify activities involving data, from its creation through use, and to establish the relationships of these activities to one another. The proposed model features four principal data cycles: the acquisition cycle includes activities that create and store data, the usage cycle includes activities that retrieve and use data, and two kinds of combined cycle incorporate both acquisition and usage activities. The model also includes quality checkpoints and feedback loops. These are particularly useful in clarifying data quality issues.

38 citations


Journal ArticleDOI
TL;DR: Following a summary of technologies developed by the process modelling and measurement subcommunities of software engineering, a method for integrating these technologies is suggested, and the potential benefits for project guidance are discussed.
Abstract: As first steps towards establishing software engineering as an engineering discipline, we need to create explicit models of its building blocks, i.e., projects, processes, products, and various quality perspectives; organize these models for effective reuse across project boundaries, and establish measurable criteria for project guidance. The paper investigates the possibilities of providing measurement-based project guidance using explicit project plans. Following a summary of technologies developed by the process modelling and measurement subcommunities of software engineering, a method for integrating these technologies is suggested, and the potential benefits for project guidance are discussed. Examples from the MVP Project at the University of Kaiserslautern are used throughout for illustration purposes.

34 citations


Journal ArticleDOI
TL;DR: The paper examines the variation in the performance of software development groups as a function of the characteristics of the group itself, finding the influence of cohesiveness and capability on the group's performance level to be strong and significant, while the influence of experience is the weakest.
Abstract: The increasing complexity and size of software systems makes the performance and productivity of the software development activity a critical issue. Most studies of software development performance to date have focused on individual software developers. However, software development, especially for large-scale software projects, is a group effort in which the characteristics of the group itself play an important role. The paper examines the variation in the performance of software development groups as a function of the characteristics of the group itself. Using data from 31 software development groups, we examined the influence of the group's cohesiveness, total experience in software development and capability on the group's performance level. The influence of cohesiveness and capability was found to be strong and significant while the influence of experience was the weakest. The results from this exploratory research provide interesting insights into the issue of software development performance and offer important implications for development teams working on software projects.

34 citations


Journal ArticleDOI
TL;DR: The article describes how this model can be used to guide software process improvement programs and some components of such programs are described.
Abstract: The reasons that underlie the emergence of a software process movement in the mid-1980s are discussed. A brief overview of the Capability Maturity Model for Software developed at the Software Engineering Institute is provided. The article then describes how this model can be used to guide software process improvement programs. Some components of such programs are described.

29 citations


Journal ArticleDOI
TL;DR: A proposal for specification-oriented software maintenance is presented, in which specifications in an object-oriented extension of the formal notation Z are maintained in step with the corresponding programs.
Abstract: The paper presents a number of techniques that have been developed as components of the software maintenance process as part of the ESPRIT REDO project. These techniques are all based on formal methods, and the work described has provided the mathematical underpinning to a large collaborative project that has been investigating various aspects of software maintenance. The focus of the project has been on reverse engineering, and methods for this part of the maintenance process are reported on here, along with techniques for subsequent re-engineering. A proposal for specification-oriented software maintenance is presented, in which specifications in an object-oriented extension of the formal notation Z are maintained in step with the corresponding programs.

Journal ArticleDOI
TL;DR: A comparative analysis between the concepts of programming and databases for the object-oriented paradigm is provided, through a detailed presentation of system-level and model-level considerations.
Abstract: The object-oriented paradigm has come to the forefront of the research community in the software engineering, programming language, and database research areas. Moreover, the paradigm appears capable of supporting advanced applications such as software development environments (SDEs) that require both programming ability and persistency via a database system. However, there exists a disparity between the programming and database approaches to the object-oriented paradigm. The paper examines and discusses this disparity between the two approaches for the purpose of formulating an understanding of their commonalities and differences. This understanding has been instrumental in supporting work involving the prototyping of SDEs using the object-oriented paradigm, an examination of the techniques required to evolve a class library for persistency, and the proposal of a software architecture and functionality of a persistent programming language system. Thus, it is believed that the work presented in this paper can serve as a framework for researchers and practitioners whose efforts include the aforementioned or other, related areas. From a content perspective, this paper provides a comparative analysis between the concepts of programming and databases for the object-oriented paradigm, through a detailed presentation of system-level and model-level considerations. Both philosophical concepts and implementation pragmatics are investigated. A practical examination of the C++ programming language versus the Opal data language has been conducted, revealing many valuable insights into system and application details and issues. Features of both approaches are also analysed and illustrated.

Journal ArticleDOI
TL;DR: A rich and flexible framework that aims to cover a much larger number of issues than is currently addressed, and its use in classifying modelling techniques and extending existing development methods is illustrated.
Abstract: The modelling of complex information systems requires that a large number of issues be dealt with. Most well-known development methods concentrate on a subset of these issues. We have developed a rich and flexible framework that aims to cover a much larger number of issues than is currently addressed. The framework can be used as a checklist in composing project scenarios. This article presents the framework and its rationale, and illustrates its use in classifying modelling techniques and extending existing development methods.

Journal ArticleDOI
TL;DR: The presentation will highlight two aspects of major interest: the idea ‘behind’ BOOTSTRAP and conclusions for future improvement of the methodology; and the results from data collected so far and conclusions for the European Software Industry's improvement initiatives.
Abstract: Notwithstanding its unquestionable merits, SEI's maturity assessment approach shows some weaknesses and deficiencies, which make it too hard to accept from a European viewpoint for a larger community of systems and software producers. From a new understanding that software technology transfer should be performed not as a market-push but rather as a market-pull exercise, maturity assessments are ideal exercises to apply in order to stimulate this demand. Shifting the pendulum of emphasis to the pull side is a movement which the IT department of the European Commission (EC) favours. Against this background the BOOTSTRAP project was launched by the EC, the goal of which was to provide an advanced process assessment methodology. The presentation will highlight two aspects of major interest: the idea ‘behind’ BOOTSTRAP and conclusions for future improvement of the methodology; and the results from data collected so far and conclusions for the European Software Industry's improvement initiatives.

Journal ArticleDOI
TL;DR: A decision support tool for business planning, based upon information from market research and integrating a knowledge-based system, database, numerical processing, and analysis of the user's own skills, is designed and implemented.
Abstract: This project has designed and implemented a decision support tool for business planning based upon information from market research. The tool is an integrated solution, based upon a knowledge-based system, database, numerical processing and analysis of the user's own skills. The paper will show how the knowledge required was represented within the system and how the knowledge-based component was integrated with the rest of the system. The knowledge encapsulated within this system is based upon the findings of the PIMS (Profit Impact of Marketing Strategy) project, supported by portfolio analysis.

Journal ArticleDOI
TL;DR: A general methodology for object-oriented design, called MOOD, is presented, which allows the creation of a design mainly in terms of classes, objects and inheritance, and the representation of a design graphically by a set of class hierarchy diagrams, composition diagrams, object diagrams and operation diagrams.
Abstract: The paper is concerned with object-oriented design methodologies for software systems. A general methodology for object-oriented design, called MOOD, is presented. MOOD is unrelated to any programming language, yet is capable of being used to design a variety of object-oriented software systems. In particular, MOOD allows the creation of a design mainly in terms of classes, objects and inheritance, and the representation of a design graphically by a set of class hierarchy diagrams, composition diagrams, object diagrams and operation diagrams.

Journal ArticleDOI
TL;DR: The meta-model of the repository and an editor transfer chart are proposed to specify the modelling transparency in CASE tools to support relating diagrams to each other.
Abstract: Modelling transparency is introduced as the functionality of CASE tools that supports relating diagrams to each other. Efficient transfer between the various diagram editors in CASE tools is needed for the highly inter-related diagrams of complex applications. Sequential dependency and parallel dependency of diagrams in systems development products are discussed. A scale of four degrees of modelling transparency is introduced to position the modelling transparency implemented in a particular CASE tool. The meta-model of the repository and an editor transfer chart are proposed to specify the modelling transparency in CASE tools.

Journal ArticleDOI
TL;DR: The paper considers the design of a GIS shell by extension of an object-oriented (database) system to provide spatial objects with appropriate behaviour, and discusses the principles employed in the design, and a 4-level architecture for organizing shell objects so as to meet the stated objectives.
Abstract: Geographic Information Systems (GIS) combine the requirement for graphical display of information with the requirement to manage complex, disk-based data. The object-oriented approach is recognized as an appropriate technology for meeting both of these requirements, and several attempts have been made to build a GIS using object-oriented data management systems. The paper considers the design of a GIS shell by extension of an object-oriented (database) system. A GIS shell does not include any application-specific objects, but extends a basic object-oriented system to provide spatial objects with appropriate behaviour. The starting point for this work was the set of requirements of users of an existing GIS shell. Central objectives are to provide multiple views of application objects, with independence from the stored representation of the spatial attributes. The paper discusses the principles employed in the design of the shell, and a 4-level architecture for organizing shell objects so as to meet the stated objectives. Implementation issues relating to the appropriateness of an object-oriented database management system are discussed towards the end of the paper.


Journal ArticleDOI
TL;DR: A new formalism for describing such a complex data model is presented in the paper and an object manager embodying these ideas is fully implemented for the Oracle database management system.
Abstract: The purpose of this document is to present a set of mechanisms and concepts for object systems based on an external relational database. The object space may be shared among a set of applications which use the standard query language SQL as its principal data access mechanism. Methods are not a concern of this paper and may be handled by callout to separate execution engines. Internally a semantic data model is used including reverse links. A new formalism for describing such a complex data model is presented in the paper. An object manager embodying these ideas is fully implemented for the Oracle database management system.


Journal ArticleDOI
TL;DR: The paper suggests that an SPM-based approach is valid for information systems developers provided some modifications and enhancements are made to the underlying SPM model and the way SPM is applied.
Abstract: The information systems developer differs from the technical, or real-time, systems developer in a number of respects: business environment, technical environment, organizational/departmental culture, software development processes, and priorities and needs for computer-aided software engineering (CASE) tools. The paper suggests that an SPM-based approach is valid for such developers provided some modifications and enhancements are made to the underlying SPM model and the way SPM is applied.

Journal ArticleDOI
TL;DR: The purpose of the paper is to describe a graphical process modelling language (ProNet) which contains a synthesis of object-oriented, behavioural and data flow ideas and its use in a complex real-world example involving a software maintenance project.
Abstract: The purpose of the paper is to describe a graphical process modelling language (ProNet) which contains a synthesis of object-oriented, behavioural and data flow ideas. While the main focus of the current efforts has been to develop the graphical elements of the language, the language's notation also has the potential for enactability as the semantics allow a direct mapping between the graphical constructs and a set of production rules. The latter part of the paper is devoted to describing ProNet's use in a complex real-world example involving a software maintenance project.

Journal ArticleDOI
TL;DR: The paper discusses various methodologies and tools in a review of related research and development efforts as well as commercialized products, mainly in Japan, and places them in a consolidated perspective, giving some examples of those items considered as particularly interesting from a Japanese point of view.
Abstract: It is generally recognized that software re-engineering and reuse technologies are vitally important and a highly effective means of alleviating the so-called ‘software crisis’. The paper discusses various methodologies and tools in a review of related research and development efforts as well as commercialized products, mainly in Japan, and places them in a consolidated perspective, giving some examples of those items considered as particularly interesting from a Japanese point of view. In Japan, the concept of treating reusable pieces of software as parts has been very effectively employed, and tree-structured graphical charts are widely used in both forward and reverse software engineering. An increasing emphasis is on the use of domain knowledge and object-oriented methodologies.

Journal ArticleDOI
F Lin
TL;DR: This model refines Boehm's Ada-COCOMO based on the study of 23 re-engineered projects, supporting Re-engineering Option Analysis when Object-Oriented Technology is the major concern.
Abstract: Adoption of any unfamiliar technology entails an increased risk relative to the use of established, well-understood technologies. Hence, the use of object-oriented technology (OOT) on a project increases the probability of failure. The paper discusses Re-engineering Option Analysis when OOT is the major concern. Our model refines Boehm's Ada-COCOMO based on the study of 23 re-engineered projects.

Journal ArticleDOI
TL;DR: A reverse engineering process for producing design-level documents by static analysis of Ada code to support maintenance and reuse activities on existing real-time software and to check consistency between design and code is described.
Abstract: The paper describes a reverse engineering process for producing design-level documents by static analysis of Ada code. The produced documents, which we call concurrent data flow diagrams, describe the task structure of a software system and the data flow between tasks. Firstly, concurrent data flow diagrams are defined and discussed, and then the main characteristics and features of the reconstruction process are illustrated. The process has been used to support maintenance and reuse activities on existing real-time software and to check consistency between design and code.

Journal ArticleDOI
TL;DR: The authors outline and justify an entity relationship based three level schema architecture which addresses several OO data modelling inadequacies, viz. the lack of support for the notion of relationship in the OO approach, the inability to judge the quality of an OO schema and the lack of a reasonable approach to resolve inheritance conflicts in class hierarchies.
Abstract: The object oriented (OO) paradigm suffers from several inadequacies which are widely recognized, e.g. lack of a formal foundation, general disagreement in interpreting OO concepts, lack of a declarative query language, use of a navigational interface, inheritance conflicts in class hierarchies, etc. In the paper, a representative list of these inadequacies is presented. Some proposals that have been made to resolve some of these inadequacies are reviewed. Then the authors outline and justify an entity relationship based three level schema architecture which addresses several OO data modelling inadequacies, viz. the lack of support for the notion of relationship in the OO approach, the lack of support for external schemas, the inability to judge the quality of an OO schema and the lack of a reasonable approach to resolve inheritance conflicts in class hierarchies.

Journal ArticleDOI
TL;DR: The goal of the research was to specify formally a persistent object management system and to implement a part of the specification in the form of a prototype, called the persistent object storage manager (POSM).
Abstract: The goal of the research on which the paper is based was to specify formally a persistent object management system and to implement a part of the specification in the form of a prototype. The prototype is called the persistent object storage manager (POSM). The prototype was implemented in C++ using the IBM OS/2 operating system. POSM is formally specified in the paper. The data model used in POSM is rigorously defined using the Z notation. The operations of POSM and its state space are also specified in Z notation. Example schemas for the operations of the node management component of the prototype are presented. Z notation is justified in the paper and the benefits of using Z in the research are discussed.

Journal ArticleDOI
TL;DR: This work proposes a novel approach for distributed dynamic checkpointing in which the overhead associated with the more traditional periodic checkpointing techniques is avoided and the checkpointing technique can be used as the basis for optimal protocol recovery to achieve stabilization.
Abstract: In this paper, the problem of designing stabilizing computer communication protocols is addressed. A communication protocol is said to be stabilizing, if starting from or being at any illegal global state, the protocol will eventually reach a legal (or consistent) global state, and resume its normal execution. To achieve protocol stabilization, the protocol must be able to detect the error when it occurs, and then it must recover from that error and revert to a legal protocol state. Based on the concepts of event indices and maximally reachable event index tuples, we propose a novel approach for distributed dynamic checkpointing in which the overhead associated with the more traditional periodic checkpointing techniques is avoided. Furthermore, our checkpointing technique can be used as the basis for optimal protocol recovery to achieve stabilization. An example illustrating the new dynamic checkpointing technique is also provided.
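The recovery behaviour described above, detecting an illegal state and reverting to a checkpoint recorded at an earlier event index, can be illustrated for a single process. The state layout, the legality rule, and the checkpoint-on-every-legal-event policy below are simplifications invented for illustration, not the paper's event-index formalism.

```python
# Sketch: a process checkpoints each legal state with its event index; on
# detecting an illegal state it reverts to the latest checkpoint, so the
# protocol returns to a legal state and resumes, i.e. it stabilizes.

class StabilizingProcess:
    def __init__(self, initial_state):
        self.state = dict(initial_state)
        self.event_index = 0
        self.checkpoints = [(0, dict(initial_state))]  # (event index, state)

    def legal(self):
        # Hypothetical legality predicate: sequence numbers never go negative.
        return self.state.get("seq", -1) >= 0

    def step(self, update):
        self.event_index += 1
        self.state.update(update)
        if self.legal():
            # Dynamic checkpoint: record only states known to be legal.
            self.checkpoints.append((self.event_index, dict(self.state)))
        else:
            # Error detected: revert to the most recent legal checkpoint.
            _, saved = self.checkpoints[-1]
            self.state = dict(saved)

p = StabilizingProcess({"seq": 0})
p.step({"seq": 1})    # legal, checkpointed at event index 1
p.step({"seq": -5})   # illegal, reverted: state is back to seq == 1
```

Checkpointing on demand rather than on a fixed timer is the intuition behind the paper's claim that dynamic checkpointing avoids the overhead of periodic schemes; the multi-process consistency argument via maximally reachable event-index tuples is not modelled here.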

Journal ArticleDOI
TL;DR: The different roles which Prolog can play in the implementation of an OODB are illustrated by reference to example systems which, although they use Prolog as an implementation language, have significantly different architectures.
Abstract: This paper outlines the use of Prolog for implementing object-oriented databases (OODBs), to indicate both the benefits and costs associated with Prolog as an implementation platform. The different roles which Prolog can play in the implementation of an OODB are illustrated by reference to example systems which, although they use Prolog as an implementation language, have significantly different architectures. These architectures are compared and assessed, both in terms of the functionality provided to users, and performance.