Showing papers in "Information & Software Technology" in 1999
••
TL;DR: In this paper, the authors present an approach to give formal semantics to Event-driven Process Chains (EPCs) by mapping EPCs (without connectors of type ∨) onto Petri nets.
Abstract: For many companies, business processes have become the focal point of attention. As a result, many tools have been developed for business process engineering and the actual deployment of business processes. Typical examples of these tools are Business Process Reengineering (BPR) tools, Enterprise Resource Planning (ERP) systems, and Workflow Management (WFM) systems. Some of the leading products, e.g. SAP R/3 (ERP/WFM) and ARIS (BPR), use Event-driven Process Chains (EPCs) to model business processes. Although EPCs have become a widespread process modeling technique, they suffer from a serious drawback: neither the syntax nor the semantics of an EPC is well defined. In this paper, this problem is tackled by mapping EPCs (without connectors of type ∨) onto Petri nets. The Petri nets have formal semantics and provide an abundance of analysis techniques. As a result, the approach presented in this paper gives formal semantics to EPCs and makes many analysis techniques available for them. To illustrate the approach, it is shown that the correctness of an EPC can be checked in polynomial time by using Petri-net-based analysis techniques.
693 citations
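The mapping idea is mechanical enough to sketch in code. The toy below is only in the spirit of the paper's construction (the node names, data structures, and connector handling are illustrative assumptions, not the formal definition): events become places, functions become transitions, an AND-split becomes a single transition feeding every branch, and an XOR-split becomes one transition per branch.

```python
def epc_to_petri(epc):
    """Events map to places, functions to transitions; an AND-split maps
    to a single transition feeding all branches, an XOR-split to one
    transition per branch. Transitions are (input places, output places)."""
    places = set(epc["events"])
    transitions = {}
    for func, (pre, post) in epc["functions"].items():
        transitions[func] = (set(pre), set(post))
        places.update(pre); places.update(post)
    for name, (kind, pre, posts) in epc["connectors"].items():
        if kind == "AND":
            transitions[name] = (set(pre), set(posts))
        elif kind == "XOR":
            for i, p in enumerate(posts):
                transitions[f"{name}_{i}"] = (set(pre), {p})
        places.update(pre); places.update(posts)
    return places, transitions

epc = {
    "events": ["order received"],
    "functions": {"check order": (["order received"], ["order checked"])},
    "connectors": {"xor1": ("XOR", ["order checked"],
                            ["order accepted", "order rejected"])},
}
places, transitions = epc_to_petri(epc)
print(sorted(places))
print(transitions)
```

Once in this (places, transitions) form, standard Petri-net reachability and soundness checks can be applied, which is the point of the paper's translation.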
••
TL;DR: The heuristic methods discussed in this paper produce optimal or near-optimal performance artificial neural networks using only a fraction of the time needed for a full factorial design.
Abstract: Artificial neural networks have been used to support applications across a variety of business and scientific disciplines in recent years. Artificial neural network applications are frequently viewed as black boxes which mystically determine complex patterns in data. Contrary to this popular view, neural network designers typically perform extensive knowledge engineering and incorporate a significant amount of domain knowledge into artificial neural networks. This paper details heuristics that utilize domain knowledge to produce an artificial neural network with optimal output performance. The effect of using the heuristics on neural network performance is illustrated by examining several applied artificial neural network systems. Identification of an optimal performance artificial neural network requires that a full factorial design with respect to the quantity of input nodes, hidden nodes, hidden layers, and learning algorithm be performed. The heuristic methods discussed in this paper produce optimal or near-optimal performance artificial neural networks using only a fraction of the time needed for a full factorial design.
185 citations
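The gap between a full factorial design and a heuristic search is easy to see with a little combinatorics. The sketch below does not reproduce the paper's heuristics; the parameter grids and the rule of thumb used are assumptions for illustration only.

```python
from itertools import product

# assumed design grids, purely illustrative
input_nodes   = [5, 10, 20]
hidden_nodes  = [2, 4, 8, 16, 32]
hidden_layers = [1, 2, 3]
algorithms    = ["backprop", "quickprop", "rprop"]

full = list(product(input_nodes, hidden_nodes, hidden_layers, algorithms))
print(len(full), "networks to train in a full factorial design")  # 135

# One common rule of thumb (an assumption here, not necessarily the
# paper's): a single hidden layer sized no larger than the input layer.
candidates = [(n, h, 1, "backprop")
              for n in input_nodes
              for h in hidden_nodes if h <= n]
print(len(candidates), "candidates under the heuristic")          # 9
```

Each configuration costs a full training run, so cutting 135 candidates to 9 is where the claimed time savings come from.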
••
TL;DR: This article suggests Fuzzy Cognitive Maps (FCM) as an alternative modelling approach and describes how they can be developed and used to simulate the SISP process; the approach introduces computational modelling and supports scenario development and simulation in the SISP domain.
Abstract: In the early 1980s articles began to focus on Strategic Planning of Information Systems (SISP) and to argue the critical importance of Information Technology (IT) in today's organisations. Since then, a large number of models have been presented in order to analyse IT from a strategic point of view and suggest new IT projects. However, researchers have called for alternative approaches to SISP, as current ones fail to take both the business and IT perspectives into consideration, to tackle the complexity of the domain, and to suggest specific IS opportunities. This article suggests Fuzzy Cognitive Maps (FCM) as an alternative modelling approach and describes how they can be developed and used to simulate the SISP process. FCMs have been successfully developed and used in several ill-structured domains, such as decision making and policy making. The proposed FCM contains 165 variables and 210 relationships from both business and IT domains. The strength of this approach lies in its capability not only to comprehensively model the qualitative knowledge which dominates strategic decision making, but also to simulate and evaluate several alternative ways of using IT in order to improve organisational performance. The approach introduces computational modelling to SISP and supports scenario development and simulation in the domain.
151 citations
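The simulation machinery behind an FCM is compact: concept activations are repeatedly updated as squashed weighted sums of their causal inputs. Below is a minimal sketch with a three-concept toy map, not the paper's 165-variable, 210-relationship model; the concept names, weights, and squashing function are assumptions.

```python
import math

concepts = ["IT investment", "process automation", "organisational performance"]
# weights[i][j]: assumed causal influence of concept i on concept j
weights = [[0.0, 0.7, 0.2],
           [0.0, 0.0, 0.6],
           [0.3, 0.0, 0.0]]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def step(state):
    """One FCM iteration: each concept becomes the squashed weighted
    sum of the activations feeding into it."""
    return [sigmoid(sum(weights[i][j] * state[i] for i in range(len(state))))
            for j in range(len(state))]

state = [1.0, 0.0, 0.0]      # scenario: push "IT investment" high
for _ in range(20):          # iterate toward a (near) fixed point
    state = step(state)
print({c: round(v, 3) for c, v in zip(concepts, state)})
```

Running alternative scenarios then amounts to clamping different initial activations and comparing the fixed points, which is the "what-if" use the article describes.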
••
TL;DR: This article proposes superimposition, a novel black-box adaptation technique that allows one to impose predefined but configurable types of functionality on a reusable component.
Abstract: Several authors have identified that the only feasible way to increase productivity in software construction is to reuse existing software. To achieve this, component-based software development is one of the more promising approaches. However, traditional research in component-oriented programming often assumes that components are reused "as-is". Practitioners have found that "as-is" reuse seldom occurs and that reusable components generally need to be adapted to match the system requirements. Existing component object models provide only limited support for component adaptation, i.e. white-box techniques such as copy-paste and inheritance, and black-box approaches such as aggregation and wrapping. These techniques suffer from problems related to reusability, efficiency, implementation overhead or the self problem. To address these problems, this article proposes superimposition, a novel black-box adaptation technique that allows one to impose predefined but configurable types of functionality on a reusable component. Three categories of typical adaptation types are discussed, related to the component interface, component composition and component monitoring. Superimposition and the adaptation types are illustrated with several examples.
147 citations
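To make the "imposed functionality" idea concrete, here is a hedged sketch of the monitoring adaptation type as a plain wrapper. This is only in the spirit of superimposition, which the paper develops as a richer, configurable mechanism; the class and function names are invented for illustration.

```python
# The reusable component, used entirely as-is (black-box).
class Stack:
    def __init__(self): self._items = []
    def push(self, x): self._items.append(x)
    def pop(self): return self._items.pop()

def superimpose_monitoring(component, log):
    """Impose call monitoring on `component` without editing it:
    every public method call is reported to `log` before delegation."""
    class Monitored:
        def __getattr__(self, name):
            attr = getattr(component, name)
            if not callable(attr):
                return attr
            def traced(*args, **kwargs):
                log.append((name, args))
                return attr(*args, **kwargs)
            return traced
    return Monitored()

calls = []
s = superimpose_monitoring(Stack(), calls)
s.push(42)
s.pop()
print(calls)   # [('push', (42,)), ('pop', ())]
```

Note how the adapted component keeps the original's interface; the paper's technique is aimed precisely at getting such adaptations without the wrapping overhead and self-problem of hand-written wrappers.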
••
TL;DR: This paper shows that the empirical evidence for the status of formal organizational constructs in cooperative work is not as strong as we may have believed, and that there is evidence from other studies that contradicts what we may have taken for granted for years.
Abstract: The received understanding of the status of formal organizational constructs in cooperative work is problematic. This paper shows that the empirical evidence is not as strong as we may have believed and that there is evidence from other studies that contradicts what we may have taken for granted for years. This indicates that the role of formal constructs is more differentiated than generally taken for granted. They not only serve as 'maps' but also as 'scripts'.
103 citations
••
TL;DR: Conformance testing approaches are required for gaining confidence in final products and guaranteeing their integration and interoperability within an open distributed environment; ideas gained from experience with protocol testing are examined.
Abstract: In developing distributed systems, current trends are towards creating open distributed environments supporting interworking, interoperability, and portability, in spite of the heterogeneity and autonomy of the related systems. Several reference models, architectures and frameworks, such as ODP, CORBA, and TINA, have already been designed and proposed. However, even though models, architectures, and frameworks provide a good basis for developing working open distributed applications, conformance testing approaches are required for gaining confidence in final products and guaranteeing their integration and interoperability within an open distributed environment. ODP provides some preliminary statements on conformance assessment in open distributed systems, but considerable work needs to be done before reaching a workable and accepted conformance testing methodology for open distributed processing. Further, ISO, ITU, OMG, and TINA-C have recently recognized the urgent need for conformance testing. In this paper, we examine ideas gained from our experience with protocol testing, which may contribute to the design of such a framework. Our methodology is essentially guided by two features that have a great influence on all aspects of the testing process: controllability and observability.
94 citations
••
TL;DR: This work analyzes how physically collocated teams work together now and what services they require to work together across distances, focusing on real-time interactions because those interactions justify collocating teams today.
Abstract: We analyze how physically collocated teams work together now and what services they require to work together across distances, focusing on real-time interactions because those interactions justify collocating teams today. We explain how Integrated Product Teams (IPTs) are organized in system development programs and how their physical collocation facilitates communication, collaboration, and coordination within the team. Interactions within IPTs take two forms: scheduled meetings and opportunistic interactions. Scenarios of scheduled IPT meetings help motivate and identify requirements for supporting distributed meetings. Opportunistic interactions are far more common than scheduled meetings and more difficult to observe and analyze because they are not scheduled or predictable.
81 citations
••
TL;DR: A markup language based upon XML for working with the predictive models produced by data mining systems, which provides a flexible mechanism for defining schema for predictive models and supports model selection and model averaging, involving multiple predictive models.
Abstract: We introduce a markup language based upon XML for working with the predictive models produced by data mining systems. The language is called the predictive model markup language (PMML) and can be used to define predictive models and ensembles of predictive models. It provides a flexible mechanism for defining schema for predictive models and supports model selection and model averaging, involving multiple predictive models. It has proved useful for applications requiring ensemble learning, partitioned learning and distributed learning. In addition, it facilitates moving predictive models across applications and systems.
80 citations
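Since PMML is XML, a model can be assembled with any XML tooling. The fragment below is a simplified, toy document in the flavour of PMML's tree models; it is not claimed to be schema-conformant, and the field names are invented.

```python
import xml.etree.ElementTree as ET

# Build a toy PMML-like document: a data dictionary plus a one-split
# decision tree. Element names echo PMML's vocabulary, but this sketch
# does not follow the real PMML schema.
pmml = ET.Element("PMML", version="toy")
dd = ET.SubElement(pmml, "DataDictionary")
ET.SubElement(dd, "DataField", name="age", optype="continuous")
ET.SubElement(dd, "DataField", name="risk", optype="categorical")

tree = ET.SubElement(pmml, "TreeModel", functionName="classification")
root = ET.SubElement(tree, "Node", score="low")
node = ET.SubElement(root, "Node", score="high")
ET.SubElement(node, "SimplePredicate", field="age",
              operator="greaterThan", value="60")

print(ET.tostring(pmml, encoding="unicode"))
```

The portability claim in the abstract rests on exactly this property: the model is a declarative document any system can parse, independent of the software that produced it.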
••
TL;DR: This paper discusses Brooks' The Mythical Man-Month and compares the software project management advice given there with the practices employed some 25 years later.
Abstract: This paper discusses Brooks' The Mythical Man-Month, a landmark work in the software project management field, and compares the software project management advice given there with practices employed some 25 years later. To find out the state of today's practice, 20 experienced software developers were interviewed regarding their impressions of factors leading to success or failure of software development projects. Their observations are compared with the points raised by Brooks in his seminal work.
75 citations
••
TL;DR: In this paper, the current status of selected parts of software economics is surveyed, highlighting the gaps both between practice and theory and between current understanding and what is needed.
Abstract: Software is valuable when it produces information in a manner that enables people and systems to meet their objectives more effectively. Software engineering techniques have value when they enable software developers to build more valuable software. Software economics is the sub-field of software engineering that seeks improvements which enable software engineers to reason more effectively about important economic aspects of software development, including cost, benefit, risk, opportunity, uncertainty, incomplete knowledge and the value of additional information, implications of competition, and so forth. In this paper, we survey the current status of selected parts of software economics, highlighting the gaps both between practice and theory and between our current understanding and what is needed. The sheer volume of current software costs makes the study and application of software economics techniques a significant area of concern and opportunity. Recent studies [1,2] estimate roughly 2,000,000 software professionals in the US in 1998. At typical salaries of $60–80,000/year and a typical overhead rate of 150%, this translates into a $300–400 billion annual expenditure on software development in the US alone. A conservative estimate of worldwide software costs is twice the US costs, or $600–800 billion per year. With the kind of expenditures now being made on software, just the economics of more efficient software production are important to understand and apply. Software development is widely seen to be inefficient, which means that there is considerable room for improvement. A 10% reduction in software production costs translates into a
67 citations
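The abstract's cost figures follow from simple arithmetic, which is worth checking explicitly:

```python
# Checking the abstract's back-of-the-envelope figures.
professionals = 2_000_000                  # US software professionals, 1998
salary_low, salary_high = 60_000, 80_000   # typical annual salary range
overhead_rate = 1.50                       # 150% overhead on salary

us_low = professionals * salary_low * (1 + overhead_rate)    # $300 billion
us_high = professionals * salary_high * (1 + overhead_rate)  # $400 billion
print(f"US expenditure: ${us_low/1e9:.0f}-{us_high/1e9:.0f} billion/year")

world_low, world_high = 2 * us_low, 2 * us_high              # "twice the US"
print(f"Worldwide: ${world_low/1e9:.0f}-{world_high/1e9:.0f} billion/year")

# So even a 10% cut in US production costs alone is worth $30-40 billion/year,
# which is the scale the truncated final sentence is gesturing at.
print(f"10% saving (US): ${0.1*us_low/1e9:.0f}-{0.1*us_high/1e9:.0f} billion/year")
```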
••
TL;DR: This article presents a formal systematic software architecture specification and analysis methodology called SAM and shows how to apply SAM to specify a command and control (C2) system and to analyze its real-time constraints.
Abstract: Software architecture study has become one of the most active research areas in software engineering in recent years. Although there have been many published results on specification and analysis methods for software architectures, a sound, systematic methodology for modeling and analyzing software architectures is lacking. In this article, we present a formal systematic software architecture specification and analysis methodology called SAM and show how to apply SAM to specify a command and control (C2) system and to analyze its real-time constraints.
••
TL;DR: An analysis of the communication, coordination and cooperation requirements of business processes reveals a gap in current computer support; to close it, a cooperative hypermedia system extended with process support, called CHIPS, is proposed, focusing on flexible business processes.
Abstract: In this paper, we present a cooperative hypermedia-based process support system focusing on flexible business processes. An analysis of the communication, coordination and cooperation requirements of business processes reveals a gap in current computer support. We propose to address these requirements by extending a cooperative hypermedia system with process support. The resulting system, called CHIPS, uses hypermedia-based activity spaces to model the structural, relational, and computational semantics of both individual tasks and processes. Application examples demonstrate that the CHIPS system retains the intuitive usability of hypertext and can support a wide range of business processes.
••
TL;DR: It is argued that tests done with real data sets cannot provide all the information needed for a thorough assessment of the performance characteristics of data mining procedures, and that artificial data sets are therefore essential.
Abstract: In this article, we discuss the need to evaluate the performance of data mining procedures and argue that tests done with real data sets cannot provide all the information needed for a thorough assessment of their performance characteristics. We argue that artificial data sets are therefore essential. After a discussion of the desirable characteristics of such artificial data, we describe two pseudo-random generators. The first is based on the multivariate normal distribution and gives the investigator full control of the degree of correlation between the variables in the artificial data sets. The second is inspired by fractal techniques for synthesizing artificial landscapes and can produce data whose classification complexity can be controlled by a single parameter. We conclude with a discussion of the additional work necessary to achieve the ultimate goal of a method of matching data sets to the most appropriate data mining technique.
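The first generator described is straightforward to sketch: draw from a multivariate normal whose covariance matrix the investigator chooses. The dimensions, correlation level, and sample size below are assumptions, and neither the paper's exact parameterisation nor its fractal-based second generator is reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.8                                # desired pairwise correlation
cov = np.array([[1.0, rho, rho],
                [rho, 1.0, rho],
                [rho, rho, 1.0]])        # unit variances, chosen correlation
data = rng.multivariate_normal(mean=np.zeros(3), cov=cov, size=10_000)

# verify the generator delivers the requested correlation structure
print(np.corrcoef(data, rowvar=False).round(2))   # ~0.8 off the diagonal
```

Because the correlation is dialled in exactly, a data mining procedure's sensitivity to inter-variable dependence can be measured in a way no single real data set allows.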
••
TL;DR: It is argued that early risk avoidance techniques, such as the cleanroom software development process and its sub-process of software inspections, lead to superior products and to benefits that far outweigh the cost of implementation.
Abstract: This study has examined software reliability using a risk management framework to analyze popular risk management models with regard to their emphasis on risk avoidance. Too often, risk reduction occurs late in the software development life cycle, when changes are costly and less effective. We suggest that early risk avoidance techniques, like the cleanroom software development process and its sub-process of software inspections, lead to superior products. Our approach is different from the current mainstream approach of testing to eliminate errors because we propose using risk models that emphasize preventive risk management early in development. We argue that the use of risk avoidance leads to benefits that far outweigh the cost of implementation.
••
TL;DR: This paper addresses the issue of the requirements engineering process for COTS component acquisition and assembly; it proposes an approach based on the notion of requirements maps and assembly strategies and demonstrates the approach with the selection of a CASE tool.
Abstract: In spite of the increasing use of commercial off-the-shelf (COTS) products for system development, there is little consideration of how to acquire requirements for COTS products, how to select COTS components and how to assemble them to comply with these requirements. The paper addresses the issue of the requirements engineering process for COTS component acquisition and assembly. It proposes an approach based on the notion of requirements maps and assembly strategies and demonstrates the approach with the selection of a CASE tool.
••
TL;DR: An extension of the existing algorithms to consider an extended finite state machine as the specification is presented, and an algorithm is also introduced to take into account the number of transitions covered.
Abstract: Passive testing is the process of collecting traces of messages exchanged between an operating implementation and its environment, in order to verify that these traces actually belong to the language accepted by the provided finite state machine specification. In this paper, we present an extension of the existing algorithms to consider an extended finite state machine as the specification. An algorithm is also introduced to take into account the number of transitions covered. These techniques are illustrated by the application to a real protocol, the GSM (global system for mobile communication)-MAP (mobile application part).
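The core passive-testing check is easy to state for a plain finite state machine: track the set of states the implementation could be in, narrow it with each observed message, and flag a fault when the set empties. The sketch below uses this homing-set style on an invented toy dialogue; the paper's contribution, extending the check to extended FSMs with variables and predicates, is omitted here.

```python
def passively_check(transitions, states, trace):
    """`states` holds every state the implementation might currently be
    in (all states at start-up, since observation begins mid-run). Each
    observed message narrows the set; an empty set signals a fault."""
    for msg in trace:
        states = {transitions[s][msg] for s in states
                  if msg in transitions.get(s, {})}
        if not states:
            return False, msg
    return True, None

spec = {  # an invented toy dialogue, not the GSM-MAP specification
    "idle":    {"begin": "open"},
    "open":    {"request": "waiting"},
    "waiting": {"response": "open", "end": "idle"},
}
print(passively_check(spec, set(spec), ["begin", "request", "response"]))
print(passively_check(spec, set(spec), ["begin", "response"]))  # fault found
```

The transition-coverage algorithm mentioned in the abstract would additionally record which specification transitions the observed traces have exercised.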
••
TL;DR: Using WIBOs that carry out tasks on users’ behalf, it is possible to build workflow systems that bring further improvements in process automation and dynamic management, and achieve dynamic (re)allocation of resources to Actors.
Abstract: This paper describes an architecture for workflow management systems based on Workflow Intelligent Business Objects (WIBOs). The design of WIBOs is based on principles of intelligence, autonomy, collaboration and co-operation. Using WIBOs that carry out tasks on users' behalf, it is possible to build workflow systems that bring further improvements in process automation and dynamic management, and achieve dynamic (re)allocation of resources to Actors. A WIBO prototype architecture has been implemented in Java; Java Remote Method Invocation (RMI) is used to enable WIBOs to communicate over an intranet or the Internet.
••
TL;DR: The evolution of groupware is explored and some of its effects on organizations and society are exposed in this introduction to a special issue of papers from the GROUP'97 conference, sponsored by the ACM Special Interest Group on Supporting Group Work.
Abstract: Over the past twenty years industry and academia have been working to develop computer systems, commonly referred to as groupware, to increase the productivity of work groups. Groupware encompasses a broad spectrum of research and development including group support systems, computer-supported collaborative work, group decision support systems, and computer mediated collaboration. Applications arising out of these efforts included concurrent multi-user authoring systems, computer conferencing, integrated computer/video meeting systems, electronic voting, brainstorming, and workflow systems. The papers in this special issue are some of the best from over 100 papers submitted to the GROUP'97 conference sponsored by the ACM Special Interest Group on Supporting Group Work. They represent work conducted by researchers on four continents from both industry and academia. As a group the authors present a blend of theory, practice, and technological innovation from the groupware research arena. This paper is intended to serve as an introduction to the area of groupware research and development. In it we explore the evolution of groupware and expose some of its effects on organizations and society.
••
TL;DR: The final version of a knowledge discovery system, Telecommunication Network Alarm Sequence Analyzer (TASA), for telecommunication network alarm data analysis is described; it is based on the discovery of recurrent, temporal patterns of alarms in databases, from which large collections of potentially interesting rules can be found efficiently.
Abstract: In this paper we describe the final version of a knowledge discovery system, Telecommunication Network Alarm Sequence Analyzer (TASA), for telecommunication network alarm data analysis. The system is based on the discovery of recurrent, temporal patterns of alarms in databases; these patterns, episode rules, can be used in the construction of real-time alarm correlation systems. Association rules are also used for identifying relationships between alarm properties. TASA uses a methodology for knowledge discovery in databases (KDD) where one first discovers large collections of patterns at once, and then performs interactive retrievals from the collection of patterns. The proposed methodology suits KDD formalisms such as association and episode rules very well, since large collections of potentially interesting rules can be found efficiently. When searching for the most interesting rules, simple threshold-like restrictions, such as rule frequency and confidence, may be satisfied by a large number of rules. In TASA, this problem can be alleviated by templates and pattern expressions that describe the form of rules that are to be selected or rejected. Using templates the user can flexibly specify the focus of interest, and also iteratively refine it. Different versions of TASA have been in prototype use in four telecommunication companies since the beginning of 1995. TASA has been found useful in, e.g., finding long-term, rather frequently occurring dependencies, creating an overview of a short-term alarm sequence, and evaluating the consistency and correctness of the alarm database.
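The selection step described above, thresholds plus templates, can be sketched in a few lines. The rule structure and template syntax here are assumptions for illustration, not TASA's actual formalism.

```python
import re

# (antecedent alarms, consequent alarm, frequency, confidence) - invented
rules = [
    (("link_down", "high_traffic"), "switch_overload", 0.04, 0.91),
    (("fan_failure",), "temp_alarm", 0.002, 0.98),
    (("link_down",), "route_flap", 0.06, 0.55),
]

def select(rules, min_freq, min_conf, template):
    """Keep rules above both thresholds whose consequent matches the
    template, here modelled as a regular expression."""
    return [r for r in rules
            if r[2] >= min_freq and r[3] >= min_conf
            and re.search(template, r[1])]

# thresholds alone would keep too much; the template narrows the focus
print(select(rules, min_freq=0.01, min_conf=0.9, template=r"overload|temp"))
```

Iterative refinement, as described in the abstract, then amounts to re-running the selection with tightened thresholds or a more specific template.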
••
TL;DR: The major characteristics of the time series astronomical data, data preprocessing techniques to process these time series, and some domain-specific techniques to separate candidate variable stars from the nonvariant ones are presented.
Abstract: In this paper we present some initial results of a project which uses data-mining techniques to search for evidence of massive compact halo objects (MACHOs) in a very large time series database. MACHOs are the proposed materials that probably make up the "dark matter" surrounding our own and other galaxies. It has been suggested that MACHOs may be detected through the gravitational microlensing effect, which can be identified from the light curves of background stars. The objective of this project is two-fold, namely, (i) identification of new classes of variable stars and (ii) detection of microlensing events. In this paper, we present the major characteristics of the time series astronomical data, data preprocessing techniques to process these time series, and some domain-specific techniques to separate candidate variable stars from the nonvariant ones. We discuss the use of the Fourier model to represent the time series and the k-means based clustering method to classify variable stars.
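The final stage of that pipeline, Fourier features plus k-means, can be sketched compactly. Everything below (the synthetic light curves, the number of harmonics, k=2) is an invented stand-in for the paper's data and settings.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)          # phase-folded time axis

def fourier_features(flux, n_harmonics=3):
    """Least-squares fit of a low-order Fourier series; the fitted
    coefficients summarise the shape of the light curve."""
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols += [np.sin(2 * np.pi * k * t), np.cos(2 * np.pi * k * t)]
    coeffs, *_ = np.linalg.lstsq(np.column_stack(cols), flux, rcond=None)
    return coeffs

# two synthetic "classes": smooth sinusoidal variables vs eclipse-like dips
curves = [np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(t.size)
          for _ in range(10)]
curves += [1.0 - 0.8 * (np.abs(t - 0.5) < 0.05)
           + 0.1 * rng.standard_normal(t.size) for _ in range(10)]
X = np.array([fourier_features(c) for c in curves])

# a tiny k-means (k=2), seeded with one curve from each synthetic class
centers = X[[0, 10]]
for _ in range(10):
    labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([X[labels == k].mean(axis=0) for k in (0, 1)])
print(labels)   # the two shape classes should separate
```

Representing each curve by a handful of coefficients is what makes clustering tractable over a very large time series database.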
••
TL;DR: It is concluded that prescriptive information systems methodologies are unlikely to cope well with strategic uncertainty, user communication or staff development; the recommendations are to focus more on soft organisational issues and to use approaches tailored to each project.
Abstract: This article examines the efficiency and effectiveness of a prescriptive systems development methodology in practice. The UK Government's mandatory Structured Systems Analysis and Design Method (SSADM) was examined to determine its value to software projects. The evidence was collected from interviews with 17 project managers, discussions with participants on three large SSADM projects and from observing 90 end users during training. The conclusions are that prescriptive information systems methodologies are unlikely to cope well with strategic uncertainty, user communication or staff development. The recommendations are to focus more on soft organisational issues and to use approaches tailored to each project.
••
TL;DR: It was found that about 50% of the real causes of the software bugs found stemmed from designers' carelessness, and guidelines were established to improve the designers' behavior.
Abstract: In our software development group we have faced difficulties in improving software quality. To find the causes of this problem, we analyzed the software bugs found in the credit authorization terminal software that our group developed and identified their real causes. As a result, it was found that about 50% of the real causes stemmed from designers' carelessness. Based on this finding, guidelines were established to improve the designers' behavior. Furthermore, we clarified the causes of the software bugs, the phases in which they were introduced, and the relation between the bugs and their real causes. We also devised and implemented measures to prevent the software bugs, and we report that their implementation substantially decreased the number of software bugs.
••
TL;DR: The control-flow for five kinds of use cases is analysed and guidelines are given for use case descriptions to attain a well-defined flow of control.
Abstract: The control-flow for five kinds of use cases is analysed: for common use cases, variant use cases, component use cases, specialised use cases and for ordered use cases. The control-flow semantics of use cases, and of the uses-relation, the extends-relation and the precedes-relation between use cases, is described in terms of flowgraphs. Sequence diagrams of use cases are refined to capture the control-flow adequately. Guidelines are given for use case descriptions to attain a well-defined flow of control.
••
TL;DR: The findings indicate that analysts’ perceptions of failure reasons and their approach to development fall along similar lines, which enables information system management to select project teams to help avoid failures.
Abstract: System analysts approach tasks with different orientations to their actions. Likewise, system failures are perceived to arise from a variety of causes. A survey of 239 analysts was conducted to explore the similarities between these orientations and perceptions of failure reasons. The findings indicate that analysts' perceptions of failure reasons and their approaches to development fall along similar lines. This enables information system management to select project teams so as to help avoid failures. The variety of orientations needed for success can be chosen from within the organization, or training of analysts can be targeted more effectively to supply missing elements given current orientations.
••
TL;DR: A multi-stage model for bilateral negotiation support is presented; generic types of DSS and communication tools necessary to support the requirements and strategic analyses are identified, and an actual system built on this conceptualisation is described.
Abstract: This article presents a multi-stage model for bilateral negotiation support. It continues from the work of Lim and Benbasat [L.H. Lim, I. Benbasat, A theoretical perspective of negotiation support systems, Journal of Management Information Systems 19(3) (1992) 27-44], in which a two-dimensional approach to negotiation support is advocated. Based on that approach, the current work identifies generic types of DSS and communication tools necessary to support the requirements and strategic analyses. An actual system built on this conceptualisation is then described.
••
TL;DR: This paper surveys work related to the dualistic object/role model view and introduces research on developing a computational model and a language based on this object/role dualism.
Abstract: There are many situations where modeling based on the dual concepts of objects and roles has an advantage over modeling based solely on objects. In this paper, we survey work related to this dualistic model view and clarify the characteristics of such models. Then, we introduce our research on developing a computational model and a language based on this object/role dualism.
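The general object/role idea is easy to illustrate, though the sketch below is not the paper's computational model or language: a core object acquires and drops role objects, each carrying role-specific state and behaviour. All names are invented.

```python
class Person:
    """The core object, stable across its lifetime."""
    def __init__(self, name):
        self.name, self.roles = name, {}
    def play(self, role_cls, *args):
        self.roles[role_cls.__name__] = role_cls(self, *args)
    def drop(self, role_name):
        del self.roles[role_name]

class Employee:
    """A role: transient, with its own state, attached to a core object."""
    def __init__(self, person, employer):
        self.person, self.employer = person, employer
    def describe(self):
        return f"{self.person.name} works at {self.employer}"

p = Person("Ada")
p.play(Employee, "Acme")                 # Ada acquires the Employee role
print(p.roles["Employee"].describe())    # Ada works at Acme
p.drop("Employee")                       # same object; the role is gone
```

The advantage over a plain-objects model is visible even here: the person's identity survives while roles come and go, which a subclass-per-role design cannot express.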
••
TL;DR: A revision of Chidamber and Kemerer's metrics that can be applied to software constructed by reusing software components is proposed, and an analysis of data collected from the development of object-oriented software using a GUI framework is given.
Abstract: Measuring software products and processes is essential for improving software productivity and quality. In order to evaluate the complexity of object-oriented software, several complexity metrics have been proposed. Among them, Chidamber and Kemerer's metrics are the most well-known for object-oriented software. Their metrics evaluate the complexity of classes in terms of internal, inheritance, and coupling complexity. Though the reused classes of a class library usually have better quality than newly developed ones, their metrics deal with inheritance and coupling complexity in the same way. This article first proposes a revision of Chidamber and Kemerer's metrics that can be applied to software constructed by reusing software components. Then, we give an analysis of data collected from the development of object-oriented software using a GUI framework. We compare the original metrics with the revised ones by evaluating the accuracy of estimating the effort to fix faults and show the validity and usefulness of the revised metrics.
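The abstract does not spell out the revision itself, so the sketch below only illustrates the kind of adjustment it motivates: weighting coupling to reused library classes differently from coupling to newly developed classes in a CBO-like measure. The weights and class names are assumptions.

```python
def weighted_cbo(couplings, reused, w_new=1.0, w_reused=0.2):
    """couplings: class -> set of classes it is coupled to.
    reused: the set of classes taken as-is from a library/framework.
    Coupling to a reused class counts less, reflecting its better
    quality (assumed weights, for illustration only)."""
    return {
        cls: sum(w_reused if other in reused else w_new for other in deps)
        for cls, deps in couplings.items()
    }

couplings = {
    "OrderDialog": {"JFrame", "JButton", "OrderModel"},  # mostly GUI reuse
    "OrderModel":  {"Validator", "Persistence"},         # all new code
}
reused = {"JFrame", "JButton"}
print(weighted_cbo(couplings, reused))
# {'OrderDialog': 1.4, 'OrderModel': 2.0}
```

Under the unweighted original metric both classes would score similarly, even though OrderDialog's coupling is mostly to well-tested framework code; this is the distinction the paper's revision targets.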
••
TL;DR: By integrating heterogeneous data it is often possible to discover new information at a finer level of granularity than that available in any of the contributing data sources, and a system may be enabled to induce new rules based on information at a finer level of granularity than would be possible without integration.
Abstract: It is commonly the case that a distributed database holds data originating from a number of different sources. These heterogeneous data sources may provide different views of the same data, or they may be different samples from the same population. In each case, a methodology is provided for combining data which may be held at different levels of granularity. Variations in granularity arise due to the use of different levels within a concept hierarchy or the use of different concept hierarchies. Data integration is accomplished using the intersection hypergraph to produce the integrated universal classification scheme and to determine the cardinalities of each category within the universal table. In the commonly occurring case of continuous or ordinal data, an explicit and efficient computational algorithm is presented. By integrating heterogeneous data it is often possible to discover new information at a finer level of granularity than that available in any of the contributing data sources. A system may thus be enabled to induce new rules based on information at a finer level of granularity than would be possible without integration.
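For the ordinal/continuous case, one ingredient of such integration can be sketched simply: when two sources bin the same attribute differently, a universal scheme is the common refinement of their cut-points. The paper's intersection-hypergraph construction is more general than this toy version, and the bin values below are invented.

```python
def universal_bins(cuts_a, cuts_b):
    """Merge the cut-points of two binning schemes into their common
    refinement, the finest classification both sources agree on."""
    return sorted(set(cuts_a) | set(cuts_b))

source_a = [0, 18, 65, 120]     # age bins: child / adult / senior
source_b = [0, 30, 60, 120]     # age bins: young / middle / old
cuts = universal_bins(source_a, source_b)
print(cuts)                      # [0, 18, 30, 60, 65, 120]
print(list(zip(cuts, cuts[1:]))) # the finer universal categories
```

The universal categories (e.g. 18-30) are finer than anything either source records on its own, which is exactly how integration yields information at a new level of granularity.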
••
TL;DR: The approach to the symbiosis of the OO and relational data models, which is built into GinisNT, a scalable OO GIS framework based on an OO-to-relational mapping algorithm, is presented.
Abstract: An object-oriented paradigm is established as the leading approach for developing non-traditional applications, such as GIS or multimedia systems. On the other hand, relational databases have dominated the area of data processing in the past decade. These two trends motivate research on integrating OO applications with relational databases. This paper presents our approach to the symbiosis of the OO and relational data models, which is built into GinisNT, a scalable OO GIS framework based on an OO-to-relational mapping algorithm. The mapping algorithm transforms classes and objects into relations and tuples and, vice versa, instantiates objects from relational databases. The methodology presented here is extremely efficient, as has been proved by a number of applications developed in GinisNT, and is at the same time cost-efficient, as it builds upon existing platforms.
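The general flavour of an OO-to-relational mapping is easy to sketch, though GinisNT's actual algorithm is not detailed in the abstract: a class's attributes become columns and each instance becomes a row. The example class, the naive TEXT column typing, and the generated SQL strings are illustrative assumptions.

```python
class Road:
    """An example domain class whose instances should persist."""
    def __init__(self, id, name, length_km):
        self.id, self.name, self.length_km = id, name, length_km

def table_ddl(cls, sample):
    """Class -> relation: one column per instance attribute.
    Columns are naively typed TEXT for the sake of the sketch."""
    cols = ", ".join(f"{attr} TEXT" for attr in vars(sample))
    return f"CREATE TABLE {cls.__name__} ({cols});"

def to_row(obj):
    """Object -> tuple: one INSERT statement per instance."""
    attrs = vars(obj)
    cols = ", ".join(attrs)
    vals = ", ".join(repr(v) for v in attrs.values())
    return f"INSERT INTO {type(obj).__name__} ({cols}) VALUES ({vals});"

r = Road(1, "E75", 320)
print(table_ddl(Road, r))   # CREATE TABLE Road (id TEXT, name TEXT, ...)
print(to_row(r))            # INSERT INTO Road (id, name, length_km) VALUES ...
```

The reverse direction, instantiating objects from query results, would read a row and pass its values back to the class constructor, completing the round trip the abstract describes.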
••
TL;DR: These two approaches to object-oriented methods and modelling languages are compared by focusing on two main areas: (1) process and lifecycle support and, predominantly, (2) metamodel and notation.
Abstract: Recent efforts have been made to coalesce object-oriented methods and object-oriented modelling languages and, at the same time, to put them on a more rigorous footing by the use of metamodelling techniques. Two so-called third-generation approaches, OPEN (a full methodology) and UML (a modelling language), are described and compared here. These two approaches are compared by focusing on two main areas: (1) process and lifecycle support and, predominantly, (2) metamodel and notation.