
Showing papers in "Information & Software Technology in 2000"


Journal ArticleDOI
TL;DR: The purpose of this research and report is to investigate the key players and their roles along with the existing methods and obstacles in Requirements Elicitation and concentrate on emphasizing key activities and methods for gathering information, as well as offering new approaches and ideas for improving the transfer and record of this information.
Abstract: Requirements engineering is one of the most crucial steps in the software development process. Without a well-written requirements specification, developers do not know what to build, users do not know what to expect, and there is no way to validate that the created system actually meets the original needs of the user. Much of the emphasis in the recent attention to a software engineering discipline has centered on the formalization of software specifications and their flowdown to system design and verification. Undoubtedly, the incorporation of such sound, complete, and unambiguous traceability is vital to the success of any project. However, it has been our experience through years of work (on both sides) within the government and private-sector military industrial establishment that many projects fail even before they reach the formal specification stage. That is because too often the developer does not truly understand or address the real requirements of the user and his environment. The purpose of this research and report is to investigate the key players and their roles, along with the existing methods and obstacles in Requirements Elicitation. The article will concentrate on emphasizing key activities and methods for gathering this information, as well as offering new approaches and ideas for improving the transfer and recording of this information. Our hope is that this article will become an informal policy reminder/guideline for engineers and project managers alike. The success of our products and systems is largely determined by our attention to the human dimensions of the requirements process. We hope this article will bring attention to this oft-neglected element in software development and encourage discussion about how to effectively address the issue. © 2000 Elsevier Science B.V. All rights reserved.

182 citations


Journal ArticleDOI
TL;DR: Using multi-company data, the OLS regression model provided significantly more accurate results than Analogy-based predictions, and it was found in general that models based on the company-specific data resulted in significantly more accurate estimates.
Abstract: This research examined the use of the International Software Benchmarking Standards Group (ISBSG) repository for estimating effort for software projects in an organization not involved in ISBSG. The study investigates two questions: (1) What are the differences in accuracy between ordinary least-squares (OLS) regression and Analogy-based estimation? (2) Is there a difference in accuracy between estimates derived from the multi-company ISBSG data and estimates derived from company-specific data? Regarding the first question, we found that OLS regression performed as well as Analogy-based estimation when using company-specific data for model building. Using multi-company data the OLS regression model provided significantly more accurate results than Analogy-based predictions. Addressing the second question, we found in general that models based on the company-specific data resulted in significantly more accurate estimates.
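As a rough, hedged illustration of the two techniques being compared, the sketch below contrasts an ordinary least-squares regression on log-transformed size with a simple analogy-based (nearest-neighbour) estimator; the project data and variables are invented for illustration, not drawn from the ISBSG repository.

```python
# Minimal sketch (invented toy data, not ISBSG): effort estimation by
# OLS regression on log(size) vs. analogy (nearest neighbours on size).
import numpy as np

# Hypothetical historical projects: size in function points, effort in person-hours
size = np.array([100, 250, 400, 620, 800, 1200], dtype=float)
effort = np.array([900, 2100, 3500, 5200, 7000, 11000], dtype=float)

def ols_estimate(new_size):
    # Fit log(effort) = a + b*log(size) by ordinary least squares
    X = np.column_stack([np.ones_like(size), np.log(size)])
    coeffs, *_ = np.linalg.lstsq(X, np.log(effort), rcond=None)
    return float(np.exp(coeffs[0] + coeffs[1] * np.log(new_size)))

def analogy_estimate(new_size, k=2):
    # Average the effort of the k most similar (closest-sized) past projects
    nearest = np.argsort(np.abs(size - new_size))[:k]
    return float(effort[nearest].mean())

print(ols_estimate(500), analogy_estimate(500))
```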

163 citations


Journal ArticleDOI
TL;DR: Argo/UML, an object-oriented design tool using the unified modeling language (UML) design notation, is described, which supports several identified cognitive needs of software designers in the form of design tool features.
Abstract: Software design is a cognitively challenging task. Most software design tools provide support for editing, viewing, storing, and transforming designs, but lack support for the essential and difficult cognitive tasks facing designers. These cognitive tasks include decision-making, decision ordering, and task-specific design understanding. This paper describes Argo/UML, an object-oriented design tool using the unified modeling language (UML) design notation. Argo/UML supports several identified cognitive needs of software designers. This support is provided in the form of design tool features. We describe each feature in the context of Argo/UML and provide enough detail to enable other tool builders to provide similar support in their own tools. We also discuss our implementation of the UML and XMI standards, and our development approach.

105 citations


Journal ArticleDOI
TL;DR: A set of measure axioms is presented whose sufficiency is guaranteed by measurement theory, used in mathematics to define measures of distance.
Abstract: Axiomatic approaches to software measurement present sets of necessary, but not sufficient measure axioms. The insufficiency of the measure axioms implies that they are useful to invalidate existing software measures, but not to validate them. In this paper, a set of measure axioms is presented whose sufficiency is guaranteed by measurement theory. The axioms referred to are the metric axioms, used in mathematics to define measures of distance. We present a constructive procedure that defines software measures satisfying these axioms. As an illustration of distance-based software measurement, a measure is defined for the aggregation coupling of object classes.
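For reference, the metric axioms referred to are the standard mathematical distance axioms; a distance-based software measure d over artefacts x, y, z would have to satisfy:

```latex
% Standard metric (distance) axioms for a measure d on artefacts x, y, z
\begin{align*}
d(x,y) &\ge 0 && \text{(non-negativity)}\\
d(x,y) &= 0 \iff x = y && \text{(identity of indiscernibles)}\\
d(x,y) &= d(y,x) && \text{(symmetry)}\\
d(x,z) &\le d(x,y) + d(y,z) && \text{(triangle inequality)}
\end{align*}
```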

104 citations


Journal ArticleDOI
TL;DR: A novel genetically trained neural network (NN) predictor trained on historical data is presented, demonstrating substantial improvement in prediction accuracy by the neuro-genetic approach as compared to both a regression-tree-based conventional approach and a backpropagation-trained NN approach reported recently.
Abstract: Prediction of resource requirements of a software project is crucial for the timely delivery of quality-assured software within a reasonable timeframe. Many conventional (model-based) and AI-oriented (model-free) resource estimators have been proposed in the recent past. This paper presents a novel genetically trained neural network (NN) predictor trained on historical data. We demonstrate substantial improvement in prediction accuracy by the neuro-genetic approach as compared to both a regression-tree-based conventional approach and a backpropagation-trained NN approach reported recently. The superiority of this new predictor is established using n-fold cross validation and Student's t-test on various partitions of merged Cocomo and Kemerer data sets incorporating data from 78 real-life software projects. © 2000 Elsevier Science B.V. All rights reserved.

Keywords: Neuro-genetic prediction; Neural network predictor; Genetically trained neural network

1. Introduction

Reasonably accurate prediction of software development effort has a profound effect on all stages of the software development cycle. Underestimates of resource requirements for a software project lead to: (a) underestimation of the cost; (b) an unrealistic time schedule; (c) considerable work pressure on the engineers; and (d) compromises in development methodology, documentation and testing. On the other hand, overestimates are likely to cause: (a) a lost contract due to prohibitive costs; (b) over-allocation of engineers to the project, leading to constraints on other projects; (c) low productivity levels of engineers; and (d) an easy-going work habit in the organization. Resource requirement prediction for software projects is, therefore, an active research area.

Various conventional model-based methods have met with limited success, whereas intelligent prediction using neurocomputing has proven its worth in many diverse application areas [1]. McCullagh et al. [2] have used a neural network (NN) to estimate rainfall in Australia and have reported results superior to a conventional model-based approach. NN predictors are playing major roles in diverse applications and are being successfully applied to load forecasting, medical diagnosis, communications, robot navigation, software production etc.; for example, see Ref. [3]. Recently, software engineers have started using NNs in various stages of software production with significant success. Karunanithi [4] has applied NNs to software reliability prediction in the presence of code churn. This work is a major step forward in software reliability estimation, since the conventional reliability growth models made the unrealistic assumption that the complete code for the system is available before testing starts and that the code remains frozen during testing. Due to their power of generalization, NNs are able to accurately predict reliability in the presence of code churn. In a unique application of an NN-based classifier, Khoshgoftaar et al. [5] have developed a system for identifying high-risk, error-prone modules early in the development cycle to allow optimal resource allocation for the modules. Specification-level software size estimates have been obtained by Hakkarainen et al. [6] by training an NN with structured analysis (SA) descriptions as inputs and size metric values as outputs. The authors used training and test data sets consisting of randomly generated SA descriptions as input data and corresponding algorithm-based size metric values as output data. The size metrics used in their experiments were DeMarco's Function Bang metric, Albrecht's Function Points and Symons' Mark II Function Points. Function Bang is based on the complexity of data flows and the types of operation on these data flows. It measures the number of data-tokens around the boundary of various functional primitives in a data flow diagram; whereas,
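This excerpt does not spell out the network architecture or genetic-algorithm settings, so the following sketch only illustrates the general idea of genetic training of an NN predictor: a population of weight vectors for a small one-hidden-layer network is evolved against prediction error. The data, architecture, population size and mutation scheme are all arbitrary assumptions, not the settings used in the paper.

```python
# Illustrative sketch of a genetically trained neural network predictor
# (toy data and all GA/NN parameters are invented for illustration).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical project data: [size, experience] -> effort, scaled to O(1)
X = np.array([[10, 3], [25, 2], [40, 4], [60, 1], [80, 3]], dtype=float)
y = np.array([30, 90, 120, 260, 310], dtype=float)
Xs, ys = X / X.max(axis=0), y / y.max()

N_IN, N_HID = Xs.shape[1], 4
N_W = N_IN * N_HID + N_HID                       # hidden weights + output weights

def predict(w, x):
    W1 = w[:N_IN * N_HID].reshape(N_IN, N_HID)
    w2 = w[N_IN * N_HID:]
    return np.tanh(x @ W1) @ w2

def fitness(w):
    return -np.mean((predict(w, Xs) - ys) ** 2)  # negative MSE, higher is better

pop = rng.normal(size=(30, N_W))                 # initial random population
for _ in range(300):
    ranked = pop[np.argsort([fitness(w) for w in pop])]
    parents = ranked[-10:]                       # keep the 10 fittest individuals
    children = parents[rng.integers(0, 10, 20)] + rng.normal(scale=0.1, size=(20, N_W))
    pop = np.vstack([parents, children])         # next generation (mutation only)

best = max(pop, key=fitness)
print(predict(best, np.array([50, 2]) / X.max(axis=0)) * y.max())
```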

102 citations


Journal ArticleDOI
TL;DR: A simulation model was developed to demonstrate the impact of unstable software requirements on project performance, and to analyse how much money should be invested in stabilising software requirements in order to achieve optimal cost effectiveness.
Abstract: During the last decade, software process simulation has been used to address a variety of management issues and questions. These include: understanding; training and learning; planning; control and operational management; strategic management; process improvement and technology adoption. This paper presents a simulation model that was developed to demonstrate the impact of unstable software requirements on project performance, and to analyse how much money should be invested in stabilising software requirements in order to achieve optimal cost effectiveness. The paper reports on all steps of model building, describes the structure of the final simulation model, and presents the most interesting simulation results of an industrial application. © 2000 Elsevier Science B.V. All rights reserved.
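The published model is a full software-process simulation; the fragment below merely sketches the cost trade-off it investigates, using an invented relationship between the stabilisation budget, residual requirements changes and rework cost.

```python
# Toy sketch of the trade-off explored by the simulation model: money spent
# on stabilising requirements vs. rework caused by late requirement changes.
# The cost functions and constants below are invented for illustration only.

def total_cost(stabilisation_budget, base_changes=40, rework_per_change=5.0):
    # Assume each unit of budget removes a diminishing share of late changes
    residual_changes = base_changes / (1.0 + 0.1 * stabilisation_budget)
    return stabilisation_budget + residual_changes * rework_per_change

budgets = range(0, 201, 10)
best = min(budgets, key=total_cost)
print(f"most cost-effective stabilisation budget (toy model): {best}")
```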

100 citations


Journal ArticleDOI
TL;DR: This study conducted a survey of software developers in New England to obtain broader insight into software process improvement practices, examined the impact of SPI methodologies on quality factors, and compared that impact to the importance of the quality factors for software developers.
Abstract: Despite all the attention that software process improvement (SPI) practices have received, there is no solid evidence of how extensively they are used across organizations, and their impact on quality, cost, and on-time delivery. The findings of previous studies are based on case studies, often assessing the effectiveness of a particular methodology in a large company. In our attempt to obtain a broader insight into the software process improvement practices, we conducted a survey targeted at software developers in New England. We collected 67 responses and used descriptive statistics to analyze the survey results. In addition, we examined the impact of SPI methodologies on quality factors and compared the impact to the importance of quality factors for software developers. The Spearman correlation coefficient was used to determine the degree of correlation between the two.
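For readers unfamiliar with the statistic mentioned, the sketch below shows how a Spearman rank correlation between perceived SPI impact on quality factors and their perceived importance might be computed; all ratings are invented, not taken from the survey.

```python
# Spearman rank correlation between two sets of ratings (invented numbers,
# only to illustrate the statistic used in the survey analysis).
from scipy.stats import spearmanr

# Hypothetical mean ratings per quality factor (e.g. reliability, usability, ...)
impact_of_spi = [4.1, 3.2, 3.8, 2.9, 4.4, 3.0]
importance = [4.5, 3.0, 4.0, 2.5, 4.2, 3.3]

rho, p_value = spearmanr(impact_of_spi, importance)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```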

83 citations


Journal ArticleDOI
TL;DR: The principles are illustrated and explained by several examples, drawing on object-oriented and mathematical modeling languages, and it is conjectured that the principles are applicable to the development of new modeling languages and for improving the design of existing modeling languages.
Abstract: Modeling languages, like programming languages, need to be designed if they are to be practical, usable, accepted, and of lasting value. We present principles for the design of modeling languages. To arrive at these principles, we consider the intended use of modeling languages. We conjecture that the principles are applicable to the development of new modeling languages, and for improving the design of existing modeling languages that have evolved, perhaps through a process of unification. The principles are illustrated and explained by several examples, drawing on object-oriented and mathematical modeling languages.

83 citations


Journal ArticleDOI
TL;DR: It is concluded that recording the GSCs may be useful for understanding project cost drivers and for comparing similar projects, but the VAF should not be used: doubts about its construction are not balanced by any practical benefit.
Abstract: In function point analysis, fourteen “general systems characteristics” (GSCs) are used to construct a “value adjustment factor” (VAF), with which a basic function point count is adjusted. Although the GSCs and VAF have been criticized on both theoretical and practical grounds, they are used by many practitioners. This paper reports on an empirical investigation into their use and practical value. We conclude that recording the GSCs may be useful for understanding project cost drivers and for comparing similar projects, but the VAF should not be used: doubts about its construction are not balanced by any practical benefit. A new formulation is needed for using the GSCs to explain effort; factors identified here could guide further research.
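For context, in conventional IFPUG-style function point analysis each of the fourteen GSCs is rated with a degree of influence from 0 to 5, and the VAF scales the unadjusted count roughly as follows (DI_i is the recorded degree of influence, UFP the unadjusted and AFP the adjusted function point count):

```latex
% Value adjustment factor from the 14 general systems characteristics
\mathrm{VAF} = 0.65 + 0.01 \sum_{i=1}^{14} \mathrm{DI}_i, \qquad
\mathrm{AFP} = \mathrm{UFP} \times \mathrm{VAF}
```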

56 citations


Journal ArticleDOI
TL;DR: This paper is focused on qualitative and quantitative comparison of two distributed object models for use with Java: CORBA and RMI, and finds that because of its complexity CORBA is slightly slower than RMI in simple scenarios.
Abstract: Distributed object architectures and Java are important for building modern, scalable, web-enabled applications. This paper is focused on qualitative and quantitative comparison of two distributed object models for use with Java: CORBA and RMI. We compare both models in terms of features, ease of development and performance. We present performance results based on real world scenarios that include single client and multi-client configurations, different data types and data sizes. We evaluate multithreading strategies and analyse code in order to identify the most time-consuming methods. We compare the results and give hints and conclusions. We have found that because of its complexity CORBA is slightly slower than RMI in simple scenarios. On the other hand, CORBA handles multiple simultaneous clients and larger data amounts better and suffers from far lower performance degradation under heavy client load. The article presents a solid basis for making a decision about the underlying distributed object model.

55 citations


Journal ArticleDOI
TL;DR: The research thesis is that software development based on a software reuse reference model improves the competitive edge and time-to-market of many software development enterprises.
Abstract: In software engineering there is a need for technologies that will significantly decrease effort in developing software products, increase quality of software products and decrease time-to-market. The software development industry can be improved by utilizing and managing software reuse with an “empirically validated reference model” that can be customized for different kinds of software development enterprises. Our research thesis is that software development based on a software reuse reference model improves the competitive edge and time-to-market of many software development enterprises. The definition and study of such a model has been carried out using four steps. First, the reference model developed here is based on existing software reuse concepts. Second, the reference model is grounded in an empirical study which uses both legacy studies and lessons-learned studies. Third, the impact of the reference model on software development effort, quality, and time-to-market is empirically derived. Fourth, an initial set of successful cases, which are based on the software reuse reference model utilization, is identified. The main contribution of this paper is a reference model for the practice of software reuse. A secondary contribution is an initial set of cases from software development enterprises which are successful in the practice of reuse in terms of decreased effort, increased quality and a high correlation in their application of our software reuse reference model activities.

Journal ArticleDOI
TL;DR: The rationale for a component-based approach to developing software engineering tools, the architecture and support tools used, some resultant tools and tool facilities developed, and the possible future research directions in this area are described.
Abstract: Developing software engineering tools is a difficult task, and the environments in which these tools are deployed continually evolve as software developers’ processes, tools and tool sets evolve. To more effectively develop such evolvable environments, we have been using component-based approaches to build and integrate a range of software development tools, including CASE and workflow tools, file servers and versioning systems, and a variety of reusable software agents. We describe the rationale for a component-based approach to developing such tools, the architecture and support tools we have used, some resultant tools and tool facilities we have developed, and summarise the possible future research directions in this area.

Journal ArticleDOI
TL;DR: The presentation of the decision-making pattern is the purpose of this paper and will help developers and stakeholders to make decisions in order to reach their intentions.
Abstract: During enterprise knowledge development in any organisation, developers and stakeholders are faced with situations that require them to make decisions in order to reach their intentions. To help the decision-making process, guidance is required. Enterprise Knowledge Development (EKD) is a method offering a guided knowledge development process. The guidance provided by the EKD method is based on a decision-making pattern promoting a situation and intention oriented view of enterprise knowledge development processes. The pattern is iteratively repeated through the EKD process using different types of guiding knowledge. Consequently, the EKD process is systematically guided. The presentation of the decision-making pattern is the purpose of this paper.

Journal ArticleDOI
TL;DR: Model-based testing allows large numbers of test cases to be generated from a description of the behavior of the system under test, leading to a more effective and more efficient testing process.
Abstract: Model-based testing allows large numbers of test cases to be generated from a description of the behavior of the system under test. Given the same description and test runner, many types of scenarios can be exercised and large areas of the application under test can be covered, thus leading to a more effective and more efficient testing process.
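As a small, hypothetical illustration of the idea (not the tooling behind the article), the sketch below generates test sequences automatically from a behaviour description expressed as a finite state machine; the model, states and actions are invented.

```python
# Minimal sketch of model-based test generation: random walks over a
# finite-state description of the system under test (hypothetical model).
import random

# Behaviour model: state -> {action: next_state}
MODEL = {
    "logged_out": {"login": "logged_in"},
    "logged_in": {"open_file": "editing", "logout": "logged_out"},
    "editing": {"save": "logged_in", "close": "logged_in"},
}

def generate_test(length=6, start="logged_out", seed=None):
    rng = random.Random(seed)
    state, steps = start, []
    for _ in range(length):
        action, next_state = rng.choice(sorted(MODEL[state].items()))
        steps.append(action)            # each chosen action becomes a test step
        state = next_state
    return steps

for i in range(3):
    print(generate_test(seed=i))        # many test cases from one model
```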

Journal ArticleDOI
TL;DR: A conceptual model of nine ‘learning enablers’ to facilitate learning in software projects is presented, which helps identify whether individual and/or organisational learning is facilitated.
Abstract: The importance of people factors for the success of software development is commonly accepted, because the success of a software project is above all determined by having the right people in the right place at the right time. As software development is a knowledge-intensive industry, the ‘quality’ of developers is primarily determined by their knowledge and skills. This paper presents a conceptual model of nine ‘learning enablers’ to facilitate learning in software projects. These enablers help identify whether individual and/or organisational learning is facilitated. The main question addressed in this paper is: ‘Which factors enable learning in software projects and to what extent?’ © 2000 Elsevier Science B.V. All rights reserved.

Journal ArticleDOI
TL;DR: An integrated solution through which significant improvement may be achieved is discussed, based on the Multiple Criteria Decision Aid methodology and the exploitation of packaged software evaluation expertise in the form of an intelligent system.
Abstract: Solving software evaluation problems is a particularly difficult software engineering process and many contradictory criteria must be considered to reach a decision. Nowadays, the way that decision support techniques are applied suffers from a number of severe problems, such as naive interpretation of sophisticated methods and generation of counter-intuitive, and therefore most probably erroneous, results. In this paper we identify some common flaws in decision support for software evaluations. Subsequently, we discuss an integrated solution through which significant improvement may be achieved, based on the Multiple Criteria Decision Aid methodology and the exploitation of packaged software evaluation expertise in the form of an intelligent system. Both common mistakes and the way they are overcome are explained through a real world example.

Journal ArticleDOI
TL;DR: The proposed system measures the similarity between requirement sentences to identify possible redundancies and inconsistencies, and extracts possibly ambiguous requirements, helping to trace dependencies between documents and improve the quality of requirement sentences.
Abstract: As software becomes more complicated and larger, the software engineer's requirements analysis becomes an important and demanding activity. This paper proposes a supporting system for informal requirements analysis. The proposed system measures the similarity between requirement sentences to identify possible redundancies and inconsistencies, and extracts possibly ambiguous requirements. The similarity measurement method combines a sliding window model and a parser model. Using these methods, the proposed system helps trace dependencies between documents and improve the quality of requirement sentences. The efficiency of the proposed system and a process for requirement-specification analysis using the system are presented.
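A minimal sketch of the sliding-window half of such a similarity measurement is shown below (the parser model is omitted); the requirement sentences, window size and overlap measure are invented for illustration.

```python
# Rough sketch of a sliding-window similarity between requirement sentences:
# compare the sets of overlapping word windows (the paper also combines this
# with a parser model, which is omitted here).

def windows(sentence, size=2):
    words = sentence.lower().split()
    return {tuple(words[i:i + size]) for i in range(len(words) - size + 1)}

def similarity(s1, s2, size=2):
    w1, w2 = windows(s1, size), windows(s2, size)
    if not w1 or not w2:
        return 0.0
    return len(w1 & w2) / len(w1 | w2)   # Jaccard overlap of word windows

r1 = "The system shall log every failed login attempt"
r2 = "Every failed login attempt shall be logged by the system"
print(f"similarity = {similarity(r1, r2):.2f}")  # flag a possible redundancy
```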

Journal ArticleDOI
TL;DR: It is found that once an OO project starts, the metrics can give good indications of project progress, e.g. how mature the design and implementation is, which can be used to adjust the project plan in real time.
Abstract: Software metrics have been used to measure software artifacts statically—measurements are taken after the artifacts are created. In this study, three metrics—System Design Instability (SDI), Class Implementation Instability (CII), and System Implementation Instability (SII)—are used for the purpose of measuring object-oriented (OO) software evolution. The metrics are used to track the evolution of an OO system in an empirical study. We found that once an OO project starts, the metrics can give good indications of project progress, e.g. how mature the design and implementation is. This information can be used to adjust the project plan in real time. We also performed a study of design instability that examines how the implementation of a class can affect its design. This study determines that some aspects of OO design are independent of implementation, while other aspects are dependent on implementation.

Journal ArticleDOI
TL;DR: A framework for the comparison of proposals for information integration systems is presented, and it is shown that proposals differ greatly in all of the criteria stated and that the selection of an approach is thus highly dependent on the requirements of specific applications.
Abstract: Information integration systems provide facilities that support access to heterogeneous information sources in a way that isolates users from differences in the formats, locations and facilities of those sources. A number of systems have been proposed that exploit knowledge based techniques to assist with information integration, but it is not always obvious how proposals differ from each other in their scope, in the quality of integration afforded, or in the cost of exploitation. This paper presents a framework for the comparison of proposals for information integration systems, and applies the framework to a range of representative proposals. It is shown that proposals differ greatly in all of the criteria stated and that the selection of an approach is thus highly dependent on the requirements of specific applications.

Journal ArticleDOI
TL;DR: The paper introduces a new classification scheme for assessing software projects and illustrates how the method may be used to predict software success using subjective measures of project characteristics.
Abstract: This paper presents a method for using subjective factors to evaluate project success. The method is based on collection of subjective measures with respect to project characteristics and project success indicators. The paper introduces a new classification scheme for assessing software projects. Further, it is illustrated how the method may be used to predict software success using subjective measures of project characteristics. The classification scheme is illustrated in two case studies. The results are positive and encouraging for future development of the approach.

Journal ArticleDOI
TL;DR: The outcome of a case study aimed at assessing organisational obstacles influencing successful application of CBD in the industry is presented, including cognitive skills, disincentives, organisational politics and organisational culture.
Abstract: This paper discusses some human, social and organisational issues affecting the introduction of Component-Based Development (CBD) in organisations. In particular, the paper presents the outcome of a case study aimed at assessing organisational obstacles influencing successful application of CBD in the industry. We present some organisational problems experienced by three organisations in adopting and implementing CBD, including cognitive skills, disincentives, organisational politics and organisational culture. In each case we suggest some solutions that developers and managers should consider in order to minimise these organisational problems. We suggest that applying social–technical approaches can minimise the impact of these organisational obstacles. Examples of social–technical approaches include maintaining a relationship with customers throughout the development process and eliciting support from key sponsors and stakeholders.

Journal ArticleDOI
TL;DR: It is concluded that processes, tools and technologies that reduce either the need for testing or the time spent testing have an impact on the development and evolution lead-time of DRTSs.
Abstract: This paper presents a survey that identifies lead-time consumption in the development and evolution of distributed real-time systems (DRTSs). Data has been collected through questionnaires, focused interviews and non-directive interviews with senior designers. Quantitative data has been analyzed using the Analytic Hierarchy Process (AHP). A trend in the 11 organizations is that there is a statistically significant shift of the main lead-time burden from programming to integration and testing when systems are distributed. From this, it is concluded that processes, tools and technologies that reduce either the need for testing or the time spent testing have an impact on the development and evolution lead-time of DRTSs.
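For readers unfamiliar with AHP, the sketch below shows the usual eigenvector step for deriving priority weights from a pairwise comparison matrix; the matrix values are invented and unrelated to the survey data.

```python
# Sketch of the Analytic Hierarchy Process (AHP) step used to weight factors:
# priorities from a pairwise comparison matrix (values invented for illustration).
import numpy as np

# A[i, j] = how much more important factor i is judged to be than factor j
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = principal / principal.sum()          # normalised priority vector

# Consistency index of the judgements: (lambda_max - n) / (n - 1)
consistency_index = (np.real(eigvals).max() - len(A)) / (len(A) - 1)
print(weights, consistency_index)
```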

Journal ArticleDOI
TL;DR: The model is extended to allow use of testing data on prior builds to cover the real-world scenario in which the release build is constructed only after a succession of repairs to buggy pre-release builds, to enable reliability prediction for future builds.
Abstract: In previous work we developed a method to model software testing data, including both failure events and correct behavior, as a finite-state, discrete-parameter, recurrent Markov chain. We then showed how direct computation on the Markov chain could yield various reliability related test measures. Use of the Markov chain allows us to avoid common assumptions about failure rate distributions and allows both the operational profile and test coverage of behavior to be explicitly and automatically incorporated into reliability computation. Current practice in Markov chain based testing and reliability analysis uses only the testing (and failure) activity on the most recent software build to estimate reliability. In this paper we extend the model to allow use of testing data on prior builds to cover the real-world scenario in which the release build is constructed only after a succession of repairs to buggy pre-release builds. Our goal is to enable reliability prediction for future builds using any or all testing data for prior builds. The technique we present uses multiple linear regression and exponential smoothing to merge multi-build test data (modeled as separate Markov chains) into a single Markov chain which acts as a predictor of the next build of testing activity. At the end of the testing cycle, the predicted Markov chain represents field use. It is from this chain that reliability predictions are made.
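The regression-plus-smoothing scheme is detailed in the paper itself; the sketch below shows only the exponential-smoothing part of the idea, blending per-build transition-matrix estimates into a single predictive chain, with invented matrices and smoothing constant.

```python
# Sketch of merging per-build Markov chain estimates by exponential smoothing
# (the paper also uses multiple linear regression; only the smoothing idea is
# shown here, with invented transition matrices and smoothing constant).
import numpy as np

# Transition matrices estimated from testing on three successive builds
builds = [
    np.array([[0.7, 0.3], [0.4, 0.6]]),
    np.array([[0.8, 0.2], [0.5, 0.5]]),
    np.array([[0.85, 0.15], [0.55, 0.45]]),
]

alpha = 0.6                                   # weight given to the newest build
predicted = builds[0]
for P in builds[1:]:
    predicted = alpha * P + (1 - alpha) * predicted

predicted = predicted / predicted.sum(axis=1, keepdims=True)  # re-normalise rows
print(predicted)                              # predictor for the next build
```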

Journal ArticleDOI
TL;DR: The main results from a survey investigation performed in Norwegian organisations within the area of software development and maintenance are presented, finding that the amount of both traditional and functional maintenance work is significantly higher than in the similar investigation done five years earlier.
Abstract: The large amount of work on information systems being taken up by maintenance activities has been one of the arguments of those speaking about a ‘software crisis’. We have investigated the applicability of this statement, and propose instead to look at the percentage of work being done on functional maintenance to assess the efficiency of the information systems support in an organisation. This paper presents the main results from a survey investigation performed in Norwegian organisations within the area of software development and maintenance. The results are based on responses from 53 Norwegian organisations. The investigation is compared with other investigations, both a similar investigation conducted in Norway in 1994 and investigations performed in other countries. As in the 1994 investigation, the situation is better when looking at it from a functional point of view rather than using the traditional maintenance measures. Somewhat surprisingly, the amount of both traditional and functional maintenance work is significantly higher than in the similar investigation done five years earlier. It is also significantly higher than what was found in earlier investigations carried out in the USA and in other countries. One reason for this seems to be the extra maintenance and replacement-oriented work necessary to deal with the Y2K problem. Even when considering this, too many of the scarce IT personnel spend their time on tasks that do not add value for the users of the systems.

Journal ArticleDOI
TL;DR: The successful use of MetaMOOSE to construct a full lifecycle CASE toolset (MOOSE) and its subsequent use in real world engineering projects is described.
Abstract: This paper describes certain problems which can occur when attempting to build complex CASE tools with facilities not envisaged by the metatool builders. A solution, based upon an object-oriented approach combined with an interpreted OO language, has been used to build the MetaMOOSE MetaCASE tool. MetaMOOSE uses an object model to describe the entities and behaviour of the SE development process. Use of the Itcl language gives platform independence and speeds the tool development cycle. A persistent object database ensures integration of the resulting CASE tools. In addition, the successful use of MetaMOOSE to construct a full lifecycle CASE toolset (MOOSE) and its subsequent use in real-world engineering projects is described.

Journal ArticleDOI
TL;DR: This paper describes the TA Exchange Format (TAXForm), an exchange format for frameworks at the software architecture level, and shows how TAXForm can be used as a "binding glue" to achieve interoperability between these frameworks without having to modify their internal structure.
Abstract: A number of standalone tools are designed to help developers understand software systems. These tools operate at different levels of abstraction, from low-level source code to software architectures. Although recent proposals have suggested how code-level frameworks can share information, little attention has been given to the problem of connecting software architecture level frameworks. In this paper, we describe the TA Exchange Format (TAXForm), an exchange format for frameworks at the software architecture level. By defining mappings between TAXForm and formats that are used within existing frameworks, we show how TAXForm can be used as a “binding glue” to achieve interoperability between these frameworks without having to modify their internal structure.

Journal ArticleDOI
TL;DR: Given a Markov chain usage model as a system of convex constraints, mathematical programming can be used to generate theMarkov chain transition probabilities that represent a specific software usage model.
Abstract: Software usage models are the basis for statistical testing. They derive their structure from specifications and their probabilities from evolving knowledge about the intended use of the software product. The evolving knowledge comes from developers, customers and testers of the software system in the form of relationships that should hold among the parameters of a model. When software usage models are encoded as Markov chains, their structure can be represented by a system of linear constraints, and many of the evolving relationships among model parameters can be represented by convex constraints. Given a Markov chain usage model as a system of convex constraints, mathematical programming can be used to generate the Markov chain transition probabilities that represent a specific software usage model.
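As a tiny, assumed example of the mathematical-programming idea (the states, constraints and objective are invented, and a linear programme stands in for the more general convex case), transition probabilities can be generated subject to the structural row-sum constraints plus an analyst-supplied relationship:

```python
# Tiny illustration of generating usage-model transition probabilities by
# mathematical programming (states, constraints and objective are invented).
from scipy.optimize import linprog

# Unknowns: p = [p_AB, p_AC, p_BA, p_BC] for states A, B (C is terminal).
# Structural constraints: outgoing probabilities of A and of B each sum to 1.
A_eq = [[1, 1, 0, 0],
        [0, 0, 1, 1]]
b_eq = [1, 1]

# Analyst knowledge: transition A->B should be at least twice A->C,
# i.e. p_AB - 2*p_AC >= 0, written as -p_AB + 2*p_AC <= 0.
A_ub = [[-1, 2, 0, 0]]
b_ub = [0]

# Objective (arbitrary): maximise p_BC, the flow towards the terminal state.
c = [0, 0, 0, -1]                      # linprog minimises, so negate

result = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
print(result.x)                        # one admissible usage model
```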

Journal ArticleDOI
TL;DR: A set of indicators for the evaluation of Workflow software-type products within the context of Information Systems, based on a comprehensive bibliographical review of all topics referring to the Workflow Technology and Information Systems is presented.
Abstract: The main objective of this paper is to propose a set of indicators for the evaluation of Workflow software-type products within the context of Information Systems. This paper is mainly based on a comprehensive bibliographical review of all topics referring to Workflow Technology and Information Systems. Next, sets of indicators are presented for the selection of Workflow software based on the realities of the business world, including a method of examination so as to obtain an integral evaluation of the Workflow software. Finally, the evaluation method is applied to two Workflow products, Lotus Domino/Notes® and Microsoft Exchange®, for the billing subsystems of a company called MANAPRO Consultants, Inc.®

Journal ArticleDOI
TL;DR: TML provides a simple representation of usage models, while formalizing modeling techniques already in use informally, for describing these models in a manner that supports development, reuse, and automated testing.
Abstract: Finite-state Markov chains have proven useful as a model to characterize a population of uses of a software system. This paper presents a language (TML) for describing these models in a manner that supports development, reuse, and automated testing. TML provides a simple representation of usage models, while formalizing modeling techniques already in use informally.

Journal ArticleDOI
TL;DR: A brief introduction to software engineering tools is presented, and issues involved in the construction of these tools are discussed.
Abstract: A brief introduction to software engineering tools is presented, and issues involved in the construction of these tools are discussed. Some of the current issues concerning tool developers are highlighted, which include: metaCASE technology, cognitive support, evaluation and validation of tools and data interchange. Some recent developments in tool construction techniques are examined, and opportunities for further research and development in tool building are identified.