
Showing papers in "Information & Software Technology in 2001"


Journal ArticleDOI
TL;DR: It is argued that software engineering is ideal for the application of metaheuristic search techniques, such as genetic algorithms, simulated annealing and tabu search, which could provide solutions to the difficult problems of balancing competing constraints.
Abstract: This paper claims that a new field of software engineering research and practice is emerging: search-based software engineering. The paper argues that software engineering is ideal for the application of metaheuristic search techniques, such as genetic algorithms, simulated annealing and tabu search. Such search-based techniques could provide solutions to the difficult problems of balancing competing (and sometimes inconsistent) constraints and may suggest ways of finding acceptable solutions in situations where perfect solutions are either theoretically impossible or practically infeasible. In order to develop the field of search-based software engineering, a reformulation of classic software engineering problems as search problems is required. The paper briefly sets out key ingredients for successful reformulation and evaluation criteria for search-based software engineering.

761 citations
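The reformulation the paper calls for amounts to choosing a representation, a fitness function and a neighbourhood (or variation) operator. The following is a minimal hill-climbing sketch on a hypothetical test-case-ordering problem; the fault data and the fitness function are invented for illustration and are not taken from the paper.

```python
import random

# Hypothetical search problem: order test cases so that faults are found early.
# detects[i] is the (invented) set of faults revealed by test case i.
detects = [{1, 2}, {3}, {2, 4}, {1, 3, 4}, {5}]

def fitness(order):
    """Reward orderings that reveal each fault as early as possible."""
    first_seen = {}
    for position, test in enumerate(order):
        for fault in detects[test]:
            first_seen.setdefault(fault, position)
    return -sum(first_seen.values())  # earlier detection => higher fitness

def neighbour(order):
    """Neighbourhood operator: swap two test positions."""
    i, j = random.sample(range(len(order)), 2)
    new = list(order)
    new[i], new[j] = new[j], new[i]
    return new

def hill_climb(steps=1000):
    current = list(range(len(detects)))
    random.shuffle(current)
    for _ in range(steps):
        candidate = neighbour(current)
        if fitness(candidate) >= fitness(current):
            current = candidate
    return current

print(hill_climb())
```

Genetic algorithms, simulated annealing and tabu search replace the single-candidate search and the acceptance rule with their own strategies, but the representation and the fitness function remain the essential ingredients.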


Journal ArticleDOI
TL;DR: An evolutionary test environment has been developed that performs fully automatic test data generation for most structural test methods; the introduction of an approximation level for fitness evaluation of generated test data and the definition of an efficient test strategy for processing test goals increase the performance of evolutionary testing considerably.
Abstract: Testing is the most significant analytic quality assurance measure for software. Systematic design of test cases is crucial for the test quality. Structure-oriented test methods, which define test cases on the basis of the internal program structures, are widely used. A promising approach for the automation of structural test case design is evolutionary testing. Evolutionary testing searches for test data that fulfil a given structural test criterion by means of evolutionary computation. In this work, an evolutionary test environment has been developed that performs fully automatic test data generation for most structural test methods. The introduction of an approximation level for fitness evaluation of generated test data and the definition of an efficient test strategy for processing test goals increase the performance of evolutionary testing considerably.

534 citations
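In the evolutionary-testing literature, the approximation level is usually combined with a branch-distance measure to guide the search towards a structural target. The toy fitness function below sketches that general idea for an invented two-decision target; the constant K and the normalisation step are common conventions, not details taken from this paper.

```python
# Toy program under test: the target is to reach the inner branch.
#   if a > 100:        # decision 1
#       if b == a + 5: # decision 2 (target branch)
#           ...

def fitness(a, b):
    """Lower is better: the approximation level counts decisions still to be
    reached on the path to the target, and the branch distance measures how
    close the input came to flipping the critical decision."""
    K = 1.0  # penalty added when a condition is false (a common convention)

    if a > 100:                            # decision 1 taken as required
        approximation_level = 0
        branch_distance = abs(b - (a + 5)) # distance for 'b == a + 5'
    else:
        approximation_level = 1
        branch_distance = (100 - a) + K    # distance for 'a > 100'

    # Normalise the branch distance into [0, 1) so levels dominate distances.
    normalised = branch_distance / (branch_distance + 1.0)
    return approximation_level + normalised

print(fitness(50, 0))     # far from the target: level 1 plus a large distance
print(fitness(150, 155))  # target branch reached: fitness 0.0
```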


Journal ArticleDOI
TL;DR: An overview of evolutionary algorithms is presented covering genetic algorithms, evolution strategies, genetic programming and evolutionary programming, and the schema theorem is reviewed and critiqued.
Abstract: An overview of evolutionary algorithms is presented covering genetic algorithms, evolution strategies, genetic programming and evolutionary programming. The schema theorem is reviewed and critiqued. Gray codes, bit representations and real-valued representations are discussed for parameter optimization problems. Parallel Island models are also reviewed, and the evaluation of evolutionary algorithms is discussed.

431 citations


Journal ArticleDOI
TL;DR: The problem of selecting an optimal next release is shown to be NP-hard and the use of various modern heuristics to find a high quality but possibly suboptimal solution is described.
Abstract: Companies developing and maintaining complex software systems need to determine the features that should be added to their system as part of the next release. They will wish to select these features to ensure the demands of their client base are satisfied as much as possible while at the same time ensuring that they themselves have the resources to undertake the necessary development. This situation is modelled in this paper and the problem of selecting an optimal next release is shown to be NP-hard. The use of various modern heuristics to find a high quality but possibly suboptimal solution is described. Comparative studies of these heuristics are given for various test cases.

319 citations
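The underlying model is knapsack-like: each candidate feature has a development cost and a customer value, and the total cost must stay within the release budget. The greedy value-per-cost heuristic below is only a sketch of one of the simpler strategies such a comparison might include; the feature data are invented.

```python
# Hypothetical candidate features: (name, development cost, customer value).
features = [("A", 4, 9), ("B", 3, 5), ("C", 6, 10), ("D", 2, 4), ("E", 5, 6)]
budget = 10

def greedy_next_release(features, budget):
    """Pick features by value-per-cost ratio until the budget is exhausted.
    Fast, but possibly suboptimal -- which is why metaheuristics are studied."""
    chosen, remaining = [], budget
    for name, cost, value in sorted(features, key=lambda f: f[2] / f[1], reverse=True):
        if cost <= remaining:
            chosen.append(name)
            remaining -= cost
    return chosen

print(greedy_next_release(features, budget))
```

Because the problem is NP-hard, metaheuristics such as simulated annealing or genetic algorithms are applied when a greedy pass of this kind is not good enough.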


Journal ArticleDOI
TL;DR: GP has the potential to be a valid additional tool for software effort estimation, but set-up and running effort is high and interpretation difficult, as it is for any complex meta-heuristic technique.
Abstract: Accurate software effort estimation is an important part of the software process. Originally, estimation was performed using only human expertise, but more recently, attention has turned to a variety of machine learning (ML) methods. This paper attempts to evaluate critically the potential of genetic programming (GP) in software effort estimation when compared with previously published approaches, in terms of accuracy and ease of use. The comparison is based on the well-known Desharnais data set of 81 software projects derived from a Canadian software house in the late 1980s. The input variables are restricted to those available from the specification stage and significant effort is put into the GP and all of the other solution strategies to offer a realistic and fair comparison. There is evidence that GP can offer significant improvements in accuracy but this depends on the measure and interpretation of accuracy used. GP has the potential to be a valid additional tool for software effort estimation, but set-up and running effort is high and interpretation difficult, as it is for any complex meta-heuristic technique.

317 citations
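Because the conclusion "depends on the measure and interpretation of accuracy used", it may help to recall two measures that are standard in the effort-estimation literature, MMRE and Pred(25); they are not named explicitly in this abstract, so treat the choice as an assumption. A minimal sketch with invented values:

```python
# Two common accuracy measures in effort-estimation studies (illustrative data).
actual    = [520, 300, 1150, 760]   # person-hours, invented values
predicted = [480, 420, 1000, 790]

def mmre(actual, predicted):
    """Mean Magnitude of Relative Error: lower is better."""
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

def pred(actual, predicted, level=0.25):
    """Proportion of estimates within `level` (e.g. 25%) of the actual value."""
    hits = sum(abs(a - p) / a <= level for a, p in zip(actual, predicted))
    return hits / len(actual)

print(round(mmre(actual, predicted), 3), pred(actual, predicted))
```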


Journal ArticleDOI
TL;DR: One of the results of this work is that no significant deviations from the linear model were found in the software cost functions, based on the marginal cost analysis of the equations with the best predictive values.
Abstract: The question of finding a function for software cost estimation is a long-standing issue in the software engineering field. The results of other works have shown different patterns for the unknown function, which relates software size to project cost (effort). In this work, the problem has been investigated using the technique of Genetic Programming (GP) to explore the possible cost functions. Both standard regression analysis and GP have been applied and compared on several data sets. However, regardless of the method, the basic size–effort relationship does not show satisfactory results, from the predictive point of view, across all data sets. One of the results of this work is that we have not found significant deviations from the linear model in the software cost functions. This result comes from the marginal cost analysis of the equations with the best predictive values.

181 citations
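A common way to check for deviation from the linear model is to fit effort = a · size^b on a log-log scale and see how far the exponent b lies from 1 (constant marginal cost). A minimal sketch with invented data, not the paper's data sets:

```python
import numpy as np

# Invented (size in function points, effort in person-hours) observations.
size   = np.array([100, 150, 220, 310, 400, 520])
effort = np.array([820, 1190, 1800, 2500, 3300, 4200])

# Fit effort = a * size**b  <=>  log(effort) = log(a) + b * log(size).
b, log_a = np.polyfit(np.log(size), np.log(effort), 1)
a = np.exp(log_a)

print(f"effort ~ {a:.2f} * size^{b:.2f}")
# A value of b close to 1 means roughly constant marginal cost, i.e. no strong
# deviation from the linear size-effort model.
```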


Journal ArticleDOI
TL;DR: A research model was developed and tested to assess the factors that influence the use of IT by senior executives and found only a small number of antecedent variables influencing actual use, either directly or indirectly.
Abstract: There is a paucity of literature focusing on the ingredients for effective use of Information Technology (IT) by top management, i.e. senior executives. In practice, many senior executives argue that they do not see a connection between what IT does and their tasks as executives. Based on the Technology Acceptance Model (TAM), a research model was developed and tested to assess the factors that influence the use of IT by senior executives. A dedicated system supporting the task of a senior executive, an Executive Information System (EIS), was used as the IT tool under review. A large number of external variables were identified and hypothesized to influence the core elements of TAM. To test the research model using structural equation modeling, cross-sectional data was gathered from eighty-seven senior executives drawn from twenty-one European-based multinationals. The results supported the core TAM and found only a small number of antecedent variables influencing actual use, either directly or indirectly. Among the external factors identified, three key variables are under managerial control. They can be used to design organizational or managerial interventions that increase effective utilization of IT.

162 citations


Journal ArticleDOI
TL;DR: It was found that projects with high priority on costs and incomplete requirements specifications were prone to adjust the work to fit the estimate when the estimates were too optimistic, while too optimistic estimates led to effort overruns for projects with high priority on quality and well specified requirements.
Abstract: This paper presents results from two case studies and two experiments on how effort estimates impact software project work. The studies indicate that a meaningful interpretation of effort estimation accuracy requires knowledge about the size and nature of the impact of the effort estimates on the software work. For example, we found that projects with high priority on costs and incomplete requirements specifications were prone to adjust the work to fit the estimate when the estimates were too optimistic, while too optimistic estimates led to effort overruns for projects with high priority on quality and well specified requirements. Two hypotheses were derived from the case studies and tested experimentally. The experiments indicate that: (1) effort estimates can be strongly impacted by anchor values, e.g. early indications on the required effort. This impact is present even when the estimators are told that the anchor values are irrelevant as estimation information; (2) too optimistic effort estimates lead to less use of effort and more errors compared with more realistic effort estimates on programming tasks.

107 citations


Journal ArticleDOI
TL;DR: The application of stochastic production frontiers to a comprehensive firm-level panel data set provides empirical evidence that IT has a significantly positive effect on technical efficiency and, hence, contributes to the productivity growth in organizations that was claimed by some earlier studies using the same data set.
Abstract: With the vast amounts of resources being invested in information technology (IT), the issue of how to measure and manage the impact of IT on organizational performance has received increased attention. Based on the production theory in microeconomics, this paper investigates the relationship between IT investments and technical efficiency in the firm's production process. The application of stochastic production frontiers to a comprehensive firm-level panel data set provides us with empirical evidence that IT has a significantly positive effect on technical efficiency and, hence, contributes to the productivity growth in organizations that was claimed by some earlier studies using the same data set. The stochastic production frontiers considered include the popular Cobb–Douglas function and the more flexible translog function. Both specifications of production technology lead to the same conclusion. Managerial implications derived from the empirical results are also presented.

94 citations
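For reference, a Cobb–Douglas stochastic production frontier with IT capital as a separate input is typically written as below, where v_it is random noise and u_it >= 0 captures technical inefficiency (so technical efficiency is exp(-u_it)); the exact variable set and error specification used in the paper may differ.

```latex
\ln Y_{it} = \beta_0 + \beta_K \ln K_{it} + \beta_L \ln L_{it} + \beta_{IT} \ln IT_{it} + v_{it} - u_{it},
\qquad v_{it} \sim N(0, \sigma_v^2), \quad u_{it} \ge 0
```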


Journal ArticleDOI
TL;DR: A new approach based on the combination of an SPS and Evolutionary Computation is presented to provide accurate decision rules that help the project manager make decisions at any point during development.
Abstract: The use of dynamic models and simulation environments in connection with software projects paved the way for tools that allow us to simulate the behaviour of the projects. The main advantage of a Software Project Simulator (SPS) is the possibility of experimenting with different decisions at no cost. In this paper, we present a new approach based on the combination of an SPS and Evolutionary Computation. The purpose is to provide accurate decision rules that help the project manager make decisions at any point during development. The SPS generates a database from the software project, which is provided as input to the evolutionary algorithm for producing the set of management rules. These rules will help the project manager to keep the project within the cost, quality and duration targets. The set of alternatives within the decision-making framework is therefore reduced to a quality set of decisions.

89 citations


Journal ArticleDOI
TL;DR: This paper describes and evaluates a reading technique called usage-based reading (UBR), which utilises prioritised use cases to guide reviewers through an inspection and has the potential to become an important reading technique.
Abstract: Reading methods for software inspections are used for aiding reviewers to focus on special aspects in a software artefact. Many experiments have been conducted on checklist-based reading and scenario-based reading, concluding that focus is important for software reviewers. This paper describes and evaluates a reading technique called usage-based reading (UBR). UBR utilises prioritised use cases to guide reviewers through an inspection. More importantly, UBR drives the reviewers to focus on the software parts that are most important for a user. An experiment was conducted on 27 third-year Bachelor's software engineering students, where one group used use cases sorted in a prioritised order and the control group used randomly ordered use cases. The main result is that reviewers in the group with prioritised use cases are significantly more efficient and effective in detecting the most critical faults from a user's point of view. Consequently, UBR has the potential to become an important reading technique. Future extensions to the reading technique are suggested and experiences gained from the experiment to support replications are provided.

Journal ArticleDOI
TL;DR: The analysis suggests that regression analysis seems to be the best choice as an ERP prediction system, and that ANGEL, ACE, CART and OSR primarily add value to a user in exploratory data analysis by their ability to identify similar projects.
Abstract: There exist many effort prediction systems but none specifically devised for enterprise resource planning (ERP) projects, and the empirical evidence is neither convincing nor adequate from a human user perspective. Consequently, this non-empirical evaluation contributes knowledge by investigating: (i) their applicability to ERP projects, (ii) their added value to a human user beyond making a prediction, and (iii) if they make sense. The analysis suggests that regression analysis seems to be the best choice as an ERP prediction system, and that ANGEL, ACE, CART and OSR primarily add value to a user in exploratory data analysis by their ability to identify similar projects.

Journal ArticleDOI
TL;DR: The revised framework confirms most of the original characteristics and suggests a number of additions and modifications, and grounds the characteristics in the framework and thereby suggests more precise definitions for some of them.
Abstract: Earlier semantic and formal analyses of whole–part (WP) relationships in object-oriented models have led to a framework, which distinguishes between primary, consequential, secondary and dependent characteristics of WP relationships. This paper interprets, validates and elaborates on that framework using an existing ontological theory and an associated formal model of objects. The revised framework confirms most of the original characteristics and suggests a number of additions and modifications. The analysis also grounds the characteristics in the framework and thereby suggests more precise definitions for some of them.

Journal ArticleDOI
TL;DR: In this paper, statistical simulation techniques are used to calculate confidence intervals for the effort needed for a project portfolio and the overall approach is illustrated through the adaptation of the analogy-based method for software cost estimation to cover multiple projects.
Abstract: Although typically a software development organisation is involved in more than one project simultaneously, the available tools in the area of software cost estimation deal mostly with single software projects. In order to calculate the possible cost of the entire project portfolio, one must combine the single project estimates taking into account the uncertainty involved. In this paper, statistical simulation techniques are used to calculate confidence intervals for the effort needed for a project portfolio. The overall approach is illustrated through the adaptation of the analogy-based method for software cost estimation to cover multiple projects.
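A minimal Monte Carlo sketch of the idea (the per-project effort distributions below are invented; the paper itself builds on analogy-based single-project estimates, but the simulate-and-take-percentiles step looks roughly like this):

```python
import random

# Invented per-project effort estimates in person-months: (low, most likely, high).
projects = [(10, 14, 22), (30, 38, 55), (8, 9, 15), (20, 26, 40)]

def simulate_portfolio(projects, runs=10_000):
    """Sample each project's effort from a triangular distribution and sum them."""
    totals = []
    for _ in range(runs):
        totals.append(sum(random.triangular(lo, hi, mode) for lo, mode, hi in projects))
    return sorted(totals)

totals = simulate_portfolio(projects)
low, high = totals[int(0.05 * len(totals))], totals[int(0.95 * len(totals))]
print(f"90% confidence interval for portfolio effort: [{low:.0f}, {high:.0f}] person-months")
```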

Journal ArticleDOI
TL;DR: Insight is provided into how OO software metrics should be interpreted in relation to the quality factors they purport to measure, with a focus on quality factors related to reusability.
Abstract: Software reuse increases productivity, reduces costs, and improves quality. Object-oriented (OO) software has been shown to be inherently more reusable than functionally decomposed software; however, most OO software was not specifically designed for reuse [Software Reuse Guidelines and Methods, Plenum Press, New York, 1991]. This paper describes the analysis, in terms of quality factors related to reusability, contained in an approach that aids significantly in assessing existing OO software for reusability. An automated tool implementing the approach is validated by comparing the tool's quality determinations to that of human experts. This comparison provides insight into how OO software metrics should be interpreted in relation to the quality factors they purport to measure.

Journal ArticleDOI
TL;DR: The paper shows how simulated annealing and genetic algorithms can be used to generate correct and efficient BAN protocols and investigates the use of parsimonious and redundant representations.
Abstract: Protocol security is important. So are efficiency and cost. This paper provides an early framework for handling such aspects in a uniform way based on combinatorial optimisation techniques. The belief logic of Burrows, Abadi and Needham (BAN logic) is viewed as both a specification and proof system and as a ‘protocol programming language’. The paper shows how simulated annealing and genetic algorithms can be used to generate correct and efficient BAN protocols. It also investigates the use of parsimonious and redundant representations.

Journal ArticleDOI
TL;DR: A new approach is proposed to formulate the requirement specifications based on the notion of goals along three aspects: to extend use cases with goals to guide the derivation of use cases, to analyze the interactions among nonfunctional requirements, and to structure fuzzy object-oriented models based on the interactions found.
Abstract: One of the foci of the recent development in requirements engineering has been the study of conflicts and vagueness encountered in requirements. However, there is no systematic way in the existing approaches for handling the interactions among nonfunctional requirements and their impacts on the structuring of requirement specifications. In this paper, a new approach is proposed to formulate the requirement specifications based on the notion of goals along three aspects: (1) to extend use cases with goals to guide the derivation of use cases; (2) to analyze the interactions among nonfunctional requirements; and (3) to structure fuzzy object-oriented models based on the interactions found. The proposed approach is illustrated using the problem domain of a meeting scheduler system.

Journal ArticleDOI
TL;DR: This paper proposes a pattern system to design basic aspects of data sharing, communication, and coordination for collaborative applications, useful for the design and development of collaborative applications as well as for the development of platforms for the construction of collaborative application.
Abstract: Collaborative applications provide a group of users with the facility to communicate and share data in a coordinated way. Building collaborative applications is still a complex task. In this paper we propose a pattern system to design basic aspects of data sharing, communication, and coordination for collaborative applications. These patterns are useful for the design and development of collaborative applications as well as for the development of platforms for the construction of collaborative applications.

Journal ArticleDOI
TL;DR: This paper considers the problem of generating a minimal synchronised test sequence that detects output-shifting faults when the system is specified using a finite state machine with multiple ports.
Abstract: A distributed system may have a number of separate interfaces called ports and in testing it may be necessary to have a separate tester at each port. This introduces a number of issues, including the necessity to use synchronised test sequences and the possibility that output-shifting faults go undetected. This paper considers the problem of generating a minimal synchronised test sequence that detects output-shifting faults when the system is specified using a finite state machine with multiple ports. The set of synchronised test sequences that detect output-shifting faults is represented by a directed graph G and test generation involves finding appropriate tours of G . This approach is illustrated using the test criterion that the test sequence contains a test segment for each transition.

Journal ArticleDOI
TL;DR: In this paper, a formal method (called PZ nets) for specifying concurrent and distributed systems is presented, which integrates two well-known existing formal methods, Petri nets and Z, such that Petri nets are used to specify the overall structure, control flows, causal relations, and dynamic behavior of a system, and Z is used to define the tokens, labels and constraints of the system.
Abstract: In this paper, a formal method (called PZ nets) for specifying concurrent and distributed systems is presented. PZ nets integrate two well-known existing formal methods, Petri nets and Z, such that Petri nets are used to specify the overall structure, control flows, causal relations, and dynamic behavior of a system, and Z is used to define the tokens, labels and constraints of the system. The essence, benefits, and problems of the integration are discussed. A set of heuristics and transformations to develop PZ nets and a technique to analyze PZ nets are proposed and demonstrated through a well-known example.

Journal ArticleDOI
TL;DR: This work interviewed several formal methods users about the use of formal methods and their impact on various aspects of software engineering including the effects on the company, its products and its development processes as well as pragmatic issues such as scalability, understandability and tool support.
Abstract: The recognised deficiency in the level of empirical investigation of software engineering methods is particularly acute in the area of formal methods, where reports about their usefulness vary widely. We interviewed several formal methods users about the use of formal methods and their impact on various aspects of software engineering including the effects on the company, its products and its development processes as well as pragmatic issues such as scalability, understandability and tool support. The interviews are a first stage of empirical assessment. Future work will investigate some of the issues raised using formal experimentation and case studies.

Journal ArticleDOI
TL;DR: This paper shows how a combined set of internal and external success factors explains the success or failure of five industrial measurement programs.
Abstract: Success factors for measurement programs as identified in the literature typically focus on the ‘internals’ of the measurement program; incremental implementation, support from management, a well-planned metric framework, and so on. However, for a measurement program to be successful within its larger organizational context, it has to generate value for the organization. This implies that attention should also be given to the proper mapping of some identifiable organizational problem onto the measurement program, and the translation back of measurement results to organizational actions. We have extended the well-known ‘internal’ success factors for software measurement programs with four ‘external’ success factors. These success factors are aimed at the link between the measurement program and the usage of the measurement results. In this paper, we show how this combined set of internal and external success factors explains the success or failure of five industrial measurement programs.

Journal ArticleDOI
TL;DR: Two metrics related to referential integrity, number of foreign keys (NFK) and depth of the referential tree (DRT), are presented for controlling the quality of a relational database.
Abstract: Databases are the core of Information Systems (IS). It is, therefore, necessary to ensure the quality of the databases in order to ensure the quality of the IS. Metrics are useful mechanisms for controlling database quality. This paper presents two metrics related to referential integrity, number of foreign keys (NFK) and depth of the referential tree (DRT), for controlling the quality of a relational database. However, to ascertain the practical utility of the metrics, experimental validation is necessary. This validation can be carried out through controlled experiments or through case studies. The controlled experiments must also be replicated in order to obtain firm conclusions. With this objective in mind, we have undertaken various empirical work with metrics for relational databases. As part of this empirical work, we have conducted a case study with some metrics for relational databases and a controlled experiment with the two metrics presented in this paper. The detailed experiment described in this paper is a replication of the latter. The experiment was replicated in order to confirm the results obtained from the first experiment. As a result of all the experimental work, we can conclude that the NFK metric is a good indicator of relational database complexity. However, we cannot draw such firm conclusions regarding the DRT metric.
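As a sketch of what the two metrics measure, consider a schema represented simply as a set of foreign-key references between tables (the schema below is invented): NFK counts the references, and DRT can be read as the length of the longest chain of references. The exact counting convention follows the metric definitions in the paper, which may differ from this sketch.

```python
# Invented schema: each entry is a foreign key (referencing table -> referenced table).
foreign_keys = [
    ("order_line", "order"),
    ("order_line", "product"),
    ("order", "customer"),
    ("product", "supplier"),
]

def nfk(foreign_keys):
    """Number of foreign keys in the schema."""
    return len(foreign_keys)

def drt(foreign_keys):
    """Depth of the referential tree, read here as the number of tables along
    the longest chain of foreign-key references."""
    refs = {}
    for child, parent in foreign_keys:
        refs.setdefault(child, []).append(parent)

    def depth(table):
        return 1 + max((depth(p) for p in refs.get(table, [])), default=0)

    return max(depth(child) for child, _ in foreign_keys)

print(nfk(foreign_keys), drt(foreign_keys))  # 4 and 3 for this invented schema
```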

Journal ArticleDOI
TL;DR: This work presents a CASE tool designed to generate the SQL queries necessary to build a warehouse from a set of operational relational databases, and specifies a list of attribute names that will appear in the warehouse, conditions if any are desired, and a description of the operational databases.
Abstract: Data warehouses have become an instant phenomenon in many large organizations that deal with massive amounts of information. Drawing on experiences from the systems development field, we surmise that an effective design tool will enhance the success of warehouse implementations. Thus, we present a CASE tool designed to generate the SQL queries necessary to build a warehouse from a set of operational relational databases. The warehouse designer simply specifies a list of attribute names that will appear in the warehouse, any desired conditions, and a description of the operational databases. The tool returns the queries needed to populate the warehouse table.
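A much-simplified sketch of the kind of query generation described: given the desired warehouse attributes, the operational tables, their join conditions and an optional filter (all invented below), a tool might emit a populating query along these lines. This is purely illustrative string assembly, not the tool's actual algorithm.

```python
def build_warehouse_query(attributes, tables, join_conditions, where=None):
    """Assemble a SQL statement that populates a warehouse table from
    operational tables. Purely illustrative string construction."""
    sql = "INSERT INTO warehouse\nSELECT " + ", ".join(attributes)
    sql += "\nFROM " + ", ".join(tables)
    predicates = list(join_conditions) + ([where] if where else [])
    if predicates:
        sql += "\nWHERE " + " AND ".join(predicates)
    return sql

print(build_warehouse_query(
    attributes=["c.name", "o.order_date", "o.total"],
    tables=["customer c", "orders o"],
    join_conditions=["o.customer_id = c.id"],
    where="o.total > 1000",
))
```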

Journal ArticleDOI
TL;DR: An approach to testing from a deterministic sequential specification written in μSZ, in which the extended finite state machine (EFSM) defined by the Statechart can be rewritten to produce an EFSM that has a number of properties that simplify test generation.
Abstract: A hybrid specification language μSZ, in which the dynamic behaviour of a system is described using Statecharts and the data and the data transformations are described using Z, has been developed for the specification of embedded systems. This paper describes an approach to testing from a deterministic sequential specification written in μSZ. By considering the Z specifications of the operations, the extended finite state machine (EFSM) defined by the Statechart can be rewritten to produce an EFSM that has a number of properties that simplify test generation. Test generation algorithms are introduced and applied to an example. While this paper considers μSZ specifications, the approaches described might be applied whenever the specification is an EFSM whose states and transitions are specified using a language similar to Z.

Journal ArticleDOI
TL;DR: It is shown that using an eclectic approach, where a domain expert software engineer is encouraged to tailor and combine existing approaches, may overcome the limitations of the individual approaches and better address the particular goals of the project at hand and the unique aspects of the subject system.
Abstract: The identification of objects in procedural programs has long been recognised as a key to renewing legacy systems. As a consequence, several authors have proposed methods and tools that achieve, in general, some level of success, but do not always precisely identify a coherent set of objects. We show that using an eclectic approach, where a domain expert software engineer is encouraged to tailor and combine existing approaches, may overcome the limitations of the individual approaches and better address the particular goals of the project at hand and the unique aspects of the subject system. The eclectic approach is illustrated by reporting experiences from a case study of identifying coarse-grained, persistent objects to be used in the migration of a COBOL system to a distributed component system.

Journal ArticleDOI
TL;DR: An integrated environment acting as a software agent for discovering correlative attributes of data objects from multiple heterogeneous resources is presented, which employs common data warehousing and OLAP techniques to form an integrated data repository and generate database queries over large data collections from various distinct data resources.
Abstract: Discovering knowledge such as causal relations among objects in large data collections is very important in many decision-making processes. In this paper, we present our development of an integrated environment acting as a software agent for discovering correlative attributes of data objects from multiple heterogeneous resources. The environment provides necessary supporting tools and processing engines for acquiring, collecting, and extracting relevant information from multiple data resources, and then forming meaningful knowledge patterns. The agent system is featured with an interactive user interface that provides useful communication channels for human supervisors to actively engage in necessary consultation and guidance in the entire knowledge discovery process. A cross-reference technique is employed for searching and discovering a coherent set of correlative patterns from the heterogeneous data resources. A Bayesian network approach is applied as a knowledge representation scheme for recording and manipulating the discovered causal relations. The system employs common data warehousing and OLAP techniques to form an integrated data repository and generate database queries over large data collections from various distinct data resources.

Journal ArticleDOI
TL;DR: A qualitative evaluation is performed on RUP's public domain component and on OPEN using a set of stated criteria, focusing on aspects of the process architecture and underpinning metamodel, the concepts and terminology utilized and support for project management.
Abstract: Two of the leading object-oriented processes are the public domain Object-oriented Process, Environment and Notation (OPEN) and the proprietary Rational Unified Process (RUP). A qualitative evaluation is performed on RUP's public domain component and on OPEN using a set of stated criteria. In particular, we focus our comparison on aspects of the process architecture and underpinning metamodel, the concepts and terminology utilized and support for project management.

Journal ArticleDOI
TL;DR: This paper describes how a model of a system to be implemented using COM might be constructed using a particular modelling tool, RolEnact, and discusses the extent to which validation of the model contributes to the validity of the eventual solution.
Abstract: Software Engineers continue to search for efficient ways to build high quality systems. Two contrasting techniques that promise to help with the effective construction of high quality systems are the use of formal models during design and the use of components during development. In this paper, we take the position that these techniques work well together. Hardware Engineers have shown that building systems from components has brought enormous benefits. Using components permits hardware engineers to consider systems at an abstract level, making it possible for them to build and reason about systems that would otherwise be too large and complex to understand. It also enables them to make effective reuse of existing designs. It seems reasonable to expect that using components in software development will also bring advantages. Formal methods provide a means to reason about a program (or system) enabling the creation of programs which can be proved to meet their specifications. However, the size of real systems makes these methods impractical for any but the simplest of structures — constructing a complete formal specification for a commercial system is a daunting task. Using formal methods for the whole of a large commercial system is not practical, but some of the advantages of using them can be obtained where a system is to be built from communicating components, by building and evaluating a formal model of the system. We describe how a model of a system to be implemented using COM might be constructed using a particular modelling tool, RolEnact. We discuss the extent to which validation of the model contributes to the validity of the eventual solution.

Journal ArticleDOI
TL;DR: An approach and practical experience for integrating legacy systems into a heterogeneous distributed computing environment by adopting CORBA technology is presented and evaluated using the quality attributes proposed by Bass, Clements, and Kazman.
Abstract: With the advent and widespread use of object-oriented and client–server technologies, companies expect their legacy systems, developed for the centralized environment, to take advantage of these new technologies and also cooperate with their heterogeneous environments. An alternative to migrating legacy systems from the mainframe to a user-centered, distributed object computing, and client–server platform is to wrap the legacy systems on the mainframe and expose the interfaces of the legacy systems to the remote clients. The enabling middleware technologies such as Common Object Request Broker Architecture (CORBA), Component Object Model/Distributed Component Object Model (COM/DCOM), and Java RMI make the migration of the legacy systems to a heterogeneous distributed computing environment possible. In this paper, we present an approach and practical experience for integrating legacy systems into a heterogeneous distributed computing environment by adopting CORBA technology. It has been reported quite often that an approach like this will improve system maintainability, portability, and interoperability. We also illustrate this approach with an industrial project. The project is viewed as a reengineering effort where a centralized reengineering system is wrapped to operate in a heterogeneous distributed computing environment by leveraging CORBA technology. The reengineering approach is a combination of redesign and simple facelift. The resulting legacy integration architecture through the application of the approach is evaluated using the quality attributes proposed by Bass, Clements, and Kazman.