
Showing papers in "Information & Software Technology in 2003"


Journal ArticleDOI
TL;DR: This paper proposes that most definitions of business process are based on machine-metaphor explorations of a process, which are too limited to express the true nature of business processes that need to develop and adapt to today's challenging environment.
Abstract: Definitions of business process given in much of the literature on Business Process Management are limited in depth, and their related models of business processes are correspondingly constrained. After giving a brief history of the progress of business process modeling techniques from production systems to the office environment, this paper proposes that most definitions are based on machine-metaphor explorations of a process. While these techniques are often rich and illuminating, it is suggested that they are too limited to express the true nature of business processes that need to develop and adapt to today's challenging environment.

346 citations


Journal ArticleDOI
TL;DR: This paper examines the elements of organization-oriented macro decisions as well as process-oriented micro decisions in the RE process and illustrates how to integrate classical decision-making models with RE process models.
Abstract: The requirements engineering (RE) process is a decision-rich complex problem solving activity. This paper examines the elements of organization-oriented macro decisions as well as process-oriented micro decisions in the RE process and illustrates how to integrate classical decision-making models with RE process models. This integration helps in formulating a common vocabulary and model to improve the manageability of the RE process, and contributes towards the learning process by validating and verifying the consistency of decision-making in RE activities.

197 citations


Journal ArticleDOI
TL;DR: This paper presents an integrated method that combines metamorphic testing with fault-based testing using real and symbolic inputs, and enhances fault-based testing to alleviate the oracle problem.
Abstract: There are two fundamental limitations in software testing, known as the reliable test set problem and the oracle problem. Fault-based testing is an attempt by Morell to alleviate the reliable test set problem. In this paper, we propose to enhance fault-based testing to alleviate the oracle problem as well. We present an integrated method that combines metamorphic testing with fault-based testing using real and symbolic inputs.
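
To make the metamorphic idea concrete, here is a minimal sketch (not the authors' integrated method, which additionally uses symbolic inputs and fault-based analysis): when no oracle can verify an individual output, a known relation between related inputs can still expose faults. The sine identity below is a standard textbook illustration; the function under test and the tolerance are assumptions for the example.

import math

def program_under_test(x):
    # Stand-in for an implementation whose individual outputs cannot be
    # verified directly (the oracle problem).
    return math.sin(x)

def metamorphic_check(x, tol=1e-9):
    # Metamorphic relation for sine: sin(pi - x) == sin(x). Only the
    # relation between two executions is checked, never a single output.
    return abs(program_under_test(math.pi - x) - program_under_test(x)) <= tol

for x in [0.1, 1.0, 2.5]:
    assert metamorphic_check(x), f"metamorphic relation violated at x={x}"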

178 citations


Journal ArticleDOI
TL;DR: This paper addresses the question of how to characterize properties in an evolutionary framework, and what relationships link these properties to a customer's view of correctness, and describes in rigorous terms the kinds of validation checks that must be performed on parts of a requirements specification in order to ensure that errors are detected and marked as such, leading to better quality requirements.
Abstract: The initial expression of requirements for a computer-based system is often informal and possibly vague. Requirements engineers need to examine this often incomplete and inconsistent brief expression of needs. Based on the available knowledge and expertise, assumptions are made and conclusions are deduced to transform this 'rough sketch' into more complete, consistent, and hence correct requirements. This paper addresses the question of how to characterize these properties in an evolutionary framework, and what relationships link these properties to a customer's view of correctness. Moreover, we describe in rigorous terms the different kinds of validation checks that must be performed on different parts of a requirements specification in order to ensure that errors (i.e. cases of inconsistency and incompleteness) are detected and marked as such, leading to better quality requirements.

132 citations


Journal ArticleDOI
TL;DR: This pilot study builds upon an existing theory-based categorisation of communication problems through the presentation of a four-dimensional framework on communication, validated through a content analysis of interview data from which emerge themes that can be assigned to the dimensional categories, highlighting any problematic areas.
Abstract: The gathering of stakeholder requirements comprises an early, but continuous and highly critical, stage in system development. This phase is subject to a large degree of error, influenced by key factors rooted in communication problems. This pilot study builds upon an existing theory-based categorisation of these problems through the presentation of a four-dimensional framework on communication. Its structure is validated through a content analysis of interview data, from which themes emerge that can be assigned to the dimensional categories, highlighting any problematic areas. The paper concludes with a discussion on the utilisation of the framework for requirements elicitation exercises.

124 citations


Journal ArticleDOI
TL;DR: This paper proposes the use of Bayesian belief networks (BBNs) to support expert judgment in software cost estimation, presenting their advantages for expert opinion support and their use for productivity estimation.
Abstract: In spite of numerous methods proposed, software cost estimation remains an open issue and in most situations expert judgment is still being used. In this paper, we propose the use of Bayesian belief networks (BBNs), already applied in other software engineering areas, to support expert judgment in software cost estimation. We briefly present BBNs and their advantages for expert opinion support and we propose their use for productivity estimation. We illustrate our approach by giving two examples, one based on the COCOMO81 cost factors and a second one, dealing with productivity in ERP system localization.
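
As a rough illustration of how a BBN supports expert judgment (the network structure and all probabilities below are invented for the example, and are not taken from the paper or from COCOMO81), consider a two-node network in which an expert's evidence about team experience updates the belief about productivity:

# Hypothetical two-node network: Experience -> Productivity.
p_experience = {"low": 0.3, "high": 0.7}

# P(Productivity | Experience), a conditional probability table.
cpt_productivity = {
    "low":  {"low": 0.6, "medium": 0.3, "high": 0.1},
    "high": {"low": 0.1, "medium": 0.4, "high": 0.5},
}

def productivity_belief(evidence=None):
    # Enumerate over Experience; clamp it when the expert supplies evidence.
    exp_dist = {evidence: 1.0} if evidence else p_experience
    belief = {"low": 0.0, "medium": 0.0, "high": 0.0}
    for e, pe in exp_dist.items():
        for prod, pp in cpt_productivity[e].items():
            belief[prod] += pe * pp
    return belief

print(productivity_belief())        # prior belief about productivity
print(productivity_belief("high"))  # belief after the expert's judgment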

120 citations


Journal ArticleDOI
TL;DR: A software development effort PI approach is introduced, based on the assumption that the estimation accuracy of earlier software projects predicts the effort PIs of new projects.
Abstract: When estimating software development effort, it may be useful to describe the uncertainty of the estimate through an effort prediction interval (PI). An effort PI consists of a minimum and a maximum effort value and a confidence level. We introduce and evaluate a software development effort PI approach that is based on the assumption that the estimation accuracy of earlier software projects predicts the effort PIs of new projects. First, we demonstrate the applicability and different variants of the approach on a data set of 145 software development tasks. Then, we experimentally compare the performance of one variant of the approach with human (software professionals') judgment and regression analysis-based effort PIs on a data set of 15 development tasks. Finally, based on the experiment and analytical considerations, we discuss when to base effort PIs on human judgment, regression analysis, or our approach.
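
A minimal sketch of the underlying assumption (the paper evaluates several variants; the quantile scheme, the ratios, and the confidence level here are illustrative only): the spread of actual-to-estimated effort ratios from past projects is used to widen a new point estimate into a prediction interval.

def effort_interval(past_ratios, new_estimate, confidence=0.90):
    # past_ratios: actual effort / estimated effort for completed projects.
    ratios = sorted(past_ratios)
    lo_q = (1.0 - confidence) / 2.0
    # Crude empirical quantiles of the historical accuracy distribution.
    lo = ratios[int(lo_q * (len(ratios) - 1))]
    hi = ratios[int((1.0 - lo_q) * (len(ratios) - 1))]
    return new_estimate * lo, new_estimate * hi

history = [0.8, 0.9, 0.95, 1.0, 1.05, 1.1, 1.2, 1.3, 1.5, 2.0]
print(effort_interval(history, new_estimate=400))  # (320.0, 600.0)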

99 citations


Journal ArticleDOI
TL;DR: Two new approaches for passive testing using an Extended Finite State Machine (EFSM) specification are presented that extract information from the specification and then work on the trace, taking a different direction from previous methods.
Abstract: This paper presents two new approaches for passive testing using an Extended Finite State Machine (EFSM) specification. The state of the art of passive testing shows that all methods for detecting errors based on EFSMs try to match the trace to the specification: one searches for a state succession in the specification machine that is able to generate the trace observed on the implementation. Under this approach, processing is performed on the specification and the trace remains in the background, since no operation is applied to it. This made us realise that focusing our efforts on the trace could be beneficial, and it has resulted in the two approaches presented in this paper, which extract information from the specification and then work on the trace. They thus take a different direction from previous methods. We first present an approach to test traces by using invariants derived from the specification. We formally define these invariants, show how to extract them, and discuss their ability to detect errors appearing in the implementation. This approach is able to test the data flow, but not in a very satisfactory way, which motivates a second approach that applies a set of constraints to the trace; we develop its principles in detail. Both approaches are applied to a Simple Connection Protocol (SCP) and the results of preliminary experiments are presented.
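
As a toy illustration of trace-side invariant checking (the trace format and the invariant are assumptions for the example; the paper defines invariants formally and extracts them from the EFSM specification), an output invariant might state that every occurrence of a given input is immediately answered by a given output:

def check_invariant(trace, trigger_input, expected_output):
    # trace: observed sequence of ("in", symbol) / ("out", symbol) events.
    for i, event in enumerate(trace):
        if event == ("in", trigger_input):
            if i + 1 >= len(trace) or trace[i + 1] != ("out", expected_output):
                return False  # invariant violated on the observed trace
    return True

trace = [("in", "connect"), ("out", "ack"), ("in", "data"), ("out", "ack")]
print(check_invariant(trace, "connect", "ack"))  # True: trace passes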

91 citations


Journal ArticleDOI
TL;DR: Links that may be made between process models and Unified Modelling Language (UML) software specification techniques are discussed, working from an argument that the whole complexity of organisational activity cannot be captured by UML alone.
Abstract: This paper discusses links that may be made between process models and Unified Modelling Language (UML) software specification techniques, working from an argument that the whole complexity of organisational activity cannot be captured by UML alone. The approach taken is to develop a set of use cases capable of providing information support to a pre-defined organisational process. The nature of the thinking necessary to derive the use cases is outlined using the pre-defined process as a case study. The grouping of transactions and state changes into use cases is shown to require design choices that may vary between particular organisational contexts. Conclusions are drawn about the direction of further investigation of links between process modelling and UML.

68 citations


Journal ArticleDOI
TL;DR: Overall, subjective norm and perceived behavioral control emerged as the strongest determinants of intention to adopt NSS.
Abstract: An exploratory study was conducted to identify factors affecting the intention of managers and executives to adopt negotiation support systems (NSS). Drawing from past literature, the Theory of Planned Behavior and the Technology Acceptance Model provided the basis for analyzing our results. Overall, subjective norm and perceived behavioral control emerged as the strongest determinants of intention to adopt NSS. Further probing of subjective norm revealed that organizational culture and industrial characteristics play significant roles. A new conceptual framework is proposed that would be of both theoretical and practical importance.

66 citations


Journal ArticleDOI
TL;DR: A generalized resemblance degree between two fuzzy sets of imprecise objects and a generalized resemblance degree to compare complex fuzzy objects within a given class are introduced.
Abstract: The comparison concept plays a determining role in many problems related to object management in an Object-Oriented Database Model. Object comparison is appropriately managed in a crisp object-oriented context by means of the concepts of identity and value equality. However, when dealing with imprecise or imperfect objects, questions like 'To what extent may two objects be the same one?' or 'How similar are two objects?' do not have a clear answer, because the equality concept becomes fuzzy. In this paper we present a set of operators that are useful when comparing objects in a fuzzy environment. In particular, we introduce a generalized resemblance degree between two fuzzy sets of imprecise objects and a generalized resemblance degree to compare complex fuzzy objects within a given class.
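
One common way to lift resemblance from elements to fuzzy sets, shown purely as an illustration (the paper's generalized resemblance degrees are defined differently and with more care), is a symmetric sup-min composition over memberships:

def resemblance(set_a, set_b, resembles):
    # set_a, set_b: dicts mapping object -> membership degree in [0, 1].
    def directed(xs, ys):
        # Best fuzzy match in ys for each object of xs; the weakest of
        # those best matches bounds the one-way resemblance.
        return min(
            max(min(mx, my, resembles(x, y)) for y, my in ys.items())
            for x, mx in xs.items()
        )
    return min(directed(set_a, set_b), directed(set_b, set_a))

young = {"ann": 1.0, "bob": 0.7}
junior = {"ann": 0.9, "cal": 0.8}
print(resemblance(young, junior, lambda x, y: 1.0 if x == y else 0.4))  # 0.4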

Journal ArticleDOI
TL;DR: This paper discusses roles, why they are important in the analysis of security, and how they can be employed to define access policies, and presents a framework based on these concepts for analysing access policies.
Abstract: Pressures are increasing on organisations to take an early and more systematic approach to security. A key to enforcing security is to restrict access to valuable assets. We regard access policies as security requirements that specify such restrictions. Current requirements engineering methods are generally inadequate for eliciting and analysing these types of requirements, because they do not allow the complex organisational structures and procedures that underlie policies to be represented adequately. This paper discusses roles and why they are important in the analysis of security. It relates roles to organisational theory and shows how they can be employed to define access policies. A framework based on these concepts is presented for analysing access policies.

Journal ArticleDOI
TL;DR: The aim is to model business processes directly in an executable form, so that the mobility and mutability inherent in business behaviour is reflected and supported in the corresponding IT systems, erasing the present IT-business divide.
Abstract: This paper introduces the ideas behind BPML, the business process modelling language published by BPMI. BPML provides a process-centric (as opposed to data-centric) metalanguage and execution model for business systems. It is underpinned by a strong mathematical foundation, the pi-calculus. The current paper is derived from supplementary appendices to a book which describes a ‘third wave’ approach to business process management [Business Process Management: The Third Wave, 2003]. The aim is to model business processes directly in an executable form, so that the mobility and mutability inherent in business behaviour is reflected and supported in the corresponding IT systems, erasing the present IT-business divide.

Journal ArticleDOI
TL;DR: The EKD-CMM road map and guidelines are presented and their use is exemplified with a real case study; the underlying assumption of the work is the situatedness of the change process.
Abstract: The assumption underlying the work presented in this paper is the situatedness of the change process. The Enterprise Knowledge Development-Change Management Method (EKD-CMM) provides multiple and dynamically constructed ways of working to organize and guide change management. The method is built on the notion of a labeled graph of intentions and strategies, called a road map, and the associated guidelines. The EKD-CMM road map is a navigational structure that supports the dynamic selection of the intention to be achieved next and the appropriate strategy to achieve it, whereas guidelines help in the operationalization of the selected intention following the selected strategy. This paper presents the EKD-CMM road map and guidelines and exemplifies their use with a real case study.

Journal ArticleDOI
TL;DR: This paper presents a new approach that identifies classes based on the goals of use cases, without descriptions, and produces use case-entity diagrams as a vehicle for deriving classes from use cases and for showing the involvement of classes in the use cases of a system.
Abstract: In a use case-driven process, classes in the class diagram need to be identified from use cases in the use case diagram. Current object modelling approaches identify classes either from use case descriptions or using classic categories. Both ways are inefficient when use cases can be described with many scenarios in different words. This paper presents a new approach that identifies classes based on the goals of use cases, without descriptions. The approach produces use case-entity diagrams as a vehicle for deriving classes from use cases and for showing the involvement of classes in the use cases of a system.

Journal ArticleDOI
TL;DR: The findings of a survey of multimedia developers in Ireland are discussed; practitioners generally agree that systematic approaches are desirable in order to beneficially add structure to development processes, but they predominantly use their own in-house methods rather than those prescribed in the literature.
Abstract: As multimedia information systems begin to infiltrate organizations, there arises a need to capture and disseminate knowledge about how to develop them. Little is thus far known about the realities of multimedia systems development practice, or about how the development of multimedia systems compares to that of ‘traditional’ information systems. Herein are discussed the findings of a survey of multimedia developers in Ireland. Practitioners generally agree that systematic approaches are desirable in order to beneficially add structure to development processes, but they are predominantly using their own in-house methods rather than those prescribed in the literature.

Journal ArticleDOI
TL;DR: A novel approach to extending the inverted index for containment query processing is proposed; experimental results suggest that a native implementation using an RDBMS can support containment queries as efficiently as an IR implementation.
Abstract: The inverted index is widely used in the existing information retrieval field. In order to support containment queries for structured documents such as XML, it needs to be extended. Previous work suggested an extension for storing the inverted index for XML documents and processing containment queries, and compared two implementation options: using an RDBMS and using an Information Retrieval (IR) engine. However, the previous work has two drawbacks in extending the inverted index. One is that the RDBMS implementation generally performs much worse than the IR engine implementation. The other is that, when a containment query is processed in an RDBMS, the number of join operations increases in proportion to the number of containment relationships in the query, and a join operation always occurs between large relations. In order to solve these problems, we propose in this paper a novel approach to extending the inverted index for containment query processing, and show its effectiveness through experimental results. In particular, our performance study shows that (1) our RDBMS approach almost always outperforms the previous RDBMS and IR approaches, (2) our RDBMS approach is not far behind our IR approach with respect to performance, and (3) our approach is scalable in the number of containment relationships in queries. Therefore, our results suggest that, without any modifications to the RDBMS engine, a native implementation using an RDBMS can support containment queries as efficiently as an IR implementation.
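
The flavor of the extension can be sketched with a region-encoded inverted index (the postings and the nested-loop evaluation below are simplifications; the paper's design also handles levels, ordering, and RDBMS join strategies): each term occurrence is posted as (document, begin, end), and element A contains element B when A's region encloses B's in the same document.

index = {
    "section": [(1, 0, 10), (1, 11, 20)],
    "title":   [(1, 1, 3), (1, 12, 14)],
}

def contains(ancestor_term, descendant_term):
    hits = []
    for d1, b1, e1 in index[ancestor_term]:
        for d2, b2, e2 in index[descendant_term]:
            if d1 == d2 and b1 < b2 and e2 <= e1:  # region enclosure
                hits.append(((d1, b1, e1), (d2, b2, e2)))
    return hits

print(contains("section", "title"))  # each section paired with its titles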

Journal ArticleDOI
TL;DR: Two measures of spatial complexity are presented, based on two important aspects of the program—code as well as data; lower values denote better understandability of the source code.
Abstract: In order to maintain software, programmers need to understand the source code. The understandability of the source code depends upon the psychological complexity of the software, and it requires cognitive abilities to understand the source code. The individual needs to correlate the orientation and location of various entities with their processing, which requires spatial abilities. This paper presents two measures of spatial complexity, based on two important aspects of the program: code as well as data. The measures have been applied to 15 different software projects, and the results have been used to draw a number of conclusions. The results have been validated with the help of perfective maintenance data. Lower values of code and data spatial complexity denote better understandability of the source code.
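
In the same spirit, a toy measure (the averaging scheme is our assumption for illustration, not the paper's exact definition) scores an identifier by how far, in lines, its uses sit from its definition:

def spatial_score(def_line, use_lines):
    # Average line distance between an identifier's definition and its uses;
    # larger distances demand more of the reader's spatial abilities.
    return sum(abs(u - def_line) for u in use_lines) / len(use_lines)

# 'total' defined on line 10 and used on lines 12, 40 and 95.
print(spatial_score(10, [12, 40, 95]))  # 39.0: hard to keep in view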

Journal ArticleDOI
TL;DR: A simple yet Balanced Pattern Specification Language is proposed that aims to achieve equilibrium by specifying the structural as well as the behavioral aspects of design patterns, combining two subsets of logic: one from First-Order Logic and one from the Temporal Logic of Actions.
Abstract: Pattern users face difficulties in understanding when and how to use the growing number of available design patterns, due to the inherent ambiguity in the existing textual and graphical means of describing them. Since patterns are seldom used in isolation but are usually combined to solve complex problems, these difficulties are compounded. Hence, there is a pressing need to introduce formalism to accurately describe patterns and pattern combination, allowing rigorous reasoning about them. The main problem of existing formal specification languages for design patterns is a lack of completeness, mainly because they were not originally conceived to specify design patterns and have been adapted to do so, or because they focus on specifying either the structural or the behavioral aspect of design patterns but not both. Moreover, only a few of them venture into specifying design pattern combination. We propose a simple yet Balanced Pattern Specification Language that aims to achieve equilibrium by specifying the structural as well as the behavioral aspects of design patterns. This is achieved by combining two subsets of logic: one from First-Order Logic and one from the Temporal Logic of Actions. Moreover, it can be used to formally specify pattern combination.

Journal ArticleDOI
TL;DR: This study considers the applicability of fuzzy logic modeling methods to the task of software source code sizing, using a previously published data set, and suggests that fuzzy predictive models can outperform their traditional regression-based counterparts.
Abstract: Knowing the likely size of a software product before it has been constructed is potentially beneficial in project management: for instance, size can be an important factor in determining an appropriate development/integration schedule, and it can be a significant input in terms of the allocation of personnel and other resources. In this study we consider the applicability of fuzzy logic modeling methods to the task of software source code sizing, using a previously published data set. Our results suggest that, particularly with refinement using data and knowledge, fuzzy predictive models can outperform their traditional regression-based counterparts.
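
A minimal fuzzy predictive model, to show the structure only (the membership functions, the rule base, and the weighted-mean defuzzification are invented for this sketch and are not the study's calibrated model):

def tri(x, a, b, c):
    # Triangular membership function rising from a, peaking at b, falling to c.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def predict_size(n_features):
    # Rules: few features -> ~200 LOC, a moderate number -> ~1000, many -> ~5000.
    rules = [
        (tri(n_features, 0, 5, 15), 200),
        (tri(n_features, 5, 15, 40), 1000),
        (tri(n_features, 15, 40, 100), 5000),
    ]
    total = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / total if total else None

print(predict_size(12))  # 760.0: a blend of the first two rules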

Journal ArticleDOI
TL;DR: It is illustrated how fuzzy expert systems can infer useful results by using the limited facts about a current project, and rules about software development, to support Independent Assessments of projects during the very early phases of the software life cycle.
Abstract: Risk is the potential for realization of undesirable consequences of an event. The operational risk of software is the likelihood of untoward events occurring during operations due to software failures. The NASA IV&V Facility is an independent institution that conducts Independent Assessments for various NASA projects. Its responsibilities include, among others, the assessment of operational risks of software. In this study, we investigate Independent Assessments that are conducted very early in the software development life cycle. Existing risk assessment methods are largely based on checklists and analysis of a risk matrix, in which risk factors are scored according to their influence on the potential operational risk. These scores are then arithmetically aggregated into an overall risk score. However, only incomplete project information is available during the very early phases of the software life cycle, and thus a quantitative method, such as a risk matrix, must make arbitrary assumptions to assess operational risk. We have developed a fuzzy expert system, called the Research Prototype Early Assessment System, to support Independent Assessments of projects during the very early phases of the software life cycle. Fuzzy logic provides a convenient way to represent linguistic variables, subjective probability, and ordinal categories. For representing risk, subjective probability is better suited than quantitative objective probability of failure, and fuzzy severity categories are more credible than numeric scores. We illustrate how fuzzy expert systems can infer useful results by using the limited facts about a current project and rules about software development. This approach can be extended to incorporate the planned IV&V level, the history of past NASA projects, and rules from NASA experts.

Journal ArticleDOI
TL;DR: Validated techniques to identify conflicts between system requirements and the governing security and privacy policies are presented; the techniques are generalizable to other domains in which systems contain sensitive information.
Abstract: Keeping sensitive information secure is increasingly important in e-commerce and web-based applications, in which personally identifiable information is electronically transmitted and disseminated. This paper discusses techniques to aid in aligning security and privacy policies with system requirements. Early conflict identification between requirements and policies enables analysts to prevent incongruous behavior, misalignments, and unfulfilled requirements, ensuring that security and privacy are built in rather than added on as an afterthought. Validated techniques to identify conflicts between system requirements and the governing security and privacy policies are presented. The techniques are generalizable to other domains in which systems contain sensitive information.

Journal ArticleDOI
TL;DR: The approach exploits techniques of Computational Intelligence, treated as a consortium of granular computing, neural networks and evolutionary techniques, to support quality assessment of individual objects, and is applied to analyze an object-oriented, visualization-based software system for biomedical data analysis.
Abstract: The quality of the individual objects composing a software system is one of the important factors that determine the quality of the system. Quality of objects, in turn, can be related to a number of attributes, such as extensibility, reusability, clarity and efficiency. These attributes do not have representations suitable for automatic processing. There is a need to find a way to support quality-related activities using data gathered during quality assurance processes, which involve humans. This paper proposes an approach that can be used to support quality assessment of individual objects. The approach exploits techniques of Computational Intelligence, treated as a consortium of granular computing, neural networks and evolutionary techniques. In particular, self-organizing maps and decision trees developed with evolutionary methods are used to gain better insight into the software data and to support a process of classification of software objects. Genetic classifiers—a novel algorithmic framework—serve as ‘filters’ for software objects. These classifiers are built on data representing subjective evaluation of software objects by humans. Using these classifiers, a system manager can predict the quality of software objects and identify low-quality objects for review and possible revision. The approach is applied to analyze an object-oriented, visualization-based software system for biomedical data analysis.

Journal ArticleDOI
Andre Postma
TL;DR: A method for module architecture verification is described which yields support for checking on an architectural level whether the implicit module architecture of the implementation of a system is consistent with its specified module architecture, and which facilitates achieving architecture conformance by relating architectural-level violations to the code-level entities that cause them.
Abstract: A method for module architecture verification is described, which provides support for checking at an architectural level whether the implicit module architecture of a system's implementation is consistent with its specified module architecture, and which facilitates achieving architecture conformance by relating architectural-level violations to the code-level entities that cause them, hence making the violations easier to resolve. Module architecture conformance is needed to enable implementing and maintaining the system and reasoning about it. We describe our experience in applying the proposed method to check a representative part of the module architecture of a large industrial component-based software system.

Journal ArticleDOI
TL;DR: This work joins normalized relations into tables according to their data dependency constraints to reengineer relational databases into XML documents with constraint preservation, and maps them into an XML schema in the form of a DTD.
Abstract: The rise of XML is recognized by researchers and practitioners alike as the technology trend on the Internet, and companies need to adopt XML technology. Having invested in their current relational database systems, they want to develop new XML documents while keeping existing relational databases in production. They therefore need to reengineer the relational databases into XML documents with constraint preservation. In the process, schema translation must be done before data conversion. Since the existing relational databases are usually normalized, they have to be reconstructed into XML document tree structures. This can be accomplished through denormalization, by joining the normalized relations into tables according to their data dependency constraints. The joined tables are mapped into DOMs, which are then integrated into XML document trees. The user specifies an XML document root with its relevant nodes to form a partitioned XML document tree that meets their requirements. The selected XML document tree is mapped into an XML schema in the form of a DTD. We then load the joined tables into DOMs, integrate them into a single DOM, and transform it into an XML document.
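
The central join-then-nest step might look as follows (table contents, tag names, and the single foreign key are assumptions for this sketch; the full method also derives the DTD and handles general dependency constraints):

import xml.etree.ElementTree as ET

customers = [{"id": 1, "name": "Ada"}]
orders = [{"id": 10, "cust_id": 1, "item": "book"},
          {"id": 11, "cust_id": 1, "item": "pen"}]

root = ET.Element("customers")
for c in customers:
    c_el = ET.SubElement(root, "customer", id=str(c["id"]))
    ET.SubElement(c_el, "name").text = c["name"]
    for o in orders:
        if o["cust_id"] == c["id"]:  # denormalizing join on the foreign key
            ET.SubElement(c_el, "order", id=str(o["id"])).text = o["item"]

print(ET.tostring(root, encoding="unicode"))  # nested XML from flat relations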

Journal ArticleDOI
TL;DR: A fuzzy object-oriented modeling technique (FOOM) schema, based on XML and incorporating the notion of stereotype to facilitate the modeling of imprecise requirements, is developed to model requirements specifications.
Abstract: Fuzzy theory is suitable for capturing and analyzing informal requirements that are imprecise in nature; meanwhile, XML is emerging as one of the dominant data formats for data processing on the Internet. In this paper, we develop a fuzzy object-oriented modeling technique (FOOM) schema based on XML to model requirements specifications, incorporating the notion of stereotype to facilitate the modeling of imprecise requirements. The FOOM schema is also transformed into a set of application programming interfaces (APIs) in an automatic manner. A schema graph is proposed to serve as an intermediate representation of the structure of the FOOM schema, bridging the FOOM schema and the APIs for both content validation and data access for the XML documents.

Journal ArticleDOI
TL;DR: This study addresses the construction of a preset checking sequence that will not pose controllability and observability problems when applied in distributed test architectures that utilize remote testers.
Abstract: This study addresses the construction of a preset checking sequence that will not pose controllability (synchronization) or observability (undetectable output shift) problems when applied in distributed test architectures that utilize remote testers. The controllability problem manifests itself when a tester is required to send the current input but, because it neither sent the previous input nor received the previous output, cannot determine when to send it. The observability problem manifests itself when a tester is expecting an output in response to either the previous or the current input and, not being the one to send the current input, cannot determine when to start and stop waiting for the output. Based on UIO sequences, a checking sequence construction method is proposed that yields a sequence free from controllability and observability problems.
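
The controllability condition can be sketched directly (the transition format, a pair of the port sending the input and the set of ports receiving outputs, is a simplification of the multi-port FSM setting; the UIO-based construction itself is not shown):

def controllability_problems(sequence):
    # A tester may send the next input only if it sent the previous input
    # or received part of the previous output.
    problems = []
    for i in range(1, len(sequence)):
        prev_in, prev_outs = sequence[i - 1]
        curr_in, _ = sequence[i]
        if curr_in != prev_in and curr_in not in prev_outs:
            problems.append(i)  # this tester cannot time its input
    return problems

# Step 1's tester (port 2) saw nothing of step 0, so it cannot synchronize.
seq = [(1, {1}), (2, {1, 2}), (1, {2})]
print(controllability_problems(seq))  # [1]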

Journal ArticleDOI
TL;DR: A protocol and framework are presented that utilise the Unified Modelling Language and adopt best practice from IT and social science methods, producing a clear linkage between stakeholder goals and expectations and IT functionality expressed as UML use cases.
Abstract: This paper relates experiences of using a business-process approach to the determination of requirements for social care systems. A method has been developed and used successfully with a number of major research projects, most specifically PLANEC. A protocol and framework are presented that utilise the Unified Modelling Language and adopt best practice from IT and social science methods. The method utilises a loosely coupled hierarchical grouping of processes as a strategic view, and more tightly coupled models such as workflows. The method, as it has evolved, has produced a clear linkage between stakeholder goals and expectations, and IT functionality expressed as UML use cases.

Journal ArticleDOI
TL;DR: In this paper, a broad range of alternative approaches to various aspects of representation and runtime support are identified, based on the analysis of an expressive number of systems; the identified functionality can serve both as a guide for the evaluation and selection of systems of this kind and as a roadmap for the development of new, improved systems.
Abstract: Process-Centered Software Development Environments are systems that provide automated support for software development activities. Such environments mediate the efforts of potentially large groups of developers working on a common project. This mediation is based on runtime support for actual work performance, grounded in formal representations of work. In the present work, we survey and assess the contributions of the software process literature from the perspective of support for collaboration and coordination. A broad range of alternative approaches to various aspects of representation and runtime support are identified, based on the analysis of an expressive number of systems. The identified functionality can serve both as a guide for the evaluation and selection of systems of this kind and as a roadmap for the development of new, improved systems.

Journal ArticleDOI
TL;DR: This work proposes three intraprocedural dynamic slicing algorithms which are more space and time efficient than the existing algorithms and introduces the concepts of jump dependence and Unstructured Program Dependence Graph.
Abstract: Dynamic slicing algorithms are used in interactive applications such as program debugging and testing. Therefore, these algorithms need to be very efficient. In this context, we propose three intraprocedural dynamic slicing algorithms which are more space and time efficient than the existing algorithms. Two of the proposed algorithms compute precise dynamic slices of structured programs using Program Dependence Graph as an intermediate representation. To compute precise dynamic slices of unstructured programs, we introduce the concepts of jump dependence and Unstructured Program Dependence Graph. The third algorithm uses Unstructured Program Dependence Graph as the intermediate program representation, and computes precise dynamic slices of unstructured programs. We show that each of our proposed algorithms is more space and time efficient than the existing algorithms.
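
Once a dependence graph is in hand, slice extraction itself is backward reachability (the graph below is invented for illustration; the algorithms' contribution lies in building and updating such graphs space- and time-efficiently during execution):

def backward_slice(dependences, criterion):
    # dependences: statement -> statements it is data/control/jump dependent on.
    sliced, worklist = set(), [criterion]
    while worklist:
        stmt = worklist.pop()
        if stmt not in sliced:
            sliced.add(stmt)
            worklist.extend(dependences.get(stmt, ()))
    return sliced

deps = {4: {2, 3}, 3: {1}, 2: {1}}      # e.g. statement 4 uses 2 and 3
print(sorted(backward_slice(deps, 4)))  # [1, 2, 3, 4]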