
Showing papers in "Information & Software Technology in 2005"


Journal ArticleDOI
TL;DR: An example of engineering a software tool for a specific bioinformatics task known as spliced alignment that not only overcomes the limitations of the earlier implementation but greatly improves space and time requirements is described.
Abstract: The research area now commonly called 'bioinformatics' has brought together biologists, computer scientists, statisticians, and scientists of many other fields of expertise to work on computational solutions to biological problems. A large number of algorithms and software packages are freely available for many specific tasks, such as sequence alignment, molecular phylogeny reconstruction, or protein structure determination. Rapidly changing needs and demands on data handling capacity challenge the application providers to consistently keep pace. In practice, this has led to many incremental advances and re-writing of code that present the user community with confusing options and a large overhead from non-standardized implementations that need to be integrated into existing work flows. This situation gives much scope for contributions by software engineers. In this article, we describe an example of engineering a software tool for a specific bioinformatics task known as spliced alignment. The problem was motivated by disabling limitations in an original, ad hoc, and yet widely popular implementation by one of the authors. The present collaboration has led to a robust, highly versatile, and extensible tool (named GenomeThreader) that not only overcomes the limitations of the earlier implementation but greatly improves space and time requirements.

273 citations


Journal ArticleDOI
TL;DR: The possibility of using a method known as ordinal regression to model the probability of correctly classifying a new project into a cost category is explored; the method is validated with respect to its fitting and predictive accuracy.
Abstract: In the area of software cost estimation, various methods have been proposed to predict the effort or the productivity of a software project. Although most of the proposed methods produce point estimates, in practice it is more realistic and useful for a method to provide interval predictions. In this paper, we explore the possibility of using such a method, known as ordinal regression, to model the probability of correctly classifying a new project into a cost category. The proposed method is applied to three data sets and is validated with respect to its fitting and predictive accuracy.

126 citations
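A minimal sketch of the interval-prediction idea described above, assuming an ordered-logit (proportional odds) model: instead of a point estimate, a new project receives a probability for each ordered cost category. The dataset, the two predictors, and the choice of statsmodels' OrderedModel are illustrative assumptions, not the paper's data or tooling.

```python
# Hypothetical data: project size (KLOC) and team experience (years),
# with ordered cost categories 0 = low, 1 = medium, 2 = high.
import numpy as np
from statsmodels.miscmodels.ordinal_model import OrderedModel

X = np.array([[10, 5], [25, 3], [40, 2], [12, 6],
              [55, 1], [30, 4], [20, 2], [45, 5]], dtype=float)
y = np.array([0, 1, 2, 0, 2, 1, 1, 1])

model = OrderedModel(y, X, distr="logit")
res = model.fit(method="bfgs", disp=False)

# "Interval" prediction: a probability per cost category for a new
# project, rather than a single point estimate of effort.
new_project = np.array([[35.0, 3.0]])
print(res.predict(new_project))  # [[P(low), P(medium), P(high)]]
```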


Journal ArticleDOI
TL;DR: An adaptive fuzzy logic framework for software effort prediction is presented that tolerates imprecision, explains prediction rationale through rules, incorporates experts' knowledge, offers transparency in the prediction system, and can adapt to new environments as new data becomes available.
Abstract: Algorithmic effort prediction models are limited by their inability to cope with uncertainties and imprecision present in software projects early in the development life cycle. In this paper, we present an adaptive fuzzy logic framework for software effort prediction. The training and adaptation algorithms implemented in the framework tolerate imprecision, explain prediction rationale through rules, incorporate experts' knowledge, offer transparency in the prediction system, and can adapt to new environments as new data becomes available. Our validation experiment was carried out on artificial datasets as well as the COCOMO public database. We also present an experimental validation of the training procedure employed in the framework.

113 citations
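To make the fuzzy-logic idea concrete, here is a toy sketch with triangular membership functions and a two-rule base, defuzzified by a weighted average (zero-order Sugeno style). The membership parameters, rule consequents, and units are invented for illustration; the paper's framework additionally trains and adapts its rules from data.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def predict_effort(size_kloc):
    # Fuzzify the input: degrees of membership in 'small' and 'large'.
    small = tri(size_kloc, 0, 10, 30)
    large = tri(size_kloc, 20, 60, 120)
    # Rule base: IF size is small THEN effort ~ 50 person-months;
    #            IF size is large THEN effort ~ 400 person-months.
    rules = [(small, 50.0), (large, 400.0)]
    num = sum(w * e for w, e in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else None  # weighted-average defuzzification

print(predict_effort(25))  # a size in the overlap region blends both rules
```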


Journal ArticleDOI
TL;DR: An investigation into the adoption and use of UML in the software development community is described, indicating a wide diversity of opinion regarding UML, reflecting the relative immaturity of the technology as well as the controversy over its effectiveness.
Abstract: The Unified Modeling Language (UML) has become the de facto standard for systems development and has been promoted as a technology that will help solve some of the longstanding problems in the software industry. However, there is still little empirical evidence supporting the claim that UML is an effective approach to modeling software systems. Indeed, there is much anecdotal evidence suggesting the contrary, i.e. that UML is overly complex, inconsistent, incomplete and difficult to learn. This paper describes an investigation into the adoption and use of UML in the software development community. A web-based survey was conducted eliciting responses from users of UML worldwide. Results indicate a wide diversity of opinion regarding UML, reflecting the relative immaturity of the technology as well as the controversy over its effectiveness. This paper discusses the results of the survey and charts the course for future research in UML usage.

113 citations


Journal ArticleDOI
TL;DR: A generic and extensible measurement framework for object-oriented software testability is presented, based on a theory expressed as a set of operational hypotheses to be further tested, to provide structured guidance for practitioners trying to measure design testability and to provide a theoretical framework for facilitating empirical research on testability.
Abstract: Testing is an expensive activity in the development process of any software system. Measuring and assessing the testability of software would help in planning testing activities and allocating required resources. More importantly, measuring software testability early in the development process, during the analysis or design stages, can yield the highest payoff, as design refactoring can be used to improve testability before implementation starts. This paper presents a generic and extensible measurement framework for object-oriented software testability, which is based on a theory expressed as a set of operational hypotheses. We identify design attributes that have an impact on testability, either directly or indirectly by affecting testing activities and sub-activities. We also describe the cause-effect relationships between these attributes and software testability, based on a thorough review of the literature and our own testing experience. Following the scientific method, we express them as operational hypotheses to be further tested. For each attribute, we provide a set of possible measures whose applicability largely depends on the level of detail of the design documents and the testing techniques to be applied. The goal of this framework is twofold: (1) to provide structured guidance for practitioners trying to measure design testability, and (2) to provide a theoretical framework for facilitating empirical research on testability.

90 citations


Journal ArticleDOI
Torgeir Dingsøyr1
TL;DR: The importance of postmortem reviews as a method for knowledge sharing in software projects is discussed, and an overview of such processes known in the field of software engineering is given.
Abstract: Conducting postmortems is a simple and practical method for organisational learning. Yet, not many companies have implemented such practices, and in a survey, few expressed satisfaction with how postmortems were conducted. In this article, we discuss the importance of postmortem reviews as a method for knowledge sharing in software projects, and give an overview of the postmortem processes known in the field of software engineering. In particular, we present three lightweight methods for conducting postmortems found in the literature, and discuss what criteria companies should use in defining their way of conducting postmortems.

89 citations


Journal ArticleDOI
TL;DR: A survey of technologies developed by researchers that can be used to combat degeneration is presented, that is, technologies that are employed in identifying, treating and researching degeneration.
Abstract: As software systems evolve over time, they invariably undergo changes that can lead to a degeneration of the architecture. Left unchecked, degeneration may reach a level where a complete redesign is necessary, a task that requires significant effort. In this paper, we present a survey of technologies developed by researchers that can be used to combat degeneration, that is, technologies that can be employed in identifying, treating and researching degeneration. We also discuss the various causes of degeneration and how it can be prevented.

88 citations


Journal ArticleDOI
TL;DR: This paper presents a multi-faceted system based on five research-focused characteristics: topic, approach, method, unit of analysis, and reference discipline that was designed based on the requirements for effective classification systems and was then used to investigate these five characteristics of research in the computing field.
Abstract: The field of computing is made up of several disciplines of which Computer Science, Software Engineering, and Information Systems are arguably three of the primary ones. Despite the fact that each discipline has a specific focus, there is also considerable overlap. Knowledge sharing, however, is becoming increasingly difficult as the body of knowledge in each discipline increases and specialization results. For effective knowledge sharing, it is therefore important to have a unified classification system by means of which the bodies of knowledge that constitute the field may be compared and contrasted. This paper presents a multi-faceted system based on five research-focused characteristics: topic, approach, method, unit of analysis, and reference discipline. The classification system was designed based on the requirements for effective classification systems, and was then used to investigate these five characteristics of research in the computing field.

86 citations


Journal ArticleDOI
TL;DR: A new, combined metamodel, named the Standard Metamodel for Software Development Methodologies (SMSDM), has been constructed which supports not only process but also products and capability assessment in the contexts of both software development and CSCW.
Abstract: Software development processes and methodologies to date have frequently been described purely textually. However, more recently, a number of metamodels have been constructed to both underpin and begin to formalize these methodologies. We have critically examined four of these: the Object Management Group's Software Process Engineering Metamodel (SPEM), the OPEN Process Framework (OPF), the OOSPICE metamodel for capability assessment and the LiveNet approach for computer-supported collaborative work (CSCW). Based on this analysis, a new, combined metamodel, named the Standard Metamodel for Software Development Methodologies (SMSDM), has been constructed which supports not only process but also products and capability assessment in the contexts of both software development and CSCW. As a proof of concept we conclude with a partial example to show how SMSDM (and by inference the other metamodels) is used in practice by creating a simple yet usable methodology.

83 citations


Journal ArticleDOI
TL;DR: The limitations of using virtual work in offshore systems development are discussed, and development processes and management procedures amenable to virtual work are described, along with a framework for using virtual work selectively when developing various types of information systems offshore.
Abstract: The market for offshore systems development, motivated by lower costs in developing countries, is expected to grow and reach about $15 billion in the year 2007. Virtual workgroups supported by computer and communication technologies enable offshore systems development. This article discusses the limitations of using virtual work in offshore systems development, and describes development processes and management procedures amenable to virtual work in offshore development projects. It also describes a framework for using virtual work selectively when developing various types of information systems offshore.

82 citations


Journal ArticleDOI
TL;DR: The issues involved in accepting this premise as a fundamental building block within empirical software engineering are investigated, and extending the traditional view of replication is recommended to improve the effectiveness of this essential process within the authors' domain.
Abstract: Recently, software engineering has witnessed a great increase in the amount of work with an empirical component; however, this work often has little or no established empirical framework within the topic to draw upon. Frequently, researchers use frameworks from other disciplines in an attempt to alleviate this deficiency. A common underpinning in these frameworks is that experimental replication is available as the cornerstone of knowledge discovery within the discipline. This paper investigates the issues involved in accepting this premise as a fundamental building block within empirical software engineering and recommends extending the traditional view of replication to improve the effectiveness of this essential process within our domain.

Journal ArticleDOI
TL;DR: An empirical assessment and improvement of the effort estimation model for corrective maintenance adopted in a major international software enterprise; it is shown that a linear model including the same variables achieves better performance.
Abstract: We present an empirical assessment and improvement of the effort estimation model for corrective maintenance adopted in a major international software enterprise. Our study was composed of two phases. In the first phase we used multiple linear regression analysis to construct effort estimation models validated against real data collected from five corrective maintenance projects. The model previously adopted by the subject company used as predictors the size of the system being maintained and the number of maintenance tasks. While this model was not linear, we show that a linear model including the same variables achieved better performance. We also show that greater improvements in model performance can be achieved if the types of the different maintenance tasks are taken into account. In the second phase we performed a replicated assessment of the effort prediction models built in the first phase on a new corrective maintenance project conducted by the subject company on a software system of the same type as the systems of the previous maintenance projects. The data available for the new project were finer grained, following the indications devised in the first study. This allowed us to improve the confidence in our previous empirical analysis by confirming most of the hypotheses made. The new data also provided other useful indications for understanding the maintenance process of the company in a quantitative way.
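A sketch of the first-phase model family: ordinary least squares over system size and per-type maintenance task counts, reflecting the finding that distinguishing task types improves the model. All numbers are invented; the company's project data is not public.

```python
import numpy as np

# Columns: system size (KLOC), count of type-A tasks, count of type-B tasks.
X = np.array([[100, 12, 3], [250, 30, 8], [80, 9, 2],
              [300, 41, 10], [150, 20, 5]], dtype=float)
effort = np.array([40, 110, 30, 160, 65], dtype=float)  # person-days

# Fit a linear model with an intercept by ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, effort, rcond=None)

# Predict corrective-maintenance effort for a new project.
new_project = np.array([1, 200, 25, 6], dtype=float)
print(float(new_project @ coef))
```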

Journal ArticleDOI
TL;DR: Results underscore the need to enrich current technology acceptance models with these constructs, and serve to encourage project managers to adopt formal SPI methods if developers perceive the methods will have positive impacts on their productivity and system quality.
Abstract: Numerous software process improvement (SPI) innovations have been proposed to improve software development productivity and system quality; however, their diffusion in practice has been disappointing. This research investigates the adoption of the Personal Software Process on industrial software projects. Quantitative and qualitative analyses reveal that perceived increases in software quality and development productivity, project management benefits, and innovation fit to development tasks, enhance the usefulness of the innovation to developers. Results underscore the need to enrich current technology acceptance models with these constructs, and serve to encourage project managers to adopt formal SPI methods if developers perceive the methods will have positive impacts on their productivity and system quality.

Journal ArticleDOI
TL;DR: Whether the problem is planning the train timetable, an unusual passenger flow, or an incident-caused delay, the model handles train regulation in the same way as timetable construction, which simplifies the work of administration.
Abstract: A new train operation model proposed here considers not only the flexibility of train regulation, i.e. the train rescheduling problem, but also the objectives of the timetabling process. A genetic algorithm is applied to solve this problem efficiently. Thus, whether the problem is planning the train timetable, handling an unusual passenger flow, or recovering from an incident-caused delay, our model handles train regulation in the same way as timetable construction. This simplifies the work of administration. Our model also shows that once a delay has occurred, the waiting time of the passengers becomes the cost of delaying every train. If the delay is not too large, the system has some room to remove its influence through our model.
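A toy genetic algorithm in the spirit of the model: chromosomes are per-train departure adjustments, and the fitness penalises passenger waiting plus headway conflicts, so the same search handles both timetabling and delay recovery. Train counts, delays, and penalty weights are invented.

```python
import random

N_TRAINS, POP, GENS = 5, 30, 60
delay = [0, 7, 0, 3, 0]   # incident-caused delays per train (minutes)
headway = 4               # minimum spacing between departures (minutes)

def fitness(adjust):
    dep = sorted(d + a for d, a in zip(delay, adjust))
    wait = sum(dep)  # proxy for total passenger waiting time
    conflict = sum(max(0, headway - (b - a))  # headway violations
                   for a, b in zip(dep, dep[1:]))
    return wait + 100 * conflict

def mutate(chrom):
    child = list(chrom)
    i = random.randrange(N_TRAINS)
    child[i] = max(0, child[i] + random.choice([-1, 1]))
    return child

pop = [[random.randint(0, 10) for _ in range(N_TRAINS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness)            # elitist selection: keep the best half
    parents = pop[:POP // 2]
    pop = parents + [mutate(random.choice(parents))
                     for _ in range(POP - len(parents))]

best = min(pop, key=fitness)
print(best, fitness(best))  # adjustments that absorb delays without conflicts
```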

Journal ArticleDOI
TL;DR: The findings are important for virtual development environments; they allow further research focusing on a framework for lowering residual defects, and give insights that can be used immediately by practitioners to devise strategies for lowering residual defects.
Abstract: This paper explores the effects of virtual development on product quality, from the viewpoint of 'conformance to specifications'. Virtual Development refers to the development of products by teams distributed across space, time, and organization boundaries (hence virtual teams). Specifically, causes of defect injection and non- or late-detection are explored. Because of the practical difficulties of obtaining hard project-specific defect data, an approach was taken that relied upon accumulated expert knowledge. The accumulated expert knowledge based approach was found to be a practical alternative to an in-depth defect causal analysis on a per-project basis. Defect injection causes are concentrated in the Requirements Specification phases. Thus defect dispersion is likely to increase, as requirements specifications are input for derived requirements specifications in multiple, related sub-projects. Similarly, a concentration of causes for the non- or late-detection of defects was found in the Integration Test phases. Virtual development increases the likelihood of defects in the end product because of the increased likelihood of defect dispersion, because of new virtual development related defect causes, and because causes already existing in co-located development are more likely to occur. The findings are important for virtual development environments and (1) allow further research focusing on a framework for lowering residual defects, and (2) give insights that can be used immediately by practitioners to devise strategies for lowering residual defects.

Journal ArticleDOI
Michael Jackson1
TL;DR: A view of the approach is suggested by an important account of engineering in the aeronautical industry: in particular, the problem classes captured by elementary problem frames are likened to those solved in established engineering branches by normal, rather than radical, design.
Abstract: A general account is given of the problem frames approach to the development of software-intensive systems, assuming that the reader is already familiar with its basic ideas. The approach is considered in the light of the long-standing aspiration of software developers to merit a place among practitioners of the established branches of engineering. Some of its principles are examined, and some comments offered on the range of its applicability. A view of the approach is suggested by an important account of engineering in the aeronautical industry: in particular, the problem classes captured by elementary problem frames are likened to those solved in established engineering branches by normal, rather than radical, design. The relative lack of specialisation in software development is identified as an important factor holding back the evolution of normal design practice in some areas.

Journal ArticleDOI
TL;DR: This work has proposed a methodology for the design of secure databases, and implemented an extension of Rational Rose, including and managing security information and constraints in the first stages of the methodology.
Abstract: Security is an important issue that must be considered as a fundamental requirement in information systems development, and particularly in database design. Therefore security, as a further quality property of software, must be tackled at all stages of development. The most widespread secure database model is the multilevel model, which permits the classification of information according to its confidentiality, and considers mandatory access control. Nevertheless, the problem is that no database design methodologies currently exist that consider security (and therefore secure database models) across the entire life cycle, particularly at the earliest stages. Therefore it is not possible to design secure databases appropriately. Our aim is to solve this problem by proposing a methodology for the design of secure databases. In addition to this methodology, we have defined some models that allow us to include security information in the database model, and a constraint language to define security constraints. As a result, we can specify a fine-grained classification of the information, defining with a high degree of accuracy which properties each user has to own in order to be able to access each piece of information. The methodology consists of four stages: requirements gathering; database analysis; multilevel relational logical design; and specific logical design. The first three stages define activities to analyze and design a secure database, thus producing a general secure database model. The last stage is made up of activities that adapt the general secure data model to one of the most popular secure database management systems: Oracle9i Label Security. This methodology has been used in a genuine case by the Data Processing Center of a Provincial Government. In order to support the methodology, we have implemented an extension of Rational Rose, including and managing security information and constraints in the first stages of the methodology.
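The multilevel model the methodology builds on enforces mandatory access control over labelled data. A minimal sketch of the classic rule pair ("no read up, no write down"), with illustrative labels and rows rather than the paper's Oracle9i Label Security syntax:

```python
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP_SECRET": 3}

def can_read(subject, row):
    return LEVELS[subject] >= LEVELS[row]   # no read up

def can_write(subject, row):
    return LEVELS[subject] <= LEVELS[row]   # no write down

rows = [("salary", "CONFIDENTIAL"), ("informant", "TOP_SECRET")]
clearance = "SECRET"
visible = [name for name, label in rows if can_read(clearance, label)]
print(visible)  # ['salary'] -- the TOP_SECRET row is filtered out
```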

Journal ArticleDOI
TL;DR: The WinCBAM framework is proposed, extending an architecture design method, the Cost Benefit Analysis Method (CBAM), to include an explicit requirements negotiation component based on the WinWin methodology; the integrated method is shown to be substantially more powerful than the WinWin and CBAM methods performed separately.
Abstract: Architecture design and requirements negotiation are conceptually tightly related but often performed separately in real-world software development projects. As our prior case studies have revealed, this separation causes uncertainty in requirements negotiation that hinders progress, limits the success of architecture design, and often leads to wasted effort and substantial re-work later in the development life-cycle. Explicit requirements elicitation and negotiation is needed to be able to appropriately consider and evaluate architecture alternatives, and the architecture alternatives need to be understood during requirements negotiation. This paper proposes the WinCBAM framework, which extends an architecture design method, the Cost Benefit Analysis Method (CBAM), to include an explicit requirements negotiation component based on the WinWin methodology. We then provide a retrospective case study that demonstrates the use of WinCBAM. We show that the integrated method is substantially more powerful than the WinWin and CBAM methods performed separately. The integrated method can assist stakeholders to elicit, explore, evaluate, negotiate, and agree upon software architecture alternatives based on each of their requirement Win conditions. By understanding the architectural implications of requirements, they can be negotiated more successfully: potential requirements conflicts can be discovered or alleviated relatively early in the development life-cycle.

Journal ArticleDOI
TL;DR: The fundamental difference between conventional variability and component variability is identified, and five types of variability and three kinds of variability scope are presented, each precisely defined together with its applicable situations and guidelines.
Abstract: Component-Based Development (CBD) is revolutionizing the process of building applications by assembling pre-built reusable components. Components should be designed more for inter-organizational reuse than for intra-organizational reuse, through domain analysis which captures the commonality of the target domain. Moreover, the minor variations within the commonality should also be modeled and reflected in the design of components so that family members can effectively customize the components for their own purposes. To carry out domain analysis effectively and design widely reusable components, precise definitions of variability-related terms and a classification of variability types must be made. In this paper, we identify the fundamental difference between conventional variability and component variability, and present five types of variability and three kinds of variability scope. Each type of variability is precisely defined together with its applicable situations and guidelines. With such a formal view of variability, not only domain analysis but also component customization can be carried out effectively and in a precise manner.

Journal ArticleDOI
TL;DR: The unified mapping of UML models into function points is formally described to enable the automation of the counting procedure, and it is shown that accuracy increases with each subsequent abstraction level.
Abstract: A systematic approach to software size estimation is important for accurate project planning. In this paper, we propose a unified mapping of UML models into function points. The mapping is formally described to enable the automation of the counting procedure. Three estimation levels are defined that correspond to the different abstraction levels of the software system. The level of abstraction influences an estimate's accuracy. Our research, based on a small data set, showed that accuracy increases with each subsequent abstraction level. Changes to the FPA complexity tables for transactional functions are also proposed in order to better quantify the characteristics of object-oriented software.
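The counting step being automated can be sketched with the standard FPA complexity weights for transactional functions (external inputs, outputs, and inquiries). The transaction list below is invented, and the paper's proposed changes to the complexity tables are not reproduced:

```python
FPA_WEIGHTS = {          # type -> (low, average, high) complexity weights
    "EI": (3, 4, 6),     # external input
    "EO": (4, 5, 7),     # external output
    "EQ": (3, 4, 6),     # external inquiry
}
COMPLEXITY = {"low": 0, "average": 1, "high": 2}

# Transactions identified from UML use-case/sequence models.
transactions = [("EI", "average"), ("EO", "high"), ("EQ", "low")]

ufp = sum(FPA_WEIGHTS[t][COMPLEXITY[c]] for t, c in transactions)
print(ufp)  # 4 + 7 + 3 = 14 unadjusted function points
```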

Journal ArticleDOI
TL;DR: A literature survey of component based system quality assurance and assessment; the areas surveyed include formalism, cost estimation, and assessment and measurement techniques for the following quality attributes: performance, reliability, maintainability and testability.
Abstract: Component Based Software Development (CBSD) is focused on assembling existing components to build a software system, with a potential benefit of delivering quality systems by using quality components. It departs from the conventional software development process in that it is integration centric as opposed to development centric. Using high quality components does not therefore necessarily guarantee a component based system of high quality; system quality also depends on the framework and integration process used. Hence, techniques and methods for quality assurance and assessment of a component based system differ from those of traditional software engineering methodology. It is essential to quantify factors that contribute to the overall quality, for instance, the trade-off between cost and quality of a component, analytical techniques and formal methods, and quality attribute definitions and measurements. This paper presents a literature survey of component based system quality assurance and assessment; the areas surveyed include formalism, cost estimation, and assessment and measurement techniques for the following quality attributes: performance, reliability, maintainability and testability. The aim of this survey is to help provide a better understanding of CBSD in these aspects in order to facilitate the realisation of its potential benefits of delivering quality systems.

Journal ArticleDOI
TL;DR: The testability measurement the authors propose counts the number and the complexity of interactions that must be covered during testing, and the approach is illustrated on application examples.
Abstract: Design-for-testability is a very important issue in software engineering. It becomes crucial in the case of OO designs where control flows are generally not hierarchical, but are diffuse and distributed over the whole architecture. In this paper, we concentrate on detecting, pinpointing and suppressing potential testability weaknesses of a UML class diagram. The design attribute significant for testability is called 'class interaction', and it is generalized in the notion of a testability anti-pattern: it appears when potentially concurrent client/supplier relationships between classes exist in the system. These interactions point out parts of the design that need to be improved, driving structural modifications or constraint specifications, to reduce the final testing effort. The testability measurement we propose counts the number and the complexity of interactions that must be covered during testing. The approach is illustrated on application examples.
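One plausible reading of the interaction count is sketched below: flag class pairs connected by two or more distinct client/supplier paths in the class diagram, since such potentially concurrent interactions add to the testing effort. The example diagram and the path-counting criterion are illustrative assumptions, not the paper's exact measure.

```python
from itertools import permutations

# Directed client -> supplier edges of a small class diagram.
uses = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

def count_paths(src, dst, seen=()):
    """Number of acyclic client/supplier paths from src to dst."""
    if src == dst:
        return 1
    return sum(count_paths(nxt, dst, seen + (src,))
               for nxt in uses[src] if nxt not in seen)

interactions = [(s, t) for s, t in permutations(uses, 2)
                if count_paths(s, t) > 1]
print(interactions)  # [('A', 'D')] -- A reaches D via both B and C
```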

Journal ArticleDOI
Petri Kettunen1, Maarit Laanti1
TL;DR: This paper proposes a process model selection frame, which the project manager can use as a systematic guide for (re)choosing the project's process model.
Abstract: Modern large new product developments (NPD) are typically characterized by many uncertainties and frequent changes. Often the embedded software development projects working on such products face many problems compared to traditional, placid project environments. One of the major project management decisions is then the selection of the project's software process model. An appropriate process model helps in coping with the challenges and prevents many potential project problems, while an unsuitable process choice causes additional ones. This paper investigates software process model selection in the context of large market-driven embedded software product development for new telecommunications equipment. Based on a quasi-formal comparison of publicly known software process models, including modern agile methodologies, we propose a process model selection frame, which the project manager can use as a systematic guide for (re)choosing the project's process model. A novel feature of this comparative selection model is that the comparison is made against typical software project problem issues. Some real-life project case examples are examined against this model. The selection matrix expresses how different process models answer different questions, and indeed no single process model answers all of them. On the contrary, some of the seeds of project problems lie in the process models themselves. However, being conscious of these problems and pitfalls when steering a project enables the project manager to master the situation.
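A minimal sketch of how such a selection matrix could drive a choice: score each candidate process model against the project's weighted problem issues and rank the models. The models, issues, scores, and weights are invented placeholders, not the paper's actual matrix.

```python
matrix = {  # model -> how well it answers each problem issue (0-2)
    "waterfall":   {"volatile_requirements": 0, "tight_schedule": 1},
    "incremental": {"volatile_requirements": 1, "tight_schedule": 2},
    "agile":       {"volatile_requirements": 2, "tight_schedule": 2},
}
# Weight each issue by how severe it is for this particular project.
project_issues = {"volatile_requirements": 3, "tight_schedule": 1}

def score(model):
    return sum(matrix[model][issue] * weight
               for issue, weight in project_issues.items())

ranked = sorted(matrix, key=score, reverse=True)
print(ranked[0], score(ranked[0]))  # best-fitting model and its score
```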

Journal ArticleDOI
TL;DR: This paper debunks myths about how smoothly such organizational transformations take place, describes case studies showing how organizational transformation really takes place, and introduces and confirms some guidelines for eliciting requirements and the relevant emotional issues for a computer-based system being introduced into an organization to change its work patterns.
Abstract: Traditional approaches to requirements elicitation stress systematic and rational analysis and representation of organizational context and system requirements. This paper argues that the introduction of any computer-based system to an organization transforms the organization and changes the work patterns of the system's users in the organization. These changes interact with the users' values and beliefs and trigger emotional responses which are sometimes directed against the computer-based system and its proponents. The paper debunks myths about how smoothly such organizational transformations take place, describes case studies showing how organizational transformation really takes place, and introduces and confirms by case studies some guidelines for eliciting requirements and the relevant emotional issues for a computer-based system that is being introduced into an organization to change its work patterns.

Journal ArticleDOI
TL;DR: A new approach to discover strong multilevel spatial association rules in spatial databases, based on partitioning the set of rows with respect to the spatial relations denoted as relation table R, is presented.
Abstract: Spatial data mining has been identified as an important task for the understanding and use of spatial data- and knowledge-bases. In this paper, we present a new approach to discover strong multilevel spatial association rules in spatial databases, based on partitioning the set of rows with respect to the spatial relations, denoted as relation table R. Meanwhile, the introduction of the equivalence partition tree makes the discovery of multilevel spatial association rules easy and efficient. Experiments show that the new algorithm is efficient.
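A simplified sketch of mining rules from a relation table R, where each row lists the spatial predicates holding for one reference object; frequent predicate pairs above a support threshold yield candidate rules. The data and threshold are invented, and the paper's equivalence partition tree optimisation is not reproduced here.

```python
from itertools import combinations
from collections import Counter

# Each row of R: spatial predicates observed for one town (illustrative).
R = [
    {"close_to(water)", "close_to(road)"},
    {"close_to(water)", "close_to(road)", "inside(park)"},
    {"close_to(water)", "close_to(road)"},
    {"close_to(road)"},
]
min_support = 0.5
n = len(R)

pair_counts = Counter(pair for row in R
                      for pair in combinations(sorted(row), 2))
for (a, b), cnt in pair_counts.items():
    if cnt / n >= min_support:
        confidence = cnt / sum(1 for row in R if a in row)
        print(f"{a} -> {b}  support={cnt / n:.2f} confidence={confidence:.2f}")
```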

Journal ArticleDOI
TL;DR: In this article, an approach for deriving and contextualising software requirements through use of the problem frames approach from business process models is proposed, and applied on a live industrial e-business project in which they assess the relevance and usefulness of problem frames as a means of describing the requirements context.
Abstract: Jackson's problem frames approach is a means of describing recurring software problems. It is presumed that some knowledge of the application domain and context has been gathered so that an appropriate problem frame can be determined. However, the identification of aspects of the problem, and its appropriate 'framing', is recognised as a difficult task. One way to describe a software problem context is through process modelling. Once contextual information has been elicited, and explicitly described, an understanding of what problems need to be solved should emerge. However, this use of process models to inform requirements is often rather ad hoc; the traceability from business process to software requirement is not always as straightforward as it ought to be. Hence, this paper proposes an approach for deriving and contextualising software requirements from business process models through use of the problem frames approach. We apply the approach on a live industrial e-business project in which we assess the relevance and usefulness of problem frames as a means of describing the requirements context. We found that the software problem did not always match easily with Jackson's five existing frames. Where no frame was identified, however, we found that Jackson's problem diagrams did couch the requirements in their right context, and thus application of the problem frames approach was useful. This implies a need for further work in adapting the problem frames approach to the context of e-business systems.

Journal ArticleDOI
TL;DR: This paper analyses BP's use of an innovative 'multi-enterprise asset management system' that supports and enables the asset management strategy of BP's exploration and production division on the UK continental shelf.
Abstract: BP is one of the largest energy companies in the world, with 2003 revenues of $233 billion. In this paper, we analyse its use of an innovative 'multi-enterprise asset management system' that supports and enables the asset management strategy of BP's exploration and production division on the UK continental shelf (UKCS). The analysis focuses on how BP connects its business processes with over 1500 suppliers to co-ordinate the maintenance, operation and repair of specialised exploration and production equipment. The systems strategy is novel because it takes the enterprise computing concept and implements it across organisational boundaries, hence the term 'multi-enterprise system'. This use of a shared system with all of its suppliers is distinct from the most common way of connecting with economic partners, which is to use shared data systems based on common data standards and communication technologies such as EDI and, more recently, XML-based systems within vertical industries such as RosettaNet. The design of the multi-enterprise system is based on a sophisticated business process management system called Maximo, and this is used to illustrate the systems design aspect of the overall information system in the broader contexts of business strategy and information technology infrastructure.

Journal ArticleDOI
TL;DR: A tool, called μcROSE, is introduced that automatically measures the functional software size, as defined by the COSMIC-FFP method, for Rational Rose RealTime models, and it can be integrated into the Rational Rose realTime toolset.
Abstract: During the last 10 years, many organizations have invested resources and energy in order to be rated at the highest possible level according to some maturity model for software development. Since measures play an important role in these models, it is essential that CASE tools offer facilities to automatically measure the sizes of various documents produced using them. This paper introduces a tool, called μcROSE, that automatically measures functional software size, as defined by the COSMIC-FFP method, for Rational Rose RealTime models. μcROSE streamlines the measurement process, ensuring repeatability and consistency in measurement while reducing measurement cost. It is the first tool to address automatic measurement of COSMIC-FFP, and it can be integrated into the Rational Rose RealTime toolset.
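The measurement rule being automated is easy to state: COSMIC-FFP sizes each functional process by counting its data movements (Entry, Exit, Read, Write), one COSMIC functional size unit (Cfsu) each. A sketch with an invented set of processes; μcROSE extracts the movements from Rational Rose RealTime models instead.

```python
COSMIC_MOVEMENTS = {"Entry", "Exit", "Read", "Write"}

def cosmic_ffp_size(processes):
    """Total functional size in Cfsu: one unit per data movement."""
    return sum(1 for moves in processes.values()
               for m in moves if m in COSMIC_MOVEMENTS)

# Hypothetical functional processes and their data movements.
processes = {
    "handle_alarm":  ["Entry", "Read", "Write", "Exit"],
    "report_status": ["Entry", "Read", "Exit"],
}
print(cosmic_ffp_size(processes))  # 7 Cfsu
```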

Journal ArticleDOI
TL;DR: This special issue features two extended papers from the workshop, an invited contribution from Jackson in which he positions problem frames in the context of the software engineering discipline, and this article, which provides a review of the literature.
Abstract: It has been a decade since Michael Jackson introduced problem frames to the software engineering community. Since then, he has published further work addressing problem frames as well as presenting several keynote addresses. Other authors have researched problem frames, have written about their experiences and have expressed their opinions. It was not until 2004 that an opportunity presented itself for researchers in the field to gather as a community. The first International Workshop on Advances and Applications of Problem Frames (IWAAPF'04) was held at the International Conference on Software Engineering in Edinburgh on 24th May 2004. This event attracted over 30 participants: Jackson delivered a keynote address, researchers presented their work and an expert panel discussed the challenges of problem frames. Featured in this special issue are two extended papers from the workshop, an invited contribution from Jackson in which he positions problem frames in the context of the software engineering discipline, and this article, in which we provide a review of the literature.

Journal ArticleDOI
TL;DR: The results indicate that collaboration diagrams are easier to comprehend than sequence diagrams in RT systems, but there is no difference in comprehension of the two diagram types in MIS, while more correct diagrams are created in MIS applications than in RT applications.
Abstract: UML (Unified Modeling Language) is a collection of somewhat overlapping modeling techniques, which creates a difficulty in establishing practical guidelines for selecting the most suitable techniques for modeling OO artifacts. This is true mainly with respect to two types of interaction diagrams: sequence and collaboration. Attempts have been made to evaluate the comprehensibility of these diagram types for various types of applications, but they did not address the issue of the quality of diagrams created by analysts. This article reports the findings from a controlled experiment where both the comprehensibility and quality of the interaction diagrams were investigated in two application domains: management information systems (MIS) and real-time (RT) systems. Our results indicate that collaboration diagrams are easier to comprehend than sequence diagrams in RT systems, but there is no difference in comprehension of the two diagram types in MIS. Irrespective of the diagram type, it is easier to comprehend interaction diagrams of MIS than of RT systems. With respect to diagram quality, in the case of MIS, analysts create collaboration diagrams of better quality than sequence diagrams, but there is no significant difference in the quality of diagrams created in RT systems. Irrespective of the diagram type, more correct diagrams are created in MIS applications than in RT applications.