
Showing papers in "Information & Software Technology in 2006"


Journal ArticleDOI
TL;DR: The results show that the statistical power of software engineering experiments falls substantially below accepted norms as well as the levels found in the related discipline of information systems research.
Abstract: Statistical power is an inherent part of empirical studies that employ significance testing and is essential for the planning of studies, for the interpretation of study results, and for the validity of study conclusions. This paper reports a quantitative assessment of the statistical power of empirical software engineering research based on the 103 papers on controlled experiments (of a total of 5,453 papers) published in nine major software engineering journals and three conference proceedings in the decade 1993-2002. The results show that the statistical power of software engineering experiments falls substantially below accepted norms as well as the levels found in the related discipline of information systems research. Given this study's findings, additional attention must be directed to the adequacy of sample sizes and research designs to ensure acceptable levels of statistical power. Furthermore, the current reporting of significance tests should be enhanced by also reporting effect sizes and confidence intervals.
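The paper's call to report effect sizes and confidence intervals alongside significance tests is easy to act on. Below is a minimal Python sketch with made-up effort data from a hypothetical two-group experiment, computing Cohen's d from a pooled standard deviation and an approximate 95% confidence interval for the mean difference (the t critical value 2.145 assumes 14 degrees of freedom).

```python
import math
from statistics import mean, stdev

# Hypothetical effort measurements (hours) from a two-group controlled experiment.
treatment = [41, 38, 45, 50, 39, 43, 47, 42]
control = [48, 52, 46, 55, 49, 51, 44, 53]

n1, n2 = len(treatment), len(control)
m1, m2 = mean(treatment), mean(control)
s1, s2 = stdev(treatment), stdev(control)

# Cohen's d: standardized mean difference using the pooled standard deviation.
pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
d = (m1 - m2) / pooled_sd

# Approximate 95% CI for the mean difference (t critical value for df = 14).
se = pooled_sd * math.sqrt(1 / n1 + 1 / n2)
diff = m1 - m2
print(f"d = {d:.2f}, 95% CI = ({diff - 2.145 * se:.1f}, {diff + 2.145 * se:.1f})")
```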

283 citations


Journal ArticleDOI
TL;DR: An industrial case study of a distributed team in the USA and the Czech Republic that used Extreme Programming suggests that, if critical enabling factors are addressed, methodologies dependent on informal communication can be used on global software development projects.
Abstract: We conducted an industrial case study of a distributed team in the USA and the Czech Republic that used Extreme Programming. Our goal was to understand how this globally-distributed team created a successful project in a new problem domain using a methodology that is dependent on informal, face-to-face communication. We collected quantitative and qualitative data and used grounded theory to identify four key factors for communication in globally-distributed XP teams working within a new problem domain. Our study suggests that, if these critical enabling factors are addressed, methodologies dependent on informal communication can be used on global software development projects.

273 citations


Journal ArticleDOI
TL;DR: It is concluded that the figure of 189% for cost overruns is probably much too high to represent typical software projects in the 1990s and that a continued use of that figure as a reference point for estimation accuracy may lead to poor decision making and hinder progress in estimation practices.
Abstract: The Standish Group reported in their 1994 CHAOS report that the average cost overrun of software projects was as high as 189%. This figure for cost overrun is referred to frequently by scientific researchers, software process improvement consultants, and government advisors. In this paper, we review the validity of the Standish Group's 1994 cost overrun results. Our review is based on a comparison of the 189% cost overrun figure with the cost overrun figures reported in other cost estimation surveys, and an examination of the Standish Group's survey design and analysis methods. We find that the figure reported by the Standish Group is much higher than those reported in similar estimation surveys and that there may be severe problems with the survey design and methods of analysis, e.g. the population sampling method may be strongly biased towards 'failure projects'. We conclude that the figure of 189% for cost overruns is probably much too high to represent typical software projects in the 1990s and that a continued use of that figure as a reference point for estimation accuracy may lead to poor decision making and hinder progress in estimation practices.

240 citations


Journal ArticleDOI
TL;DR: A basic software measurement ontology is introduced that aims to contribute to the harmonization of the different software measurement proposals and standards by providing a coherent set of common concepts used in software measurement.
Abstract: Although software measurement plays an increasingly important role in Software Engineering, there is no consensus yet on many of the concepts and terminology used in this field. Even worse, vocabulary conflicts and inconsistencies can frequently be found amongst the many sources and references commonly used by software measurement researchers and practitioners. This article presents an analysis of the current situation and provides a comparison framework that can be used to identify and address the discrepancies, gaps, and terminology conflicts that current software measurement proposals present. A basic software measurement ontology is introduced that aims to contribute to the harmonization of the different software measurement proposals and standards by providing a coherent set of common concepts used in software measurement. The ontology is also aligned with the metrology vocabulary used in other, more mature measurement engineering disciplines.

208 citations


Journal ArticleDOI
TL;DR: A "one-test-at-a-time" greedy method is adapted to take the importance of pairs into account, so that when run to completion all pair-wise interactions are tested, but when terminated after any intermediate number of tests, those deemed most important are tested.
Abstract: Interaction testing is widely used in screening for faults. In software testing, it provides a natural mechanism for testing systems to be deployed on a variety of hardware and software configurations. In many applications where interaction testing is needed, the entire test suite is not run as a result of time or budget constraints. In these situations, it is essential to prioritize the tests. Here, we adapt a "one-test-at-a-time" greedy method to take the importance of pairs into account. The method can be used to generate a set of tests in order, so that when run to completion all pair-wise interactions are tested, but when terminated after any intermediate number of tests, those deemed most important are tested. In addition, the practical concerns of seeding (tests that must be included) and avoids (combinations that must be excluded) are addressed. Computational results are reported.
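A minimal sketch of the idea, not the authors' algorithm: a one-test-at-a-time greedy loop in Python that repeatedly picks the candidate test covering the greatest total weight of still-uncovered pairs. The factors, levels and pair weights are invented for illustration.

```python
from itertools import combinations, product

# Hypothetical configuration model: each factor with its possible levels.
factors = {"os": ["linux", "win"], "db": ["mysql", "pg"], "browser": ["ff", "chrome"]}

# Assumed importance weights for factor pairs (higher = test earlier).
pair_weight = {("os", "db"): 3.0, ("os", "browser"): 1.0, ("db", "browser"): 2.0}

names = list(factors)
uncovered = {((f1, v1), (f2, v2)): pair_weight[(f1, f2)]
             for f1, f2 in combinations(names, 2)
             for v1 in factors[f1] for v2 in factors[f2]}

def score(test):
    """Total weight of not-yet-covered pairs the candidate test would cover."""
    return sum(uncovered.get(p, 0.0) for p in combinations(zip(names, test), 2))

suite = []
while uncovered:
    # One test at a time: pick the candidate covering the most uncovered weight.
    best = max(product(*factors.values()), key=score)
    suite.append(best)
    for p in combinations(zip(names, best), 2):
        uncovered.pop(p, None)

for t in suite:
    print(dict(zip(names, t)))
```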

203 citations


Journal ArticleDOI
TL;DR: The main differences between Web-based applications and traditional ones, how these differences impact the testing of the former ones, and some relevant contributions in the field of Web application testing developed in recent years are presented.
Abstract: Software testing is a difficult task, and testing Web-based applications may be even more difficult, due to the peculiarities of such applications. In recent years, several problems in the field of Web-based application testing have been addressed by research, and several methods and techniques have been defined and used to test Web-based applications effectively. This paper presents the main differences between Web-based applications and traditional ones, how these differences impact the testing of the former, and some relevant contributions in the field of Web application testing developed in recent years. The focus is mainly on testing the functionality of a Web-based application, although some discussion of the testing of non-functional requirements is provided too. Some indications about future trends in Web application testing are also outlined in the paper.

199 citations


Journal ArticleDOI
TL;DR: The results suggest that the Bayesian network model can predict maintainability more accurately than the regression-based models for one system, and almost as accurately as the best regression- based model for the other system.
Abstract: As the number of object-oriented software systems increases, it becomes more important for organizations to maintain those systems effectively. However, currently only a small number of maintainability prediction models are available for object-oriented systems. This paper presents a Bayesian network maintainability prediction model for an object-oriented software system. The model is constructed using object-oriented metric data in Li and Henry's datasets, which were collected from two different object-oriented systems. Prediction accuracy of the model is evaluated and compared with commonly used regression-based models. The results suggest that the Bayesian network model can predict maintainability more accurately than the regression-based models for one system, and almost as accurately as the best regression-based model for the other system.
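As a flavor of the approach (a toy sketch, not the paper's model, with invented probabilities rather than values learned from Li and Henry's datasets): a small discrete Bayesian network predicting maintainability from metric nodes can be evaluated by direct enumeration.

```python
# Toy Bayesian network: a hidden binary "complexity" node and an observed
# "class size" node feeding a binary "maintainability" node.
# All probabilities are illustrative only.
p_complexity = {"low": 0.6, "high": 0.4}

# P(maintainability = "good" | complexity, class size)
p_good = {("low", "small"): 0.9, ("low", "large"): 0.7,
          ("high", "small"): 0.5, ("high", "large"): 0.2}

def p_good_given_size(size):
    """P(good | size): marginalize out the unobserved complexity parent."""
    return sum(p_complexity[c] * p_good[(c, size)] for c in p_complexity)

print("P(good | small classes) =", p_good_given_size("small"))  # 0.74
print("P(good | large classes) =", p_good_given_size("large"))  # 0.50
```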

187 citations


Journal ArticleDOI
TL;DR: B-SCP is presented, a requirements engineering framework for organizational IT that directly addresses an organization's business strategy and the alignment of IT requirements with that strategy.
Abstract: Ensuring that organizational IT is in alignment with and provides support for an organization's business strategy is critical to business success. Despite this, business strategy and strategic alignment issues are all but ignored in the requirements engineering research literature. We present B-SCP, a requirements engineering framework for organizational IT that directly addresses an organization's business strategy and the alignment of IT requirements with that strategy. B-SCP integrates the three themes of strategy, context, and process using a requirements engineering notation for each theme. We demonstrate a means of cross-referencing and integrating the notations with each other, enabling explicit traceability between business processes and business strategy. In addition, we show a means of defining requirements problem scope as a Jackson problem diagram by applying a business modeling framework. Our approach is illustrated via application to an exemplar. The case example demonstrates the feasibility of B-SCP, and we present a comparison with other approaches.

186 citations


Journal ArticleDOI
TL;DR: This paper investigates the effect on estimation accuracy of adopting a genetic algorithm (GA) to determine appropriate weighted similarity measures of effort drivers in analogy-based software effort estimation models, and demonstrates that the nonlinearly weighted analogy method achieves better estimation accuracy than the other methods.
Abstract: A reliable and accurate estimate of software development effort has always been a challenge for both the software industry and academia. Analogy is a widely adopted problem solving technique that has been evaluated and confirmed in software effort or cost estimation domains. Similarity measures between pairs of effort drivers play a central role in analogy-based estimation models. However, hardly any research has addressed the issue of how to decide on suitable weighted similarity measures for software effort drivers. The present paper investigates the effect on estimation accuracy of adopting a genetic algorithm (GA) to determine the appropriate weighted similarity measures of effort drivers in analogy-based software effort estimation models. Three weighted analogy methods, namely the unequally weighted, the linearly weighted and the nonlinearly weighted methods, are investigated in the present paper. We illustrate our approaches with data obtained from the International Software Benchmarking Standards Group (ISBSG) repository and the IBM DP services database. The experimental results show that applying GA to determine suitable weighted similarity measures of software effort drivers in analogy-based software effort estimation models is a feasible approach to improving the accuracy of software effort estimates. They also demonstrate that the nonlinearly weighted analogy method achieves better estimation accuracy than the other methods.
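To make the idea concrete, here is a hedged Python sketch (invented project data, not ISBSG or IBM DP records): a nearest-neighbour analogy estimator with a weighted squared-distance similarity, and a small GA searching for driver weights that minimize the leave-one-out mean magnitude of relative error (MMRE).

```python
import random

random.seed(1)

# Hypothetical historical projects: ((effort drivers...), actual effort).
projects = [((2.0, 3.0, 1.0), 120.0), ((4.0, 1.0, 2.0), 200.0),
            ((3.0, 3.0, 3.0), 260.0), ((1.0, 2.0, 1.0), 90.0),
            ((5.0, 4.0, 2.0), 310.0), ((2.0, 1.0, 3.0), 150.0)]

def estimate(target, weights, pool):
    """Analogy estimate: effort of the most similar project under weighted distance."""
    dist = lambda f: sum(w * (a - b) ** 2 for w, a, b in zip(weights, f, target))
    return min(pool, key=lambda p: dist(p[0]))[1]

def mmre(weights):
    """Mean magnitude of relative error, leave-one-out over the historical set."""
    errs = []
    for i, (feat, actual) in enumerate(projects):
        pool = projects[:i] + projects[i + 1:]
        errs.append(abs(estimate(feat, weights, pool) - actual) / actual)
    return sum(errs) / len(errs)

# A tiny GA over the weight vector: elitism, blend crossover, Gaussian mutation.
pop = [[random.random() for _ in range(3)] for _ in range(20)]
for gen in range(30):
    pop.sort(key=mmre)
    nxt = pop[:4]  # keep the best weight vectors unchanged
    while len(nxt) < 20:
        a, b = random.sample(pop[:10], 2)
        child = [(x + y) / 2 + random.gauss(0, 0.1) for x, y in zip(a, b)]
        nxt.append([max(0.0, w) for w in child])
    pop = nxt

best = min(pop, key=mmre)
print("best weights:", [round(w, 2) for w in best], "MMRE:", round(mmre(best), 3))
```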

159 citations


Journal ArticleDOI
TL;DR: The findings confirm that the critical factors for achieving trust initially in an outsourcing relationship include references from previous clients and the vendor's experience in outsourcing engagements, and suggest that trust is considered to be very fragile in outsourcing relationships.
Abstract: This paper investigates trust in software outsourcing relationships. The study is based on an empirical investigation of eighteen high maturity software vendor companies based in India. Our analysis of the literature suggests that trust has received a lot of attention in all kinds of business relationships. This includes inter-company relationships, whether cooperative ventures or subcontracting relationships, and relationships among different parts of a single company. However, trust has been relatively under-explored in software outsourcing relationships. In this paper, we present a detailed empirical investigation of trust in commercial software outsourcing relationships. The investigation presents what vendor companies perceive about gaining trust from client companies in outsourcing relationships. We present the results in two parts: (1) achieving trust initially in outsourcing relationships and (2) maintaining trust in ongoing outsourcing relationships. Our findings confirm that the critical factors for achieving trust initially in an outsourcing relationship include references from previous clients and the vendor's experience in outsourcing engagements. Critical factors identified for maintaining trust in an established outsourcing relationship include transparency, demonstrability, honesty, the process followed and commitment. Our findings also suggest that trust is considered to be very fragile in outsourcing relationships.

149 citations


Journal ArticleDOI
TL;DR: This paper proposes a novel data mining method, namely SMAP-Mine, that can efficiently discover mobile users' sequential movement patterns associated with requested services, together with corresponding prediction strategies.
Abstract: The development of wireless and web technologies has allowed mobile users to request various kinds of services from mobile devices at anytime and anywhere. Helping users obtain needed information effectively is an important issue in mobile web systems. Discovery of user behavior can highly benefit enhancements to system performance and quality of services. The mobile user's behavior patterns, in which location and service are inherently coexistent, are more complex than those of traditional web systems. In this paper, we propose a novel data mining method, namely SMAP-Mine, that can efficiently discover mobile users' sequential movement patterns associated with requested services. Moreover, the corresponding prediction strategies are also proposed. Through empirical evaluation under various simulation conditions, SMAP-Mine is shown to deliver excellent performance in terms of accuracy, execution efficiency and scalability. Meanwhile, the proposed prediction strategies are also verified to be effective in measurements of precision, hit ratio and applicability.
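SMAP-Mine itself is not reproduced here, but the flavor of mining location-coupled service patterns can be sketched in a few lines of Python (hypothetical sessions; simple consecutive-pair counting stands in for the full sequential pattern mining and prediction machinery).

```python
from collections import Counter, defaultdict

# Hypothetical mobile sessions: sequences of (location, requested service) events.
sessions = [
    [("station", "news"), ("office", "mail"), ("cafe", "maps")],
    [("station", "news"), ("office", "mail"), ("office", "calendar")],
    [("home", "weather"), ("station", "news"), ("office", "mail")],
]

# Count consecutive event pairs (a simple stand-in for sequential pattern mining).
bigrams = Counter()
for s in sessions:
    for cur, nxt in zip(s, s[1:]):
        bigrams[(cur, nxt)] += 1

# Prediction: given the current (location, service), suggest the most frequent successor.
successors = defaultdict(Counter)
for (cur, nxt), n in bigrams.items():
    successors[cur][nxt] = n

current = ("station", "news")
prediction, support = successors[current].most_common(1)[0]
print(f"after {current}, predict {prediction} (support {support})")
```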

Journal ArticleDOI
TL;DR: This paper presents a tool-supported approach for use-case-based requirements engineering that includes use case formalization, a restricted form of natural language for use case description, and the derivation of an executable specification as well as a simulation environment from use cases.
Abstract: Use cases that describe possible interactions involving a system and its environment are increasingly being accepted as an effective means for functional requirements elicitation and analysis. In current practice, informal definitions of use cases are used and the analysis process is manual. In this paper, we present a tool-supported approach for use-case-based requirements engineering. Our approach includes use case formalization, a restricted form of natural language for use case description, and the derivation of an executable specification as well as a simulation environment from use cases.

Journal ArticleDOI
TL;DR: A structured review of typical software effort estimation terminology in software engineering textbooks and software estimation research papers provides evidence that the term 'effort estimate' is frequently used without sufficient clarification of its meaning, and that estimation accuracy is often evaluated without ensuring that the estimated and the actual effort are comparable.
Abstract: It is well documented that the software industry suffers from frequent cost overruns. A contributing factor is, we believe, the imprecise estimation terminology in use. A lack of clarity and precision in the use of estimation terms reduces the interpretability of estimation accuracy results, makes the communication of estimates difficult, and reduces the opportunities for learning. This paper reports on a structured review of typical software effort estimation terminology in software engineering textbooks and software estimation research papers. The review provides evidence that the term 'effort estimate' is frequently used without sufficient clarification of its meaning, and that estimation accuracy is often evaluated without ensuring that the estimated and the actual effort are comparable. Guidelines are suggested on how to reduce this lack of clarity and precision in terminology.

Journal ArticleDOI
TL;DR: The impact of knowledge brokers and their associated activities in open source projects is investigated using three Debian lists as a case study; social network analysis was used to visualize how participants are affiliated with the lists.
Abstract: Much research on open source software development concentrates on developer lists and other software repositories to investigate what motivates professional software developers to participate in open source software projects. Little attention has been paid to individuals who spend valuable time in lists helping participants with some mundane yet vital project activities. Using three Debian lists as a case study, we investigate the impact of knowledge brokers and their associated activities in open source projects. Social network analysis was used to visualize how participants are affiliated with the lists. The network topology reveals substantial community participation. The consequences of collaborating in mundane activities for the success of open source software projects are discussed. A direct benefit of this research is the identification of knowledge experts in open source software projects.
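A sketch of the affiliation-network step using the networkx library (hypothetical participants and lists; the study's actual data and measures may differ): participants who bridge multiple lists stand out on betweenness centrality, one common way to operationalize a knowledge-broker role.

```python
import networkx as nx

# Hypothetical affiliations: participants posting to Debian mailing lists.
posts = [("alice", "debian-user"), ("alice", "debian-devel"),
         ("bob", "debian-user"), ("carol", "debian-mentors"),
         ("alice", "debian-mentors"), ("bob", "debian-devel")]

G = nx.Graph()
G.add_edges_from(posts)

# Participants who bridge several lists score highest on betweenness centrality,
# a common proxy for knowledge-broker roles in affiliation networks.
centrality = nx.betweenness_centrality(G)
people = [n for n in G if not n.startswith("debian-")]
for person in sorted(people, key=centrality.get, reverse=True):
    print(person, round(centrality[person], 3))
```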

Journal ArticleDOI
TL;DR: The findings indicate that software processes can also be established successfully at low cost in this specific kind of organisation, taking into account the typical business models, goals and characteristics of small organisations.
Abstract: In order to guide the tailoring of existing approaches for the establishment of software processes in small companies, we report our experiences of defining and implementing software processes in two small software companies. The paper describes the principal steps performed and presents information on costs and duration. We analyse if and how process guides are used, their impact, and how they are improved. Our findings indicate that software processes can also be established successfully at low cost in this specific kind of organisation, taking into account the typical business models, goals and characteristics of small organisations.

Journal ArticleDOI
TL;DR: This paper presents a new type of pattern called a bridging pattern, which extends interaction design patterns by adding information on how to implement the pattern in general terms.
Abstract: Adding usability-improving solutions during late-stage development is to some extent restricted by the software architecture. However, few software engineers and human-computer interaction engineers are aware of this important constraint, and as a result avoidable rework is frequently necessary. In this paper we present a new type of pattern called a bridging pattern. Bridging patterns extend interaction design patterns by adding information on how to implement the pattern in general terms. Bridging patterns can be used for architectural analysis: when the generic implementation is known, software architects can assess what it means in their context and can decide whether they need to modify the software architecture to support these patterns. This may prevent part of the high costs incurred by adaptive maintenance activities once the system has been implemented, and leads to architectures with better support for usability.

Journal ArticleDOI
TL;DR: A survey of users and managers in Taiwan was conducted to test a model derived from social capital theory, and the data support the positive relationships between group cohesion and both willingness to participate and commitment to learning.
Abstract: Enterprise Resource Planning systems present unique difficulties in implementation in that they typically involve changes to the entire organization and are a novel application for the organization. These characteristics add to the importance of making groups more cohesive in their goals, commitment, and ability to work toward completion of the new system project. Such cohesiveness is built partly through the willingness of the team members to participate and commitment to learning the new system. To determine if these relationships hold, a survey of users and managers in Taiwan was conducted to test a model derived from social capital theory. The data support the positive relationships between group cohesion and both willingness to participate and commitment to learning. Group cohesion is likewise positively related to meeting management goals. Resources within an organization should support the climate of learning and the building of team participation.

Journal ArticleDOI
TL;DR: A new approach utilizing program dependence analysis techniques and genetic algorithms (GAs) to generate test data is presented, and experiments show its effectiveness and efficiency based upon established criteria.
Abstract: The complexity of software systems has been increasing dramatically in the past decade, and software testing as a labor-intensive component is becoming more and more expensive. Testing costs often account for up to 50% of the total expense of software development; hence any techniques leading to the automatic generation of test data will have great potential to considerably reduce costs. Existing approaches to automatic test data generation have achieved some success by using evolutionary computation algorithms, but they are unable to deal with Boolean variables or enumerated types and need to be improved in many other aspects. This paper presents a new approach utilizing program dependence analysis techniques and genetic algorithms (GAs) to generate test data. A set of experiments using the new approach is reported to show its effectiveness and efficiency based upon established criteria.
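Search-based test data generation of this kind is typically driven by a branch-distance fitness function. The following self-contained Python sketch (a generic illustration, not the paper's method; the program dependence analysis step is omitted) evolves integer inputs toward a hypothetical target branch `x == 2 * y and y > 50`.

```python
import random

random.seed(0)

def branch_distance(x, y):
    """How far the input is from taking the target branch `x == 2 * y and y > 50`."""
    return abs(x - 2 * y) + max(0, 51 - y)

# Evolve inputs until the target branch is reached (distance 0).
pop = [(random.randint(-1000, 1000), random.randint(-1000, 1000)) for _ in range(40)]
for gen in range(500):
    pop.sort(key=lambda t: branch_distance(*t))
    if branch_distance(*pop[0]) == 0:
        break
    survivors = pop[:10]  # elitist selection on branch distance
    pop = survivors + [
        (random.choice(survivors)[0] + random.randint(-20, 20),
         random.choice(survivors)[1] + random.randint(-20, 20))
        for _ in range(30)
    ]

print("generation", gen, "-> test input", pop[0])
```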

Journal ArticleDOI
TL;DR: An innovative coverage-based program prioritization algorithm, a novel path selection algorithm that takes into consideration program priority and functional calling relationship, and a constraint solver for test data generation that derives constraints from bytecode and solves complex constraints involving strings and dynamic objects are presented.
Abstract: Most automatic test generation research focuses on the generation of test data from pre-selected program paths, input domains or program specifications. This paper presents a methodology for a full solution to code-coverage-based test case generation, which includes code-coverage-based path selection, test data generation and actual test case representation in the program's original language. We implemented this method in an automatic testing framework, eXVantage. Experimental results and industrial trials show that the framework is able to generate tests to achieve program line coverage from 20% to 98% with reduced overall testing effort. Our major contributions include an innovative coverage-based program prioritization algorithm, a novel path selection algorithm that takes into consideration program priority and functional calling relationships, and a constraint solver for test data generation that derives constraints from bytecode and solves complex constraints involving strings and dynamic objects.
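The constraint-solving step can be illustrated with an off-the-shelf SMT solver. Below is a sketch using the z3 Python bindings on a hypothetical path condition mixing integer arithmetic with a string-length constraint; this is not eXVantage's own solver, and the condition is invented for illustration.

```python
from z3 import Int, String, Length, Solver, sat

# Hypothetical path condition collected along one program path:
#   (n > 10) and (name is 5 characters long) and (n == 2 * len(name) + 3)
n = Int("n")
name = String("name")

s = Solver()
s.add(n > 10, Length(name) == 5, n == 2 * Length(name) + 3)

if s.check() == sat:
    m = s.model()
    print("test data:", m[n], m[name])  # e.g. n = 13 and some 5-character string
else:
    print("path is infeasible")
```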

Journal ArticleDOI
TL;DR: Web Services support for the dynamic process outsourcing paradigm is analyzed; the framework is found to require further support for cross-organizational business processes and mechanisms for contracting, QoS management and transaction management, and an approach to fill these gaps is suggested.
Abstract: Outsourcing of business processes is crucial for organizations to be effective, efficient and flexible. In fast changing markets, dynamic outsourcing is required, in which business relationships are established and enacted on-the-fly in an adaptive, fine-grained way. This requires automated means for the establishment of outsourcing relationships and for the enactment of services performed in these relationships. Due to wide industry support and their model of loose coupling, Web Services have become the mechanism of choice to interconnect organizations. This paper analyzes Web Services support for the dynamic process outsourcing paradigm. We discuss contract-based outsourcing to define requirements, introduce the Web Services framework and investigate the match between the two. We observe that the framework requires further support for cross-organizational business processes and mechanisms for contracting, QoS management and transaction management. We suggest an approach to fill these gaps based on a business process support application layer implemented on Web Service technology.

Journal ArticleDOI
TL;DR: An evaluation of a program in which low-rigour, one-day SPI assessments were offered at no cost to 22 small Australian software development firms revealed that the process improvement program was effective in improving the process capability of 15 of these smallSoftware development firms.
Abstract: With increasing interest by the software development community in software process improvement (SPI), it is vital that SPI programs are evaluated and the reports of lessons learned disseminated. This paper presents an evaluation of a program in which low-rigour, one-day SPI assessments were offered at no cost to 22 small Australian software development firms. The assessment model was based on ISO/IEC 15504 (SPICE). About 12 months after the assessment, the firms were contacted to arrange a follow-up meeting to determine the extent to which they had implemented the recommendations from the assessment. Comparison of the process capability levels at the time of assessment and the follow-up meetings revealed that the process improvement program was effective in improving the process capability of 15 of these small software development firms. Analysis of the assessment and follow-up reports explored important issues relating to SPI: elapsed time from assessment to follow-up meeting, the need for mentoring, the readiness of firms for SPI, the role of the owner/manager, the advice provided by the assessors, and the need to record costs and benefits. Based on an analysis of the program and its outcomes, firms are warned not to undertake SPI if their operation is likely to be disrupted by events internal to the firm or in the external environment. Firms are urged to draw on the expertise of assessors and consultants as mentors, and to ensure the action plan from the assessment is feasible in terms of the timeframe for evaluation. The RAPID method can be improved by fostering a closer relationship between the assessor and the firm sponsor; by making more extensive use of feedback questionnaires after the assessment and follow-up meeting; by facilitating the collection and reporting of cost benefit metrics; and by providing more detailed guidance for the follow-up meeting. As well as providing an evaluation of the assessment model and method, the outcomes from this research have the potential to better equip practitioners and consultants to undertake software process improvement, hence increasing the success of small software development firms in domestic and global markets.

Journal ArticleDOI
TL;DR: This paper proposes an interactive service customization model to support individual service offerings for customers and presents in detail a knowledge-based customizable service process model and the accompanying customization method.
Abstract: Mass customization has become one of the key strategies for a service provider to differentiate itself from its competitors in a highly segmented global service market. This paper proposes an interactive service customization model to support individual service offerings for customers. In this model, not only is the content of an activity customizable, but the process model can also be constructed dynamically according to the customer's requirements. Based on a goal ontology, the on-demand customer requirements are transformed into a high-level service process model. Process components, which are building blocks for reusable standardized service processes, are designed to support on-demand process composition. The customer can incrementally define the customized service process through a series of operations, including activation of goal decomposition, reusable component selection, and process composition. In this paper, we first discuss the key requirements of the service customization problem. We then present in detail a knowledge-based customizable service process model and the accompanying customization method. Finally, we demonstrate the feasibility of our approach through a case study of the well-known travel planning problem and present a prototype system that enables users to interactively organize a satisfying travel plan.

Journal ArticleDOI
TL;DR: This paper presents a process based on the B refinement technique for the derivation of a SQL relational implementation, embedded in the JAVA language (JAVA/SQL), from a B specification obtained by the first translation phase.
Abstract: This paper presents a formal approach for the development of trustworthy database applications. This approach consists of three complementary steps. Designers start by modeling applications using UML diagrams dedicated to database applications domain. These diagrams are then automatically translated into B specifications suitable not only for reasoning about data integrity checking but also for the derivation of trustworthy implementations. In this paper, we present a process based on the B refinement technique for the derivation of a SQL relational implementation, embedded in the JAVA language (JAVA/SQL), from a B specification obtained by the first translation phase.

Journal ArticleDOI
TL;DR: A model is developed that directly relates management control to the quality of interaction and project success, with interaction quality as a potential intermediary; the model provides guidelines for managers in controlling the critical relations between users and IS personnel.
Abstract: Research has failed to establish a conclusive link between levels of user involvement and information system project success. Communication and control theories indicate that the quality of interactions between users and information systems personnel may serve to improve coordination in a project and lead to greater success. A model is developed that directly relates management control to the quality of interaction and project success, with interaction quality as a potential intermediary. These variables provide a more distinct relationship to success, as interactions are more structurally defined and controlled. A survey of 196 IS professionals provides evidence that management control techniques improve the quality of user-IS personnel interactions and eventual project success. These formal structures provide guidelines for managers in controlling the critical relations between users and IS personnel.

Journal ArticleDOI
TL;DR: A preliminary study splitting the pair programming process into a pair design and a pair implementation phase suggests that there is no difference in development cost between a pair and a solo implementation phase when comparing the cost of developing programs at a similar level of correctness.
Abstract: The drawback of pair programming is the nearly doubled personnel cost. The extra cost of pair programming originates from the strict rule of extreme programming where every line of code should be developed by a pair of developers. Is this rule not a waste of resources? Is it not possible to gain a large portion of the benefits of pair programming with only a small fraction of the meeting time of a pair programming session? We conducted a preliminary study to answer this question by splitting the pair programming process into a pair design and a pair implementation phase. The pair implementation phase is compared to a solo implementation phase, which in turn was preceded by a pair design phase as well. The study is preliminary, as its major goal was to identify an appropriate sample size for subsequent experiments. The data from this study suggest that there is no difference in development cost between a pair and a solo implementation phase when comparing the cost of developing programs at a similar level of correctness.

Journal ArticleDOI
TL;DR: Some of the factors and constraints that influence time-to-market when software is developed across time zones are identified, and a model of the relationships between development time and the factors and overheads associated with such a pattern of work is described.
Abstract: Economic factors and the World Wide Web are turning software usage and its development into global activities. Many benefits accrue from global development, not least the opportunity to reduce time-to-market through 'around the clock' working. This paper identifies some of the factors and constraints that influence time-to-market when software is developed across time zones. It describes a model of the relationships between development time and the factors and overheads associated with such a pattern of work. The paper also reports on a small-scale empirical study of software development across time zones and presents some lessons learned and conclusions drawn from the theoretical and empirical work carried out.
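The paper's model is not reproduced in the abstract, but the basic trade-off can be illustrated with a back-of-the-envelope calculation (all parameters hypothetical): relay-style 'around the clock' work multiplies the productive hours per calendar day, while each handoff between sites subtracts an overhead.

```python
# A minimal sketch of the time-to-market trade-off, under assumed parameters:
# total work, number of sites working in relay, and a per-handoff overhead.
def elapsed_days(total_work_hours, sites, hours_per_site_day, handoff_overhead_hours):
    """Calendar days to finish when each day is a relay across `sites` time zones."""
    effective_per_day = sites * hours_per_site_day - sites * handoff_overhead_hours
    return total_work_hours / effective_per_day

single_site = elapsed_days(800, 1, 8, 0.0)
follow_the_sun = elapsed_days(800, 3, 8, 1.5)  # 1.5 hours lost per daily handoff
print(f"single site: {single_site:.0f} days, three sites: {follow_the_sun:.0f} days")
```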

Journal ArticleDOI
TL;DR: This paper presents an ontology-based approach for context reconciliation that focuses on the security breaches that threaten the integrity of the context of Web services, and proposes appropriate means to achieve this integrity.
Abstract: With the increasing popularity of Web services and the increasing complexity of satisfying users' needs, there has been a renewed interest in Web services composition. Composition addresses the case of a user request that cannot be satisfied by any available Web service, whereas a composite service obtained by integrating Web services might be used. Because Web services originate from different providers, their composition faces the obstacle of the context heterogeneity of Web services. Unawareness or poor consideration of this heterogeneity during Web services composition and execution results in a lack of the quality and relevancy of information needed to track the composition, monitor the execution, and handle exceptions. This paper presents an ontology-based approach for context reconciliation. The approach also focuses on the security breaches that threaten the integrity of the context of Web services, and proposes appropriate means to achieve this integrity.

Journal ArticleDOI
TL;DR: Although many obstacles have to be addressed, the results indicate that the approach is a viable way to manage DSD during very demanding circumstances.
Abstract: This paper presents an approach for Distributed Software Development (DSD) that is based on two foundations. The first one is an integration centric engineering process, which aims at managing crucial dependencies in DSD projects. The second foundation is a strategy for operationalizing the coordination of the engineering process. The purpose of this strategy is to simultaneously provide global information system support for coordination and achieve common understanding about what should be coordinated and how. The approach has been successfully used at Ericsson, a major supplier of telecommunication systems worldwide, for coordinating extraordinary complex projects developing nodes in the third generation of mobile systems. Although many obstacles have to be addressed, the results indicate that the approach is a viable way to manage DSD during very demanding circumstances.

Journal ArticleDOI
TL;DR: This paper presents a moderated fuzzy web service discovery approach to model subjective and fuzzy opinions and to assist service consumers and providers in reaching a consensus on their distinct opinions and expectations.
Abstract: Web services are used for developing and integrating highly distributed and heterogeneous systems in different domains such as e-business, grid services, and e-government systems. Web services discovery is a key to dynamically locating desired web services across the Internet. The prevailing research trend is to dynamically discover and compose web services in order to develop composite services that provide enhanced functionality. Existing discovery techniques do not take into account the diverse preferences and expectations of service consumers and providers, which are generally used for searching or advertising web services. This paper presents a moderated fuzzy web service discovery approach to model subjective and fuzzy opinions and to assist service consumers and providers in reaching a consensus. The method achieves a common consensus from the distinct opinions and expectations of service consumers and providers. This process is iterative, such that further fuzzy opinions and preferences can be added to improve the precision of web service discovery. The proposed method is implemented as a prototype system and is tested through various experiments. Experimental results demonstrate the effectiveness of the proposed method.
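As a hedged illustration of what moderating fuzzy opinions can look like (not the paper's actual method; the attribute and values are invented): triangular fuzzy numbers expressing opinions about a QoS attribute are averaged component-wise into a consensus and then defuzzified by centroid.

```python
# Aggregating triangular fuzzy opinions about a QoS attribute (acceptable
# response time, in seconds) into a single consensus value. Illustrative only.
def centroid(tfn):
    """Defuzzify a triangular fuzzy number (low, peak, high) by its centroid."""
    low, peak, high = tfn
    return (low + peak + high) / 3

# Fuzzy opinions from two consumers and one provider, as (low, peak, high).
opinions = [(0.5, 1.0, 2.0), (0.8, 1.5, 2.5), (1.0, 2.0, 4.0)]

# Consensus as the component-wise mean of the fuzzy numbers (one simple moderation step).
consensus = tuple(sum(o[i] for o in opinions) / len(opinions) for i in range(3))
print("consensus TFN:", consensus, "-> crisp value:", round(centroid(consensus), 2))
```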

Journal ArticleDOI
TL;DR: This paper addresses the graphical representation of the behaviour of B specifications using state transition diagrams, which can help stakeholders who are not familiar with the B method, such as customers or certification authorities, understand the specification.
Abstract: This paper addresses the graphical representation of the behaviour of B specifications, using state transition diagrams. These diagrams can help stakeholders who are not familiar with the B method, such as customers or certification authorities, understand the specification. The paper first discusses the principles of the graphical representation on a deterministic example featuring a small set of states. It then discusses the representation of specifications which feature a large or infinite set of states, or which are non-deterministic. Abstraction techniques are used to overcome these difficulties. They result in a variety of possible representations. Finally, three techniques, based on animation and proof, are presented to help construct the diagrams.
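The animation-based construction can be sketched generically in Python (a toy bounded counter stands in for a B machine; this is an illustration, not the paper's tool): breadth-first "animation" of guarded operations enumerates the reachable states and emits the transitions of the diagram.

```python
from collections import deque

# Toy deterministic specification: a counter bounded at 3, with guarded
# inc/dec operations. Each operation is a (guard, effect) pair on the state.
operations = {
    "inc": (lambda n: n < 3, lambda n: n + 1),
    "dec": (lambda n: n > 0, lambda n: n - 1),
}

initial = 0
edges, seen, queue = [], {initial}, deque([initial])
while queue:
    state = queue.popleft()
    for name, (guard, effect) in operations.items():
        if guard(state):  # animate only enabled operations
            nxt = effect(state)
            edges.append((state, name, nxt))
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)

# Print the state transition diagram as labelled edges.
for src, op, dst in edges:
    print(f"{src} --{op}--> {dst}")
```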