
Showing papers in "IEEE Transactions on Services Computing in 2010"


Journal ArticleDOI
TL;DR: A process and a suitable system architecture are proposed that enable developers and business process designers to dynamically query, select, and use running instances of real-world services (i.e., services running on physical devices) or even deploy new ones on-demand, all in the context of composite, real-world business applications.
Abstract: The increasing usage of smart embedded devices in business blurs the line between the virtual and real worlds. This creates new opportunities to build applications that better integrate the real-time state of the physical world, and hence provides enterprise services that are highly dynamic, more diverse, and efficient. Service-Oriented Architecture (SOA) approaches, traditionally used to couple functionality of heavyweight corporate IT systems, are becoming applicable to embedded real-world devices, i.e., objects of the physical world that feature embedded processing and communication. In such infrastructures, composed of large numbers of networked, resource-limited devices, the discovery of services and on-demand provisioning of missing functionality is a significant challenge. We propose a process and a suitable system architecture that enables developers and business process designers to dynamically query, select, and use running instances of real-world services (i.e., services running on physical devices) or even deploy new ones on-demand, all in the context of composite, real-world business applications.

637 citations


Journal ArticleDOI
TL;DR: This paper presents decision models to optimally allocate source servers to physical target servers while considering real-world constraints and presents a heuristic to address large-scale server consolidation projects.
Abstract: Today's data centers offer IT services mostly hosted on dedicated physical servers. Server virtualization provides a technical means for server consolidation: multiple virtual servers can be hosted on a single physical server. Server consolidation describes the process of combining the workloads of several different servers on a set of target servers. We focus on server consolidation with dozens or hundreds of servers, as regularly found in enterprise data centers. Cost saving is among the key drivers for such projects. This paper presents decision models to optimally allocate source servers to physical target servers while considering real-world constraints. Our central model is proven to be an NP-hard problem. Therefore, besides an exact solution method, a heuristic is presented to address large-scale server consolidation projects. In addition, a preprocessing method for server load data is introduced, allowing for the consideration of quality-of-service levels. Extensive experiments were conducted on a large set of server load data from a data center provider, focusing on managerial concerns about what types of problems can be solved. Results show that, on average, server savings of 31 percent can be achieved simply by taking cycles in the server workload into account.
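At its core, the allocation problem resembles multidimensional bin packing. Below is a minimal sketch of the kind of heuristic the abstract alludes to (not the paper's exact decision model; the server names, demand profiles, and single CPU capacity dimension are hypothetical): a first-fit-decreasing placement that exploits complementary workload cycles by checking the element-wise sum of demand profiles against target capacity rather than the sum of peaks.

```python
from typing import Dict, List

def consolidate(profiles: Dict[str, List[float]], capacity: float) -> Dict[int, List[str]]:
    """Greedily pack source servers onto identical target servers.

    profiles: per-source demand profile (e.g., hourly CPU utilization), hypothetical data.
    capacity: CPU capacity of each target server.
    Returns a mapping: target index -> list of assigned source servers.
    """
    # First-fit decreasing: place the sources with the largest peaks first.
    order = sorted(profiles, key=lambda s: max(profiles[s]), reverse=True)
    targets: List[List[float]] = []            # aggregated demand profile per target
    assignment: Dict[int, List[str]] = {}

    for src in order:
        placed = False
        for t, load in enumerate(targets):
            # Complementary workload cycles fit together: check the element-wise
            # sum of the profiles, not the sum of the peaks.
            if all(l + d <= capacity for l, d in zip(load, profiles[src])):
                targets[t] = [l + d for l, d in zip(load, profiles[src])]
                assignment[t].append(src)
                placed = True
                break
        if not placed:
            targets.append(list(profiles[src]))
            assignment[len(targets) - 1] = [src]
    return assignment

# A daytime web server and a nightly batch server share one target;
# the steady database server needs its own.
demo = {"web": [0.7, 0.2], "batch": [0.2, 0.7], "db": [0.6, 0.6]}
print(consolidate(demo, capacity=1.0))   # {0: ['web', 'batch'], 1: ['db']}
```

Servers whose peaks occur at different times can then share one target, which is where the reported savings from considering workload cycles come from.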

359 citations


Journal ArticleDOI
TL;DR: This paper addresses the issue of selecting and composing Web services not only according to their functional requirements but also according to their transactional properties and QoS characteristics, by proposing a selection algorithm that satisfies the user's preferences, expressed as weights over QoS criteria and as risk levels that semantically define the transactional requirements.
Abstract: Web services are the best-known implementation of service-oriented architectures and have brought with them some challenging research issues. One of these is composition, i.e., the capability to recursively construct a composite Web service as a workflow of other existing Web services, which are developed by different organizations and offer diverse functionalities (e.g., ticket purchase, payment), transactional properties (e.g., compensatable or not), and Quality of Service (QoS) values (e.g., execution price, success rate). Selecting, for each activity of the workflow, a Web service that meets the user's requirements is still an important challenge. Indeed, selecting one Web service from a set that fulfills the required functionality is a critical task, generally depending on a combined evaluation of QoS. However, conventional QoS-aware composition approaches do not consider transactional constraints during the composition process. This paper addresses the issue of selecting and composing Web services not only according to their functional requirements but also according to their transactional properties and QoS characteristics. We propose a selection algorithm that satisfies the user's preferences, expressed as weights over QoS criteria and as risk levels that semantically define the transactional requirements. Proofs and experimental results are presented.
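As an illustration of how transactional and QoS criteria can be combined during selection, here is a minimal sketch (the candidate services, their properties, and the risk-to-transactional-property mapping are hypothetical, and the paper's actual algorithm covers whole workflows, not a single activity): candidates are first filtered by the transactional properties allowed at the chosen risk level, then scored with user-supplied weights over normalized QoS values.

```python
def normalize(values, higher_is_better):
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    return [(v - lo) / (hi - lo) if higher_is_better else (hi - v) / (hi - lo)
            for v in values]

def select(candidates, weights, allowed_tx):
    """Pick one service for a workflow activity.
    candidates: dicts with 'name', 'tx' (transactional property), 'price'
    (lower is better), 'success_rate' (higher is better) -- hypothetical schema."""
    pool = [c for c in candidates if c["tx"] in allowed_tx]   # transactional filter first
    if not pool:
        return None
    price = normalize([c["price"] for c in pool], higher_is_better=False)
    succ = normalize([c["success_rate"] for c in pool], higher_is_better=True)
    scores = [weights["price"] * p + weights["success_rate"] * s
              for p, s in zip(price, succ)]
    return pool[max(range(len(pool)), key=scores.__getitem__)]["name"]

payment_services = [
    {"name": "payA", "tx": "compensatable", "price": 2.0, "success_rate": 0.97},
    {"name": "payB", "tx": "pivot",         "price": 1.0, "success_rate": 0.99},
]
# A risk level requiring compensatable outcomes excludes payB despite its better QoS.
print(select(payment_services, {"price": 0.4, "success_rate": 0.6},
             allowed_tx={"compensatable"}))   # payA
```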

325 citations


Journal ArticleDOI
TL;DR: FACTS, a framework for fault-tolerant composition of transactional Web services, is proposed; it combines exception handling and transaction techniques and includes an implementation module that automatically implements fault-handling logic in WS-BPEL.
Abstract: Along with the standardization of Web services composition language and the widespread acceptance of composition technologies, Web services composition is becoming an efficient and cost-effective way to develop modern business applications. As Web services are inherently unreliable, how to deliver reliable Web services composition over unreliable Web services is a significant and challenging problem. In this paper, we propose FACTS, a framework for fault-tolerant composition of transactional Web services. We identify a set of high-level exception handling strategies and a new taxonomy of transactional Web services to devise a fault-tolerant mechanism that combines exception handling and transaction techniques. We also devise a specification module and a verification module to assist service designers to construct fault-handling logic conveniently and correctly. Furthermore, we design an implementation module to automatically implement fault-handling logic in WS-BPEL. A case study demonstrates the viability of our framework and experimental results show that FACTS can improve fault tolerance of composite services with acceptable overheads.

148 citations


Journal ArticleDOI
TL;DR: This work presents a novel concept, called p-dominant service skyline, which provides an integrated solution to tackle the above two issues simultaneously, and presents a p-R-tree indexing structure and a dual-pruning scheme to efficiently compute it.
Abstract: The performance of a service provider may fluctuate due to the dynamic service environment. Thus, the quality of service actually delivered by a service provider is inherently uncertain. Existing service optimization approaches usually assume that the quality of service does not change over time. Moreover, most of these approaches rely on computing a predefined objective function. When multiple quality criteria are considered, users are required to express their preference over different (and sometimes conflicting) quality attributes as numeric weights. This is rather a demanding task and an imprecise specification of the weights could miss user-desired services. We present a novel concept, called p-dominant service skyline. A provider S belongs to the p-dominant skyline if the chance that S is dominated by any other provider is less than p. Computing the p-dominant skyline provides an integrated solution to tackle the above two issues simultaneously. We present a p-R-tree indexing structure and a dual-pruning scheme to efficiently compute the p-dominant skyline. We assess the efficiency of the proposed algorithm with an analytical study and extensive experiments.
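A naive way to compute the concept, assuming each provider's uncertain QoS is available as a set of past observations (hypothetical data below): estimate, for every pair of providers, the probability that one dominates the other from sample pairs, and keep the providers whose probability of being dominated stays below p. The paper's p-R-tree index and dual-pruning scheme exist precisely to avoid this quadratic enumeration.

```python
from itertools import product

def dominates(a, b):
    """a dominates b if a is no worse in every criterion and strictly better in one
    (lower-is-better for all criteria in this sketch)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def dominated_prob(samples_t, samples_s):
    """Estimate Pr[T dominates S] from observed QoS samples (hypothetical data)."""
    pairs = list(product(samples_t, samples_s))
    return sum(dominates(t, s) for t, s in pairs) / len(pairs)

def p_dominant_skyline(providers, p):
    """Naive computation; the paper's index and pruning avoid most comparisons."""
    result = []
    for name_s, samples_s in providers.items():
        if all(dominated_prob(samples_t, samples_s) < p
               for name_t, samples_t in providers.items() if name_t != name_s):
            result.append(name_s)
    return result

# QoS samples as (latency, price) observations, lower is better.
qos = {
    "A": [(120, 1.0), (130, 1.1)],
    "B": [(100, 0.8), (400, 0.9)],   # fluctuating latency
    "C": [(300, 2.0), (310, 2.1)],
}
print(p_dominant_skyline(qos, p=0.3))   # ['B']
```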

140 citations


Journal ArticleDOI
TL;DR: A methodology for ranking the relevant services for a given request is proposed, introducing objective measures based on dominance relationships defined among the services, and methods for clustering the relevant services in a way that reveals and reflects the different trade-offs between the matched parameters are investigated.
Abstract: As the web is increasingly used not only to find answers to specific information needs but also to carry out various tasks, enhancing the capabilities of current web search engines with effective and efficient techniques for web service retrieval and selection becomes an important issue. Existing service matchmakers typically determine the relevance between a web service advertisement and a service request by computing an overall score that aggregates individual matching scores among the various parameters in their descriptions. Two main drawbacks characterize such approaches. First, there is no single matching criterion that is optimal for determining the similarity between parameters. Instead, there are numerous approaches ranging from Information Retrieval similarity measures up to semantic logic-based inference rules. Second, the reduction of individual scores to an overall similarity leads to significant information loss. Determining appropriate weights for these intermediate scores requires knowledge of user preferences, which is often not possible or easy to acquire. Instead, using a typical aggregation function, such as the average or the minimum of the degrees of match across the service parameters, introduces undesired bias, which often reduces the accuracy of the retrieval process. Consequently, several services, e.g., those having a single unmatched parameter, may be excluded from the result set, while being potentially good candidates. In this work, we present two complementary approaches that overcome the aforementioned deficiencies. First, we propose a methodology for ranking the relevant services for a given request, introducing objective measures based on dominance relationships defined among the services. Second, we investigate methods for clustering the relevant services in a way that reveals and reflects the different trade-offs between the matched parameters. We demonstrate the effectiveness and the efficiency of our proposed techniques and algorithms through extensive experimental evaluation on both real requests and relevance sets, as well as on synthetic scenarios.
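One simple dominance-based ranking in the spirit of the abstract (not the paper's exact measures; the services and per-parameter matching scores are hypothetical): rank each service by how many competitors it dominates minus how many dominate it, so that no weights over the individual parameter scores are needed.

```python
def dominates(a, b):
    """a dominates b if a's matching score is >= b's for every parameter
    and strictly greater for at least one (higher score = better match)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def dominance_rank(match_scores):
    """Rank services by (number dominated) - (number dominating them)."""
    ranked = []
    for s, scores_s in match_scores.items():
        dominated = sum(dominates(scores_s, o) for t, o in match_scores.items() if t != s)
        dominating = sum(dominates(o, scores_s) for t, o in match_scores.items() if t != s)
        ranked.append((dominated - dominating, s))
    return [name for _, name in sorted(ranked, reverse=True)]

# Per-parameter degrees of match (input, output, category), hypothetical values.
scores = {
    "flightBooker": (0.9, 0.8, 1.0),
    "trainBooker":  (0.7, 0.9, 0.5),
    "carRental":    (0.5, 0.4, 0.6),
}
print(dominance_rank(scores))   # ['flightBooker', 'trainBooker', 'carRental']
```

Note that a service with a single weak parameter (here trainBooker's category score) is still ranked, rather than being excluded the way an aggregated score with unlucky weights might exclude it.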

137 citations


Journal ArticleDOI
TL;DR: This work enables progressive composition of non-Web-service-based components such as portlets, Web applications, native widgets, legacy systems, and Java Beans, and proposes a novel application of semantic annotation together with the standard semantic Web matching algorithm for finding sets of functionally equivalent components out of a large set of available non-Web-service components.
Abstract: The need for integration of all types of client and server applications that were not initially designed to interoperate is gaining popularity. One of the reasons for this popularity is the capability to quickly reconfigure a composite application for a task at hand, both by changing the set of components and the way they are interconnected. Service-Oriented Architecture (SOA) has recently become a popular platform in the IT industry for building such composite applications, with the integrated components being provided as Web services. A key limitation of solely Web-service-based integration is that it requires extra programming effort when integrating non-Web service components, which is not cost-effective. Moreover, with the emergence of new standards, such as the Open Service Gateway Initiative (OSGi), the components used in composite applications have grown to include more than just Web services. Our work enables progressive composition of non-Web-service-based components such as portlets, Web applications, native widgets, legacy systems, and Java Beans. Further, we propose a novel application of semantic annotation together with the standard semantic Web matching algorithm for finding sets of functionally equivalent components out of a large set of available non-Web-service-based components. Once such a set is identified, the user can drag and drop the most suitable component into an Eclipse-based composition canvas. After a set of components has been selected in this way, they can be connected by data-flow arcs, thus forming an integrated, composite application without any low-level programming and integration effort. We implemented and conducted an extensive experimental study of the above progressive composition framework on IBM's Lotus Expeditor, an extension of an SOA platform called the Eclipse Rich Client Platform (RCP) that complies with the OSGi standard.

120 citations


Journal ArticleDOI
TL;DR: This paper presents the Vienna Runtime Environment for Service-Oriented Computing (VRESCo) that addresses issues of current web service technologies, with a special emphasis on service metadata, Quality of Service, service querying, dynamic binding, and service mediation.
Abstract: Service-Oriented Computing has recently received a lot of attention from both academia and industry. However, current service-oriented solutions are often not as dynamic and adaptable as intended because the publish-find-bind-execute cycle of the Service-Oriented Architecture triangle is not entirely realized. In this paper, we highlight some issues of current web service technologies, with a special emphasis on service metadata, Quality of Service, service querying, dynamic binding, and service mediation. Then, we present the Vienna Runtime Environment for Service-Oriented Computing (VRESCo) that addresses these issues. We give a detailed description of the different aspects by focusing on service querying and service mediation. Finally, we present a performance evaluation of the different components, together with an end-to-end evaluation to show the applicability and usefulness of our system.

108 citations


Journal ArticleDOI
TL;DR: This paper proposes a novel approach for querying and automatically composing Data-Providing services as RDF views over a mediated (domain) ontology, and proposes query rewriting algorithms for processing queries over DP services.
Abstract: Data-Providing (DP) services allow query-like access to organizations' data via web services. The invocation of a DP service results in the execution of a query over data sources. In most cases, users' queries require the composition of several services. In this paper, we propose a novel approach for querying and automatically composing DP services. The proposed approach largely draws from the experiences and lessons learned in the areas of service composition, ontology, and answering queries over views. First, we introduce a model for the description of DP services and specification of service-oriented queries. We model DP services as RDF views over a mediated (domain) ontology. Each RDF view contains concepts and relations from the mediated ontology to capture the semantic relationships between input and output parameters. Second, we propose query rewriting algorithms for processing queries over DP services. The query mediator automatically transforms a user's query (during the query rewriting stage) into a composition of DP services. Finally, we describe an implementation and provide a performance evaluation of the proposed approach.
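A heavily simplified sketch of view-based rewriting (the service descriptions below are hypothetical, and the sketch ignores join variables, the mediated ontology's semantics, and the containment checks a real rewriting algorithm needs): each DP service is reduced to the set of ontology predicates its RDF view covers plus its required inputs, and a query is rewritten by finding the smallest service combinations that cover all query predicates using only the available inputs.

```python
from itertools import combinations

# Hypothetical DP service descriptions: each service is an RDF view that covers a
# set of ontology predicates and requires some bound inputs.
SERVICES = {
    "S1_getPatient":   {"covers": {"hasName", "hasSSN"},         "inputs": {"ssn"}},
    "S2_getDiagnosis": {"covers": {"hasDiagnosis"},              "inputs": {"ssn"}},
    "S3_getTreatment": {"covers": {"hasDiagnosis", "treatedBy"}, "inputs": {"diagnosis"}},
}

def rewrite(query_predicates, provided_inputs):
    """Return the smallest service combinations whose views together cover all
    query predicates and whose inputs are available."""
    usable = {s: d for s, d in SERVICES.items() if d["inputs"] <= provided_inputs}
    for size in range(1, len(usable) + 1):
        plans = [combo for combo in combinations(usable, size)
                 if set().union(*(usable[s]["covers"] for s in combo)) >= query_predicates]
        if plans:
            return plans
    return []

# Query: "name and diagnosis of the patient with a given SSN".
print(rewrite({"hasName", "hasDiagnosis"}, provided_inputs={"ssn"}))
# [('S1_getPatient', 'S2_getDiagnosis')]
```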

106 citations


Journal ArticleDOI
TL;DR: This paper develops a BPEL ranking platform that finds, in a service repository, a set of service candidates satisfying user requirements and then ranks these candidates using a behavior-based similarity measure.
Abstract: Finding useful services is a challenging and important task in several applications. Current approaches to service retrieval are mostly limited to the matching of their inputs/outputs. In this paper, we argue that in several applications (services with multiple, dependent operations, and scientific workflows), service discovery should be based on the specification of service behavior. The underlying idea is to develop matching techniques that operate on behavior models and allow the delivery of approximate matches and the evaluation of the semantic distance between these matches and the user requirements. To do so, we reduce the problem of behavioral matching to a graph matching problem and adapt existing algorithms for this purpose. To validate our approach, we developed a BPEL ranking platform that finds, in a service repository, a set of service candidates satisfying user requirements and then ranks these candidates using a behavior-based similarity measure.
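To give a flavor of behavior-based matching, the sketch below reduces each behavior model to a labeled graph of activities and control-flow edges and blends node and edge overlap into a similarity score. The activity names are hypothetical, and this Jaccard-style measure is only a stand-in for the error-correcting graph matching algorithms the paper adapts.

```python
def behavior_graph(activities, edges):
    """A behavior model as a set of activity labels and label-level control-flow edges."""
    return set(activities), {(a, b) for a, b in edges}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 1.0

def behavioral_similarity(g1, g2, alpha=0.5):
    """Blend node-label overlap and edge overlap into one approximate match score."""
    nodes1, edges1 = g1
    nodes2, edges2 = g2
    return alpha * jaccard(nodes1, nodes2) + (1 - alpha) * jaccard(edges1, edges2)

# Hypothetical behavior models of two order-handling services.
requested = behavior_graph(
    ["receiveOrder", "checkStock", "charge", "ship"],
    [("receiveOrder", "checkStock"), ("checkStock", "charge"), ("charge", "ship")])
candidate = behavior_graph(
    ["receiveOrder", "charge", "ship"],
    [("receiveOrder", "charge"), ("charge", "ship")])

print(round(behavioral_similarity(requested, candidate), 3))   # 0.5
```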

93 citations


Journal ArticleDOI
TL;DR: Existing notions of cohesion in the Procedural and OO paradigms are extended in order to account for the unique characteristics of SOC, thereby supporting the derivation of design-level software metrics for objectively quantifying the degree of service cohesion.
Abstract: Service-Oriented Computing (SOC) is intended to improve software maintainability as businesses become more agile and underlying processes and rules change more frequently. However, to date, the impact of service cohesion on the analyzability subcharacteristic of maintainability has not been rigorously studied. Consequently, this paper extends existing notions of cohesion in the Procedural and OO paradigms in order to account for the unique characteristics of SOC, thereby supporting the derivation of design-level software metrics for objectively quantifying the degree of service cohesion. The metrics are theoretically validated, and an initial empirical evaluation using a small-scale controlled study suggests that the proposed metrics could help predict analyzability early in the Software Development Life Cycle. If future industrial studies confirm these findings, the practical applicability of such metrics is to support the development of service-oriented systems that can be analyzed, and thus maintained, more easily. In addition, such metrics could help identify design problems in existing systems.
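As an illustration of a design-level cohesion indicator (not the paper's actual metric suite; the services and parameter types below are hypothetical), one can measure the fraction of operation pairs in a service interface that share at least one message or parameter type:

```python
from itertools import combinations

def interface_data_cohesion(operations):
    """operations: mapping operation name -> set of message/parameter types it uses.
    Returns the fraction of operation pairs sharing at least one type
    (an illustrative cohesion indicator, not the paper's exact metrics)."""
    pairs = list(combinations(operations.values(), 2))
    if not pairs:
        return 1.0
    return sum(bool(a & b) for a, b in pairs) / len(pairs)

# A cohesive ordering service versus a "utility" service mixing unrelated concerns.
order_svc = {
    "createOrder":    {"Order", "Customer"},
    "cancelOrder":    {"Order"},
    "getOrderStatus": {"Order", "Status"},
}
util_svc = {
    "convertCurrency": {"Money"},
    "validateAddress": {"Address"},
    "renderPdf":       {"Document"},
}
print(interface_data_cohesion(order_svc), interface_data_cohesion(util_svc))  # 1.0 0.0
```

A low value flags an interface whose operations have little to do with one another, which is the kind of design problem such metrics are meant to surface early.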

Journal ArticleDOI
TL;DR: ISM, a model for describing intentional services and populating the service registry with their descriptions, is presented, and a set of transformations is introduced to bridge the gap from the intentional level to the implementation level.
Abstract: Despite its growing acceptance, Service-Oriented Computing (SOC) remains a computing mechanism to speed up the design of software applications by assembling ready-made software services. We argue that it is difficult for business people to fully benefit from SOC if it remains at the software level. The paper proposes a move toward a description of services in business terms, i.e., intentions and the strategies to achieve them, and organizes their publication, search, and composition on the basis of these descriptions. In this way, it leverages SOC to an intentional level, ISOC. We present ISM, the model used to describe intentional services and to populate the service registry with their descriptions. We highlight its intention-driven perspective for service description, retrieval, and composition. Thereafter, we propose a methodology to determine intentional services that meet business goals and to publish them in the registry. Finally, the paper introduces a set of transformations to bridge the gap from the intentional level to the implementation level.

Journal ArticleDOI
TL;DR: This paper presents an integrated service-oriented enterprise system development framework as well as an instantiated design process model that resulted from a three-year action research case study with a Fortune 50 company in the financial services industry.
Abstract: This paper presents an integrated service-oriented enterprise system development framework (called the BITAM-SOA Framework) as well as an instantiated design process model (called the Service Engineering Schematic) that resulted from a three-year action research case study with a Fortune 50 company in the financial services industry. The BITAM-SOA Framework and Schematic advance both business-IT alignment and software architecture analysis techniques, supporting the engineering of enterprise-wide service-oriented systems, that is, service engineering.

Journal ArticleDOI
TL;DR: By using the Event Calculus formalism to specify and check the transactional behavior consistency of service composition, the approach provides a logical foundation to ensure service execution reliability.
Abstract: Different from process components, Web services are defined independently from any execution context. A key challenge of (Web) service compositions is how to ensure reliable execution. Due to their inherent autonomy and heterogeneity, it is difficult to reason about the behavior of service compositions, especially in case of failures. Therefore, there is a growing interest in verification techniques which help to prevent service composition execution failures. In this paper, we propose an event-driven approach to validate the transactional behavior of service compositions. The transactional behavior verification is done either at design time, to validate the consistency of recovery mechanisms, or after runtime, to report execution deviations and repair design errors, and therefore formally ensure service execution reliability. By using the Event Calculus formalism to specify and check the transactional behavior consistency of service compositions, our approach provides a logical foundation to ensure service execution reliability.

Journal ArticleDOI
TL;DR: The Mashup Services System (MSS), a novel platform to support users to create, use, and manage mashups with little or no programming effort, is described and its main enabling technologies are discussed.
Abstract: We propose a service-oriented approach to generate and manage mashups. The proposed approach is realized using the Mashup Services System (MSS), a novel platform that supports users in creating, using, and managing mashups with little or no programming effort. The approach relieves users from the programming-intensive, error-prone, and largely nonreusable process of creating and maintaining mashups. We describe the overall design of MSS and discuss and evaluate its main enabling technologies.

Journal ArticleDOI
TL;DR: A formal scientific workflow provenance model as the basis for querying and access control for workflow provenance; a security model for fine-grained access control for multilevel provenance; and an algorithm for the derivation of a full security specification based on inheritance, overriding, and conflict resolution.
Abstract: Provenance has become increasingly important in scientific workflows and services computing to capture the derivation history of a data product, including the original data sources, intermediate data products, and the steps that were applied to produce the data product. In many cases, both scientific results and the used protocol are sensitive and effective access control mechanisms are essential to protect their confidentiality. In this paper, we propose: 1) a formal scientific workflow provenance model as the basis for querying and access control for workflow provenance; 2) a security model for fine-grained access control for multilevel provenance and an algorithm for the derivation of a full security specification based on inheritance, overriding, and conflict resolution; 3) a formalization of the notion of security views and an algorithm for security view derivation; and 4) a formalization of the notion of secure abstraction views and an algorithm for its computation. A prototype called SecProv has been developed, and experiments show the effectiveness and efficiency of our approach.
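A simplified sketch of the derivation idea in point 2 (the provenance graph, labels, and conflict-resolution rule below are assumptions for illustration, not the paper's algorithm): explicit labels override inherited ones, unlabeled nodes inherit from the nodes they were derived from, and conflicting inherited levels are resolved by taking the most restrictive one.

```python
LEVELS = {"public": 0, "internal": 1, "confidential": 2}

def derive_labels(dag, explicit):
    """dag: node -> list of parent nodes it was derived from (provenance edges).
    explicit: partial node -> level assignments.
    Inheritance: a node inherits its parents' levels; an explicit label overrides
    inheritance; conflicting inherited levels resolve to the most restrictive."""
    labels = {}

    def level(node):
        if node in labels:
            return labels[node]
        if node in explicit:                               # overriding
            labels[node] = explicit[node]
        else:
            inherited = [level(p) for p in dag.get(node, [])]
            labels[node] = (max(inherited, key=LEVELS.get)  # conflict resolution
                            if inherited else "public")
        return labels[node]

    for node in dag:
        level(node)
    return labels

# raw data -> intermediate -> result; only two nodes are labeled explicitly.
provenance = {"raw_seq": [], "params": [], "aligned": ["raw_seq"],
              "report": ["aligned", "params"]}
print(derive_labels(provenance, explicit={"raw_seq": "confidential", "params": "public"}))
# {'raw_seq': 'confidential', 'params': 'public', 'aligned': 'confidential', 'report': 'confidential'}
```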

Journal ArticleDOI
TL;DR: The authors believe their proposal is the first of its kind to integrate several well-established theoretical and practical techniques from networking, microeconomics, and service-oriented computing into a fully distributed service delivery platform.
Abstract: In this paper, we propose a novel autonomic service delivery platform for service-oriented network environments. The platform enables a self-optimizing infrastructure that balances the goals of maximizing the business value derived from processing service requests and the optimal utilization of IT resources. We believe that our proposal is the first of its kind to integrate several well-established theoretical and practical techniques from networking, microeconomics, and service-oriented computing to form a fully distributed service delivery platform. The principal component of the platform is a utility-based cooperative service routing protocol that disseminates congestion-based prices among intermediaries to enable the dynamic routing of service requests from consumers to providers. We provide the motivation for such a platform and formally present our proposed architecture. We discuss the underlying analytical framework for the service routing protocol, as well as key methodologies which together provide a robust framework for our service delivery platform that is applicable to the next-generation of middleware and telecommunications architectures. We discuss issues regarding the fairness of service rate allocations, as well as the use of nonconcave utility functions in the service routing protocol. We also provide numerical results that demonstrate the ability of the platform to provide optimal routing of service requests.

Journal ArticleDOI
TL;DR: This paper proposes the Service Data Link model (SDL), a service relationship modeling schema, to describe service data correlations, which are data mappings among the input and output attributes of services, and presents SDG+, the authors' enhanced model which extends the expressive power of SDG to include attribute quantifiers, attribute transforms, and explicit dependencies.
Abstract: In this paper, we propose the Service Data Link model (SDL), a service relationship modeling schema, to describe service data correlations, which are data mappings among the input and output attributes of services. SDL recognizes the close correspondence between service data correlations and webpage hyperlinks, and defines service data correlations with explicit declarations, making it more expressive than the implicit method. We developed an XML implementation for SDL that can be seamlessly integrated into WSDL, the primary web services modeling language nowadays, and serves as an extension of metadata of services interfaces. An application of the SDL model in the domain of data-driven automatic service composition is then presented. First, we combine SDL with the Service Dependency Graph domain model developed by Liang, and present SDG+, our enhanced model which extends the expressive power of SDG to include attribute quantifiers, attribute transforms, and explicit dependencies. Then, we show how SDG+ can be used to improve the performance of composition algorithms in this domain.
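To illustrate data-driven composition over such dependency information, here is a minimal forward-chaining sketch (the service catalog is hypothetical, and SDG+'s attribute quantifiers, transforms, and explicit dependencies are not modeled): starting from the attributes the user can supply, repeatedly add services whose input attributes are already available until the requested attributes are produced.

```python
def compose(services, available, goal):
    """services: name -> {"in": set of input attributes, "out": set of outputs}.
    Forward-chain over data dependencies until the goal attributes are produced."""
    plan, attrs = [], set(available)
    progress = True
    while progress and not goal <= attrs:
        progress = False
        for name, sig in services.items():
            if name not in plan and sig["in"] <= attrs:
                plan.append(name)          # service is invocable: all inputs available
                attrs |= sig["out"]        # its outputs become available data
                progress = True
                if goal <= attrs:
                    return plan
    return plan if goal <= attrs else None

catalog = {
    "geocode": {"in": {"address"},    "out": {"lat", "lon"}},
    "weather": {"in": {"lat", "lon"}, "out": {"forecast"}},
    "traffic": {"in": {"lat", "lon"}, "out": {"congestion"}},
}
print(compose(catalog, available={"address"}, goal={"forecast"}))
# ['geocode', 'weather']
```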

Journal ArticleDOI
TL;DR: An approach that considers different viewpoints of service composition behavior analysis is presented, modeling service orchestration, choreography behavior, and service orchestration deployment through formal semantics applied to service behavior and configuration descriptions.
Abstract: The Service-Oriented Architecture (SOA) approach to building systems of application and middleware components promotes the use of reusable services with a core focus of service interactions, obligations, and context. Although services technically relieve the difficulties of specific technology dependency, the difficulties in building reusable components is still prominent and a challenge to service engineers. Engineering the behavior of these services means ensuring that the interactions and obligations are correct and consistent with policies set out to guide partners in building the correct sequences of interactions to support the functions of one or more services. Hence, checking the suitability of service behavior is complex, particularly when dealing with a composition of services and concurrent interactions. How can we rigorously check implementations of service compositions? What are the semantics of service compositions? How does deployment configuration affect service composition behavior safety? To facilitate service engineers designing and implementing suitable and safe service compositions, we present in this paper an approach to consider different viewpoints of service composition behavior analysis. The contribution of the paper is threefold. First, we model service orchestration, choreography behavior, and service orchestration deployment through formal semantics applied to service behavior and configuration descriptions. Second, we define types of analysis and properties of interest for checking service models of orchestrations, choreography, and deployment. Third, we describe mechanical support by providing a comprehensive integrated workbench for the verification and validation of service compositions.

Journal ArticleDOI
TL;DR: This work proposes a design methodology, Service-Oriented Design with Aspects (SODA), for service-oriented systems to address the need to continually upgrade and evolve services while maintaining various versions.
Abstract: We propose a design methodology, Service-Oriented Design with Aspects (SODA), for service-oriented systems to address the need to continually upgrade and evolve services while maintaining various versions. Our approach treats aspects as first-class design elements and consistently applies the concept of aspect to all phases of design and evaluation. At the early design stages, crosscutting concerns are first separated out as aspects, and then, services are composed by weaving the different design elements together. The behavior of aspects and services is represented as basic Petri Nets and we present rules for weaving together Petri Nets so as to obtain behavior of the integrated system (with aspects crosscutting services). Even at the evaluation stages, performance and resource data are separated out as aspects to be woven in to the design so as to enable advanced analysis using Petri Net tools. A small order service example is used to illustrate our approach.
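A minimal sketch of the underlying idea, representing behavior as basic Petri nets and merging an aspect net into a base service net. The merge-on-shared-places rule and the example nets below are simplifications and assumptions for illustration, not SODA's actual weaving rules.

```python
class PetriNet:
    """Minimal place/transition net: a transition consumes one token from each
    input place and produces one token in each output place."""
    def __init__(self, transitions, marking):
        self.transitions = dict(transitions)   # name -> (input places, output places)
        self.marking = dict(marking)           # place -> token count

    def enabled(self, t):
        ins, _ = self.transitions[t]
        return all(self.marking.get(p, 0) > 0 for p in ins)

    def fire(self, t):
        ins, outs = self.transitions[t]
        assert self.enabled(t), f"{t} not enabled"
        for p in ins:
            self.marking[p] -= 1
        for p in outs:
            self.marking[p] = self.marking.get(p, 0) + 1

def weave(base, aspect):
    """Merge an aspect net into a base net by gluing them at shared place names
    (a simplification of the weaving rules described in the paper)."""
    transitions = {**base.transitions, **aspect.transitions}
    marking = {**aspect.marking, **base.marking}
    return PetriNet(transitions, marking)

# Base behavior: receive an order, then ship it.
base = PetriNet({"receive": (["start"], ["ordered"]), "ship": (["ordered"], ["done"])},
                {"start": 1})
# Logging aspect crosscuts the 'ordered' place.
aspect = PetriNet({"log": (["ordered"], ["ordered", "logged"])}, {})

woven = weave(base, aspect)
woven.fire("receive"); woven.fire("log"); woven.fire("ship")
print(woven.marking)   # {'start': 0, 'ordered': 0, 'logged': 1, 'done': 1}
```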

Journal ArticleDOI
TL;DR: A model-driven approach for the dynamic adaptation of Web services based on ontology-aware service templates, which raises the level of abstraction from concrete Web service implementations to high-level service models, which leads to more flexible and automated adaptations through template designs and transformations.
Abstract: Service-oriented enterprise systems, which tend to be heterogeneous, loosely coupled, long-lived, and continuously running, have to cope with frequent changes to their requirements and the environment. In order to address such changes, applications need to be inherently flexible and adaptive, supported by appropriate infrastructures. In this paper, we propose a model-driven approach for the dynamic adaptation of Web services based on ontology-aware service templates. Model-driven engineering raises the level of abstraction from concrete Web service implementations to high-level service models, which leads to more flexible and automated adaptations through template designs and transformations. The ontological semantics enhances the service matching capabilities required by the dynamic adaptation process. Service templates are based on OWL-S descriptions and provide the necessary means to capture and parameterize specific behavior patterns of service models. In this paper, we apply our approach in the context of the EU-funded ALIVE project and illustrate, as an example, how the proposed framework supports the adaptation of the authentication mechanism used by an interactive tourist recommendation system.

Journal ArticleDOI
TL;DR: The framework TracG is presented, which is based on WS-BusinessActivity and contains at its core a set of rules for deciding on the ongoing confirmation or cancelation status of participants' work and protocol extensions for monitoring the progress of a process.
Abstract: Current approaches to transactional support of distributed processes in service-oriented environments are limited to scenarios where the participant initiating the process maintains a controlling position throughout the lifetime of the process. This constraint impedes support of complex processes where participants may only possess limited local views on the overall process. Furthermore, there is little support of dynamic aspects: failure or exit of participants usually leads to cancelation of the whole process. In this paper, we address these limitations by introducing a framework that strengthens the role of the coordinator and allows for largely autonomous coordination of dynamic processes. We first discuss motivating examples and analyze existing approaches to transactional coordination. Subsequently, we present our framework TracG, which is based on WS-BusinessActivity. It contains at its core a set of rules for deciding on the ongoing confirmation or cancelation status of participants' work and protocol extensions for monitoring the progress of a process. Various types of participant vitality for a process are distinguished, facilitating the controlled exit of nonvital participants as well as continuation of a process in case of tolerable failures. The implementation of the framework is presented and discussed regarding interoperability issues.

Journal ArticleDOI
TL;DR: A new method for technical target setting in QFD, based on an artificial neural network, is presented; it sets technical targets consistent with the relationships between quality-of-web-service requirements and design attributes, whether those relationships are linear or nonlinear.
Abstract: There are at least two challenges with quality management of service-oriented architecture based web service systems: 1) how to link its technical capabilities with customer's needs explicitly to satisfy customers' functional and nonfunctional requirements; and 2) how to determine targets of web service design attributes. Currently, the first issue is not addressed and the second one is dealt with subjectively. Quality Function Deployment (QFD), a quality management system, has found its success in improving quality of complex products although it has not been used for developing web service systems. In this paper, we analyze requirements for web services and their design attributes, and apply the QFD for developing web service systems by linking quality of service requirements to web service design attributes. A new method for technical target setting in QFD, based on an artificial neural network, is also presented. Compared with the conventional methods for technical target setting in QFD, such as benchmarking and the linear regression method, which fail to incorporate nonlinear relationships between design attributes and quality of service requirements, it sets up technical targets consistent with relationships between quality of web service requirements and design attributes, no matter whether they are linear or nonlinear.
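A rough sketch of neural-network-based target setting under stated assumptions (the design attributes, the historical data, and the use of scikit-learn's MLPRegressor are hypothetical stand-ins for the paper's model): fit a small network on past design-attribute/QoS observations, then pick the cheapest design-attribute setting whose predicted QoS satisfaction reaches the required level.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical history: design attributes (cache size MB, thread pool size) versus
# the measured satisfaction of a response-time requirement, on a 0..1 scale.
X = np.array([[16, 4], [32, 4], [64, 8], [128, 8], [128, 16], [256, 16]], dtype=float)
y = np.array([0.30, 0.45, 0.65, 0.78, 0.85, 0.90])

# A small network can capture nonlinear attribute/requirement relationships that
# benchmarking or linear regression would miss.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
).fit(X, y)

# Technical target setting: the least "costly" attribute setting whose predicted
# satisfaction meets the required level (cost here is just the attribute sum).
required = 0.80
candidates = [(c, t) for c in (16, 32, 64, 128, 256) for t in (4, 8, 16)]
feasible = [cand for cand in candidates
            if model.predict(np.array([cand], dtype=float))[0] >= required]
print(min(feasible, key=sum) if feasible else "no setting meets the requirement")
```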

Journal ArticleDOI
TL;DR: A solution is presented that can convert the enormous body of existing desktop software into on-demand software delivered across the Internet without any modification of source code; performance analysis and tests show that the solution is efficient.
Abstract: Accessing personalized software applications on demand is an attractive usage mode for software. This paper presents such a solution based on lightweight virtualization technologies, which can convert the enormous body of existing desktop software into on-demand software across the Internet without any modification of source code. First, a construction and runtime model of software is proposed. It regards software as an entity containing three parts: Part 1 includes all resources provided by the OS; Part 2 contains what is created by the installation process; and Part 3 is the data produced or modified at runtime. The software is executed in a lightweight virtualization environment where the APIs accessing Parts 2 and 3 are intercepted and redirected to their real storage positions (such as the network or a portable storage device) as needed. In addition, a network-resource access protocol is developed for software on demand, which implements content-addressable storage, p2p (peer-to-peer) transfer acceleration, content integrity checks, and the prevention of illegal copying. From the user's viewpoint, personalized software can be run on any compatible computer even though it is not installed on the host. Finally, performance analysis and tests show that the proposed solution is efficient.

Journal ArticleDOI
TL;DR: This special issue provides insights into the latest research on web service querying and efficient selection including modeling techniques for web services, service query languages, algorithms for efficient service selection, as well as quality of web service modeling and quality-based service selection.
Abstract: SERVICE-ORIENTED computing is gaining momentum as the next technological vehicle to leverage the huge investments in web application development. Web services are poised to take center stage as part of this adoption [4]. The ever-increasing number of web services will have the effect of transforming the web from a data-oriented repository to a service-oriented repository, also known as the Service Web [1]. In this new paradigm, existing business logic would be wrapped as web services to be accessible on the web via a web services middleware [2]. As the number of web services is expected to increase substantially, this would have the effect of introducing competition among web services that offer similar functionalities. Service users are enabled to select the “best” web services and/or their combinations with respect to their expected quality, such as price, response time, and reputation. There is a need to provide a sound framework to organize web services. This would serve as a platform for querying web services. Building this framework is especially important given the ever-increasing scale and heterogeneity of web service deployments. A key ingredient of such a service framework is a formal service query model that can capture the key features of services to filter interactions and accelerate service searches. The query models must be congruent with the dynamic, active, autonomous, and highly heterogeneous nature of web services and their environment. Query languages and efficient selection techniques can then be developed once such a service model is in place. Existing service discovery technologies, such as service registries and service search engines, mainly support simple keyword-based search on web services. However, keyword search cannot always precisely locate web services, partially because of the rich semantics embodied in these services. Due to the ambiguity of keywords, which are typically described using natural language, either too many irrelevant services may be returned or some highly relevant services may be missed. As a key facilitator for application outsourcing, a common usage pattern of web services is to be programmatically integrated into other applications (e.g., a travel package, navigation system, etc.). This further requires a service query mechanism that is more precise and reliable than keyword-based search. Query processing on web services is a novel concept that goes beyond the traditional data-centric view of query processing, which is mainly performance centered. It focuses on user quality parameters to select multiple services that are equivalent in functionality but exhibit a different quality of web service [3]. This special issue provides insights into the latest research on web service querying and efficient selection. Five articles were selected through a rigorous review process. They cover a set of key research topics including modeling techniques for web services, service query languages, algorithms for efficient service selection, as well as quality of web service modeling and quality-based service selection. The article by Skoutas et al., “Ranking and Clustering Web Services Using Multicriteria Dominance Relationships,” proposes a service selection framework that integrates the similarity matching scores of multiple parameters obtained from various matchmaking algorithms. The framework relies on service dominance relationships to determine the relevance between services and users’ requests.
Instead of using a weighting mechanism, the dominance relationship adopts a multi-objective strategy that simultaneously considers the matching scores of all the parameters for ranking the relevant services. A clustering algorithm is also proposed that captures the trade-offs among different parameters with respect to the considered matching criteria. The article by Grigori et al., “Ranking BPEL Processes for Service Discovery,” proposes a service discovery approach based on behavioral descriptions expressed in BPEL. Behavioral matchmaking goes beyond interface matchmaking as it considers the constraints on the invocation order of operations in service interfaces. Graph matching algorithms are applied, which enable the delivery of approximate behavioral matches. The article by Michlmayr et al., “End-to-End Support for QoS-Aware Service Selection, Binding, and Mediation in VRESCo,” describes a runtime environment for service-oriented computing, called VRESCo. The proposed VRESCo framework provides a service metadata model. Service discovery and selection approaches are developed using this model. In addition, other important issues, such as QoS monitoring, dynamic binding, and service mediation, are addressed. The article by Barhamgi et al., “A Query Rewriting Approach for Web Service Composition,” proposes a service querying approach to compose data-providing services. The data-providing services are modeled as RDF views over a mediated ontology specified in RDF to capture the consensual and shared knowledge in a given domain.

Journal ArticleDOI
TL;DR: This special section sheds light on the latest advances in the fields of Web services and transactional models in order to better address advanced research on and experience with transactional Web services.
Abstract: THE Internet, along with its related technologies, has created an interconnected world in which information flows easily and tasks are processed collaboratively. To make service-oriented applications more robust, Web services must be examined from a transactional perspective. By transactional, we mean first defining the actions that guide the execution of Web services when failures arise and then the states that permit claiming the success of this execution following the handling of these failures. A variety of transactional models are reported in the database community. Some of these models can be leveraged and enhanced in response to Web services characteristics while others are not suitable for Web services due to the dynamic nature of Web services, the long-running execution scenarios that Web services take part in, and the successful execution of Web services despite other peers’ failure. Today’s service-oriented applications require advanced transactional models that guarantee integrity and continuity of business processes despite the dynamic nature of the features of the environments hosting the execution of these applications. This special section sheds light on the latest advances in the fields of Web services and transactional models in order to better address advanced research on and experience with transactional Web services. Four papers out of 19 submissions were selected for inclusion in this special section. Each submission was subject to a double-review process by at least three peer reviewers. In the first paper, entitled “Event-Based Design and Runtime Verification of Composite Service Transactional Behavior,” Gaaloul et al. propose an event-driven approach to validate the transactional behavior of Web services taking part in compositions. A composition life cycle consists of four phases, namely, design, execution, monitoring, and reengineering. The verification of this behavior throughout this life cycle is done either at design time to validate recovery mechanisms’ consistency or after runtime to detect execution deviations and repair design errors, and, therefore, formally ensure the execution reliability of Web services. In addition, this verification is based on the Event Calculus formalism, which offers a sound definition for this reliability. In the second paper, entitled “FACTS: A Framework for Fault-Tolerant Composition of Transactional Web Services,” Liu et al. note that delivering reliable compositions over unreliable Web services is challenging. To address this challenge, Liu et al. identify a set of high-level exception handling strategies and a new taxonomy of transactional Web services to devise a fault-tolerant mechanism that combines exception handling and transaction techniques. In addition, Liu et al. devise two modules, known as specification and verification, to assist service designers in constructing fault handling logic conveniently and correctly. This logic is automatically injected into WS-BPEL. In the third paper, entitled “Rule-Based Coordination of Distributed Web Service Transactions,” von Riegen et al. study the distributed transactional activity control problem in the context of service choreographies. In such a context, many participants are allowed to work together to achieve a global, common goal, and the initiator of a process is not always the one who is able to commit or cancel a transaction. Von Riegen et al. introduce a framework that allows autonomous coordination of dynamic processes. 
This framework extends WS-BusinessActivity to strengthen the role of the coordinator. Predicate rules are developed to let participants cancel or complete a process and help the coordinators decide on the completion of processes and the confirmation or cancellation of the participants' work. In the fourth paper, entitled “TQoS: Transactional and QoS-Aware Selection Algorithm for Automatic Web Service Composition,” El Haddad et al. focus on the issues of selecting and composing Web services. A design-time selection algorithm is proposed in which transactional and QoS requirements are both integrated into the selection process. This algorithm is based on the notion of risk, which indicates whether execution outcomes can be compensated for or not. Two risk levels of execution in a transactional system are considered: Risk 0, meaning that if the execution is successful, the obtained outcomes can be compensated for by the user, and Risk 1, meaning that the system does not guarantee successful execution, but if it succeeds, the outcomes cannot be compensated for by the user. We hope this special section inspires researchers to develop new transactional models that would guarantee the integrity and continuity of service-oriented applications.

Journal ArticleDOI
TL;DR: This special issue focuses on the modeling and implementation issues of enterprise service systems, and investigates models and techniques for the design and implementation of service-oriented systems for the business enterprise.
Abstract: The services industry is one of the most important and fast-growing fields in computing. Businesses are rapidly becoming aware of the ubiquitous presence and significant advantages of service-oriented computing and supply chains. However, the heterogeneity of online businesses and their applications makes interoperability, an essential requirement for business globalization, a huge challenge. Many businesses, such as Amazon.com, United Airlines, and Motorola, have implemented or transformed their online business applications into Web Services using a variety of service infrastructures provided by vendors including IBM, SAP, Microsoft, Oracle, and BEA. As a result, the Internet is shifting from a repository of data to a repository of services. Gartner predicts that the worldwide Software as a Service (SaaS) market will grow from $6.3 billion in 2006 to $19.3 billion by the end of 2011. Services Computing is a rapidly emerging discipline that aims to enable efficient and effective interaction and collaboration among heterogeneous, distributed business services. Web Services promote abstraction, loose coupling, and interoperability among services. By allowing components to be decoupled using specified interfaces, service-oriented computing enables platform-independent integration. These new integration possibilities are valuable for constructing today's interoperable, large-scale, complex software-intensive systems that help make businesses more agile. With Web Services as an effective enabling technology and Service-Oriented Architecture (SOA) as the central architectural model, Services Computing leverages computing, service standards, and information technology to model, create, operate, and manage critical business transactions as defined and managed flows of services. Web Services are increasingly implemented and deployed as an effective means to create and streamline processes and collaborations among governments, businesses, and individuals. With such ubiquitous and powerful capabilities found in Services Computing, businesses are poised to move many of their enterprise-critical systems to SOA environments. However, many remain hesitant due to important open questions on security, performance, provenance, governance, reliability, sustainability, and stakeholder acceptance issues. Thus, the need for more research, education, and training on Services Computing is vital before business adoption and the full impacts of Services Computing can be achieved. The papers published in this special issue fill a critical research gap by investigating models and techniques for the design and implementation of service-oriented systems for the business enterprise. This special issue focuses on the modeling and implementation issues of enterprise service systems. Out of 25 submissions, five papers were eventually accepted after an initial screening and then two to three rounds …

Journal ArticleDOI
TL;DR: This issue of the IEEE Transactions on Services Computing is pleased to publish six research papers, which include two regular submissions and four papers from a Special Section on Transactional Web Services.
Abstract: The Editor-in-Chief (EIC) welcomes you to the first issue of the IEEE Transactions on Services Computing in 2010. In this issue, he is pleased to publish six research papers, which include two regular submissions and four papers from a Special Section on Transactional Web Services. In this editorial preface, he introduces these papers in the context of the body of knowledge areas.