
Showing papers in "International Journal of Web Services Research in 2008"


Journal ArticleDOI
TL;DR: Methods are defined for identifying not only perfectly block-structured fragments in BPMN models, but also quasi-structured fragments that can be turned into perfectly structured ones and flow-based acyclic fragments that can be mapped onto a combination of structured constructs and control links.
Abstract: The Business Process Modelling Notation (BPMN) is a graph-oriented language in which control and action nodes can be connected almost arbitrarily. It is primarily targeted at domain analysts and is supported by many modelling tools, but in its current form, it lacks the semantic precision required to capture fully executable business processes. The Business Process Execution Language for Web Services (BPEL) on the other hand is a mainly block-structured language, targeted at software developers and supported by several execution platforms. In the current setting, translating BPMN models into BPEL code is a necessary step towards standards-based business process development environments. This translation is challenging since BPMN and BPEL represent two fundamentally different classes of languages. Existing BPMN-to-BPEL translations rely on the identification of block-structured patterns in BPMN models that are mapped into block-structured BPEL constructs. This paper advances the state of the art in BPMN-to-BPEL translation by defining methods for identifying not only perfectly block-structured fragments in BPMN models, but also quasi-structured fragments that can be turned into perfectly structured ones and flow-based acyclic fragments that can be mapped into a combination of block-structured constructs and control links. Beyond its direct relevance in the context of BPMN and BPEL, this paper addresses issues that arise generally when translating between graph-oriented and block-structured flow definition languages.
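As a rough illustration of the kind of pattern matching such translations rest on (a hypothetical toy encoding, far simpler than the fragments the paper identifies), a split whose branches all run straight into the same join is perfectly block-structured and can be mapped to a single BPEL block:

```python
def maps_to_block(graph, split, join):
    """True when every branch leaving `split` goes directly to `join`,
    i.e. the fragment between them is perfectly block-structured.
    `graph` is a toy adjacency list: node -> list of successor nodes."""
    return all(graph.get(branch) == [join] for branch in graph[split])

# A parallel split with two one-task branches rejoining immediately:
process = {"split": ["reserve", "bill"],
           "reserve": ["join"], "bill": ["join"]}
```

Quasi-structured and flow-based acyclic fragments need considerably more machinery; this shows only the base case that existing translations already handle.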

180 citations


Journal ArticleDOI
TL;DR: This work addresses the challenge to record uniform and usable provenance metadata that meets the domain needs while minimizing the modification burden on the service authors and the performance overhead on the workflow engine and the services.
Abstract: The increasing ability for the sciences to sense the world around us is resulting in a growing need for data-driven e-Science applications that are under the control of workflows composed of services on the Grid. The focus of our work is on provenance collection for these workflows, which is necessary to validate the workflow and to determine the quality of generated data products. The challenge we address is to record uniform and usable provenance metadata that meets the domain needs while minimizing the modification burden on the service authors and the performance overhead on the workflow engine and the services. The framework is based on generating discrete provenance activities during the lifecycle of a workflow execution that can be aggregated to form complex data and process provenance graphs that can span across workflows. The implementation uses a loosely coupled publish-subscribe architecture for propagating these activities, and the capabilities of the system satisfy the needs of detailed provenance collection. A performance evaluation of a prototype finds a minimal performance overhead (in the range of 1% for an eight-service workflow using 271 data products).
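The loosely coupled publish-subscribe propagation of provenance activities can be sketched as follows (an illustrative in-process stand-in; the actual system uses a distributed notification broker, and all names here are hypothetical):

```python
from collections import defaultdict

class ProvenanceBus:
    """In-process stand-in for a publish-subscribe channel carrying
    discrete provenance activities emitted during workflow execution."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, activity):
        # Services stay decoupled from collectors: they only publish.
        for handler in self.subscribers[topic]:
            handler(activity)

bus = ProvenanceBus()
log = []  # a collector aggregating activities into a provenance record
bus.subscribe("workflow.trace", log.append)
bus.publish("workflow.trace",
            {"service": "align", "event": "invoked", "inputs": ["seq1"]})
```

Because collectors subscribe independently, aggregating activities into cross-workflow provenance graphs does not add work on the service side.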

161 citations


Journal ArticleDOI
TL;DR: A metric is presented and described to analyze the control-flow complexity of business processes and is evaluated in terms of Weyuker’s properties in order to guarantee that it qualifies as good and comprehensive.
Abstract: Organizations are increasingly faced with the challenge of managing business processes, workflows, and recently, Web processes. One important aspect of business processes that has been overlooked is their complexity. High complexity in processes may result in poor understandability, errors, defects, and exceptions, leading processes to need more time to develop, test, and maintain. Therefore, excessive complexity should be avoided. Business process measurement is the task of empirically and objectively assigning numbers to the properties of business processes in such a way so as to describe them. Desirable attributes to study and measure include complexity, cost, maintainability, and reliability. In our work, we will focus on investigating process complexity. We present and describe a metric to analyze the control-flow complexity of business processes. The metric is evaluated in terms of Weyuker’s properties in order to guarantee that it qualifies as good and comprehensive. To test the validity of the metric, we describe the experiment we have carried out for empirically validating the metric.
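A control-flow complexity metric of this kind counts, for each split gateway, the number of control-flow states its outgoing branches can induce. A minimal sketch of that counting rule (assuming the usual XOR/OR/AND split treatment; see the paper for the authoritative definition):

```python
def cfc(splits):
    """Control-flow complexity over a process's split gateways.
    `splits` is a list of (kind, fan_out) pairs; kind is 'xor',
    'or', or 'and'. Illustrative sketch of the counting rule."""
    total = 0
    for kind, fan_out in splits:
        if kind == "xor":
            total += fan_out           # one state per outgoing branch
        elif kind == "or":
            total += 2 ** fan_out - 1  # any non-empty subset of branches
        elif kind == "and":
            total += 1                 # all branches taken: one state
    return total
```

The exponential term for OR-splits is what makes inclusive branching dominate the score, matching the intuition that it is the hardest construct to reason about.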

95 citations


Journal ArticleDOI
TL;DR: A framework that adapts the conventional home electric appliances with the infrared remote controls (legacy appliances) to the emerging home network system (HNS) and extensively uses the concept of service-oriented architecture to improve programmatic interoperability among multi-vendor appliances.
Abstract: This article presents a framework that adapts conventional home electric appliances with infrared remote controls (legacy appliances) to the emerging home network system (HNS). The proposed method extensively uses the concept of service-oriented architecture to improve programmatic interoperability among multi-vendor appliances. We first prepare APIs that assist a PC to send infrared signals to the appliances. We then aggregate the APIs within self-contained service components, so that each of the components achieves a logical feature independent of device/vendor-specific operations. The service components are finally exhibited to the HNS as Web services. As a result, the legacy appliances can be used as distributed components with open interfaces. To demonstrate the effectiveness, we implement an actual HNS and integrate services with multi-vendor legacy appliances.
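The layering the framework describes (raw infrared APIs wrapped by self-contained, vendor-independent service components, which are then exposed as Web services) can be sketched in miniature; all class and method names here are hypothetical:

```python
class InfraredDriver:
    """Device/vendor-specific layer: transmits a raw IR code. Toy
    stand-in for the PC-side infrared APIs the framework prepares."""
    def send(self, code):
        return f"sent:{code}"

class LightService:
    """Self-contained service component: a logical feature ('turn the
    light on/off') hiding which vendor's IR codes are used underneath.
    In the framework this component would then be exhibited to the HNS
    as a Web service."""
    def __init__(self, driver, on_code, off_code):
        self.driver = driver
        self.on_code, self.off_code = on_code, off_code

    def turn_on(self):
        return self.driver.send(self.on_code)

    def turn_off(self):
        return self.driver.send(self.off_code)
```

Swapping in a different vendor's codes changes only the constructor arguments, which is the interoperability point of the layering.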

81 citations


Journal ArticleDOI
TL;DR: A new type of WS-BPEL activity is introduced to model human activities, and RBAC-WS-BPEL, a role-based access-control model for WS-BPEL, and BPCL, a language to specify authorization constraints, are developed.
Abstract: Business processes, the next-generation workflows, have attracted considerable research interest in the last 15 years. More recently, several XML-based languages have been proposed for specifying and orchestrating business processes, resulting in the WS-BPEL language. Even if WS-BPEL has been developed to specify automated business processes that orchestrate activities of multiple Web services, there are many applications and situations requiring that people be considered as additional participants who can influence the execution of a process. Significant omissions from WS-BPEL are the specification of activities that require interactions with humans to be completed, called human activities, and the specification of authorization information associating users with human activities in a WS-BPEL business process and authorization constraints, such as separation of duty, on the execution of human activities. In this article, we address these deficiencies by introducing a new type of WS-BPEL activity to model human activities and by developing RBAC-WS-BPEL, a role-based access-control model for WS-BPEL, and BPCL, a language to specify authorization constraints.
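A separation-of-duty constraint of the kind BPCL expresses can be checked in a few lines (a hypothetical helper; BPCL's actual syntax is richer):

```python
def violates_sod(assignments, constraints):
    """Check separation-of-duty over human activities.
    assignments: {human_activity: user}; constraints: pairs of
    activities that must not be performed by the same user."""
    return any(assignments.get(a) is not None
               and assignments.get(a) == assignments.get(b)
               for a, b in constraints)
```

Authorizing a user for a human activity would then be gated on this check before the activity is dispatched.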

55 citations


Journal ArticleDOI
TL;DR: A case study on applying the clustering data mining technique to the Web services usage data to improve the Web service discovery process and the challenges that arise are discussed.
Abstract: Business needs, the availability of huge volumes of data, and the continuous evolution of Web service functions drive the need to apply data mining in the Web service domain. This paper recommends several data mining applications that can help address problems concerned with the discovery and monitoring of Web services. It then presents a case study on applying the clustering data mining technique to Web service usage data to improve the Web service discovery process. Finally, it discusses the challenges that arise when applying data mining to Web service usage data and abstract information.
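As a toy version of the clustering step in such a case study (invented data; the paper clusters richer usage records), a two-cluster 1-D k-means over per-service invocation counts separates rarely used from heavily used services:

```python
def cluster_2(values, rounds=20):
    """Two-cluster 1-D k-means, a stand-in for the clustering applied
    to Web service usage data. `values` are per-service usage counts."""
    centroids = [min(values), max(values)]
    for _ in range(rounds):
        clusters = ([], [])
        for v in values:
            nearest = 0 if abs(v - centroids[0]) <= abs(v - centroids[1]) else 1
            clusters[nearest].append(v)
        # Recompute centroids; keep the old one if a cluster emptied.
        centroids = [sum(c) / len(c) if c else centroids[j]
                     for j, c in enumerate(clusters)]
    return clusters
```

A discovery engine could then, for instance, rank services from the heavily used cluster higher in search results.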

53 citations


Journal ArticleDOI
TL;DR: A new process to automate the design of transactional composite Web services based on the acceptable termination states model is proposed and the resulting composite Web service is compliant with the consistency requirements expressed by business application designers and its execution can easily be coordinated using the coordination rules provided as an outcome of this approach.
Abstract: Composite applications leveraging the functionalities offered by Web services are today the underpinnings of enterprise computing. However, current Web services composition systems use only functional requirements in the selection process of component Web services, while transactional consistency is a crucial parameter of most business applications. The transactional challenges raised by the composition of Web services are twofold: integrating relaxed atomicity constraints at both design and composition time, and coping with the dynamicity introduced by the service-oriented computing paradigm. In this paper, we propose a new process to automate the design of transactional composite Web services. Our solution for Web services composition takes into account not only functional requirements but also transactional ones, based on the acceptable termination states model. The resulting composite Web service is compliant with the consistency requirements expressed by business application designers, and its execution can easily be coordinated using the coordination rules provided as an outcome of our approach. An implementation of our theoretical results augmenting an OWL-S matchmaker is further detailed as a proof of concept.

37 citations


Journal ArticleDOI
TL;DR: Empirical evaluation results show that the proposed model-driven development (MDD) framework improves the reusability and maintainability of service-oriented applications by hiding low-level implementation technologies in SOA.
Abstract: Service oriented architecture (SOA) is an emerging style of software architectures to reuse and integrate existing systems for designing new applications. Each application is designed in an implementation independent manner using two major abstract concepts: services and connections between services. In SOA, non-functional aspects (e.g., security and fault tolerance) of services and connections should be described separately from their functional aspects (i.e., business logic) because different applications use services and connections in different non-functional contexts. This paper proposes a model-driven development (MDD) framework for non-functional aspects in SOA. The proposed MDD framework consists of (1) a Unified Modeling Language (UML) profile to model non-functional aspects in SOA, and (2) an MDD tool that transforms a UML model defined with the proposed profile to application code. Empirical evaluation results show that the proposed MDD framework improves the reusability and maintainability of service-oriented applications by hiding low-level implementation technologies in SOA.

35 citations


Journal ArticleDOI
TL;DR: The StarSLEE platform, which extends JAIN-SLEE in order to compose JAIN-SLEE services with Web services, and the StarSCE service creation environment, which allows exporting value-added services as communication Web services, are described, and open issues that must be addressed to introduce Web services into new telecom service platforms are analyzed.
Abstract: Meshing up telecommunication and IT resources seems to be the real challenge for supporting the evolution towards the next generation of Web Services. In telecom world, JAIN-SLEE (JAIN Service Logic Execution Environment) is an emerging standard specification for Java service platforms targeted to host value added services, composed of telecom and IT services. In this paper we describe StarSLEE platform which extends JAIN-SLEE in order to compose JAIN-SLEE services with Web services and the StarSCE service creation environment which allows exporting value added services as communication web services, and we analyze open issues that must be addressed to introduce Web Services in new telecom service platforms.

26 citations


Journal ArticleDOI
TL;DR: The content of an Internet or Web service security policy is derived and a flexible security personalization approach is proposed that will allow an Internetor Web service provider and customer to negotiate to an agreed-upon personalized security policy.
Abstract: The growth of the Internet has been accompanied by the growth of Internet services (e.g., e-commerce, e-health). This proliferation of services and the increasing attacks on them by malicious individuals have highlighted the need for service security. The security requirements of an Internet or Web service may be specified in a security policy. The provider of the service is then responsible for implementing the security measures contained in the policy. However, a service customer or consumer may have security preferences that are not reflected in the provider’s security policy. In order for service providers to attract and retain customers, as well as reach a wider market, a way of personalizing a security policy to a particular customer is needed. We derive the content of an Internet or Web service security policy and propose a flexible security personalization approach that will allow an Internet or Web service provider and customer to negotiate to an agreed-upon personalized security policy. In addition, we present two application examples of security policy personalization, and overview the design of our security personalization prototype.

22 citations


Journal ArticleDOI
TL;DR: The case for workflows and workflow discovery in science is presented and a mechanism for ranking workflow fragments is provided based on graph sub-isomorphism detection, finding that the average human ranking can largely be reproduced.
Abstract: Much has been written on the promise of Web service discovery and (semi-)automated composition. In this discussion, the value to practitioners of discovering and reusing existing service compositions, captured in workflows, is mostly ignored. We present the case for workflows and workflow discovery in science and develop one discovery solution. Through a survey with 21 scientists and developers from the (my)Grid/Taverna workflow environment, workflow discovery requirements are elicited. Through a user experiment with 13 scientists, an attempt is made to build a benchmark for workflow ranking. Through the design and implementation of a workflow discovery tool, a mechanism for ranking workflow fragments is provided based on graph sub-isomorphism detection. The tool evaluation, drawing on a corpus of 89 public workflows and the results of the user experiment, finds that, for a simple showcase, the average human ranking can largely be reproduced.
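A crude stand-in for the fragment-ranking idea (the tool uses proper graph sub-isomorphism detection; the scoring and data below are hypothetical) is to rank candidate workflows by the fraction of the query workflow's links they contain:

```python
def rank_fragments(query_edges, candidates):
    """Rank candidate workflows by how much of the query's structure
    they contain. `query_edges` is a set of (producer, consumer) links;
    `candidates` maps workflow name -> its own set of links.
    A cheap proxy for sub-isomorphism-based fragment matching."""
    def coverage(name):
        return len(query_edges & candidates[name]) / len(query_edges)
    return sorted(candidates, key=coverage, reverse=True)
```

Real sub-isomorphism detection also credits structurally matching fragments whose node labels differ, which simple edge intersection cannot do.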

Journal ArticleDOI
TL;DR: A reengineering approach is presented that starts from the composite service (CS) execution log to improve recovery mechanisms, proposing a set of mining techniques to discover CS transactional behavior from an event-based log.
Abstract: Ensuring composite service (CS) reliability is a challenging problem. Indeed, due to the inherent autonomy and heterogeneity of Web services, it is difficult to predict and reason about the behavior of the overall composite service. Generally, previous approaches develop, using their modeling formalisms, a set of techniques to analyze the composition model and check “correctness” properties. Although powerful, these approaches may fail, in some cases, to ensure reliable CS executions even if they formally validate the composition model. This is because properties specified in the studied composition model remain assumptions that may not coincide with reality (i.e., effective CS executions). Addressing this issue, we present a reengineering approach that starts from the CS execution log to improve its recovery mechanisms. Basically, we propose a set of mining techniques to discover CS transactional behavior from an event-based log. Then, based on this mining step, we use a set of rules to improve its reliability.

Journal ArticleDOI
TL;DR: This paper addresses the interoperability problem by first presenting its multiple dimensions and then by describing a conceptual model called generic service model (GeSMO), which can be used as a basis for the development of languages, tools and mechanisms that support interoperability.
Abstract: Service-oriented computing (SOC) has been marked as the technology trend that caters for interoperability among the components of a distributed system. However, the emergence of various incompatible instantiations of the SOC paradigm, e.g. Web or peer-to-peer services (P2P), and the divergences encountered within each of these instantiations state clearly that interoperability is still an open issue, mainly due to its multi-dimensional nature. In this paper we address the interoperability problem by first presenting its multiple dimensions and then by describing a conceptual model called generic service model (GeSMO), which can be used as a basis for the development of languages, tools and mechanisms that support interoperability. We then illustrate how GeSMO has been utilized for the provision of a P2P service description language and a P2P invocation mechanism which leverages interoperability between heterogeneous P2P services and between P2P services and Web services.

Journal ArticleDOI
TL;DR: Extensive performance measurements, including ones on a mobile phone on the effect of an alternate format when using XML-based security, indicate that, in the wireless world, reducing message sizes is the most pressing concern, and that processing efficiency gains are a much smaller concern.
Abstract: In the wireless world, there has recently been much interest in alternate serialization formats for XML data, mostly driven by the weak capabilities of both devices and networks. However, it is difficult to make an alternate serialization format compatible with XML security features such as encryption and signing. We consider here ways to integrate an alternate format with security, and present a solution that we see as a viable alternative. In addition to this, we present extensive performance measurements, including ones on a mobile phone on the effect of an alternate format when using XML-based security. These measurements indicate that, in the wireless world, reducing message sizes is the most pressing concern, and that processing efficiency gains of an alternate format are a much smaller concern. We also make specific recommendations on security usage based on our measurements.

Journal ArticleDOI
TL;DR: This article discusses the most relevant state-of-the-art technologies for compressing XML data and presents a novel solution for compacting SOAP messages, which leads to extremely compact data representations and is also usable in environments with very limited CPU and memory resources.
Abstract: Compared to other middleware approaches like CORBA or Java RMI, the protocol overhead of SOAP is very high. This fact is disadvantageous not only for several performance-critical applications, but especially in environments with limited network bandwidth or resource-constrained computing devices. Although recent research work has concentrated on more compact, binary representations of XML data, only very few approaches account for the special characteristics of SOAP communication. In this article we discuss the most relevant state-of-the-art technologies for compressing XML data. Furthermore, we present a novel solution for compacting SOAP messages. In order to achieve significantly better compression rates than current approaches, our compressor utilizes structure information from an XML Schema or WSDL document. With this additional knowledge of the “grammar” of the exchanged messages, our compressor generates a single custom pushdown automaton, which can be used as a highly efficient validating parser as well as a highly efficient compressor. The main idea is to tag the transitions of the automaton with short binary identifiers that are then used to encode the path through the automaton during parsing. Our approach leads to extremely compact data representations and is also usable in environments with very limited CPU and memory resources.
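The encoding idea can be shown with a finite-state toy (the paper derives a full pushdown automaton from the XML Schema or WSDL; the schema and bit assignments below are invented): each legal transition carries a short binary identifier, and a message is encoded as the identifiers of the path it takes through the automaton.

```python
# Toy schema automaton: state -> {allowed element: (bit id, next state)}.
AUTOMATON = {
    "start": {"Envelope": ("0", "env")},
    "env":   {"Header": ("0", "hdr"), "Body": ("1", "body")},
    "hdr":   {"Body": ("0", "body")},
    "body":  {"GetQuote": ("0", "done"), "PlaceOrder": ("1", "done")},
}

def compress(elements):
    """Encode an element sequence as the bit ids of its transitions.
    Walking the automaton simultaneously validates the message: an
    element not allowed in the current state raises a KeyError."""
    state, bits = "start", []
    for name in elements:
        bit, state = AUTOMATON[state][name]
        bits.append(bit)
    return "".join(bits)
```

Because most states admit only a handful of elements, each tag costs only a bit or two instead of its full name, which is where the compression comes from.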

Journal ArticleDOI
TL;DR: This article describes a novel reservation protocol that can be used to coordinate the tasks of a business activity with an explicit reservation phase and an explicit confirmation and cancellation phase, and shows how it maps to the Web services coordination specification.
Abstract: Web services can be used to automate business activities that span multiple enterprises over the Internet. Such business activities require a coordination protocol to reach consistent results among the participants in the business activity. In the current state of the art, either classical distributed transactions or extended transactions with compensating transactions are used. However, classical distributed transactions lock data in the databases of different enterprises for unacceptable durations or involve repeated retries, and compensating transactions can lead to inconsistencies in the databases of the different enterprises. In this article, we describe a novel reservation protocol that can be used to coordinate the tasks of a business activity. Instead of resorting to compensating transactions, the reservation protocol employs an explicit reservation phase and an explicit confirmation and cancellation phase. We show how our reservation protocol maps to the Web services coordination specification, and describe our implementation of the reservation protocol. We compare the performance of the reservation protocol with that of the two-phase commit protocol and optimistic two-phase commit protocol. We also compare the probability of inconsistency for the reservation protocol with that for compensating transactions.
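The two explicit phases can be sketched as follows (a toy in-process model; the real protocol runs over WS-Coordination messages, and all names here are hypothetical):

```python
class Participant:
    """One enterprise taking part in the business activity."""
    def __init__(self, name, can_reserve=True):
        self.name, self.can_reserve = name, can_reserve
        self.state = "initial"

    def reserve(self):
        # Tentatively hold the resource; unlike 2PC, no database lock
        # is held across the whole activity.
        self.state = "reserved" if self.can_reserve else "failed"
        return self.can_reserve

    def confirm(self):
        self.state = "confirmed"

    def cancel(self):
        self.state = "cancelled"

def run_activity(participants):
    # Phase 1: explicit reservation at every participant.
    if all(p.reserve() for p in participants):
        # Phase 2: explicit confirmation.
        for p in participants:
            p.confirm()
        return "completed"
    # Some reservation failed: cancel the ones already made.
    for p in participants:
        if p.state == "reserved":
            p.cancel()
    return "aborted"
```

Because cancellation acts on reservations rather than on already-committed work, the inconsistency window of compensating transactions does not arise.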

Journal ArticleDOI
Yanzhen Zou, Lu Zhang, Yan Li, Bing Xie, Hong Mei
TL;DR: A new approach to improve this kind of category-based Web Services retrieval process that can refine the coarse matching results step by step and can increase the retrieval precision to a certain extent after one or two rounds of refinement.
Abstract: Web services retrieval is a critical step for reusing existing services in the SOA paradigm. In the UDDI registry, traditional category-based approaches have been used to locate candidate services. However, these approaches usually achieve relatively low precision because some candidate Web services in the result set cannot provide actually suitable operations for users. In this article, we present a new approach to improve this kind of category-based Web services retrieval process that can refine the coarse matching results step by step. The refinement is based on the idea that the operation specification is very important to service reuse. Therefore, a Web service is investigated via a multiple-instance view in our approach: a service is labeled as positive if and only if at least one operation provided by this service is usable to the user; otherwise, it is labeled as negative. Experimental results demonstrate that our approach can increase the retrieval precision to a certain extent after one or two rounds of refinement.
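The labeling rule behind the multiple-instance view is simple to state in code (a hypothetical helper, just restating the rule from the abstract):

```python
def label_service(operations, usable):
    """Multiple-instance view: a service is positive iff at least one
    of its operations is usable to the user."""
    return "positive" if any(op in usable for op in operations) else "negative"
```

The refinement loop would re-score the coarse category-based result set with labels like these after each round of user feedback.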

Journal ArticleDOI
TL;DR: E-government usage was positively related to managerial effectiveness, having a champion of e-government, and perceived effectiveness of citizen access to online information.
Abstract: This article examines the perceived effectiveness of e-government by Information Technology (IT) directors in local governments in the United States. Most of the existing empirical research has examined the level of adoption of e-government; it does not focus on what is the overall effectiveness of e-government for city governments as this study does. This is accomplished through a survey of IT directors exploring their perceptions of e-government to determine whether this is related to the overall usage of e-government in cities. Websites were the most effective service channel for getting information; the telephone was the most effective service channel for solving a problem; while in person at a government office was most effective service channel for citizens’ to access city services. E-government usage was positively related to managerial effectiveness, having a champion of e-government, and perceived effectiveness of citizen access to online information.

Journal ArticleDOI
TL;DR: Evaluation results show that the DSCWeaver approach can effectively reduce the development effort of process programmers while providing performance competitive to unwoven BPEL code.
Abstract: Correct synchronization among activities is critical in a business process. Current process languages such as BPEL specify the control flow of processes procedurally, which can lead to inflexible and tangled code for managing a crosscutting aspect—synchronization constraints that define permissible sequences of execution for activities. In this article, we present DSCWeaver, a tool that enables a synchronization-aspect extension to procedural languages. It uses DSCL (directed-acyclic-graph synchronization constraint language) to achieve three desirable properties for synchronization modeling: fine granularity, declarative syntax, and validation support. DSCWeaver then automatically generates executable code for synchronization. We demonstrate the advantages of our approach in a service deployment process written in BPEL and evaluate its performance using two metrics: lines of code (LoC) and places to visit (PtV). Evaluation results show that our approach can effectively reduce the development effort of process programmers while providing performance competitive to unwoven BPEL code.
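The kind of permissible-sequence constraint a DAG-based language like DSCL captures can be checked against an execution trace in a few lines (a sketch of the idea, not DSCL's actual syntax):

```python
def respects(trace, edges):
    """True iff the execution trace obeys every DAG edge (a, b),
    meaning activity a must occur before activity b. Constraints on
    activities absent from the trace are vacuously satisfied."""
    pos = {activity: i for i, activity in enumerate(trace)}
    return all(pos[a] < pos[b] for a, b in edges if a in pos and b in pos)
```

Declaring the edges separately from the BPEL control flow is what keeps the synchronization aspect from tangling with the business logic; a weaver then turns them into executable coordination code.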