
Showing papers in "IEEE Transactions on Services Computing in 2013"


Journal Article•DOI•
TL;DR: This paper proposes a collaborative quality-of-service (QoS) prediction approach for web services that takes advantage of service users' past usage experiences, and achieves higher prediction accuracy than other approaches.
Abstract: With the increasing presence and adoption of web services on the World Wide Web, the demand for efficient web service quality evaluation approaches is stronger than ever. To avoid expensive and time-consuming web service invocations, this paper proposes a collaborative quality-of-service (QoS) prediction approach for web services that takes advantage of the past usage experiences of service users. We first apply the concept of user collaboration to web service QoS information sharing. Then, based on the collected QoS data, a neighborhood-integrated approach is designed for personalized web service QoS value prediction. To validate our approach, large-scale real-world experiments are conducted, comprising 1,974,675 invocations of 5,825 real-world web services by 339 service users. The comprehensive experimental studies show that our proposed approach achieves higher prediction accuracy than other approaches. The public release of our web service QoS data set provides valuable real-world data for future research.

408 citations
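The abstract does not spell out the neighborhood-integrated model, but the underlying idea of user-collaborative QoS prediction can be sketched as a plain user-based collaborative filter: find users whose observed QoS histories correlate with the target user's, then average their observed values for the target service. All names below (`pearson`, `predict_qos`, the toy matrix) are illustrative assumptions, not the paper's actual formulation.

```python
import math

def pearson(u, v, matrix):
    """Pearson correlation between two users over co-invoked services."""
    common = [s for s in matrix[u] if s in matrix[v]]
    if len(common) < 2:
        return 0.0
    mu_u = sum(matrix[u][s] for s in common) / len(common)
    mu_v = sum(matrix[v][s] for s in common) / len(common)
    num = sum((matrix[u][s] - mu_u) * (matrix[v][s] - mu_v) for s in common)
    den = math.sqrt(sum((matrix[u][s] - mu_u) ** 2 for s in common)) * \
          math.sqrt(sum((matrix[v][s] - mu_v) ** 2 for s in common))
    return num / den if den else 0.0

def predict_qos(user, service, matrix, k=2):
    """Predict a missing QoS value for (user, service) from the k most
    similar users who have actually invoked the service."""
    neighbors = [(pearson(user, v, matrix), v)
                 for v in matrix if v != user and service in matrix[v]]
    top = sorted(neighbors, reverse=True)[:k]
    num = sum(sim * matrix[v][service] for sim, v in top if sim > 0)
    den = sum(sim for sim, v in top if sim > 0)
    if not den:
        # No positively correlated neighbor: fall back to the user's mean.
        return sum(matrix[user].values()) / len(matrix[user])
    return num / den
```

Real systems (including the paper's) combine such neighborhood estimates with global model-based terms; this sketch shows only the neighborhood half.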


Journal Article•DOI•
TL;DR: Experimental results not only validate the effectiveness of the approaches, but also show that the audit system verifies integrity with lower computation overhead while requiring less extra storage for audit metadata.
Abstract: In this paper, we propose a dynamic audit service for verifying the integrity of untrusted, outsourced storage. Our audit service is built on the techniques of fragment structure, random sampling, and index-hash tables, supporting provable updates to outsourced data and timely anomaly detection. In addition, we propose a method based on probabilistic query and periodic verification for improving the performance of audit services. Our experimental results not only validate the effectiveness of our approaches, but also show that our audit system verifies integrity with lower computation overhead while requiring less extra storage for audit metadata.

285 citations


Journal Article•DOI•
Huaqun Wang1•
TL;DR: This paper designs an efficient PPDP protocol based on the bilinear pairing technique that is provably secure and efficient in public clouds when the client cannot perform remote data possession checking itself.
Abstract: Recently, cloud computing has expanded rapidly as an alternative to conventional computing because it can provide a flexible, dynamic, and resilient infrastructure for both academic and business environments. In a public cloud environment, the client moves its data to a public cloud server (PCS) and cannot control its remote data. Thus, information security is an important problem in public cloud storage, covering data confidentiality, integrity, and availability. In some cases, the client is unable to check its remote data possession itself, for example, when the client is in prison, on an ocean-going vessel, or on a battlefield. It then has to delegate the remote data possession checking task to some proxy. In this paper, we study proxy provable data possession (PPDP). In public clouds, PPDP is of crucial importance when the client cannot perform remote data possession checking. We study the PPDP system model, the security model, and the design method. Based on the bilinear pairing technique, we design an efficient PPDP protocol. Through security analysis and performance analysis, we show that our protocol is provably secure and efficient.

238 citations


Journal Article•DOI•
TL;DR: This work proposes a novel collaborative filtering algorithm designed for large-scale web service recommendation that exploits the characteristics of QoS and achieves considerable improvement in recommendation accuracy.
Abstract: With the proliferation of web services, effective QoS-based approaches to service recommendation are becoming more and more important. Although service recommendation has been studied in the recent literature, the performance of existing approaches is not satisfactory, since (1) previous approaches fail to consider how QoS varies with users' locations; and (2) previous recommender systems are all black boxes that provide limited insight into the performance of the service candidates. In this paper, we propose a novel collaborative filtering algorithm designed for large-scale web service recommendation. Different from previous work, our approach exploits the characteristics of QoS and achieves considerable improvement in recommendation accuracy. To help service users better understand the rationale of the recommendation and remove some of the mystery, we use a recommendation visualization technique to show how a recommendation is grouped with other choices. Comprehensive experiments are conducted using more than 1.5 million QoS records of real-world web service invocations. The experimental results show the efficiency and effectiveness of our approach.

190 citations


Journal Article•DOI•
TL;DR: This paper models the service provisioning problem as a generalized Nash game, shows the existence of equilibria for such a game, and proposes two solution methods based on best-reply dynamics that can improve the efficiency of the cloud system, evaluated in terms of Price of Anarchy.
Abstract: In recent years, the evolution and widespread adoption of virtualization, service-oriented architectures, autonomic computing, and utility computing have converged, allowing a new paradigm to emerge: cloud computing. Clouds allow the on-demand delivery of software, hardware, and data as services. Currently, the range of cloud offerings widens day by day, because all the major IT companies and service providers, such as Microsoft, Google, Amazon, HP, IBM, and VMWare, have started providing solutions involving this new technological paradigm. As cloud-based services become more numerous and dynamic, the development of efficient service provisioning policies becomes increasingly challenging. In this paper, we take the perspective of Software as a Service (SaaS) providers that host their applications at an Infrastructure as a Service (IaaS) provider. Each SaaS needs to comply with quality-of-service requirements, specified in service-level agreement (SLA) contracts with the end users, which determine the revenues and penalties on the basis of the achieved performance level. SaaS providers want to maximize their revenues from SLAs while minimizing the cost of the resources supplied by the IaaS provider. Moreover, SaaS providers compete and bid for the use of infrastructural resources. On the other hand, the IaaS provider wants to maximize the revenues obtained by providing virtualized resources. In this paper, we model the service provisioning problem as a generalized Nash game and show the existence of equilibria for such a game. Moreover, we propose two solution methods based on best-reply dynamics, and we prove their convergence in a finite number of iterations to a generalized Nash equilibrium. In particular, we develop an efficient distributed algorithm for the runtime allocation of IaaS resources among competing SaaS providers. We demonstrate the effectiveness of our approach by simulation and by performing tests on a real prototype environment deployed on Amazon EC2.
Results show that, compared to other state-of-the-art solutions, our model can improve the efficiency of the cloud system evaluated in terms of Price of Anarchy by 50-70 percent.

165 citations
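Best-reply dynamics can be illustrated with a toy version of the game: each SaaS provider in turn picks the VM count that maximizes its own payoff given the others' current choices, and the process stops when nobody wants to deviate. The payoff structure below (capped revenue, congestion-priced resources) is an assumption for illustration, not the paper's model.

```python
def best_reply_dynamics(n_players, max_vms, revenue, price, max_iters=100):
    """Toy best-reply dynamics. Each provider i chooses an integer VM
    count in [0, max_vms] maximizing revenue(i, x) - x * price(total),
    where price depends on aggregate demand. A full sweep with no change
    is a fixed point, i.e., a Nash equilibrium of this toy game."""
    alloc = [0] * n_players
    for _ in range(max_iters):
        changed = False
        for i in range(n_players):
            others = sum(alloc) - alloc[i]

            def payoff(x):
                return revenue(i, x) - x * price(others + x)

            best = max(range(max_vms + 1), key=payoff)
            if best != alloc[i]:
                alloc[i] = best
                changed = True
        if not changed:
            break  # converged
    return alloc
```

With revenue that saturates at 3 VMs and a linearly increasing congestion price, two symmetric providers settle at 3 VMs each; the paper's contribution is proving such convergence and bounding the resulting Price of Anarchy, which this sketch does not attempt.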


Journal Article•DOI•
TL;DR: A novel Multiple Foreseen Path-Based Heuristic algorithm MFPB-HOSTP is proposed for the Optimal Social Trust Path selection, where multiple backward local social trust paths (BLPs) are identified and concatenated with one Forward Local Path (FLP), forming multiple foreseen paths.
Abstract: Online social networks have provided the infrastructure for a number of emerging applications in recent years, e.g., the recommendation of service providers or of files as services. In these applications, trust is one of the most important factors in a service consumer's decision making, requiring the evaluation of the trustworthiness of a service provider along the social trust paths from the consumer to the provider. However, there are usually many social trust paths between two participants who are unknown to one another. In addition, some social information, such as the social relationships between participants and the recommendation roles of participants, has a significant influence on trust evaluation but has been neglected in existing studies of online social networks. Furthermore, it is a challenging problem to search for the optimal social trust path that can yield the most trustworthy evaluation result and satisfy a service consumer's trust evaluation criteria based on social information. In this paper, we first present a novel complex social network structure incorporating trust, social relationships, and recommendation roles, and introduce a new concept, Quality of Trust (QoT), containing the above social information as attributes. We then model the optimal social trust path selection problem with multiple end-to-end QoT constraints as a Multiconstrained Optimal Path (MCOP) selection problem, which is shown to be NP-complete. To deal with this challenging problem, we propose a novel Multiple Foreseen Path-Based Heuristic algorithm, MFPB-HOSTP, for optimal social trust path selection, in which multiple Backward Local Paths (BLPs) are identified and concatenated with one Forward Local Path (FLP), forming multiple foreseen paths.
Our strategy not only helps avoid failed feasibility estimation in path selection in certain cases, but also increases the chance of delivering a high-quality, near-optimal solution. The results of our experiments, conducted on a real data set of online social networks, illustrate that the MFPB-HOSTP algorithm can efficiently identify social trust paths with better quality than our previously proposed H_OSTP algorithm, which outperforms prior algorithms for the MCOP selection problem.

114 citations


Journal Article•DOI•
TL;DR: This paper formalizes the problem of finding the optimal set of adaptations, which minimizes the total costs arising from SLA violations and from the adaptations applied to prevent them, and presents algorithms to solve this complex optimization problem.
Abstract: For providers of composite services, preventing SLA violations is crucial. Previous work has established runtime adaptation of compositions as a promising tool for achieving SLA conformance. However, to get a realistic and complete view of the decision process of service providers, the costs of adaptation need to be taken into account. In this paper, we formalize the problem of finding the optimal set of adaptations, which minimizes the total costs arising from SLA violations and from the adaptations applied to prevent them. We present algorithms to solve this complex optimization problem and detail an end-to-end system based on our earlier work on the PREvent (prediction and prevention based on event monitoring) framework, which demonstrates the usefulness of our model. We discuss experimental results that show how applying our approach reduces costs for the service provider, and explain the circumstances in which different algorithms lead to more or less satisfactory results.

110 citations
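A minimal sketch of the optimization the abstract formalizes: choose the subset of candidate adaptations minimizing adaptation cost plus the penalties of the SLA violations left unprevented. The brute-force subset search and the simple cost model are illustrative assumptions; the paper's own algorithms are more sophisticated than this exhaustive baseline.

```python
from itertools import combinations

def optimal_adaptations(penalties, adaptations):
    """Exhaustively pick the subset of adaptations that minimizes
    total adaptation cost + penalties of unprevented violations.

    penalties   : {violation: penalty_cost} for predicted SLA violations
    adaptations : {name: (cost, set_of_violations_prevented)}
    Exponential in len(adaptations); fine only as an illustration.
    """
    names = list(adaptations)
    # Baseline: apply nothing and pay every predicted penalty.
    best_cost, best_set = sum(penalties.values()), ()
    for r in range(1, len(names) + 1):
        for subset in combinations(names, r):
            prevented = set().union(*(adaptations[n][1] for n in subset))
            cost = sum(adaptations[n][0] for n in subset) + \
                   sum(p for v, p in penalties.items() if v not in prevented)
            if cost < best_cost:
                best_cost, best_set = cost, subset
    return best_set, best_cost
```

Note how a single broader adaptation can beat cheaper narrow ones once the residual penalties are counted, which is exactly the trade-off the formalization captures.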


Journal Article•DOI•
TL;DR: A new similarity measure for web service similarity computation is presented, and a novel collaborative filtering approach, called normal recovery collaborative filtering, is proposed for personalized web service recommendation, achieving better accuracy than other competing approaches.
Abstract: With the increasing number of web services on the Internet, personalized web service selection and recommendation are becoming more and more important. In this paper, we present a new similarity measure for web service similarity computation and propose a novel collaborative filtering approach, called normal recovery collaborative filtering, for personalized web service recommendation. To evaluate the web service recommendation performance of our approach, we conduct large-scale real-world experiments involving 5,825 real-world web services in 73 countries and 339 service users in 30 countries. To the best of our knowledge, this is the largest-scale experiment in the field of service computing, improving over the previous record by a factor of 100. The experimental results show that our approach achieves better accuracy than other competing approaches.

106 citations


Journal Article•DOI•
TL;DR: It is argued that identifying the factors that drive the total volume of exchanged memory pages is important for an in-depth understanding of interference costs in Dom0 and the VMM, which is critical for effective performance management and tuning in virtualized clouds.
Abstract: User-perceived performance continues to be the most important QoS indicator in cloud-based data centers today. Effective allocation of virtual machines (VMs) to handle both CPU-intensive and I/O-intensive workloads is a crucial performance management capability in virtualized clouds. Although a fair amount of research has been dedicated to measuring and scheduling jobs among VMs, there is still a lack of in-depth understanding of the performance factors that impact the efficiency and effectiveness of resource multiplexing and scheduling among VMs. In this paper, we present experimental research on performance interference in the parallel processing of CPU-intensive and network-intensive workloads on the Xen virtual machine monitor (VMM). Based on our study, we draw five key conclusions that are critical for effective performance management and tuning in virtualized clouds. First, colocating network-intensive workloads in isolated VMs incurs high overhead from switches and events in Dom0 and the VMM. Second, colocating CPU-intensive workloads in isolated VMs incurs high CPU contention due to fast I/O processing in the I/O channel. Third, running CPU-intensive and network-intensive workloads in conjunction incurs the least resource contention, delivering higher aggregate performance. Fourth, the performance of a network-intensive workload is insensitive to CPU assignment among VMs, whereas adaptive CPU assignment among VMs is critical for a CPU-intensive workload: the more CPUs are pinned to Dom0, the worse the CPU-intensive workload performs. Last, due to fast I/O processing in the I/O channel, the limit on the grant table is a potential bottleneck in Xen. We argue that identifying the factors that drive the total volume of exchanged memory pages is important for an in-depth understanding of interference costs in Dom0 and the VMM.

101 citations


Journal Article•DOI•
TL;DR: It is shown how QoS-based service selection can be conducted based on the proposed QoS calculation, and four types of basic composition patterns for composite services are discussed: sequential, parallel, loop, and conditional patterns.
Abstract: Quality of service (QoS) is a major concern in the design and management of a composite service. In this paper, a systematic approach is proposed to calculate QoS for composite services with complex structures, taking into consideration the probability and conditions of each execution path. Four types of basic composition patterns for composite services are discussed: sequential, parallel, loop, and conditional patterns. In particular, QoS solutions are provided for unstructured conditional and loop patterns. We also show how QoS-based service selection can be conducted based on the proposed QoS calculation. Experiments have been conducted to show the effectiveness of the proposed method.

78 citations
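For response time, the four patterns reduce in the standard way: sequential adds, parallel takes the slowest branch, a loop multiplies by the expected iteration count, and a conditional weights its branches by probability. A recursive sketch over a composition tree, with node encodings that are my own rather than the paper's notation:

```python
def aggregate_rt(node):
    """Aggregate expected response time for a composite service tree.
    Assumed node forms:
      ('task', t)                      - atomic service taking time t
      ('seq', [children])              - sequential pattern: sum
      ('par', [children])              - parallel pattern: max (join waits)
      ('loop', k, child)               - loop executed k times on average
      ('cond', [(prob, child), ...])   - conditional: probability-weighted
    """
    kind = node[0]
    if kind == 'task':
        return node[1]
    if kind == 'seq':
        return sum(aggregate_rt(c) for c in node[1])
    if kind == 'par':
        return max(aggregate_rt(c) for c in node[1])
    if kind == 'loop':
        return node[1] * aggregate_rt(node[2])
    if kind == 'cond':
        return sum(p * aggregate_rt(c) for p, c in node[1])
    raise ValueError(f"unknown pattern: {kind}")
```

Other QoS attributes swap the operators (e.g., availability multiplies along a sequence); the paper's contribution is extending such rules to unstructured conditional and loop patterns, which this structured sketch does not cover.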


Journal Article•DOI•
TL;DR: It is argued that existing techniques that turn servers on or off with the help of virtual machine (VM) migration are not enough, and that finding an optimized dynamic resource allocation method to solve the problem of on-demand resource provisioning for VMs is the key to improving the efficiency of data centers.
Abstract: In a shared virtual computing environment, dynamic load changes as well as the different quality requirements of applications over their lifetimes give rise to dynamic and varied capacity demands, which result in lower resource utilization and application quality under existing static resource allocation. Furthermore, the total required capacity of all the hosted applications in current enterprise data centers, for example at Google, may surpass the capacity of the platform. In this paper, we argue that existing techniques that turn servers on or off with the help of virtual machine (VM) migration are not enough. Instead, finding an optimized dynamic resource allocation method to solve the problem of on-demand resource provisioning for VMs is the key to improving the efficiency of data centers. However, existing dynamic resource allocation methods focus only on either local optimization within a server or central global optimization, limiting the efficiency of data centers. We propose a two-tiered on-demand resource allocation mechanism consisting of local and global resource allocation with feedback, to provide on-demand capacity to concurrent applications. We model on-demand resource allocation using optimization theory. Based on the proposed dynamic resource allocation mechanism and model, we propose a set of on-demand resource allocation algorithms. When resource competition arises, our algorithms preferentially ensure the performance of critical applications named by the data center manager, according to the time-varying capacity demands and the quality of applications. Using Rainbow, a Xen-based prototype we implemented, we evaluate the VM-based shared platform as well as the two-tiered on-demand resource allocation mechanism and algorithms.
The experimental results show that Rainbow without dynamic resource allocation (Rainbow-NDA) improves application performance by 26 to 324 percent and achieves 26 percent higher average CPU utilization than a traditional service computing framework in which applications use exclusive servers. The two-tiered on-demand resource allocation further improves performance by 9 to 16 percent for the critical applications, 75 percent of the maximum performance improvement, while introducing up to 5 percent performance degradation to others, with 1 to 5 percent improvements in resource utilization in comparison with Rainbow-NDA.

Journal Article•DOI•
TL;DR: In this paper, the authors focus on the opportunities and challenges for service mining, i.e., applying process mining techniques to services, and highlight challenges specific to service-oriented systems.
Abstract: Web services are an emerging technology for implementing and integrating business processes within and across enterprises. Service orientation can be used to decompose complex systems into loosely coupled software components that may run remotely. However, the distributed nature of services complicates the design and analysis of service-oriented systems that support end-to-end business processes. Fortunately, services leave trails in so-called event logs, and recent breakthroughs in process mining research make it possible to discover, analyze, and improve business processes based on such logs. Recently, the task force on process mining released the process mining manifesto. This manifesto is supported by 53 organizations, and 77 process mining experts contributed to it. The active participation of end-users, tool vendors, consultants, analysts, and researchers illustrates the growing significance of process mining as a bridge between data mining and business process modeling. In this paper, we focus on the opportunities and challenges for service mining, i.e., applying process mining techniques to services. We discuss the guiding principles and challenges listed in the process mining manifesto and also highlight challenges specific to service-oriented systems.

Journal Article•DOI•
TL;DR: An extensive performance study of network I/O workloads in a virtualized cloud environment shows that the current implementation of the virtual machine monitor (VMM) does not provide sufficient performance isolation to guarantee the effectiveness of resource sharing across multiple virtual machine instances (VMs) running on a single physical host machine.
Abstract: Server consolidation and application consolidation through virtualization are key performance optimizations in the cloud-based service delivery industry. In this paper, we argue that it is important for both cloud consumers and cloud providers to understand the various factors that may have a significant impact on the performance of applications running in a virtualized cloud. This paper presents an extensive performance study of network I/O workloads in a virtualized cloud environment. We first show that the current implementation of the virtual machine monitor (VMM) does not provide sufficient performance isolation to guarantee the effectiveness of resource sharing across multiple virtual machine instances (VMs) running on a single physical host machine, especially when applications running on neighboring VMs compete for computing and communication resources. Then we study a set of representative workloads in cloud-based data centers, which compete for either CPU or network I/O resources, and present a detailed analysis of the different factors that can impact throughput performance and resource sharing effectiveness. For example, we analyze the cost and benefit of running idle VM instances on a physical host where some applications are hosted concurrently. We also present an in-depth discussion of the performance impact of colocating applications that compete for either CPU or network I/O resources. Finally, we analyze the impact of different CPU resource scheduling strategies and different workload rates on the performance of applications running on different VMs hosted by the same physical machine.

Journal Article•DOI•
TL;DR: A secure and nonobstructive billing system called THEMIS is proposed as a remedy for existing billing systems, which are limited in terms of security capabilities or computational overhead; it uses a novel concept of a cloud notary authority for the supervision of billing.
Abstract: With the widespread adoption of cloud computing, the ability to record and account for the usage of cloud resources in a credible and verifiable way has become critical for cloud service providers and users alike. The success of such a billing system depends on several factors: The billing transactions must have integrity and nonrepudiation capabilities; the billing transactions must be nonobstructive and have a minimal computation cost; and the service level agreement (SLA) monitoring should be provided in a trusted manner. Existing billing systems are limited in terms of security capabilities or computational overhead. In this paper, we propose a secure and nonobstructive billing system called THEMIS as a remedy for these limitations. The system uses a novel concept of a cloud notary authority for the supervision of billing. The cloud notary authority generates mutually verifiable binding information that can be used to resolve future disputes between a user and a cloud service provider in a computationally efficient way. Furthermore, to provide a forgery-resistant SLA monitoring mechanism, we devised an SLA monitoring module enhanced with a trusted platform module (TPM), called S-Mon. The performance evaluation confirms that the overall latency of THEMIS billing transactions (avg. 4.89 ms) is much shorter than the latency of public key infrastructure (PKI)-based billing transactions (avg. 82.51 ms), though THEMIS guarantees the same security features as a PKI. This work has been undertaken on a real cloud computing service called iCubeCloud.

Journal Article•DOI•
TL;DR: A novel compositional decision making process, CDP, which explores optimal solutions of individual component services and uses the knowledge to derive optimal QoS-driven composition solutions and can significantly reduce the search space and achieve great performance gains.
Abstract: Service-oriented architecture provides a framework for achieving rapid system composition and deployment. To satisfy different system QoS requirements, it is possible to select an appropriate set of concrete services and compose them to achieve the QoS goals. In addition, some of the services may be reconfigurable and provide various QoS tradeoffs. To make use of these reconfigurable services, the composition process should consider not only service selection, but also configuration parameter settings. However, existing QoS-driven service composition research does not consider reconfigurable services. Moreover, the decision space may be enormous when reconfigurable services are considered. In this paper, we deal with the issues of reconfigurable service modeling and efficient service composition decision making. We introduce a novel compositional decision making process, CDP, which explores optimal solutions of individual component services and uses the knowledge to derive optimal QoS-driven composition solutions. Experimental studies show that the CDP approach can significantly reduce the search space and achieve great performance gains. We also develop a case study system to validate the proposed approach and the results confirm the feasibility and effectiveness of reconfigurable services.
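The abstract's key idea — explore optimal solutions of individual component services first, then compose — can be roughed out as: prune each reconfigurable service to its Pareto-optimal (cost, quality) configurations, then search only combinations of those. The additive quality aggregate and the brute-force combination step below are illustrative assumptions, not the CDP process itself.

```python
from itertools import product

def pareto(configs):
    """Keep only non-dominated (cost, quality) configurations of one
    reconfigurable service: lower cost and higher quality are better."""
    keep = []
    for c, q in configs:
        dominated = any(c2 <= c and q2 >= q and (c2, q2) != (c, q)
                        for c2, q2 in configs)
        if not dominated:
            keep.append((c, q))
    return keep

def compose(services, min_quality):
    """Pick one configuration per service so the total quality (summed,
    as an illustrative aggregate) meets min_quality at minimum total
    cost. Pruning to Pareto fronts first shrinks the search space,
    mirroring the 'component-optimal solutions first' idea."""
    fronts = [pareto(s) for s in services]
    best = None
    for combo in product(*fronts):
        cost = sum(c for c, _ in combo)
        quality = sum(q for _, q in combo)
        if quality >= min_quality and (best is None or cost < best[0]):
            best = (cost, combo)
    return best  # (total_cost, chosen_configs) or None if infeasible
```

Even in this toy, dominated configurations never enter the combination loop, which is where the abstract's claimed search-space reduction comes from.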

Journal Article•DOI•
TL;DR: This work presents a formal model, which provides the grounding semantics to support the automation of change management, together with a set of change operators that allow a change to be specified in a precise and formal manner.
Abstract: We propose a system, called EVolution of Long-term Composed Services (Ev-LCS), to address change management issues in long-term composed services (LCSs). An LCS is a dynamic collaboration between autonomous web services that collectively provide a value-added service. It has a long-term commitment to its users. We first present a formal model, which provides the grounding semantics to support the automation of change management. We then present a set of change operators that allow a change to be specified in a precise and formal manner, and propose a change enactment strategy that actually implements the changes. We develop a prototype system for the proposed Ev-LCS to demonstrate its effectiveness. We also conduct an experimental study to assess the performance of the change management approach.

Journal Article•DOI•
TL;DR: A novel computational framework called ACP (Artificial societies, Computational experiments, and Parallel systems) is proposed, aimed at creating an effective computational theory and developing a systematic methodological framework for socioeconomic studies.
Abstract: This paper addresses issues related to the development of a computational theory and corresponding methods for studying complex socioeconomic systems. We propose a novel computational framework called ACP (Artificial societies, Computational experiments, and Parallel systems), aimed at creating an effective computational theory and developing a systematic methodological framework for socioeconomic studies. The basic idea behind the ACP approach is: 1) to model complex socioeconomic systems as artificial societies using agent techniques in a "bottom-up" fashion; 2) to utilize innovative computing technologies and turn computers into experimental laboratories for investigating socioeconomic problems; and 3) to achieve effective management and control of the focal complex socioeconomic system through parallel execution of artificial and actual socioeconomic systems. An ACP-based experimental platform called MacroEconSim is discussed, which can be used for modeling, analyzing, and experimenting on macroeconomic systems. A case study on economic inflation is also presented to illustrate the key research areas and algorithms integrated in this platform.

Journal Article•DOI•
TL;DR: This paper proposes a graph-based formulation for modeling sensor services that maps to the operational model of sensor networks and is amenable to analysis, and formulates the process of sensor service composition as a cost-optimization problem, showing that it is NP-complete.
Abstract: Service modeling and service composition are software architecture paradigms that have been used extensively in web services, where there is an abundance of resources. They mainly capture the idea that advanced functionality can be realized by combining a set of primitive services provided by the system. Many efforts in the web services domain have focused on detecting the initial composition, which is then followed for the rest of the service operation. In sensor networks, however, communication among nodes is error-prone and unreliable, and sensor nodes have constrained resources. This dynamic environment requires continuous adaptation of the composition of a complex service. In this paper, we first propose a graph-based formulation for modeling sensor services that maps to the operational model of sensor networks and is amenable to analysis. Based on this model, we formulate the process of sensor service composition as a cost-optimization problem and show that it is NP-complete. Two heuristic methods are proposed to solve the composition problem: the top-down and the bottom-up approaches. We discuss centralized and distributed implementations of these methods. Finally, using ns-2 simulations, we evaluate the performance and overhead of our proposed methods.

Journal Article•DOI•
TL;DR: A peer-to-peer-based decentralized service discovery approach named Chord4S, which achieves higher data availability, provides efficient querying with reasonable overhead, and supports QoS-aware service discovery.
Abstract: Service-Oriented Computing (SOC) is emerging as a paradigm for developing distributed applications. A critical issue in utilizing SOC is to have a scalable, reliable, and robust service discovery mechanism. However, traditional service discovery methods using centralized registries can easily suffer from problems such as performance bottlenecks and vulnerability to failures in large-scale service networks, causing them to function abnormally. To address these problems, this paper proposes a peer-to-peer-based decentralized service discovery approach named Chord4S. Chord4S utilizes the data distribution and lookup capabilities of the popular Chord to distribute and discover services in a decentralized manner. Data availability is further improved by distributing published descriptions of functionally equivalent services to different successor nodes that are organized into virtual segments in the Chord4S circle. Based on this service publication approach, Chord4S supports QoS-aware service discovery. Chord4S also supports service discovery with wildcard(s). In addition, the Chord routing protocol is extended to support efficient discovery of multiple services with a single query. This enables late negotiation of Service Level Agreements (SLAs) between service consumers and multiple candidate service providers. The experimental evaluation shows that Chord4S achieves higher data availability and provides efficient querying with reasonable overhead.
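The placement idea — spreading descriptions of functionally equivalent services across different successor nodes — can be sketched on a toy Chord ring. The salting scheme below (hashing the function name together with the provider id) is an illustrative assumption, not Chord4S's exact key construction or its virtual-segment organization.

```python
import hashlib

RING_BITS = 16  # tiny identifier space, for illustration only

def chord_id(text):
    """Map a string onto the ring by hashing, as Chord does for keys."""
    return int(hashlib.sha1(text.encode()).hexdigest(), 16) % (2 ** RING_BITS)

def successor(key, node_ids):
    """Classic Chord rule: the first node clockwise from the key."""
    for nid in sorted(node_ids):
        if nid >= key:
            return nid
    return min(node_ids)  # wrap around the ring

def publish(function_name, provider, node_ids):
    """Salt the key with the provider id so descriptions of functionally
    equivalent services (same function, different providers) tend to land
    on different nodes, improving availability under node failure."""
    key = chord_id(function_name + '#' + provider)
    return successor(key, node_ids)
```

Without the salt, every description of the same function would hash to one node, making that node a single point of failure for the whole equivalence class; that is the availability problem the paper's successor-distribution scheme addresses.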

Journal Article•DOI•
TL;DR: A lightweight Byzantine fault tolerance (BFT) algorithm, which can be used to render the coordination of web services business activities (WS-BA) more trustworthy, is presented and incorporated into the open-source Kandula framework, which implements the WS-BA specification with thews-BA-I extension.
Abstract: We present a lightweight Byzantine fault tolerance (BFT) algorithm, which can be used to render the coordination of web services business activities (WS-BA) more trustworthy. The lightweight design of the BFT algorithm is the result of a comprehensive study of the threats to the WS-BA coordination services and a careful analysis of the state model of WS-BA. The lightweight BFT algorithm uses source ordering, rather than total ordering, of incoming requests to achieve Byzantine fault tolerant state-machine replication of the WS-BA coordination services. We have implemented the lightweight BFT algorithm and incorporated it into the open-source Kandula framework, which implements the WS-BA specification with the WS-BA-I extension. Performance evaluation results obtained from the prototype implementation confirm the efficiency and effectiveness of our lightweight BFT algorithm compared to traditional BFT techniques.

Journal Article•DOI•
TL;DR: This paper investigates a scalable solution that can evaluate, compare, and rank these process mining algorithms efficiently, and proposes a novel framework that can efficiently select the process mining algorithms that are most suitable for a given model set.
Abstract: While many process mining algorithms have been proposed recently, there does not exist a widely accepted benchmark to evaluate and compare these process mining algorithms. As a result, it can be difficult to choose a suitable process mining algorithm for a given enterprise or application domain. Some recent benchmark systems have been developed and proposed to address this issue. However, evaluating available process mining algorithms against a large set of business models (e.g., in a large enterprise) can be computationally expensive, tedious, and time-consuming. This paper investigates a scalable solution that can evaluate, compare, and rank these process mining algorithms efficiently, and hence proposes a novel framework that can efficiently select the process mining algorithms that are most suitable for a given model set. In particular, using our framework, only a portion of process models need empirical evaluation and others can be recommended directly via a regression model. As a further optimization, this paper also proposes a metric and technique to select high-quality reference models to derive an effective regression model. Experiments using artificial and real data sets show that our approach is practical and outperforms the traditional approach.
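The core idea, empirically evaluating only a few reference models and predicting the rest via regression, can be sketched in a few lines. The model features, quality values, and reference-model choice below are illustrative assumptions, not the paper's actual metric or selection technique:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical features of 20 process models (e.g., size, branching factor).
features = rng.uniform(1.0, 10.0, size=(20, 2))
true_w = np.array([0.3, -0.1])
quality = features @ true_w + 0.5   # hidden "empirical" mining quality

# Step 1: empirically evaluate only a small set of reference models...
ref = [0, 5, 10, 15]
X_ref = np.column_stack([features[ref], np.ones(len(ref))])
w, *_ = np.linalg.lstsq(X_ref, quality[ref], rcond=None)

# Step 2: ...and recommend the remaining models via the fitted regression,
# avoiding the expensive empirical evaluation for most of the model set.
X_all = np.column_stack([features, np.ones(len(features))])
predicted = X_all @ w
print(float(np.max(np.abs(predicted - quality))))
```

Here the synthetic quality is exactly linear in the features, so four reference evaluations recover the relationship; real mining-quality data would be noisy, which is why selecting high-quality reference models matters.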

Journal Article•DOI•
TL;DR: This paper proposes a service definition, a service classification, and service specification framework, all based on a founded theory, the Ψ-theory, which originates from the scientific fields of Language Philosophy and Systemic Ontology.
Abstract: In recent years, the Web Service Definition Language (WSDL) and Universal Description, Discovery and Integration (UDDI) standards arose as ad hoc standards for the definition of service interfaces and service registries. However, even together these standards do not provide enough basis for a service consumer to get a full understanding of the behavior of a service. In practice, this often leads to a serious mismatch between the provider's intent and the consumer's expectations concerning the functionality of the corresponding service. Though additional standards have been proposed, a holistic view of what aspects of a service need to be specified is still lacking. This paper proposes a service definition, a service classification, and service specification framework, all based on a well-founded theory, the Ψ-theory. The Ψ-theory originates from the scientific fields of Language Philosophy and Systemic Ontology. According to this theory, the operation of organizations is all about communication between and production by social actors. The service specification framework can be applied both for specifying human services, i.e., services executed by human beings, and IT services, i.e., services executed by IT systems.

Journal Article•DOI•
Yitao Ni1, Shan-Shan Hou1, Lu Zhang1, Jun Zhu1, Zhong Jie Li2, Qian Lan1, Hong Mei1, Jiasu Sun1 •
TL;DR: A novel methodology to generate effective message sequences for testing WS-BPEL programs is presented and the results show that the message sequences generated by using the method can effectively expose faults in the WS-BPEL programs.
Abstract: With the popularity of Web Services and Service-Oriented Architecture (SOA), quality assurance of SOA applications, such as testing, has become a research focus. Programs implemented in the Business Process Execution Language for Web Services (WS-BPEL), which can be used to compose partner Web Services into composite Web Services, are one popular kind of SOA application. The unique features of WS-BPEL programs bring new challenges into testing. A test case for testing a WS-BPEL program is a sequence of messages that can be received by the WS-BPEL program under test. Previous research has not studied the challenges of message-sequence generation induced by the unique features of WS-BPEL as a new language. In this paper, we present a novel methodology to generate effective message sequences for testing WS-BPEL programs. To capture the order relationship in a message sequence and the constraints on correlated messages imposed by WS-BPEL's routing mechanism, we model the WS-BPEL program under test as a message-sequence graph (MSG), and generate message sequences based on the MSG. We performed experiments with our method and two other techniques on six WS-BPEL programs. The results show that the message sequences generated by our method can effectively expose faults in WS-BPEL programs.
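At its simplest, deriving message sequences from a graph of ordering constraints amounts to enumerating paths. The sketch below uses a hypothetical message graph and plain root-to-leaf path enumeration to convey the flavor; the paper's MSG construction and generation strategy are more involved:

```python
# Hypothetical message-sequence graph: edges encode the order
# constraints that a WS-BPEL routing mechanism imposes on messages.
msg_graph = {
    "start":   ["order", "cancel"],
    "order":   ["payment"],
    "payment": ["confirm"],
    "cancel":  [],
    "confirm": [],
}

def message_sequences(graph, node="start", prefix=None):
    """Enumerate root-to-leaf paths as candidate test message sequences."""
    prefix = (prefix or []) + [node]
    if not graph[node]:                 # leaf: one complete sequence
        return [prefix]
    seqs = []
    for nxt in graph[node]:
        seqs.extend(message_sequences(graph, nxt, prefix))
    return seqs

for seq in message_sequences(msg_graph):
    print(" -> ".join(seq[1:]))         # drop the artificial "start" node
```

Each printed sequence is one test case: a message order the process under test should accept, so a fault surfaces when the process misroutes or rejects a legal sequence.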

Journal Article•DOI•
TL;DR: A novel three-phase composition protocol integrating information flow control is developed that uses historical information to efficiently evaluate and prune candidate compositions and perform local/remote policy evaluation only on top candidates, and introduces the novel concept of transformation factor to model the computation effect of intermediate services.
Abstract: Enforcing access control in composite services is essential in a distributed multidomain environment. Many advanced access control models have been developed to secure web services at execution time. However, they do not consider access control validation at composition time, resulting in a high execution-time failure rate of composite services due to access control violations. Performing composition-time access control validation is not straightforward. First, many candidate compositions need to be considered and validating them can be costly. Second, some service composers may not be trusted to access protected policies and validation has to be done remotely. Another major issue with existing models is that they do not consider information flow control in composite services, which may result in undesirable information leakage. To resolve all these problems, we develop a novel three-phase composition protocol integrating information flow control. To reduce the policy evaluation cost, we use historical information to efficiently evaluate and prune candidate compositions and perform local/remote policy evaluation only on top candidates. To achieve effective and efficient information flow control, we introduce the novel concept of a transformation factor to model the computation effect of intermediate services. Experimental studies show significant performance benefits of the proposed mechanism.

Journal Article•DOI•
TL;DR: This paper presents a framework for behavioral modeling of business processes, focusing on their transactional properties, based on the channel-based coordination language Reo, which is an expressive, compositional, and semantically precise design language admitting formal reasoning.
Abstract: Ensuring transactional behavior of business processes and web service compositions is an essential issue in the area of service-oriented computing. Transactions in this context may require long periods of time to complete and must be managed using nonblocking techniques. Data integrity in long-running transactions (LRTs) is preserved using compensations, that is, activities explicitly programmed to eliminate the effects of a process terminated by a user or that failed to complete due to another reason. In this paper, we present a framework for behavioral modeling of business processes, focusing on their transactional properties. Our solution is based on the channel-based coordination language Reo, which is an expressive, compositional, and semantically precise design language admitting formal reasoning. The operational semantics of Reo is given by constraint automata (CA). We illustrate how Reo can be used for modeling termination and compensation handling in a number of commonly used workflow patterns, including sequential and parallel compositions, nested transactions, discriminator choice, and concurrent flows with link dependencies. Furthermore, we show how essential properties of LRTs can be expressed in LTL and CTL-like logics and verified using model checking technology. Our framework is supported by a number of Eclipse plug-ins that provide facilities for modeling, animation, and verification of LRTs, as well as for generating executable code for them.

Journal Article•DOI•
TL;DR: This paper models the community-based cooperation among autonomous WSs as a coalitional game in graph form, designs a distributed coalition formation algorithm, and proves that the proposed algorithm can lead to an individually stable coalition partition, which indicates that every WS can maximize its benefit through cooperation without decreasing other WSs' benefit.
Abstract: Web services (WSs) can cooperate with each other to provide more valuable WSs. Current approaches for WS cooperation have typically assumed that WSs are always willing to participate in some form of cooperation, and have overlooked the fact that WSs are autonomous in this open environment. This assumption, however, becomes more problematic in community-based WS cooperation due to the dynamic nature of WS communities. It is, therefore, important to devise a cooperation scheme respecting WS autonomy for community-based WS cooperation. In this paper, we model the community-based cooperation among autonomous WSs as a coalitional game in graph form. We show this game is non-cohesive and design a distributed coalition formation algorithm. We prove that the proposed algorithm can lead to an individually stable coalition partition, which indicates that every WS can maximize its benefit through cooperation without decreasing other WSs' benefit. We also conduct extensive simulations, and the results show that the proposed algorithm can greatly improve the average payoff per WS and the average availability per coalition when compared with other cooperation schemes.
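A toy hedonic-game sketch conveys what an individually stable partition means: agents keep moving between coalitions as long as a move strictly raises the mover's payoff without lowering any payoff in the coalition it joins. The equal-split payoff with synergy and coordination-cost terms below is an illustrative assumption, not the paper's game model or algorithm:

```python
def payoff(coalition, caps, synergy=2.0, cost=1.5):
    """Equal split of coalition value: member capabilities plus pairwise
    synergy, minus a quadratic coordination cost (toy assumptions)."""
    s = len(coalition)
    value = (sum(caps[i] for i in coalition)
             + synergy * s * (s - 1) / 2
             - cost * (s - 1) ** 2)
    return value / s

def form_coalitions(caps):
    """Greedy moves until individually stable: no WS can join another
    coalition gaining payoff itself while lowering a member's payoff."""
    partition = [{i} for i in range(len(caps))]   # start as singletons
    changed = True
    while changed:
        changed = False
        for i in range(len(caps)):
            cur = next(c for c in partition if i in c)
            for target in partition:
                if target is cur or not target:
                    continue
                new = target | {i}
                if (payoff(new, caps) > payoff(cur, caps)
                        and payoff(new, caps) >= payoff(target, caps)):
                    cur.discard(i)
                    target.add(i)
                    changed = True
                    break
        partition = [c for c in partition if c]   # drop emptied coalitions
    return partition

print(form_coalitions([1.0, 1.0, 1.0, 1.0]))
```

With these toy parameters pairs are profitable but triples are not, so four identical services settle into two pairs; no further move can improve a mover without hurting the coalition it would join, which is exactly individual stability.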

Journal Article•DOI•
TL;DR: A novel approach to modeling the dynamic behaviors of interacting context-aware web services, aiming to effectively process and take advantage of contexts, realize behavior adaptation of web services, and facilitate the development of context-aware applications of web services.
Abstract: Context-aware web services have been attracting significant attention as an important approach for improving the usability of web services. In this paper, we explore a novel approach to modeling the dynamic behaviors of interacting context-aware web services, aiming to effectively process and take advantage of contexts, realize behavior adaptation of web services, and further facilitate the development of context-aware applications of web services. We present an interaction model of context-aware web services based on the context-aware process network (CAPN), a data-flow and channel-based model of cooperative computation. The CAPN is extended to a context-aware web service network by introducing a kind of sensor process, which is used to capture contextual data from the external environment. Through modeling the register link's behaviors, we show how a web service can respond to its context changes dynamically. The formal behavior semantics of our model is described by the calculus of communicating systems (CCS) process algebra. The behavior adaptation and context awareness in our model are discussed. An eXtensible Markup Language (XML)-formatted service behavior description language named BML4WS is designed to describe behaviors and behavior adaptation of interacting context-aware web services. Finally, an application case demonstrates how the proposed model adapts to context changes and describes service behaviors and their changes.

Journal Article•DOI•
TL;DR: A novel model for the service selection problem of workflow-based applications in the context of self-managing situated computing, which evaluates at runtime the optimal binding to concrete services as well as the tradeoff between the remote execution of software fragments and their dynamic deployment on local nodes of the computational environment.
Abstract: This paper describes a novel model for the service selection problem of workflow-based applications in the context of self-managing situated computing. In such systems, the execution environment includes different types of devices, from remote servers to personal notebooks, smartphones, and wireless sensors, which build an infrastructure that can dynamically change both its physical and logical architecture at runtime. We assume that workflows are defined abstractly; i.e., they invoke abstract services whose concrete counterparts can be selected dynamically. We also assume that concrete service implementations may possibly migrate on the nodes of the infrastructure. The selection problem we address is framed as an optimization problem of the quality of service (QoS), which evaluates at runtime the optimal binding to concrete services as well as the tradeoff between the remote execution of software fragments and their dynamic deployment on local nodes of the computational environment. The final deployment takes into account quality of service constraints, the capabilities of the physical devices involved, including their performance and energy consumption, and the characteristics of the networking links connecting them.
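For a small workflow, the QoS-driven binding choice can be framed as exhaustive search over concrete candidates under a resource constraint, including a local-deployment option per abstract service. The candidate names, latencies, energy costs, and budget below are invented for illustration; the paper's optimization model is richer than this brute-force sketch:

```python
from itertools import product

# Hypothetical candidates per abstract service in the workflow:
# (name, response_time_ms, energy_mJ). "local:" entries model deploying
# the fragment on a local node instead of invoking it remotely.
candidates = {
    "geocode": [("remote:A", 120, 5), ("local:A", 40, 30)],
    "route":   [("remote:B", 200, 8), ("remote:C", 150, 12)],
    "render":  [("local:D", 60, 25), ("remote:D", 180, 6)],
}
ENERGY_BUDGET = 50  # mJ: a QoS constraint on the battery-powered node

def best_binding(candidates, budget):
    """Exhaustively pick one concrete service per abstract service,
    minimizing total latency while respecting the energy constraint."""
    best = None
    for combo in product(*candidates.values()):
        time = sum(c[1] for c in combo)
        energy = sum(c[2] for c in combo)
        if energy <= budget and (best is None or time < best[0]):
            best = (time, dict(zip(candidates, (c[0] for c in combo))))
    return best

print(best_binding(candidates, ENERGY_BUDGET))
```

Note the tradeoff the paper emphasizes: the fastest unconstrained plan deploys everything locally, but the energy budget forces some fragments back onto remote servers. Exhaustive search only scales to small workflows; larger instances need the kind of optimization formulation the paper develops.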

Journal Article•DOI•
Weikai Miao1, Shaoying Liu1•
TL;DR: A novel formal engineering framework is proposed by integrating an evolutionary service selection approach into a formal engineering method to tackle the problem of effectively utilizing existing software services in the process of system modeling to ensure the reliability of the system while reducing the development cost and effort.
Abstract: Service-based software modeling is considered an effective technique for developing high-quality service-based systems. One major challenge of this approach is how to effectively utilize existing software services in the process of system modeling to ensure the reliability of the system while reducing the development cost and effort. In this paper, we propose a novel formal engineering framework by integrating an evolutionary service selection approach into a formal engineering method to tackle this problem. In the framework, initial requirements are gradually transformed into a formal design specification through three steps during which existing services are discovered, filtered, selected, and employed. Candidate services are discovered through keyword-based searching. A static behavior analysis technique is then used to filter the candidate services, and a specification-based testing method is adopted to rigorously select among them. The selected services are finally incorporated into the formal design model of the system. We present an empirical case study that was conducted to evaluate the usability of our framework by applying it to develop a travel agency system. The result of the study demonstrates several advantages of the framework over existing approaches, but also reveals some limitations in practice.

Journal Article•DOI•
TL;DR: A stochastic, risk-constrained budget strategy that considers a random factor of clicks per unit cost to capture uncertainty at the campaign level, following the principles of a hierarchical budget optimization framework, to deal with uncertainties in search marketing environments.
Abstract: How to rationally allocate a limited advertising budget is a critical issue in sponsored search auctions. There are plenty of uncertainties in the mapping from the budget into advertising performance. This paper presents some preliminary efforts to deal with uncertainties in search marketing environments, following the principles of a hierarchical budget optimization framework (BOF). We propose a stochastic, risk-constrained budget strategy that considers a random factor of clicks per unit cost to capture a kind of uncertainty at the campaign level. Uncertainties of random factors at the campaign level lead to risk at the market/system level. We also prove the strategy's theoretical soundness by analyzing some desirable properties. Computational experiments were conducted to evaluate the proposed budget strategy with real-world data collected from reports and logs of search advertising campaigns. Experimental results illustrate that our strategy outperforms two baseline strategies. We also notice that 1) risk tolerance has a great influence on the determination of optimal budget solutions; and 2) higher risk tolerance leads to higher expected revenues.
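The flavor of a risk-constrained budget split can be sketched with a Monte Carlo grid search over two campaigns: one safe, one high-mean but volatile. The campaign parameters, Gaussian clicks-per-unit-cost model, and risk cap are illustrative assumptions, not the paper's BOF formulation:

```python
import random

random.seed(42)

# Hypothetical campaigns: clicks per unit cost is random (mean, std dev),
# modeling the uncertainty the paper captures with a random factor.
campaigns = [(2.0, 0.2), (3.0, 1.5)]
BUDGET = 100.0

def simulate(allocation, n=2000):
    """Monte Carlo estimate of total clicks: mean and std dev (risk)."""
    totals = []
    for _ in range(n):
        clicks = sum(b * random.gauss(mu, sd)
                     for b, (mu, sd) in zip(allocation, campaigns))
        totals.append(clicks)
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / n
    return mean, var ** 0.5

def best_allocation(risk_cap, step=10.0):
    """Grid search over splits; keep the highest expected clicks
    whose estimated risk (std dev) stays under the cap."""
    best = None
    b1 = 0.0
    while b1 <= BUDGET:
        mean, risk = simulate([b1, BUDGET - b1])
        if risk <= risk_cap and (best is None or mean > best[0]):
            best = (mean, b1)
        b1 += step
    return best

print(best_allocation(risk_cap=60.0))
```

Pouring everything into the high-mean campaign maximizes expected clicks but violates the risk cap, so the constrained optimum shifts budget toward the safer campaign; tightening or loosening the cap moves the split, mirroring the paper's observation that risk tolerance strongly shapes the optimal budget solution.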