Proceedings ArticleDOI

A Collaborative Approach to Predicting Service Price for QoS-Aware Service Selection

TL;DR: This work proposes a collaborative approach to predicting a provider's minimum price for a desired QoS based on prior usage experience, and shows the approach can find the optimal service providers efficiently and effectively.
Abstract: In QoS-aware service selection, a service requester seeks to maximize its utility by selecting a service provider that charges the lowest service price while meeting the requester's QoS requirements. In existing selection approaches, a service requester focuses on finding providers based on their QoS and thereby ignores their service prices, which may vary with their QoS. High QoS may provide more benefits, but may require a high service price. As a result, the highest QoS may not produce the maximum utility. A service requester and candidate service providers have a conflicting interest over service prices. Since a provider would not reveal its minimum acceptable price, it is important for a requester to predict the minimum price for a service that meets its QoS requirements. We propose a collaborative approach to predicting a provider's minimum price for a desired QoS based on prior usage experience. The experimental results show our approach can find the optimal service providers efficiently and effectively.
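The core idea of predicting a provider's minimum price from prior usage experience can be illustrated with a minimal sketch. The function names, the inverse-distance similarity, and the tiny history below are illustrative assumptions, not the paper's exact formulation: each record is a (QoS vector, accepted price) pair previously observed for the same provider, and the predicted minimum price for a target QoS is a similarity-weighted average of those accepted prices.

```python
import math

def qos_similarity(q1, q2):
    """Inverse-distance similarity between two QoS vectors (assumed
    normalized to comparable scales); 1.0 means identical QoS."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(q1, q2)))
    return 1.0 / (1.0 + dist)

def predict_min_price(history, target_qos):
    """Predict the provider's minimum acceptable price for target_qos
    as a similarity-weighted average of previously accepted prices."""
    weights = [qos_similarity(q, target_qos) for q, _ in history]
    total = sum(weights)
    if total == 0:
        return None  # no usable experience to predict from
    return sum(w * p for w, (_, p) in zip(weights, history)) / total

# Hypothetical prior experience: (QoS vector, price the provider accepted).
history = [((0.9, 0.8), 12.0), ((0.7, 0.6), 8.0), ((0.95, 0.9), 14.0)]
print(predict_min_price(history, (0.85, 0.8)))
```

The prediction interpolates between observed transactions, so demanding a QoS close to an expensive past transaction yields a price estimate near that transaction's price.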
Citations
01 Jan 1981
TL;DR: In this article, the authors provide an overview of economic analysis techniques and their applicability to software engineering and management, including the major estimation techniques available, the state of the art in algorithmic cost models, and the outstanding research issues in software cost estimation.
Abstract: This paper summarizes the current state of the art and recent trends in software engineering economics. It provides an overview of economic analysis techniques and their applicability to software engineering and management. It surveys the field of software cost estimation, including the major estimation techniques available, the state of the art in algorithmic cost models, and the outstanding research issues in software cost estimation.

283 citations

Journal ArticleDOI
TL;DR: This paper proposes a novel method that seamlessly considers Quality of Service (QoS) and credibility of service providers to achieve optimal service compositions and treats credibility as the overall capability of a service provider to deliver its promised QoS.
Abstract: QoS-aware Web service composition is regarded as one of the fundamental issues in service computing. Given the open and dynamic internet environment, which lacks a central control of individual service providers, we propose in this paper a novel method that seamlessly considers Quality of Service (QoS) and credibility of service providers to achieve optimal service compositions. Instead of using credibility as one of the QoS attributes, we treat it as the overall capability of a service provider to deliver its promised QoS. We aggregate both user experience (i.e., user trust) and track record (i.e., service reputation) of a provider for accurate credibility evaluation. To facilitate user decision making when multiple (and sometimes conflicting) QoS attributes are involved, we develop an automatic weight calculation approach based on rough set theory and a fuzzy analytic hierarchy process, which assigns higher weights to the more discriminative attributes. Finally, to achieve an optimal service composition, a two-phase optimization process is employed, where local optimization chooses services based on credible QoS assessment and global optimization tackles a multi-objective problem using an effective cuckoo search algorithm. Extensive experimental results show that the proposed QoS-aware service composition approach achieves desirable QoS with credibility guarantees. Our proposed approach also significantly outperforms other competing methods.
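The abstract's weight-calculation idea, giving higher weights to more discriminative QoS attributes, can be sketched with a much simpler stand-in than the rough set / fuzzy AHP machinery the paper actually uses: weight each attribute in proportion to how much its observed values spread across the candidate services. The function name and the toy QoS matrix are illustrative assumptions.

```python
import statistics

def discriminative_weights(qos_matrix):
    """qos_matrix: rows are candidate services, columns are QoS attributes
    normalized to [0, 1]. Returns one weight per attribute, summing to 1,
    proportional to each attribute's population standard deviation."""
    columns = list(zip(*qos_matrix))
    spreads = [statistics.pstdev(col) for col in columns]
    total = sum(spreads)
    if total == 0:
        n = len(columns)
        return [1.0 / n] * n  # no attribute discriminates; weight equally
    return [s / total for s in spreads]

# Three hypothetical candidates; attribute 0 varies widely, attribute 1 is
# identical everywhere (so it cannot help ranking), attribute 2 barely varies.
candidates = [
    [0.9, 0.5, 0.51],
    [0.4, 0.5, 0.49],
    [0.7, 0.5, 0.50],
]
print(discriminative_weights(candidates))
```

An attribute that is the same for every candidate gets zero weight, matching the intuition that it cannot discriminate among them.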

40 citations

Journal ArticleDOI
TL;DR: Experimental results show that the incentive contracts have a positive impact on both service requesters and providers and that the incentive mechanism outperforms the existing combinatorial auction-based approaches in finding optimal solutions.
Abstract: QoS-aware service selection seeks to find the optimal service providers to achieve the optimization goal of a service requester, such as the maximization of utility, while satisfying global QoS requirements. Service providers are usually self-interested and have some private information, such as minimum prices, that would significantly factor into the decision making of the service requester. Thus, service requesters face a decision making dilemma with incomplete information. Recent work has used iterative combinatorial auctions to address this problem. However, such studies do not sufficiently consider that the service requester can elicit the private information from service providers by observing their actions. This can help the service selection process achieve better outcomes. In this paper, we propose a type of incentive contract that can motivate the service providers to offer the QoS and prices that the service requester prefers. Based on the incentive contracts, we propose an incentive mechanism for effective service selection. In the mechanism, a service requester offers a set of incentive contracts to the service providers and then elicits their private information based on their responses to the incentive contracts. The process is iterated until the service requester finally obtains a solution that fulfills the global QoS requirements. Experimental results show that the incentive contracts have a positive impact on both service requesters and providers and that the incentive mechanism outperforms the existing combinatorial auction-based approaches in finding optimal solutions.

24 citations


Cites background or methods from "A Collaborative Approach to Predict..."

  • ...In this paper, based on our previous works [9], [10], we propose a type of incentive contract that can motivate the service providers to perform tasks with the QoS and the pri-...

    [...]

  • ...Our previous work [10] proposed a collaborative approach for estimating the cost function...

    [...]

  • ...cost-performance indexes of the service providers are private knowledge but the probability distribution of the cost-performance indexes and the cost functions of the tasks are public knowledge, which can be evaluated from past experiences [10] or empirically specified....

    [...]

Journal ArticleDOI
TL;DR: A novel sampling method, enhanced importance resampling (EIRS), is proposed and applied that not only samples efficiently and accurately but also greatly improves the accuracy of Web service QoS prediction.
Abstract: In recent years, as the number of Web services increases dramatically, personalized Web service recommendation has become a hot topic in both academia and industry. Quality-of-service (QoS) prediction plays a key role in Web service recommendation systems. However, how to further improve the accuracy of QoS prediction is still an open problem. Traditional QoS prediction models do not consider the impact of sampling methods on the accuracy of QoS prediction, yet a well-chosen sampling method can train the prediction model more effectively and obtain higher accuracy. It is therefore necessary to study sampling methods on QoS datasets in order to obtain a sample distribution closer to the original distribution and thereby improve the accuracy of the prediction models. In this paper, we first discuss how to apply several existing sampling methods to QoS datasets and then analyze their advantages and disadvantages. Finally, a novel sampling method, enhanced importance resampling (EIRS), is proposed and applied. Experiments on real-world datasets show that our method not only samples efficiently and accurately but also greatly improves the accuracy of Web service QoS prediction.
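Plain importance resampling, the baseline that EIRS builds on, can be sketched briefly (the paper's "enhanced" variant is not detailed here, and the record format and weights below are illustrative assumptions): draw a sample with replacement, with probability proportional to supplied importance weights, e.g. to keep rare QoS records represented in the training set.

```python
import random

def importance_resample(records, weights, k, seed=0):
    """Draw k records with replacement, each chosen with probability
    proportional to its importance weight."""
    rng = random.Random(seed)  # fixed seed for reproducible sampling
    return rng.choices(records, weights=weights, k=k)

# Hypothetical QoS records (label, response time) with weights that
# up-weight the rare tails of the response-time distribution.
records = [("fast", 0.1), ("typical", 0.5), ("slow", 2.0)]
weights = [5.0, 1.0, 5.0]
print(importance_resample(records, weights, k=10))
```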

4 citations

References
04 Oct 1993
TL;DR: In this paper, the authors provide an overview of economic analysis techniques and their applicability to software engineering and management, including the major estimation techniques available, the state of the art in algorithmic cost models, and the outstanding research issues in software cost estimation.
Abstract: This paper summarizes the current state of the art and recent trends in software engineering economics. It provides an overview of economic analysis techniques and their applicability to software engineering and management. It surveys the field of software cost estimation, including the major estimation techniques available, the state of the art in algorithmic cost models, and the outstanding research issues in software cost estimation.

5,899 citations

Posted Content
TL;DR: In this article, the authors compare the predictive accuracy of several collaborative filtering algorithms, including techniques based on correlation coefficients, vector-based similarity calculations, and statistical Bayesian methods, across a set of representative problem domains.
Abstract: Collaborative filtering or recommender systems use a database about user preferences to predict additional topics or products a new user might like. In this paper we describe several algorithms designed for this task, including techniques based on correlation coefficients, vector-based similarity calculations, and statistical Bayesian methods. We compare the predictive accuracy of the various methods in a set of representative problem domains. We use two basic classes of evaluation metrics. The first characterizes accuracy over a set of individual predictions in terms of average absolute deviation. The second estimates the utility of a ranked list of suggested items. This metric uses an estimate of the probability that a user will see a recommendation in an ordered list. Experiments were run for datasets associated with 3 application areas, 4 experimental protocols, and the 2 evaluation metrics for the various algorithms. Results indicate that for a wide range of conditions, Bayesian networks with decision trees at each node and correlation methods outperform Bayesian-clustering and vector-similarity methods. Between correlation and Bayesian networks, the preferred method depends on the nature of the dataset, nature of the application (ranked versus one-by-one presentation), and the availability of votes with which to make predictions. Other considerations include the size of database, speed of predictions, and learning time.
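A minimal memory-based sketch in the spirit of the correlation-based methods this paper compares: predict a user's rating for an unseen item as the user's mean rating plus the Pearson-correlation-weighted deviations of neighbours' ratings. The function names and the tiny ratings dictionary are illustrative, not from the paper.

```python
import math

def pearson(u, v, items):
    """Pearson correlation between two users' ratings over co-rated items."""
    common = [i for i in items if i in u and i in v]
    if len(common) < 2:
        return 0.0
    mu_u = sum(u[i] for i in common) / len(common)
    mu_v = sum(v[i] for i in common) / len(common)
    num = sum((u[i] - mu_u) * (v[i] - mu_v) for i in common)
    den = math.sqrt(sum((u[i] - mu_u) ** 2 for i in common)
                    * sum((v[i] - mu_v) ** 2 for i in common))
    return num / den if den else 0.0

def predict(ratings, user, item):
    """Predict user's rating for item: user's mean rating plus the
    correlation-weighted mean deviation of neighbours who rated item."""
    items = {i for r in ratings.values() for i in r}
    mu = sum(ratings[user].values()) / len(ratings[user])
    num = den = 0.0
    for other, r in ratings.items():
        if other == user or item not in r:
            continue
        w = pearson(ratings[user], r, items)
        mu_other = sum(r.values()) / len(r)
        num += w * (r[item] - mu_other)
        den += abs(w)
    return mu + num / den if den else mu

# Hypothetical 1-5 rating data; "alice" has not rated item "d".
ratings = {
    "alice": {"a": 5, "b": 3, "c": 4},
    "bob":   {"a": 4, "b": 2, "c": 3, "d": 4},
    "carol": {"a": 1, "b": 5, "d": 2},
}
print(predict(ratings, "alice", "d"))
```

Negatively correlated neighbours still contribute: carol's low rating of "d", combined with her negative correlation to alice, pushes alice's predicted rating up rather than down.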

4,883 citations

Proceedings Article
24 Jul 1998
TL;DR: Several algorithms designed for collaborative filtering or recommender systems are described, including techniques based on correlation coefficients, vector-based similarity calculations, and statistical Bayesian methods, to compare the predictive accuracy of the various methods in a set of representative problem domains.
Abstract: Collaborative filtering or recommender systems use a database about user preferences to predict additional topics or products a new user might like. In this paper we describe several algorithms designed for this task, including techniques based on correlation coefficients, vector-based similarity calculations, and statistical Bayesian methods. We compare the predictive accuracy of the various methods in a set of representative problem domains. We use two basic classes of evaluation metrics. The first characterizes accuracy over a set of individual predictions in terms of average absolute deviation. The second estimates the utility of a ranked list of suggested items. This metric uses an estimate of the probability that a user will see a recommendation in an ordered list. Experiments were run for datasets associated with 3 application areas, 4 experimental protocols, and the 2 evaluation metrics for the various algorithms. Results indicate that for a wide range of conditions, Bayesian networks with decision trees at each node and correlation methods outperform Bayesian-clustering and vector-similarity methods. Between correlation and Bayesian networks, the preferred method depends on the nature of the dataset, nature of the application (ranked versus one-by-one presentation), and the availability of votes with which to make predictions. Other considerations include the size of database, speed of predictions, and learning time.

4,557 citations


"A Collaborative Approach to Predict..." refers background in this paper

  • ...Since service providers do not report their QoS attributes, much work has been done to predict their QoS attributes based on collaborative filtering [20]....

    [...]

Book
01 Jan 1981
TL;DR: In this article, the authors provide an overview of economic analysis techniques and their applicability to software engineering and management, including the major estimation techniques available, the state of the art in algorithmic cost models, and the outstanding research issues in software cost estimation.
Abstract: This paper summarizes the current state of the art and recent trends in software engineering economics. It provides an overview of economic analysis techniques and their applicability to software engineering and management. It surveys the field of software cost estimation, including the major estimation techniques available, the state of the art in algorithmic cost models, and the outstanding research issues in software cost estimation.

4,440 citations

Book
26 Dec 2001
TL;DR: Laffont and Martimort as mentioned in this paper focus on the principal-agent model, the "simple" situation where a principal, or company, delegates a task to a single agent through a contract, the essence of management and contract theory.
Abstract: Economics has much to do with incentives--not least, incentives to work hard, to produce quality products, to study, to invest, and to save. Although Adam Smith amply confirmed this more than two hundred years ago in his analysis of sharecropping contracts, only in recent decades has a theory begun to emerge to place the topic at the heart of economic thinking. In this book, Jean-Jacques Laffont and David Martimort present the most thorough yet accessible introduction to incentives theory to date. Central to this theory is a simple question as pivotal to modern-day management as it is to economics research: What makes people act in a particular way in an economic or business situation? In seeking an answer, the authors provide the methodological tools to design institutions that can ensure good incentives for economic agents. This book focuses on the principal-agent model, the "simple" situation where a principal, or company, delegates a task to a single agent through a contract--the essence of management and contract theory. How does the owner or manager of a firm align the objectives of its various members to maximize profits? Following a brief historical overview showing how the problem of incentives has come to the fore in the past two centuries, the authors devote the bulk of their work to exploring principal-agent models and various extensions thereof in light of three types of information problems: adverse selection, moral hazard, and non-verifiability. Offering an unprecedented look at a subject vital to industrial organization, labor economics, and behavioral economics, this book is set to become the definitive resource for students, researchers, and others who might find themselves pondering what contracts, and the incentives they embody, are really all about.

2,454 citations