
Modeling Quality of Service for Workflows and Web Service Processes

TL;DR: This paper presents a predictive QoS model that makes it possible to compute the quality of service for workflows automatically based on atomic task QoS attributes, and presents the implementation of the model for the METEOR workflow system.
Abstract: Workflow management systems (WfMSs) have been used to support various types of business processes for more than a decade now. In workflows for e-commerce and Web service applications, suppliers and customers define a binding agreement or contract between the two parties, specifying Quality of Service (QoS) items such as products or services to be delivered, deadlines, quality of products, and cost of services. The management of QoS metrics directly impacts the success of organizations participating in e-commerce. Therefore, when services or products are created or managed using workflows, the underlying workflow system must accept the specifications and be able to estimate, monitor, and control the QoS rendered to customers. In this paper, we present a predictive QoS model that makes it possible to compute the quality of service for workflows automatically based on atomic task QoS attributes. To this end, we present a model that specifies QoS and describe an algorithm and a simulation system in order to compute, analyze and monitor workflow QoS metrics.
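As an illustration of the idea, here is a minimal sketch of reduction-based QoS computation: the workflow graph is collapsed step by step by replacing sequential, parallel (AND), and conditional (XOR) structures with a single equivalent task whose QoS is computed from its parts. The attribute set and aggregation formulas below are common illustrative choices, not the paper's exact algorithm or its METEOR implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class QoS:
    time: float         # expected task duration
    cost: float         # execution cost
    reliability: float  # probability of successful completion

def sequential(parts):
    """Sequential composition: times and costs add, reliabilities multiply."""
    return QoS(sum(p.time for p in parts),
               sum(p.cost for p in parts),
               math.prod(p.reliability for p in parts))

def parallel(parts):
    """AND-split/AND-join: the slowest branch dominates the duration;
    every branch incurs its cost and must succeed."""
    return QoS(max(p.time for p in parts),
               sum(p.cost for p in parts),
               math.prod(p.reliability for p in parts))

def conditional(branches):
    """XOR-split with branch probabilities: expected values are
    probability-weighted. branches = [(probability, QoS), ...]"""
    return QoS(sum(p * q.time for p, q in branches),
               sum(p * q.cost for p, q in branches),
               sum(p * q.reliability for p, q in branches))

# a tiny workflow: task A, followed by tasks B and C in parallel
wf = sequential([QoS(2.0, 1.0, 0.99),
                 parallel([QoS(4.0, 2.5, 0.95), QoS(3.0, 1.5, 0.98)])])
print(wf)  # QoS(time=6.0, cost=5.0, reliability≈0.92)
```

Applying such rules repeatedly until a single task remains yields the workflow-level QoS estimate from the atomic task attributes.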


Citations
Proceedings Article•DOI•
25 Jun 2005
TL;DR: Genetic Algorithms, while being slower than integer programming, represent a more scalable choice, and are more suitable to handle generic QoS attributes.
Abstract: Web services are rapidly changing the landscape of software engineering. One of the most interesting challenges introduced by web services is Quality of Service (QoS)-aware composition and late binding. This makes it possible to bind, at run time, a service-oriented system to a set of services that, among those providing the required features, meet some non-functional constraints and optimize criteria such as the overall cost or response time. In other words, QoS-aware composition can be modeled as an optimization problem. We propose to adopt Genetic Algorithms for this purpose. Genetic Algorithms, while being slower than integer programming, represent a more scalable choice and are better suited to handling generic QoS attributes. The paper describes our approach and its applicability, advantages, and weaknesses, discussing the results of some numerical simulations.
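To make the optimization framing concrete, here is a minimal sketch of GA-based composition under stated assumptions: a chromosome assigns one concrete service to each abstract task, and fitness trades off aggregated cost against a response-time constraint via a penalty term. The candidate data, weights, and penalty scheme are hypothetical, not the paper's exact encoding.

```python
import random

# candidates[i] = list of (cost, response_time) options for abstract task i
candidates = [
    [(4.0, 1.2), (6.0, 0.8), (3.0, 2.0)],
    [(2.0, 0.5), (5.0, 0.3)],
    [(7.0, 1.0), (4.0, 1.5), (5.0, 1.1)],
]
MAX_TIME = 3.0  # hypothetical end-to-end response-time constraint

def fitness(chromo):
    """Higher is better: negative total cost, with constraint violations penalized."""
    cost = sum(candidates[i][g][0] for i, g in enumerate(chromo))
    time = sum(candidates[i][g][1] for i, g in enumerate(chromo))
    penalty = 100.0 * max(0.0, time - MAX_TIME)
    return -(cost + penalty)

def mutate(chromo, rate=0.2):
    """Rebind a random subset of tasks to other candidate services."""
    return [random.randrange(len(candidates[i])) if random.random() < rate else g
            for i, g in enumerate(chromo)]

def crossover(a, b):
    """One-point crossover over the task-to-service assignment vector."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

pop = [[random.randrange(len(c)) for c in candidates] for _ in range(30)]
for _ in range(50):  # generations
    pop.sort(key=fitness, reverse=True)
    elite = pop[: len(pop) // 2]
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in elite]

best = max(pop, key=fitness)
print("best binding:", best, "fitness:", fitness(best))
```

A penalty-based fitness is one common way to handle QoS constraints in a GA; the paper's own formulation may differ.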

953 citations

Posted Content•
TL;DR: A taxonomy that characterizes and classifies various approaches for building and executing workflows on Grids is proposed; it not only highlights the design and engineering similarities and differences of state-of-the-art Grid workflow systems but also identifies the areas that need further research.
Abstract: With the advent of Grid and application technologies, scientists and engineers are building more and more complex applications to manage and process large data sets, and execute scientific experiments on distributed resources. Such application scenarios require means for composing and executing complex workflows. Therefore, many efforts have been made towards the development of workflow management systems for Grid computing. In this paper, we propose a taxonomy that characterizes and classifies various approaches for building and executing workflows on Grids. We also survey several representative Grid workflow systems developed by various projects worldwide to demonstrate the comprehensiveness of the taxonomy. The taxonomy not only highlights the design and engineering similarities and differences of state-of-the-art Grid workflow systems, but also identifies the areas that need further research.

851 citations

Journal Article•DOI•
TL;DR: In this article, a taxonomy that characterizes and classifies various approaches for building and executing workflows on Grids is proposed, highlighting the design and engineering similarities and differences of state-of-the-art Grid workflow systems and identifying the areas that need further research.
Abstract: With the advent of Grid and application technologies, scientists and engineers are building more and more complex applications to manage and process large data sets, and execute scientific experiments on distributed resources. Such application scenarios require means for composing and executing complex workflows. Therefore, many efforts have been made towards the development of workflow management systems for Grid computing. In this paper, we propose a taxonomy that characterizes and classifies various approaches for building and executing workflows on Grids. We also survey several representative Grid workflow systems developed by various projects worldwide to demonstrate the comprehensiveness of the taxonomy. The taxonomy not only highlights the design and engineering similarities and differences of state-of-the-art Grid workflow systems, but also identifies the areas that need further research.

761 citations

Book•
23 Nov 2007
TL;DR: This work defines the set-theoretic operators on an instance of a neutrosophic set, calling it an Interval Neutrosophic Set (INS), introduces a new logic system based on interval neutrosophic sets, and proposes a data model that extends the fuzzy and paraconsistent data models.
Abstract: A neutrosophic set is a part of neutrosophy that studies the origin, nature, and scope of neutralities, as well as their interactions with different ideational spectra. The neutrosophic set is a powerful general formal framework that has been recently proposed. However, the neutrosophic set needs to be specified from a technical point of view. Here, we define the set-theoretic operators on an instance of a neutrosophic set, and call it an Interval Neutrosophic Set (INS). We prove various properties of INS, which are connected to operations and relations over INS. We also introduce a new logic system based on interval neutrosophic sets. We study the interval neutrosophic propositional calculus and interval neutrosophic predicate calculus. We also create a neutrosophic logic inference system based on interval neutrosophic logic. Under the framework of the interval neutrosophic set, we propose a data model based on a special case of interval neutrosophic sets, called the Neutrosophic Data Model. This data model extends the fuzzy data model and the paraconsistent data model. We generalize the set-theoretic operators and relation-theoretic operators of fuzzy relations and paraconsistent relations to neutrosophic relations. We propose generalized SQL query constructs and a tuple-relational calculus for the Neutrosophic Data Model. We also design an architecture for a Semantic Web Services agent based on interval neutrosophic logic and conduct a simulation study.
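For a flavor of what set-theoretic operators on an INS look like, the sketch below represents each element by truth, indeterminacy, and falsity intervals and combines them with min/max rules. These definitions follow commonly cited INS formulations and should be treated as an approximation of, not a substitute for, the book's own definitions.

```python
from dataclasses import dataclass

Interval = tuple  # (lower, upper), with 0 <= lower <= upper <= 1

@dataclass
class INSValue:
    t: Interval  # truth-membership interval
    i: Interval  # indeterminacy-membership interval
    f: Interval  # falsity-membership interval

def ins_union(a: INSValue, b: INSValue) -> INSValue:
    """Union: truth intervals combine by max, indeterminacy and falsity by min."""
    return INSValue(
        t=(max(a.t[0], b.t[0]), max(a.t[1], b.t[1])),
        i=(min(a.i[0], b.i[0]), min(a.i[1], b.i[1])),
        f=(min(a.f[0], b.f[0]), min(a.f[1], b.f[1])),
    )

def ins_intersection(a: INSValue, b: INSValue) -> INSValue:
    """Intersection: the dual of union (min on truth, max elsewhere)."""
    return INSValue(
        t=(min(a.t[0], b.t[0]), min(a.t[1], b.t[1])),
        i=(max(a.i[0], b.i[0]), max(a.i[1], b.i[1])),
        f=(max(a.f[0], b.f[0]), max(a.f[1], b.f[1])),
    )

def ins_complement(a: INSValue) -> INSValue:
    """Complement: swap truth and falsity; reflect the indeterminacy interval."""
    return INSValue(t=a.f, i=(1 - a.i[1], 1 - a.i[0]), f=a.t)

x = INSValue(t=(0.6, 0.8), i=(0.1, 0.2), f=(0.1, 0.3))
y = INSValue(t=(0.4, 0.7), i=(0.2, 0.4), f=(0.2, 0.5))
print(ins_union(x, y))  # INSValue(t=(0.6, 0.8), i=(0.1, 0.2), f=(0.1, 0.3))
```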

643 citations

Proceedings Article•DOI•
20 Apr 2009
TL;DR: This paper proposes a solution that combines global optimization with local selection techniques to benefit from the advantages of both worlds and significantly outperforms existing solutions in terms of computation time while achieving close-to-optimal results.
Abstract: The run-time binding of web services has recently been put forward in order to support rapid and dynamic web service compositions. With the growing number of alternative web services that provide the same functionality but differ in quality parameters, service composition becomes a decision problem: which component services should be selected such that the user's end-to-end QoS requirements (e.g. availability, response time) and preferences (e.g. price) are satisfied? Although very efficient, the local selection strategy falls short in handling global QoS requirements. Solutions based on global optimization, on the other hand, can handle global constraints, but their poor performance renders them inappropriate for applications with dynamic and real-time requirements. In this paper, we address this problem and propose a solution that combines global optimization with local selection techniques to benefit from the advantages of both worlds. The proposed solution consists of two steps: first, we use mixed integer programming (MIP) to find the optimal decomposition of global QoS constraints into local constraints; second, we use distributed local selection to find the best web services that satisfy these local constraints. The results of our experimental evaluation indicate that the approach significantly outperforms existing solutions in terms of computation time while achieving close-to-optimal results.
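A minimal sketch of the second step, assuming the MIP decomposition (not shown) has already produced per-task local constraints: each task then independently picks the highest-utility candidate that satisfies them. The attribute names, weights, and fallback behavior are illustrative assumptions, not the paper's exact procedure.

```python
def local_select(candidates, local_max_time, local_max_cost,
                 w_time=0.5, w_cost=0.5):
    """Pick the highest-utility service meeting the local constraints.

    candidates: list of dicts like {"name": ..., "time": ..., "cost": ...}
    """
    feasible = [s for s in candidates
                if s["time"] <= local_max_time and s["cost"] <= local_max_cost]
    if not feasible:
        return None  # signals that the constraint decomposition must be revised

    # simple additive utility: lower time and lower cost are both better
    def utility(s):
        return -(w_time * s["time"] + w_cost * s["cost"])

    return max(feasible, key=utility)

# one task's candidate pool, with hypothetical local constraints from the MIP step
pool = [{"name": "A", "time": 0.8, "cost": 5.0},
        {"name": "B", "time": 0.4, "cost": 9.0}]
print(local_select(pool, local_max_time=1.0, local_max_cost=8.0))  # selects "A"
```

Because each task's selection only consults its own local constraints, this step parallelizes trivially across tasks, which is where the computation-time advantage over pure global optimization comes from.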

628 citations


Cites background from "Modeling Quality of Service for Workflows and Web Service Processes"

  • ...Techniques for handling multiple execution paths and unfolding loops from [10] can be used for this purpose....


References
Journal Article•DOI•
TL;DR: A new approach to rapid sequence comparison, basic local alignment search tool (BLAST), directly approximates alignments that optimize a measure of local similarity, the maximal segment pair (MSP) score.

88,255 citations


"Modeling Quality of Service for Wor..." refers background in this paper

  • ...(Note that our mathematical models could be extended to queuing network models (Lazowska, Zahorjan et al. 1984), but this requires making some simplifying assumptions)....


Book•
12 Jan 1994
TL;DR: This methods sourcebook for qualitative data analysis covers research design and data management, five distinct methods of analysis (exploring, describing, ordering, explaining, and predicting), and guidance on drawing and verifying conclusions and writing about qualitative research.
Abstract: Matthew B. Miles, Qualitative Data Analysis: A Methods Sourcebook, Third Edition. The Third Edition of Miles & Huberman's classic research methods text is updated and streamlined by Johnny Saldana, author of The Coding Manual for Qualitative Researchers. Several of the data display strategies from previous editions are now presented in re-envisioned and reorganized formats to enhance reader accessibility and comprehension. The Third Edition's presentation of the fundamentals of research design and data management is followed by five distinct methods of analysis: exploring, describing, ordering, explaining, and predicting. Miles and Huberman's original research studies are profiled and accompanied with new examples from Saldana's recent qualitative work. The book's most celebrated chapter, "Drawing and Verifying Conclusions," is retained and revised, and the chapter on report writing has been greatly expanded and is now called "Writing About Qualitative Research." Comprehensive and authoritative, Qualitative Data Analysis has been elegantly revised for a new generation of qualitative researchers.

Johnny Saldana, The Coding Manual for Qualitative Researchers, Second Edition. The Second Edition of Johnny Saldana's international bestseller provides an in-depth guide to the multiple approaches available for coding qualitative data. Fully up-to-date, it includes new chapters, more coding techniques and an additional glossary. Clear, practical and authoritative, the book: describes how coding initiates qualitative data analysis; demonstrates the writing of analytic memos; discusses available analytic software; suggests how best to use the book for particular studies. In total, 32 coding methods are profiled that can be applied to a range of research genres from grounded theory to phenomenology to narrative inquiry. For each approach, Saldana discusses the method's origins, a description of the method, practical applications, and a clearly illustrated example with analytic follow-up. A unique and invaluable reference for students, teachers, and practitioners of qualitative inquiry, this book is essential reading across the social sciences.

Stephanie D. H. Evergreen, Presenting Data Effectively: Communicating Your Findings for Maximum Impact. This is a step-by-step guide to making the research results presented in reports, slideshows, posters, and data visualizations more interesting. Written in an easy, accessible manner, Presenting Data Effectively provides guiding principles for designing data presentations so that they are more likely to be heard, remembered, and used. The guidance in the book stems from the author's extensive study of research reporting, a solid review of the literature in graphic design and related fields, and the input of a panel of graphic design experts. Those concepts are then translated into language relevant to students, researchers, evaluators, and non-profit workers - anyone in a position to have to report on data to an outside audience. The book guides the reader through design choices related to four primary areas: graphics, type, color, and arrangement. As a result, readers can present data more effectively, with the clarity and professionalism that best represents their work.

41,986 citations


"Modeling Quality of Service for Wor..." refers methods in this paper

  • ...In view of the fact that humans often feel awkward handling and interpreting such quantitative values (Tversky and Kahneman 1974), we allow the designer, with the help of a domain expert, to map the value resulting from applying the fidelity function to a qualitative scale (Miles and Huberman 1994)....


Book•
01 Jan 1974
TL;DR: The authors describe three heuristics employed in making judgements under uncertainty: representativeness, availability of instances or scenarios, and adjustment from an anchor, the last of which is usually employed in numerical prediction when a relevant value is available.
Abstract: This article described three heuristics that are employed in making judgements under uncertainty: (i) representativeness, which is usually employed when people are asked to judge the probability that an object or event A belongs to class or process B; (ii) availability of instances or scenarios, which is often employed when people are asked to assess the frequency of a class or the plausibility of a particular development; and (iii) adjustment from an anchor, which is usually employed in numerical prediction when a relevant value is available. These heuristics are highly economical and usually effective, but they lead to systematic and predictable errors. A better understanding of these heuristics and of the biases to which they lead could improve judgements and decisions in situations of uncertainty.

31,082 citations

Journal Article•DOI•
TL;DR: Three computer programs for comparisons of protein and DNA sequences can be used to search sequence data bases, evaluate similarity scores, and identify periodic structures based on local sequence similarity.
Abstract: We have developed three computer programs for comparisons of protein and DNA sequences. They can be used to search sequence data bases, evaluate similarity scores, and identify periodic structures based on local sequence similarity. The FASTA program is a more sensitive derivative of the FASTP program, which can be used to search protein or DNA sequence data bases and can compare a protein sequence to a DNA sequence data base by translating the DNA data base as it is searched. FASTA includes an additional step in the calculation of the initial pairwise similarity score that allows multiple regions of similarity to be joined to increase the score of related sequences. The RDF2 program can be used to evaluate the significance of similarity scores using a shuffling method that preserves local sequence composition. The LFASTA program can display all the regions of local similarity between two sequences with scores greater than a threshold, using the same scoring parameters and a similar alignment algorithm; these local similarities can be displayed as a "graphic matrix" plot or as individual alignments. In addition, these programs have been generalized to allow comparison of DNA or protein sequences based on a variety of alternative scoring matrices.

12,432 citations


"Modeling Quality of Service for Wor..." refers methods in this paper


  • ...For the time dimension, we have used the linear regression from Equation 1 and defined the function represented in Equation 3 to estimate its duration (FASTA has a linear running time (Pearson and Lipman 1988).)...



  • ...For this reason, it was decided to employ the BLAST (Altschul, Gish et al. 1990) and FASTA (Pearson and Lipman 1988) programs to compare sequences....

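The excerpts above describe estimating the duration of the FASTA task with a linear regression over input size, exploiting FASTA's linear running time. Below is a minimal sketch of that idea with made-up observations; the paper's actual Equations 1 and 3 and their fitted coefficients are not reproduced here.

```python
def fit_linear(xs, ys):
    """Ordinary least squares fit for t(n) = a + b * n."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

# hypothetical (sequence length, observed duration in seconds) samples
lengths = [100, 500, 1000, 5000, 10000]
durations = [0.4, 1.1, 2.0, 9.3, 18.5]
a, b = fit_linear(lengths, durations)

estimate = a + b * 2500  # predicted duration for a 2500-residue query
print(f"t(n) = {a:.3f} + {b:.6f} * n; predicted t(2500) = {estimate:.2f} s")
```

Once fitted, such a per-task duration function feeds directly into the workflow-level QoS aggregation described in the abstract.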