
Showing papers on "Web standards" published in 2005


Book ChapterDOI
TL;DR: Along with introducing the main elements of WSMO, this paper provides a logical language for defining formal statements in WSMO together with some motivating examples from practical use cases which shall demonstrate the benefits of Semantic Web Services.
Abstract: The potential to achieve dynamic, scalable and cost-effective marketplaces and eCommerce solutions has driven recent research efforts towards so-called Semantic Web Services that are enriching Web services with machine-processable semantics. To this end, the Web Service Modeling Ontology (WSMO) provides the conceptual underpinning and a formal language for semantically describing all relevant aspects of Web services in order to facilitate the automation of discovering, combining and invoking electronic services over the Web. In this paper we describe the overall structure of WSMO by its four main elements: ontologies, which provide the terminology used by other WSMO elements, Web services, which provide access to services that, in turn, provide some value in some domain, goals that represent user desires, and mediators, which deal with interoperability problems between different WSMO elements. Along with introducing the main elements of WSMO, we provide a logical language for defining formal statements in WSMO together with some motivating examples from practical use cases which demonstrate the benefits of Semantic Web Services.

1,367 citations


Journal ArticleDOI
TL;DR: The urgent need for service composition is discussed, the required technologies to perform service composition are presented, and several different composition strategies, based on some currently existing composition platforms and frameworks are presented.
Abstract: Owing to their heterogeneous nature, which stems from the definition of several XML-based standards to overcome platform and language dependence, web services have become an emerging and promising technology for designing and building complex inter-enterprise business applications out of single web-based software components. The goal of establishing a global component market, and thereby encouraging extensive software reuse, has made service composition the subject of considerable research effort. This paper discusses the urgent need for service composition and the technologies required to perform it. It also presents several different composition strategies, based on currently existing composition platforms and frameworks that represent first implementations of state-of-the-art technologies, and gives an outlook on essential future research work.

920 citations


Book ChapterDOI
01 Jan 2005
TL;DR: The vision of a Semantic Web has recently drawn considerable attention, both from academia and industry, and description logics are often named as one of the tools that can support the Semantic Web and thus help to make this vision a reality.
Abstract: The vision of a Semantic Web has recently drawn considerable attention, both from academia and industry. Description logics are often named as one of the tools that can support the Semantic Web and thus help to make this vision reality.

484 citations


Journal ArticleDOI
TL;DR: The implementation and architecture of the METEOR-S Web Service Discovery Infrastructure is described, which leverages peer-to-peer computing as a scalable solution and an ontology-based approach to organize registries into domains, enabling domain based classification of all Web services.
Abstract: Web services are the new paradigm for distributed computing. They have much to offer towards interoperability of applications and integration of large scale distributed systems. To make Web services accessible to users, service providers use Web service registries to publish them. Current infrastructure of registries requires replication of all Web service publications in all Universal Business Registries. Large growth in number of Web services as well as the growth in the number of registries would make this replication impractical. In addition, the current Web service discovery mechanism is inefficient, as it does not support discovery based on the capabilities of the services, leading to a lot of irrelevant matches. Semantic discovery or matching of services is a promising approach to address this challenge. In this paper, we present a scalable, high performance environment for Web service publication and discovery among multiple registries. This work uses an ontology-based approach to organize registries into domains, enabling domain based classification of all Web services. Each of these registries supports semantic publication and discovery of Web services. We believe that the semantic approach suggested in this paper will significantly improve Web service publication and discovery involving a large number of registries. This paper describes the implementation and architecture of the METEOR-S Web Service Discovery Infrastructure, which leverages peer-to-peer computing as a scalable solution.
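The domain-based organization of registries that the abstract describes can be illustrated with a small sketch. The domain names, keyword sets, and service descriptions below are invented for illustration; this is not the METEOR-S API, just the underlying idea of classifying a service into one domain registry instead of replicating it everywhere.

```python
# Hypothetical sketch: route service publications to domain registries by
# matching service descriptions against per-domain ontology keywords.

DOMAIN_ONTOLOGY = {            # invented domain -> characteristic terms
    "travel":  {"flight", "hotel", "booking", "itinerary"},
    "finance": {"payment", "invoice", "currency", "credit"},
}

def classify(description: str) -> str:
    """Pick the domain whose terms overlap most with the description."""
    words = set(description.lower().split())
    return max(DOMAIN_ONTOLOGY, key=lambda d: len(DOMAIN_ONTOLOGY[d] & words))

def publish(registries: dict, name: str, description: str) -> str:
    """Publish into the single matching domain registry (no global replication)."""
    domain = classify(description)
    registries.setdefault(domain, []).append(name)
    return domain
```

A discovery query would then be classified the same way and sent only to the matching registry, which is what makes the approach scale with the number of registries.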

483 citations


Proceedings ArticleDOI
15 Aug 2005
TL;DR: An additional criterion for web page ranking is introduced, namely the distance between a user profile defined using ODP topics and the sets of ODP topics covered by each URL returned in regular web search, and the boundaries of biasing PageRank on subtopics of the ODP are investigated.
Abstract: The Open Directory Project is clearly one of the largest collaborative efforts to manually annotate web pages. This effort involves over 65,000 editors and resulted in metadata specifying topic and importance for more than 4 million web pages. Still, given that this number is just about 0.05 percent of the Web pages indexed by Google, is this effort enough to make a difference? In this paper we discuss how these metadata can be exploited to achieve high quality personalized web search. First, we address this by introducing an additional criterion for web page ranking, namely the distance between a user profile defined using ODP topics and the sets of ODP topics covered by each URL returned in regular web search. We empirically show that this enhancement yields better results than current web search using Google. Then, in the second part of the paper, we investigate the boundaries of biasing PageRank on subtopics of the ODP in order to automatically extend these metadata to the whole web.
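The ranking criterion described above can be sketched as a distance between topic sets in the ODP hierarchy. The topic paths and the "best match over all pairs" aggregation below are illustrative assumptions, not the paper's exact measure.

```python
# Hypothetical sketch: distance between a user's ODP topic profile and the
# ODP topics assigned to a search result.

def topic_distance(a: str, b: str) -> int:
    """Tree distance between two ODP topic paths, e.g. 'Top/Arts/Music':
    number of hops via the deepest common ancestor."""
    pa, pb = a.split("/"), b.split("/")
    common = 0
    for x, y in zip(pa, pb):
        if x != y:
            break
        common += 1
    return (len(pa) - common) + (len(pb) - common)

def profile_distance(profile: list, page_topics: list) -> int:
    """Distance of a page to the profile: closest pair of topics wins."""
    return min(topic_distance(u, t) for u in profile for t in page_topics)
```

A personalized ranker would then prefer results with small profile distance, combining this signal with the regular search engine ranking.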

327 citations


Journal ArticleDOI
TL;DR: A look at how developers are going back to the future by building Web applications using Ajax (Asynchronous JavaScript and XML), a set of technologies mostly developed in the 1990s.
Abstract: Looks at how developers are going back to the future by building Web applications using Ajax (Asynchronous JavaScript and XML), a set of technologies mostly developed in the 1990s. A key advantage of Ajax applications is that they look and act more like desktop applications. Proponents argue that Ajax applications perform better than traditional Web programs. For example, an Ajax application can add or retrieve new data for the page being displayed, and the page updates immediately without reloading.

300 citations


Journal ArticleDOI
01 Jul 2005
TL;DR: This paper presents a case study of the development of standards in the area of cross-organizational workflows based on web services, and discusses two opposing types of standards: those based on SOAP, with tightly coupled designs similar to remote procedure calls, and those based on REST, with loosely coupled designs similar to the navigating of web links.
Abstract: This paper presents a case study of the development of standards in the area of cross-organizational workflows based on web services. We discuss two opposing types of standards: those based on SOAP, with tightly coupled designs similar to remote procedure calls, and those based on REST, with loosely coupled designs similar to the navigating of web links. We illustrate the standardization process, clarify the technical underpinnings of the conflict, and analyze the interests of stakeholders. The decision criteria for each group of stakeholders are discussed. Finally, we present implications for both the workflow and the wider Internet communities.

277 citations


Proceedings ArticleDOI
11 Jul 2005
TL;DR: The Web service execution environment (WSMX) is introduced, a software system that enables the creation and execution of semantic Web services based on the Web service modelling ontology.
Abstract: Web services offer an interoperability model that abstracts from the idiosyncrasies of specific implementations; they were introduced to address the increasing need for seamless interoperability between systems in the business-to-business domain. We analyse the requirements from this domain and show that to fully address interoperability demands we need to make use of semantic descriptions of Web services. We therefore introduce the Web service execution environment (WSMX), a software system that enables the creation and execution of semantic Web services based on the Web service modelling ontology. Providers can use it to register and offer their services and requesters can use it to dynamically discover and invoke relevant services. WSMX allows a requester to discover, mediate and invoke Web services in order to carry out its tasks, based on services available on the Internet.

273 citations


Journal ArticleDOI
TL;DR: In this paper, the authors analyze what research says about the demands that the use of the Web as an information resource in education makes on the support and supervision of students' learning processes.
Abstract: The use of the Web in K–12 education has increased substantially in recent years. The Web, however, does not support the learning processes of students as a matter of course. In this review, the authors analyze what research says about the demands that the use of the Web as an information resource in education makes on the support and supervision of students’ learning processes. They discuss empirical research focusing on the limitations of the actual search strategies of children, as well as theoretical literature that analyzes specific characteristics of the Web and their implications for the organization of education. The authors conclude that students need support in searching on the Web as well as in developing “information literacy.” Future research should focus on how the use of the Web in education can contribute to the development of deep and meaningful knowledge.

268 citations


Proceedings Article
30 Aug 2005
TL;DR: In this article, the authors present Colombo, a framework in which web services are characterized in terms of the atomic processes (i.e., operations) they can perform; their impact on the real world (modeled as a relational database); their transition-based behavior; and the messages they can send and receive (from/to other web services and human clients).
Abstract: In this paper we present Colombo, a framework in which web services are characterized in terms of (i) the atomic processes (i.e., operations) they can perform; (ii) their impact on the "real world" (modeled as a relational database); (iii) their transition-based behavior; and (iv) the messages they can send and receive (from/to other web services and "human" clients). As such, Colombo combines key elements from the standards and research literature on (semantic) web services. Using Colombo, we study the problem of automatic service composition (synthesis) and devise a sound, complete and terminating algorithm for building a composite service. Specifically, the paper develops (i) a technique for handling the data, which ranges over an infinite domain, in a finite, symbolic way, and (ii) a technique to automatically synthesize composite web services, based on Propositional Dynamic Logic.

265 citations


Proceedings ArticleDOI
10 May 2005
TL;DR: Thresher is described, a system that lets non-technical users teach their browsers how to extract semantic web content from HTML documents on the World Wide Web, and which enables a rich semantic interaction with existing web pages, "unwrapping" semantic data buried in the pages' HTML.
Abstract: We describe Thresher, a system that lets non-technical users teach their browsers how to extract semantic web content from HTML documents on the World Wide Web. Users specify examples of semantic content by highlighting them in a web browser and describing their meaning. We then use the tree edit distance between the DOM subtrees of these examples to create a general pattern, or wrapper, for the content, and allow the user to bind RDF classes and predicates to the nodes of these wrappers. By overlaying matches to these patterns on standard documents inside the Haystack semantic web browser, we enable a rich semantic interaction with existing web pages, "unwrapping" semantic data buried in the pages' HTML. By allowing end-users to create, modify, and utilize their own patterns, we hope to speed adoption and use of the Semantic Web and its applications.
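The wrapper-induction step described above can be sketched in miniature: generalize two user-highlighted example subtrees into one pattern, wildcarding the text that differs between them. The `(tag, [children])` tuple encoding of DOM subtrees is a toy assumption for illustration, not Thresher's actual tree-edit-distance machinery.

```python
# Illustrative sketch of pattern generalization in the spirit of Thresher.

WILDCARD = "*"

def generalize(a, b):
    """Merge two example subtrees into a pattern; '*' marks a content slot."""
    if isinstance(a, str) and isinstance(b, str):
        return a if a == b else WILDCARD          # differing text -> slot
    if (isinstance(a, tuple) and isinstance(b, tuple)
            and a[0] == b[0] and len(a[1]) == len(b[1])):
        return (a[0], [generalize(x, y) for x, y in zip(a[1], b[1])])
    return WILDCARD                               # structures disagree

# Two example listings with the same layout but different content:
ex1 = ("div", [("span", ["Title:"]), ("b", ["War and Peace"])])
ex2 = ("div", [("span", ["Title:"]), ("b", ["Moby Dick"])])
pattern = generalize(ex1, ex2)
# The '*' slot is where the user would bind an RDF predicate such as dc:title.
```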

Journal ArticleDOI
TL;DR: This article summarizes the SWSA committee's findings, emphasizing its review of requirements gathered from several different environments, and identifies the scope and potential requirements for a semantic Web services architecture.
Abstract: The semantic Web services initiative architecture (SWSA) committee has created a set of architectural and protocol abstractions that serve as a foundation for semantic Web service technologies. This article summarizes the committee's findings, emphasizing its review of requirements gathered from several different environments. We also identify the scope and potential requirements for a semantic Web services architecture.

Book ChapterDOI
04 Apr 2005
TL;DR: This paper defines the notion of usability – an intuitive and locally provable soundness criterion for a given Web services, and demonstrates how the other questions could be answered.
Abstract: This paper is concerned with the application of Web services to distributed, cross-organizational business processes. In this scenario, it is crucial to answer the following questions: Do two Web services fit together in a way such that the composed system is deadlock-free? – the question of compatibility. Can one Web service be replaced by another while the remaining components stay untouched? – the question of equivalence. Can we reason about the soundness of one given Web service without considering the actual environment it will by used in? This paper defines the notion of usability – an intuitive and locally provable soundness criterion for a given Web services. Based on this notion, this paper demonstrates how the other questions could be answered. The presented method is based on Petri nets, because this formalism is widely used for modeling and analyzing business processes. Due to the existing Petri net semantics for BPEL4WS – a language that is in the very act of becoming the industrial standard for Web service based business processes – the results are directly applicable to real world examples.
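The deadlock-freedom question above reduces, for bounded nets, to exhaustive reachability analysis: explore all reachable markings and check that every dead marking is a desired final one. The net encoding and the loop below are a minimal sketch under that boundedness assumption, not the paper's usability criterion itself.

```python
# Minimal sketch: reachability-based deadlock check for a bounded Petri net.
# A transition is a (pre, post) pair of place -> token-count dicts.

def enabled(marking, pre):
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

def deadlock_free(initial, transitions, final_places):
    """True iff every reachable dead marking has tokens only on final places.
    Terminates only for bounded nets (finite reachable state space)."""
    seen, stack = set(), [initial]
    while stack:
        m = stack.pop()
        key = frozenset(m.items())
        if key in seen:
            continue
        seen.add(key)
        successors = [fire(m, pre, post)
                      for pre, post in transitions if enabled(m, pre)]
        if not successors and any(
                n > 0 and p not in final_places for p, n in m.items()):
            return False                           # undesired deadlock
        stack.extend(successors)
    return True
```

Composing two service models and running such a check is one concrete way to answer the compatibility question for small examples.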

Proceedings Article
30 Jul 2005
TL;DR: This paper starts from descriptions of web services in standard process modeling and execution languages and automatically translates them into a planning domain that models the interactions among services at the knowledge level, to avoid the explosion of the search space due to the usually large and possibly infinite ranges of data values that are exchanged among services.
Abstract: In this paper, we address the problem of the automated composition of web services by planning on their "knowledge level" models. We start from descriptions of web services in standard process modeling and execution languages, like BPEL4WS, and automatically translate them into a planning domain that models the interactions among services at the knowledge level. This allows us to avoid the explosion of the search space due to the usually large and possibly infinite ranges of data values that are exchanged among services, and thus to scale up the applicability of state-of-the-art techniques for the automated composition of web services. We present the theoretical framework, implement it, and provide an experimental evaluation that shows the practical advantage of our approach w.r.t. techniques that are not based on a knowledge-level representation.

Journal ArticleDOI
TL;DR: This paper presents an agent-based and context-oriented approach that supports the composition of Web services, where software agents engage in conversations with their peers to agree on the Web services that participate in this process.
Abstract: This paper presents an agent-based and context-oriented approach that supports the composition of Web services. A Web service is an accessible application that other applications and humans can discover and invoke to satisfy multiple needs. To reduce the complexity of composing Web services, two concepts are put forward, namely, software agent and context. A software agent is an autonomous entity that acts on behalf of users and the context is any relevant information that characterizes a situation. During the composition process, software agents engage in conversations with their peers to agree on the Web services that participate in this process. Conversations between agents take into account the execution context of the Web services. The security of the computing resources on which the Web services are executed constitutes another core component of the agent-based and context-oriented approach presented in this paper.

Book ChapterDOI
06 Nov 2005
TL;DR: A novel Semantic Web navigation model providing additional navigation paths through Swoogle's search services such as the Ontology Dictionary is proposed, and algorithms for ranking the importance of Semantic Web objects at three levels of granularity – documents, terms and RDF graphs – are developed.
Abstract: Swoogle helps software agents and knowledge engineers find Semantic Web knowledge encoded in RDF and OWL documents on the Web. Navigating such a Semantic Web on the Web is difficult due to the paucity of explicit hyperlinks beyond the namespaces in URIrefs and the few inter-document links like rdfs:seeAlso and owl:imports. In order to solve this issue, this paper proposes a novel Semantic Web navigation model providing additional navigation paths through Swoogle's search services such as the Ontology Dictionary. Using this model, we have developed algorithms for ranking the importance of Semantic Web objects at three levels of granularity: documents, terms and RDF graphs. Experiments show that Swoogle outperforms conventional web search engines and other ontology libraries in finding more ontologies, ranking their importance, and thus promoting the use and emergence of consensus ontologies.
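Document-level importance ranking of the kind Swoogle performs can be sketched as a PageRank-style iteration over the inter-document link graph (owl:imports, rdfs:seeAlso, namespace references). The link graph, damping factor, and dangling-node treatment below are standard PageRank assumptions for illustration, not Swoogle's exact OntoRank formula.

```python
# Illustrative sketch: PageRank-style importance over Semantic Web documents.

def rank_documents(links: dict, damping: float = 0.85, iters: int = 50) -> dict:
    """links: doc -> list of docs it references. Returns importance scores
    that sum to 1 across all documents."""
    docs = set(links) | {d for outs in links.values() for d in outs}
    n = len(docs)
    score = {d: 1.0 / n for d in docs}
    for _ in range(iters):
        new = {d: (1 - damping) / n for d in docs}
        for d in docs:
            outs = links.get(d, [])
            if outs:
                share = damping * score[d] / len(outs)
                for t in outs:
                    new[t] += share
            else:                      # dangling document: spread evenly
                for t in docs:
                    new[t] += damping * score[d] / n
        score = new
    return score
```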

Proceedings ArticleDOI
03 Jan 2005
TL;DR: This paper equips government agencies with a model that can not only evaluate their Web-based e-government services, but also helps them understand why their Web sites succeed or fail to help citizens find needed information.
Abstract: One of the challenges in delivering e-government services is to design the Web sites to make it easier for citizens to find desired information. However, little work is found to evaluate e-government services in this sense. In addition, current efforts on government Web site design mainly concentrate on Web site features that would enhance its usability, but few of them answer why some Web designs are better than others at facilitating citizens' information seeking. This paper aims to contribute to both aspects: it equips government agencies with a model that can not only evaluate their Web-based e-government services, but also help them understand why their Web sites succeed or fail to help citizens find needed information. In addition to the model itself, instruments for applying this model are also developed.

Book ChapterDOI
06 Nov 2005
TL;DR: Piggy Bank is a web browser extension that lets users make use of Semantic Web content within Web content as they browse, and Semantic Bank is a web server application that lets Piggy Bank users share the Semantic Web information they have collected, enabling collaborative efforts to build sophisticated Semantic Web information repositories through simple, everyday use of Piggy Bank.
Abstract: The Semantic Web Initiative envisions a Web wherein information is offered free of presentation, allowing more effective exchange and mixing across web sites and across web pages. But without substantial Semantic Web content, few tools will be written to consume it; without many such tools, there is little appeal to publish Semantic Web content. To break this chicken-and-egg problem, thus enabling more flexible information access, we have created a web browser extension called Piggy Bank that lets users make use of Semantic Web content within Web content as they browse the Web. Wherever Semantic Web content is not available, Piggy Bank can invoke screenscrapers to re-structure information within web pages into Semantic Web format. Through the use of Semantic Web technologies, Piggy Bank provides direct, immediate benefits to users in their use of the existing Web. Thus, the existence of even just a few Semantic Web-enabled sites or a few scrapers already benefits users. Piggy Bank thereby offers an easy, incremental upgrade path to users without requiring a wholesale adoption of the Semantic Web's vision. To further improve this Semantic Web experience, we have created Semantic Bank, a web server application that lets Piggy Bank users share the Semantic Web information they have collected, enabling collaborative efforts to build sophisticated Semantic Web information repositories through simple, everyday use of Piggy Bank.
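The re-structuring step a screenscraper performs can be sketched very simply: turn the fields scraped from a page into RDF triples about a subject resource. The subject URI, vocabulary namespace, and record fields below are invented for illustration; this is not Piggy Bank's actual scraper interface.

```python
# Hypothetical sketch: scraped page fields -> RDF-style triples.

def record_to_triples(subject_uri: str, record: dict, vocab: str) -> list:
    """Turn a scraped field/value record into (subject, predicate, object)
    triples, one per field, using a vocabulary namespace prefix."""
    return [(subject_uri, vocab + field, value)
            for field, value in sorted(record.items())]

triples = record_to_triples(
    "http://example.org/item/42",
    {"title": "Used bicycle", "price": "80 USD"},
    "http://example.org/vocab#",
)
```

Once page content is in triple form, it can be merged with triples from any other page or site, which is the "mixing across web sites" the abstract envisions.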

Book
24 Mar 2005
TL;DR: This proceedings volume includes an efficient algorithm for OWL-S based semantic search in UDDI, a semantic approach for designing e-business protocols, and the METEOR-S Web Service Annotation Framework with machine learning classification.
Abstract: Contents: …to Semantic Web Services and Web Process Composition. Panel: Academic and Industrial Research: Do Their Approaches Differ in Adding Semantics to Web Services? Talk: Interoperability in Semantic Web Services. Full papers: Bringing Semantics to Web Services: The OWL-S Approach; A Survey of Automated Web Service Composition Methods; Enhancing Web Services Description and Discovery to Facilitate Composition; Compensation in the World of Web Services Composition; Trust Negotiation for Semantic Web Services; An Efficient Algorithm for OWL-S Based Semantic Search in UDDI; A Semantic Approach for Designing E-Business Protocols; Towards Automatic Discovery of Web Portals; METEOR-S Web Service Annotation Framework with Machine Learning Classification.

Proceedings ArticleDOI
02 Apr 2005
TL;DR: A comparison of different methods for finding accessibility problems affecting users who are blind finds multiple developers, using a screen reader, were most consistently successful at finding most classes of problems, and tended to find about 50% of known problems.
Abstract: Web access for users with disabilities is an important goal and challenging problem for web content developers and designers. This paper presents a comparison of different methods for finding accessibility problems affecting users who are blind. Our comparison focuses on techniques that might be of use to Web developers without accessibility experience, a large and important group that represents a major source of inaccessible pages. We compare a laboratory study with blind users to an automated tool, expert review by web designers with and without a screen reader, and remote testing by blind users. Multiple developers, using a screen reader, were most consistently successful at finding most classes of problems, and tended to find about 50% of known problems. Surprisingly, a remote study with blind users was one of the least effective methods. All of the techniques, however, had different, complementary strengths and weaknesses.

Journal ArticleDOI
TL;DR: This article shows how Web process composition techniques can be enhanced by using semantic process templates to capture the semantic requirements of the process, using Semantic Web techniques for process template definition and Web service discovery.
Abstract: Web services have the potential to revolutionize e-commerce by enabling businesses to interact with each other on the fly. To date, however, Web processes using Web services have been created mostly at the syntactic level. Current composition standards focus on building processes based on the interface description of the participating services. This rigid approach, with its strong coupling between the process and the interface of the participating services, does not allow businesses to dynamically change partners and services. As shown in this article, Web process composition techniques can be enhanced by using semantic process templates to capture the semantic requirements of the process. The semantic process templates act as configurable modules for common industry processes maintaining the semantics of the participating activities, control flow, intermediate calculations, and conditional branches, and exposing them in an industry-accepted interface. The templates are instantiated to form executable processes according to the semantics of the activities in the templates. The use of ontologies in template definition allows much richer description of activity requirements and a more effective way of locating services to carry out activities in the executable Web process. Discovery of services considers not only functionality, but also the quality of service (QoS) of the corresponding activities. This unique approach combines the expressive power of present Web service composition standards with the advantages of Semantic Web techniques for process template definition and Web service discovery. The prototype implementation of the framework for building the templates carries out Semantic Web service discovery and generates the processes.

Journal ArticleDOI
TL;DR: This longitudinal benchmark study shows that European Web searching is evolving in certain directions, and European search topics are broadening, with a notable percentage decline in sexual and pornographic searching.
Abstract: The Web has become a worldwide source of information and a mainstream business tool. It is changing the way people conduct the daily business of their lives. As these changes are occurring, we need to understand what Web searching trends are emerging within the various global regions. What are the regional differences and trends in Web searching, if any? What is the effectiveness of Web search engines as providers of information? As part of a body of research studying these questions, we have analyzed two data sets collected from queries by mainly European users submitted to AlltheWeb.com on 6 February 2001 and 28 May 2002. AlltheWeb.com is a major and highly rated European search engine. Each data set contains approximately a million queries submitted by over 200,000 users and spans a 24-h period. This longitudinal benchmark study shows that European Web searching is evolving in certain directions. There was some decline in query length, with extremely simple queries. European search topics are broadening, with a notable percentage decline in sexual and pornographic searching. The majority of Web searchers view fewer than five Web documents, spending only seconds on a Web document. Approximately 50% of the Web documents viewed by these European users were topically relevant. We discuss the implications for Web information systems and information content providers.

Journal ArticleDOI
01 Jun 2005
TL;DR: An overview of the fundamental assumptions and concepts underlying current work on service composition are presented, and a sampling of key results in the area are provided.
Abstract: Web services technologies enable flexible and dynamic interoperation of autonomous software and information systems. A central challenge is the development of modeling techniques and tools for enabling the (semi-)automatic composition and analysis of these services, taking into account their semantic and behavioral properties. This paper presents an overview of the fundamental assumptions and concepts underlying current work on service composition, and provides a sampling of key results in the area. It also provides a brief tour of several composition models including semantic web services, the "Roman" model, and the Mealy / conversation model.

Proceedings ArticleDOI
10 May 2005
TL;DR: This work presents the first integrated work in composing web services end to end from specification to deployment by synergistically combining the strengths of the above approaches.
Abstract: The demand for quickly delivering new applications is increasingly becoming a business imperative today. Application development is often done in an ad hoc manner, without standard frameworks or libraries, thus resulting in poor reuse of software assets. Web services have received much interest in industry due to their potential in facilitating seamless business-to-business or enterprise application integration. A web services composition tool can help automate the process, from creating business process functionality, to developing executable workflows, to deploying them on an execution environment. However, we find that the main approaches taken thus far to standardize and compose web services are piecemeal and insufficient. The business world has adopted a (distributed) programming approach in which web service instances are described using WSDL, composed into flows with a language like BPEL and invoked with the SOAP protocol. Academia has propounded the AI approach of formally representing web service capabilities in ontologies, and reasoning about their composition using goal-oriented inferencing techniques from planning. We present the first integrated work in composing web services end to end from specification to deployment by synergistically combining the strengths of the above approaches. We describe a prototype service creation environment along with a use-case scenario, and demonstrate how it can significantly speed up the time-to-market for new services.

Journal ArticleDOI
01 Jun 2005
TL;DR: The World Wide Web is a context in which traditional Information Retrieval methods are challenged, and given the volume of the Web and its speed of change, the coverage of modern search engines is relatively small.
Abstract: The key factors for the success of the World Wide Web are its large size and the lack of a centralized control over its contents. Both issues are also the most important source of problems for locating information. The Web is a context in which traditional Information Retrieval methods are challenged, and given the volume of the Web and its speed of change, the coverage of modern search engines is relatively small. Moreover, the distribution of quality is very skewed, and interesting pages are scarce in comparison with the rest of the content.

Journal ArticleDOI
TL;DR: A suite of methods that assess the similarity between two WSDL (Web Service Description Language) specifications based on the structure of their data types and operations and the semantics of their natural language descriptions and identifiers are developed.
Abstract: The web-services stack of standards is designed to support the reuse and interoperation of software components on the web. A critical step in the process of developing applications based on web services is service discovery, i.e. the identification of existing web services that can potentially be used in the context of a new web application. Discovery through catalog-style browsing (such as supported currently by web-service registries) is clearly insufficient. To support programmatic service discovery, we have developed a suite of methods that assess the similarity between two WSDL (Web Service Description Language) specifications based on the structure of their data types and operations and the semantics of their natural language descriptions and identifiers. Given only a textual description of the desired service, a semantic information-retrieval method can be used to identify and order the most relevant WSDL specifications based on the similarity of the element descriptions of the available specifications with the query. If a (potentially partial) specification of the desired service behavior is also available, this set of likely candidates can be further refined by a semantic structure-matching step, assessing the structural similarity of the desired vs the retrieved services and the semantic similarity of their identifiers. In this paper, we describe and experimentally evaluate our suite of service-similarity assessment methods.
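The two-signal idea in the abstract – textual similarity of descriptions combined with structural similarity of operations – can be sketched with simple set overlap. The token-level Jaccard measures and the 50/50 weighting below are assumptions for illustration; the paper's actual methods use richer semantic matching of descriptions and identifiers.

```python
# Illustrative sketch: combine description similarity and operation-signature
# similarity to score two service specifications.

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 1.0

def service_similarity(s1: dict, s2: dict) -> float:
    """Each service: {'description': str, 'operations': {name: (in, out)}}."""
    text = jaccard(set(s1["description"].lower().split()),
                   set(s2["description"].lower().split()))
    sig1 = set(s1["operations"].items())
    sig2 = set(s2["operations"].items())
    structure = jaccard(sig1, sig2)
    return 0.5 * text + 0.5 * structure   # assumed equal weighting
```

Ranking a registry's WSDL specifications by this score against a query specification gives the programmatic discovery behavior the abstract describes, in miniature.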

Journal ArticleDOI
TL;DR: The cohesion metrics examine the fundamental quality of cohesion as it relates to ontologies in order to effectively make use of domain specific ontology development.
Abstract: Recently, domain specific ontology development has been driven by research on the Semantic Web. Ontologies have been suggested for use in many application areas targeted by the Semantic Web, such as dynamic web service composition and general web service matching. Fundamental characteristics of these ontologies must be determined in order to effectively make use of them: for example, Sirin, Hendler and Parsia have suggested that determining fundamental characteristics of ontologies is important for dynamic web service composition. Our research examines cohesion metrics, which measure the fundamental quality of cohesion as it relates to ontologies.
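One plausible way to make "cohesion" concrete, sketched below, is to model the ontology as an undirected graph of classes linked by relations and score how connected it is. This is a hypothetical measure for illustration only, not the specific metrics the paper proposes: here cohesion is the fraction of classes in the largest connected component.

```python
from collections import defaultdict


def cohesion(classes, relations):
    """Size of the largest connected component over the total class count."""
    graph = defaultdict(set)
    for a, b in relations:
        graph[a].add(b)
        graph[b].add(a)

    seen, best = set(), 0
    for cls in classes:
        if cls in seen:
            continue
        # depth-first traversal of one component
        stack, size = [cls], 0
        seen.add(cls)
        while stack:
            node = stack.pop()
            size += 1
            for neighbor in graph[node]:
                if neighbor not in seen:
                    seen.add(neighbor)
                    stack.append(neighbor)
        best = max(best, size)
    return best / len(classes)


# Invented example: "Color" is unrelated to the vehicle hierarchy,
# so only 3 of the 4 classes are cohesively connected.
classes = ["Vehicle", "Car", "Truck", "Color"]
relations = [("Car", "Vehicle"), ("Truck", "Vehicle")]
print(cohesion(classes, relations))  # 0.75
```

Under this toy measure, an ontology whose classes all relate to one another scores 1.0, while a loose bundle of unrelated concept islands scores low, which matches the intuition behind cohesion in software metrics.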

Journal ArticleDOI
31 Jan 2005
TL;DR: The methods people use in their workplace to organize web information for re-use are investigated, including the bookmarking and history list tools provided by web browsers and a variety of other methods and associated tools.
Abstract: This observational study investigates the methods people use in their workplace to organize web information for re-use. In addition to the bookmarking and history list tools provided by web browsers, people observed in our study used a variety of other methods and associated tools. For example, several participants emailed web addresses (URLs) along with comments to themselves and to others. Other methods observed included printing out web pages, saving web pages to the hard drive, pasting the address for a web page into a document and pasting the address into a personal web site. Differences emerged between people according to their workplace role and their relationship to the information they were gathering. Managers, for example, depended heavily on email to gather and disseminate information and did relatively little direct exploration of the Web. A functional analysis helps to explain differences in “keeping” behavior between people and to explain the overall diversity of methods observed. People differ in the functions they require according to their workplace role and the tasks they must perform; methods vary widely in the functions they provide. The functional analysis can also help to assess the likely success of various tools, current and proposed.

Journal ArticleDOI
TL;DR: This paper presents three techniques for automatically composing Web services into Web processes by using their ontological descriptions and relationships to other services, checking semantic similarities between the interfaces of individual services.
Abstract: Discovering and composing individual Web services into more complex yet new and more useful Web processes is an important challenge. In this paper, we present three techniques for (semi-)automatically composing Web services into Web processes by using their ontological descriptions and relationships to other services. In the Interface-Matching Automatic composition technique, possible compositions are obtained by checking semantic similarities between the interfaces of individual services. These compositions are then ranked by their Quality of Service (QoS), and an optimum composition is selected. In the Human-Assisted composition technique, the user selects a service from a ranked list at certain stages. We also address automatic composition in a peer-to-peer network.
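The interface-matching idea can be sketched as forward chaining: a service becomes applicable once the data it requires is available, and its outputs enlarge the pool of available data. The sketch below uses exact-name matching where the paper uses ontology-based semantic similarity, and the service names are invented for illustration.

```python
def compose(services, have, goal):
    """Greedily chain services from available data `have` toward `goal`.

    Each service is a (name, inputs, outputs) triple; a service fires
    when all of its inputs are covered by the data produced so far.
    """
    plan, available = [], set(have)
    progress = True
    while goal not in available and progress:
        progress = False
        for name, inputs, outputs in services:
            if name not in plan and set(inputs) <= available:
                plan.append(name)
                available |= set(outputs)
                progress = True
    return plan if goal in available else None


# Hypothetical services: one maps a zip code to a city, the other a
# city to a weather forecast; chaining them answers a forecast query.
services = [
    ("ZipToCity",   ["zip"],  ["city"]),
    ("CityWeather", ["city"], ["forecast"]),
]
print(compose(services, ["zip"], "forecast"))  # ['ZipToCity', 'CityWeather']
```

A QoS-aware version, as in the paper, would enumerate all such chains and rank them by aggregated quality attributes rather than stopping at the first one found.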

Proceedings ArticleDOI
10 May 2005
TL;DR: This paper uses the Accepted Termination States (ATS) property as a means to express the failure atomicity of a CS required by partners, and a set of transactional rules to assist designers in composing a valid CS with regard to the specified ATS.
Abstract: The recent evolution of the Internet, driven by Web services technology, is extending the role of the Web from a support for information interaction to a middleware for B2B interactions. Indeed, Web services technology allows enterprises to outsource parts of their business processes using Web services, and it also provides the opportunity to dynamically offer new value-added services through the composition of pre-existing Web services. In spite of the growing interest in Web services, current technologies lack efficient transactional support for composite Web services (CSs). In this paper, we propose a transactional approach to ensure the failure atomicity of a CS required by partners. We use the Accepted Termination States (ATS) property as a means to express the required failure atomicity. Partners specify their CS, mainly its control flow, and the required ATS. Then, we use a set of transactional rules to assist designers in composing a valid CS with regard to the specified ATS.
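The ATS idea can be illustrated with a toy model, under the simplifying assumption that every component ends in one of three transactional states and that every combination of component states is reachable. The component names and state vocabulary below are illustrative, not the paper's formal model.

```python
from itertools import product

# Simplified per-component termination states (an assumption of this
# sketch; real transactional models distinguish more states).
STATES = ("completed", "failed", "compensated")


def is_atomic(ats, components):
    """In this toy model, a composition satisfies failure atomicity if
    every possible termination-state tuple is listed in the ATS."""
    return all(ts in ats for ts in product(STATES, repeat=len(components)))


# Designers accept either "both completed" or "one failed, the other
# compensated" for a hypothetical flight-plus-hotel composition.
ats = {
    ("completed", "completed"),
    ("failed", "compensated"),
    ("compensated", "failed"),
}
print(is_atomic(ats, ["FlightBooking", "HotelBooking"]))  # False
# e.g. ("completed", "failed") is possible but not accepted, so the
# designer must add transactional rules (retry, compensation) that
# make such states unreachable.
```

The paper's contribution goes the other way: rather than merely flagging the mismatch, its transactional rules guide the designer toward a control flow whose reachable termination states all fall inside the specified ATS.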