
Showing papers on "Web modeling" published in 2008


Journal ArticleDOI
TL;DR: This paper proposes a class of applications called collective knowledge systems, which unlock the "collective intelligence" of the Social Web with knowledge representation and reasoning techniques of the Semantic Web.

802 citations


Proceedings ArticleDOI
21 Apr 2008
TL;DR: This work conducts a thorough analytical investigation of the plurality of Web service interfaces that exist on the Web today and finds the intriguing result that 63% of the available Web services on the Web are considered to be active.
Abstract: Searching for Web service access points is no longer attached to service registries, as Web search engines have become a new major source for discovering Web services. In this work, we conduct a thorough analytical investigation of the plurality of Web service interfaces that exist on the Web today. Using our Web Service Crawler Engine (WSCE), we collect metadata service information on retrieved interfaces through accessible UBRs, service portals and search engines. We use this data to determine Web service statistics and distribution based on object sizes, types of technologies employed, and the number of functioning services. This statistical data can be used to help determine the current status of Web services. We find the intriguing result that 63% of the available Web services on the Web are considered to be active. We further use our findings to provide insights on improving the service retrieval process.
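
The 63% figure rests on probing whether a published service interface still responds. A minimal sketch of such a liveness check in Python, where the endpoint URLs and the HTTP-200 criterion are illustrative assumptions rather than WSCE's actual implementation:

```python
# Toy liveness probe: fetch each WSDL URL and treat an HTTP 200
# response as "active". Endpoints are invented; this criterion is an
# illustrative stand-in for the paper's WSCE checks.
import urllib.request

wsdl_urls = [
    "http://example.com/services/quote?wsdl",
    "http://example.org/api/weather?wsdl",
]

def is_active(url: str, timeout: float = 5.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # covers URLError, HTTPError, and timeouts
        return False

active = sum(is_active(u) for u in wsdl_urls)
print(f"{active}/{len(wsdl_urls)} probed services appear active")
```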

573 citations


01 Jan 2008
TL;DR: In this paper, the authors identify the primary differences that lead to the properties of interest in Web 2.0, and highlight novel challenges due to the different structures of Web 2.0 sites, richer methods of user interaction, new technologies, and a fundamentally different philosophy.
Abstract: Web 2.0 is a buzzword introduced in 2003/04 which is commonly used to encompass various novel phenomena on the World Wide Web. Although largely a marketing term, some of the key attributes associated with Web 2.0 include the growth of social networks, bi-directional communication, various 'glue' technologies, and significant diversity in content types. We are not aware of a technical comparison between Web 1.0 and 2.0. While most of Web 2.0 runs on the same substrate as 1.0, there are some key differences. We capture those differences and their implications for technical work in this space. Our goal is to identify the primary differences that lead to the properties of interest in Web 2.0 and to characterize them. We identify novel challenges due to the different structures of Web 2.0 sites, richer methods of user interaction, new technologies, and a fundamentally different philosophy. Although a significant amount of past work can be reapplied, some critical thinking is needed for the networking community to analyze the challenges of this new and rapidly evolving environment.

566 citations


Journal ArticleDOI
TL;DR: Novel challenges due to the different structures of Web 2.0 sites, richer methods of user interaction, new technologies, and fundamentally different philosophy are identified.
Abstract: Web 2.0 is a buzzword introduced in 2003-04 which is commonly used to encompass various novel phenomena on the World Wide Web. Although largely a marketing term, some of the key attributes associated with Web 2.0 include the growth of social networks, bi-directional communication, various 'glue' technologies, and significant diversity in content types. We are not aware of a technical comparison between Web 1.0 and 2.0. While most of Web 2.0 runs on the same substrate as 1.0, there are some key differences. We capture those differences and their implications for technical work in this paper. Our goal is to identify the primary differences that lead to the properties of interest in Web 2.0 and to characterize them. We identify novel challenges due to the different structures of Web 2.0 sites, richer methods of user interaction, new technologies, and a fundamentally different philosophy. Although a significant amount of past work can be reapplied, some critical thinking is needed for the networking community to analyze the challenges of this new and rapidly evolving environment.

508 citations


Book ChapterDOI
29 Sep 2008
TL;DR: This paper analyzes the complexity of product description on the Semantic Web and defines the GoodRelations ontology that covers the representational needs of typical business scenarios for commodity products and services.
Abstract: A promising application domain for Semantic Web technology is the annotation of products and services offerings on the Web so that consumers and enterprises can search for suitable suppliers using products and services ontologies. While there has been substantial progress in developing ontologies for types of products and services, namely eClassOWL, this alone does not provide the representational means required for e-commerce on the Semantic Web. Particularly missing is an ontology that allows describing the relationships between (1) Web resources, (2) offerings made by means of those Web resources, (3) legal entities, (4) prices, (5) terms and conditions, and (6) the aforementioned ontologies for products and services. For example, we must be able to say that a particular Web site describes an offer to sell cell phones of a certain make and model at a certain price, that a piano house offers maintenance for pianos that weigh less than 150 kg, or that a car rental company leases out cars of a certain make and model from a set of branches across the country. In this paper, we analyze the complexity of product description on the Semantic Web and define the GoodRelations ontology that covers the representational needs of typical business scenarios for commodity products and services.
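
To make the kind of statement the paper targets concrete, here is a minimal sketch using terms from the published GoodRelations v1 vocabulary; the shop, product, and price are invented, and this is an illustration rather than the paper's own example:

```python
# A toy "cell phone offer" stated with GoodRelations terms and parsed
# with rdflib. Class/property names come from the GoodRelations v1
# namespace; the shop and price are made up for illustration.
from rdflib import Graph

ttl = """
@prefix gr:   <http://purl.org/goodrelations/v1#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.com/#> .

ex:shop a gr:BusinessEntity ;
    rdfs:label "Example Phone Shop" ;
    gr:offers ex:offer1 .

ex:offer1 a gr:Offering ;
    rdfs:label "ACME FonX, new, for sale" ;
    gr:includes ex:fonx ;
    gr:hasPriceSpecification [
        a gr:UnitPriceSpecification ;
        gr:hasCurrency "EUR" ;
        gr:hasCurrencyValue "199.00"^^<http://www.w3.org/2001/XMLSchema#float>
    ] .
"""

g = Graph()
g.parse(data=ttl, format="turtle")
print(f"Parsed {len(g)} triples describing the offer.")
```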

403 citations


Journal ArticleDOI
01 Aug 2008
TL;DR: An algorithm is presented that efficiently navigates the search space of possible input combinations to identify only those that generate URLs suitable for inclusion in the authors' web search index, along with an extensive experimental evaluation validating the effectiveness of the algorithms.
Abstract: The Deep Web, i.e., content hidden behind HTML forms, has long been acknowledged as a significant gap in search engine coverage. Since it represents a large portion of the structured data on the Web, accessing Deep-Web content has been a long-standing challenge for the database community. This paper describes a system for surfacing Deep-Web content, i.e., pre-computing submissions for each HTML form and adding the resulting HTML pages into a search engine index. The results of our surfacing have been incorporated into the Google search engine and today drive more than a thousand queries per second to Deep-Web content. Surfacing the Deep Web poses several challenges. First, our goal is to index the content behind many millions of HTML forms that span many languages and hundreds of domains. This necessitates an approach that is completely automatic, highly scalable, and very efficient. Second, a large number of forms have text inputs and require valid input values to be submitted. We present an algorithm for selecting input values for text search inputs that accept keywords and an algorithm for identifying inputs which accept only values of a specific type. Third, HTML forms often have more than one input and hence a naive strategy of enumerating the entire Cartesian product of all possible inputs can result in a very large number of URLs being generated. We present an algorithm that efficiently navigates the search space of possible input combinations to identify only those that generate URLs suitable for inclusion into our web search index. We present an extensive experimental evaluation validating the effectiveness of our algorithms.
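
The third challenge, pruning the Cartesian product of inputs, is the heart of the algorithm. A toy sketch of the template-growing idea follows; the form, its candidate values, and the stubbed informativeness test are invented (the paper judges informativeness by how distinct the generated result pages are):

```python
# Grow "query templates" (subsets of form inputs) one input at a time,
# extending only templates that pass an informativeness test, instead
# of enumerating the full Cartesian product of all inputs.
from itertools import product

inputs = {
    "make":  ["honda", "toyota"],
    "zip":   ["10001", "94301"],
    "color": ["red", "blue", "green"],
}

def is_informative(template: tuple) -> bool:
    # Stub: pretend the 'color' input never changes the result pages.
    # A real system submits sample values and compares the results.
    return "color" not in template

informative, frontier = [], [()]
while frontier:
    next_frontier = []
    for tmpl in frontier:
        for name in inputs:
            if name in tmpl:
                continue
            cand = tuple(sorted(tmpl + (name,)))
            if cand not in next_frontier and is_informative(cand):
                next_frontier.append(cand)
    informative.extend(next_frontier)
    frontier = next_frontier

urls = [dict(zip(t, vals)) for t in informative
        for vals in product(*(inputs[n] for n in t))]
print(f"{len(urls)} form submissions instead of "
      f"{len(list(product(*inputs.values())))} for the full product")
```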

378 citations


01 Jan 2008
TL;DR: This tutorial will provide participants with a solid foundation from which to begin publishing Linked Data on the Web, as well as to implement applications that consume Linked Data from the Web.
Abstract: The Web is increasingly understood as a global information space consisting not just of linked documents, but also of Linked Data. The Linked Data principles provide a basis for realizing this Web of Data, or Semantic Web. Since early 2007 numerous data sets have been published on the Web according to these principles, in domains as broad as music, books, geographical information, films, people, events, reviews and photos. In combination these data sets consist of over 2 billion RDF triples, interlinked by more than 3 million triples that cross data sets. As this Web of Linked Data continues to grow, and an increasing number of applications are developed that exploit these data sets, there is a growing need for data publishers, researchers, developers and Web practitioners to understand Linked Data principles and practice. Run by some of the leading members of the Linked Data community, this tutorial will address those needs, and provide participants with a solid foundation from which to begin publishing Linked Data on the Web, as well as to implement applications that consume Linked Data from the Web.
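
In practice, consuming Linked Data boils down to dereferencing HTTP URIs and parsing the returned RDF. A minimal sketch, assuming the rdflib library, network access, and DBpedia as a convenient example data set (none of which are prescribed by the tutorial):

```python
# Dereference a Linked Data URI and parse whatever RDF the server
# returns; rdflib sends RDF-friendly Accept headers and follows the
# usual 303 redirect to the data document.
from rdflib import Graph

g = Graph()
g.parse("http://dbpedia.org/resource/Berlin")
for s, p, o in list(g)[:5]:
    print(s, p, o)
print(f"...{len(g)} triples total")
```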

377 citations


Journal ArticleDOI
01 May 2008
TL;DR: This survey focuses on investigating the different research problems, solutions, and directions to deploying Web services that are managed by an integrated Web Service Management System and conducts a comparative study on how current research approaches and projects fit in.
Abstract: Web services are expected to be the key technology in enabling the next installment of the Web in the form of the Service Web. In this paradigm shift, Web services would be treated as first-class objects that can be manipulated much like data is now manipulated using a database management system. Hitherto, Web services have largely been driven by standards. However, there is a strong impetus for defining a solid and integrated foundation that would facilitate the kind of innovations witnessed in other fields, such as databases. This survey focuses on investigating the different research problems, solutions, and directions to deploying Web services that are managed by an integrated Web Service Management System (WSMS). The survey identifies the key features of a WSMS and conducts a comparative study on how current research approaches and projects fit in.

359 citations


Proceedings ArticleDOI
21 Apr 2008
TL;DR: This workshop summary will outline the technical context in which Linked Data is situated, describe developments in the past year through initiatives such as the Linking Open Data community project, and look ahead to the workshop itself.
Abstract: The Web is increasingly understood as a global information space consisting not just of linked documents, but also of Linked Data. More than just a vision, the resulting Web of Data has been brought into being by the maturing of the Semantic Web technology stack, and by the publication of an increasing number of datasets according to the principles of Linked Data. The Linked Data on the Web (LDOW2008) workshop brings together researchers and practitioners working on all aspects of Linked Data. The workshop provides a forum to present the state of the art in the field and to discuss ongoing and future research challenges. In this workshop summary we will outline the technical context in which Linked Data is situated, describe developments in the past year through initiatives such as the Linking Open Data community project, and look ahead to the workshop itself.

351 citations


Book
10 Sep 2008
TL;DR: The authors provide an examination of Web searching from multiple levels of analysis, from theoretical overview to detailed study of term usage, and integrate these different levels of analysis into a coherent picture of how people locate information on the Web using search engines.
Abstract: Web Search: Public Searching of the Web, co-authored by Drs. Amanda Spink and Bernard J. Jansen, is one of the first manuscripts that address the human–system interaction of Web searching in a thorough and complete manner. The authors provide an examination of Web searching from multiple levels of analysis, from theoretical overview to detailed study of term usage, and integrate these different levels of analysis into a coherent picture of how people locate information on the Web using search engines. Drawing primarily on their own research and work in the field, the authors present the temporal changes in, the growth of, and the stability of how people interact with Web search engines. Drs. Spink and Jansen present results from an analysis of multiple search engine data sets over a six-year period, giving a firsthand account of the emergence of Web searching. They also compare and contrast their findings to the results of other researchers in the field, providing a valuable bibliographic resource. This research is directly relevant to those interested in providing information or services on the Web, along with those who research and study the Web as an information resource. Graduate students, academic and corporate researchers, search engine designers, information architects, and search engine optimizers will find the book of particular benefit.

311 citations


Journal ArticleDOI
TL;DR: Service mashups facilitate the design and development of novel and modern Web applications based on easy-to-accomplish end-user service compositions.
Abstract: Web services are becoming a major technology for deploying automated interactions between distributed and heterogeneous applications, and for connecting business processes. Service mashups indicate a way to create new Web applications by combining existing Web resources, utilizing their data and Web APIs. They facilitate the design and development of novel and modern Web applications based on easy-to-accomplish end-user service compositions.

Proceedings ArticleDOI
21 Apr 2008
TL;DR: It is shown that Web 2.0 is not only well suited for learning but also for research on learning: the wealth of available services and their openness regarding APIs and data make it possible to assemble prototypes of technology-supported learning applications in an amazingly short amount of time.
Abstract: The term "Web 2.0" is used to describe applications that distinguish themselves from previous generations of software by a number of principles. Existing work shows that Web 2.0 applications can be successfully exploited for technology-enhanced learning. However, in-depth analyses of the relationship between Web 2.0 technology on the one hand and teaching and learning on the other hand are still rare. In this article, we will analyze the technological principles of the Web 2.0 and describe their pedagogical implications on learning. We will furthermore show that Web 2.0 is not only well suited for learning but also for research on learning: the wealth of available services and their openness regarding APIs and data make it possible to assemble prototypes of technology-supported learning applications in an amazingly short amount of time. These prototypes can be used to evaluate research hypotheses quickly. We will present two example prototypes and discuss the lessons we learned from building and using these prototypes.

Journal ArticleDOI
TL;DR: The Internet and the Web are evolving into a platform for collaboration, sharing, innovation, and user-created content—the so-called Web 2.0 environment; its tools, applications, characteristics, and various types of online groups are described.
Abstract: The Internet and the Web are evolving into a platform for collaboration, sharing, innovation and user-created content—the so-called Web 2.0 environment. This environment includes social and business networks, and it is influencing what people do on the Web and intranets, individually and in groups. This paper describes the Web 2.0 environment, its tools, applications, and characteristics. It also describes various types of online groups, especially social networks, and how they operate in the Web 2.0 environment. Of special interest is the way organization members communicate and collaborate, mainly via wikis and blogs. In addition, the paper includes a proposed triad relational model (Technology–People–Community) of social/work life on the Internet. In particular, social/work groups are becoming sustainable because of the incentives for participants to connect and network with other users. A discussion of group dynamics, based on the human needs for trust, support, and sharing, regardless of whether the setting is a physical or virtual one, follows. Finally, future research directions are outlined.

Proceedings ArticleDOI
09 Dec 2008
TL;DR: A microformat called hRESTS (HTML for RESTful Services) is proposed for machine-readable descriptions of Web APIs, backed by a simple service model, together with two extensions: SA-REST, which captures the facets of public APIs important for mashup developers, and MicroWSMO, which provides support for semantic automation.
Abstract: The Web 2.0 wave brings, among other aspects, the programmable Web: increasing numbers of Web sites provide machine-oriented APIs and Web services. However, most APIs are only described with text in HTML documents. The lack of machine-readable API descriptions affects the feasibility of tool support for developers who use these services. We propose a microformat called hRESTS (HTML for RESTful Services) for machine-readable descriptions of Web APIs, backed by a simple service model. The hRESTS microformat describes the main aspects of services, such as operations, inputs and outputs. We also present two extensions of hRESTS: SA-REST, which captures the facets of public APIs important for mashup developers, and MicroWSMO, which provides support for semantic automation.
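
Since hRESTS annotations are plain HTML class attributes, a crawler can recover the service model with ordinary HTML parsing. A minimal sketch, where the sample markup is invented and the class names follow the microformat described in the paper (assumes the BeautifulSoup library):

```python
# Extract an hRESTS-style service description from annotated HTML.
from bs4 import BeautifulSoup

html = """
<div class="service">
  <span class="label">Hotel Search</span>
  <div class="operation">
    <span class="label">findHotels</span>
    <code class="method">GET</code>
    <code class="address">http://example.com/hotels?q={query}</code>
  </div>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
for svc in soup.find_all(class_="service"):
    print("Service:", svc.find(class_="label").get_text())
    for op in svc.find_all(class_="operation"):
        print("  operation:", op.find(class_="label").get_text(),
              op.find(class_="method").get_text(),
              op.find(class_="address").get_text())
```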

Proceedings ArticleDOI
09 Apr 2008
TL;DR: A novel state-based testing approach specifically designed to exercise Ajax Web applications is proposed; the approach is evaluated on a case study in terms of fault-revealing capability and the amount of manual intervention involved in constructing and refining the required model.
Abstract: Ajax supports the development of rich-client Web applications, by providing primitives for the execution of asynchronous requests and for the dynamic update of the page structure and content. Often, Ajax Web applications consist of a single page whose elements are updated in response to callbacks activated asynchronously by the user or by a server message. These features give rise to new kinds of faults that are hardly revealed by existing Web testing approaches. In this paper, we propose a novel state-based testing approach, specifically designed to exercise Ajax Web applications. The document object model (DOM) of the page manipulated by the Ajax code is abstracted into a state model. Callback executions triggered by asynchronous messages received from the Web server are associated with state transitions. Test cases are derived from the state model based on the notion of semantically interacting events. We evaluate the approach on a case study in terms of fault-revealing capability. We also measure the amount of manual intervention involved in constructing and refining the model required by this approach.
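
The core idea, abstract DOM states linked by event-triggered transitions from which test cases are derived, can be illustrated with a toy model; the states and events below are invented, and simple transition coverage stands in for the paper's notion of semantically interacting events:

```python
# Derive event sequences that cover every transition of a toy
# state model abstracted from an imaginary Ajax page.
from collections import deque

transitions = {
    "empty":      {"type_query": "suggesting"},
    "suggesting": {"pick_item": "detail", "clear": "empty"},
    "detail":     {"back": "suggesting"},
}

def cover_all_transitions(start: str):
    """BFS from the start state until every transition is exercised."""
    uncovered = {(s, e) for s, evs in transitions.items() for e in evs}
    tests, queue = [], deque([(start, [])])
    while uncovered and queue:
        state, path = queue.popleft()
        for event, nxt in transitions.get(state, {}).items():
            if (state, event) in uncovered:
                uncovered.remove((state, event))
                tests.append(path + [event])
            if len(path) < 5:  # bound exploration depth
                queue.append((nxt, path + [event]))
    return tests

for t in cover_all_transitions("empty"):
    print(" -> ".join(t))
```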

Journal ArticleDOI
TL;DR: It is found that Web browsing is a rapid activity even for pages with substantial content, which calls for page designs that allow for cursory reading; characteristic usage patterns for different types of Web sites emphasize the need for more adaptive and customizable Web browsers.
Abstract: In the past decade, the World Wide Web has been subject to dramatic changes. Web sites have evolved from static information resources to dynamic and interactive applications that are used for a broad scope of activities on a daily basis. To examine the consequences of these changes on user behavior, we conducted a long-term client-side Web usage study with twenty-five participants. This report presents results of this study and compares the user behavior with previous long-term browser usage studies, which range in age from seven to thirteen years. Based on the empirical data and the interview results, various implications for the interface design of browsers and Web sites are discussed. A major finding is the decreasing prominence of backtracking in Web navigation. This can largely be attributed to the increasing importance of dynamic, service-oriented Web sites. Users do not navigate on these sites searching for information, but rather interact with an online application to complete certain tasks. Furthermore, the usage of multiple windows and tabs has partly replaced back button usage, posing new challenges for user orientation and backtracking. We found that Web browsing is a rapid activity even for pages with substantial content, which calls for page designs that allow for cursory reading. Click maps provide additional information on how users interact with the Web on page level. Finally, substantial differences were observed between users, and characteristic usage patterns for different types of Web sites emphasize the need for more adaptive and customizable Web browsers.

Journal ArticleDOI
TL;DR: N3Logic is a logic that allows rules to be expressed in a Web environment; it extends RDF with syntax for nested graphs and quantified variables, with predicates for implication and for accessing resources on the Web, and with functions, including cryptographic, string, and math functions.
Abstract: The Semantic Web drives toward the use of the Web for interacting with logically interconnected data. Through knowledge models such as Resource Description Framework (RDF), the Semantic Web provides a unifying representation of richly structured data. Adding logic to the Web implies the use of rules to make inferences, choose courses of action, and answer questions. This logic must be powerful enough to describe complex properties of objects but not so powerful that agents can be tricked by being asked to consider a paradox. The Web has several characteristics that can lead to problems when existing logics are used, in particular, the inconsistencies that inevitably arise due to the openness of the Web, where anyone can assert anything. N3Logic is a logic that allows rules to be expressed in a Web environment. It extends RDF with syntax for nested graphs and quantified variables and with predicates for implication and accessing resources on the Web, as well as functions, including cryptographic, string, and math functions. The main goal of N3Logic is to be a minimal extension to the RDF data model such that the same language can be used for logic and data. In this paper, we describe N3Logic and illustrate through examples why it is an appropriate logic for the Web.
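
A small taste of the syntax the abstract mentions, nested graphs and quantified variables, parsed with rdflib; the facts are invented, and actually applying the rule requires an N3 reasoner such as cwm, which this sketch omits:

```python
# Parse an N3 document containing data plus one rule; "=>" is N3
# shorthand for log:implies, and the braces are nested graphs.
from rdflib import Graph

n3 = """
@prefix : <http://example.com/#> .

:alice :parent :bob .
{ ?x :parent ?y } => { ?y :child ?x } .
"""

g = Graph()
g.parse(data=n3, format="n3")
for s, p, o in g:
    print(s, p, o)  # the rule is one triple whose subject/object are graphs
```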

Journal ArticleDOI
TL;DR: This work introduces VisGets - interactive query visualizations of Web-based information that operate with online information within a Web browser and facilitates the construction of dynamic search queries that combine filters from more than one data dimension.
Abstract: In common Web-based search interfaces, it can be difficult to formulate queries that simultaneously combine temporal, spatial, and topical data filters. We investigate how coordinated visualizations can enhance search and exploration of information on the World Wide Web by easing the formulation of these types of queries. Drawing from visual information seeking and exploratory search, we introduce VisGets - interactive query visualizations of Web-based information that operate with online information within a Web browser. VisGets provide the information seeker with visual overviews of Web resources and offer a way to visually filter the data. Our goal is to facilitate the construction of dynamic search queries that combine filters from more than one data dimension. We present a prototype information exploration system featuring three linked VisGets (temporal, spatial, and topical), and use it to visually explore news items from online RSS feeds.
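
The dynamic queries the paper describes are conjunctions of per-dimension filters. A toy sketch of that idea, with invented items and filter values:

```python
# Each "VisGet" contributes one predicate (time, place, topic);
# the dynamic query is their conjunction over the item stream.
from datetime import date

items = [
    {"date": date(2008, 4, 21), "place": "Beijing", "topic": "web"},
    {"date": date(2008, 5, 2),  "place": "Berlin",  "topic": "semantics"},
    {"date": date(2008, 5, 9),  "place": "Berlin",  "topic": "web"},
]

filters = {
    "temporal": lambda it: date(2008, 5, 1) <= it["date"] <= date(2008, 5, 31),
    "spatial":  lambda it: it["place"] == "Berlin",
    "topical":  lambda it: it["topic"] == "web",
}

matches = [it for it in items if all(f(it) for f in filters.values())]
print(matches)  # only the 9 May Berlin/web item survives all three filters
```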

Journal ArticleDOI
TL;DR: This paper presents a state of the art of semantic wikis, and introduces SweetWiki, an example of an application reconciling two trends of the future web: a semantically augmented web and a web of social applications where every user is an active provider as well as a consumer of information.

Proceedings ArticleDOI
20 Jul 2008
TL;DR: An automated input test generation algorithm that uses runtime values to analyze dynamic code, models the semantics of string operations, and handles operations whose argument and return values may not share a common type is proposed.
Abstract: Web applications routinely handle sensitive data, and many people rely on them to support various daily activities, so errors can have severe and broad-reaching consequences. Unlike most desktop applications, many web applications are written in scripting languages, such as PHP. The dynamic features commonly supported by these languages significantly inhibit static analysis, and existing static analyses of these languages can fail to produce meaningful results on real-world web applications. Automated test input generation using the concolic testing framework has proven useful for finding bugs and improving test coverage on C and Java programs, which generally emphasize numeric values and pointer-based data structures. However, scripting languages, such as PHP, promote a style of programming for developing web applications that emphasizes string values, objects, and arrays. In this paper, we propose an automated input test generation algorithm that uses runtime values to analyze dynamic code, models the semantics of string operations, and handles operations whose argument and return values may not share a common type. As in the standard concolic testing framework, our algorithm gathers constraints during symbolic execution. Our algorithm resolves constraints over multiple types by considering each variable instance individually, so that it only needs to invert each operation. By recording constraints selectively, our implementation successfully finds bugs in real-world web applications which state-of-the-art static analysis tools fail to analyze.
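
The essence of concolic testing, running on a concrete input while recording a symbolic path condition over string operations, can be shown in a drastically simplified toy; the handler function is invented and no constraint solver is attached (a real tool would negate recorded branches and solve for inputs that reach new paths):

```python
# Record which string predicates a concrete run observed, as the
# path condition a solver could later negate branch by branch.
path_condition = []

class ConcolicStr(str):
    def startswith(self, prefix):
        result = str.startswith(self, prefix)
        path_condition.append((f"startswith({prefix!r})", result))
        return result
    def __contains__(self, sub):
        result = str.__contains__(self, sub)
        path_condition.append((f"contains({sub!r})", result))
        return result

def handler(query: str) -> str:
    if query.startswith("id="):
        return "lookup"
    if "<script" in query:
        return "reject"
    return "search"

print(handler(ConcolicStr("name=alice")), path_condition)
```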

BookDOI
01 Jan 2008
TL;DR: This chapter discusses the OOWS model-driven approach for developing Web applications, the need for empirical Web engineering, and quality evaluation and experimental Web engineering.
Abstract: Contents: Web Engineering and Web Applications Development; Web Application Development: Challenges and the Role of Web Engineering; The Web as an Application Platform; Web Design Methods; Overview of Design Issues for Web Applications Development; Applying the OOWS Model-Driven Approach for Developing Web Applications: The Internet Movie Database Case Study; Modeling and Implementing Web Applications with OOHDM; UML-Based Web Engineering; Designing Multichannel Web Applications as "Dialogue Systems": The IDM Model; Designing Web Applications with WebML and WebRatio; HERA; WSDM: Web Semantics Design Method; An Overview of Model-Driven Web Engineering and the MDA; Quality Evaluation and Experimental Web Engineering; How to Measure and Evaluate Web Applications in a Consistent Way; The Need for Empirical Web Engineering: An Introduction; Conclusions.


Journal ArticleDOI
TL;DR: This paper studies the dynamic web service selection problem in a failure-prone environment and proposes two strategies to select Web services that are likely to successfully complete the execution of a given sequence of operations.
Abstract: This paper studies the dynamic Web service selection problem in a failure-prone environment, which aims to determine a subset of Web services to be invoked at run-time so as to successfully orchestrate a composite Web service. We observe that both the composite and constituent Web services often constrain the sequences of invoking their operations, and therefore propose to use finite state machines to model the permitted invocation sequences of Web service operations. We assign each state of execution an aggregated reliability to measure the probability that the given state will lead to successful execution in a context where each Web service may fail with some probability. We show that the computation of aggregated reliabilities is equivalent to eigenvector computation and adopt the power method to efficiently derive them. In orchestrating a composite Web service, we propose two strategies to select Web services that are likely to successfully complete the execution of a given sequence of operations. A prototype that implements the proposed approach using BPEL for specifying the invocation order of a Web service is developed and serves as a testbed for comparing our proposed strategies against other baseline Web service selection strategies.
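
The aggregated-reliability computation lends itself to a compact numerical sketch. Here is a toy power-style iteration over an invented three-state execution FSM; the transition probabilities are made up, and the paper's model distinguishes failure states and per-operation semantics in far more detail:

```python
# Fixed-point/power-method iteration for the probability that each
# state eventually reaches the final (success) state.
import numpy as np

# states: 0 = start, 1 = mid, 2 = done. Row i gives P(i -> j), with
# each service's failure probability already folded in.
P = np.array([
    [0.0, 0.95, 0.0],   # start -> mid succeeds 95% of the time
    [0.1, 0.0,  0.85],  # mid may loop back via start, completes 85%
    [0.0, 0.0,  1.0],   # done is absorbing
])

r = np.zeros(3)
r[2] = 1.0  # the final state is reliable by definition
for _ in range(100):
    r_new = P @ r
    r_new[2] = 1.0
    if np.allclose(r_new, r):
        break
    r = r_new

print(r)  # aggregated reliability of start, mid, done
```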

Journal ArticleDOI
TL;DR: This article discusses putting linked semantics and linked social networks together to deliver a much greater effect on the power of the Web.

Journal ArticleDOI
TL;DR: A complete framework and findings are presented for mining Web usage patterns from the Web log files of a real Web site that has all the challenging aspects of real-life Web usage mining, including evolving user profiles and external data describing an ontology of the Web content.
Abstract: In this paper, we present a complete framework and findings in mining Web usage patterns from Web log files of a real Web site that has all the challenging aspects of real-life Web usage mining, including evolving user profiles and external data describing an ontology of the Web content. Even though the Web site under study is part of a nonprofit organization that does not "sell" any products, it was crucial to understand "who" the users were, "what" they looked at, and "how their interests changed with time," all of which are important questions in Customer Relationship Management (CRM). Hence, we present an approach for discovering and tracking evolving user profiles. We also describe how the discovered user profiles can be enriched with explicit information need that is inferred from search queries extracted from Web log data. Profiles are also enriched with other domain-specific information facets that give a panoramic view of the discovered mass usage modes. An objective validation strategy is also used to assess the quality of the mined profiles, in particular their adaptability in the face of evolving user behavior.
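
The profile-discovery step, grouping sessions so recurring access patterns surface as profiles, can be sketched in a few lines; the sessions are invented, and scikit-learn's KMeans stands in for the paper's own clustering and enrichment machinery:

```python
# Represent each user session by the URLs it visited, then cluster
# sessions; each cluster's dominant URLs form a coarse "user profile".
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import KMeans

sessions = [
    "/courses /courses/algebra /faculty",
    "/courses /courses/calculus",
    "/news /events /events/2008",
    "/news /events",
]

X = CountVectorizer(token_pattern=r"\S+").fit_transform(sessions)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for sess, lab in zip(sessions, labels):
    print(lab, sess)  # sessions grouped into two coarse usage profiles
```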

Patent
29 Jul 2008
TL;DR: In this paper, the authors present an application management framework for web applications that may provide speed improvements, capability improvements, user experience improvements, increased advertising profit opportunities, and simplified application development to a wide range of network devices.
Abstract: Various embodiments are directed to an application management framework for web applications that may provide speed improvements, capability improvements, user experience improvements, increased advertising profit opportunities, and simplified application development to a wide range of network devices. The described embodiments may employ techniques for containing, controlling, and presenting multiple web-based applications in a shared web browser application management framework. Sharing a web browser application management framework provides the capability for rapidly switching between applications, allows for multitasking, facilitates using a common set of input controls for applications, and makes it possible for applications to be available with little perceived startup ('boot') time. The described embodiments also provide incentives for web application users, web application developers, web application portal providers, and web advertising providers to share in transactions between one another.

Journal ArticleDOI
01 Jan 2008
TL;DR: A machine-learning-based approach that combines Web content analysis and Web structure analysis is proposed, representing each Web page by a set of content-based and link-based features that can be used as the input for various machine learning algorithms.
Abstract: As the Web continues to grow, it has become increasingly difficult to search for relevant information using traditional search engines. Topic-specific search engines provide an alternative way to support efficient information retrieval on the Web by providing more precise and customized searching in various domains. However, developers of topic-specific search engines need to address two issues: how to locate relevant documents (URLs) on the Web and how to filter out irrelevant documents from a set of documents collected from the Web. This paper reports our research in addressing the second issue. We propose a machine-learning-based approach that combines Web content analysis and Web structure analysis. We represent each Web page by a set of content-based and link-based features, which can be used as the input for various machine learning algorithms. The proposed approach was implemented using both a feedforward/backpropagation neural network and a support vector machine. Two experiments were designed and conducted to compare the proposed Web-feature approach with two existing Web page filtering methods - a keyword-based approach and a lexicon-based approach. The experimental results showed that the proposed approach in general performed better than the benchmark approaches, especially when the number of training documents was small. The proposed approaches can be applied in topic-specific search engine development and other Web applications such as Web content management.
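
The combination of content-based and link-based features feeds a standard classifier. A minimal sketch, where the feature columns, numbers, and labels are invented toy data (the paper's implementations used a feedforward/backpropagation neural network and a support vector machine):

```python
# Train an SVM on combined content + link features for page filtering.
from sklearn.svm import SVC

# columns: [topic keywords in title, topic keywords in body,
#           in-links from relevant pages]
X = [
    [3, 12, 5],   # relevant pages
    [2,  9, 4],
    [0,  1, 0],   # irrelevant pages
    [1,  0, 1],
]
y = [1, 1, 0, 0]

clf = SVC(kernel="linear").fit(X, y)
print(clf.predict([[2, 7, 3]]))  # classify a new page's feature vector
```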

Journal ArticleDOI
TL;DR: This work formulates the Web-service composition problem in terms of AI planning and network optimization problems to investigate its complexity in detail, and develops a novel AI planning-based heuristic Web-service composition algorithm named WSPR.
Abstract: The main research focus of Web services is to achieve interoperability between distributed and heterogeneous applications. Therefore, flexible composition of Web services to fulfill the given challenging requirements is one of the most important objectives in this research field. However, until now, service composition has been largely an error-prone and tedious process. Furthermore, as the number of available Web services increases, finding the right Web services to satisfy the given goal becomes intractable. In this paper, to address these issues, we propose an AI planning-based framework that enables the automatic composition of Web services, and explore the following issues. First, we formulate the Web-service composition problem in terms of AI planning and network optimization problems to investigate its complexity in detail. Second, we analyze publicly available Web service sets using network analysis techniques. Third, we develop a novel Web-service benchmark tool called WSBen. Fourth, we develop a novel AI planning-based heuristic Web-service composition algorithm named WSPR. Finally, we conduct extensive experiments to verify WSPR against state-of-the-art AI planners. It is our hope that both WSPR and WSBen will provide useful insights for researchers developing Web-service discovery and composition algorithms and software.
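
Viewed as planning, composition is a search from available parameters to goal parameters through service input/output signatures. A toy forward-search sketch follows; the services, parameters, and greedy strategy are invented and far simpler than WSPR's heuristics:

```python
# Chain services whose inputs are already available until the goal
# parameters become reachable (greedy forward search).
def compose(services, have, goal):
    plan = []
    while not goal <= have:
        step = next((name for name, (ins, outs) in services.items()
                     if ins <= have and not outs <= have), None)
        if step is None:
            return None  # goal unreachable with these services
        plan.append(step)
        have = have | services[step][1]
    return plan

services = {
    "geocode":   ({"address"}, {"lat", "lon"}),
    "weather":   ({"lat", "lon"}, {"forecast"}),
    "translate": ({"forecast"}, {"forecast_fr"}),
}
print(compose(services, {"address"}, {"forecast_fr"}))
# -> ['geocode', 'weather', 'translate']
```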

Journal ArticleDOI
TL;DR: This paper outlines a semantic weblogs scenario that illustrates the potential for combining Web 2.0 and Semantic Web technologies, while highlighting the unresolved issues that impede its realization.

Journal ArticleDOI
TL;DR: This paper presents the design and implementation of two solutions that combine REST-based design and RDF data access: one solution for integrating existing web services and one server-side solution for creating RDF REST services.