
Showing papers on "Web standards" published in 2009


Journal ArticleDOI
TL;DR: The authors describe progress to date in publishing Linked Data on the Web, review applications that have been developed to exploit the Web of Data, and map out a research agenda for the Linked Data community as it moves forward.
Abstract: The term "Linked Data" refers to a set of best practices for publishing and connecting structured data on the Web. These best practices have been adopted by an increasing number of data providers over the last three years, leading to the creation of a global data space containing billions of assertions—the Web of Data. In this article, the authors present the concept and technical principles of Linked Data, and situate these within the broader context of related technological developments. They describe progress to date in publishing Linked Data on the Web, review applications that have been developed to exploit the Web of Data, and map out a research agenda for the Linked Data community as it moves forward.

5,113 citations


Book
23 Sep 2009

1,494 citations


Book
06 Aug 2009
TL;DR: This book concentrates on Semantic Web technologies standardized by the World Wide Web Consortium: RDF and SPARQL enable data exchange and querying, RDFS and OWL provide expressive ontology modeling, and RIF supports rule-based modeling.
Abstract: With more substantial funding from research organizations and industry, numerous large-scale applications, and recently developed technologies, the Semantic Web is quickly emerging as a well-recognized and important area of computer science. While Semantic Web technologies are still rapidly evolving, Foundations of Semantic Web Technologies focuses on the established foundations in this area that have become relatively stable over time. It thoroughly covers basic introductions and intuitions, technical details, and formal foundations. The book concentrates on Semantic Web technologies standardized by the World Wide Web Consortium: RDF and SPARQL enable data exchange and querying, RDFS and OWL provide expressive ontology modeling, and RIF supports rule-based modeling. The text also describes methods for specifying, querying, and reasoning with ontological information. In addition, it explores topics that are clearly beyond foundations, such as tools, applications, and engineering aspects. Written by highly respected researchers with a deep understanding of the material, this text centers on the formal specifications of the subject and supplies many pointers that are useful for employing Semantic Web technologies in practice. Updates, errata, slides for teaching, and links to further resources are available at http://semantic-web-book.org/

720 citations


Journal ArticleDOI
TL;DR: As work in Web page classification is reviewed, the importance of these Web-specific features and algorithms are noted, state-of-the-art practices are described, and the underlying assumptions behind the use of information from neighboring pages are tracked.
Abstract: Classification of Web page content is essential to many tasks in Web information retrieval such as maintaining Web directories and focused crawling. The uncontrolled nature of Web content presents additional challenges to Web page classification as compared to traditional text classification, but the interconnected nature of hypertext also provides features that can assist the process. As we review work in Web page classification, we note the importance of these Web-specific features and algorithms, describe state-of-the-art practices, and track the underlying assumptions behind the use of information from neighboring pages.

502 citations


Proceedings ArticleDOI
06 Jul 2009
TL;DR: The comprehensive experimental analysis shows that WSRec achieves better prediction accuracy than other approaches, and includes a user-contribution mechanism for Web service QoS information collection and an effective and novel hybrid collaborative filtering algorithm for Web Service QoS value prediction.
Abstract: As the abundance of Web services on the World Wide Web increases, designing effective approaches for Web service selection and recommendation has become more and more important. In this paper, we present WSRec, a Web service recommender system, to attack this crucial problem. WSRec includes a user-contribution mechanism for Web service QoS information collection and an effective and novel hybrid collaborative filtering algorithm for Web service QoS value prediction. WSRec is implemented in Java and deployed in a real-world environment. To study the prediction performance, a total of 21,197 public Web services were obtained from the Internet and a large-scale real-world experiment was conducted, in which more than 1.5 million test results were collected from 150 service users in different countries on 100 publicly available Web services located all over the world. The comprehensive experimental analysis shows that WSRec achieves better prediction accuracy than other approaches.
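The hybrid prediction idea can be illustrated with a toy sketch: blend a user-based estimate of a missing QoS value with a service-based estimate. The sample data, the simple mean-based estimators, and the weight `lam` are illustrative assumptions; the paper's actual algorithm uses similarity-weighted neighbors rather than plain means.

```python
# Minimal sketch of hybrid collaborative filtering for QoS prediction,
# in the spirit of WSRec. Data and weighting are illustrative only.

def predict_qos(matrix, user, service, lam=0.5):
    """Predict a missing QoS value as a weighted blend of the user's mean
    over other services and the service's mean over other users."""
    user_vals = [v for s, v in matrix.get(user, {}).items()
                 if s != service and v is not None]
    svc_vals = [row[service] for u, row in matrix.items()
                if u != user and row.get(service) is not None]
    user_est = sum(user_vals) / len(user_vals) if user_vals else 0.0
    svc_est = sum(svc_vals) / len(svc_vals) if svc_vals else 0.0
    return lam * user_est + (1 - lam) * svc_est

# Observed response times in seconds; None marks the value to predict.
qos = {
    "alice": {"ws1": 0.4, "ws2": 0.8, "ws3": None},
    "bob":   {"ws1": 0.5, "ws2": 0.9, "ws3": 1.2},
    "carol": {"ws1": 0.3, "ws2": 0.7, "ws3": 1.0},
}
pred = predict_qos(qos, "alice", "ws3")
```

With this sample matrix, Alice's predicted response time for ws3 blends her own average over other services (0.6 s) with ws3's average across other users (1.1 s), giving roughly 0.85 s.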

436 citations


Journal ArticleDOI
TL;DR: The authors investigate whether using Web 2.0 concepts and tools can yield better assimilation of knowledge management in organizations.
Abstract: Purpose – The purpose of this paper is to provide an understanding of the Web 2.0 phenomenon and its implications for knowledge management, in order to learn whether using Web 2.0 concepts and tools can yield better assimilation of knowledge management in organizations.

421 citations


Proceedings ArticleDOI
28 Dec 2009
TL;DR: This paper discusses trust management and its connection to the Semantic Web: it defines trust, describes trust negotiations and the relationship between them, and explores how XML access-control mechanisms are used to protect the confidentiality, integrity and availability of ontologies in trust management.
Abstract: The contemporary Web is heading towards its next stage of evolution. From a clump of unorganized information spaces, the Web is becoming more focused on the meaning of information, that is, a Semantic Web. Trust is an integral component of the Semantic Web, allowing people to act under uncertainty and with the risk of negative consequences. In this paper we discuss trust management and its connection to the Semantic Web. We first discuss aspects of the Semantic Web and trust management, including defining trust and describing trust negotiations, and then the relationship between them. After that, we explore how mechanisms of XML access control are used to protect the confidentiality, integrity and availability of ontologies in trust management.

394 citations


Book ChapterDOI
17 Dec 2009
TL;DR: The pioneering role of Berners-Lee in the development of the Web, the accomplishments and vision of the W3C, and the development of the Semantic Web are described.
Abstract: The World Wide Web Consortium (W3C) is the organization that leads the development of standards for the Web. Sir Tim Berners-Lee, the founder and current director of the W3C, envisions a linked network of information resources that guides Web standards development and points the way towards the creation of a Semantic Web. This entry describes the pioneering role of Berners-Lee in the development of the Web, the accomplishments and vision of the W3C, and the development of the Semantic Web.

368 citations


Journal ArticleDOI
TL;DR: The purpose of this paper is both to promote scholarly inquiry into the need for a new type of pedagogy (Web 2.0 based) and to support the development and adoption of best practice in teaching and learning with Web 2.0.

365 citations


Proceedings ArticleDOI
20 Apr 2009
TL;DR: Triplify is implemented as a light-weight software component, which can be easily integrated into and deployed by the numerous, widely installed Web applications and is usable to publish very large datasets, such as 160GB of geo data from the OpenStreetMap project.
Abstract: In this paper we present Triplify - a simplistic but effective approach to publish Linked Data from relational databases. Triplify is based on mapping HTTP-URI requests onto relational database queries. Triplify transforms the resulting relations into RDF statements and publishes the data on the Web in various RDF serializations, in particular as Linked Data. The rationale for developing Triplify is that the largest part of information on the Web is already stored in structured form, often as data contained in relational databases, but usually published by Web applications only as HTML mixing structure, layout and content. In order to reveal the pure structured information behind the current Web, we have implemented Triplify as a light-weight software component, which can be easily integrated into and deployed by the numerous, widely installed Web applications. Our approach includes a method for publishing update logs to enable incremental crawling of linked data sources. Triplify is complemented by a library of configurations for common relational schemata and a REST-enabled data source registry. Triplify configurations containing mappings are provided for many popular Web applications, including osCommerce, WordPress, Drupal, Gallery, and phpBB. We will show that despite its light-weight architecture Triplify is usable to publish very large datasets, such as 160GB of geo data from the OpenStreetMap project.
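The core mapping can be sketched in a few lines: a URL path selects a preconfigured SQL query, and each result row is serialized as RDF triples about a URI minted from the row's primary key. The table, path pattern, and base URI below are illustrative assumptions, not Triplify's actual configuration format.

```python
# Toy sketch of the Triplify idea: HTTP-style URI requests mapped onto
# relational queries, with result rows published as RDF (N-Triples).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER, title TEXT)")
conn.execute("INSERT INTO posts VALUES (1, 'Hello'), (2, 'Linked Data')")

# Triplify-style configuration: URL path -> SQL query (illustrative).
config = {"/posts": "SELECT id, title FROM posts ORDER BY id"}

def triplify(path, base="http://example.org"):
    """Run the query configured for `path` and emit one triple per row."""
    triples = []
    for rowid, title in conn.execute(config[path]):
        subj = f"<{base}{path}/{rowid}>"
        triples.append(f'{subj} <http://purl.org/dc/terms/title> "{title}" .')
    return triples

lines = triplify("/posts")
```

Each database row becomes a dereferenceable subject URI, which is how relational content behind a Web application can surface as Linked Data.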

321 citations


Journal ArticleDOI
TL;DR: The paper concludes by stating that the Web has succeeded as a single global information space that has dramatically changed the way the authors use information, disrupted business models, and led to profound societal change.
Abstract: The paper discusses the Semantic Web and Linked Data. The classic World Wide Web is built upon the idea of setting hyperlinks between Web documents. These hyperlinks are the basis for navigating and crawling the Web. Technologically, the core idea of Linked Data is to use HTTP URLs not only to identify Web documents, but also to identify arbitrary real-world entities. Data about these entities is represented using the Resource Description Framework (RDF). Whenever a Web client resolves one of these URLs, the corresponding Web server provides an RDF/XML or RDFa description of the identified entity. These descriptions can contain links to entities described by other data sources. The Web of Linked Data can be seen as an additional layer that is tightly interwoven with the classic document Web. The author mentions the application of Linked Data in media, publications, life sciences, geographic data, user-generated content, and cross-domain data sources. The paper concludes by stating that the Web has succeeded as a single global information space that has dramatically changed the way we use information, disrupted business models, and led to profound societal change.
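The dereferencing idea can be sketched with an in-memory triple store: an HTTP URI names an entity, and resolving it returns the RDF statements about that entity, including owl:sameAs links into other data sources. The store and lookup function are illustrative stand-ins; a real Linked Data server answers over HTTP with content negotiation, returning RDF/XML or RDFa.

```python
# Minimal sketch of Linked Data dereferencing: URIs identify entities,
# and "resolving" a URI yields the triples describing that entity.

store = [
    ("http://dbpedia.org/resource/Berlin", "rdf:type", "dbo:City"),
    ("http://dbpedia.org/resource/Berlin", "owl:sameAs",
     "http://sws.geonames.org/2950159/"),   # link into another data source
    ("http://dbpedia.org/resource/Paris", "rdf:type", "dbo:City"),
]

def dereference(uri):
    """Return all triples describing `uri`, as a Linked Data server would."""
    return [t for t in store if t[0] == uri]

desc = dereference("http://dbpedia.org/resource/Berlin")
```

The owl:sameAs triple is what interweaves data sources into a single Web of Data: a client following it hops from DBpedia's description of Berlin to GeoNames' description of the same city.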

Journal ArticleDOI
TL;DR: The paper questions whether Web 2.0 technologies (social software) are a real panacea for the challenges associated with the management of knowledge and enables a new model of personal knowledge management (PKM) that includes formal and informal communication, collaboration and social networking tools.
Abstract: Purpose – The purpose of this paper is to discuss new approaches for managing personal knowledge in the Web 2.0 era. The paper questions whether Web 2.0 technologies (social software) are a real panacea for the challenges associated with the management of knowledge. Can Web 2.0 reconcile the conflicting interests of managing organisational knowledge with personal objectives? Does Web 2.0 enable a more effective way of sharing and managing knowledge at the personal level? Design/methodology/approach – Theoretically deductive with illustrative examples. Findings – Web 2.0 plays a multifaceted role for communicating, collaborating, sharing and managing knowledge. Web 2.0 enables a new model of personal knowledge management (PKM) that includes formal and informal communication, collaboration and social networking tools. This new PKM model facilitates interaction, collaboration and knowledge exchanges on the web and in organisations. Practical implications – Based on these findings, professionals and scholars will...

Book
27 Apr 2009
TL;DR: This book argues that it can be useful for social scientists to measure aspects of the web and explains how this can be achieved on both a small and large scale.
Abstract: Webometrics is concerned with measuring aspects of the web: web sites, web pages, parts of web pages, words in web pages, hyperlinks, web search engine results. The importance of the web itself as a communication medium and for hosting an increasingly wide array of documents, from journal articles to holiday brochures, needs no introduction. Given this huge and easily accessible source of information, there are limitless possibilities for measuring or counting on a huge scale (e.g., the number of web sites, the number of web pages, the number of blogs) or on a smaller scale (e.g., the number of web sites in Ireland, the number of web pages in the CNN web site, the number of blogs mentioning Barack Obama before the 2008 presidential campaign). This book argues that it can be useful for social scientists to measure aspects of the web and explains how this can be achieved on both a small and large scale. The book is intended for social scientists with research topics that are wholly or partly online (e.g., social networks, news, political communication) and social scientists with offline research topics with an online reflection, even if this is not a core component (e.g., diaspora communities, consumer culture, linguistic change). The book is also intended for library and information science students in the belief that the knowledge and techniques described will be useful for them to guide and aid other social scientists in their research. In addition, the techniques and issues are all directly relevant to library and information science research problems. Table of Contents: Introduction / Web Impact Assessment / Link Analysis / Blog Searching / Automatic Search Engine Searches: LexiURL Searcher / Web Crawling: SocSciBot / Search Engines and Data Reliability / Tracking User Actions Online / Advanced Techniques / Summary and Future Directions

Journal Article
TL;DR: With information systems (IS) classrooms quickly filling with the Google/Facebook generation accustomed to being connected to information sources and social networks all the time and in many forms, how can these technologies be used to transform, supplement, or even supplant current pedagogical practices?
Abstract: 1. INTRODUCTION Whether it is a social networking site like Facebook, a video stream delivered via YouTube, or collaborative discussion and document sharing via Google Apps, more people are using Web 2.0 and virtual world technologies in the classroom to communicate, express ideas, and form relationships centered around topical interests. Virtual Worlds immerse participants even deeper in technological realms rife with interaction. Instead of simply building information, people create entire communities comprised of self-built worlds and avatars centered around common interests, learning, or socialization in order to promote information exchange. With information systems (IS) classrooms quickly filling with the Google/Facebook generation accustomed to being connected to information sources and social networks all the time and in many forms, how can we best use these technologies to transform, supplement, or even supplant current pedagogical practices? Will holding office hours in a chat room make a difference? What about creating collaborative Web content with Wikis? How about demonstrations of complex concepts in a Virtual World so students can experiment endlessly? In this JISE special issue, we will explore these questions and more. 2. TYPES OF WEB 2.0 TECHNOLOGIES Web 2.0 technologies encompass a variety of different meanings that include an increased emphasis on user generated content, data and content sharing, collaborative effort, new ways of interacting with Web-based applications, and the use of the Web as a social platform for generating, repositioning and consuming content. The beginnings of the shared content nature of Web 2.0 appeared in 1980 in Tim Berners-Lee's prototype Web software. However, the content sharing aspects of the Web were lost in the original rollout, and did not reappear until Ward Cunningham wrote the first wiki in 1994-1995. 
Blogs, another early part of the Web 2.0 phenomenon, were sufficiently developed to gain the name weblogs in 1997 (Franklin & van Harmelen, 2007). The first use of the term Web 2.0 was in 2004 (Graham, 2005; O'Reilly, 2005a; O'Reilly, 2005b). "Web 2.0 refers to a perceived second generation of Web development and design that facilitates communications and secures information sharing, interoperability, and collaboration on the World Wide Web. Web 2.0 concepts have led to the development and evolution of Web-based communities, hosted services, and applications; such as social-networking sites, video-sharing sites, wikis, blogs, and folksonomies" (Web 2.0, 2009). The emphasis on user participation--also known as the "Read/Write" Web--characterizes most people's definitions of Web 2.0. There are many types of Web 2.0 technologies and new offerings appear almost daily. The following are some basic categories in which we can classify most Web 2.0 offerings. 2.1 Wikis A "wiki" is a collection of Web pages designed to enable anyone with access to contribute or modify content, using a simplified markup language, and is often used to create collaborative Websites (Wiki, 2009). One of the best known wikis is Wikipedia. Wikis can be used in education to facilitate knowledge systems powered by students (Raman, Ryan, & Olfman, 2005). 2.2 Blogs A blog (weblog) is a type of Website, usually maintained by an individual with regular commentary entries, event descriptions, or other material such as graphics or video. One example of the use of blogs in education is the use of question blogging, a type of blog that answers questions. Moreover, these questions and discussions can be a collaborative endeavor among instructors and students. Wagner (2003) addressed using blogs in education by publishing learning logs. 
2.3 Podcasts A podcast is a digital media file, usually digital audio or video that is freely available for download from the Internet using software that can handle RSS feeds (Podcast, 2009). …

Journal ArticleDOI
TL;DR: The base of Web 3.0 applications resides in the Resource Description Framework (RDF), which provides a means to link data from multiple Web sites or databases; with the SPARQL query language, applications can use native graph-based RDF stores and extract RDF data from traditional databases.
Abstract: While Web 3.0 technologies are difficult to define precisely, the outline of emerging applications has become clear over the past year. We can thus essentially view Web 3.0 as semantic Web technologies integrated into, or powering, large-scale Web applications. The base of Web 3.0 applications resides in the Resource Description Framework (RDF), which provides a means to link data from multiple Web sites or databases. With the SPARQL query language, a SQL-like standard for querying RDF data, applications can use native graph-based RDF stores and extract RDF data from traditional databases.
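The RDF + SPARQL pairing described above can be illustrated with a toy pattern matcher: triples form a graph, and a query is a triple pattern in which ?-prefixed terms are variables to bind. This ten-line matcher stands in for a real SPARQL engine; the data and prefixes are illustrative.

```python
# Toy RDF graph and a single-pattern matcher in the spirit of SPARQL.
triples = [
    ("ex:TimBL", "ex:created", "ex:WWW"),
    ("ex:TimBL", "ex:directs", "ex:W3C"),
    ("ex:W3C", "ex:publishes", "ex:RDF"),
]

def match(pattern):
    """Match one (s, p, o) pattern; '?'-prefixed terms are variables."""
    results = []
    for triple in triples:
        binding, ok = {}, True
        for term, value in zip(pattern, triple):
            if term.startswith("?"):
                binding[term] = value   # bind variable to this value
            elif term != value:
                ok = False              # constant mismatch: no solution
                break
        if ok:
            results.append(binding)
    return results

# Analogue of: SELECT ?rel ?what WHERE { ex:TimBL ?rel ?what }
rows = match(("ex:TimBL", "?rel", "?what"))
```

A real SPARQL engine generalizes this to conjunctions of patterns (joins over shared variables), which is what lets applications link data across RDF stores and wrapped relational databases.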

Book
27 Mar 2009
TL;DR: This book examines how this powerful new technology can unify and fully leverage the ever-growing data, information, and services that are available on the Internet.
Abstract: The next major advance in the Web, Web 3.0, will be built on semantic Web technologies, which will allow data to be shared and reused across application, enterprise, and community boundaries. Written by a team of highly experienced Web developers, this book examines how this powerful new technology can unify and fully leverage the ever-growing data, information, and services that are available on the Internet. Helpful examples demonstrate how to use the semantic Web to solve practical, real-world problems while you take a look at the set of design principles, collaborative working groups, and technologies that form the semantic Web. The companion Web site features full code, as well as a reference section, a FAQ section, a discussion forum, and a semantic blog.

Journal ArticleDOI
01 Sep 2009
TL;DR: This paper shows how to publish a BPEL process as a RESTful Web service, by exposing selected parts of its execution state using the REST interaction primitives and discusses how the proposed extensions affect the architecture of a process execution engine.
Abstract: Current Web service technology is evolving towards a simpler approach to define Web service APIs that challenges the assumptions made by existing languages for Web service composition. RESTful Web services introduce a new kind of abstraction, the resource, which does not fit well with the message-oriented paradigm of the Web service description language (WSDL). RESTful Web services are thus hard to compose using the Business Process Execution Language (WS-BPEL), due to its tight coupling to WSDL. The goal of the BPEL for REST extensions presented in this paper is twofold. First, we aim to enable the composition of both RESTful Web services and traditional Web services from within the same process-oriented service composition language. Second, we show how to publish a BPEL process as a RESTful Web service, by exposing selected parts of its execution state using the REST interaction primitives. We include a detailed example on how BPEL for REST can be applied to orchestrate a RESTful e-Commerce scenario and discuss how the proposed extensions affect the architecture of a process execution engine.

01 Jan 2009
TL;DR: Web 2.0 tools present a vast array of opportunities—for companies that know how to use them and what to do with them.
Abstract: Web 2.0 tools present a vast array of opportunities—for companies that know how to use them.

Journal ArticleDOI
TL;DR: This article shows how linked data sets can be exploited to build rich Web applications with little effort.
Abstract: Semantic Web technologies have been around for a while. However, such technologies have had little impact on the development of real-world Web applications to date. With linked data, this situation has changed dramatically in the past few months. This article shows how linked data sets can be exploited to build rich Web applications with little effort.

Book
03 Oct 2009
TL;DR: Across these social websites, Breslin et al. demonstrate a twofold approach for interconnecting the islands that are social websites with semantic technologies, and for powering semantic applications with rich community-created content.
Abstract: The Social Web (including services such as MySpace, Flickr, last.fm, and WordPress) has captured the attention of millions of users as well as billions of dollars in investment and acquisition. Social websites, evolving around the connections between people and their objects of interest, are encountering boundaries in the areas of information integration, dissemination, reuse, portability, searchability, automation and demanding tasks like querying. The Semantic Web is an ideal platform for interlinking and performing operations on diverse person- and object-related data available from the Social Web, and has produced a variety of approaches to overcome the boundaries being experienced in Social Web application areas. After a short overview of both the Social Web and the Semantic Web, Breslin et al. describe some popular social media and social networking applications, list their strengths and limitations, and describe some applications of Semantic Web technology to address their current shortcomings by enhancing them with semantics. Across these social websites, they demonstrate a twofold approach for interconnecting the islands that are social websites with semantic technologies, and for powering semantic applications with rich community-created content. They conclude with observations on how the application of Semantic Web technologies to the Social Web is leading towards the "Social Semantic Web" (sometimes also called "Web 3.0"), forming a network of interlinked and semantically-rich content and knowledge. The book is intended for computer science professionals, researchers, and graduates interested in understanding the technologies and research issues involved in applying Semantic Web technologies to social software. Practitioners and developers interested in applications such as blogs, social networks or wikis will also learn about methods for increasing the levels of automation in these forms of Web communication.

Proceedings ArticleDOI
20 Apr 2009
TL;DR: This paper presents a framework, Semantic Web Pipes, that supports fast implementation of Semantic data mash-ups while preserving desirable properties such as abstraction, encapsulation, component-orientation, code re-usability and maintainability which are common and well supported in other application areas.
Abstract: The use of RDF data published on the Web for applications is still a cumbersome and resource-intensive task due to the limited software support and the lack of standard programming paradigms to deal with everyday problems such as combination of RDF data from different sources, object identifier consolidation, ontology alignment and mediation, or plain querying and filtering tasks. In this paper we present a framework, Semantic Web Pipes, that supports fast implementation of Semantic data mash-ups while preserving desirable properties such as abstraction, encapsulation, component-orientation, code re-usability and maintainability which are common and well supported in other application areas.
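The pipes idea can be sketched as small, reusable operators over collections of RDF triples, composed like Unix pipes: one stage merges data from several sources, the next filters or transforms it. The merge and filter operators below are illustrative analogues of Semantic Web Pipes components, not its actual operator set.

```python
# Sketch of pipe-style composition of RDF operators: merge then filter.

def merge(*sources):
    """Combine triples from several sources, dropping duplicates
    (a crude stand-in for identifier consolidation across sources)."""
    seen, out = set(), []
    for src in sources:
        for t in src:
            if t not in seen:
                seen.add(t)
                out.append(t)
    return out

def filter_by_predicate(triples, predicate):
    """Keep only triples whose predicate matches."""
    return [t for t in triples if t[1] == predicate]

# Two overlapping sources, as might come from different RDF publishers.
src_a = [("ex:a", "rdfs:label", "A"), ("ex:a", "rdf:type", "ex:Thing")]
src_b = [("ex:a", "rdfs:label", "A"), ("ex:b", "rdfs:label", "B")]

labels = filter_by_predicate(merge(src_a, src_b), "rdfs:label")
```

Because each operator consumes and produces plain triple collections, stages can be rearranged and reused, which is the encapsulation and component-orientation the paper aims for.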

Proceedings ArticleDOI
06 Jul 2009
TL;DR: The challenges of composing RESTful Web services are discussed, and a formal model for describing individual Web services and automating their composition is proposed and demonstrated by applying it to a real-world RESTful Web service composition problem.
Abstract: Emerging as the popular choice for leading Internet companies to expose internal data and resources, RESTful Web services are attracting increasing attention in the industry. While automating WSDL/SOAP-based Web service composition has been extensively studied in the research community, automated RESTful Web service composition in the context of service-oriented architecture (SOA) is, to the best of our knowledge, less explored. As an early paper addressing this problem, this paper discusses the challenges of composing RESTful Web services and proposes a formal model for describing individual Web services and automating the composition. It demonstrates our approach by applying it to a real-world RESTful Web service composition problem. This paper represents our initial efforts towards the problem of automated RESTful Web service composition. We hope that it will draw interest from the research community on Web services and engage more researchers in this challenge.

Book ChapterDOI
14 Jan 2009
TL;DR: A novel taxonomy is proposed that captures the possible failures that can arise in Web service composition, and classifies the faults that might cause them, and covers physical, development and interaction faults that can cause a variety of observable failures in a system's normal operation.
Abstract: Web services are becoming progressively popular in the building of both inter- and intra-enterprise business processes. These processes are composed from existing Web services based on defined requirements. In collecting together the services for such a composition, developers can employ languages and standards for the Web that facilitate the automation of Web service discovery, execution, composition and interoperation. However, there is no guarantee that a composition of even very good services will always work. Mechanisms are being developed to monitor a composition and to detect and recover from faults automatically. A key factor in such self-healing is to know what faults to look for. If the nature of a fault is known, the system can suggest a suitable recovery mechanism sooner. This paper proposes a novel taxonomy that captures the possible failures that can arise in Web service composition, and classifies the faults that might cause them. The taxonomy covers physical, development and interaction faults that can cause a variety of observable failures in a system's normal operation. An important use of the taxonomy is identifying the faults that can be excluded when a failure occurs. Examples of using the taxonomy are presented.

Book
30 Nov 2009
TL;DR: This in-depth two volume collection covers the latest aspects and applications of Web technologies including the introduction of virtual reality commerce systems, the importance of social bookmarking, cross-language data retrieval, image searching, cutting-edge Web security technologies, and innovative healthcare and finance applications on the Web.
Abstract: As the Web continues to evolve, advances in Web technology forge many new applications that were not previously feasible, resulting in new usage paradigms in business, social interaction, governance, and education. The Handbook of Research on Web 2.0, 3.0, and X.0: Technologies, Business, and Social Applications is a comprehensive reference source on next-generation Web technologies and their applications. This in-depth two volume collection covers the latest aspects and applications of Web technologies including the introduction of virtual reality commerce systems, the importance of social bookmarking, cross-language data retrieval, image searching, cutting-edge Web security technologies, and innovative healthcare and finance applications on the Web. Examining the social, cultural, and ethical issues these applications present, this Handbook of Research discusses real-world examples and case studies valuable to academicians, researchers, and practitioners.

Journal ArticleDOI
TL;DR: DBpedia Mobile, a location-aware Semantic Web client that can be used on an iPhone and other mobile devices, is described, and it is explained how published content is interlinked with a nearby DBpedia resource and thus contributes to the overall richness of the Geospatial Semantic Web.

Book
22 Oct 2009
TL;DR: Elisa Bertino and her coauthors provide a comprehensive guide to security for Web services and SOA, covering in detail all recent standards that address Web service security, including XML Encryption, XML Signature, WS-Security, and WS-SecureConversation.
Abstract: Web services based on the eXtensible Markup Language (XML), the Simple Object Access Protocol (SOAP), and related standards, and deployed in Service-Oriented Architectures (SOA), are the key to Web-based interoperability for applications within and across organizations. It is crucial that the security of services and their interactions with users is ensured if Web services technology is to live up to its promise. However, the very features that make it attractive such as greater and ubiquitous access to data and other resources, dynamic application configuration and reconfiguration through workflows, and relative autonomy conflict with conventional security models and mechanisms. Elisa Bertino and her coauthors provide a comprehensive guide to security for Web services and SOA. They cover in detail all recent standards that address Web service security, including XML Encryption, XML Signature, WS-Security, and WS-SecureConversation, as well as recent research on access control for simple and conversation-based Web services, advanced digital identity management techniques, and access control for Web-based workflows. They explain how these implement means for identification, authentication, and authorization with respect to security aspects such as integrity, confidentiality, and availability. This book will serve practitioners as a comprehensive critical reference on Web service standards, with illustrative examples and analyses of critical issues; researchers will use it as a state-of-the-art overview of ongoing research and innovative new directions; and graduate students will use it as a textbook on advanced topics in computer and system security.

Book ChapterDOI
10 Nov 2009
TL;DR: This work offers a form-based approach to ontology creation that provides a way to create Web 3.0 ontologies without the need for specialized training, and shows that mappings between conceptual-model-based ontologies and forms are sufficient for creating the kind of ontologies needed for Web 3.0.
Abstract: Creating an ontology and populating it with data are both labor-intensive tasks requiring a high degree of expertise. Thus, scaling ontology creation and population to the size of the web in an effort to create a web of data (which some see as Web 3.0) is prohibitive. Can we find ways to streamline these tasks and lower the barrier enough to enable Web 3.0? Toward this end we offer a form-based approach to ontology creation that provides a way to create Web 3.0 ontologies without the need for specialized training. And we offer a way to semi-automatically harvest data from the current web of pages for a Web 3.0 ontology. In addition to harvesting information with respect to an ontology, the approach also annotates web pages and links facts in web pages to ontological concepts, resulting in a web of data superimposed over the web of pages. Experience with our prototype system shows that mappings between conceptual-model-based ontologies and forms are sufficient for creating the kind of ontologies needed for Web 3.0, and experiments with our prototype system show that automatic harvesting, automatic annotation, and automatic superimposition of a web of data over a web of pages work well.
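The core idea of the form-based approach can be sketched as a mapping from a form definition to ontology statements. The sketch below is our own simplification (the `form_to_triples` function and the `Car` form are hypothetical, not taken from the paper): each form field becomes a datatype property with a domain and range, so filling out a form implicitly defines ontology structure.

```python
# A simple form definition: one concept plus its labeled, typed fields.
form = {
    "concept": "Car",
    "fields": [
        {"label": "Make",  "type": "string"},
        {"label": "Price", "type": "decimal"},
    ],
}

def form_to_triples(form):
    """Map a form to RDF-style triples: the concept becomes a class,
    and each field becomes a datatype property on that class."""
    concept = form["concept"]
    triples = [(concept, "rdf:type", "owl:Class")]
    for field in form["fields"]:
        prop = f"has{field['label']}"
        triples.append((prop, "rdf:type", "owl:DatatypeProperty"))
        triples.append((prop, "rdfs:domain", concept))
        triples.append((prop, "rdfs:range", f"xsd:{field['type']}"))
    return triples

for triple in form_to_triples(form):
    print(triple)
```

A harvester built on such a mapping could then use the field labels ("Make", "Price") as cues for locating and annotating matching facts in web pages.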

Journal ArticleDOI
TL;DR: SEMMAS, an ontology-based framework for seamlessly integrating Intelligent Agents and Semantic Web Services, is presented, and the potential benefits of combining the two technologies are analyzed.
Abstract: Intelligent agents and semantic web services are two technologies with great potential. Striking new applications can be developed by using the tools and techniques they provide. However, semantic web services require an upper-level software entity able to deal with them, while agent technology has historically suffered from a number of drawbacks that must be addressed. Integrating these two technologies in a joint environment can overcome their problems while strengthening their advantages. In this paper, the necessity of integrating these technologies and the potential benefits of their combination are analyzed. Based on this study, we present SEMMAS, an ontology-based framework for seamlessly integrating Intelligent Agents and Semantic Web Services. The basics of the framework are detailed and a proof-of-concept implementation is described.

Journal ArticleDOI
TL;DR: This paper provides an overview of prominent Web 2.0 applications, explains how they are being used within education environments, and elaborates on some of the potential opportunities and challenges that these applications present.
Abstract: New types of social Internet applications (often referred to as Web 2.0) are becoming increasingly popular within higher education environments. Although developed primarily for entertainment and social communication within the general population, applications such as blogs, social video sites, and virtual worlds are being adopted by higher education institutions. These newer applications differ from standard Web sites in that they involve the users in creating and distributing information, hence effectively changing how the Web is used for knowledge generation and dispersion. Although Web 2.0 applications offer exciting new ways to teach, they should not be the core of instructional planning, but rather selected only after learning objectives and instructional strategies have been identified. This paper provides an overview of prominent Web 2.0 applications, explains how they are being used within education environments, and elaborates on some of the potential opportunities and challenges that these applications present.

Journal ArticleDOI
TL;DR: This paper analyzes service discovery requirements from the service consumer's perspective, outlines a conceptual model of homogeneous Web service communities, describes a similarity measurement model for Web services that leverages metadata from WSDL, and designs a graph-based algorithm to support both discovery types.
Abstract: The Web has undergone a tremendous change toward a highly user-centric environment. Millions of users can participate and collaborate for their own interests and benefits. The services computing paradigm, together with the proliferation of Web services, has created great potential opportunities for users, also known as service consumers, to produce value-added services by means of service discovery and composition. In this paper, we propose an efficient approach to facilitating the service consumer in discovering Web services. First, we analyze the service discovery requirements from the service consumer's perspective and outline a conceptual model of homogeneous Web service communities. The homogeneous service community supports two types of discovery: the search for similar operations and the search for composable operations. Second, we describe a similarity measurement model for Web services that leverages the metadata from WSDL, and design a graph-based algorithm to support both discovery types. Finally, adopting the popular Atom feed format, we design a prototype that lets consumers discover and subscribe to Web services in an easy-to-use manner. The experimental evaluation and prototype demonstration show that our approach not only relieves consumers of time-consuming discovery tasks but also lowers their entry barrier in the user-centric Web environment.
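The similarity-then-graph idea described in this abstract can be sketched as follows. This is our own simplified illustration, not the paper's exact metric: operation names and input/output message parts harvested from WSDL are tokenized, pairs are scored by Jaccard overlap, and pairs above a threshold become edges of the community graph.

```python
import re

def tokens(operation):
    # Split a camelCase WSDL operation name into words and pool them
    # with the names of its input and output message parts.
    name, inputs, outputs = operation
    words = re.findall(r"[A-Z]?[a-z]+", name)
    return {w.lower() for w in words} | set(inputs) | set(outputs)

def similarity(op_a, op_b):
    # Jaccard overlap of the two token sets, in [0, 1].
    a, b = tokens(op_a), tokens(op_b)
    return len(a & b) / len(a | b)

# Toy operations in (name, input parts, output parts) form.
ops = [
    ("getWeatherByZip",    {"zip"},    {"forecast"}),
    ("queryWeatherByCity", {"city"},   {"forecast"}),
    ("convertCurrency",    {"amount"}, {"amount"}),
]

# Build the community graph: an edge links each pair of operations
# whose similarity score exceeds the (assumed) threshold of 0.3.
edges = [(i, j)
         for i in range(len(ops))
         for j in range(i + 1, len(ops))
         if similarity(ops[i], ops[j]) > 0.3]
print(edges)
```

On this toy input the two weather operations share enough tokens to be linked, while the currency converter remains isolated; connected components of such a graph would correspond to the homogeneous communities of similar operations.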