
Showing papers on "Web standards" published in 2006


Journal ArticleDOI
TL;DR: Ask a dozen Internet experts what the term Web 2.0 means and you'll get a dozen different answers; a few journalists maintain that the term doesn't mean anything at all, that it's just a marketing ploy used to hype social networking sites.
Abstract: Ask a dozen Internet experts what the term Web 2.0 means, and you'll get a dozen different answers. Some say that Web 2.0 is a set of philosophies and practices that provide Web users with a deep and rich experience. Others say it's a new collection of applications and technologies that make it easier for people to find information and connect with one another online. A few journalists maintain that the term doesn't mean anything at all; it's just a marketing ploy used to hype social networking sites.

670 citations


Proceedings ArticleDOI
23 May 2006
TL;DR: This paper explores a complementary approach that focuses on the "social annotations of the web", which are annotations manually made by ordinary web users without a pre-defined formal ontology, and shows how emergent semantics can be statistically derived from the social annotations.
Abstract: In order to obtain a machine understandable semantics for web resources, research on the Semantic Web tries to annotate web resources with concepts and relations from explicitly defined formal ontologies. This kind of formal annotation is usually done manually or semi-automatically. In this paper, we explore a complementary approach that focuses on the "social annotations of the web", which are annotations manually made by ordinary web users without a pre-defined formal ontology. Compared to formal annotations, social annotations are coarse-grained, informal and vague, but they are also more accessible to more people and better reflect the web resources' meaning from the users' point of view during their actual use of the web resources. Using a social bookmark service as an example, we show how emergent semantics [2] can be statistically derived from the social annotations. Furthermore, we apply the derived emergent semantics to discover and search shared web bookmarks. The initial evaluation of our implementation shows that our method can effectively discover semantically related web bookmarks that current social bookmark services cannot easily discover.
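A minimal sketch of the statistical idea in Python: tags that co-occur on the same bookmarks can be treated as semantically related, which is one simple way emergent semantics can be derived from social annotations. The sample bookmarks, tags, and the cosine measure are illustrative assumptions, not the paper's exact method.

```python
# Toy derivation of emergent semantics from social annotations: tags that
# co-occur on the same bookmarks are treated as semantically related.
from collections import defaultdict
from math import sqrt

# bookmark URL -> set of user-assigned tags (hypothetical sample data)
bookmarks = {
    "http://example.org/a": {"python", "programming", "tutorial"},
    "http://example.org/b": {"python", "scripting"},
    "http://example.org/c": {"programming", "scripting", "software"},
    "http://example.org/d": {"travel", "photos"},
}

# Build tag -> set of bookmarks (an inverted index over the annotations).
tag_index = defaultdict(set)
for url, tags in bookmarks.items():
    for t in tags:
        tag_index[t].add(url)

def tag_similarity(t1, t2):
    """Cosine similarity of two tags over the bookmarks they annotate."""
    b1, b2 = tag_index[t1], tag_index[t2]
    if not b1 or not b2:
        return 0.0
    return len(b1 & b2) / sqrt(len(b1) * len(b2))

print(tag_similarity("python", "scripting"))  # related via a shared bookmark
print(tag_similarity("python", "travel"))     # unrelated -> 0.0
```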

410 citations


Book
06 Oct 2006
TL;DR: This book offers a thorough, practical introduction to one of the most promising approaches, the Web Service Modeling Ontology (WSMO), from the fundamentals to applications in e-commerce, e-government and e-banking.
Abstract: Service-oriented computing is an emerging factor in IT research and development. Organizations like the W3C and the EU have begun research projects to develop industrial-strength applications. This book offers a thorough, practical introduction to one of the most promising approaches, the Web Service Modeling Ontology (WSMO). After a brief review of the technologies and standards of the World Wide Web, the Semantic Web, and Web Services, the book examines WSMO from the fundamentals to applications in e-commerce, e-government and e-banking; it also describes its relation to OWL-S, WSDL-S and other approaches. The book offers an up-to-date introduction, plus pointers to future applications.

390 citations


Book
01 Jan 2006
TL;DR: This text sets out a series of approaches to the analysis and synthesis of the World Wide Web, and other web-like information structures, and a comprehensive set of research questions is outlined, together with a sub-disciplinary breakdown, emphasising the multi-faceted nature of the Web.
Abstract: This text sets out a series of approaches to the analysis and synthesis of the World Wide Web, and other web-like information structures. A comprehensive set of research questions is outlined, together with a sub-disciplinary breakdown, emphasising the multi-faceted nature of the Web, and the multi-disciplinary nature of its study and development. These questions and approaches together set out an agenda for Web Science, the science of decentralised information systems. Web Science is required both as a way to understand the Web, and as a way to focus its development on key communicational and representational requirements. The text surveys central engineering issues, such as the development of the Semantic Web, Web services and P2P. Analytic approaches to discover the Web's topology, or its graph-like structures, are examined. Finally, the Web as a technology is essentially socially embedded; therefore various issues and requirements for Web use and governance are also reviewed.

343 citations


Journal ArticleDOI
21 Dec 2006-BMJ
TL;DR: In this paper, the author describes Web 2.0 as "a highly connected digital network of practitioners (medical or otherwise), where knowledge exchange is not limited or controlled by private interests", in which "the spirit of open sharing and collaboration" is paramount.
Abstract: Few concepts in information technology create more confusion than Web 2.0. The truth is that Web 2.0 is a difficult term to define, even for web experts.1 Nebulous phrases like “the web as platform” and “architecture of participation” are often used to describe Web 2.0. Medical librarians suggest that rather than intrinsic benefits of the platform itself, it's the spirit of open sharing and collaboration that is paramount.2 The more we use, share, and exchange information on the web in a continual loop of analysis and refinement, the more open and creative the platform becomes; hence, the more useful it is in our work. What seems clear is that Web 2.0 brings people together in a more dynamic, interactive space. This new generation of internet services and devices—often referred to as social software—can be leveraged to enrich our web experience, as information is continually requested, consumed, and reinterpreted. The new environment features a highly connected digital network of practitioners (medical or otherwise), where knowledge exchange is not limited or controlled by private interests. For me, the promise of open access in Web 2.0—freed of publishing barriers and multinational interests—is especially compelling. Web 2.0 is primarily about the benefits of easy to use and free internet software. For example, blogs and wikis facilitate participation and conversations across a vast geographical expanse. Information pushing devices, like RSS feeds, permit continuous instant alerting to the latest ideas in medicine.3 Helpful but lesser known website tagging and organising tools, such as Connotea and Del.icio.us, are proving useful (see table: Web 2.0 examples in medicine). Multimedia tools like podcasts and videocasts are increasingly popular in medical schools and medical journals.4 (This bird's eye view of social software can be fully explored with your favourite medical librarian, after the holidays.) For now, let's examine the notion of a blog, which was the first of the social software tools. Blogs are interactive websites that consist of regular diary-like entries. Unlike static web pages (a feature of Web 1.0), blogs are more dynamic and permit bloggers to write articles and engage in “one to many” conversations with readers. Political bloggers are said even to have an influence on the outcome of elections.5 One of the best blogs in medicine is Ves Dimov's Clinical Cases and Images. It contains a rich collection of “presurfed” material for busy clinicians and features interactivity and timely discussion. Dimov is also a supporter of medical librarian bloggers.6 Why waste time fumbling with search engines when you can consult this blog for timely updates? As well as case discussions, Ves provides links to today's medical headlines from Reuters and clinical images via a dynamic, free photo sharing tool called Flickr. One of his slide presentations, “Web 2.0 in medicine”,7 is available on Slideshare (itself a fantastic new 2.0 tool). Clinical Cases and Images is a virtual laboratory for doctors and medical librarians interested in Web 2.0. In the past year, several doctors and medical librarians have put Web 2.0 in the spotlight8; one excellent article even discusses its impact in clinical practice.9 What is obvious is that doctors are seeking new methods of information discovery because of the limitations of search engines. Even Medline, for all its benefits, is no longer a sufficiently detailed map of the medical literature.
Busy but organised doctors need a variety of evidence sent to them in a single organising interface—easily accomplished using an RSS reader (ask your favourite medical librarian to show you how to use aggregators like Bloglines and MedWorm). RSS may be a useful way to fight information overload. RSS feeds help to organise new web content sent to you in real time by the best medical blogs, evidence-based sites like the Cochrane Library, and newly published video and audio from major medical journals. In fact, technology-savvy doctors are keen to use RSS feeds on mobile devices, iPods, and Blackberries and scan research on their way to ward rounds. For those who prefer to play in the digital sandbox while on call, try photo sharing software like Flickr and medical video sharing at YouTube,10 two of the more popular multimedia sites. By searching YouTube (bought by Google for £1bn (€1.5bn; $2.0bn) in 2006), you can dazzle your family during the holidays. Over the past year, as a medical librarian, I have watched the impact of Web 2.0 tools on access to information. A highlight for me was a recent BMJ article,11 which concluded that Google—the quintessential Web 2.0 company—is a useful diagnostic aid. Google is a useful tool within certain parameters, if you know what to search for. Doctors can retrieve lots of evidence and open access material via search tools, and they need to learn how to use these tools responsibly. With its many multilingual editions, Google is a boon for developing countries with few information retrieval alternatives. This tour through Web 2.0 ultimately returns to the idea of using software to create optimal knowledge building opportunities for doctors. The rise of wikis as a publishing medium—especially Wikipedia—holds some unexamined pearls for the advancement of medicine. The notion of a medical wikipedia—freely accessible and continually updated by doctors—is worthy of further exploration. Could wikis be used, for example, as a low cost alternative to commercial point of care tools like UpToDate? To a certain extent, this is happening now, as the search portal Trip already indexes Ganfyd, one of a handful of medical wikis being developed. In closing, let me say that Web 2.0's push for openness has resulted in the expectation of equal amounts of transparency and openness in medical publishing. The collapse of the Canadian Medical Association Journal this past year12 was, in a sense, due to the opposing tensions between the openness exemplified by Web 2.0 and the monolithic lack of openness in old forms of media like the CMAJ. The web is a reflection of who we are as human beings—but it also reflects who we aspire to be. In that sense, Web 2.0 may be one of the most influential technologies in the history of publishing, as old proprietary notions of control and ownership fall away. An expert (that is, doctor) moderated repository of the knowledge base, in the form of a medical wiki, may be the answer to the world's inequities of information access in medicine, if we have the will to create one.
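As a rough illustration of the single-interface aggregation the article describes, the sketch below pulls several RSS feeds into one reverse-chronological list. It assumes the third-party feedparser package; the feed URLs are placeholders, not real medical feeds.

```python
# Aggregate several RSS feeds into one newest-first reading list.
import feedparser

FEEDS = [
    "https://example.org/medical-blog/rss",
    "https://example.org/journal/latest.xml",
]

items = []
for url in FEEDS:
    feed = feedparser.parse(url)
    for entry in feed.entries:
        # Not every feed supplies a parsed date; fall back gracefully.
        published = entry.get("published_parsed")
        items.append((published, entry.get("title", ""), entry.get("link", "")))

# Newest first; undated items sort last.
items.sort(key=lambda x: x[0] or (0,), reverse=True)
for _, title, link in items[:20]:
    print(f"{title}\n  {link}")
```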

311 citations


Proceedings ArticleDOI
23 May 2006
TL;DR: This paper develops SecuBat, a generic and modular web vulnerability scanner that, like a port scanner, automatically analyzes web sites with the aim of finding exploitable SQL injection and XSS vulnerabilities.
Abstract: As the popularity of the web increases and web applications become tools of everyday use, the role of web security has been gaining importance as well. Recent years have shown a significant increase in the number of web-based attacks. For example, there has been extensive press coverage of recent security incidents involving the loss of sensitive credit card information belonging to millions of customers. Many web application security vulnerabilities result from generic input validation problems. Examples of such vulnerabilities are SQL injection and Cross-Site Scripting (XSS). Although the majority of web vulnerabilities are easy to understand and to avoid, many web developers are, unfortunately, not security-aware. As a result, there exist many web sites on the Internet that are vulnerable. This paper demonstrates how easy it is for attackers to automatically discover and exploit application-level vulnerabilities in a large number of web applications. To this end, we developed SecuBat, a generic and modular web vulnerability scanner that, similar to a port scanner, automatically analyzes web sites with the aim of finding exploitable SQL injection and XSS vulnerabilities. Using SecuBat, we were able to find many potentially vulnerable web sites. To verify the accuracy of SecuBat, we picked one hundred interesting web sites from the potential victim list for further analysis and confirmed exploitable flaws in the identified web pages. Among our victims were well-known global companies and a finance ministry. Of course, we notified the administrators of vulnerable sites about potential security problems. More than fifty responded to request additional information or to report that the security hole was closed.
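A toy illustration of the kind of automated probe such a scanner runs: inject a harmless marker into a query parameter and check whether the response echoes it back unescaped, a symptom of reflected XSS. This is a hypothetical sketch, not SecuBat's implementation, and should only be run against sites you are authorized to test.

```python
# Probe a single parameter for reflected XSS by checking whether an injected
# marker string comes back verbatim (i.e., unescaped) in the response body.
import requests

MARKER = "<script>alert('xss-probe')</script>"

def probe_reflected_xss(url, param):
    """Return True if the marker is reflected verbatim in the response body."""
    try:
        resp = requests.get(url, params={param: MARKER}, timeout=10)
    except requests.RequestException:
        return False
    return MARKER in resp.text

# Hypothetical target: a search page with a 'q' query parameter.
if probe_reflected_xss("http://test-site.example.org/search", "q"):
    print("parameter 'q' appears vulnerable to reflected XSS")
```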

257 citations


Journal ArticleDOI
TL;DR: This article argues that Web 2.0 is not a technological innovation but is changing the understanding of the status of information and knowledge and of the role of the user in information applications, and suggests that, as information proliferates, control is gradually being ceded to users.
Abstract: Explores the application of Web 2.0 technologies to business intranets, and their potential use in managing and developing business information and knowledge assets. Considers how Web 2.0 approaches on the public web are subtly reshaping the relationship between users and information. Argues that Web 2.0 is not a technological innovation, but is changing the understanding of the status of information, knowledge and the role of the user in information applications. Suggests that, as information proliferates, control is being gradually ceded to users, opening up the possibility of a new, more democratic, and more evaluative phase in the exploitation of information within organizations.

243 citations


Journal ArticleDOI
TL;DR: Semantic Web Mining aims to combine the two fast-developing research areas of the Semantic Web and Web Mining, but the full potential of this convergence is not yet realized. This paper gives an overview of where the two areas meet today and sketches ways in which a closer integration could be profitable.

242 citations


Journal ArticleDOI
TL;DR: A framework to evaluate Web sites from a customer's perspective of value-added is proposed and a global study covering 1,800 sites is conducted to give a profile of commercial use of the World Wide Web in 1996.
Abstract: While commercial applications of the Internet proliferate, particularly in the form of business sites on the World Wide Web, on-line business is still relatively insignificant. One reason is that truly compelling applications have yet to be devised to penetrate the mass market. To help identify approaches that may eventually be successful, one must address the question of what value is being created on the Web. As a first step, this paper proposes a framework to evaluate Web sites from a customer's perspective of value-added. A global study covering 1,800 sites, with representative samples from diverse industries and localities worldwide, is conducted to give a profile of commercial use of the World Wide Web in 1996.

233 citations


Book ChapterDOI
05 Nov 2006
TL;DR: The /facet browser gives Semantic Web developers an instant interface to their complete dataset and allows facet-specific display options that go beyond the hierarchical navigation that characterizes current facet browsing.
Abstract: Facet browsing has become popular as a user friendly interface to data repositories. The Semantic Web raises new challenges due to the heterogeneous character of the data. First, users should be able to select and navigate through facets of resources of any type and to make selections based on properties of other, semantically related, types. Second, where traditional facet browsers require manual configuration of the software, a semantic web browser should be able to handle any RDFS dataset without any additional configuration. Third, hierarchical data on the semantic web is not designed for browsing: complementary techniques, such as search, should be available to overcome this problem. We address these requirements in our browser, /facet. Additionally, the interface allows the inclusion of facet-specific display options that go beyond the hierarchical navigation that characterizes current facet browsing. /facet is a tool for Semantic Web developers as an instant interface to their complete dataset. The automatic facet configuration generated by the system can then be further refined to configure it as a tool for end users. The implementation is based on current Web standards and open source software. The new functionality is motivated using a scenario from the cultural heritage domain.
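A minimal sketch of what automatic facet configuration over an arbitrary RDF dataset might look like: enumerate the predicates in the data and count their distinct values, treating each predicate as a candidate facet. It assumes the third-party rdflib package and a placeholder data file; /facet's actual algorithm is more sophisticated.

```python
# Derive candidate facets from any RDF dataset without manual configuration.
from collections import defaultdict
from rdflib import Graph

g = Graph()
g.parse("dataset.rdf")  # placeholder file; rdflib guesses the RDF format

values_per_predicate = defaultdict(set)
for _, predicate, obj in g:
    values_per_predicate[predicate].add(obj)

# Rank candidate facets: predicates with a handful of distinct values make
# good facets; near-unique values (titles, IDs) do not.
for pred, values in sorted(values_per_predicate.items(),
                           key=lambda kv: len(kv[1])):
    print(f"{pred}  ({len(values)} distinct values)")
```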

231 citations


Book ChapterDOI
05 Nov 2006
TL;DR: This presentation argues that the best shot we have at collective intelligence in our lifetimes is large, distributed human-computer systems, and that the best way to get there is to harness the "people power" of the Web with the techniques of the Semantic Web.
Abstract: The Semantic Web is an ecosystem of interaction among computer systems. The social web is an ecosystem of conversation among people. Both are enabled by conventions for layered services and data exchange. Both are driven by human-generated content and made scalable by machine-readable data. Yet there is a popular misconception that the two worlds are alternative, opposing ideologies about how the web ought to be. Folksonomy vs. ontology. Practical vs. formalistic. Humans vs. machines. This is nonsense, and it is time to embrace a unified view. I subscribe to the vision of the Semantic Web as a substrate for collective intelligence. The best shot we have of collective intelligence in our lifetimes is large, distributed human-computer systems. The best way to get there is to harness the "people power" of the Web with the techniques of the Semantic Web. In this presentation I will show several ways that this can be, and is, happening.

ReportDOI
01 Jan 2006
TL;DR: This work describes how an AI planning system (SHOP2) can be used with DAML-S Web service descriptions to automatically compose Web services.
Abstract: Semantic markup of Web services will enable the automation of various kinds of tasks, including discovery, composition, and execution of Web services. We describe how an AI planning system (SHOP2) can be used with DAML-S Web service descriptions to automatically compose Web services.
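A toy illustration of the HTN-style decomposition that SHOP2 performs: compound tasks are rewritten into subtasks until only primitive, directly invocable services remain. The task names and methods below are hypothetical stand-ins for DAML-S service descriptions.

```python
# HTN-style decomposition: rewrite compound tasks until only primitive
# (directly invocable) services remain, yielding a composed service plan.
METHODS = {
    # compound task -> ordered subtasks (hypothetical domain knowledge)
    "book_trip": ["book_flight", "book_hotel"],
    "book_flight": ["search_flights", "reserve_seat", "pay"],
}
PRIMITIVE = {"search_flights", "reserve_seat", "pay", "book_hotel"}

def plan(task):
    """Recursively decompose a task into a sequence of primitive services."""
    if task in PRIMITIVE:
        return [task]
    steps = []
    for subtask in METHODS[task]:
        steps.extend(plan(subtask))
    return steps

print(plan("book_trip"))
# ['search_flights', 'reserve_seat', 'pay', 'book_hotel']
```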

Journal ArticleDOI
TL;DR: The objective of the approach is to automatically discover ontologies from data sets in order to build complete concept models for Web user information needs; a method for capturing evolving patterns to refine the discovered ontologies is also proposed.
Abstract: It is not easy to obtain the right information from the Web for a particular Web user or a group of users due to the obstacle of automatically acquiring Web user profiles. The current techniques do not provide satisfactory structures for mining Web user profiles. This paper presents a novel approach for this problem. The objective of the approach is to automatically discover ontologies from data sets in order to build complete concept models for Web user information needs. It also proposes a method for capturing evolving patterns to refine discovered ontologies. In addition, the process of assessing relevance in ontology is established. This paper provides both theoretical and experimental evaluations for the approach. The experimental results show that all objectives we expect for the approach are achievable.

Journal ArticleDOI
TL;DR: This paper presents the main differences between Web-based applications and traditional ones, how these differences impact testing, and some relevant contributions in the field of Web application testing from recent years.
Abstract: Software testing is a difficult task, and testing Web-based applications may be even more difficult, due to the peculiarities of such applications. In recent years, several problems in the field of Web-based application testing have been addressed by research work, and several methods and techniques have been defined and used to test Web-based applications effectively. This paper presents the main differences between Web-based applications and traditional ones, how these differences impact the testing of the former, and some relevant contributions in the field of Web application testing developed in recent years. The focus is mainly on testing the functionality of a Web-based application, even if some discussion about the testing of non-functional requirements is provided too. Some indications about future trends in Web application testing are also outlined in the paper.
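A minimal sketch of the functional-testing style the paper discusses: drive the application over HTTP and assert on observable behaviour. The base URL, endpoints, and expectations are placeholders; a real suite would also exercise forms, sessions, and client-side code.

```python
# Functional tests for a Web application driven over plain HTTP.
import unittest
import requests

BASE_URL = "http://localhost:8000"  # hypothetical application under test

class HomePageTest(unittest.TestCase):
    def test_home_page_loads(self):
        resp = requests.get(f"{BASE_URL}/", timeout=10)
        self.assertEqual(resp.status_code, 200)

    def test_search_returns_results(self):
        resp = requests.get(f"{BASE_URL}/search", params={"q": "web"},
                            timeout=10)
        self.assertEqual(resp.status_code, 200)
        self.assertIn("results", resp.text.lower())

if __name__ == "__main__":
    unittest.main()
```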

Proceedings Article
16 Jul 2006
TL;DR: Generalized contextual extraction patterns allow for fast iterative progression towards extracting one million facts of a given type (e.g., Person-BornIn-Year) from 100 million Web documents of arbitrary quality.
Abstract: Due to the inherent difficulty of processing noisy text, the potential of the Web as a decentralized repository of human knowledge remains largely untapped during Web search. The access to billions of binary relations among named entities would enable new search paradigms and alternative methods for presenting the search results. A first concrete step towards building large searchable repositories of factual knowledge is to derive such knowledge automatically at large scale from textual documents. Generalized contextual extraction patterns allow for fast iterative progression towards extracting one million facts of a given type (e.g., Person-BornIn-Year) from 100 million Web documents of arbitrary quality. The extraction starts from as few as 10 seed facts, requires no additional input knowledge or annotated text, and emphasizes scale and coverage by avoiding the use of syntactic parsers, named entity recognizers, gazetteers, and similar text processing tools and resources.
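A toy version of the bootstrapping loop described above: starting from seed facts, induce the contexts in which they occur, generalize those contexts into patterns, and apply the patterns to harvest new facts. The three-sentence corpus and single seed are illustrative; the real system iterates over 100 million documents.

```python
# Bootstrapped pattern-based extraction of Person-BornIn-Year facts.
import re

corpus = [
    "Mozart was born in 1756 in Salzburg.",
    "Einstein was born in 1879 and changed physics.",
    "Turing was born in 1912 in London.",
]
seeds = [("Mozart", "1756")]

# 1. Induce patterns from the contexts around the seed facts.
patterns = set()
for person, year in seeds:
    for sentence in corpus:
        if person in sentence and year in sentence:
            ctx = sentence.replace(person, "(?P<person>[A-Z][a-z]+)")
            ctx = ctx.replace(year, r"(?P<year>\d{4})")
            # Keep only the span between the two slots as the pattern core.
            patterns.add(ctx.split("(?P<person>[A-Z][a-z]+)")[1]
                            .split(r"(?P<year>\d{4})")[0])

# 2. Apply the generalized patterns to extract new facts from the corpus.
facts = set()
for middle in patterns:
    rx = re.compile(r"([A-Z][a-z]+)" + re.escape(middle) + r"(\d{4})")
    for sentence in corpus:
        for person, year in rx.findall(sentence):
            facts.add((person, year))

print(facts)  # includes ('Einstein', '1879') and ('Turing', '1912')
```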

Journal ArticleDOI
TL;DR: The legal requirements of accessibility, the previous research, and the data and findings of this study are discussed, and recommendations for increasing federal e-government Web site compliance with Section 508 are offered.

Book
24 Jul 2006
TL;DR: This book studies the impact of web accessibility, the implementation of accessible websites, and accessibility law and policy.
Abstract: Covers the impact of web accessibility, implementing accessible websites, and accessibility law & policy.


Proceedings ArticleDOI
22 Apr 2006
TL;DR: Phetch is an enjoyable computer game that collects explanatory descriptions of images, and is an example of a new class of games that provide entertainment in exchange for human processing power.
Abstract: Images on the Web present a major accessibility issue for the visually impaired, mainly because the majority of them do not have proper captions. This paper addresses the problem of attaching proper explanatory text descriptions to arbitrary images on the Web. To this end, we introduce Phetch, an enjoyable computer game that collects explanatory descriptions of images. People play the game because it is fun, and as a side effect of game play we collect valuable information. Given any image from the World Wide Web, Phetch can output a correct annotation for it. The collected data can be applied towards significantly improving Web accessibility. In addition to improving accessibility, Phetch is an example of a new class of games that provide entertainment in exchange for human processing power. In essence, we solve a typical computer vision problem with HCI tools alone.

Book ChapterDOI
05 Nov 2006
TL;DR: A collection of Semantic Web documents, drawn from an estimated ten million available on the Web, is harvested and analyzed, and a number of metrics, properties, and usage patterns, many of which follow a power law distribution, are described.
Abstract: Semantic Web languages are being used to represent, encode and exchange semantic data in many contexts beyond the Web – in databases, multiagent systems, mobile computing, and ad hoc networking environments. The core paradigm, however, remains what we call the Web aspect of the Semantic Web – its use by independent and distributed agents who publish and consume data on the World Wide Web. To better understand this central use case, we have harvested and analyzed a collection of Semantic Web documents from an estimated ten million available on the Web. Using a corpus of more than 1.7 million documents comprising over 300 million RDF triples, we describe a number of global metrics, properties and usage patterns. Most of the metrics, such as the size of Semantic Web documents and the use frequency of Semantic Web terms, were found to follow a power law distribution.
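A small sketch of the power-law check behind such findings: tabulate the frequency of a metric (here, a synthetic "triples per document" sample) and inspect it in log-log coordinates, where a power law appears as a roughly constant slope. The data below is generated, not the paper's corpus.

```python
# Inspect a heavy-tailed metric in log-log coordinates; a power law shows up
# as an approximately straight line (constant slope).
from collections import Counter
import math
import random

random.seed(1)
# Synthetic "triples per document" sample drawn from a Pareto distribution.
sizes = [int(random.paretovariate(1.5)) + 1 for _ in range(10_000)]

freq = Counter(sizes)
for size in sorted(freq)[:10]:
    print(f"size={size:4d}  log(size)={math.log(size):5.2f}  "
          f"log(count)={math.log(freq[size]):5.2f}")
```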

Journal ArticleDOI
TL;DR: This paper describes the proof markup language (PML), an interlingua representation for justifications of results produced by Semantic Web services, and introduces the Inference Web infrastructure that uses PML as the foundation for providing explanations of Web services to end users.

Proceedings ArticleDOI
23 May 2006
TL;DR: The WS-Replication framework is provided, based on a group communication web service, WS-Multicast, that respects web service autonomy and relies exclusively on web service technology for interaction across organizations.
Abstract: Due to the rapid acceptance of web services and their fast spread, a number of mission-critical systems will be deployed as web services in the coming years. The availability of those systems must be guaranteed in the face of failures and network disconnections. An example of web services for which availability will be a crucial issue are those belonging to the coordination web service infrastructure, such as web services for transactional coordination (e.g., WS-CAF and WS-Transaction). These services should remain available despite site and connectivity failures to enable business interactions on a 24x7 basis. A common technique for attaining availability is clustering. However, in an Internet setting a domain can become partitioned from the network due to a link overload or some other connectivity problem. The unavailability of a coordination service impacts the availability of all the partners in the business process. That is, coordination services are an example of critical components that need higher provisions for availability. In this paper, we address this problem by providing an infrastructure, WS-Replication, for WAN replication of web services. The infrastructure is based on a group communication web service, WS-Multicast, that respects web service autonomy. The transport of WS-Multicast is based on SOAP and relies exclusively on web service technology for interaction across organizations. We have replicated WS-CAF using our WS-Replication framework and evaluated its performance.
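A rough sketch of the availability idea: multicast a request to every replica and accept the first successful reply, so one failed or partitioned site does not make the service unavailable. The replica URLs are placeholders, and plain HTTP stands in for the SOAP-based WS-Multicast transport.

```python
# Invoke all replicas of a service in parallel and return the first success.
from concurrent.futures import ThreadPoolExecutor, as_completed
import requests

REPLICAS = [  # hypothetical replica endpoints of one coordination service
    "https://site-a.example.org/ws-caf",
    "https://site-b.example.org/ws-caf",
    "https://site-c.example.org/ws-caf",
]

def invoke(url, payload):
    resp = requests.post(url, data=payload, timeout=5)
    resp.raise_for_status()
    return resp.text

def replicated_invoke(payload):
    """Send to all replicas in parallel; return the first successful answer."""
    with ThreadPoolExecutor(max_workers=len(REPLICAS)) as pool:
        futures = [pool.submit(invoke, url, payload) for url in REPLICAS]
        for future in as_completed(futures):
            try:
                return future.result()
            except requests.RequestException:
                continue  # that replica failed; wait for another
    raise RuntimeError("all replicas unavailable")
```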

Journal Article
TL;DR: The issue of semantic interoperability of educational content on the Web is dealt with by considering the integration of learning standards, the Semantic Web, and adaptive technologies to meet the requirements of learners.
Abstract: Personalized adaptive learning requires semantic-based and context-aware systems to manage Web knowledge efficiently as well as to achieve semantic interoperability between heterogeneous information resources and services. The technological and conceptual differences can be bridged either by means of standards or via approaches based on the Semantic Web. This article deals with the issue of semantic interoperability of educational content on the Web by considering the integration of learning standards, the Semantic Web, and adaptive technologies to meet the requirements of learners. The state of the art and the main challenges in this field are discussed, including metadata access and design issues relating to adaptive learning. Additionally, a way to integrate several original approaches is proposed.

Book ChapterDOI
05 Nov 2006
TL;DR: The IRS-III methodology for building applications using Semantic Web Services is described and illustrated through a use case on e-government.
Abstract: In this paper we describe IRS-III which takes a semantic broker based approach to creating applications from Semantic Web Services by mediating between a service requester and one or more service providers. Business organisations can view Semantic Web Services as the basic mechanism for integrating data and processes across applications on the Web. This paper extends previous publications on IRS by providing an overall description of our framework from the point of view of application development. More specifically, we describe the IRS-III methodology for building applications using Semantic Web Services and illustrate our approach through a use case on e-government.

Journal ArticleDOI
01 Jan 2006
TL;DR: A comprehensive survey of trust on the Web in all its contexts, examining, among other things, the role of trust in web-based social networks and algorithms for inferring trust relationships.
Abstract: The success of the Web is based largely on its open, decentralized nature; at the same time, that allows for a wide range of perspectives and intentions. Trust is required to foster successful interactions and to filter the abundance of information. In this review, we present a comprehensive survey of trust on the Web in all its contexts. Three main targets of trust are identified: content, services, and people. Trust in the content on the Web, including webpages, websites, and Semantic Web data is addressed first. Then, we move on to look at services including peer-to-peer environments and Web services. This includes a discussion of Web policy frameworks for access control. People are the final group, where we look at the role of trust in web-based social networks and algorithms for inferring trust relationships. Finally, we review applications that rely on trust and address how they utilize trust to improve functionality and interface.
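One simple flavour of the trust-inference algorithms such a survey covers: when a source has not rated a sink directly, take a trust-weighted average of its neighbours' opinions. The ratings below are hypothetical, and real algorithms (TidalTrust, for instance) add path-length and threshold refinements.

```python
# One-hop trust inference in a social network by trust-weighted averaging.
trust = {  # direct trust ratings on a 0..1 scale (hypothetical data)
    ("alice", "bob"): 0.9,
    ("alice", "carol"): 0.4,
    ("bob", "dave"): 0.8,
    ("carol", "dave"): 0.2,
}

def inferred_trust(source, sink):
    """Infer source's trust in sink via the neighbours source trusts."""
    if (source, sink) in trust:
        return trust[(source, sink)]
    num, den = 0.0, 0.0
    for (a, b), t_ab in trust.items():
        if a == source and (b, sink) in trust:
            num += t_ab * trust[(b, sink)]
            den += t_ab
    return num / den if den else None

# (0.9*0.8 + 0.4*0.2) / (0.9 + 0.4) ~= 0.615
print(inferred_trust("alice", "dave"))
```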

Journal ArticleDOI
TL;DR: The nature of the hypertext link as a communication tool for Web designers and authors is examined closely and network analysis is suggested as a methodology that can be used by researchers investigating the World Wide Web from a communication perspective.
Abstract: This paper examines closely the nature of the hypertext link as a communication tool for Web designers and authors. The strategic nature of the link raises important questions for the representation and interpretation of Web structure. Network analysis is suggested as a methodology that can be used by researchers investigating the World Wide Web from a communication perspective.
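A minimal example of the suggested methodology: model pages as nodes and hypertext links as directed edges, then compute structural measures over the graph. It assumes the third-party networkx package; the link data is a toy example.

```python
# Network analysis of a small Web link structure.
import networkx as nx

links = [  # (source page, target page) -- toy site structure
    ("home", "about"), ("home", "products"),
    ("products", "order"), ("about", "home"),
    ("order", "home"),
]

g = nx.DiGraph(links)

# Which pages do authors point to most, and which are structurally central?
print("in-degree:", dict(g.in_degree()))
print("pagerank:", {n: round(r, 3) for n, r in nx.pagerank(g).items()})
```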

Proceedings ArticleDOI
18 Sep 2006
TL;DR: Challenges to interoperability are examined; the types of heterogeneities that can occur between interacting services are classified; and a possible solution for data mediation is presented, using the mapping support provided by WSDL-S, the extensibility features of WSDL, and the popular SOAP engine Axis 2.
Abstract: With the rising popularity of Web services, both academia and industry have invested considerably in Web service description standards, discovery, and composition techniques. The standards based approach utilized by Web services has supported interoperability at the syntax level. However, issues of structural and semantic heterogeneity between messages exchanged by Web services are far more complex and crucial to interoperability. It is for these reasons that we recognize the value that schema/data mappings bring to Web service descriptions. In this paper, we examine challenges to interoperability; classify the types of heterogeneities that can occur between interacting services and present a possible solution for data mediation using the mapping support provided by WSDL-S, the extensibility features of WSDL and the popular SOAP engine, Axis 2.
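A toy illustration of the data-mediation problem: two services exchange messages with structurally and semantically different schemas, and an explicit mapping (the role WSDL-S mappings play) converts one into the other. The field names below are hypothetical.

```python
# Mediate between two heterogeneous message schemas via an explicit mapping.
def mediate(order_v1: dict) -> dict:
    """Map a requester's message schema onto the provider's schema."""
    first, _, last = order_v1["customerName"].partition(" ")
    return {
        "buyer": {"firstName": first, "lastName": last},  # structural split
        "total": float(order_v1["amount"]),               # type coercion
        "currency": order_v1.get("currency", "USD"),      # default for a gap
    }

incoming = {"customerName": "Ada Lovelace", "amount": "120.50"}
print(mediate(incoming))
```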

Journal ArticleDOI
TL;DR: The architecture of the Artemis project is described, which exploits ontologies based on the domain knowledge exposed by healthcare information standards from standards bodies like HL7, CEN TC251, ISO TC215 and GEHR.

Book
01 Dec 2006
TL;DR: This book introduces advanced semantic web technologies, illustrating their utility and highlighting their implementation in biological, medical, and clinical scenarios and the factors impacting on the establishment of the semantic web in life science and the legal challenges that will impact on its proliferation.
Abstract: This book introduces advanced semantic web technologies, illustrating their utility and highlighting their implementation in biological, medical, and clinical scenarios. It covers topics ranging from database, ontology, and visualization to semantic web services and workflows. The volume also details the factors impacting the establishment of the semantic web in life science and the legal challenges that will impact its proliferation.

Patent
31 Mar 2006
TL;DR: In this paper, a system and method for providing integration of service-oriented architecture (SOA) is provided, comprising the steps of identifying SOA drivers, determining matters that are driving the company to integrate the SOA and Web services into the company, developing a business initiative roadmap, and performing an analysis of current and planned business initiatives and projects of the company.
Abstract: A system and method for providing integration of service-oriented architecture (SOA) is provided. Generally, the method comprises the steps of: identifying SOA drivers, thereby determining matters that are driving the company to integrate the SOA and Web services into the company; developing a business initiative roadmap, thereby performing an analysis of current and planned business initiatives and projects of the company, and an analysis of current and potential services that will be required to implement or support the business initiatives while providing integration of the SOA and Web services; developing an SOA technology roadmap, thereby determining necessary SOA enabling technical solutions that can be implemented to support the developed business initiative roadmap; and prioritizing and sequencing the business initiative roadmap and the SOA technology roadmap, thereby synchronizing the business initiatives and Web service initiatives with implementation of the supporting SOA technical solutions determined during the step of developing the SOA technology roadmap.