
Showing papers on "Web modeling published in 2005"


Book ChapterDOI
TL;DR: Along with introducing the main elements of WSMO, this paper provides a logical language for defining formal statements in WSMO together with some motivating examples from practical use cases which shall demonstrate the benefits of Semantic Web Services.
Abstract: The potential to achieve dynamic, scalable and cost-effective marketplaces and eCommerce solutions has driven recent research efforts towards so-called Semantic Web Services that are enriching Web services with machine-processable semantics. To this end, the Web Service Modeling Ontology (WSMO) provides the conceptual underpinning and a formal language for semantically describing all relevant aspects of Web services in order to facilitate the automatization of discovering, combining and invoking electronic services over the Web. In this paper we describe the overall structure of WSMO by its four main elements: ontologies, which provide the terminology used by other WSMO elements, Web services, which provide access to services that, in turn, provide some value in some domain, goals that represent user desires, and mediators, which deal with interoperability problems between different WSMO elements. Along with introducing the main elements of WSMO, we provide a logical language for defining formal statements in WSMO together with some motivating examples from practical use cases which shall demonstrate the benefits of Semantic Web Services.

1,367 citations


Proceedings ArticleDOI
15 Aug 2005
TL;DR: This research suggests that rich representations of the user and the corpus are important for personalization, but that it is possible to approximate these representations and provide efficient client-side algorithms for personalizing search.
Abstract: We formulate and study search algorithms that consider a user's prior interactions with a wide variety of content to personalize that user's current Web search. Rather than relying on the unrealistic assumption that people will precisely specify their intent when searching, we pursue techniques that leverage implicit information about the user's interests. This information is used to re-rank Web search results within a relevance feedback framework. We explore rich models of user interests, built from both search-related information, such as previously issued queries and previously visited Web pages, and other information about the user such as documents and email the user has read and created. Our research suggests that rich representations of the user and the corpus are important for personalization, but that it is possible to approximate these representations and provide efficient client-side algorithms for personalizing search. We show that such personalization algorithms can significantly improve on current Web search.
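As a rough illustration of the client-side re-ranking idea described above, the following sketch blends an engine's original score with a crude relevance-feedback score computed from a hypothetical term-frequency user model; the names and weighting scheme are illustrative, not the authors' implementation.

```python
from collections import Counter
import math

def build_user_model(personal_docs):
    """Toy user model: term frequencies over documents the user has read or created."""
    model = Counter()
    for doc in personal_docs:
        model.update(doc.lower().split())
    return model

def personalized_score(snippet, user_model, original_score, alpha=0.5):
    """Blend the engine's score with a crude relevance-feedback score."""
    terms = snippet.lower().split()
    if not terms:
        return original_score
    overlap = sum(user_model.get(t, 0) for t in terms) / len(terms)
    feedback = math.log1p(overlap)          # dampen large counts
    return alpha * original_score + (1 - alpha) * feedback

def rerank(results, user_model, alpha=0.5):
    """Re-rank (snippet, engine_score) pairs entirely on the client."""
    return sorted(results,
                  key=lambda r: personalized_score(r[0], user_model, r[1], alpha),
                  reverse=True)

model = build_user_model(["tensor decomposition notes", "personalized web search draft"])
print(rerank([("celebrity gossip page", 0.9), ("personalized search tutorial", 0.8)], model))
```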

928 citations


Journal ArticleDOI
TL;DR: The urgent need for service composition is discussed, the required technologies to perform service composition are presented, and several different composition strategies, based on some currently existing composition platforms and frameworks are presented.
Abstract: Due to their heterogeneous nature, which stems from the definition of several XML-based standards to overcome platform and language dependence, web services have become an emerging and promising technology for designing and building complex inter-enterprise business applications out of single web-based software components. To establish a global component market and encourage extensive software reuse, service composition has attracted considerable research effort. This paper discusses the urgent need for service composition and the technologies required to perform it. It also presents several different composition strategies, based on currently existing composition platforms and frameworks that represent first implementations of state-of-the-art technologies, and gives an outlook on essential future research work.

920 citations


Journal ArticleDOI
TL;DR: The implementation and architecture of the METEOR-S Web Service Discovery Infrastructure is described, which leverages peer-to-peer computing as a scalable solution and an ontology-based approach to organize registries into domains, enabling domain based classification of all Web services.
Abstract: Web services are the new paradigm for distributed computing. They have much to offer towards interoperability of applications and integration of large scale distributed systems. To make Web services accessible to users, service providers use Web service registries to publish them. Current infrastructure of registries requires replication of all Web service publications in all Universal Business Registries. Large growth in number of Web services as well as the growth in the number of registries would make this replication impractical. In addition, the current Web service discovery mechanism is inefficient, as it does not support discovery based on the capabilities of the services, leading to a lot of irrelevant matches. Semantic discovery or matching of services is a promising approach to address this challenge. In this paper, we present a scalable, high performance environment for Web service publication and discovery among multiple registries. This work uses an ontology-based approach to organize registries into domains, enabling domain based classification of all Web services. Each of these registries supports semantic publication and discovery of Web services. We believe that the semantic approach suggested in this paper will significantly improve Web service publication and discovery involving a large number of registries. This paper describes the implementation and architecture of the METEOR-S Web Service Discovery Infrastructure, which leverages peer-to-peer computing as a scalable solution.

483 citations


Journal ArticleDOI
TL;DR: This survey paper focuses on Web information retrieval methods that use eigenvector computations, presenting the three popular methods of HITS, PageRank, and SALSA.
Abstract: Web information retrieval is significantly more challenging than traditional well-controlled, small document collection information retrieval. One main difference between traditional information retrieval and Web information retrieval is the Web's hyperlink structure. This structure has been exploited by several of today's leading Web search engines, particularly Google and Teoma. In this survey paper, we focus on Web information retrieval methods that use eigenvector computations, presenting the three popular methods of HITS, PageRank, and SALSA.
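As a concrete instance of the eigenvector computations this survey covers, here is a plain power-iteration PageRank over a small link graph; the damping factor and convergence test are the usual textbook defaults rather than anything specific to the paper.

```python
import numpy as np

def pagerank(adj, d=0.85, tol=1e-10, max_iter=100):
    """Power iteration on the Google matrix of a 0/1 adjacency matrix.

    adj[i, j] = 1 means page i links to page j.
    """
    n = adj.shape[0]
    out_deg = adj.sum(axis=1)
    # Row-stochastic transition matrix; dangling pages jump uniformly.
    P = np.where(out_deg[:, None] > 0, adj / np.maximum(out_deg[:, None], 1), 1.0 / n)
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_new = d * r @ P + (1 - d) / n
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r

# Tiny 4-page example: 0 -> 1,2 ; 1 -> 2 ; 2 -> 0 ; 3 -> 2
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 0],
              [0, 0, 1, 0]], dtype=float)
print(pagerank(A))
```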

415 citations


Proceedings ArticleDOI
10 May 2005
TL;DR: Experimental evaluations using a real-world data set collected from an MSN search engine show that CubeSVD achieves encouraging search results in comparison with some standard methods.
Abstract: As competition in the Web search market increases, there is a high demand for personalized Web search to conduct retrieval incorporating Web users' information needs. This paper focuses on utilizing clickthrough data to improve Web search. Since millions of searches are conducted every day, a search engine accumulates a large volume of clickthrough data, which records who submits queries and which pages he/she clicks on. The clickthrough data is highly sparse and contains different types of objects (user, query and Web page), and the relationships among these objects are also very complicated. By performing analysis on these data, we attempt to discover Web users' interests and the patterns by which users locate information. In this paper, a novel approach CubeSVD is proposed to improve Web search. The clickthrough data is represented by a 3-order tensor, on which we perform 3-mode analysis using the higher-order singular value decomposition technique to automatically capture the latent factors that govern the relations among these multi-type objects: users, queries and Web pages. A tensor reconstructed based on the CubeSVD analysis reflects both the observed interactions among these objects and the implicit associations among them. Therefore, Web search activities can be carried out based on CubeSVD analysis. Experimental evaluations using a real-world data set collected from an MSN search engine show that CubeSVD achieves encouraging search results in comparison with some standard methods.
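The core of CubeSVD is a truncated higher-order SVD of the (user, query, page) clickthrough tensor. The sketch below shows the standard HOSVD recipe on a toy tensor with NumPy; the ranks and data are made up, and the paper's full pipeline (weighting, smoothing, serving results) is not reproduced.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_multiply(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

def cube_svd(T, ranks):
    """Truncated higher-order SVD of a 3-order tensor indexed as (user, query, page)."""
    factors = []
    for mode, k in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :k])
    # Core tensor: project onto the leading singular vectors of each mode.
    core = T
    for mode, U in enumerate(factors):
        core = mode_multiply(core, U.T, mode)
    # Reconstruction captures latent user-query-page associations.
    recon = core
    for mode, U in enumerate(factors):
        recon = mode_multiply(recon, U, mode)
    return recon

# Toy clickthrough tensor: 4 users x 3 queries x 5 pages, sparse binary clicks.
rng = np.random.default_rng(0)
clicks = (rng.random((4, 3, 5)) < 0.2).astype(float)
smoothed = cube_svd(clicks, ranks=(2, 2, 2))
print(smoothed.shape)
```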

386 citations


Journal ArticleDOI
TL;DR: A system-level testing technique that combines test generation based on finite state machines with constraints with the goal of reducing the state space explosion otherwise inherent in using FSMs is proposed.
Abstract: Researchers and practitioners are still trying to find effective ways to model and test Web applications. This paper proposes a system-level testing technique that combines test generation based on finite state machines with constraints. We use a hierarchical approach to model potentially large Web applications. The approach builds hierarchies of Finite State Machines (FSMs) that model subsystems of the Web applications, and then generates test requirements as subsequences of states in the FSMs. These subsequences are then combined and refined to form complete executable tests. The constraints are used to select a reduced set of inputs with the goal of reducing the state space explosion otherwise inherent in using FSMs. The paper illustrates the technique with a running example of a Web-based course student information system and introduces a prototype implementation to support the technique.
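A minimal sketch of deriving test requirements as input subsequences from a single FSM; the hierarchy of FSMs, the constraints, and the refinement into executable tests described in the paper are not modeled here, and the example states and inputs are hypothetical.

```python
def test_sequences(fsm, start, finals, max_len=6):
    """Enumerate input sequences (test requirements) through an FSM.

    fsm: dict mapping state -> list of (input, next_state) transitions.
    Returns lists of inputs that drive the FSM from `start` to a final state.
    """
    sequences = []

    def dfs(state, inputs):
        if state in finals and inputs:
            sequences.append(list(inputs))
        if len(inputs) >= max_len:
            return
        for symbol, nxt in fsm.get(state, []):
            inputs.append(symbol)
            dfs(nxt, inputs)
            inputs.pop()

    dfs(start, [])
    return sequences

# Toy login/course-lookup FSM (hypothetical states and inputs).
fsm = {
    "Start":  [("open_login", "Login")],
    "Login":  [("valid_credentials", "Menu"), ("bad_credentials", "Login")],
    "Menu":   [("view_grades", "Grades"), ("logout", "Start")],
    "Grades": [("back", "Menu")],
}
for seq in test_sequences(fsm, "Start", {"Grades"}, max_len=5):
    print(" -> ".join(seq))
```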

368 citations


Proceedings ArticleDOI
15 Aug 2005
TL;DR: An additional criterion for web page ranking is introduced, namely the distance between a user profile defined using ODP topics and the sets of O DP topics covered by each URL returned in regular web search, and the boundaries of biasing PageRank on subtopics of the ODP are investigated.
Abstract: The Open Directory Project is clearly one of the largest collaborative efforts to manually annotate web pages. This effort involves over 65,000 editors and resulted in metadata specifying topic and importance for more than 4 million web pages. Still, given that this number is just about 0.05 percent of the Web pages indexed by Google, is this effort enough to make a difference? In this paper we discuss how these metadata can be exploited to achieve high quality personalized web search. First, we address this by introducing an additional criterion for web page ranking, namely the distance between a user profile defined using ODP topics and the sets of ODP topics covered by each URL returned in regular web search. We empirically show that this enhancement yields better results than current web search using Google. Then, in the second part of the paper, we investigate the boundaries of biasing PageRank on subtopics of the ODP in order to automatically extend these metadata to the whole web.
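The extra ranking criterion boils down to a tree distance between ODP topics. Below is a possible sketch, assuming topics are given as '/'-separated category paths and that engine rank and topic distance are simply added; the actual combination used in the paper may differ.

```python
def topic_distance(a, b):
    """Number of edges between two ODP topics given as '/'-separated paths."""
    pa, pb = a.strip("/").split("/"), b.strip("/").split("/")
    common = 0
    for x, y in zip(pa, pb):
        if x != y:
            break
        common += 1
    return (len(pa) - common) + (len(pb) - common)

def profile_distance(profile_topics, url_topics):
    """Smallest tree distance between any profile topic and any URL topic."""
    if not url_topics:
        return float("inf")
    return min(topic_distance(p, u) for p in profile_topics for u in url_topics)

def rerank(results, profile_topics, weight=1.0):
    """results: list of (url, engine_rank, odp_topics); lower combined score is better."""
    def score(item):
        _, rank, topics = item
        return rank + weight * profile_distance(profile_topics, topics)
    return sorted(results, key=score)

profile = ["Top/Computers/Internet/Searching", "Top/Science/Math"]
results = [
    ("http://a.example", 1, ["Top/Shopping/Books"]),
    ("http://b.example", 2, ["Top/Computers/Internet/Searching/Engines"]),
]
print([url for url, _, _ in rerank(results, profile)])
```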

327 citations


Patent
27 Jul 2005
TL;DR: In this paper, a system and methodology to assist users with data access activities and that includes such activities as routine web browsing and/or data access applications is presented. But, the system is limited to one-button access to user's desired web or data source information/destinations in order to mitigate efforts in retrieving and viewing such information.
Abstract: The present invention relates to a system and methodology to assist users with data access activities and that includes such activities as routine web browsing and/or data access applications. A coalesced display or montage of aggregated information is provided that is focused from a plurality of sources to achieve substantially one-button access to user's desired web or data source information/destinations in order to mitigate efforts in retrieving and viewing such information. Past web or other type data access patterns can be mined to predict future browsing sites or desired access locations. A system is provided that builds personalized web portals for associated users based on models mined from past data access patterns. The portals can provide links to web resources as well as embed content from distal (remote) pages or sites producing a montage of web or other type data content. Automated topic classification is employed to create multiple topic-centric views that can be invoked by a user.

325 citations


Proceedings ArticleDOI
10 May 2005
TL;DR: The experimental results show that PopRank can achieve significantly better ranking results than naively applying PageRank on the object graph, and the proposed efficient approaches to automatically decide these factors are proposed.
Abstract: In contrast with the current Web search methods that essentially do document-level ranking and retrieval, we are exploring a new paradigm to enable Web search at the object level. We collect Web information for objects relevant for a specific application domain and rank these objects in terms of their relevance and popularity to answer user queries. Traditional PageRank model is no longer valid for object popularity calculation because of the existence of heterogeneous relationships between objects. This paper introduces PopRank, a domain-independent object-level link analysis model to rank the objects within a specific domain. Specifically we assign a popularity propagation factor to each type of object relationship, study how different popularity propagation factors for these heterogeneous relationships could affect the popularity ranking, and propose efficient approaches to automatically decide these factors. Our experiments are done using 1 million CS papers, and the experimental results show that PopRank can achieve significantly better ranking results than naively applying PageRank on the object graph.
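A toy version of popularity propagation with per-relationship factors, in the spirit of PopRank; the graph, the factor values, and the uniform Web-popularity vector are illustrative, and the paper's exact model (including how factors are learned) is more involved.

```python
import numpy as np

def poprank(edges, n, factors, web_pop, epsilon=0.15, iters=100):
    """Toy object-level popularity propagation with per-relationship factors.

    edges:   list of (src, dst, rel_type) links between objects 0..n-1.
    factors: dict rel_type -> popularity propagation factor in [0, 1].
    web_pop: length-n Web-page-level popularity used as the random-jump part.
    """
    W = np.zeros((n, n))
    for src, dst, rel in edges:
        W[src, dst] += factors[rel]
    row_sum = W.sum(axis=1, keepdims=True)
    W = np.divide(W, row_sum, out=np.zeros_like(W), where=row_sum > 0)
    base = np.asarray(web_pop, dtype=float)
    base = base / base.sum()
    r = base.copy()
    for _ in range(iters):
        r = epsilon * base + (1 - epsilon) * (r @ W)
    return r

# Hypothetical mini graph: objects 0,1 are papers, 2 is an author, 3 is a venue.
edges = [(0, 2, "written_by"), (1, 2, "written_by"),
         (0, 3, "published_in"), (2, 0, "writes"), (3, 1, "publishes")]
factors = {"written_by": 0.3, "published_in": 0.3, "writes": 0.7, "publishes": 0.7}
print(poprank(edges, 4, factors, web_pop=[1, 1, 1, 1]))
```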

319 citations


Journal ArticleDOI
TL;DR: A look at how developers are going back to the future by building Web applications using Ajax (Asynchronous JavaScript and XML), a set of technologies mostly developed in the 1990s.
Abstract: Looks at how developers are going back to the future by building Web applications using Ajax (Asynchronous JavaScript and XML), a set of technologies mostly developed in the 1990s. A key advantage of Ajax applications is that they look and act more like desktop applications. Proponents argue that Ajax applications perform better than traditional Web programs. For example, an Ajax application can add or retrieve new data for the page it is working with, and the page will update immediately without reloading.

Proceedings Article
05 Jun 2005
TL;DR: A novel planning framework for the automated composition of web services that are specified and implemented in industrial standard languages for business processes modeling and execution, like BPEL4WS, based on state of the art techniques for planning under uncertainty.
Abstract: We propose a novel planning framework for the automated composition of web services. We consider services that are specified and implemented in industrial standard languages for business process modeling and execution, like BPEL4WS. These languages describe web services whose behavior is intrinsically asynchronous. For this reason, the key aspect of our framework is the modeling of asynchronous planning problems. In the paper we describe the framework and propose a planning approach that is based on state-of-the-art techniques for planning under uncertainty. Our experiments show that this approach can scale up to significant cases, i.e., to cases in which the manual development of BPEL4WS composed services is not trivial and is time consuming.

Proceedings Article
30 Aug 2005
TL;DR: In this article, the authors present Colombo, a framework in which web services are characterized in terms of the atomic processes (i.e., operations) they can perform; their impact on the real world (modeled as a relational database); their transition-based behavior; and the messages they can send and receive (from/to other web services and human clients).
Abstract: In this paper we present Colombo, a framework in which web services are characterized in terms of (i) the atomic processes (i.e., operations) they can perform; (ii) their impact on the "real world" (modeled as a relational database); (iii) their transition-based behavior; and (iv) the messages they can send and receive (from/to other web services and "human" clients). As such, Colombo combines key elements from the standards and research literature on (semantic) web services. Using Colombo, we study the problem of automatic service composition (synthesis) and devise a sound, complete and terminating algorithm for building a composite service. Specifically, the paper develops (i) a technique for handling the data, which ranges over an infinite domain, in a finite, symbolic way, and (ii) a technique to automatically synthesize composite web services, based on Propositional Dynamic Logic.

Proceedings ArticleDOI
10 May 2005
TL;DR: Thresher is described, a system that lets non-technical users teach their browsers how to extract semantic web content from HTML documents on the World Wide Web, and which enables a rich semantic interaction with existing web pages, "unwrapping" semantic data buried in the pages' HTML.
Abstract: We describe Thresher, a system that lets non-technical users teach their browsers how to extract semantic web content from HTML documents on the World Wide Web. Users specify examples of semantic content by highlighting them in a web browser and describing their meaning. We then use the tree edit distance between the DOM subtrees of these examples to create a general pattern, or wrapper, for the content, and allow the user to bind RDF classes and predicates to the nodes of these wrappers. By overlaying matches to these patterns on standard documents inside the Haystack semantic web browser, we enable a rich semantic interaction with existing web pages, "unwrapping" semantic data buried in the pages' HTML. By allowing end-users to create, modify, and utilize their own patterns, we hope to speed adoption and use of the Semantic Web and its applications.
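To give a feel for the wrapper-induction step, here is a much-simplified top-down tree edit distance over toy (tag, children) tuples; Thresher computes a proper tree edit distance over real DOM subtrees, so this is only a rough sketch of the idea.

```python
def tree_size(node):
    """node: (label, [children])."""
    return 1 + sum(tree_size(c) for c in node[1])

def seq_edit(xs, ys, node_cost, del_cost):
    """Edit distance between two sequences of child subtrees."""
    m, n = len(xs), len(ys)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = d[i - 1][0] + del_cost(xs[i - 1])
    for j in range(1, n + 1):
        d[0][j] = d[0][j - 1] + del_cost(ys[j - 1])
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + del_cost(xs[i - 1]),
                          d[i][j - 1] + del_cost(ys[j - 1]),
                          d[i - 1][j - 1] + node_cost(xs[i - 1], ys[j - 1]))
    return d[m][n]

def tree_distance(a, b):
    """Simplified top-down edit distance between two DOM-like (label, children) trees."""
    relabel = 0 if a[0] == b[0] else 1
    return relabel + seq_edit(a[1], b[1], tree_distance, tree_size)

# Two hypothetical record subtrees from a listing page.
t1 = ("div", [("h2", []), ("span", []), ("a", [])])
t2 = ("div", [("h2", []), ("a", [])])
print(tree_distance(t1, t2))   # -> 1 (one <span> child deleted)
```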

01 Jan 2005
TL;DR: Unlike OWL-S atomic processes, this work does not use a “pre-condition”, or equivalently, it assumes that the pre-condition is uniformly true, to enable a more uniform treatment of atomic process executions.
Abstract: Remark 1.1: Unlike OWL-S atomic processes, we do not use a “pre-condition”, or equivalently, we assume that the pre-condition is uniformly true. We do this to enable a more uniform treatment of atomic process executions: when a web service invokes an atomic process in Colombo, the invoking service will transition to a new state whether or not the atomic process “succeeds”. Optionally, the designer of the atomic process can include an output boolean variable ‘flag’, which is set to true if the execution “succeeded” and is set to false if the execution “failed”. These are conveniences that simplify bookkeeping, with no real impact on expressive power.

Proceedings ArticleDOI
11 Jul 2005
TL;DR: This paper presents a modeling language for the model-driven development of context-aware Web services based on the Unified Modeling Language (UML), and shows how UML can be used to specify information related to the design ofcontext-aware services.
Abstract: Context-aware Web services are emerging as a promising technology for the electronic businesses in mobile and pervasive environments. Unfortunately, complex context-aware services are still hard to build. In this paper, we present a modeling language for the model-driven development of context-aware Web services based on the Unified Modeling Language (UML). Specifically, we show how UML can be used to specify information related to the design of context-aware services. We present the abstract syntax and notation of the language and illustrate its usage using an example service. Our language offers significant design flexibility that considerably simplifies the development of context-aware Web services.

Journal ArticleDOI
TL;DR: Results show that user session data can be used to produce test suites more effective overall than those produced by the white-box techniques considered; however, the faults detected by the two classes of techniques differ, suggesting that the techniques are complementary.
Abstract: Web applications are vital components of the global information infrastructure, and it is important to ensure their dependability. Many techniques and tools for validating Web applications have been created, but few of these have addressed the need to test Web application functionality and none have attempted to leverage data gathered in the operation of Web applications to assist with testing. In this paper, we present several techniques for using user session data gathered as users operate Web applications to help test those applications from a functional standpoint. We report results of an experiment comparing these new techniques to existing white-box techniques for creating test cases for Web applications, assessing both the adequacy of the generated test cases and their ability to detect faults on a point-of-sale Web application. Our results show that user session data can be used to produce test suites more effective overall than those produced by the white-box techniques considered; however, the faults detected by the two classes of techniques differ, suggesting that the techniques are complementary.
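A minimal sketch of turning user-session data into candidate functional test cases, assuming a deliberately simplified log format (session id, method, URL); the paper's techniques for combining, reducing, and evaluating sessions are not shown.

```python
from collections import defaultdict
from urllib.parse import urlsplit, parse_qsl

def sessions_to_test_cases(log_lines):
    """Group logged requests by session into replayable request sequences.

    Each log line is assumed to look like:  <session_id> <method> <url>
    (a simplified format, not the paper's actual log schema).
    """
    suites = defaultdict(list)
    for line in log_lines:
        session_id, method, url = line.split(maxsplit=2)
        parts = urlsplit(url)
        suites[session_id].append({
            "method": method,
            "path": parts.path,
            "params": dict(parse_qsl(parts.query)),
        })
    # Each user session becomes one candidate functional test case.
    return list(suites.values())

log = [
    "s1 GET /catalog?category=books",
    "s1 POST /cart?item=42&qty=1",
    "s1 POST /checkout",
    "s2 GET /catalog?category=music",
]
for i, case in enumerate(sessions_to_test_cases(log), 1):
    print(f"test case {i}: {[step['path'] for step in case]}")
```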

Proceedings ArticleDOI
23 Oct 2005
TL;DR: Chickenfoot is described, a programming system embedded in the Firefox web browser, which enables end-users to automate, customize, and integrate web applications without examining their source code.
Abstract: On the desktop, an application can expect to control its user interface down to the last pixel, but on the World Wide Web, a content provider has no control over how the client will view the page, once delivered to the browser. This creates an opportunity for end-users who want to automate and customize their web experiences, but the growing complexity of web pages and standards prevents most users from realizing this opportunity. We describe Chickenfoot, a programming system embedded in the Firefox web browser, which enables end-users to automate, customize, and integrate web applications without examining their source code. One way Chickenfoot addresses this goal is a novel technique for identifying page components by keyword pattern matching. We motivate this technique by studying how users name web page components, and present a heuristic keyword matching algorithm that identifies the desired component from the user's name.
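A toy stand-in for the keyword pattern matching step: score each candidate page component's visible label against the user's name for it and pick the best match. The scoring heuristic is invented for illustration and is not Chickenfoot's actual algorithm.

```python
def best_component(name, components):
    """Pick the page component whose visible text best matches a keyword name.

    components: list of (component_id, label_text) pairs; a stand-in for the
    buttons, links and text boxes a browser extension would enumerate in the DOM.
    """
    query = set(name.lower().split())

    def score(label):
        words = set(label.lower().split())
        if not words:
            return 0.0
        hits = len(query & words)
        # Favor labels that cover the query and carry little extra text.
        return hits / len(query) + hits / len(words)

    ranked = sorted(components, key=lambda c: score(c[1]), reverse=True)
    return ranked[0] if ranked and score(ranked[0][1]) > 0 else None

page = [("btn1", "Search the Web"), ("btn2", "I'm Feeling Lucky"),
        ("box1", "Search query text box")]
print(best_component("search button", page))
```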

Book ChapterDOI
04 Apr 2005
TL;DR: This paper defines the notion of usability – an intuitive and locally provable soundness criterion for a given Web service – and demonstrates how the other questions could be answered.
Abstract: This paper is concerned with the application of Web services to distributed, cross-organizational business processes. In this scenario, it is crucial to answer the following questions: Do two Web services fit together in a way such that the composed system is deadlock-free? – the question of compatibility. Can one Web service be replaced by another while the remaining components stay untouched? – the question of equivalence. Can we reason about the soundness of one given Web service without considering the actual environment it will be used in? This paper defines the notion of usability – an intuitive and locally provable soundness criterion for a given Web service. Based on this notion, this paper demonstrates how the other questions could be answered. The presented method is based on Petri nets, because this formalism is widely used for modeling and analyzing business processes. Due to the existing Petri net semantics for BPEL4WS – a language that is in the process of becoming the industrial standard for Web service based business processes – the results are directly applicable to real world examples.
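The compatibility question above is essentially a reachability question. The following sketch explores the reachable markings of a small (bounded) Petri net and reports deadlocks; it is a generic check over a toy net, not the usability criterion defined in the paper.

```python
from collections import deque

def find_deadlocks(transitions, initial, finals, max_markings=10000):
    """Explore reachable markings of a (bounded) Petri net and report deadlocks.

    transitions: list of (pre, post) pairs, each a dict place -> token count.
    A deadlock is a reachable marking with no enabled transition that is not
    one of the accepted final markings.
    """
    def enabled(marking, pre):
        return all(marking.get(p, 0) >= n for p, n in pre.items())

    def fire(marking, pre, post):
        m = dict(marking)
        for p, n in pre.items():
            m[p] = m[p] - n
        for p, n in post.items():
            m[p] = m.get(p, 0) + n
        return frozenset((p, c) for p, c in m.items() if c > 0)

    start = frozenset((p, c) for p, c in initial.items() if c > 0)
    seen, queue, deadlocks = {start}, deque([start]), []
    while queue and len(seen) < max_markings:
        marking = queue.popleft()
        as_dict = dict(marking)
        successors = [fire(as_dict, pre, post)
                      for pre, post in transitions if enabled(as_dict, pre)]
        if not successors and marking not in finals:
            deadlocks.append(as_dict)
        for nxt in successors:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return deadlocks

# Two toy services that each wait for the other's message: a classic deadlock.
transitions = [
    ({"a_wait": 1, "msg_b": 1}, {"a_done": 1}),   # A proceeds only after B's message
    ({"b_wait": 1, "msg_a": 1}, {"b_done": 1}),   # B proceeds only after A's message
]
initial = {"a_wait": 1, "b_wait": 1}
finals = {frozenset({("a_done", 1), ("b_done", 1)})}
print(find_deadlocks(transitions, initial, finals))
```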

Proceedings Article
30 Jul 2005
TL;DR: This paper starts from descriptions of web services in standard process modeling and execution languages and automatically translates them into a planning domain that models the interactions among services at the knowledge level, to avoid the explosion of the search space due to the usually large and possibly infinite ranges of data values that are exchanged among services.
Abstract: In this paper, we address the problem of the automated composition of web services by planning on their "knowledge level" models. We start from descriptions of web services in standard process modeling and execution languages, like BPEL4WS, and automatically translate them into a planning domain that models the interactions among services at the knowledge level. This allows us to avoid the explosion of the search space due to the usually large and possibly infinite ranges of data values that are exchanged among services, and thus to scale up the applicability of state-of-the-art techniques for the automated composition of web services. We present the theoretical framework, implement it, and provide an experimental evaluation that shows the practical advantage of our approach w.r.t. techniques that are not based on a knowledge-level representation.

Journal ArticleDOI
TL;DR: This paper presents an agent-based and context-oriented approach that supports the composition of Web services, where software agents engage in conversations with their peers to agree on the Web services that participate in this process.
Abstract: This paper presents an agent-based and context-oriented approach that supports the composition of Web services. A Web service is an accessible application that other applications and humans can discover and invoke to satisfy multiple needs. To reduce the complexity featuring the composition of Web services, two concepts are put forward, namely, software agent and context. A software agent is an autonomous entity that acts on behalf of users and the context is any relevant information that characterizes a situation. During the composition process, software agents engage in conversations with their peers to agree on the Web services that participate in this process. Conversations between agents take into account the execution context of the Web services. The security of the computing resources on which the Web services are executed constitutes another core component of the agent-based and context-oriented approach presented in this paper.

Journal ArticleDOI
TL;DR: Results show that knowledge map outperformed Kartoo, a commercial search engine with graphical display, in terms of effectiveness and efficiency, and Web community was found to be more effective, efficient, and usable than result list.
Abstract: Information overload often hinders knowledge discovery on the Web. Existing tools lack analysis and visualization capabilities. Search engine displays often overwhelm users with irrelevant information. This research proposes a visual framework for knowledge discovery on the Web. The framework incorporates Web mining, clustering, and visualization techniques to support effective exploration of knowledge. Two new browsing methods were developed and applied to the business intelligence domain: Web community uses a genetic algorithm to organize Web sites into a tree format; knowledge map uses a multidimensional scaling algorithm to place Web sites as points on a screen. Experimental results show that knowledge map outperformed Kartoo, a commercial search engine with graphical display, in terms of effectiveness and efficiency. Web community was found to be more effective, efficient, and usable than result list. Our visual framework thus helps to alleviate information overload on the Web and offers practical implications for search engine developers.
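The knowledge map places Web sites as points via multidimensional scaling. Below is a sketch of classical MDS with NumPy on a hypothetical dissimilarity matrix between four sites; the paper's specific MDS variant and the genetic-algorithm Web-community method are not reproduced.

```python
import numpy as np

def classical_mds(D, dims=2):
    """Place items on a 2-D map from a pairwise distance matrix (classical MDS)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J          # double-centred squared distances
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1][:dims]
    coords = eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 0))
    return coords

# Hypothetical dissimilarities between four business-intelligence web sites.
D = np.array([[0.0, 0.2, 0.8, 0.9],
              [0.2, 0.0, 0.7, 0.8],
              [0.8, 0.7, 0.0, 0.3],
              [0.9, 0.8, 0.3, 0.0]])
for site, (x, y) in zip(["siteA", "siteB", "siteC", "siteD"], classical_mds(D)):
    print(site, round(x, 3), round(y, 3))
```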

Proceedings ArticleDOI
07 Jun 2005
TL;DR: This paper provides a theoretical framework to investigate the query generation problem for the hidden Web and proposes effective policies for generating queries automatically and experimentally evaluates the effectiveness of these policies on 4 real hidden Web sites.
Abstract: An ever-increasing amount of information on the Web today is available only through search interfaces: the users have to type in a set of keywords in a search form in order to access the pages from certain Web sites. These pages are often referred to as the Hidden Web or the Deep Web. Since there are no static links to the Hidden Web pages, search engines cannot discover and index such pages and thus do not return them in the results. However, according to recent studies, the content provided by many Hidden Web sites is often of very high quality and can be extremely valuable to many users. In this paper, we study how we can build an effective Hidden Web crawler that can autonomously discover and download pages from the Hidden Web. Since the only "entry point" to a Hidden Web site is a query interface, the main challenge that a Hidden Web crawler has to face is how to automatically generate meaningful queries to issue to the site. Here, we provide a theoretical framework to investigate the query generation problem for the Hidden Web and we propose effective policies for generating queries automatically. Our policies proceed iteratively, issuing a different query in every iteration. We experimentally evaluate the effectiveness of these policies on 4 real Hidden Web sites and our results are very promising. For instance, in one experiment, one of our policies downloaded more than 90% of a Hidden Web site (that contains 14 million documents) after issuing fewer than 100 queries.
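A very simplified version of the iterative query-generation loop: issue a query, ingest what comes back, and pick the next keyword from the downloaded text. Picking the most frequent unseen term is only a crude stand-in for the paper's estimated-return policies.

```python
from collections import Counter

def crawl_hidden_site(issue_query, seed_term, max_queries=100):
    """Iterative query generation for a hidden-Web crawler (simplified policy).

    issue_query(term) -> list of document texts returned by the site's search form.
    The next query is the most frequent unseen term in everything downloaded so far.
    """
    issued, downloaded = set(), []
    term_freq = Counter()
    term = seed_term
    for _ in range(max_queries):
        issued.add(term)
        for doc in issue_query(term):
            if doc not in downloaded:
                downloaded.append(doc)
                term_freq.update(w for w in doc.lower().split() if w.isalpha())
        candidates = [(count, t) for t, count in term_freq.items() if t not in issued]
        if not candidates:
            break
        term = max(candidates)[1]
    return downloaded

# Toy "hidden" corpus reachable only through a keyword search form.
corpus = ["the hidden web is large", "deep web pages need queries",
          "queries unlock hidden pages", "search forms guard the deep web"]
fake_site = lambda term: [d for d in corpus if term in d.split()]
print(len(crawl_hidden_site(fake_site, "web", max_queries=5)))
```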

Book ChapterDOI
06 Nov 2005
TL;DR: A novel Semantic Web navigation model providing additional navigation paths through Swoogle's search services such as the Ontology Dictionary is proposed, and algorithms for ranking the importance ofSemantic Web objects at three levels of granularity: documents, terms and RDF graphs are developed.
Abstract: Swoogle helps software agents and knowledge engineers find Semantic Web knowledge encoded in RDF and OWL documents on the Web. Navigating such a Semantic Web on the Web is difficult due to the paucity of explicit hyperlinks beyond the namespaces in URIrefs and the few inter-document links like rdfs:seeAlso and owl:imports. In order to solve this issue, this paper proposes a novel Semantic Web navigation model providing additional navigation paths through Swoogle's search services such as the Ontology Dictionary. Using this model, we have developed algorithms for ranking the importance of Semantic Web objects at three levels of granularity: documents, terms and RDF graphs. Experiments show that Swoogle outperforms conventional web search engines and other ontology libraries in finding more ontologies, ranking their importance, and thus promoting the use and emergence of consensus ontologies.

Book ChapterDOI
06 Nov 2005
TL;DR: Piggy Bank is a web browser extension that lets users make use of Semantic Web content within Web content as users browse the Web, and Semantic Bank is a web server application that lets Piggy Bank users share the Semantic Web information they have collected, enabling collaborative efforts to build sophisticated Semantic Web information repositories through simple, everyday use of Piggy Bank.
Abstract: The Semantic Web Initiative envisions a Web wherein information is offered free of presentation, allowing more effective exchange and mixing across web sites and across web pages. But without substantial Semantic Web content, few tools will be written to consume it; without many such tools, there is little appeal to publish Semantic Web content. To break this chicken-and-egg problem, thus enabling more flexible information access, we have created a web browser extension called Piggy Bank that lets users make use of Semantic Web content within Web content as users browse the Web. Wherever Semantic Web content is not available, Piggy Bank can invoke screenscrapers to re-structure information within web pages into Semantic Web format. Through the use of Semantic Web technologies, Piggy Bank provides direct, immediate benefits to users in their use of the existing Web. Thus, the existence of even just a few Semantic Web-enabled sites or a few scrapers already benefits users. Piggy Bank thereby offers an easy, incremental upgrade path to users without requiring a wholesale adoption of the Semantic Web's vision. To further improve this Semantic Web experience, we have created Semantic Bank, a web server application that lets Piggy Bank users share the Semantic Web information they have collected, enabling collaborative efforts to build sophisticated Semantic Web information repositories through simple, everyday use of Piggy Bank.

Book
24 Mar 2005
TL;DR: This work presents an efficient Algorithm for OWL-S Based Semantic Search in UDDI and a Semantic Approach for Designing E-Business Protocols using METEOR-S Web Service Annotation Framework with Machine Learning Classification.
Abstract: Contents: Introduction to Semantic Web Services and Web Process Composition. Panel: Academic and Industrial Research: Do Their Approaches Differ in Adding Semantics to Web Services?. Talk: Interoperability in Semantic Web Services. Full Papers: Bringing Semantics to Web Services: The OWL-S Approach; A Survey of Automated Web Service Composition Methods; Enhancing Web Services Description and Discovery to Facilitate Composition; Compensation in the World of Web Services Composition; Trust Negotiation for Semantic Web Services; An Efficient Algorithm for OWL-S Based Semantic Search in UDDI; A Semantic Approach for Designing E-Business Protocols; Towards Automatic Discovery of Web Portals; METEOR-S Web Service Annotation Framework with Machine Learning Classification.

Proceedings ArticleDOI
02 Apr 2005
TL;DR: A comparison of different methods for finding accessibility problems affecting users who are blind finds multiple developers, using a screen reader, were most consistently successful at finding most classes of problems, and tended to find about 50% of known problems.
Abstract: Web access for users with disabilities is an important goal and challenging problem for web content developers and designers. This paper presents a comparison of different methods for finding accessibility problems affecting users who are blind. Our comparison focuses on techniques that might be of use to Web developers without accessibility experience, a large and important group that represents a major source of inaccessible pages. We compare a laboratory study with blind users to an automated tool, expert review by web designers with and without a screen reader, and remote testing by blind users. Multiple developers, using a screen reader, were most consistently successful at finding most classes of problems, and tended to find about 50% of known problems. Surprisingly, a remote study with blind users was one of the least effective methods. All of the techniques, however, had different, complementary strengths and weaknesses.

Proceedings Article
01 Jan 2005
TL;DR: The OWL web ontology language is extended, with fuzzy set theory, in order to be able to capture, represent and reason with information that is many times imprecise or vague.
Abstract: In the Semantic Web context, information would be retrieved, processed, shared, reused and aligned in as automated a manner as possible. Our experience with such applications in the Semantic Web has shown that these are rarely a matter of true or false but rather procedures that require degrees of relatedness, similarity, or ranking. Apart from the wealth of applications that are inherently imprecise, information itself is often imprecise or vague. For example, a “hot” place, an “expensive” item, a “fast” car, or a “near” city are all such concepts. Dealing with this type of information would yield more realistic, intelligent and effective applications. In the current paper we extend the OWL web ontology language with fuzzy set theory, in order to be able to capture, represent and reason with such information.
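To make the vagueness point concrete, here is a tiny fuzzy-set sketch in the style of the examples above (an "expensive" item over a price domain), using the standard Zadeh min/max/complement semantics; the membership functions and thresholds are invented and this is not the paper's fuzzy OWL syntax.

```python
def triangular(a, b, c):
    """Triangular membership function: 0 at a, peak 1 at b, back to 0 at c."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

# Fuzzy concepts over a price domain (thresholds are purely illustrative).
cheap = triangular(-1, 0, 60)          # left shoulder approximated by a triangle
expensive = triangular(40, 120, 200)

def f_and(mu1, mu2):                   # fuzzy intersection (Zadeh min semantics)
    return lambda x: min(mu1(x), mu2(x))

def f_not(mu):                         # fuzzy complement
    return lambda x: 1.0 - mu(x)

moderately_priced = f_and(f_not(cheap), f_not(expensive))
for price in (10, 50, 90, 150):
    print(price, round(expensive(price), 2), round(moderately_priced(price), 2))
```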

Journal ArticleDOI
TL;DR: The development of on-line software tools is changing the way the authors traditionally perform analysis in drug design, but will chemoinformatics be forever behind bioinformatics in this development?

Journal ArticleDOI
TL;DR: This article shows how Web process composition techniques can be enhanced by using semantic process templates to capture the semantic requirements of the process, using Semantic Web techniques for process template definition and Web service discovery.
Abstract: Web services have the potential to revolutionize e-commerce by enabling businesses to interact with each other on the fly. To date, however, Web processes using Web services have been created mostly at the syntactic level. Current composition standards focus on building processes based on the interface description of the participating services. This rigid approach, with its strong coupling between the process and the interface of the participating services, does not allow businesses to dynamically change partners and services. As shown in this article, Web process composition techniques can be enhanced by using semantic process templates to capture the semantic requirements of the process. The semantic process templates act as configurable modules for common industry processes maintaining the semantics of the participating activities, control flow, intermediate calculations, and conditional branches, and exposing them in an industry-accepted interface. The templates are instantiated to form executable processes according to the semantics of the activities in the templates. The use of ontologies in template definition allows much richer description of activity requirements and a more effective way of locating services to carry out activities in the executable Web process. Discovery of services considers not only functionality, but also the quality of service (QoS) of the corresponding activities. This unique approach combines the expressive power of present Web service composition standards with the advantages of Semantic Web techniques for process template definition and Web service discovery. The prototype implementation of the framework for building the templates carries out Semantic Web service discovery and generates the processes.