
Showing papers on "Web modeling published in 2004"


Journal ArticleDOI
TL;DR: This paper presents a middleware platform which addresses the issue of selecting Web services for the purpose of their composition in a way that maximizes user satisfaction expressed as utility functions over QoS attributes, while satisfying the constraints set by the user and by the structure of the composite service.
Abstract: The paradigmatic shift from a Web of manual interactions to a Web of programmatic interactions driven by Web services is creating unprecedented opportunities for the formation of online business-to-business (B2B) collaborations. In particular, the creation of value-added services by composition of existing ones is gaining a significant momentum. Since many available Web services provide overlapping or identical functionality, albeit with different quality of service (QoS), a choice needs to be made to determine which services are to participate in a given composite service. This paper presents a middleware platform which addresses the issue of selecting Web services for the purpose of their composition in a way that maximizes user satisfaction expressed as utility functions over QoS attributes, while satisfying the constraints set by the user and by the structure of the composite service. Two selection approaches are described and compared: one based on local (task-level) selection of services and the other based on global allocation of tasks to services using integer programming.
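
To make the task-level selection concrete, here is a minimal, hypothetical Python sketch of the local selection idea: score each candidate service by a weighted utility over normalized QoS attributes and pick the best one. The attribute names, weights, and scoring function are illustrative assumptions, not the paper's middleware API.

```python
# Minimal sketch of local (task-level) QoS-driven service selection.
# Attribute names, weights, and scoring are illustrative assumptions,
# not the actual middleware API described in the paper.

def normalize(value, lo, hi, lower_is_better=False):
    """Scale a QoS value to [0, 1]; invert when smaller values are preferred."""
    if hi == lo:
        return 1.0
    score = (value - lo) / (hi - lo)
    return 1.0 - score if lower_is_better else score

def select_service(candidates, weights):
    """Pick the candidate maximizing a weighted utility over QoS attributes."""
    bounds = {
        attr: (min(c[attr] for c in candidates), max(c[attr] for c in candidates))
        for attr in weights
    }
    def utility(c):
        return sum(
            w * normalize(c[attr], *bounds[attr],
                          lower_is_better=attr in ("price", "latency"))
            for attr, w in weights.items()
        )
    return max(candidates, key=utility)

candidates = [
    {"name": "svcA", "price": 5.0, "latency": 300, "reliability": 0.95},
    {"name": "svcB", "price": 2.0, "latency": 800, "reliability": 0.90},
]
print(select_service(candidates, {"price": 0.3, "latency": 0.3, "reliability": 0.4})["name"])
```

The global approach described in the paper instead optimizes the whole composition at once (e.g., with integer programming), which can respect end-to-end constraints that purely local choices may violate.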

2,872 citations


BookDOI
01 Jan 2004
TL;DR: DODDLE-R, a support environment for user-centered ontology development, consists of two main parts: a pre-processing part, which generates a prototype ontology semi-automatically, and a quality improvement part, which supports its interactive refinement.
Abstract: To enable on-the-fly ontology construction for the Semantic Web, this paper proposes DODDLE-R, a support environment for user-centered ontology development. It consists of two main parts: a pre-processing part and a quality improvement part. The pre-processing part generates a prototype ontology semi-automatically, and the quality improvement part supports its interactive refinement. Because careful construction of ontologies from the preliminary phase is more efficient than attempting to generate them fully automatically (which may require many manual modifications afterwards), the quality improvement part plays a significant role in DODDLE-R. Through interactive support for improving the quality of the prototype ontology, an OWL-Lite level ontology, which consists of taxonomic relationships (class-subclass relationships) and non-taxonomic relationships (defined as properties), is constructed efficiently.

2,006 citations


Book ChapterDOI
06 Jul 2004
TL;DR: An overview is given of recent research efforts on automatic Web service composition from both the workflow and AI planning research communities.

Abstract: In today’s Web, Web services are created and updated on the fly. It is already beyond human ability to analyze them and generate composition plans manually. A number of approaches have been proposed to tackle this problem, most of them inspired by research on cross-enterprise workflow and AI planning. This paper gives an overview of recent research efforts on automatic Web service composition from both the workflow and the AI planning research communities.

1,216 citations


Proceedings ArticleDOI
Daniel E. Rose, Danny Levinson
17 May 2004
TL;DR: A framework for understanding the underlying goals of user searches is described and the experience in using the framework to manually classify queries from a web search engine is illustrated.
Abstract: Previous work on understanding user web search behavior has focused on how people search and what they are searching for, but not why they are searching. In this paper, we describe a framework for understanding the underlying goals of user searches, and our experience in using the framework to manually classify queries from a web search engine. Our analysis suggests that so-called "navigational" searches are less prevalent than generally believed, while a previously unexplored "resource-seeking" goal may account for a large fraction of web searches. We also illustrate how this knowledge of user search goals might be used to improve future web search engines.

1,062 citations


Book ChapterDOI
Xin Dong, Alon Halevy, Jayant Madhavan, Ema Nemes, Jun Zhang
31 Aug 2004
TL;DR: Woogle supports similarity search for web services, such as finding similar web-service operations and finding operations that compose with a given one, and novel techniques to support these types of searches are described.
Abstract: Web services are loosely coupled software components, published, located, and invoked across the web. The growing number of web services available within an organization and on the Web raises a new and challenging search problem: locating desired web services. Traditional keyword search is insufficient in this context: the specific types of queries users require are not captured, the very small text fragments in web services are unsuitable for keyword search, and the underlying structure and semantics of the web services are not exploited. We describe the algorithms underlying the Woogle search engine for web services. Woogle supports similarity search for web services, such as finding similar web-service operations and finding operations that compose with a given one. We describe novel techniques to support these types of searches, and an experimental study on a collection of over 1500 web-service operations that shows the high recall and precision of our algorithms.
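
As a rough illustration of operation-level similarity search, the hedged sketch below scores two hypothetical web-service operations by term overlap between their names and parameter lists. Woogle's actual algorithm additionally clusters parameter names into semantic concepts; this toy version only uses bag-of-words cosine similarity.

```python
# Illustrative sketch: score similarity between web-service operations by
# comparing the terms in their names and parameter lists. Woogle's real
# algorithm clusters parameter names into concepts; this toy version only
# uses term overlap (cosine similarity over a bag of words).

import math
import re
from collections import Counter

def terms(operation):
    """Split an operation description into lowercase word tokens."""
    text = " ".join([operation["name"]] + operation["inputs"] + operation["outputs"])
    return Counter(t.lower() for t in re.findall(r"[A-Z][a-z]*|[a-z]+", text))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

op1 = {"name": "GetWeatherByZip", "inputs": ["zipCode"], "outputs": ["temperature"]}
op2 = {"name": "GetTemperature", "inputs": ["zip"], "outputs": ["temperature"]}
print(round(cosine(terms(op1), terms(op2)), 2))
```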

828 citations


Journal ArticleDOI
TL;DR: A sound and complete algorithm is provided to translate OWL-S service descriptions to a SHOP2 domain, and the correctness of the algorithm is proved by showing its correspondence to the situation calculus semantics of OWL-S.

819 citations


Journal ArticleDOI
TL;DR: Four key issues for Web service composition are described; such composition offers developers reuse possibilities and gives users seamless access to a variety of complex services.
Abstract: Web service composition lets developers create applications on top of service-oriented computing's native description, discovery, and communication capabilities. Such applications are rapidly deployable and offer developers reuse possibilities and users seamless access to a variety of complex services. There are many existing approaches to service composition, ranging from abstract methods to those aiming to be industry standards. The authors describe four key issues for Web service composition.

770 citations


Proceedings ArticleDOI
17 May 2004
TL;DR: A lattice-based static analysis algorithm derived from type systems and typestate is created, and its soundness is addressed, thus securing Web applications in the absence of user intervention and reducing potential runtime overhead by 98.4%.
Abstract: Security remains a major roadblock to universal acceptance of the Web for many kinds of transactions, especially since the recent sharp increase in remotely exploitable vulnerabilities has been attributed to Web application bugs. Many verification tools are discovering previously unknown vulnerabilities in legacy C programs, raising hopes that the same success can be achieved with Web applications. In this paper, we describe a sound and holistic approach to ensuring Web application security. Viewing Web application vulnerabilities as a secure information flow problem, we created a lattice-based static analysis algorithm derived from type systems and typestate, and addressed its soundness. During the analysis, sections of code considered vulnerable are instrumented with runtime guards, thus securing Web applications in the absence of user intervention. With sufficient annotations, runtime overhead can be reduced to zero. We also created a tool named WebSSARI (Web application Security by Static Analysis and Runtime Inspection) to test our algorithm, and used it to verify 230 open-source Web application projects on SourceForge.net, selected to represent projects of different maturity, popularity, and scale. Of these, 69 contained vulnerabilities. After notifying the developers, 38 acknowledged our findings and stated their plans to provide patches. Our statistics also show that static analysis reduced potential runtime overhead by 98.4%.
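
As a toy Python illustration of the general information-flow idea (not WebSSARI itself, which analyzes PHP-style code statically), the sketch below marks values from untrusted sources as tainted and requires a sanitizing guard before they reach a sensitive sink; all names and mechanics are assumptions for illustration only.

```python
# Toy illustration of the secure-information-flow idea: values from untrusted
# sources are marked tainted, and a guard (sanitizer) is required before they
# reach a sensitive sink. WebSSARI performs this analysis statically and
# inserts runtime guards automatically; this sketch only mimics the behaviour.

import html

class Tainted(str):
    """A string whose content came from an untrusted source."""

def from_request(value: str) -> Tainted:
    return Tainted(value)            # source: everything from the request is tainted

def sanitize(value: str) -> str:
    return html.escape(str(value))   # guard: escaping produces an untainted copy

def render(page_fragment: str) -> str:
    if isinstance(page_fragment, Tainted):
        raise ValueError("tainted value reached a sink without a guard")
    return f"<div>{page_fragment}</div>"

user_input = from_request("<script>alert('xss')</script>")
print(render(sanitize(user_input)))   # guarded flow is accepted
# render(user_input) would raise: unguarded tainted flow
```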

655 citations


Book
16 Apr 2004
TL;DR: This guide will help you dramatically reduce the risk, complexity, and cost of integrating the many new concepts and technologies introduced by the SOA platform.
Abstract: "Web services is the integration technology preferred by organizations implementing service-oriented architectures. I would recommend that anybody involved in application development obtain a working knowledge of these technologies, and I'm pleased to recommend Erl's book as a great place to begin." - Tom Glover, Senior Program Manager, Web Services Standards, IBM Software Group, and Chairman of the Web Services Interoperability Organization (WS-I).

"An excellent guide to building and integrating XML and Web services, providing pragmatic recommendations for applying these technologies effectively. The author tackles numerous integration challenges, identifying common mistakes and providing the guidance needed to get it right the first time. A valuable resource for understanding and realizing the benefits of service-oriented architecture in the enterprise." - David Keogh, Program Manager, Visual Studio Enterprise Tools, Microsoft.

"Leading-edge IT organizations are currently exploring second-generation web service technologies, but introductory material beyond technical specifications is sparse. Erl explains many of these emerging technologies in simple terms, elucidating the difficult concepts with appropriate examples, and demonstrates how they contribute to service-oriented architectures. I highly recommend this book to enterprise architects for their shelves." - Kevin P. Davis, Ph.D., Software Architect.

Service-oriented integration with less cost and less risk. The emergence of key second-generation Web services standards has positioned service-oriented architecture (SOA) as the foremost platform for contemporary business automation solutions. The integration of SOA principles and technology is empowering organizations to build applications with unprecedented levels of flexibility, agility, and sophistication (while also allowing them to leverage existing legacy environments). This guide will help you dramatically reduce the risk, complexity, and cost of integrating the many new concepts and technologies introduced by the SOA platform. It brings together the first comprehensive collection of field-proven strategies, guidelines, and best practices for making the transition toward the service-oriented enterprise. Writing for architects, analysts, managers, and developers, Thomas Erl offers expert advice for making strategic decisions about both immediate and long-term integration issues. Erl addresses a broad spectrum of integration challenges, covering technical and design issues as well as strategic planning. The book covers crucial second-generation (WS-*) Web services standards: BPEL4WS, WS-Security, WS-Coordination, WS-Transaction, WS-Policy, WS-ReliableMessaging, and WS-Attachments. It includes hundreds of individual integration strategies and more than 60 best practices for both XML and Web services technologies, a complete tutorial on service-oriented design principles for business and technical modeling, design issues related to a wide variety of service-oriented integration architectures that integrate XML and Web services into legacy and EAI environments, and a clear roadmap for planning a long-term migration toward a standardized service-oriented enterprise. Service-oriented architecture is no longer an exclusive discipline practiced only by expensive consultants.
With this book's help, you can plan, architect, and implement your own service-oriented environments efficiently and cost-effectively. About the Web sites: Erl's Service-Oriented Architecture books are supported by two Web sites. http://www.soabooks.com provides a variety of content resources, and http://www.soaspecs.com supplies a descriptive portal to referenced specifications.

627 citations


Proceedings ArticleDOI
17 May 2004
TL;DR: MWSAF (METEOR-S Web Service Annotation Framework) is presented, a framework for semi-automatically marking up Web service descriptions with ontologies, together with algorithms to match and annotate WSDL files with relevant ontologies.
Abstract: The World Wide Web is emerging not only as an infrastructure for data, but also for a broader variety of resources that are increasingly being made available as Web services. Relevant current standards like UDDI, WSDL, and SOAP are in their fledgling years and form the basis of making Web services a workable and broadly adopted technology. However, realizing the fuller scope of the promise of Web services and associated service oriented architecture will require further technological advances in the areas of service interoperation, service discovery, service composition, and process orchestration. Semantics, especially as supported by the use of ontologies, and related Semantic Web technologies, are likely to provide better qualitative and scalable solutions to these requirements. Just as semantic annotation of data in the Semantic Web is the first critical step to better search, integration and analytics over heterogeneous data, semantic annotation of Web services is an equally critical first step to achieving the above promise. Our approach is to work with existing Web services technologies and combine them with ideas from the Semantic Web to create a better framework for Web service discovery and composition. In this paper we present MWSAF (METEOR-S Web Service Annotation Framework), a framework for semi-automatically marking up Web service descriptions with ontologies. We have developed algorithms to match and annotate WSDL files with relevant ontologies. We use domain ontologies to categorize Web services into domains. An empirical study of our approach is presented to help evaluate its performance.
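
For flavor, here is a hypothetical sketch of the categorization step: score each candidate domain ontology by how well WSDL element names match its concept names, then pick the best-matching domain. MWSAF's real matcher also compares schema and ontology structure; the names and the similarity measure below are illustrative assumptions.

```python
# Toy sketch of categorizing a Web service: score each candidate domain
# ontology by how well the WSDL element names match its concept names, and
# annotate with the best-matching domain. MWSAF's real matcher compares the
# structure of XML schemas and ontology graphs; this version only compares names.

from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def score_ontology(wsdl_terms, ontology_concepts):
    """Average best-match similarity of every WSDL term against the ontology."""
    return sum(
        max(name_similarity(t, c) for c in ontology_concepts) for t in wsdl_terms
    ) / len(wsdl_terms)

wsdl_terms = ["FlightNumber", "DepartureAirport", "ArrivalTime"]
ontologies = {
    "travel": ["Flight", "Airport", "ArrivalTime", "Hotel"],
    "finance": ["Account", "Transaction", "Balance"],
}
best = max(ontologies, key=lambda name: score_ontology(wsdl_terms, ontologies[name]))
print(best)  # expected: "travel"
```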

573 citations


Book ChapterDOI
01 Jan 2004
TL;DR: This chapter introduces web services and explains their role in Microsoft’s vision of the programmable web and removes some of the confusion surrounding technical terms like WSDL, SOAP, and UDDI.
Abstract: Microsoft has promoted ASP.NET’s new web services more than almost any other part of the .NET Framework. But despite its efforts, confusion is still widespread about what a web service is and, more importantly, what it’s meant to accomplish. This chapter introduces web services and explains their role in Microsoft’s vision of the programmable web. Along the way, you’ll learn about the open-standards plumbing that allows web services to work, and some of the confusion surrounding technical terms like WSDL (Web Service Description Language), SOAP, and UDDI (Universal Description, Discovery, and Integration) will be cleared up.

Proceedings ArticleDOI
15 Sep 2004
TL;DR: This work presents a constraint driven Web service composition tool in METEOR-S, which allows the process designers to bind Web services to an abstract process, based on business and process constraints and generate an executable process.
Abstract: Creating Web processes using Web service technology gives us the opportunity to select new services which best suit our needs at the moment. Doing this automatically requires us to quantify our criteria for selection. In addition, there are challenging issues of correctness and optimality. We present a constraint-driven Web service composition tool in METEOR-S, which allows process designers to bind Web services to an abstract process, based on business and process constraints, and to generate an executable process. Our approach is to reduce much of the service composition problem to a constraint satisfaction problem. It uses a multiphase approach for constraint analysis. This work was done as part of the METEOR-S framework, which aims to support the complete lifecycle of semantic Web processes.
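
As an illustration of the constraint-satisfaction framing (not the METEOR-S tool itself), the brute-force sketch below binds one hypothetical candidate service to each abstract task subject to a global cost constraint; task names, attributes, and the constraint are made up.

```python
# Brute-force sketch of framing service binding as a constraint satisfaction
# problem: pick one concrete service per abstract task so that a global
# business constraint (here: total cost) holds. The actual METEOR-S tool uses
# a multiphase constraint analysis; all names and numbers here are made up.

from itertools import product

tasks = {
    "payment":  [{"name": "PayFast", "cost": 4}, {"name": "PayCheap", "cost": 1}],
    "shipping": [{"name": "ShipAir", "cost": 6}, {"name": "ShipGround", "cost": 2}],
}
MAX_TOTAL_COST = 5

def feasible_bindings(tasks, max_cost):
    names, options = zip(*tasks.items())
    for combo in product(*options):
        if sum(s["cost"] for s in combo) <= max_cost:
            yield dict(zip(names, (s["name"] for s in combo)))

for binding in feasible_bindings(tasks, MAX_TOTAL_COST):
    print(binding)   # only bindings satisfying the cost constraint are printed
```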

Proceedings ArticleDOI
20 Sep 2004
TL;DR: A mechanism is introduced that determines the QoS of a Web service composition by aggregating the QoS dimensions of the individual services, building upon abstract composition patterns derived from van der Aalst et al.'s comprehensive collection of workflow patterns.
Abstract: Contributions in the field of Web services have identified that (a) finding matches between semantic descriptions of advertised and requested services and (b) nonfunctional characteristics - the quality of service (QoS) - are the most crucial criteria for composition of Web services. A mechanism is introduced that determines the QoS of a Web service composition by aggregating the QoS dimensions of the individual services. This makes it possible to verify whether a set of services selected for composition satisfies the QoS requirements for the whole composition. The aggregation builds upon abstract composition patterns, which represent basic structural elements of a composition, like sequence, loop, or parallel execution. This work focuses on workflow management environments. We define composition patterns that are derived from van der Aalst et al.'s comprehensive collection of workflow patterns. The resulting aggregation schema supports the same structural elements as found in workflows. Furthermore, the aggregation of several QoS dimensions is discussed.
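
To make the aggregation idea concrete, the sketch below aggregates a single QoS dimension (response time) over the basic patterns named above: sequence adds, parallel execution takes the slowest branch, and a loop multiplies by the expected iteration count. These are the common textbook rules, not the paper's full aggregation schema, and the timing values are made up.

```python
# Sketch of QoS aggregation over basic composition patterns, for one
# dimension (response time). Sequence adds, parallel execution takes the
# slowest branch, and a loop multiplies by the expected number of iterations.
# The paper defines a fuller schema covering more dimensions and patterns.

def sequence(*times):
    return sum(times)

def parallel(*times):
    return max(times)

def loop(time, expected_iterations):
    return expected_iterations * time

# Composite: A ; (B || C) ; loop(D, 3 iterations) with per-service response times.
t_a, t_b, t_c, t_d = 120, 200, 150, 80          # milliseconds, made-up values
total = sequence(t_a, parallel(t_b, t_c), loop(t_d, 3))
print(total)  # 120 + max(200, 150) + 3 * 80 = 560 ms
```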

Proceedings ArticleDOI
17 May 2004
TL;DR: PANKOW (Pattern-based Annotation through Knowledge on the Web), a method which employs an unsupervised, pattern-based approach to categorize instances with regard to an ontology, is proposed.
Abstract: The success of the Semantic Web depends on the availability of ontologies as well as on the proliferation of web pages annotated with metadata conforming to these ontologies. Thus, a crucial question is where to acquire these metadata from. In this paper we propose PANKOW (Pattern-based Annotation through Knowledge on the Web), a method which employs an unsupervised, pattern-based approach to categorize instances with regard to an ontology. The approach is evaluated against the manual annotations of two human subjects. The approach is implemented in OntoMat, an annotation tool for the Semantic Web, and shows very promising results.
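
A toy rendering of the pattern-based idea: instantiate phrases such as "<concept>s such as <instance>" for each candidate concept, count their occurrences in some corpus, and assign the highest-scoring concept. PANKOW obtained its counts from a web search API; the `corpus_count` stub and all counts below are made up for illustration.

```python
# Toy version of pattern-based categorization: instantiate phrases for every
# candidate concept, count how often each phrase occurs in some corpus, and
# pick the concept with the highest aggregate count. PANKOW used counts from
# a web search API; `corpus_count` is a stub to replace with a real lookup.

PATTERNS = [
    "{concept}s such as {instance}",
    "{instance} is a {concept}",
    "{instance} and other {concept}s",
]

def corpus_count(phrase: str) -> int:
    # Stand-in for a web/corpus hit count; the numbers are invented.
    fake_counts = {"capitals such as Paris": 420, "Paris is a capital": 310,
                   "rivers such as Paris": 2}
    return fake_counts.get(phrase, 0)

def categorize(instance: str, candidate_concepts):
    def score(concept):
        return sum(corpus_count(p.format(concept=concept, instance=instance))
                   for p in PATTERNS)
    return max(candidate_concepts, key=score)

print(categorize("Paris", ["capital", "river", "hotel"]))  # expected: "capital"
```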

Journal ArticleDOI
01 Sep 2004
TL;DR: This paper surveys this relatively unexplored frontier of the deep Web, measuring characteristics pertinent to both exploring and integrating structured Web sources, to conclude with several implications which, while necessarily subjective, might help shape research directions and solutions.
Abstract: The Web has been rapidly "deepened" by the prevalence of databases online. With the potentially unlimited information hidden behind their query interfaces, this "deep Web" of searchable databases is clearly an important frontier for data access. This paper surveys this relatively unexplored frontier, measuring characteristics pertinent to both exploring and integrating structured Web sources. On one hand, our "macro" study surveys the deep Web at large, in April 2004, adopting the random IP-sampling approach with one million samples. (How large is the deep Web? How is it covered by current directory services?) On the other hand, our "micro" study surveys source-specific characteristics over 441 sources in eight representative domains, in December 2002. (How "hidden" are deep-Web sources? How do search engines cover their data? How complex and expressive are query forms?) We report our observations and publish the resulting datasets to the research community. We conclude with several implications (of our own) which, while necessarily subjective, might help shape research directions and solutions.
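
The "macro" estimate rests on simple proportion scaling: if some fraction of randomly sampled IP addresses host deep-Web sites, that fraction is scaled up to the whole IPv4 space. A back-of-the-envelope sketch with placeholder numbers (not the paper's findings):

```python
# Back-of-the-envelope sketch of the random IP-sampling estimate used in
# "macro" deep-Web surveys: sample IP addresses, probe which ones host sites
# with query interfaces, and scale the observed proportion to the entire IPv4
# address space. The hit count below is a placeholder, not the paper's data.

TOTAL_IPV4 = 2**32

def estimate_total(sample_size: int, hits: int, space: int = TOTAL_IPV4) -> float:
    """Scale the observed hit rate in the sample up to the full address space."""
    return hits / sample_size * space

sampled = 1_000_000          # the survey used one million random IP samples
deep_web_hits = 70           # placeholder: sampled IPs found to host deep-Web sites
print(f"{estimate_total(sampled, deep_web_hits):,.0f} estimated deep-Web sites")
```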

Patent
19 Oct 2004
TL;DR: A system, method, and computer program product is described that combines techniques from the fields of search, data mining, collaborative filtering, user ratings, and referral mappings into a system for intelligent web-based help for task- or transaction-oriented web-based systems.
Abstract: A system, method and computer program product that combines techniques in the fields of search, data mining, collaborative filtering, user ratings and referral mappings into a system for intelligent web-based help for task or transaction oriented web based systems. The system makes use of a service oriented architecture based on metadata and web services to locate, categorize and provide relevant context sensitive help, including found help not available when the web based system or application was first developed. As part of the inventive system, there is additionally provided a system for providing an integrated information taxonomy which combines automatically, semi-automatically, and manually generated taxonomies and applies them to help systems. This aspect of the invention is applicable to the fields of online self-help systems for web sites and software applications as well as to customer, supplier and employee help desks.

Journal ArticleDOI
TL;DR: It is shown that in order to allow people to profit from all this visual information, there is a need to develop tools that help them to locate the needed images with good precision in a reasonable time and that such tools are useful for many applications and purposes.
Abstract: With the explosive growth of the World Wide Web, the public is gaining access to massive amounts of information. However, locating needed and relevant information remains a difficult task, whether the information is textual or visual. Text search engines have existed for some years now and have achieved a certain degree of success. However, despite the large number of images available on the Web, image search engines are still rare. In this article, we show that in order to allow people to profit from all this visual information, there is a need to develop tools that help them to locate the needed images with good precision in a reasonable time, and that such tools are useful for many applications and purposes. The article surveys the main characteristics of the existing systems most often cited in the literature, such as ImageRover, WebSeek, Diogenes, and Atlas WISE. It then examines the various issues related to the design and implementation of a Web image search engine, such as data gathering and digestion, indexing, query specification, retrieval and similarity, Web coverage, and performance evaluation. A general discussion is given for each of these issues, with examples of the ways they are addressed by existing engines, and 130 related references are given. Some concluding remarks and directions for future research are also presented.

Book ChapterDOI
27 Sep 2004
TL;DR: This paper presents AO4BPEL, an aspect-oriented extension to BPEL4WS that captures web service composition in a modular way and the composition becomes more open for dynamic change.
Abstract: Web services have become a universal technology for integration of distributed and heterogeneous applications over the Internet. Many recent proposals such as the Business Process Modeling Language (BPML) and the Business Process Execution Language for Web Services (BPEL4WS) focus on combining existing web services into more sophisticated web services. However, these standards exhibit some limitations regarding modularity and flexibility. In this paper, we advocate an aspect-oriented approach to web service composition and present AO4BPEL, an aspect-oriented extension to BPEL4WS. With aspects, we capture web service composition in a modular way and the composition becomes more open for dynamic change.

Proceedings ArticleDOI
15 Nov 2004
TL;DR: A framework for the design and the verification of WSs using process algebras and their tools is presented and a two-way mapping between abstract specifications written using calculi and executable Web services written in BPEL4WS is defined.
Abstract: It is now widely accepted that formal methods are helpful for many issues raised in the Web service area. In this paper we present a framework for the design and verification of Web services using process algebras and their tools. We define a two-way mapping between abstract specifications written using these calculi and executable Web services written in BPEL4WS; the translation also covers compensation, event, and fault handlers. The following choices are available: design and verification in BPEL4WS using process algebra tools, or design and verification in a process algebra with automatic generation of the corresponding BPEL4WS code; the approaches can also be combined. Process algebras are not useful only for temporal logic verification: we highlight the use of simulation/bisimulation for verification, for the hierarchical refinement design method, for service redundancy analysis within a community, and for replacing one service with another in a composition.

Proceedings ArticleDOI
19 May 2004
TL;DR: This paper investigates build time and runtime issues related to decentralized orchestration of composite web services, using BPEL4WS to describe the composite web Services and BPWS4J as the underlying runtime environment to orchestrate them.
Abstract: Web services make information and software available programmatically via the Internet and may be used as building blocks for applications. A composite web service is one that is built using multiple component web services and is typically specified using a language such as BPEL4WS or WSIPL. Once its specification has been developed, the composite service may be orchestrated either in a centralized or in a decentralized fashion. Decentralized orchestration offers performance improvements in terms of increased throughput and scalability and lower response time. However, decentralized orchestration also brings additional complexity to the system in terms of error recovery and fault handling. Further, incorrect design of a decentralized system can lead to potential deadlock or non-optimal usage of system resources. This paper investigates build time and runtime issues related to decentralized orchestration of composite web services. We support our design decisions with performance results obtained on a decentralized setup using BPEL4WS to describe the composite web services and BPWS4J as the underlying runtime environment to orchestrate them.

Journal ArticleDOI
TL;DR: A Peer-to-Peer (P2P) indexing system and associated P2P storage are presented that support large-scale, decentralized, real-time search capabilities and guarantee that all existing data elements matching a query will be found with bounded costs.
Abstract: Web Services are emerging as a dominant paradigm for constructing and composing distributed business applications and enabling enterprise-wide interoperability. A critical factor to the overall utility of Web Services is a scalable, flexible and robust discovery mechanism. This paper presents a Peer-to-Peer (P2P) indexing system and associated P2P storage that supports large-scale, decentralized, real-time search capabilities. The presented system supports complex queries containing partial keywords and wildcards. Furthermore, it guarantees that all existing data elements matching a query will be found with bounded costs in terms of number of messages and number of nodes involved. The key innovation is a dimension reducing indexing scheme that effectively maps the multidimensional information space to physical peers. The design and an experimental evaluation of the system are presented.
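
One common way to realize a dimension-reducing index is a space-filling curve. The hypothetical sketch below interleaves the bits of two keyword-derived coordinates (Z-order) and partitions the resulting one-dimensional index space across peers; the paper's actual curve mapping and overlay routing differ, and all names here are illustrative.

```python
# Sketch of a dimension-reducing index: map a multidimensional key (here two
# keyword-derived coordinates) to a single index by bit interleaving (Z-order),
# then assign that index to a peer by partitioning the 1-D space. The actual
# system uses its own space-filling-curve mapping and overlay routing; this
# only illustrates the dimensionality-reduction step.

def coord(keyword: str, bits: int = 8) -> int:
    """Derive a bounded coordinate from a keyword (toy hash)."""
    return hash(keyword) & ((1 << bits) - 1)

def z_order(x: int, y: int, bits: int = 8) -> int:
    """Interleave the bits of x and y into one Z-order index."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
    return z

def responsible_peer(index: int, num_peers: int, bits: int = 8) -> int:
    """Partition the 1-D index space evenly across peers."""
    return index * num_peers // (1 << (2 * bits))

idx = z_order(coord("weather"), coord("forecast"))
print(idx, responsible_peer(idx, num_peers=16))
```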

Proceedings ArticleDOI
06 Jun 2004
TL;DR: It is argued that essential facets of Web services, and especially those useful for understanding their interaction, can be described using process-algebraic notations, and it is claimed that process algebras provide complete and satisfactory assistance throughout the process of Web service development.
Abstract: We argue that essential facets of Web services, and especially those useful for understanding their interaction, can be described using process-algebraic notations. Web service description and execution languages such as BPEL are essentially process description languages; they are based on primitives for behaviour description and message exchange which can also be found in more abstract process algebras. One legitimate question is therefore whether the formal approach and the sophisticated tools introduced for process algebras can be used to improve the effectiveness and reliability of Web service development. Our investigations suggest a positive answer, and we claim that process algebras provide complete and satisfactory assistance throughout the process of Web service development. We show through a case study that readily available tools based on process algebras are effective at verifying that Web services conform to their requirements and respect desired properties. We advocate their use both at the design stage and for reverse engineering. Looking further ahead, we discuss how they can help tackle choreography issues.

Journal ArticleDOI
TL;DR: This paper shows how to check whether two or more Web services can interoperate and, if not, whether adaptors that mediate between them can be generated automatically, enabling communication between (a priori) incompatible Web services.

Journal ArticleDOI
01 Jul 2004
TL;DR: Ontologies are proposed for modeling the high-level security requirements and capabilities of Web services and clients; this modeling helps match a client's request with appropriate services based on security criteria as well as functional descriptions.
Abstract: Web services will soon handle users' private information. They'll need to provide privacy guarantees to prevent this delicate information from ending up in the wrong hands. More generally, Web services will need to reason about their users' policies, which specify who can access private information and under what conditions. These requirements are even more stringent for semantic Web services, which exploit the semantic Web to automate their discovery and interaction, because they must autonomously decide what information to exchange and how. In our previous work, we proposed ontologies for modeling the high-level security requirements and capabilities of Web services and clients [1]. This modeling helps to match a client's request with appropriate services, based on security criteria as well as functional descriptions.

Journal ArticleDOI
TL;DR: Spinning the Semantic Web is based on papers presented at a seminar in Germany in 2000, and sketches the various elements of the semantic Web, the issues in realising it, as well as some visions of the future.
Abstract: From its quiet newborn days in the early 1990s, the World Wide Web has grown exponentially over the last decade or so. From the original goal of sharing research resources, the Web today portrays a virtual world spanning research, entertainment, and e-commerce. This growth has necessitated substantial changes in the Web model. From the purely syntactic and relatively static framework of HTML, we have moved through DHTML and XML, incorporating dynamicity and extensibility, and are now en route to semantic frameworks, starting with RDF. These allow Web documents to be comprehensible to machines (and not just to humans), allowing software agents to access and process such information on the Web. This leads us to the semantic Web, and thus to a generation of Web applications based on Web services, adaptive content delivery, etc. Spinning the Semantic Web is based on papers presented at a seminar in Germany in 2000, and sketches the various elements of the semantic Web, the issues in realising it, as well as some visions of the future. The stimulating foreword to the book by Tim Berners-Lee, recently knighted and widely regarded as the father of the Web, portrays his vision of the semantic Web. The chapters explore specific issues such as ontologies, schema languages, annotations, and applications. The chapters are largely unorganised and presented without any cross-linking, and most chapters use a fair amount of domain jargon. The book will be of value to those seriously interested in the field.

Proceedings ArticleDOI
D. Skogan, R. Groenmo, I. Solheim
20 Sep 2004
TL;DR: This work proposes a method that uses UML Activity models to design Web service compositions, and OMG's Model Driven Architecture (MDA) to generate executable specifications in different composition languages.
Abstract: As the number of available Web services is steadily increasing, there is a growing interest in reusing basic Web services in new, composite Web services. Several organizations have proposed composition languages (BPML, BPMN, BPEL4WS, BPSS, WSCI), but no winner has been declared so far. This work proposes a method that uses UML Activity models to design Web service compositions, and OMG's Model Driven Architecture (MDA) to generate executable specifications in different composition languages. The method utilizes standard UML constructs with a minimal set of extensions for Web services. An important step in the method is the transformation of WSDL descriptions into UML; this information is used to complete the composition models. Another key aspect of the method is its independence of the Web service composition language. The user can thus select his preferred composition language - and execution engine - for realizing the composite Web service. Currently, the method has been implemented to support two executable composition languages, BPEL4WS and WorkSCo, with corresponding execution engines. WorkSCo is a Web-service-enabled workflow composition language. The method is illustrated with an example from a crisis management scenario.

Journal ArticleDOI
TL;DR: This paper shows how the ecological approach promises not only to allow information about learners' actual interactions with learning objects to be naturally captured, but also to allow it to be used in a multitude of ways to support learners and teachers in achieving their goals.
Abstract: The semantic web movement has grown around the need to add semantics to the web in order to make it more usable by people and by information systems. In this paper I argue that even more important than semantics is pragmatics; that is, to really enhance web usability it is critical to capture and react to aspects of the end-use context. Most centrally, to make the web truly responsive to human needs, we need to understand the "users" of the web and their purposes for using it. In this paper I elaborate this argument in the context of e-learning systems. I propose an approach to the design of e-learning systems that I call the ecological approach. Moving from the open web to repositories of learning objects, I show how the ecological approach promises not only to allow information about learners' actual interactions with learning objects to be naturally captured, but also to allow it to be used in a multitude of ways to support learners and teachers in achieving their goals. In a phrase, the approach involves attaching models of learners to the learning objects they interact with, and then mining these models for patterns that are useful for various purposes. The ecological approach turns out to be highly suited to e-learning applications. It also has interesting implications for e-learning research, and perhaps even for research directions for semantic web research.

Book ChapterDOI
31 Aug 2004
TL;DR: A novel schema model is proposed that distinguishes the interface and the result schema of a Web database in a specific domain and addresses two significant Web database schema-matching problems: intra-site and inter-site.
Abstract: In a Web database that dynamically provides information in response to user queries, two distinct schemas, interface schema (the schema users can query) and result schema (the schema users can browse), are presented to users. Each partially reflects the actual schema of the Web database. Most previous work only studied the problem of schema matching across query interfaces of Web databases. In this paper, we propose a novel schema model that distinguishes the interface and the result schema of a Web database in a specific domain. In this model, we address two significant Web database schema-matching problems: intra-site and inter-site. The first problem is crucial in automatically extracting data from Web databases, while the second problem plays a significant role in meta-retrieving and integrating data from different Web databases. We also investigate a unified solution to the two problems based on query probing and instance-based schema matching techniques. Using the model, a cross validation technique is also proposed to improve the accuracy of the schema matching. Our experiments on real Web databases demonstrate that the two problems can be solved simultaneously with high precision and recall.

Journal ArticleDOI
TL;DR: A new approach to testing Web services based on data perturbation, currently restricted to peer-to-peer interactions, is presented, along with preliminary empirical evidence of its usefulness.
Abstract: Web services have the potential to dramatically reduce the complexities and costs of software integration projects. The most obvious and perhaps most significant difference between Web services and traditional applications is that Web services use a common communication infrastructure, XML and SOAP, to communicate through the Internet. The method of communication introduces complexities to the problems of verifying and validating Web services that do not exist in traditional software. This paper presents a new approach to testing Web services based on data perturbation. Existing XML messages are modified based on rules defined on the message grammars, and then used as tests. Data perturbation uses two methods to test Web services: data value perturbation and interaction perturbation. Data value perturbation modifies values according to the data type. Interaction perturbation classifies the communication messages into two categories: RPC communication and data communication. At present, this method is restricted to peer-to-peer interactions. The paper presents preliminary empirical evidence of its usefulness.
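
To illustrate the data value perturbation method, the sketch below parses an XML message and emits variants with boundary and extreme values substituted for each numeric element. The paper defines its perturbation rules on message grammars (XML Schema), so this is only a simplified, hypothetical stand-in.

```python
# Sketch of data value perturbation: take an existing XML message and, for
# each numeric element, emit perturbed copies (boundary and extreme values)
# that can be sent to the service under test. The paper defines perturbation
# rules on the message grammar; this toy version just detects numeric text.

import copy
import xml.etree.ElementTree as ET

MESSAGE = "<order><quantity>3</quantity><price>19.99</price></order>"

def numeric_perturbations(value):
    """Boundary and extreme variants of a numeric value."""
    return [0, -1, value + 1, value * 10**6]

def perturb(xml_text: str):
    root = ET.fromstring(xml_text)
    for elem in root.iter():
        try:
            value = float(elem.text)
        except (TypeError, ValueError):
            continue                      # skip non-numeric elements
        for new_value in numeric_perturbations(value):
            variant = copy.deepcopy(root)
            variant.find(elem.tag).text = str(new_value)
            yield ET.tostring(variant, encoding="unicode")

for test_message in perturb(MESSAGE):
    print(test_message)
```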

Book ChapterDOI
TL;DR: The state of the art of current enabling technologies for Semantic Web Services is surveyed, and the infrastructure of Semantic Web Services is characterized along three orthogonal dimensions: activities, architecture, and service ontology.
Abstract: The next Web generation promises to deliver Semantic Web Services (SWS); services that are self-described and amenable to automated discovery, composition and invocation. A prerequisite to this, however, is the emergence and evolution of the Semantic Web, which provides the infrastructure for the semantic interoperability of Web Services. Web Services will be augmented with rich formal descriptions of their capabilities, such that they can be utilized by applications or other services without human assistance or highly constrained agreements on interfaces or protocols. Thus, Semantic Web Services have the potential to change the way knowledge and business services are consumed and provided on the Web. In this paper, we survey the state of the art of current enabling technologies for Semantic Web Services. In addition, we characterize the infrastructure of Semantic Web Services along three orthogonal dimensions: activities, architecture and service ontology. Further, we examine and contrast three current approaches to SWS according to the proposed dimensions.