
Showing papers by "Carole Goble published in 2007"


Journal ArticleDOI
TL;DR: A recent National Science Foundation workshop brought together domain, computer, and social scientists to discuss requirements of future scientific applications and the challenges they present to current workflow technologies.
Abstract: Workflows have emerged as a paradigm for representing and managing complex distributed computations and are used to accelerate the pace of scientific progress. A recent National Science Foundation workshop brought together domain, computer, and social scientists to discuss requirements of future scientific applications and the challenges they present to current workflow technologies.

563 citations


Proceedings ArticleDOI
25 Jun 2007
TL;DR: It is argued that actively engaging with a scientist's needs, fears and reward incentives is crucial for success, and that a rich ecosystem of tools supporting the scientist's experimental lifecycle is needed.
Abstract: We present the Taverna workflow workbench and argue that scientific workflow environments need a rich ecosystem of tools that support the scientist's experimental lifecycle. Workflows are scientific objects in their own right, to be exchanged and reused. myExperiment is a new initiative to create a social networking environment for workflow workers. We present the motivation for myExperiment and sketch the proposed capabilities and challenges. We argue that actively engaging with a scientist's needs, fears and reward incentives is crucial for success.

152 citations


Proceedings ArticleDOI
10 Dec 2007
TL;DR: The ability to automatically compile a simple domain-specific process description into Taverna facilitates its adoption by e-scientists who are not expert workflow developers, and is demonstrated through a practical use case.
Abstract: This paper presents the formal syntax and the operational semantics of Taverna, a workflow management system with a large user base among the e-Science community. Such a formal foundation, which has so far been lacking, opens the way to translation between Taverna workflows and other process models. In particular, the ability to automatically compile a simple domain-specific process description into Taverna facilitates its adoption by e-scientists who are not expert workflow developers. We demonstrate this potential through a practical use case.
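The compilation step described above can be pictured with a minimal sketch, assuming an invented mini-language and data structure (this is not Taverna's actual workflow model or its Scufl format): a linear, domain-specific list of steps is turned into an explicit set of processors and data links.

```python
# Illustrative sketch only: compile a toy, linear process description into a
# generic dataflow structure. The DSL and output shape are invented here and
# are not Taverna's Scufl workflow model.
from dataclasses import dataclass, field

@dataclass
class Workflow:
    processors: list = field(default_factory=list)  # named processing steps
    links: list = field(default_factory=list)       # (source port, sink port)

def compile_pipeline(steps):
    """Each step's output feeds the next step's input."""
    wf = Workflow(processors=list(steps))
    for src, dst in zip(steps, steps[1:]):
        wf.links.append((f"{src}.output", f"{dst}.input"))
    return wf

if __name__ == "__main__":
    # A domain-level description: fetch a sequence, run BLAST, filter the hits.
    wf = compile_pipeline(["fetch_sequence", "run_blast", "filter_hits"])
    print(wf.processors)
    print(wf.links)
```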

110 citations


Journal ArticleDOI
TL;DR: The myGrid ontology is one component in a larger semantic discovery framework for the identification of the highly distributed and heterogeneous bioinformatics services in the public domain; a spectrum of expressivity and reasoning is adopted for different tasks in service annotation and discovery.
Abstract: myGrid supports in silico experiments in the life sciences, enabling the design and enactment of workflows as well as providing components to assist service discovery, data and metadata management. The myGrid ontology is one component in a larger semantic discovery framework for the identification of the highly distributed and heterogeneous bioinformatics services in the public domain. From an initial model of formal OWL-DL semantics throughout, we now adopt a spectrum of expressivity and reasoning for different tasks in service annotation and discovery. Here, we discuss the development and use of the myGrid ontology and our experiences in semantic service discovery.

96 citations


01 Jan 2007
TL;DR: Bioinformatics is a discipline that uses computational and mathematical techniques to store, manage, and analyze biological data in order to answer biological questions as mentioned in this paper, and is an in silico science discipline.
Abstract: Bioinformatics is a discipline that uses computational and mathematical techniques to store, manage, and analyze biological data in order to answer biological questions. Bioinformatics has over 850 databases [154] and numerous tools that work over those databases and local data to produce even more data themselves. In order to perform an analysis, a bioinformatician uses one or more of these resources to gather, filter, and transform data to answer a question. Thus, bioinformatics is an in silico science.

94 citations


Proceedings ArticleDOI
10 Dec 2007
TL;DR: It is argued that the tremendous scientific potential of workflows will be achieved through mechanisms for sharing and collaboration - empowering the scientist to spread their experimental protocols and to benefit from the protocols of others.
Abstract: Many scientific workflow systems have been developed and are serving to benefit science. In this paper we look beyond individual systems to consider the use of workflows within scientific practice, and we argue that the tremendous scientific potential of workflows will be achieved through mechanisms for sharing and collaboration - empowering the scientist to spread their experimental protocols and to benefit from the protocols of others. We discuss issues in workflow sharing, propose a set of design principles for collaborative e-Science software, and illustrate these principles in action through the design of the myExperiment Virtual Research Environment for collaboration and sharing of experiments.

86 citations


Journal ArticleDOI
TL;DR: An approach called Dante is proposed in which Web pages are annotated with semantic information to make their traversal properties explicit and, in tests with users, document objects transcoded with Dante have a tendency to be much easier for visually disabled users to interact with when traversing Web pages.
Abstract: The importance of the World Wide Web for information dissemination is indisputable. However, the dominance of visual design on the Web leaves visually disabled people at a disadvantage. Although assistive technologies, such as screen readers, usually provide basic access to information, the richness of the Web experience is still often lost. In particular, traversing the Web becomes a complicated task since the richness of visual objects presented to their sighted counterparts is neither appropriate nor accessible to visually disabled users. To address this problem, we have proposed an approach called Dante in which Web pages are annotated with semantic information to make their traversal properties explicit. Dante supports the use of different annotation techniques; as a proof of concept in this article, pages are annotated manually and then transcoded into a richer form. We first introduce Dante and then present a user evaluation which compares how visually disabled users perform certain travel-related tasks on original and transcoded versions of Web pages. We discuss the evaluation methodology in detail and present our findings, which provide useful insights into the transcoding process. Our evaluation shows that, in tests with users, document objects transcoded with Dante have a tendency to be much easier for visually disabled users to interact with when traversing Web pages.

73 citations


Book ChapterDOI
01 Jan 2007
TL;DR: Bioinformatics is a discipline that uses computational and mathematical techniques to store, manage, and analyze biological data in order to answer biological questions as mentioned in this paper, and is an in silico science discipline.
Abstract: Bioinformatics is a discipline that uses computational and mathematical techniques to store, manage, and analyze biological data in order to answer biological questions. Bioinformatics has over 850 databases [154] and numerous tools that work over those databases and local data to produce even more data themselves. In order to perform an analysis, a bioinformatician uses one or more of these resources to gather, filter, and transform data to answer a question. Thus, bioinformatics is an in silico science.

72 citations


Journal ArticleDOI
TL;DR: In reviewing provenance support, one of the important knowledge management issues in bioinformatics is reviewed, and it is suggested that in silico experimental protocols should themselves be a form of managing the knowledge of how to perform bioinformatics analyses.
Abstract: This article offers a briefing in one of the knowledge management issues of in silico experimentation in bioinformatics. Recording the provenance of an experiment (what was done; where, how and why, etc.) is an important aspect of scientific best practice that should be extended to in silico experimentation. We will do this in the context of eScience, which has been part of the move of bioinformatics towards an industrial setting. Despite the computational nature of bioinformatics, these analyses are scientific and thus necessitate their own versions of typical scientific rigour. Just as recording the who, what, why, when, where and how of an experiment is central to the scientific process in laboratory science, so it should be in in silico science. The generation and recording of these aspects, or provenance, of an experiment are necessary knowledge management goals if we are to introduce scientific rigour into routine bioinformatics. In silico experimental protocols should themselves be a form of managing the knowledge of how to perform bioinformatics analyses. Several systems now exist that offer support for the generation and collection of provenance information about how a particular in silico experiment was run, what results were generated, how they were generated, etc. In reviewing provenance support, we will review one of the important knowledge management issues in bioinformatics.
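As a rough illustration of the "who, what, why, when, where and how" record the article argues for, the sketch below logs one provenance entry per service invocation; the field and function names are invented for the example and do not reflect any particular system's provenance model.

```python
# Minimal provenance-capture sketch with invented field names; real systems
# use richer, standardised provenance models.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    who: str       # user or workflow that ran the step
    what: str      # service or tool invoked
    why: str       # stated purpose of the step
    where: str     # endpoint or host used
    how: dict      # parameters supplied
    when: str      # UTC timestamp of the invocation
    inputs: list
    outputs: list

def record_step(log, **fields):
    log.append(ProvenanceRecord(when=datetime.now(timezone.utc).isoformat(), **fields))

if __name__ == "__main__":
    log = []
    record_step(log, who="c.goble", what="blastp", why="find homologues",
                where="https://example.org/blast", how={"evalue": 1e-5},
                inputs=["P12345.fasta"], outputs=["hits.xml"])
    print(asdict(log[0]))
```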

45 citations


Book ChapterDOI
27 May 2007
TL;DR: This paper explains how MoCs are combined in Kepler and Ptolemy II, analyzes which combinations of MoCs are currently possible and useful, and demonstrates the approach by combining MoCs involving dataflow and finite state machines.
Abstract: A model of computation (MoC) is a formal abstraction of execution in a computer. There is a need for composing MoCs in e-science. Kepler, which is based on Ptolemy II, is a scientific workflow environment that allows for MoC composition. This paper explains how MoCs are combined in Kepler and Ptolemy II and analyzes which combinations of MoCs are currently possible and useful. It demonstrates the approach by combining MoCs involving dataflow and finite state machines. The resulting classification should be relevant to other workflow environments wishing to combine multiple MoCs.
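As a toy illustration of composing models of computation (invented names, not Kepler or Ptolemy II code), the sketch below lets a finite state machine choose which of two dataflow transformations fires for each incoming value.

```python
# Toy composition of two MoCs: an FSM controller selects between two dataflow
# behaviours. Purely illustrative; Kepler/Ptolemy II directors are far richer.

def normal_pipeline(x):
    return x * 2        # ordinary dataflow transformation

def recovery_pipeline(x):
    return 0            # alternative behaviour while in the "error" state

class Controller:
    """Finite state machine with states 'ok' and 'error'."""
    def __init__(self):
        self.state = "ok"

    def step(self, x):
        # State transitions are driven by the incoming data stream.
        if self.state == "ok" and x < 0:
            self.state = "error"
        elif self.state == "error" and x >= 0:
            self.state = "ok"
        # The current state decides which dataflow behaviour fires.
        pipeline = normal_pipeline if self.state == "ok" else recovery_pipeline
        return pipeline(x)

if __name__ == "__main__":
    c = Controller()
    print([c.step(x) for x in [1, 2, -3, 4]])  # -> [2, 4, 0, 8]
```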

45 citations


Journal ArticleDOI
TL;DR: Within the myGrid project, key resources that can be shared, including complete workflows, fragments of workflows and constituent services, are identified, and a unified descriptive model to support their later discovery is developed.
Abstract: Scientific workflows are becoming a valuable tool for scientists to capture and automate e-Science procedures. Their success brings the opportunity to publish, share, reuse and re-purpose this explicitly captured knowledge. Within the myGrid project, we have identified key resources that can be shared including complete workflows, fragments of workflows and constituent services. We have examined the alternative ways that these resources can be described by their authors (and subsequent users) and developed a unified descriptive model to support their later discovery. By basing this model on existing standards, we have been able to extend existing Web service and Semantic Web service infrastructure whilst still supporting the specific needs of the e-Scientist. The myGrid components enable a workflow lifecycle that extends beyond execution to include the discovery of previous relevant designs, the reuse of those designs and their subsequent publication. Experience with example groups of scientists indicates that this cycle is valuable. The growing number of workflows and services means more work is needed to support the user in effective ranking of search results and to support the re-purposing process. Copyright (c) 2006 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: A semantic Web-based approach is proposed to tackle the six challenges of the knowledge lifecycle - namely, those of acquiring, modeling, retrieving, reusing, publishing, and maintaining knowledge.
Abstract: Knowledge has become increasingly important to support intelligent process automation and collaborative problem solving in large-scale science over the Internet. This paper addresses distributed knowledge management, its approach and methodology, in the context of Grid applications. We start by analyzing the nature of grid computing and its requirements for knowledge support; then, we discuss knowledge characteristics and the challenges for knowledge management on the grid. A semantic Web-based approach is proposed to tackle the six challenges of the knowledge lifecycle - namely, those of acquiring, modeling, retrieving, reusing, publishing, and maintaining knowledge. To facilitate the application of the approach, a systematic methodology is conceived and designed to provide a general implementation guideline. We use a real-world Grid application, the GEODISE project, as a case study in which the core semantic Web technologies such as ontologies, semantic enrichment, and semantic reasoning are used for knowledge engineering and management. The case study has been fully implemented and deployed, through which the evaluation and validation of the approach and methodology have been performed.

Journal ArticleDOI
Paolo Missier, Pinar Alper, Oscar Corcho, Ian Dunlop, Carole Goble
TL;DR: This paper identifies general requirements for metadata management and describes a simple model and service that focuses on RDF metadata to address these requirements.
Abstract: Knowledge-intensive applications pose new challenges to metadata management, including distribution, access control, uniformity of access, and evolution in time. This paper identifies general requirements for metadata management and describes a simple model and service that focuses on RDF metadata to address these requirements.
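As a small, hedged example of the kind of RDF metadata such a service manages (written with the rdflib library; the URIs and property names are invented for illustration), annotations about a resource can be stored as triples and retrieved uniformly with SPARQL.

```python
# Sketch only: attach RDF metadata to a hypothetical workflow resource and
# query it back with SPARQL. URIs and properties are invented examples.
from rdflib import Graph, Namespace, URIRef, Literal

EX = Namespace("http://example.org/meta#")
workflow = URIRef("http://example.org/workflows/blast-annotation")

g = Graph()
g.add((workflow, EX.author, Literal("c.goble")))
g.add((workflow, EX.description, Literal("Annotates BLAST hits with GO terms")))

# Uniform access to the annotations via SPARQL.
query = """
PREFIX ex: <http://example.org/meta#>
SELECT ?p ?o WHERE { <http://example.org/workflows/blast-annotation> ?p ?o }
"""
for p, o in g.query(query):
    print(p, o)
```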

01 May 2007
TL;DR: The myExperiment Virtual Research Environment is being built to support scientists in sharing and collaborating around workflows and other objects, drawing upon the social software techniques characterised as Web 2.0.
Abstract: e-Science has given rise to new forms of digital object in the Virtual Research Environment which can usefully be shared amongst collaborating scientists to assist in generating new scientific results. In particular, descriptions of Scientific Workflows capture pieces of scientific knowledge which may transcend their immediate application and can be shared and reused in other experiments. We are building the myExperiment Virtual Research Environment to support scientists in sharing and collaboration with workflows and other objects. Rather than adopting traditional methods prevalent in the e-Science developer community, our approach draws upon the social software techniques characterised as Web 2.0. In this paper we report on the preliminary design work of myExperiment.

Book ChapterDOI
01 Jan 2007
TL;DR: Taverna as mentioned in this paper is a workbench for building, running and sharing workflows that link third party bioinformatics services, such as databases, analytic tools and applications, using Semantic Web metadata technologies.
Abstract: Life Science research has extended beyond in vivo and in vitro bench-bound science to incorporate in silico knowledge discovery, using resources that have been developed over time by different teams for different purposes and in different forms. The myGrid project has developed a set of software components and a workbench, Taverna, for building, running and sharing workflows that link third party bioinformatics services, such as databases, analytic tools and applications. Intelligently discovering prior services, workflow or data is aided by a Semantic Web of annotations, as is the building of the workflows themselves. Metadata associated with the workflow experiments, the provenance of the data outcomes and the record of the experimental process need to be flexible and extensible. Semantic Web metadata technologies would seem to be well-suited to building a Semantic Web of provenance. We have the potential to integrate and aggregate workflow outcomes, and reason over provenance logs to identify new experimental insights, and to build and export a Semantic Web of experiments that contributes to Knowledge Discovery for Taverna users and for the scientific community as a whole.

Proceedings ArticleDOI
10 Dec 2007
TL;DR: This paper implements a prototype plug-in for the Taverna workflow environment and shows how this can promote the creation of workflow fragments by automatically converting the users' interactions with data and Web services into a more conventional workflow specification.
Abstract: Workflow systems are steadily finding their way into the work practices of scientists. This is particularly true in the in silico science of bioinformatics, where biological data can be processed by Web services. In this paper we investigate the potential of evolving the users' interaction with workflow environments so that it more closely relates to the mode in which their day-to-day work is carried out. We present the Data Playground, an environment designed to encourage the uptake of workflow systems in bioinformatics through more intuitive interaction, by focusing the user on their data rather than on the processes. We implement a prototype plug-in for the Taverna workflow environment and show how this can promote the creation of workflow fragments by automatically converting the users' interactions with data and Web services into a more conventional workflow specification.
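A rough sketch of the underlying idea follows (all names are hypothetical and this is not the actual Taverna plug-in API): each time the user applies a service to some data, the interaction is recorded, and the accumulated interactions can then be emitted as a conventional, ordered workflow specification.

```python
# Illustrative sketch: turning ad hoc data/service interactions into a workflow
# description. Names are invented; the real Data Playground targets Taverna.
class Playground:
    def __init__(self):
        self.interactions = []   # (service name, input label, output label)

    def apply(self, name, fn, value, label):
        """The user applies a service to a value; record the interaction."""
        result = fn(value)
        self.interactions.append((name, label, f"{label}_{name}"))
        return result

    def to_workflow(self):
        """Replay the recorded interactions as an ordered workflow spec."""
        return [{"step": i + 1, "service": s, "input": src, "output": dst}
                for i, (s, src, dst) in enumerate(self.interactions)]

if __name__ == "__main__":
    pg = Playground()
    pg.apply("uppercase", str.upper, "atgcgt", label="sequence")
    for step in pg.to_workflow():
        print(step)
```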

Proceedings ArticleDOI
19 Sep 2007
TL;DR: This paper analyzes issues of uniform and secure access to distributed and independently maintained metadata repositories, as well as management of metadata lifecycle, and presents a service-oriented architecture for metadata management, called S-OGSA, that addresses them in a systematic way.
Abstract: Metadata annotations of grid resources can potentially be used for a number of purposes, including accurate resource allocation to jobs, discovery of services, and precise retrieval of information resources. In order to realize this potential on a large scale, various aspects of metadata must be managed. These include uniform and secure access to distributed and independently maintained metadata repositories, as well as management of metadata lifecycle. In this paper we analyze these issues and present a service-oriented architecture for metadata management, called S-OGSA, that addresses them in a systematic way.

Proceedings Article
01 Aug 2007
TL;DR: This paper has developed an information service that aggregates metadata available in hundreds of information services of the EGEE Grid infrastructure, and uses an information cache that works with an update-on-demand policy to deal with the main challenges addressed.
Abstract: In this paper we describe an ontology-based information integration approach that is suitable for highly dynamic distributed information sources, such as those available in Grid systems. The main challenges addressed are: 1) information changes frequently and information requests have to be answered quickly in order to provide up-to-date information; and 2) the most suitable information sources have to be selected from a set of different distributed ones that can provide the information needed. To deal with the first challenge we use an information cache that works with an update-on-demand policy. To deal with the second we add an information source selection step to the usual architecture used for ontology-based information integration. To illustrate our approach, we have developed an information service that aggregates metadata available in hundreds of information services of the EGEE Grid infrastructure.
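The update-on-demand cache can be sketched as follows (the class, freshness threshold and fetch function are invented for illustration, not the actual implementation): a cached entry is refreshed from the live source only when it is requested and has become stale.

```python
# Sketch of an update-on-demand metadata cache: entries are refreshed only when
# they are requested and judged stale. Names and the policy details are invented.
import time

class UpdateOnDemandCache:
    def __init__(self, fetch, max_age_seconds=30.0):
        self.fetch = fetch            # callable that queries the live source
        self.max_age = max_age_seconds
        self.entries = {}             # key -> (value, timestamp)

    def get(self, key):
        value, ts = self.entries.get(key, (None, 0.0))
        if time.time() - ts > self.max_age:      # missing or stale: refresh now
            value = self.fetch(key)
            self.entries[key] = (value, time.time())
        return value

if __name__ == "__main__":
    def fetch_from_grid_info_source(key):
        # Stand-in for a query against a live Grid information source.
        return {"site": key, "free_cpus": 42}

    cache = UpdateOnDemandCache(fetch_from_grid_info_source, max_age_seconds=5)
    print(cache.get("SITE-A"))   # first access triggers a fetch
    print(cache.get("SITE-A"))   # served from the cache while still fresh
```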

Book ChapterDOI
27 Jun 2007
TL;DR: Two complementary software frameworks are described that address this problem in a principled manner: myGrid/Taverna, a workflow design and enactment environment enabling coherent experiments to be built, and UTOPIA, a flexible visualisation system to aid in examining experimental results.
Abstract: In silico experiments have hitherto required ad hoc collections of scripts and programs to process and visualise biological data, consuming substantial amounts of time and effort to build, and leading to tools that are difficult to use, are architecturally fragile and scale poorly. With examples of the systems applied to real biological problems, we describe two complementary software frameworks that address this problem in a principled manner: myGrid/Taverna, a workflow design and enactment environment enabling coherent experiments to be built, and UTOPIA, a flexible visualisation system to aid in examining experimental results.

Journal ArticleDOI
TL;DR: On‐going efforts in designing and implementing a framework to facilitate multi‐level and multi‐factor adaptive authentication and authentication strength linked fine‐grained access control are reported.
Abstract: In a virtual organization environment, where services and data are provided and shared among organizations from different administrative domains and protected with dissimilar security policies and measures, there is a need for a flexible authentication framework that supports the use of various authentication methods and tokens. The authentication strengths derived from the authentication methods and tokens should be incorporated into an access-control decision-making process, so that more sensitive resources are available only to users authenticated with stronger methods. This paper reports our ongoing efforts in designing and implementing such a framework to facilitate multi-level and multi-factor adaptive authentication and authentication strength linked fine-grained access control. The proof-of-concept prototype is designed and implemented in the Shibboleth and PERMIS infrastructures, which specifies protocols to federate authentication and authorization information and provides a policy-driven, role-based, access-control decision-making capability. Copyright (c) 2006 John Wiley & Sons, Ltd.
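The link between authentication strength and access control can be illustrated with a small sketch; the method names, strength values and policy are invented examples, and the actual framework is built on Shibboleth and PERMIS rather than code like this.

```python
# Illustrative sketch: each authentication method yields a strength, and each
# resource demands a minimum strength. All values here are invented examples.
AUTH_STRENGTH = {
    "username_password": 1,
    "one_time_token": 2,
    "x509_certificate": 3,
}

RESOURCE_POLICY = {
    "public_dataset": 1,      # any authenticated user
    "clinical_records": 3,    # only strongly authenticated users
}

def access_allowed(auth_method, resource):
    """Grant access only if the session's authentication strength meets the
    resource's required minimum."""
    strength = AUTH_STRENGTH.get(auth_method, 0)
    required = RESOURCE_POLICY.get(resource, float("inf"))
    return strength >= required

if __name__ == "__main__":
    print(access_allowed("username_password", "public_dataset"))    # True
    print(access_allowed("username_password", "clinical_records"))  # False
    print(access_allowed("x509_certificate", "clinical_records"))   # True
```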

Journal ArticleDOI
TL;DR: The Third International Cross-Disciplinary Work-shop on Web Accessibility (W4A 2006) was targeted to bring together different communities working on similar problems to share ideas, discuss overlaps, and suggest cross-pollinated solutions.
Abstract: After the launch of the Mobile Web Initiative at the World Wide Web Conference 2005, awareness is emerging that, today, mobile Web access suffers from interoperability and usability problems that make the Web difficult to use. With the move to small screen size, low bandwidth, and different operating modalities, technology is in effect simulating the sensory and cognitive impairments experienced by disabled users within the wider population of mobile device users. The Third International Cross-Disciplinary Workshop on Web Accessibility (W4A 2006) was targeted to bring together different communities working on similar problems to share ideas, discuss overlaps, and make the fledgling mobile Web community aware of accessibility work that may have been overlooked. The main question asked was: "Is engineering, designing, and building for the Mobile Web just a rehash of the same old Web accessibility problems?" This led to addressing issues such as:
• Are the same solutions required for the Mobile Web and for accessibility, and can the two communities work together to solve these problems?
• What can the Mobile Web learn from the Accessible Web, and what resources created to support the Accessible Web can be used by designers in their support of the Mobile Web?
• To cross-pollinate, do we need to rethink the current view of accessibility?
Therefore, the workshop brought together a cross section of designers, engineers, and practitioners working on both the Accessible and Mobile Webs, to report on developments, discuss the issues, and suggest cross-pollinated solutions. The W4A 2006 was held on Monday the 22nd and Tuesday the 23rd of May 2006 as part of the Fifteenth International World Wide Web Conference (WWW2006), running over 2 days, with 73 attendees and 20 papers accepted for presentation. This special issue is an additional outcome of the W4A 2006 Workshop, and consists of the revised and extended versions of seven of the papers presented at the Workshop, selected on the basis of the review results. The articles presented here focus on major issues of the Accessible and Mobile Webs that advance the implementation of universal access. The first article in this special issue is entitled Capability Survey of User Agents with the UAAG 1.0 Test Suite and Its Impact on Web Accessibility, by Watanabe, T. and Umegaki, M. This article discusses the capabilities of a number of Japanese user agents with respect to the User Agent Accessibility Guidelines (UAAG 1.0), and highlights that in order to promote Web accessibility internationally, the focus should not only be on content accessibility but also on user agent accessibility. It is a common belief that "A picture is worth a thousand words". That might be true for someone who is sighted, but visually disabled users, or users who work in environments where visual representations are inappropriate, cannot access information contained in graphics unless alternative descriptions are included. The second article, entitled GraSSML: Accessible Smart Schematic Diagrams for All, by Fredj, Z.B. and Duce, D.A., investigates the accessibility of diagrams. This article presents an approach called Graphical Structure Semantic Markup


Journal IssueDOI
TL;DR: The myGrid project has identified key resources that can be shared, including complete workflows, fragments of workflows and constituent services, and developed a unified descriptive model to support their later discovery.
Abstract: Scientific workflows are becoming a valuable tool for scientists to capture and automate e-Science procedures. Their success brings the opportunity to publish, share, reuse and re-purpose this explicitly captured knowledge. Within the myGrid project, we have identified key resources that can be shared including complete workflows, fragments of workflows and constituent services. We have examined the alternative ways that these resources can be described by their authors (and subsequent users) and developed a unified descriptive model to support their later discovery. By basing this model on existing standards, we have been able to extend existing Web service and Semantic Web service infrastructure whilst still supporting the specific needs of the e-Scientist. The myGrid components enable a workflow life-cycle that extends beyond execution to include the discovery of previous relevant designs, the reuse of those designs and their subsequent publication. Experience with example groups of scientists indicates that this cycle is valuable. The growing number of workflows and services means more work is needed to support the user in effective ranking of search results and to support the re-purposing process. Copyright © 2006 John Wiley & Sons, Ltd.

01 Jan 2007
TL;DR: TAMBIS uses an ontology of biological terms to transform a declarative, source-independent query into an optimised, ordered sequence of source-dependent requests, which is then executed against the individual sources.
Abstract: Biologists increasingly need to ask complex questions over the large quantity of data and analysis tools that now exist. To do this, the individual sources need to be made to work together. The knowledge needed to accomplish this places a barrier between the bench biologist and the question he or she wishes to ask. The TAMBIS project (Transparent Access to Multiple Bioinformatics Information Sources) has sought to remove these barriers, thereby making the process of asking questions against multiple sources transparent. Central to the TAMBIS system is an ontology of biological terms. This allows TAMBIS to be used to formulate rich, complex queries over multiple sources. The ontology is constructed in a manner that ensures only biologically meaningful queries can be posed. TAMBIS then uses the ontology to transform the declarative, source-independent query into an optimised, ordered sequence of source-dependent requests, which is then executed against the individual sources.
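The translation step can be pictured with a toy sketch; the concept-to-source mapping and the plan format are invented for illustration, and TAMBIS's real planner, built on a description-logic ontology, is considerably richer.

```python
# Toy sketch of ontology-mediated query translation: a source-independent list
# of concepts is rewritten into source-dependent requests. Mappings are invented.
CONCEPT_SOURCES = {
    "Protein": [("SwissProt", 1)],
    "homologue_of": [("BLAST", 5)],
    "Enzyme": [("ENZYME", 2)],
}

def plan(query_concepts):
    """Pick a capable source for each concept, keeping the query's logical order
    so that each request can feed the next."""
    steps = []
    for concept in query_concepts:
        source, cost = min(CONCEPT_SOURCES[concept], key=lambda sc: sc[1])
        steps.append({"source": source, "request": concept, "cost": cost})
    return steps

if __name__ == "__main__":
    # "Find enzymes that are homologues of a given protein"
    for step in plan(["Protein", "homologue_of", "Enzyme"]):
        print(step)
```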

Proceedings ArticleDOI
10 Sep 2007
TL;DR: It is argued that the Semantic Web and Web 2.0 herald a return to hypertext's original visions and provide a means and an opportunity to bring full hypermedia capability to the Web.
Abstract: We argue that the Semantic Web and Web 2.0 herald a return to hypertext's original visions and provide a means and an opportunity to bring full hypermedia capability to the Web.

Proceedings ArticleDOI
19 Sep 2007
TL;DR: The trend in recent years in distributed computing and distributed information systems has been to open up: to expose interfaces and content outside the bounds of the originating application, resource or middleware; to simplify access to third party resources, data and capability.
Abstract: The trend in recent years in distributed computing and distributed information systems has been to open up: to expose interfaces and content outside the bounds of the originating application, resource or middleware; to simplify access to third party resources, data and capability; and to actively encourage and support creativity through the reuse and combination of already available components and content, be they ours or others'. The ubiquity of the Service Oriented Architecture (SOA) is testament to the driver, in both industry and scientific research, for more agile solutions, more rapid development, more flexibility and more opportunity for effective use of what has gone before. The rise of the web service and its adoption for Grids are examples. In the sciences the web service has become established as the delivery mechanism for publicly available data sets and tools. Designing reusable components and enabling content to be reusable is tough; finding it, and correctly understanding and using it, is even tougher, especially when the consumer is not the producer. Another concern is the gap between the infrastructure and resource provider and the application developer and user. Infrastructure has no value other than to enable applications. In the Grid we seem to have done a good job enabling Virtual Organisations of resource providers through virtualisation and provisioning.

Proceedings Article
01 Sep 2007
TL;DR: An information service that aggregates metadata available in hundreds of information sources of the EGEE Grid infrastructure uses an ontology-based information integration architecture (ActOn), which is suitable for the highly dynamic distributed information sources available in Grid systems.
Abstract: We describe an information service that aggregates metadata available in hundreds of information sources of the EGEE Grid infrastructure. It uses an ontology-based information integration architecture (ActOn), which is suitable for the highly dynamic distributed information sources available in Grid systems, where information changes frequently and where the information of distributed sources has to be aggregated in order to solve complex queries. These two challenges are addressed by a metadata cache that works with an update-on-demand policy and by an information source selection module that selects the most suitable source at a given point in time, respectively. We have evaluated the quality of this information service, and compared it with other similar services from the EGEE production testbed, with promising results.

Proceedings ArticleDOI
19 Sep 2007
TL;DR: An information service that aggregates metadata available in hundreds of information sources of the EGEE Grid infrastructure uses an ontology-based information integration architecture (ActOn), which is suitable for the highly dynamic distributed information sources available in Grid systems.
Abstract: We describe an information service that aggregates metadata available in hundreds of information sources of the EGEE Grid infrastructure. It uses an ontology-based information integration architecture (ActOn), which is suitable for the highly dynamic distributed information sources available in Grid systems, where information changes frequently and where the information of distributed sources has to be aggregated in order to solve complex queries. These two challenges are addressed by a metadata cache that works with an update-on-demand policy and by an information source selection module that selects the most suitable source at a given point in time, respectively. We have evaluated the quality of this information service, and compared it with other similar services from the EGEE production testbed, with promising results.
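To complement the cache sketched earlier, the source-selection step might look like this (the scoring criteria and figures are invented for illustration): among several sources that can answer a query, pick the one that currently looks freshest and most reliable.

```python
# Sketch of an information source selection step: score candidate sources by
# invented freshness and reliability figures and pick the best one.
def select_source(candidates):
    """candidates: dicts with 'name', 'staleness_s' and 'reliability' fields."""
    def score(src):
        # Prefer more reliable and fresher sources; the weights are arbitrary.
        return src["reliability"] - 0.01 * src["staleness_s"]
    return max(candidates, key=score)

if __name__ == "__main__":
    sources = [
        {"name": "site-info-A", "staleness_s": 120, "reliability": 0.95},
        {"name": "site-info-B", "staleness_s": 10,  "reliability": 0.90},
    ]
    print(select_source(sources)["name"])   # -> site-info-B
```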

Book ChapterDOI
01 Aug 2007
TL;DR: This paper proposes an evaluation framework for these information services and uses it to evaluate two deployed information services (BDII and RGMA) and one prototype under development (ActOn); the experiments and their results can be helpful for information service developers, who can use them as a benchmark suite, and for developers of information-intensive applications that make use of these services.
Abstract: The quality of the information provided by information services deployed in the EGEE production testbed differs from one system to another. Under the same conditions, the answers provided for the same query by different information services can be different. Developers of these services, and of other services that are based on them, must be aware of this fact and understand the capabilities and limitations of each information service in order to make appropriate decisions about which information service to use and how to use it. This paper proposes an evaluation framework for these information services and uses it to evaluate two deployed information services (BDII and RGMA) and one prototype that is under development (ActOn). We think that these experiments and their results can be helpful for information service developers, who can use them as a benchmark suite, and for developers of information-intensive applications that make use of these services.
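A minimal sketch of the kind of comparison such a framework implies (the metrics and service stubs below are invented; the paper's framework covers more dimensions): issue the same query to several information services and record the answer and the response time of each.

```python
# Illustrative benchmark harness: run one query against several information
# services and record answers and response times. Service stubs are invented.
import time

def benchmark(services, query):
    results = {}
    for name, ask in services.items():
        start = time.perf_counter()
        answer = ask(query)
        results[name] = {"answer": answer,
                         "seconds": round(time.perf_counter() - start, 4)}
    return results

if __name__ == "__main__":
    services = {
        "service-A": lambda q: {"free_cpus": 40},
        "service-B": lambda q: {"free_cpus": 38},
    }
    for name, result in benchmark(services, "free CPUs at site X").items():
        print(name, result)
```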

Book ChapterDOI
01 Jan 2007
TL;DR: This paper describes the dynamic aspects of S-OGSA by presenting the typical patterns of interaction among semantic provisioning services and semantically aware Grid services that are able to exploit those annotations in various ways.
Abstract: The Semantic Grid reference architecture, S-OGSA, includes semantic provisioning services that are able to produce semantic annotations of Grid resources, and semantically aware Grid services that are able to exploit those annotations in various ways. In this paper we describe the dynamic aspects of S-OGSA by presenting the typical patterns of interaction among these services. A use case for a Grid meta-scheduling service is used to illustrate how the patterns are applied in practice.