
Showing papers by "Carole Goble published in 2005"


Proceedings ArticleDOI
10 May 2005
TL;DR: The proposed extraction method is a helpful tool to support the process of building domain ontologies for web service descriptions and is conducted in the field of bioinformatics by learning an ontology from the documentation of the web services used in myGrid, a project that supports biology experiments on the Grid.
Abstract: The reasoning tasks that can be performed with semantic web service descriptions depend on the quality of the domain ontologies used to create these descriptions. However, building such domain ontologies is a time-consuming and difficult task. We describe an automatic extraction method that learns domain ontologies for web service descriptions from textual documentations attached to web services. We conducted our experiments in the field of bioinformatics by learning an ontology from the documentation of the web services used in myGrid, a project that supports biology experiments on the Grid. Based on the evaluation of the extracted ontology in the context of the project, we conclude that the proposed extraction method is a helpful tool to support the process of building domain ontologies for web service descriptions.

143 citations
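The general idea of learning a term hierarchy from service documentation can be pictured with a toy sketch. This is purely illustrative, not the paper's actual method: it counts whitespace-delimited unigrams and bigrams (a real system would use part-of-speech tagging and stop-word filtering) and places each compound term under its head noun.

```python
from collections import Counter

def extract_terms(docs):
    """Count unigram and bigram candidate terms in service documentation.
    Toy version: whitespace tokenisation, no POS tagging or stop-word list."""
    counts = Counter()
    for doc in docs:
        words = [w.strip(".,()").lower() for w in doc.split()]
        for n in (1, 2):
            for i in range(len(words) - n + 1):
                counts[" ".join(words[i:i + n])] += 1
    return counts

def head_hierarchy(terms):
    """Place each compound term under its head noun, so that, e.g.,
    'protein sequence' becomes a sub-concept of 'sequence'."""
    return [(t, t.split()[-1]) for t in terms
            if len(t.split()) > 1 and t.split()[-1] in terms]

# Hypothetical documentation strings, standing in for myGrid service docs.
docs = ["Performs a protein sequence alignment",
        "Fetches a nucleotide sequence from EMBL"]
terms = {t for t in extract_terms(docs) if len(t) > 3}
print(sorted(head_hierarchy(terms)))
```

Even this crude head-noun heuristic recovers an is-a edge such as ('protein sequence', 'sequence'), which hints at why sublanguage text is amenable to automatic analysis.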


Book ChapterDOI
29 May 2005
TL;DR: This paper describes the requirements from the bioinformatics domain which demand technically simpler descriptions, involving the user community at all levels, and describes the data model and light-weight semantic discovery architecture.
Abstract: Semantic Web Services offer the possibility of highly flexible web service architectures, where new services can be quickly discovered, orchestrated and composed into workflows. Most existing work has, however, focused on complex service descriptions for automated composition. In this paper, we describe the requirements from the bioinformatics domain which demand technically simpler descriptions, involving the user community at all levels. We describe our data model and light-weight semantic discovery architecture. We explain how this fits into the larger architecture of the myGrid project, which overall enables interoperability and composition across disparate, autonomous, third-party services. Our contention is that such light-weight service discovery provides a good fit for the user requirements of bioinformatics and possibly other domains.

129 citations


Journal ArticleDOI
TL;DR: This paper developed a framework for (semi-)automatic ontology learning from textual sources attached to Web services that exploits the fact that these sources are expressed in a specific sublanguage, making them amenable to automatic analysis.

124 citations


Proceedings ArticleDOI
10 May 2005
TL;DR: The Dante approach is combined with a web design method, WSDM, to fully automate the generation of the semantic annotation for visually impaired users: the semantic knowledge gathered during the design process is exploited, and the annotations are generated as a by-product of the design process, requiring no extra effort from the designer.
Abstract: Currently, the vast majority of web sites do not support accessibility for visually impaired users. Usually, these users have to rely on screen readers: applications that sequentially read the content of a web page in audio. Unfortunately, screen readers are not able to detect the meaning of the different page objects, and thus the implicit semantic knowledge conveyed in the presentation of the page is lost. One approach described in the literature to tackle this problem is the Dante approach, which allows semantic annotation of web pages to provide screen readers with extra (semantic) knowledge to better facilitate the audio presentation of a web page. Until now, such annotations were done manually, and failed for dynamic pages. In this paper, we combine the Dante approach with a web design method, WSDM, to fully automate the generation of the semantic annotation for visually impaired users. To do so, the semantic knowledge gathered during the design process is exploited, and the annotations are generated as a by-product of the design process, requiring no extra effort from the designer.

92 citations
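The by-product idea can be sketched in miniature. The data structure and the `data-semantic-role` attribute below are invented for illustration, not WSDM's or Dante's actual vocabulary: the point is only that if the design model already records each page object's role, annotations for a screen reader fall out of page generation for free.

```python
# Hypothetical design model: each page object already carries its role
# (in WSDM this knowledge is captured during the design process).
design_model = [
    {"id": "nav1", "role": "navigation", "content": "Home | Papers"},
    {"id": "main", "role": "main-content", "content": "Abstract text..."},
]

def render(model):
    """Generate page markup, emitting a semantic-role annotation on each
    object as a by-product; no extra effort is asked of the designer."""
    return "\n".join(
        f'<div id="{o["id"]}" data-semantic-role="{o["role"]}">{o["content"]}</div>'
        for o in model)

html = render(design_model)
print(html)
```

A screen reader (or a Dante-style transcoder) could then use the role annotations to reorder or summarise the audio presentation, rather than reading the page strictly top to bottom.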


Book ChapterDOI
06 Nov 2005
TL;DR: Based on a comparison of e-Science middleware projects, this paper identifies seven bottlenecks to scalable reuse and repurposing, and includes some thoughts on the applicability of using OWL for two bottlenecks: workflow fragment discovery and the ranking of fragments.
Abstract: To date on-line processes (i.e. workflows) built in e-Science have been the result of collaborative team efforts. As more of these workflows are built, scientists start sharing and reusing stand-alone compositions of services, or workflow fragments. They repurpose an existing workflow or workflow fragment by finding one that is close enough to be the basis of a new workflow for a different purpose, and making small changes to it. Such a “workflow by example” approach complements the popular view in the Semantic Web Services literature that on-line processes are constructed automatically from scratch, and could help bootstrap the Web of Science. Based on a comparison of e-Science middleware projects, this paper identifies seven bottlenecks to scalable reuse and repurposing. We include some thoughts on the applicability of using OWL for two bottlenecks: workflow fragment discovery and the ranking of fragments.

73 citations


Journal ArticleDOI
01 Sep 2005
TL;DR: Business-oriented workflows have been studied since the 1970s under various names (office automation, workflow management, business process management) and by different communities, including the database community.
Abstract: Business-oriented workflows have been studied since the 1970s under various names (office automation, workflow management, business process management) and by different communities, including the database community. Much basic and applied research has been conducted over the years, e.g. theoretical studies of workflow languages and models (based on Petri-nets or process calculi), their properties, transactional behavior, etc.

57 citations


Journal ArticleDOI
23 Mar 2005
TL;DR: A semantics‐based approach to problem solving, which exploits the rich semantic information of grid resource descriptions for resource discovery, instantiation, and composition, is presented.
Abstract: In this paper we propose a distributed knowledge management framework for semantics and knowledge creation, population and reuse on the Grid. Its objective is to evolve the Grid towards the Semantic Grid with the ultimate purpose of facilitating problem solving in e-Science. The framework uses ontology as the conceptual backbone and adopts the service-oriented computing paradigm for information-level and knowledge-level computation. We further present a semantics-based approach to problem solving, which exploits the rich semantic information of grid resource descriptions for resource discovery, instantiation and composition. The framework and approach have been applied to a UK e-Science project - Grid Enabled Engineering Design Search and Optimisation in Engineering (GEODISE). An ontology-enabled Problem Solving Environment (PSE) has been developed in GEODISE to leverage the semantic content of GEODISE resources and the Semantic Grid infrastructure for engineering design. Implementation and initial experimental results are reported.

26 citations


Journal ArticleDOI
TL;DR: This paper proposes an efficient security protocol for certified e-goods delivery with the following features: strong fairness for the exchange of e- goods and proof of reception, which achieves these features with less computational and communicational overheads than related protocols.
Abstract: Delivering electronic goods over the Internet is one of the e-commerce applications that will proliferate in the coming years. Certified e-goods delivery is a process where valuable e-goods are exchanged for an acknowledgement of their reception. This paper proposes an efficient security protocol for certified e-goods delivery with the following features: (1) it ensures strong fairness for the exchange of e-goods and proof of reception, (2) it ensures non-repudiation of origin and non-repudiation of receipt for the delivered e-goods, (3) it allows the receiver of e-goods to verify, during the exchange process, that the e-goods to be received are the ones he is signing the receipt for, (4) it uses an off-line and transparent semi-trusted third party (STTP) only in cases when disputes arise, (5) it provides confidentiality protection for the exchanged items from the STTP, and (6) it achieves these features with lower computational and communication overheads than related protocols.

23 citations
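Feature (3), where the receiver checks that the delivered goods match what the receipt was signed for, can be pictured with a deliberately simplified hash-and-receipt flow. This is a toy stand-in, not the paper's protocol: a real scheme uses public-key (e.g. DSA) signatures, verifiable encryption, and an off-line STTP for dispute resolution, none of which are modelled here.

```python
import hashlib
import hmac
import os

def digest(goods: bytes) -> str:
    """Commit to the e-goods by their SHA-256 digest."""
    return hashlib.sha256(goods).hexdigest()

def sign_receipt(receiver_key: bytes, goods_digest: str) -> str:
    """Toy stand-in for a DSA signature: an HMAC over the goods digest,
    binding the receipt to the exact content being delivered."""
    return hmac.new(receiver_key, goods_digest.encode(), "sha256").hexdigest()

goods = b"licensed e-book contents"     # hypothetical e-goods
d = digest(goods)

receiver_key = os.urandom(32)           # the receiver's (toy) signing key
receipt = sign_receipt(receiver_key, d)

# On delivery, the receiver verifies the goods match the digest signed for;
# the merchant (or STTP, in a dispute) verifies the receipt binds to them.
assert digest(goods) == d
assert hmac.compare_digest(receipt, sign_receipt(receiver_key, d))
print("exchange consistent")
```

Binding the receipt to a digest of the goods is what makes it impossible to sign a receipt for one item and be handed another.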


Proceedings ArticleDOI
05 Dec 2005
TL;DR: The functional requirements that have become apparent over the last year of working with domain scientists are outlined, along with the solutions implemented in both the Taverna workbench and the Freefluo enactment engine to address concerns relating to workflow construction and enactment.
Abstract: The Taverna e-Science Workbench is a central component of myGrid, a loosely coupled suite of middleware services designed to support in silico experiments in biology. Taverna enables the construction and enactment of complex workflows over resources on local and remote machines, allowing the automation of otherwise labour-intensive multi-step bioinformatics tasks. As the Taverna user community has grown, so has the demand for new features and additions. This paper outlines the functional requirements that have become apparent over the last year of working with domain scientists, along with the solutions implemented in both the Taverna workbench and the Freefluo enactment engine to address concerns relating to workflow construction and enactment, respectively.

20 citations


Journal Article
TL;DR: A novel algorithm is proposed to allow traceable/linkable identity privacy in dealing with de-identified medical records to achieve the desired security and privacy in the HealthGrid context.
Abstract: The issues of confidentiality and privacy have become increasingly important as Grid technology is being adopted in public sectors such as healthcare. This paper discusses the importance of protecting the confidentiality and privacy of patient health/medical records, and the challenges exhibited in enforcing this protection in a Grid environment. It proposes a novel algorithm to allow traceable/linkable identity privacy in dealing with de-identified medical records. Using the algorithm, de-identified health records associated with the same patient but generated by different healthcare providers are given different pseudonyms. However, these pseudonymised records of the same patient can still be linked by a trusted entity such as the NHS trust or HealthGrid manager. The paper also recommends a security architecture that integrates the proposed algorithm with other data security measures needed to achieve the desired security and privacy in the HealthGrid context.

17 citations
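The traceable/linkable property can be sketched with a keyed-hash pseudonym scheme. This is an illustration of the general idea, not the paper's algorithm, and the keys and identifiers below are invented: each provider derives a provider-specific pseudonym, so records from different providers carry different pseudonyms, while a trusted entity holding the provider keys can recompute and link them.

```python
import hmac

def pseudonym(provider_key: bytes, patient_id: str) -> str:
    """Derive a provider-specific pseudonym from the patient identifier."""
    return hmac.new(provider_key, patient_id.encode(), "sha256").hexdigest()[:16]

keys = {"hospital_a": b"key-a", "clinic_b": b"key-b"}   # hypothetical keys
pid = "NHS-1234567"                                     # hypothetical identifier

p_a = pseudonym(keys["hospital_a"], pid)
p_b = pseudonym(keys["clinic_b"], pid)
assert p_a != p_b   # different pseudonyms: unlinkable to outsiders

def link(trusted_keys, candidate_id, records):
    """The trusted entity links records by recomputing each provider's
    pseudonym for the candidate patient and matching against the records."""
    return [r for r in records
            if any(pseudonym(k, candidate_id) == r for k in trusted_keys.values())]

print(link(keys, pid, [p_a, p_b, "unrelated-record"]))
```

Only the holder of the provider keys (e.g. an NHS trust acting as the trusted entity) can perform the linkage; anyone else sees unrelated pseudonyms.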


Proceedings ArticleDOI
05 Dec 2005
TL;DR: Given the past experiences with scientists, grid developers and semantic Web researchers, what are the prospects, and pitfalls, of putting semantics into e-Science applications and grid infrastructure?
Abstract: What is the semantic grid? How can e-Science benefit from the technologies of the semantic grid? Can we build a semantic Web for e-Science? Would that differ from a semantic grid? Given our past experiences with scientists, grid developers and semantic Web researchers, what are the prospects, and pitfalls, of putting semantics into e-Science applications and grid infrastructure?

Book ChapterDOI
06 Nov 2005
TL;DR: The importance of e-Science has been highlighted in the UK by an investment of over £240 million over the past five years to specifically address the research and development issues that have to be tackled to develop a sustainable and effective e-Science e-Infrastructure.
Abstract: We are familiar with the idea of e-Commerce – the electronic trading between consumers and suppliers. In recent years there has been a commensurate paradigm shift in the way that science is conducted. e-Science is science performed through distributed global collaborations between scientists and their resources, enabled by electronic means, in order to solve scientific problems. No one scientific laboratory has the resources or tools, the raw data or derived understanding, or the expertise to harness the knowledge available to a scientific community. Real progress depends on pooling know-how and results. It depends on collaboration and making connections between ideas, people, and data. It depends on finding and interpreting results and knowledge generated by scientific colleagues you do not know and who do not know you, to be analysed in ways they did not anticipate, to generate new hypotheses to be pooled in their turn. The importance of e-Science has been highlighted in the UK, for example, by an investment of over £240 million over the past five years to specifically address the research and development issues that have to be tackled to develop a sustainable and effective e-Science e-Infrastructure.

Journal ArticleDOI
TL;DR: Early experimental applications from the Life Science community indicate that the Semantic Web and the Knowledge Grid approaches have promise and suggest that this community is an appropriate nursery for grounding, developing and hardening the current, rather immature, machinery needed to deliver on the technological visions.

Journal ArticleDOI
TL;DR: By adding small amounts of information to existing Web pages (semi-) automatically, this paper can show significant improvements in the amount of information profoundly blind users are able to access in a given time; in effect ‘levelling the playing field’ with sighted users.
Abstract: Use the word 'accessibility' in the presence of any HCI specialist and they will immediately think of creating open interfaces that can be accessed both visually and audibly. Further, mention 'accessibility' to any forward-thinking group of Web developers and they will start to quote the Web Accessibility Initiative (WAI) guidelines and extol the virtues of accessibility-checking tools like 'Bobby'. Either way, both groups will focus on the obviously important area of 'sensory translation' but will miss one fundamental truth: profoundly blind people interact with their environment in a markedly different way from that of sighted individuals. We have realized that the case of movement (mobility) around systems and information space (the hypertext/Web docuverse) is central to good accessibility, and that to achieve this we require additional mobility semantics within systems and information as a way of enhancing the user experience. By adding small amounts of information to existing Web pages (semi-)automatically, we can show significant improvements in the amount of information profoundly blind users are able to access in a given time: in effect 'levelling the playing field' with sighted users. This paper discusses our work and demonstrates how we can make such a claim.

01 Sep 2005
TL;DR: Some of the earlier results in prototyping specific examples of proteomics data integration are described, and lessons are drawn about the kinds of domain-specific components that will be required.
Abstract: The aim of the ISPIDER project is to create a proteomics grid; that is, a technical platform that supports bioinformaticians in constructing, executing and evaluating in silico analyses of proteomics data. It will be constructed using a combination of generic e-science and Grid technologies, plus proteomics specific components and clients that embody knowledge of the proteomics domain and the available resources. In this paper, we describe some of our earlier results in prototyping specific examples of proteomics data integration, and draw from it lessons about the kinds of domain-specific components that will be required.

Proceedings Article
01 Jan 2005
TL;DR: The idea of repurposing is that a user looks for workflows that are close enough to the user's requirements so that these workflows can be fit to a new purpose, and the lifecycle of a repurposed workflow is shown.
Abstract: Reuse and repurposing in e-Science. Workflow techniques are an important part of in silico experimentation, potentially allowing a scientist to describe and enact their experimental processes in a structured, repeatable and verifiable way. The myGrid (www.mygrid.org.uk) workbench, a set of components to build workflows in bioinformatics, currently allows access to a thousand globally distributed services and a hundred workflows, some of which orchestrate up to fifty services. Figure 2 shows the example of a myGrid workflow which gathers information about genetic sequences in support of research on Williams-Beuren syndrome [10]. Much of the research geared towards the construction of on-line processes (i.e. workflows) is led by a vision of automatic composition of services based on extensive formalisation (see for example www.daml.org/services/owl-s/pub-archive.html). Such research can be complemented with techniques that exploit those cases where existing workflows and fragments of workflows can be reused, thereby benefiting from hard-won human experience in composing services. A workflow fragment is a piece of an experimental description that is a coherent sub-workflow that makes sense to a domain specialist. Each fragment forms a useful resource in its own right and is identified and annotated at publication time. We distinguish between reuse, where workflows and workflow fragments created by one user might be used as is, and repurposing, where they are used as a starting point by others. The idea of repurposing is that a user looks for workflows that are close enough to the user's requirements so that these workflows can be fit to a new purpose. In Figure 1, we show the lifecycle of a repurposed workflow. (1) Before embarking on a new design, the scientist consults a registry of existing workflows; search facilities based on an ontology and a database repository identify any existing workflows that are relevant to them. (2) Workflows or their fragments are potentially edited; services are parameterised or bound to end points but rarely altered. Other services, workflows or workflow fragments are sought, or new ones are created.
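Finding workflows "close enough" to repurpose can be pictured with a toy similarity ranking. The workflow names and annotation sets below are invented for illustration; real discovery in myGrid used richer ontology-based matching than plain set overlap.

```python
def jaccard(a: set, b: set) -> float:
    """Set-overlap similarity between two annotation sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical registry: workflows annotated with ontology terms.
registry = {
    "wbs_gene_annotation": {"sequence", "blast", "genbank", "annotation"},
    "protein_structure":   {"sequence", "pdb", "structure"},
}

def rank(required: set):
    """Rank registry workflows by how closely their annotations match
    the scientist's requirements; the top hit is the repurposing candidate."""
    return sorted(registry,
                  key=lambda w: jaccard(registry[w], required),
                  reverse=True)

print(rank({"sequence", "blast", "annotation"}))
```

Here the gene-annotation workflow outranks the protein-structure one because it shares three of the four required terms, so it would be the natural starting point for editing and re-parameterising.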


Proceedings ArticleDOI
13 Jun 2005
TL;DR: The ongoing efforts in designing and implementing a flexible authentication framework to facilitate multi-level and multi-factor authentication and authentication strength linked fine-grained access control in Shibboleth are reported.
Abstract: In a VO (virtual organization) environment where services are provided and shared by dissimilar organizations from different administrative domains and are protected with dissimilar security policies and measures, there is a need for a flexible authentication framework that supports the use of various authentication tokens. The authentication strengths derived from these tokens should be fed into an access control decision making process. This paper reports our ongoing efforts in designing and implementing such a framework to facilitate multi-level and multi-factor authentication and authentication strength linked fine-grained access control in Shibboleth. The proof-of-concept prototype using a Java smart card is reported.
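One way to picture authentication-strength-linked access control is below. The strength scores and the combination rule are invented for illustration and are not the paper's actual scheme: each token type contributes a strength, and a resource's policy demands a minimum combined strength before access is granted.

```python
# Hypothetical strength scores per authentication token type.
TOKEN_STRENGTH = {"password": 1, "x509_certificate": 2, "smart_card": 3}

def auth_strength(tokens):
    """Derive an overall strength: take the strongest presented factor,
    with a bonus when two or more distinct factors are shown (multi-factor)."""
    base = max(TOKEN_STRENGTH[t] for t in tokens)
    return base + (1 if len(set(tokens)) > 1 else 0)

def access_granted(tokens, required_strength):
    """Fine-grained decision: the derived strength is fed into access control."""
    return auth_strength(tokens) >= required_strength

print(access_granted(["password"], 3))                # password alone: too weak
print(access_granted(["password", "smart_card"], 4))  # multi-factor: strong enough
```

The point of the design is that the access decision consumes the derived strength, so resources protected by dissimilar policies in a VO can each state the strength they require rather than naming specific tokens.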

Book ChapterDOI
29 May 2005
TL;DR: The requirements for an annotation tool for developing ontologies are examined, and the design and implementation of the Pedro Ontology Service Framework is described, which seeks to fulfill these requirements.
Abstract: Semantic Web technologies offer the possibility of increased accuracy and completeness in search and retrieval operations. In recent years, curators of data resources have begun favouring the use of ontologies over the use of free text entries. Generally this has been done by marking up existing database records with “annotations” that contain ontology term references. Although there are a number of tools available for developing ontologies, there are few generic resources for enabling this annotation process. This paper examines the requirements for such an annotation tool, and describes the design and implementation of the Pedro Ontology Service Framework, which seeks to fulfill these requirements.

Book ChapterDOI
15 Jan 2005
TL;DR: TAMBIS frees a biologist from needing informatics knowledge to concentrate upon the biological question by capturing knowledge about molecular biology and bioinformatics tasks in an ontology.
Abstract: Transparent Access to Multiple Bioinformatics Information Sources (TAMBIS) addresses the perennial problem of heterogeneity and distribution of bioinformatics resources in performing bioinformatics analyses. Asking questions of these resources usually requires multiple resources to be used and data transferred between those resources. A biologist using these resources needs much knowledge of which resources to use, where they are to be found, in which order they should be used, and how to overcome the heterogeneity between those resources. TAMBIS seeks to make this knowledge burden transparent by capturing knowledge about molecular biology and bioinformatics tasks in an ontology. The TAMBIS ontology acts as a global schema over diverse resources and drives a query formulation interface offering a common language over those resources. High-level, conceptual, source-independent queries are rewritten to concrete query plans. As a result of its transparency, TAMBIS frees a biologist from needing informatics knowledge to concentrate upon the biological question. Keywords: transparent access; ontology; mediation; semantic heterogeneity; distribution

10 May 2005
TL;DR: This workshop is decidedly cross disciplinary in nature and brings together users, accessibility experts, graphic designers, and technologists from academia and industry to discuss how accessibility can be supported.
Abstract: Previous engineering approaches seem to have precluded the engineering of accessible systems. This is plainly unsatisfactory. Designers, authors, and technologists are at present playing 'catch-up' with a continually moving target in an attempt to retrofit systems. In fact, engineering accessible interfaces is as important as their functionality and should be an indivisible part of the development. We should be engineering accessibility as part of the development, not as an afterthought or because government restrictions and civil law require us to. These proceedings bring together a cross section of the web design and engineering communities. The papers included here report on developments, discuss the issues, and suggest cross-pollinated solutions. Conventional workshops on accessibility tend to be single-disciplinary in nature. However, we are concerned that this focus on a single participant group prevents the cross-pollination of ideas, needs, and technologies from other related but separate fields. As with the first, this second workshop is decidedly cross-disciplinary in nature and brings together users, accessibility experts, graphic designers, and technologists from academia and industry to discuss how accessibility can be supported. We also encourage the participation of users and other interested parties as an additional balance to the discussion. Our aim is to focus on accessibility by encouraging participation from many disciplines. Views often bridge academia, commerce, and industry and arguments encompass a range of beliefs across the design-accessibility spectrum.

Proceedings ArticleDOI
29 Mar 2005
TL;DR: A new method for verifiable and recoverable encryption of DSA signatures is presented, and this cryptographic primitive is applied in the design of a novel certified e-goods delivery (DSA-CEGD) protocol.
Abstract: We present a new method for verifiable and recoverable encryption of DSA signatures, and apply this cryptographic primitive in the design of a novel certified e-goods delivery (DSA-CEGD) protocol. The DSA-CEGD protocol has the following features: (1) it ensures strong fairness, (2) it ensures non-repudiation of origin and non-repudiation of receipt, (3) it allows the receiver of e-goods to verify, during the protocol execution, that the e-goods he is about to receive are the ones he is signing the receipt for, (4) it does not require the on-line involvement of a fully trusted third party (TTP), but rather an off-line and transparent semi-trusted third party (STTP), and (5) it provides confidentiality protection for the exchanged items from the STTP.

Journal ArticleDOI
TL;DR: In this paper, a framework for semi-automatic ontology learning from textual sources attached to Web services is presented, which exploits the fact that these sources are expressed in a specific sublanguage, making them amenable to automatic analysis.
Abstract: High-quality domain ontologies are essential for successful employment of semantic Web services. However, their acquisition is difficult and costly, thus hampering the development of this field. In this paper we report on the first stage of research that aims to develop (semi-)automatic ontology learning tools in the context of Web services that can support domain experts in the ontology building task. The goal of this first stage was to get a better understanding of the problem at hand and to determine which techniques might be feasible to use. To this end, we developed a framework for (semi-)automatic ontology learning from textual sources attached to Web services. The framework exploits the fact that these sources are expressed in a specific sublanguage, making them amenable to automatic analysis. We implement two methods in this framework, which differ in the complexity of the employed linguistic analysis. We evaluate the methods in two different domains, verifying the quality of the extracted ontologies against high-quality hand-built ontologies of these domains. Our evaluation led to a set of valuable conclusions on which further work can be based. First, it appears that our method, while tailored for the Web services context, might be applicable across different domains. Second, we concluded that deeper linguistic analysis is likely to lead to better results. Finally, the evaluation metrics indicate that good results can be achieved using only relatively simple, off-the-shelf techniques. Indeed, the novelty of our work is not in the natural language processing methods used but rather in the way they are put together in a generic framework specialized for the context of Web services.

Proceedings ArticleDOI
04 Apr 2005
TL;DR: Two variant protocols for certified e-mail delivery with DSA receipts are presented, based on a cryptographic primitive called Verifiable and Recoverable Encryption of a Signature (VRES), capable of achieving non-repudiation and strong fairness security properties.
Abstract: In this paper we present two variant protocols DSA-CEMD1 and DSA-CEMD2 for certified e-mail delivery with DSA receipts. The protocols are based on a cryptographic primitive called Verifiable and Recoverable Encryption of a Signature (VRES) and are capable of achieving non-repudiation and strong fairness security properties. The novel design of the VRES primitive allows efficiency improvements in comparison with the related certified e-mail delivery protocols based on similar primitives. The protocols employ the services of an off-line and invisible trusted third party (TTP) only in case of dispute. In DSA-CEMD1 the content of the e-mail message is not revealed to the TTP during possible recovery and this is achieved at the cost of some additional cryptographic operations. In DSA-CEMD2 the confidentiality of the message is not protected from the TTP, but the protocol is slightly more efficient.

Journal ArticleDOI
TL;DR: The workshop brought together a cross section of the Web design and engineering communities; to report on developments, discuss the issues, and suggest cross-pollinated solutions on accessibility by encouraging participation from many disciplines.
Abstract: Previous engineering approaches seem to have precluded the engineering of accessible systems. This is plainly unsatisfactory. Designers, authors, and technologists are at present playing 'catch-up' with a continually moving target in an attempt to retrofit systems. In fact, engineering accessible interfaces is as important as their functionality and should be an indivisible part of the development. We should be engineering accessibility as part of the development, not as an afterthought or because government restrictions and civil law require us to. Our workshop brought together a cross section of the Web design and engineering communities to report on developments, discuss the issues, and suggest cross-pollinated solutions. Conventional workshops on accessibility tended to be single-disciplinary in nature. However, we were concerned that a single-disciplinary approach prevents the cross-pollination of ideas, needs, and technologies from other related but separate fields. The workshop was therefore decidedly cross-disciplinary in nature and brought together users, accessibility experts, graphic designers, and technologists from academia and industry to discuss how accessibility could be supported. We also encouraged the participation of users and other interested parties as an additional balance to the discussion. Views often bridged academia, commerce, and industry and arguments encompassed a range of beliefs across the design-accessibility spectrum. Our aim was to focus on accessibility by encouraging participation from many disciplines, represented in the following discussion and paper abstracts.

Book ChapterDOI
12 Dec 2005
TL;DR: The ODESGS Framework is presented, which is the result of having applied the extensions identified to the aforementioned Semantic Web Services description framework.
Abstract: The convergence of the Semantic Web and Grid technologies has resulted in the Semantic Grid. The great effort devoted by the Semantic Web community to achieving the semantic markup of Web services (what we call Semantic Web Services) has yielded many markup technologies and initiatives, from which Semantic Grid technology should benefit as, in recent years, it has become Web service-oriented. Keeping this fact in mind, our first premise in this work is to reuse the ODESWS Framework for the knowledge-based markup of Grid services. Initially ODESWS was developed to enable users to annotate, design, discover and compose Semantic Web Services at the Knowledge Level. But at present, if we want to reuse it for annotating Grid services, we should carry out a detailed study of the characteristics of Web services and Grid services; thus, we will learn where they differ and why. Only when this analysis is performed will we know how to extend our theoretical framework for describing Grid services. Finally, we present the ODESGS Framework, which is the result of applying the extensions identified to the aforementioned Semantic Web Services description framework.


Proceedings Article
13 Sep 2005
TL;DR: A Reference Semantic Grid Architecture is needed that extends the Open Grid Services Architecture by explicitly defining the mechanisms that will allow for the explicit use of semantics and the associated knowledge to support a spectrum of service capabilities.
Abstract: In the last few years, several projects have embraced this vision and there are already successful pioneering applications that combine the strengths of the Grid and of semantic technologies [2]. However, the Semantic Grid currently lacks a reference architecture, or a systematic approach for designing Semantic Grid components or applications. We need a Reference Semantic Grid Architecture that extends the Open Grid Services Architecture by explicitly defining the mechanisms that will allow for the explicit use of semantics and the associated knowledge to support a spectrum of service capabilities. An architecture would have (at least) three major components: (a) a definition of the semantic entities that are passed amongst the services, as an extension of the model of a Virtual Organisation. Grid entities are anything that carries an identity on the Grid, including resources and services [3]. These will acquire and discard associations with knowledge entities. We identify common forms of knowledge entities and discuss the life cycle and consequences of a Grid entity being tagged and stripped of its associations with knowledge entities; (b) services that provision semantic entities by supporting the creation, storage and access of different forms of knowledge entities and binding Grid entities with knowledge. For example: ontology services; metadata services, for accessing and storing bindings of Grid entities with knowledge entities; and annotation services for generating metadata from different types of information sources, like databases, files or provenance logs; and (c) a framework for evolving existing Grid entities (services and resources) to become semantically aware, able to consume and produce semantic entities and process them to add value to their functionality.
Two evolutionary mechanisms include: (i) semantically annotating existing entities that could facilitate dynamic discovery, dynamic composition or in general the development of “smarter” clients; and (ii) re-factoring existing services to become (Semantic Grid) Services capable of dealing with knowledge explicitly.
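Component (b)'s metadata service, which stores, serves and strips bindings between Grid entities and knowledge entities, might look like the following minimal sketch. The class and entity names are invented for illustration; a real service would be a Grid service with persistence and access control.

```python
class MetadataService:
    """Toy metadata service: Grid entities acquire and discard
    associations with knowledge entities over their life cycle."""

    def __init__(self):
        self.bindings = {}  # grid entity -> set of knowledge entities

    def bind(self, grid_entity: str, knowledge_entity: str):
        self.bindings.setdefault(grid_entity, set()).add(knowledge_entity)

    def strip(self, grid_entity: str, knowledge_entity: str):
        self.bindings.get(grid_entity, set()).discard(knowledge_entity)

    def annotations(self, grid_entity: str):
        return self.bindings.get(grid_entity, set())

svc = MetadataService()
svc.bind("blast_service", "ontology:SequenceAlignment")   # tag the entity
svc.bind("blast_service", "ontology:BioinformaticsTool")
svc.strip("blast_service", "ontology:BioinformaticsTool") # later, strip it
print(sorted(svc.annotations("blast_service")))
```

A discovery or composition client would query `annotations` rather than the entity itself, which is what lets existing, semantically unaware Grid entities evolve gradually under mechanism (i).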

Book ChapterDOI
15 Jan 2005
TL;DR: In this chapter, Description Logics are introduced and it is explained how the rich expressivity of OWL can be used to model the complexities of biology and bioinformatics.
Abstract: In this chapter, we introduce Description Logics. These logics have achieved mainstream credibility as ontology languages by forming the basis of the W3C Web Ontology Language OWL, and its predecessor, DAML+OIL. From a case study, we explain how the rich expressivity of OWL can be used to model the complexities of biology and bioinformatics. We discuss automated reasoning technologies and the roles that they can play in supporting the process of building ontologies. Keywords: ontology; description logic; OWL; modelling; OilEd; Protégé

Proceedings ArticleDOI
02 Oct 2005
TL;DR: This paper presents the approach for the annotation of all the aspects of a GS and the design, discovery and composition of Semantic Grid Services (SGS) in the ODESGS Framework.
Abstract: The convergence of the Semantic Web and Grid technologies has resulted in the Semantic Grid. The Semantic Grid should be service-oriented, as the Grid is, so the formal description of Grid Services (GS) turns out to be a crucial issue. In this paper we present our approach to this issue. The ODESGS Framework will enable the annotation of all the aspects of a GS and the design, discovery and composition of Semantic Grid Services (SGS).