Nigel Shadbolt

Researcher at University of Oxford

Publications: 589
Citations: 21,792

Nigel Shadbolt is an academic researcher at the University of Oxford. He has contributed to research on the topics of the Semantic Web and Ontology (information science). He has an h-index of 65 and has co-authored 564 publications receiving 20,635 citations. Previous affiliations of Nigel Shadbolt include the Open University and the University of Edinburgh.

Papers

Monitoring research collaborations using semantic web technologies
Conference or Workshop Item

TL;DR: Semantic Web technologies are used to construct a flexible application framework to provide multiple complementary visualisations of the data, while separating the issues of knowledge acquisition and curation from the more user-centric interface requirements.
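
To make that separation concrete, here is a minimal sketch in Python using rdflib (not the paper's actual framework; the namespace, people, and paper are hypothetical): the collaboration data lives as RDF triples, and a visualisation layer reads it only through a SPARQL query, so acquisition and curation never leak into the interface.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import FOAF, RDF

EX = Namespace("http://example.org/")  # hypothetical namespace for the sketch

g = Graph()
g.bind("foaf", FOAF)

# Acquisition/curation side: assert who made which paper.
g.add((EX.paper1, RDF.type, EX.Paper))
g.add((EX.alice, FOAF.name, Literal("Alice")))
g.add((EX.bob, FOAF.name, Literal("Bob")))
g.add((EX.alice, FOAF.made, EX.paper1))
g.add((EX.bob, FOAF.made, EX.paper1))

# Interface side: any visualisation can be driven from a SPARQL query,
# without knowing how the triples above were acquired or curated.
q = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT DISTINCT ?a ?b WHERE {
  ?a foaf:made ?p .
  ?b foaf:made ?p .
  FILTER (STR(?a) < STR(?b))   # one edge per unordered co-author pair
}
"""
for row in g.query(q):
    print(f"collaboration edge: {row.a} -- {row.b}")
```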

An Investigation into Automatically Captured Autobiographical Metadata, and the Support for Autobiographical Narrative Generation
Mini-Thesis: PhD upgrade report

TL;DR: Presents an infrastructure for capturing and exploiting personal metadata to drive research into context-aware systems, and describes how the captured autobiographical context will be evaluated to shed light on the utility of the harnessed metadata for aiding human memory management.
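
As a rough illustration of such a capture infrastructure (a purely hypothetical sketch, not the report's design), the Python snippet below logs timestamped context events and supports the kind of time-window retrieval a narrative generator would need:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ContextEvent:
    """One automatically captured piece of autobiographical metadata."""
    kind: str      # e.g. "location", "photo", "document-open"
    source: str    # the device or application that captured it
    detail: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

log: list[ContextEvent] = []

def capture(kind: str, source: str, detail: str) -> None:
    """Append an event to the autobiographical log as it happens."""
    log.append(ContextEvent(kind, source, detail))

def events_between(start: datetime, end: datetime) -> list[ContextEvent]:
    """Narrative-generation support: everything captured in a time window."""
    return [e for e in log if start <= e.timestamp <= end]

capture("location", "gps-daemon", "51.7520,-1.2577")      # hypothetical values
capture("document-open", "editor", "upgrade-report.tex")  # hypothetical values
```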

Towards Ontology Mapping: DL View or Graph View?

TL;DR: It is argued that combining the DL (Description Logic) view and the graph view of an ontology can lead to a better solution for ontology mapping.
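
A toy sketch of how the two views might be combined (an illustration of the general idea, not the paper's method): lexical similarity over class labels stands in for the graph view, while a declared-disjointness check stands in for a DL-style consistency filter. All class names and the disjointness axiom are hypothetical.

```python
from difflib import SequenceMatcher

def lexical_sim(a: str, b: str) -> float:
    """Graph-view signal: string similarity between two class labels."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# DL-view signal: class pairs the ontologies declare disjoint (hypothetical).
DISJOINT = {frozenset({"Person", "Publication"})}

def dl_consistent(a: str, b: str) -> bool:
    """Veto any mapping that would identify two disjoint classes."""
    return frozenset({a, b}) not in DISJOINT

candidates = [("Author", "Person"), ("Person", "Publication"), ("Paper", "Publication")]
mappings = [(a, b, round(lexical_sim(a, b), 2))
            for a, b in candidates
            if dl_consistent(a, b)]  # the DL view filters the graph view
print(mappings)  # ("Person", "Publication") is vetoed regardless of similarity
```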

A Semantic Matching Approach for Distributed RDF Data Query on a Knowledge Bus

TL;DR: This paper presents a knowledge bus infrastructure: a general solution for locating and extracting knowledge elements from distributed sources on demand, rather than loading all RDF triples into a large central triple store in advance.
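
A stand-in sketch of the on-demand idea (the actual knowledge bus is more elaborate): sources stay separate, and only the triples about a requested resource are pulled across when a query needs them. In-memory rdflib graphs stand in for the distributed sources; all names are hypothetical.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import FOAF

EX = Namespace("http://example.org/")

# In-memory stand-ins for distributed RDF sources; in a real deployment
# these would be remote documents or SPARQL endpoints.
people = Graph()
people.add((EX.shadbolt, FOAF.name, Literal("Nigel Shadbolt")))
projects = Graph()
projects.add((EX.shadbolt, EX.worksOn, EX.someProject))
SOURCES = [people, projects]

def triples_about(resource) -> Graph:
    """Locate and extract the knowledge elements for one resource on
    demand, instead of preloading every source into a central store."""
    result = Graph()
    for src in SOURCES:
        for triple in src.triples((resource, None, None)):
            result.add(triple)
    return result

for s, p, o in triples_about(EX.shadbolt):
    print(s, p, o)
```
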
Journal Article

Trust Explanations to Do What They Say

TL;DR: In this paper, the authors propose that developers of algorithms that explain AI outputs (xAI algorithms) should provide contracts specifying the use cases in which an explanation can and cannot be trusted.
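
In the spirit of that proposal, one could imagine an explanation shipping with a machine-readable contract; the sketch below is a hypothetical illustration of the idea, not anything taken from the paper. The method name and use-case labels are invented for the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ExplanationContract:
    """Declares where an xAI method's explanations may be relied upon."""
    method: str
    trusted_use_cases: frozenset
    untrusted_use_cases: frozenset

    def can_trust(self, use_case: str) -> bool:
        """Trust only use cases the developer explicitly vouches for."""
        return use_case in self.trusted_use_cases

# Hypothetical contract for a hypothetical saliency-map explainer.
contract = ExplanationContract(
    method="saliency-maps",
    trusted_use_cases=frozenset({"debugging-feature-attribution"}),
    untrusted_use_cases=frozenset({"justifying-decisions-to-end-users"}),
)

assert contract.can_trust("debugging-feature-attribution")
assert not contract.can_trust("justifying-decisions-to-end-users")
```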