
Showing papers by "Nigel Shadbolt published in 1994"


Journal ArticleDOI
TL;DR: Laddering is a structured questioning technique derived from the repertory grid technique, enabling a hierarchy of concepts to be established, and the potential for synergy between the laddering tool and other knowledge acquisition techniques implemented within KEW is explored.
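The core laddering idea, probing downward ("what kinds of X are there?") to grow a hierarchy of concepts, can be sketched in a few lines. Everything below (class name, seed concepts) is an invented illustration, not taken from KEW or the laddering tool itself:

```python
# Minimal sketch of laddering: a hierarchy grown by repeated downward
# probes of the form "what kinds of X are there?". The seed concepts
# are illustrative only.

class Concept:
    def __init__(self, name):
        self.name = name
        self.children = []

    def add_child(self, name):
        """Record an answer to a downward probe as a sub-concept."""
        child = Concept(name)
        self.children.append(child)
        return child

    def depth(self):
        """Depth of the concept hierarchy rooted here."""
        return 1 + max((c.depth() for c in self.children), default=0)

# Simulated laddering session: each answer refines the concept above it.
root = Concept("vehicle")
car = root.add_child("car")     # "What kinds of vehicle are there?"
car.add_child("saloon")         # "What kinds of car are there?"
car.add_child("estate")
root.add_child("bicycle")

print(root.depth())  # the hierarchy is three levels deep
```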

99 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose a theory of actual and possible architectures for a complete agent with its own mind, driven by its own desires: a self-modifying control system, with a hierarchy of levels of control and a different hierarchy of levels of implementation.
Abstract: Much research on intelligent systems has concentrated on low level mechanisms or sub-systems of restricted functionality. We need to understand how to put all the pieces together in an architecture for a complete agent with its own mind, driven by its own desires. A mind is a self-modifying control system, with a hierarchy of levels of control, and a different hierarchy of levels of implementation. AI needs to explore alternative control architectures and their implications for human, animal, and artificial minds. Only within the framework of a theory of actual and possible architectures can we solve old problems about the concept of mind and causal roles of desires, beliefs, intentions, etc. The high level "virtual machine" architecture is more useful for this than detailed mechanisms. E.g. the difference between connectionist and symbolic implementations is of relatively minor importance. A good theory provides both explanations and a framework for systematically generating concepts of possible states and processes. Lacking this, philosophers cannot provide good analyses of concepts, psychologists and biologists cannot specify what they are trying to explain or explain it, and psychotherapists and educationalists are left groping with ill-understood problems. The paper sketches some requirements for such architectures, and analyses an idea shared between engineers and philosophers: the concept of "semantic information".

25 citations


Journal ArticleDOI
TL;DR: A "walkthrough" of the Sisyphus solution from analysis to implementation in detail is presented, so that all acquisition, modelling and design decisions can be seen in context.
Abstract: In this paper a solution to the Sisyphus room allocation problem is discussed which uses the generalized directive model (GDM) methodology developed in the ACKnowledge project, together with the knowledge engineering methodology developed in the VITAL project. After briefly introducing these methodologies, the paper presents a "walkthrough" of the Sisyphus solution from analysis to implementation in detail, so that all acquisition, modelling and design decisions can be seen in context. The selection of a reusable off-the-shelf model from the GDM library is presented, together with a discussion of the ways in which this selection process can drive the knowledge acquisition process. Next, there is an account of the instantiation of the GDM and the imposition of a control regime over the dataflow structure; we show how this process uncovers hidden constraints and inconsistencies in Siggi's account of his own problem solving. The output of this KA phase consists of a conceptual model of the problem which is discussed in detail and formalized in terms of the VITAL conceptual modelling language. From this analysis of the problem, we move on to discussion of the issues concerning the design and implementation of a system, and we show how our implementation satisfies the specification of the Sisyphus problem.
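The room-allocation task underlying Sisyphus can be illustrated with a toy backtracking solver. The people, rooms, and constraints below are invented for illustration; the paper's GDM/VITAL solution is a full knowledge-engineering methodology, not this small search procedure:

```python
# Toy constraint-satisfying room allocation, in the spirit of the
# Sisyphus-1 task. People, rooms, and constraints are invented.

def allocate(people, rooms, constraints, assignment=None):
    """Return a person -> room assignment satisfying every constraint, or None."""
    assignment = assignment or {}
    if len(assignment) == len(people):
        return assignment
    person = people[len(assignment)]
    for room in rooms:
        trial = {**assignment, person: room}
        if all(c(trial) for c in constraints):
            result = allocate(people, rooms, constraints, trial)
            if result is not None:
                return result
    return None  # no consistent allocation exists

def head_alone(a):
    # the head's room must not be shared with anyone else
    if "head" not in a:
        return True
    return all(room != a["head"] for person, room in a.items() if person != "head")

def secretary_in_r1(a):
    # the secretary must sit in room R1
    return a.get("secretary", "R1") == "R1"

people = ["head", "secretary", "researcher"]
rooms = ["R1", "R2"]
constraints = [head_alone, secretary_in_r1]

print(allocate(people, rooms, constraints))
# {'head': 'R2', 'secretary': 'R1', 'researcher': 'R1'}
```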

20 citations


Proceedings ArticleDOI
24 Oct 1994
TL;DR: The paper details the hardware architecture of the authors' autonomous, mobile agents, which have behavioural repertoires wide enough to tackle complex tasks in real-world domains, and describes progress on the reactive components of the system.
Abstract: We are exploring the extent to which we can build autonomous, mobile agents with sufficiently wide behavioural repertoires to be able to tackle complex tasks in real-world domains. Our research is developing a layered architecture within which to characterise and implement such agents. The central problem we have to tackle is the ability to build agents that can perceive, plan and act under pressure of time. Our architecture tackles the planning problem by incorporating reflective or deliberative behaviour with reactive or nondeliberative ones. The balance between these varieties of behaviour is the tricky thing to get right. In this paper we will detail our hardware architecture and describe progress in the reactive components of our system. We will outline a number of behaviours that we have obtained. Finally we will outline our approach to the incorporation of deliberative planning and indicate the task environment in which we intend to evaluate our approach.
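The balance between reactive and deliberative behaviour described above can be caricatured as priority-ordered arbitration: reactive safety behaviours pre-empt the planner's suggestion. The behaviours and sensor readings below are invented; the paper's layered architecture is far richer than this sketch:

```python
# Priority-ordered arbitration between a reactive behaviour and a
# deliberative planner. Behaviours and sensor fields are invented.

def avoid_obstacle(sensors):
    # Reactive layer: fires only when an obstacle is close ahead.
    return "turn_left" if sensors["obstacle_ahead"] else None

def follow_plan(sensors, plan):
    # Deliberative layer: suggest the next planned step.
    return plan[0] if plan else None

def arbitrate(sensors, plan):
    """Take the first suggestion in priority order; reactive layers win."""
    for suggestion in (avoid_obstacle(sensors), follow_plan(sensors, plan)):
        if suggestion is not None:
            return suggestion
    return "idle"

print(arbitrate({"obstacle_ahead": True}, ["go_to_door"]))   # turn_left
print(arbitrate({"obstacle_ahead": False}, ["go_to_door"]))  # go_to_door
```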

7 citations



Journal ArticleDOI
TL;DR: This paper presents solutions to both problems which allow the axioms defining the reified logic to be eliminated from the database during theorem proving hence reducing the search space while retaining completeness.
Abstract: This paper is concerned with the application of the resolution theorem proving method to reified logics. The logical systems treated include the branching temporal logics and logics of belief based on K and its extensions. Two important problems concerning the application of the resolution rule to reified systems are identified. The first is the redundancy in the representation of truth functional relationships and the second is the axiomatic reasoning about modal structure. Both cause an unnecessary expansion in the search space. We present solutions to both problems which allow the axioms defining the reified logic to be eliminated from the database during theorem proving hence reducing the search space while retaining completeness. We describe three theorem proving methods which embody our solutions and support our analysis with empirical results.
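The resolution rule at the heart of the paper can be shown in its plain propositional form; the reified temporal and belief logics the paper treats add considerable machinery on top of this basic step:

```python
# Plain propositional binary resolution. A clause is a frozenset of
# literals; a literal is an (atom, polarity) pair.

def resolve(c1, c2):
    """Return every resolvent of two clauses (may be an empty list)."""
    resolvents = []
    for (atom, sign) in c1:
        if (atom, not sign) in c2:
            # Cancel the complementary pair and union the remainders.
            resolvent = (c1 - {(atom, sign)}) | (c2 - {(atom, not sign)})
            resolvents.append(frozenset(resolvent))
    return resolvents

# Resolve {p, q} with {~p, r}  ->  {q, r}
c1 = frozenset({("p", True), ("q", True)})
c2 = frozenset({("p", False), ("r", True)})
print(resolve(c1, c2))
```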

6 citations


Book ChapterDOI
26 Sep 1994
TL;DR: This paper outlines a principled methodology, based on KADS, for generating runnable expert system knowledge-bases from the output of high-level knowledge acquisition tools, based upon a synthesis of earlier work arising from the UK CONSENSUS, ESPRIT P2576 ACKNOWLEDGE and P5365 VITAL projects.
Abstract: This paper outlines a principled methodology, based on KADS, for generating runnable expert system knowledge-bases from the output of high-level knowledge acquisition tools. This methodology is based upon a synthesis of earlier work arising from the UK CONSENSUS, ESPRIT P2576 ACKNOWLEDGE and P5365 VITAL projects. REKAP integrates knowledge elicitation techniques, real-time structured analysis and a model of the desired run-time architecture within a common framework based upon extensions to the original KADS four-layer model of expertise. The methodology has been realised as a compiler between ProtoKEW, a knowledge acquisition toolkit, structured task-analysis tools and MUSE, a real-time expert system shell. The paper focuses on a particular example of the use of the methodology, in the domain of situation assessment.

5 citations


01 Jan 1994
TL;DR: It is shown how sound formalisms such as a temporal logic can be used for knowledge representation at the domain level and the implications of this on the choice of domain representation formalism are explored.
Abstract: This paper describes how a proven planning technique can be employed within a structured approach to knowledge based system development (KADS), via the definition of a model of expertise. This paper proposes a KADS model of expertise for hierarchical skeletal plan refinement. This model is based on a conceptual analysis by Kühn and Schmalhofer (1992) of skeletal planning as described in Friedland and Iwasaki (1985). The separation of domain and control knowledge is well defined in the KADS problem solving model and we explore the implications of this on the choice of domain representation formalism. We show how sound formalisms such as a temporal logic can be used for knowledge representation at the domain level.
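Hierarchical skeletal plan refinement, expanding abstract steps into more concrete sub-plans until only primitive actions remain, can be sketched as follows. The refinement table is an invented toy example, not the paper's temporal-logic formalisation:

```python
# Skeletal plan refinement: abstract steps are replaced by sub-plans
# until the plan contains only primitives. The table is illustrative.

refinements = {
    "treat_infection": ["diagnose", "administer_drug"],
    "administer_drug": ["select_drug", "give_dose"],
}

def refine(plan):
    """Expand abstract steps left-to-right until only primitives remain."""
    result = []
    for step in plan:
        if step in refinements:
            result.extend(refine(refinements[step]))  # recurse into sub-plan
        else:
            result.append(step)                       # already primitive
    return result

print(refine(["treat_infection"]))
# ['diagnose', 'select_drug', 'give_dose']
```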

5 citations




Proceedings ArticleDOI
05 Aug 1994
TL;DR: In developing this RST-based method, Hovy found that RST relations are a powerful tool for planning paragraphs; the method builds a tree structure that represents the internal organisation and rhetorical dependencies between clauses in a text.
Abstract: 1. Introduction Over the last decade, the research goals in natural language generation have shifted from the generation of isolated sentences to the production of coherent multi-sentence paragraphs. Two major aspects of the generation process have been focused on: deciding 'what to say' (the strategic level) and deciding 'how to say it' (the tactical level). In 1985, McKeown designed one of the first systems to produce paragraphs using so-called schemata to describe conventional text structures in terms of patterns. Schemata are used to determine the content and order of the clauses in paragraphs (McKeown, 1985). However, these structures have a major limitation (Moore and Paris, 1988): schemata do not contain a description of the intentional and rhetorical role that each part of the paragraph plays with respect to the whole paragraph. In 1988, Hovy first employed RST (Rhetorical Structure Theory) relations, which state the relationships between individual elements of a text, to control the construction of texts (Hovy, 1988). In developing this RST-based method, Hovy discovered that RST relations are a powerful tool for planning paragraphs. They support reasoning about the intentions of writers and readers in a very natural way. Planning with rhetorical relations affords more flexibility than schemata. This method of planning paragraphs builds a tree structure that represents the internal organisation and rhetorical dependencies between clauses in a text. But there is a cost: it is more difficult to assemble an RST paragraph tree from a set of independent relations than it is to instantiate and traverse a schema (Hovy, 1991). In 1992, Hovy et al. described a new text planner (Hovy et al., 1992) that identifies the distinct types of knowledge necessary to generate coherent discourse in a text generation system.
These knowledge resources are integrated under a planning process that draws from appropriate resources whatever knowledge is needed to construct a text. Though Hovy et al. do not claim to have identified all the knowledge sources required to produce coherent discourse, their planner sets a trend for applying multiple knowledge resources for more complete and flexible planning of text.
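The RST paragraph tree described above, with internal nodes carrying a rhetorical relation between a nucleus and a satellite and leaves carrying clauses, can be sketched as a small data structure. The relation names follow RST, but the example clauses are invented:

```python
# Minimal RST-style paragraph tree: each internal node relates a nucleus
# (the central material) to a satellite (supporting material). The
# example text is invented for illustration.

class RSTNode:
    def __init__(self, relation, nucleus, satellite):
        self.relation = relation
        self.nucleus = nucleus      # RSTNode or clause string
        self.satellite = satellite  # RSTNode or clause string

    def linearise(self):
        """Read the tree out as an ordered sequence of clauses."""
        clauses = []
        for child in (self.nucleus, self.satellite):
            if isinstance(child, RSTNode):
                clauses.extend(child.linearise())
            else:
                clauses.append(child)
        return clauses

tree = RSTNode(
    "elaboration",
    "The system planned the text.",
    RSTNode("evidence",
            "It used rhetorical relations.",
            "Schemata alone could not express them."),
)
print(" ".join(tree.linearise()))
```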

Proceedings Article
13 Jun 1994
TL;DR: A modelling procedure is described which can incorporate both context-dependent effects and demons, and whose update method is automatically derived from an axiomatisation of the domain in a temporal calculus, allowing the system designer great flexibility in choosing the trade-off between representational expressivity and computational speed.
Abstract: AI planning systems must be able to update their internal model of the domain in order to determine the effects of a possible action. The most widely used update method is the STRIPS operator. STRIPS operators are extremely efficient. However, the use of STRIPS operators requires that one specify all the consequences of an action beforehand. This becomes almost impossible for complex domains. Work by Pednault (1989) and Wilkins (1988) overcomes the problems associated with ordinary STRIPS operators by enhancing them with context-dependent effects and demons respectively. In this paper we describe a modelling procedure which can incorporate both these features. Moreover, the update method is automatically derived from an axiomatisation of the domain in temporal calculus. This allows the system designer great flexibility in choosing the trade-off between representational expressivity and computational speed.
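The enhancement of STRIPS operators with context-dependent effects, in the spirit of Pednault's work, can be sketched like this. The operator and facts are an invented toy example, not the paper's temporal-calculus derivation:

```python
# STRIPS-style state update extended with context-dependent effects:
# each effect carries a condition tested against the current state.
# States are sets of ground facts; the operator is invented.

def apply_op(state, op):
    """Apply an operator to a state, or return None if preconditions fail."""
    if not op["pre"] <= state:
        return None
    new_state = set(state)
    for cond, add, delete in op["effects"]:
        if cond <= state:       # context-dependent: fires only if cond holds
            new_state |= add
            new_state -= delete
    return new_state

move_out = {
    "pre": {"at(robot,room1)"},
    "effects": [
        # unconditional effect: the robot moves
        (set(), {"at(robot,hall)"}, {"at(robot,room1)"}),
        # context-dependent effect: the box moves too, but only if held
        ({"holding(box)"}, {"at(box,hall)"}, {"at(box,room1)"}),
    ],
}

s = {"at(robot,room1)", "holding(box)", "at(box,room1)"}
print(sorted(apply_op(s, move_out)))
```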