
Showing papers in "New Generation Computing in 1991"


Journal ArticleDOI
TL;DR: It is shown that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.
Abstract: An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
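
To make the distinction concrete, here is a minimal sketch in the syntax of later answer-set systems such as clingo (an assumption made for readability; the paper predates these systems), showing the two negations and the preprocessor elimination of classical negation:

    % "Cross the tracks if no train is coming" -- two readings:
    cross1 :- not train.   % negation-as-failure: cross in the mere
                           % absence of information about a train
    cross2 :- -train.      % classical negation: cross only if it is
                           % explicitly known that no train is coming
    -train.                % an explicit negative fact

    % The preprocessor eliminates classical negation by renaming:
    % -train becomes a fresh atom train_neg, plus a consistency constraint.
    cross2b :- train_neg.
    train_neg.
    :- train, train_neg.   % train and its classical negation cannot both hold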

2,451 citations


Journal ArticleDOI
TL;DR: The stable model semantics for disjunctive logic programs and deductive databases is introduced; it generalizes the stable model semantics, defined earlier for normal (i.e., non-disjunctive) programs, and can be generalized to the class of all disjunctive logic programs.
Abstract: We introduce the stable model semantics for disjunctive logic programs and deductive databases, which generalizes the stable model semantics, defined earlier for normal (i.e., non-disjunctive) programs. Depending on whether only total (2-valued) or all partial (3-valued) models are used we obtain the disjunctive stable semantics or the partial disjunctive stable semantics, respectively. The proposed semantics are shown to have the following properties:
• For normal programs, the disjunctive (respectively, partial disjunctive) stable semantics coincides with the stable (respectively, partial stable) semantics.
• For normal programs, the partial disjunctive stable semantics also coincides with the well-founded semantics.
• For locally stratified disjunctive programs both (total and partial) disjunctive stable semantics coincide with the perfect model semantics.
• The partial disjunctive stable semantics can be generalized to the class of all disjunctive logic programs.
• Both (total and partial) disjunctive stable semantics can be naturally extended to a broader class of disjunctive programs that permit the use of classical negation.
• After translation of the program P into a suitable autoepistemic theory \( \hat P \), the disjunctive (respectively, partial disjunctive) stable semantics of P coincides with the autoepistemic (respectively, 3-valued autoepistemic) semantics of \( \hat P \).
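
A standard worked example (not taken from the paper) shows the basic behaviour: the disjunctive program below has exactly two total stable models.

    p ; q.      % disjunctive fact: at least one of p, q holds
    r :- p.     % r follows whenever p does
    % Total (2-valued) stable models: {p, r} and {q}.
    % Minimality rules out {p, q, r}: neither disjunct is adopted
    % unless it is needed.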

330 citations


Journal ArticleDOI
TL;DR: The &-Prolog system, a practical implementation of a parallel execution model for Prolog exploiting strict and non-strict independent and-parallelism, is described and shows significant speed advantages over state-of-the-art sequential systems.
Abstract: The &-Prolog system, a practical implementation of a parallel execution model for Prolog exploiting strict and non-strict independent and-parallelism, is described. Both automatic and manual parallelization of programs are supported. This description includes a summary of the system’s language and architecture, some details of its execution model (based on the RAP-WAM model), and data on its performance on sequential workstations and shared memory multiprocessors, which is compared to that of current Prolog systems. The results to date show significant speed advantages over state-of-the-art sequential systems.
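
The flavour of the source-level annotation can be seen in the quicksort example that is standard in the &-Prolog literature; &/2 is &-Prolog's parallel conjunction, and the two recursive calls run in and-parallel because their arguments are independent (a sketch, not code from the paper):

    qsort([], []).
    qsort([X|L], Sorted) :-
        partition(L, X, Small, Large),
        ( qsort(Small, S1) & qsort(Large, S2) ),  % independent and-parallelism
        append(S1, [X|S2], Sorted).               % append/3 as usual

    partition([], _, [], []).
    partition([Y|Ys], X, [Y|S], L) :- Y =< X, partition(Ys, X, S, L).
    partition([Y|Ys], X, S, [Y|L]) :- Y > X,  partition(Ys, X, S, L).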

95 citations


Journal ArticleDOI
TL;DR: This paper describes how one such default and abductive reasoning system (namely Theorist) can be translated into Horn clauses, so as to combine the clarity of abductive reasoning systems with the efficiency of Horn clause deduction systems.
Abstract: Artificial intelligence researchers have been designing representation systems for default and abductive reasoning. Logic programming researchers have been working on techniques to improve the efficiency of Horn clause deduction systems. This paper describes how one such default and abductive reasoning system (namely Theorist) can be translated into Horn clauses (with negation as failure), so that we can use the clarity of abductive reasoning systems and the efficiency of Horn clause deduction systems. We thus show how advances in expressive power that artificial intelligence workers are working on can directly utilise advances in efficiency that logic programming researchers are working on. Actual code from a running system is given.
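
As a rough illustration of the idea (a hedged sketch; the paper gives the actual translation and running code), a Theorist default such as "birds fly" can be rendered as a Horn clause guarded by negation as failure:

    % Theorist-style default:  default birds_fly(X): flies(X) <- bird(X).
    % One plausible rendering with negation as failure:
    flies(X) :- bird(X), \+ ab_birds_fly(X).   % fly unless abnormal
    ab_birds_fly(X) :- penguin(X).             % penguins block the default
    bird(tweety).
    bird(opus).
    penguin(opus).
    % ?- flies(tweety).  succeeds.   ?- flies(opus).  fails.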

57 citations


Journal ArticleDOI
TL;DR: This paper presents mutually recursive versions of the update procedures for programs and proves various properties of the procedures, including their correctness, as well as generalising the procedures so that they can update an (arbitrary) program with an (arbitrary) formula.
Abstract: We consider the problem of updating a knowledge base, where a knowledge base is realised as a (logic) program. In a previous paper, we presented procedures for deleting an atom from a normal program and inserting an atom into a normal program, concentrating particularly on the case when negative literals appear in the bodies of program clauses. We also proved various properties of the procedures including their correctness. Here we present mutually recursive versions of the update procedures and prove their correctness and other properties. We then generalise the procedures so that we can update an (arbitrary) program with an (arbitrary) formula. The correctness of the update procedures for programs is also proved.
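
A toy example (not the paper's algorithm) shows why the procedures must call each other when negative literals are present:

    % Program P:
    p :- \+ q.        % p holds because q is absent from P
    % To delete p, falsifying the body is one option, and the only way to
    % falsify \+ q is to *insert* q; dually, deleting an atom that occurs
    % negatively in some body may trigger further insertions. Deletion
    % calls insertion and vice versa -- hence the mutual recursion.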

46 citations


Journal ArticleDOI
TL;DR: This paper presents a scheme to deal efficiently with incremental search problems that allows the incremental addition and deletion of constraints and is based on re-execution, using parts of computation paths stored during previous computations.
Abstract: Incremental search consists of adding new constraints or deleting old ones once a solution to a search problem has been found. Although incremental search is of primary importance in application areas such as scheduling, planning, troubleshooting, and interactive problem-solving, it is not presently supported by logic programming languages, and little research has been devoted to this topic. This paper presents a scheme to deal efficiently with incremental search problems. The scheme allows the incremental addition and deletion of constraints and is based on re-execution, using parts of computation paths stored during previous computations. The scheme has been implemented as part of the constraint logic programming language CHIP and applied to practical problems, where it has shown arbitrarily large (i.e. unbounded) speedups compared with previous approaches.
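
The programming-level picture, sketched here with SWI-Prolog's library(clpfd) standing in for CHIP (an assumption; CHIP's re-execution machinery is internal to the system), is that a constraint is added after a first solution has been found and the problem is solved again:

    :- use_module(library(clpfd)).

    schedule([A, B, C]) :-
        [A, B, C] ins 1..5,
        A + 2 #=< B,        % task A must finish before B starts
        B + 1 #=< C,        % task B must finish before C starts
        label([A, B, C]).

    % ?- schedule(S).                 % first solution: S = [1, 3, 4]
    % ?- schedule([A,B,C]), C #>= 5.  % incremental constraint; naive
    %                                 % re-solving finds [1, 3, 5]
    % CHIP's scheme avoids the naive re-run by replaying stored parts of
    % the previous computation path.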

40 citations


Journal ArticleDOI
TL;DR: A new reference theory, strict completion, is proposed; it combines a program transformation that turns any given program into a strict one with the usual notion of program completion, and it is a reasonable reference theory for discussing program semantics and completeness results.
Abstract: The paper presents a new approach to the problem of completeness of the SLDNF-resolution. We propose a different reference theory that we call strict completion. This new concept of completion (comp*(P)) is based on a program transformation that, given any program, transforms it into a strict one (with the same computational behaviour), combined with the usual notion of program completion. We consider it a reasonable reference theory to discuss program semantics and completeness results. The standard 2-valued logic is used. The new comp*(P) is always consistent, and the completeness of all allowed programs and goals w.r.t. comp*(P) is proved.
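
The classic difficulty with the usual completion, which comp*(P) is designed to remove, fits in two lines:

    p :- \+ p.   % comp(P) contains the equivalence p <-> not p,
                 % which is inconsistent in 2-valued logic
    % comp*(P) instead completes a strict transform of P with the same
    % computational behaviour, and is always consistent.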

24 citations


Journal ArticleDOI
TL;DR: A simple transformation of logic programs capable of inverting the order of computation is investigated, which may serve such purposes as left-recursion elimination, loop-elimination, simulation of forward reasoning, isotopic modification of programs and simulation of abductive reasoning.
Abstract: We investigate a simple transformation of logic programs capable of inverting the order of computation. Several examples are given which illustrate how this transformation may serve such purposes as left-recursion elimination, loop-elimination, simulation of forward reasoning, isotopic modification of programs and simulation of abductive reasoning.
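
A minimal example of the effect (a standard reformulation, shown only to illustrate the goal; the paper's transformation is systematic rather than hand-applied):

    % Left-recursive transitive closure: loops immediately under Prolog's
    % depth-first strategy, before any edge/2 fact is consulted.
    path(X, Y) :- path(X, Z), edge(Z, Y).
    path(X, Y) :- edge(X, Y).

    % Inverting the order of computation removes the left recursion:
    path2(X, Y) :- edge(X, Z), path2(Z, Y).
    path2(X, Y) :- edge(X, Y).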

14 citations


Journal ArticleDOI
Henry Tirri
TL;DR: The synergy of rules and detector predicates combines the advantages of both worlds: it maintains the clarity of the rule-based knowledge representation at the higher reasoning levels without sacrificing the power of noise-tolerant pattern association offered by neural computing methods.
Abstract: The relation of subsymbolic (neural computing) and symbolic computing has been a topic of intense discussion. We address some of the drawbacks of current expert system technology and study the possibility of using neural computing principles to improve their competence. In this paper we focus on the problem of using neural networks to implement expert system rule conditions. Our approach allows symbolic inference engines to make direct use of complex sensory input via so-called detector predicates. We also discuss the use of self-organizing Kohonen networks as a means to determine those attributes (properties) of data that reflect meaningful statistical relationships in the expert system input space. This mechanism can be used to address the difficult problem of conceptual clustering of information. The concepts introduced are illustrated by two application examples: an automatic inspection system for circuit packs and an expert system for respiratory and anesthesia monitoring. The adopted approach differs from earlier research on the use of neural networks as expert systems, where the only method to obtain knowledge is learning from training data. In our approach the synergy of rules and detector predicates combines the advantages of both worlds: it maintains the clarity of the rule-based knowledge representation at the higher reasoning levels without sacrificing the power of noise-tolerant pattern association offered by neural computing methods.
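
Schematically (a hypothetical sketch; the predicate and network names are illustrative assumptions, not the paper's actual interface), a rule condition backed by a detector predicate looks like:

    % Rule at the symbolic level; the condition is evaluated by a neural
    % network rather than by symbolic pattern matching.
    faulty(Board) :-
        detector(solder_defect_net, Board, Score),  % neural pattern assoc.
        Score > 0.9.                                % confidence threshold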

11 citations


Journal ArticleDOI
TL;DR: This paper presents some benchmark timings from an optimising Prolog compiler using global analysis for a RISC workstation, the MIPS R2030, and suggests that global analysis is a fruitful source of information for an optimising Prolog compiler and that the performance of special purpose Prolog hardware can be at least matched by the code from a compiler using such information.
Abstract: This paper presents some benchmark timings from an optimising Prolog compiler using global analysis for a RISC workstation, the MIPS R2030. These results are extremely promising. For example, the infamous naive reverse benchmark runs at 2 mega LIPS. We compare these timings with those for other Prolog implementations running on the same workstation and with published timings for the KCM, a recent piece of special purpose Prolog hardware. The comparison suggests that global analysis is a fruitful source of information for an optimising Prolog compiler and that the performance of special purpose Prolog hardware can be at least matched by the code from a compiler using such information. We include some analysis of the sources of the improvement global analysis yields. An overview of the compiler is given and some implementation issues are discussed. This paper is an extended version of Ref. 15).
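
For reference, the benchmark in question is the standard naive reverse program, whose cost is quadratic in the list length:

    nrev([], []).
    nrev([H|T], R) :- nrev(T, RT), append(RT, [H], R).

    append([], L, L).
    append([H|T], L, [H|R]) :- append(T, L, R).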

4 citations


Journal ArticleDOI
TL;DR: A multi-ring dataflow machine to support the OR-parallelism and the argument parallelism of logic programs is proposed and a new scheme is suggested for handling the deferred read mechanism of the dataflow architecture.
Abstract: Logic programming languages have gained wide acceptance for two reasons. The first is their clear declarative semantics, and the second is the wide scope for parallelism they provide, which can be exploited by building suitable parallel architectures. In this paper, we propose a multi-ring dataflow machine to support the OR-parallelism and the argument parallelism of logic programs. A new scheme is suggested for handling the deferred read mechanism of the dataflow architecture. The required data structures, the dataflow actors and the built-in dataflow procedures for OR-parallel execution are discussed. Multiple binding environments arising in the OR-parallel execution are handled by a new scheme called the tagged variable scheme. Schemes for constrained OR-parallel execution are also discussed.

Journal ArticleDOI
TL;DR: The introduction and specification of Paragon, a parallel object-oriented language based on graph rewriting and message passing principles, and an illustration of the approach at work in the design of a parallel supercombinator graph reduction machine.
Abstract: The need to design and verify architectures to support parallel implementations of declarative languages has led to the development of a novel language, called Paragon, which bridges the gap between the top-level specification of the abstract machine, and its detailed implementation in terms of parallel processes and message passing.

Journal ArticleDOI
TL;DR: A method is proposed to detect the generation of garbage cells by analyzing the source text of functional programs; the cells whose generation is expected are reclaimed immediately, with very little overhead at execution time.
Abstract: We have proposed a method to detect the generation of garbage cells by analyzing a source text of functional programming languages.15) The garbage cells whose generation is expected are reclaimed immediately with very little overhead at execution time. We call this method compile-time GC. To investigate the effects of the compile-time GC, an experimental LISP interpreter has been implemented, and several sample programs were executed. We found that for most programs, many of the garbage cells are detected and reclaimed by the compile-time GC. Programming techniques to improve the reclaimability are also studied.

Journal ArticleDOI
TL;DR: This paper illustrates the superiority of the partitioned representations model over a standard logic-based model in processing semantically complex discourse.
Abstract: A logic-based system of knowledge representation for natural language discourse has three primary advantages; a standard logic-based system, on the other hand, also has significant disadvantages. Spaceprobe5) is a non-standard logic-based system that supports a powerful model of discourse processing in which discourse content is distributed appropriately over multiple spaces, each representing some aspect of (a possible) reality, in accordance with the principles of partitioned representations.6,12) It retains the advantages of the standard logic-based representation, while overcoming the disadvantages. In addition, it can be used to account for a large number of discourse-level phenomena in a simple and uniform way. Among these are presupposition and the semantics of temporal expressions. This paper illustrates the superiority of the partitioned representations model over a standard logic-based model in processing semantically complex discourse.

Journal ArticleDOI
TL;DR: This paper presents an implementation scheme based on kernel support, applicable to both uniprocessor and multiprocesser architectures, that is more efficient than equivalent program transformations and imposes little communication and computation overhead.
Abstract: Many interesting applications of concurrent logic languages require the ability to initiate, monitor, and control computations. Two approaches to the implementation of these control functions have been proposed: one based on kernel support and the other on program transformation. The efficiency of the two approaches has not previously been compared. This paper presents an implementation scheme based on kernel support, applicable to both uniprocessor and multiprocessor architectures. Experimental studies on a uniprocessor show the scheme to be more efficient than equivalent program transformations. Analysis of a multiprocessor implementation shows that the scheme imposes little communication and computation overhead.

Journal ArticleDOI
TL;DR: This paper describes a new implementation technique for lazy functional languages, called preliminary arrangements of arguments, in which the evaluator partly processes every argument before calling functions and works lazily with fewer memory cells than conventional methods.
Abstract: This paper describes a new implementation technique called preliminary arrangements of arguments for lazy functional languages. Unlike conventional lazy evaluators, the evaluator with preliminary arrangements partly processes every argument before calling functions. It works in a lazy way with fewer memory cells than conventional methods. The practical importance of this technique is demonstrated by some benchmark results.

Journal ArticleDOI
TL;DR: The memory access characteristics in KL1 parallel execution and a locally parallel cache mechanism with hardware lock are described and new software controlled memory access commands are introduced, named DW, ER, and RP.
Abstract: The parallel inference machine (PIM) is now being developed at ICOT. It consists of a dozen or more clusters, each of which is a tightly coupled multiprocessor (comprising about eight processing elements) with shared global memory and a common bus. Kernel language 1 (KL1), a parallel logic programming language based on Guarded Horn Clauses (GHC), is executed on each PIM cluster.

Journal ArticleDOI
TL;DR: A new searching approach, Selected Jumping Searching (SJS), in which the length of every searching step from a node to another node along a path is much longer than one, is proposed in this paper.
Abstract: A new searching approach, Selected Jumping Searching (SJS), in which the length of every searching step from a node to another node along a path is much longer than one, is proposed in this paper. In addition to all the problems which can be solved by GPS or MPS, SJS can also solve problems such as the N Queens problem and the N Puzzle problem, on which GPS and MPS fail when N is large and whose computational complexity is exponential under the general searching approach or MPS. The searching algorithms of SJS, algorithms C, C₀ (or C₀′) and C*, whose computational complexities are only polynomial and linear respectively, are also proposed in this paper. Finally, experimental results for the Five Hundred Queens problem, the more than two Thousand Queens problem and the N Puzzle problem (where N is more than one thousand) are given. Obtaining the first few solutions of the Fifty Queens problem, or building the 35²−1 Puzzle problem's Macro Table for MPS, would take 10²⁵ years even on a 10¹⁵-ops supercomputer under the general searching approach. Using the proposed approach and algorithms (whose computational complexities are O(N) and O(N^(3/2)) respectively), 4000 solutions of the Five Hundred Queens problem were obtained in about 227 minutes on an HP 9000/835, and the average time to solve the 35²−1 Puzzle problem from an arbitrary problem state is less than one minute on an HP 9000/300. SJS is a searching approach obtained by mapping from the Macro Transformation approach.
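
For contrast, here is the standard constraint formulation of N-Queens, sketched in SWI-Prolog's library(clpfd) (an illustration of the conventional approach whose cost grows badly with N; SJS itself works quite differently):

    :- use_module(library(clpfd)).

    queens(N, Qs) :-
        length(Qs, N),
        Qs ins 1..N,          % one queen per column; value = row
        safe(Qs),
        label(Qs).

    safe([]).
    safe([Q|Qs]) :- no_attack(Q, Qs, 1), safe(Qs).

    no_attack(_, [], _).
    no_attack(Q, [Q1|Qs], D) :-
        Q #\= Q1,             % different rows
        abs(Q - Q1) #\= D,    % not on the same diagonal
        D1 #= D + 1,
        no_attack(Q, Qs, D1).

    % ?- queens(8, Qs).   Qs = [1, 5, 8, 6, 3, 7, 2, 4]  (first solution)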