# Showing papers in "Artificial Intelligence in 1986"

••

TL;DR: It is shown that if the network is singly connected (e.g. tree-structured), then probabilities can be updated by local propagation in an isomorphic network of parallel and autonomous processors and that the impact of new information can be imparted to all propositions in time proportional to the longest path in the network.

Abstract: Belief networks are directed acyclic graphs in which the nodes represent propositions (or variables), the arcs signify direct dependencies between the linked propositions, and the strengths of these dependencies are quantified by conditional probabilities. A network of this sort can be used to represent the generic knowledge of a domain expert, and it turns into a computational architecture if the links are used not merely for storing factual knowledge but also for directing and activating the data flow in the computations which manipulate this knowledge. The first part of the paper deals with the task of fusing and propagating the impacts of new information through the networks in such a way that, when equilibrium is reached, each proposition will be assigned a measure of belief consistent with the axioms of probability theory. It is shown that if the network is singly connected (e.g. tree-structured), then probabilities can be updated by local propagation in an isomorphic network of parallel and autonomous processors and that the impact of new information can be imparted to all propositions in time proportional to the longest path in the network. The second part of the paper deals with the problem of finding a tree-structured representation for a collection of probabilistically coupled propositions using auxiliary (dummy) variables, colloquially called "hidden causes." It is shown that if such a tree-structured representation exists, then it is possible to uniquely uncover the topology of the tree by observing pairwise dependencies among the available propositions (i.e., the leaves of the tree). The entire tree structure, including the strengths of all internal relationships, can be reconstructed in time proportional to n log n, where n is the number of leaves.
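The local-propagation scheme can be illustrated on the smallest interesting case, a chain of binary variables. The sketch below is a hypothetical toy, not Pearl's full algorithm: it propagates only the diagnostic (λ) messages for a single piece of evidence, and all probability values are invented.

```python
# Toy message passing on a chain A -> B -> C of binary variables.
# CPT numbers are invented for illustration.

P_A = {0: 0.7, 1: 0.3}                                     # prior P(A)
P_B_given_A = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}   # P(B|A): outer key A, inner key B
P_C_given_B = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}   # P(C|B)

def lambda_msg(cpt, lam_child):
    # diagnostic message from a child up to its parent:
    # lambda(parent) = sum over child values of P(child|parent) * lambda(child)
    return {p: sum(cpt[p][c] * lam_child[c] for c in (0, 1)) for p in (0, 1)}

lam_C = {0: 0.0, 1: 1.0}                # evidence: C observed to be 1
lam_B = lambda_msg(P_C_given_B, lam_C)  # message C -> B
lam_A = lambda_msg(P_B_given_A, lam_B)  # message B -> A

unnorm = {a: P_A[a] * lam_A[a] for a in (0, 1)}
Z = sum(unnorm.values())
posterior_A = {a: unnorm[a] / Z for a in (0, 1)}   # P(A | C = 1)
```

Each update uses only a node's own conditional probability table and its neighbor's message, so the work per processor is local and the total propagation time grows with the path length, as the abstract states.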

2,266 citations

••

TL;DR: It is shown that, given a setting in which purposeful dialogues occur, this model of cooperative behavior can account for responses that provide more information than explicitly requested and for appropriate responses to both short sentence fragments and indirect speech acts.

Abstract: This paper describes a model of cooperative behavior and describes how such a model can be applied in a natural language understanding system. We assume that agents attempt to recognize the plans of other agents and then use these plans when deciding what response to make. In particular, we show that, given a setting in which purposeful dialogues occur, this model can account for responses that provide more information than explicitly requested and for appropriate responses to both short sentence fragments and indirect speech acts.

735 citations

••

TL;DR: New algorithms for arc and path consistency are presented and it is shown that the arc consistency algorithm is optimal in time complexity and of the same-order space complexity as the earlier algorithms.

Abstract: Mackworth and Freuder have analyzed the time complexity of several constraint satisfaction algorithms [5]. We present here new algorithms for arc and path consistency and show that the arc consistency algorithm is optimal in time complexity and of the same-order space complexity as the earlier algorithms. A refined solution for the path consistency problem is proposed. However, the space complexity of the path consistency algorithm makes it practicable only for small problems. These algorithms are the result of the synthesis techniques used in Image (a general constraint satisfaction system) and local consistency methods [3].
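For readers unfamiliar with arc consistency, the following sketch implements the older AC-3 procedure, not the paper's optimal algorithm (AC-4, which tracks value supports explicitly); the toy constraint problem is invented.

```python
from collections import deque

def revise(domains, constraints, x, y):
    """Delete values of x that have no supporting value of y."""
    allowed = constraints[(x, y)]
    removed = False
    for vx in list(domains[x]):
        if not any(allowed(vx, vy) for vy in domains[y]):
            domains[x].discard(vx)
            removed = True
    return removed

def ac3(domains, constraints):
    queue = deque(constraints)                 # all arcs (x, y)
    while queue:
        x, y = queue.popleft()
        if revise(domains, constraints, x, y):
            if not domains[x]:
                return False                   # domain wipe-out: inconsistent
            queue.extend((a, b) for (a, b) in constraints
                         if b == x and a != y) # re-examine arcs into x
    return True

# toy problem: X < Y with both domains {1, 2, 3}
domains = {'X': {1, 2, 3}, 'Y': {1, 2, 3}}
constraints = {('X', 'Y'): lambda vx, vy: vx < vy,
               ('Y', 'X'): lambda vy, vx: vx < vy}
ok = ac3(domains, constraints)                 # domains become X={1,2}, Y={2,3}
```

AC-3 may re-test a value's supports many times; AC-4 achieves the optimal time bound by recording, for each value, exactly which values support it.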

734 citations

••

TL;DR: This work presents a representation that has proven competent to accurately describe an extensive variety of natural forms, as well as man-made forms, in a succinct and natural manner, and shows that the primitive elements of such descriptions may be recovered in an overconstrained and therefore reliable manner.

Abstract: To support our reasoning abilities, perception must recover environmental regularities—e.g., rigidity, “objectness,” axes of symmetry—for later use by cognition. To create a theory of how our perceptual apparatus can produce meaningful cognitive primitives from an array of image intensities, we require a representation whose elements may be lawfully related to important physical regularities, and that correctly describes the perceptual organization people impose on the stimulus. Unfortunately, the representations that are currently available were originally developed for other purposes (e.g., physics, engineering) and have so far proven unsuitable for the problems of perception or common-sense reasoning. In answer to this problem we present a representation that has proven competent to accurately describe an extensive variety of natural forms (e.g., people, mountains, clouds, trees), as well as man-made forms, in a succinct and natural manner. The approach taken in this representational system is to describe scene structure at a scale that is similar to our naive perceptual notion of “a part,” by use of descriptions that reflect a possible formative history of the object, e.g., how the object might have been constructed from lumps of clay. For this representation to be useful it must be possible to recover such descriptions from image data; we show that the primitive elements of such descriptions may be recovered in an overconstrained and therefore reliable manner. We believe that this descriptive system makes an important contribution towards solving current problems in perceiving and reasoning about natural forms by allowing us to construct accurate descriptions that are extremely compact and that capture people's intuitive notions about the part structure of three-dimensional forms.

637 citations

••

PARC

TL;DR: GUS (Genial Understander System) is the first of a series of experimental computer systems intended to engage a sympathetic and highly cooperative human in an English dialog, directed towards a specific goal within a very restricted domain of discourse.

Abstract: GUS is the first of a series of experimental computer systems that we intend to construct as part of a program of research on language understanding. In large measure, these systems will fill the role of periodic progress reports, summarizing what we have learned, assessing the mutual coherence of the various lines of investigation we have been following, and suggesting where more emphasis is needed in future work. GUS (Genial Understander System) is intended to engage a sympathetic and highly cooperative human in an English dialog, directed towards a specific goal within a very restricted domain of discourse. As a starting point, GUS was restricted to the role of a travel agent in a conversation with a client who wants to make a simple return trip to a single city in California.
There is good reason for restricting the domain of discourse for a computer system which is to engage in an English dialog. Specializing the subject matter that the system can talk about permits it to achieve some measure of realism without encompassing all the possibilities of human knowledge or of the English language. It also provides the user with specific motivation for participating in the conversation, thus narrowing the range of expectations that GUS must have about the user's purposes. A system restricted in this way will be more able to guide the conversation within the boundaries of its competence.

366 citations

••

PARC

TL;DR: A set of concerns for interfacing with the ATMS, an interface protocol, and an example of a constraint language based on the protocol are presented; the paper concludes with a comparison of the ATMS, and the view of problem solving it entails, with other approaches.

Abstract: An assumption-based truth maintenance system provides a very general facility for all types of default reasoning. However, the ATMS is only one component of an overall reasoning system. This paper presents a set of concerns for interfacing with the ATMS, an interface protocol, and an example of a constraint language based on the protocol. The paper concludes with a comparison of the ATMS and the view of problem solving it entails with other approaches.

267 citations

••


PARC

TL;DR: This paper shows how the basic ATMS is extended to handle defaults and disjunctions of assumptions, which are used to encode disjunctions of nodes, nonmonotonic justifications, normal defaults, nonnormal defaults, and arbitrary propositional formulas.

Abstract: The basic assumption-based truth maintenance (ATMS) architecture provides a foundation for implementing various kinds of default reasoning. This paper shows how the basic ATMS is extended to handle defaults and disjunctions of assumptions. These extensions are used to encode disjunctions of nodes, nonmonotonic justifications, normal defaults, nonnormal defaults, and arbitrary propositional formulas.

257 citations

••

TL;DR: This chapter discusses knowledge-based systems (KBS): the idea is not just to construct systems that exhibit knowledge, but to represent that knowledge in the data structures of the program, and to have the system perform whatever it is doing by manipulating that knowledge explicitly.

Abstract: Publisher Summary This chapter discusses the knowledge-based systems or KBS. The usual picture of logic is that it involves an expressive language, coupled with a sound and complete inference regime. On closer examination, the idea of a KBS is not totally vacuous. The idea is not just to construct systems that exhibit knowledge, but to represent that knowledge somehow in the data structures of the program, and to have the system perform whatever it is doing—diagnosing diseases, controlling a power plant, explaining its behaviour, or whatever—by manipulating that knowledge explicitly. Knowledge-based systems need to apply large amounts of knowledge. It must be possible at some level to apply knowledge without first requiring more knowledge to be applied at a higher level. Certain forms of knowledge are inherently intractable, and cannot be fully applied within reasonable resource bounds. A special kind of knowledge can be fully applied. The application of knowledge can also be made computationally tractable by making it logically unsound and incomplete in a principled way.

244 citations

••

PARC

TL;DR: Responding to Iwasaki and Simon's criticism of de Kleer and Brown, this paper explores the relationship between causal ordering and the propagation of constraints upon which the methods of qualitative physics are based.

Abstract: This paper is a response to Iwasaki and Simon [14] which criticizes de Kleer and Brown [8]. We argue that many of their criticisms, particularly concerning causality, modeling and stability, originate from the difference of concerns between engineering and economics. Our notion of causality arises from considering the interconnections of components, not equations. When no feedback is present, the ordering produced by our qualitative physics is similar to theirs. However, when feedback is present, our qualitative physics determines a causal ordering around feedback loops as well. Causal ordering is a general technique not only applicable to qualitative reasoning. Therefore we also explore the relationship between causal ordering and propagation of constraints upon which the methods of qualitative physics are based.

168 citations

••

TL;DR: The 3D Mosaic system is a vision system that incrementally reconstructs complex 3D scenes from a sequence of images obtained from multiple viewpoints, and the various components of the system are described, including stereo analysis, monocular analysis, and constructing and updating the scene model.

Abstract: The 3D Mosaic system is a vision system that incrementally reconstructs complex 3D scenes from a sequence of images obtained from multiple viewpoints. The system encompasses several levels of the vision process, starting with images and ending with symbolic scene descriptions. This paper describes the various components of the system, including stereo analysis, monocular analysis, and constructing and updating the scene model. In addition, the representation of the scene model is described. This model is intended for tasks such as matching, display generation, planning paths through the scene, and making other decisions about the scene environment. Examples showing how the system is used to interpret complex aerial photographs of urban scenes are presented. Each view of the scene, which may be either a single image or a stereo pair, undergoes analysis which results in a 3D wire-frame description that represents portions of edges and vertices of objects. The model is a surface-based description constructed from the wire frames. With each successive view, the model is incrementally updated and gradually becomes more accurate and complete. Task-specific knowledge, involving block-shaped objects in an urban scene, is used to extract the wire frames and construct and update the model. The model is represented as a graph in terms of symbolic primitives such as faces, edges, vertices, and their topology and geometry. This permits the representation of partially complete, planar-faced objects. Because incremental modifications to the model must be easy to perform, the model contains mechanisms to (1) add primitives in a manner such that constraints on geometry imposed by these additions are propagated throughout the model, and (2) modify and delete primitives if discrepancies arise between newly derived and current information. The model also contains mechanisms that permit the generation, addition, and deletion of hypotheses for parts of the scene for which there is little data.

146 citations

••

TL;DR: This work addresses primarily certainty and employs censored production rules as an underlying representational and computational mechanism for handling trade-offs between the precision of inferences and the computational efficiency of deriving them.

Abstract: Variable precision logic is concerned with problems of reasoning with incomplete information and resource constraints. It offers mechanisms for handling trade-offs between the precision of inferences and the computational efficiency of deriving them. Two aspects of precision are the specificity of conclusions and the certainty of belief in them; we address primarily certainty and employ censored production rules as an underlying representational and computational mechanism. These censored production rules are created by augmenting ordinary production rules with an exception condition and are written in the form “if A then B unless C”, where C is the exception condition. From a control viewpoint, censored production rules are intended for situations in which the implication A ⇒ B holds frequently and the assertion C holds rarely. Systems using censored production rules are free to ignore the exception conditions when resources are tight. Given more time, the exception conditions are examined, lending credibility to high-speed answers or changing them. Such logical systems, therefore, exhibit variable certainty of conclusions, reflecting variable investment of computational resources in conducting reasoning. From a logical viewpoint, the unless operator between B and C acts as the exclusive-or operator. From an expository viewpoint, the “if A then B” part of censored production rule expresses important information (e.g., a causal relationship), while the “unless C” part acts only as a switch that changes the polarity of B to ¬B when C holds. Expositive properties are captured quantitatively by augmenting censored rules with two parameters that indicate the certainty of the implication “if A then B”. Parameter δ is the certainty when the truth value of C is unknown, and γ is the certainty when C is known to be false.
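A minimal sketch of how a censored rule might be evaluated under a resource switch; the rule, facts, and certainty values are all invented, and the certainty attached to a flipped conclusion is purely illustrative.

```python
class CensoredRule:
    """A censored production rule: "if A then B unless C"."""

    def __init__(self, premise, conclusion, censor, delta, gamma):
        self.premise, self.conclusion, self.censor = premise, conclusion, censor
        self.delta = delta   # certainty when the censor is left unexamined
        self.gamma = gamma   # certainty when the censor is known to be false

    def fire(self, facts, check_censor):
        if self.premise not in facts:
            return None
        if not check_censor:
            # resources tight: ignore the exception, answer fast
            return (self.conclusion, self.delta)
        if self.censor in facts:
            # exception holds: polarity of the conclusion flips
            return ("not " + self.conclusion, 1.0)   # certainty here is illustrative
        return (self.conclusion, self.gamma)

rule = CensoredRule("bird(x)", "flies(x)", "penguin(x)", delta=0.95, gamma=0.999)
fast = rule.fire({"bird(x)"}, check_censor=False)                # ('flies(x)', 0.95)
slow = rule.fire({"bird(x)", "penguin(x)"}, check_censor=True)   # ('not flies(x)', 1.0)
```

The two calls show the trade-off the abstract describes: the fast answer carries certainty δ, while spending resources on the censor either raises certainty to γ or reverses the conclusion.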

••

TL;DR: The proposed method might offer a plausible cognitive model of classification processes as well as an engineering solution to the problems of automatic classification generation.

Abstract: Conceptual clustering is concerned with problems of grouping observed entities into conceptually simple classes. Earlier work on this subject assumed that the entities and classes are described in terms of a priori given multi-valued attributes. This research extends the previous work in three major ways:

- entities are characterized as compound objects requiring structural descriptions;
- relevant descriptive concepts (attributes and relations) are not necessarily given a priori but can be determined through reasoning about the goals of classification;
- inference rules are used to derive useful high-level descriptive concepts from the initially provided low-level concepts.

The created classes are described using Annotated Predicate Calculus (APC), which is a typed predicate calculus with additional operators. Relevant descriptive concepts appropriate for characterizing entities are determined by tracing links in a Goal Dependency Network (GDN) that represents relationships between goals, subgoals, and related attributes. An experiment comparing results from the program CLUSTER/S that implements the classification generation process and results obtained from people indicates that the proposed method might offer a plausible cognitive model of classification processes as well as an engineering solution to the problems of automatic classification generation.

••

TL;DR: This paper concentrates on the practical aspects of a program transformation system being developed and describes the present performance of the system and outlines the techniques and heuristics used.

Abstract: This paper concentrates on the practical aspects of a program transformation system being developed. It describes the present performance of the system and outlines the techniques and heuristics used.

••


TL;DR: The fractal surface model [Pentland 83] provides a formalism that is competent to describe such natural 3-D surfaces and is able to predict human perceptual judgments of smoothness versus roughness — thus allowing the reliable application of shape estimation techniques that assume smoothness.

Abstract: Current shape-from-shading and shape-from-texture methods are applicable only to smooth surfaces, while real surfaces are often rough and crumpled. To extend such methods to real surfaces we must have a model that also applies to rough surfaces. The fractal surface model [6] provides a formalism that is competent to describe such natural 3-D surfaces and, in addition, is able to predict human perceptual judgments of smoothness versus roughness. We have used this model of natural surface shapes to derive a technique for 3-D shape estimation that treats shaded and textured surfaces in a unified manner.
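The fractal model itself is not reproduced here, but the flavor of fractal surface generation can be shown with one-dimensional midpoint displacement, a standard construction; the roughness parameter H and all values below are invented for illustration.

```python
import random

def fractal_profile(levels, H=0.8, scale=1.0, seed=0):
    """1-D midpoint displacement: returns 2**levels + 1 heights."""
    rng = random.Random(seed)
    pts = [0.0, 0.0]
    for _ in range(levels):
        nxt = []
        for a, b in zip(pts, pts[1:]):
            # displace each midpoint by Gaussian noise at the current scale
            nxt += [a, (a + b) / 2 + rng.gauss(0.0, scale)]
        nxt.append(pts[-1])
        pts = nxt
        scale *= 2.0 ** (-H)   # smaller H -> displacements shrink slower -> rougher profile
    return pts

profile = fractal_profile(6)   # 65 sample heights
```

The single parameter H controls perceived smoothness versus roughness across all scales, which is the property the fractal surface model exploits for perceptual prediction.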

••

TL;DR: A program which uses aggregation to perform causal simulation in the domain of molecular genetics and a detailed analysis of aggregation indicates the requirements and limitations of the technique as well as problems for future research.

Abstract: Aggregation is an abstraction technique for dynamically creating new descriptions of a system's behavior. Aggregation works by detecting repeating cycles of processes and creating a continuous process description of the cycle's behavior. Since this behavioral abstraction results in a continuous process, the powerful transition analysis technique may be applied to determine the system's final state. This paper reports on a program which uses aggregation to perform causal simulation in the domain of molecular genetics. A detailed analysis of aggregation indicates the requirements and limitations of the technique as well as problems for future research.

••

Yale University

TL;DR: PECOS is a system for implementing abstract algorithms by applying a set of about four hundred detailed rules, and has been applied in domains including simple symbolic programming, sorting, graph theory, and even simple number theory.

Abstract: Human programmers seem to know a lot about programming. This suggests a way to try to build automatic programming systems: encode this knowledge in some machine-usable form. In order to test the viability of this approach, knowledge about elementary symbolic programming has been codified into a set of about four hundred detailed rules, and a system, called PECOS, has been built for applying these rules to the task of implementing abstract algorithms. The implementation techniques covered by the rules include the representation of mappings as tables, sets of pairs, property list markings, and inverted mappings, as well as several techniques for enumerating the elements of a collection. The generality of the rules is suggested by the variety of domains in which PECOS has successfully implemented abstract algorithms, including simple symbolic programming, sorting, graph theory, and even simple number theory. In each case, PECOS's knowledge of different techniques enabled the construction of several alternative implementations. In addition, the rules can be used to explain such programming tricks as the use of property list markings to perform an intersection of two linked lists in linear time. Extrapolating from PECOS's knowledge-based approach and from three other approaches to automatic programming (deductive, transformational, high level language), the future of automatic programming seems to involve a changing role for deduction and a range of positions on the generality-power spectrum.

••

BBN Technologies

TL;DR: Assuming that beliefs are sentences of first-order logic stored in an agent's head, this paper builds a simple and intuitively clear formalism for reasoning about beliefs, applies it to the standard logical problems about belief, and uses it to describe the connections between belief and planning.

Abstract: If we assume that beliefs are sentences of first-order logic stored in an agent's head, we can build a simple and intuitively clear formalism for reasoning about beliefs. I apply this formalism to the standard logical problems about belief, and use it to describe the connections between belief and planning.

••

TL;DR: Etherington, Mercer and Reiter showed that circumscription cannot lead to inconsistency for universal formulas, and this result is extended in three directions: to formulas of a more general syntactic form, to circumscription with some predicate symbols allowed to vary, and to prioritized circumscription.

Abstract: Etherington, Mercer and Reiter showed, on the basis of ideas of Bossu and Siegel, that circumscription cannot lead to inconsistency for universal formulas. We extend this result in three directions: to formulas of a more general syntactic form, to circumscription with some predicate symbols allowed to vary, and to prioritized circumscription.

••

TL;DR: The source of the structural equations in the causal ordering approach is discussed, and the claim that there are inherent differences between the “engineer's” and the “economist's” approach to the study of system behavior is challenged.

Abstract: In their reply to our paper, “Causality in Device Behavior,” de Kleer and Brown seek to establish a clear product differentiation between the well-known concepts of causal ordering and comparative statics, on the one side, and their “mythical causality” and qualitative physics, on the other. Most of the differences they see, however, are invisible to our eyes. Contrary to their claim, the earlier notion of causality, quite as much as the later one, is qualitative and “derives from the relationship between the equations and their underlying components which comprise the modeled system.” The concepts of causal ordering and comparative statics offer the advantage of a formal foundation that makes clear exactly what is being postulated. Hence, they can contribute a great deal to the clarification of the causal approaches to system analysis that de Kleer and Brown are seeking to develop. In this brief response to their comments, we discuss the source of the structural equations in the causal ordering approach, and we challenge more generally the claim that there are inherent differences (e.g., in the case of feedback) between the “engineer's” and the “economist's” approach to the study of system behavior.

••

TL;DR: It is shown that DB∗ is consistent iff DB is consistent, and that if P is the set of all predicates from DB and DB does not contain functional symbols, then DB∗ coincides with Minker's GCWA.

Abstract: The aim of this paper is a modification of Minker's Generalized Closed World Assumption that would allow application of the “negation as failure rule” with respect to a set P of (not necessarily all) predicates of a database DB. A careful closure procedure is introduced which, when applied to a database DB, produces a new database DB∗ that is used to answer queries about predicates from DB. It is shown that DB∗ is consistent iff DB is consistent. If P is the set of all predicates from DB and DB does not contain functional symbols, then DB∗ coincides with Minker's GCWA. The soundness and completeness of the careful closure procedure with respect to a minimal model style semantics is shown. As an inference engine associated with DB∗ we propose a query evaluation procedure QEP∗, which is a combination of a method of splitting an indefinite database DB into a disjunction of Horn databases and Clark's query evaluation procedure QEP. Soundness of QEP∗ with respect to DB∗ is shown for a broad class of databases.

••

TL;DR: It is shown how knowledge of the properties of the relations involved and knowledge about the contents of the system's database can be used to prove that portions of a search space will not contribute any new answers.

Abstract: Loosely speaking, recursive inference occurs when an inference procedure generates an infinite sequence of similar subgoals. In general, the control of recursive inference involves demonstrating that recursive portions of a search space will not contribute any new answers to the problem beyond a certain level. We first review a well-known syntactic method for controlling repeating inference (inference where the conjuncts processed are instances of their ancestors), provide a proof that it is correct, and discuss the conditions under which the strategy is optimal. We also derive more powerful pruning theorems for cases involving transitivity axioms and cases involving subsumed subgoals. The treatment of repeating inference is followed by consideration of the more difficult problem of recursive inference that does not repeat. Here we show how knowledge of the properties of the relations involved and knowledge about the contents of the system's database can be used to prove that portions of a search space will not contribute any new answers.
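The ancestor test for repeating inference can be sketched as follows; for simplicity the sketch prunes a subgoal that is identical to an ancestor (a special case of the instance-of-an-ancestor test), on an invented cyclic graph where naive backward chaining would loop forever.

```python
edges = {("a", "b"), ("b", "c"), ("c", "a")}   # a cyclic graph
nodes = {"a", "b", "c"}

def reachable(x, y, ancestors=frozenset()):
    goal = (x, y)
    if goal in ancestors:          # subgoal repeats an ancestor: prune this branch
        return False
    if (x, y) in edges:            # base case: a direct edge
        return True
    # recursive case (transitivity): x reaches y via some intermediate z
    return any((x, z) in edges and reachable(z, y, ancestors | {goal})
               for z in nodes)

yes = reachable("a", "c")          # True: a -> b -> c
no = reachable("a", "d")           # False, and the search terminates despite the cycle
```

Without the ancestor check, the query for an unreachable node would recurse around the a → b → c → a cycle indefinitely; the pruned branches are exactly those that cannot contribute new answers.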

••

TL;DR: In this paper, the authors present a framework for using analysis and searching knowledge to guide program synthesis in a stepwise refinement paradigm, and a particular implementation of the framework, called LIBRA, is described.

Abstract: Efficiency is a problem in automatic programming—both in the programs produced and in the synthesis process itself. The efficiency problem arises because many target-language programs (which vary in their time and space performance) typically satisfy one abstract specification. This paper presents a framework for using analysis and searching knowledge to guide program synthesis in a stepwise refinement paradigm. A particular implementation of the framework, called LIBRA, is described. Given a program specification that includes size and frequency notes, the performance measure to be minimized, and some limits on synthesis resources, LIBRA selects algorithms and data representations and decides whether to use ‘optimizing’ transformations. By applying incremental, algebraic program analysis, explicit rules about plausible implementations, and resource allocation on the basis of decision importance, LIBRA has guided the automatic implementation of a number of programs in the domain of symbolic processing.

••

TL;DR: In this paper, the authors investigate the model theory of the notion of circumscription and find completeness theorems that provide a partial converse to a result of McCarthy, showing that the circumscriptive theorems are precisely the truths of the minimal models, in the case of various classes of theories, and for various versions of circumscription.

Abstract: We investigate the model theory of the notion of circumscription, and find completeness theorems that provide a partial converse to a result of McCarthy. We show that the circumscriptive theorems are precisely the truths of the minimal models, in the case of various classes of theories, and for various versions of circumscription. We also present an example of commonsense reasoning in which first-order circumscription does not achieve the intuitive and desired minimization.
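A propositional toy can make the minimal-model reading concrete; this is not first-order circumscription, just inclusion-minimization of an invented "ab" (abnormality) atom over the models of a tiny theory.

```python
from itertools import product

# Invented theory: bird holds, and (bird and not ab) implies flies.
atoms = ["bird", "ab", "flies"]

def satisfies(m):
    return m["bird"] and (not m["bird"] or m["ab"] or m["flies"])

# enumerate all truth assignments and keep the models of the theory
models = [m for vals in product([False, True], repeat=len(atoms))
          for m in [dict(zip(atoms, vals))] if satisfies(m)]

def ab_ext(m):
    # the extension of the minimized predicate: which "ab" atoms are true
    return frozenset(a for a in ("ab",) if m[a])

# keep only the models whose ab-extension is minimal under set inclusion
minimal = [m for m in models
           if not any(ab_ext(n) < ab_ext(m) for n in models)]
```

Here the theory has three models, but only one minimal model, in which ab is false and flies is true: the "truths of the minimal models" include flies even though the theory alone does not entail it.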

••

TL;DR: It is shown that, under the conditions assumed by Pednault et al., at most one of the items of evidence can alter the probability of any given hypothesis; thus, although updating is possible, multiple updating for any of the hypotheses is precluded.

Abstract: Duda, Hart, and Nilsson have set forth a method for rule-based inference systems to use in updating the probabilities of hypotheses on the basis of multiple items of new evidence. Pednault, Zucker, and Muresan claimed to give conditions under which independence assumptions made by Duda et al. preclude updating—that is, prevent the evidence from altering the probabilities of the hypotheses. Glymour refutes Pednault et al.'s claim with a counterexample of a rather special form (one item of evidence is incompatible with all but one of the hypotheses); he raises, but leaves open, the question whether their result would be true with an added assumption to rule out such special cases. We show that their result does not hold even with the added assumption, but that it can nevertheless be largely salvaged. Namely, under the conditions assumed by Pednault et al., at most one of the items of evidence can alter the probability of any given hypothesis; thus, although updating is possible, multiple updating for any of the hypotheses is precluded.
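The updating scheme at issue is the odds-likelihood rule of Duda, Hart, and Nilsson; the sketch below shows its mechanics under the usual conditional-independence assumption, with invented numbers (the point of the paper above is precisely that such independence assumptions sharply restrict when repeated multiplication of this kind is coherent).

```python
def to_odds(p):
    return p / (1.0 - p)

def to_prob(o):
    return o / (1.0 + o)

prior = 0.1                      # P(H), invented
likelihood_ratios = [4.0, 2.5]   # lambda_i = P(E_i | H) / P(E_i | not H), invented

odds = to_odds(prior)
for lam in likelihood_ratios:
    odds *= lam                  # each item of evidence multiplies the odds;
                                 # valid only under conditional independence

posterior = to_prob(odds)        # P(H | E_1, E_2) = 10/19, about 0.526
```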

••

TL;DR: A simple language for examples of how an algorithm behaves on particular input (a Computational Description Language) is presented, and an algorithm for the synthesis of a procedure from a set of such example computations is described.

Abstract: In this paper, examples of how an algorithm behaves on particular input are considered as possible means of describing the algorithm. In particular, a simple language for examples (a Computational Description Language) is presented and an algorithm for the synthesis of a procedure from a set of such example computations is described. The algorithm makes use of knowledge about variables, inputs, instructions and procedures during the synthesis process to guide the formation of a procedure. Several examples of procedures actually synthesized are discussed.

••

TL;DR: This program uses a series of processing stages which progressively transform an English input into a form usable by the computer simulation of paranoia, and performs aspects of traditional parsing which greatly facilitate the overall language recognition process.

Abstract: One of the major problems in natural language understanding by computer is the frequent use of patterned or idiomatic phrases in colloquial English dialogue. Traditional parsing methods typically cannot cope with a significant number of idioms. A more general problem is the tendency of a speaker to leave the meaning of an utterance ambiguous or partially implicit, to be filled in by the hearer from a shared mental context which includes linguistic, social, and physical knowledge. The appropriate representation for this knowledge is a formidable and unsolved problem. We present here an approach to natural language understanding which addresses itself to these problems. Our program uses a series of processing stages which progressively transform an English input into a form usable by our computer simulation of paranoia. Most of the processing stages involve matching the input to an appropriate stored pattern and performing the associated transformation. However, a few key stages perform aspects of traditional parsing which greatly facilitate the overall language recognition process.

••

TL;DR: A heuristic search strategy via islands is suggested to significantly decrease the number of nodes expanded; a modified algorithm ensures a suboptimal cost path (which may be optimal) and in extreme cases falls back to A∗.

Abstract: A heuristic search strategy via islands is suggested to significantly decrease the number of nodes expanded. Algorithm I, which searches through a set of island nodes (“island set”), is presented assuming that the island set contains at least one node on an optimal cost path. This algorithm is shown to be admissible and expands no more nodes than A∗. For cases where the island set does not contain an optimal cost path (or any path), Algorithm I', a modification of Algorithm I, is suggested. This algorithm ensures a suboptimal cost path (which may be optimal) and in extreme cases falls back to A∗.
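The island idea can be illustrated in simplified form (this is our sketch, not Algorithm I itself; the graph encoding and the use of Dijkstra in place of A∗ are assumptions for brevity): route the search through the best candidate island node, and fall back to a direct search when no island lies on a finite-cost path, in the spirit of Algorithm I'.

```python
import heapq

def dijkstra(graph, src, dst):
    """Shortest-path cost from src to dst in a weighted digraph
    given as a dict of {node: {neighbor: edge_cost}}."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

def island_search(graph, start, goal, islands):
    """Search start -> island -> goal for each island, keep the best,
    and fall back to a direct search if the islands fail."""
    via_islands = min((dijkstra(graph, start, i) + dijkstra(graph, i, goal)
                       for i in islands), default=float("inf"))
    return min(via_islands, dijkstra(graph, start, goal))
```

The payoff in the paper comes from the two subsearches being much smaller than the single global search; when the island set really does contain a node on an optimal path, no optimality is lost.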

••

TL;DR: It was found that a one-page description of two common sorting algorithms or of some common approximation problems was sufficient for the computer to understand and analyze a wide variety of programs and identify and describe almost all errors.

Abstract: In order to examine the possibilities of using a computer as an aid to teaching programming, a prototype intelligent program analyzer has been constructed. Its design assumes that a system cannot analyze a program unless it can "understand" it; understanding being based on a knowledge of what must be accomplished and how code is used to express the intentions.
It was found that a one-page description of two common sorting algorithms or of some common approximation problems was sufficient for the computer to understand and analyze a wide variety of programs and identify and describe almost all errors.
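A heavily simplified sketch of the analyzer's outer loop (our illustration only; the prototype reasoned about code intentions rather than running test cases, and `analyze`, `buggy_sort`, and the reference behavior are hypothetical names introduced here): compare a student program against the intended behavior and describe the first discrepancy found.

```python
def analyze(student_sort, cases):
    """Check a student's sort against the intended behavior and
    report the first error found, in the spirit of 'identify and
    describe almost all errors'."""
    for xs in cases:
        got, want = student_sort(list(xs)), sorted(xs)
        if got != want:
            return f"on input {xs}: expected {want}, got {got}"
    return "no errors found"

def buggy_sort(xs):
    """A typical student bug: the inner loop stops one element early,
    so the final comparison of each pass never happens."""
    for i in range(len(xs) - 1):
        for j in range(len(xs) - i - 2):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs
```

The actual system went further than input/output comparison: it used its one-page algorithm description to explain *why* the code failed to realize the intended plan, not merely that it did.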

••

Kyoto University

TL;DR: A general framework is provided, within which various conventional procedures including alpha-beta and SSS∗ can be naturally generalized to the informed model, which permits the usage of heuristic information pertaining to nonterminal nodes.

Abstract: Search procedures, such as alpha-beta and SSS∗, are used to solve minimax game trees. With the notable exception of B∗, most of these procedures assume the static model, i.e., the computation is done solely on the basis of static values given to terminal nodes. The first goal of this paper is to generalize these to the informed model, which permits the usage of heuristic information pertaining to nonterminal nodes, such as upper and lower bounds on, and estimates of, the exact values realizable from the corresponding game positions. We provide a general framework, within which various conventional procedures including alpha-beta and SSS∗ can be naturally generalized to the informed model. For the static model, it is known that SSS∗ surpasses alpha-beta in the sense that it explores only a subset of the nodes which are explored by alpha-beta. The second goal of this paper is, assuming the informed model, to develop a precise characterization of the class of search procedures that surpass alpha-beta. It turns out that the class contains many search procedures other than SSS∗ (even for the static model). Finally some computational comparison among these search procedures is made by solving the 4 × 4 Othello game.
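The informed model's key mechanism can be sketched as alpha-beta augmented with per-node heuristic bounds (a minimal illustration under our own encoding, not the paper's framework: the tree is given as explicit dictionaries, and `bounds` maps nonterminal nodes to assumed (lower, upper) bounds on their exact minimax values). Bounds allow a node to be cut off before any of its children are expanded:

```python
def alphabeta(node, alpha, beta, bounds, children, value, maximizing=True):
    """Alpha-beta over an explicit game tree. `bounds` holds heuristic
    (lower, upper) bounds on nonterminal values (the informed model);
    nodes absent from `bounds` carry no information."""
    lo, hi = bounds.get(node, (-float("inf"), float("inf")))
    # Informed-model cutoffs: prune before expanding any children.
    if lo >= beta:
        return lo
    if hi <= alpha:
        return hi
    kids = children.get(node, [])
    if not kids:                    # terminal node: static value
        return value[node]
    if maximizing:
        best = -float("inf")
        for c in kids:
            best = max(best, alphabeta(c, alpha, beta, bounds,
                                       children, value, False))
            alpha = max(alpha, best)
            if alpha >= beta:       # ordinary alpha-beta cutoff
                break
        return best
    else:
        best = float("inf")
        for c in kids:
            best = min(best, alphabeta(c, alpha, beta, bounds,
                                       children, value, True))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best
```

With empty `bounds` this is exactly static-model alpha-beta; supplying an upper bound below the current alpha for a nonterminal node lets the search discard that subtree unexpanded, which is the extra pruning power the informed model buys.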

••

TL;DR: In this article, a computer program for modelling a child between the ages of 1 and 2 years is described, which is based on observations of the knowledge this child had at age 1, the comprehension abilities he had at 2, and the language experiences he had between these ages.

Abstract: A computer program modelling a child between the ages of 1 and 2 years is described. This program is based on observations of the knowledge this child had at age 1, the comprehension abilities he had at age 2, and the language experiences he had between these ages. The computer program described begins at the age 1 level, is given similar language experiences, and uses inference and learning rules to acquire comprehension at the age 2 level.