Showing papers in "Artificial Intelligence in 1983"
••
TL;DR: A framework called the contract net is presented that specifies communication and control in a distributed problem solver, and comparisons with PLANNER, CONNIVER, HEARSAY-II, and PUP6 are used to demonstrate that negotiation is a natural extension to the transfer-of-control mechanisms used in earlier problem-solving systems.
1,305 citations
••
TL;DR: The XPLAIN system uses an automatic programmer to generate a consulting program by refinement from abstract goals and provides justifications of the code by examining the refinement structure created by the automatic programmer.
383 citations
••
TL;DR: Two constructive methods for determining the spatial orientation of curves and surfaces appearing in an image by 'backprojection' of two intrinsic properties of contours: angle magnitude and curvature are presented.
354 citations
••
TL;DR: The EURISKO program embodies this language, and it is described in this paper, along with its results in eight task domains: design of naval fleets, elementary set theory and number theory, LISP programming, biological evolution, games in general, the design of three-dimensional VLSI devices, the discovery of heuristics which help the system discover heuristics, and the discovery of appropriate new types of 'slots' in each domain.
258 citations
••
TL;DR: A system that uses a representation of prototypical knowledge to guide computer consultations, and to focus the application of production rules used to represent inferential knowledge in the domain is presented.
191 citations
••
TL;DR: It is proved that there is no major advantage in using search rearrangement backtracking outside of the difficult region, and it is conjectured that search rearrangement backtracking is exponentially faster than ordinary backtracking for nearly all of the difficult region.
180 citations
••
TL;DR: Expected-complexity analyses for the traditional backtrack algorithm as well as for two more recent algorithms that have been found empirically to be significant improvements: forward checking and word-wise forward checking are carried out.
171 citations
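The forward-checking idea analyzed in the paper above can be sketched in a few lines. This is a minimal illustrative Python implementation (not the paper's own code) of chronological backtracking with forward checking on a toy graph-colouring constraint-satisfaction problem: after each assignment, the assigned value is pruned from the domains of unassigned neighbours, and the branch is abandoned as soon as any domain empties. All names here (`forward_checking_search`, the example graph) are invented for the sketch.

```python
def forward_checking_search(neighbors, domains, assignment=None):
    """Return a complete consistent assignment, or None if none exists."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(domains):
        return assignment
    # Pick the next unassigned variable (fixed order, for simplicity).
    var = next(v for v in domains if v not in assignment)
    for value in domains[var]:
        # Forward check: prune `value` from each unassigned neighbour's domain.
        pruned = {n: [x for x in domains[n] if x != value]
                  for n in neighbors[var] if n not in assignment}
        if any(len(d) == 0 for d in pruned.values()):
            continue  # a domain emptied: fail early, try the next value
        result = forward_checking_search(
            neighbors, {**domains, **pruned}, {**assignment, var: value})
        if result is not None:
            return result
    return None

# Usage: 3-colour a triangle plus one pendant vertex.
neighbors = {'a': ['b', 'c'], 'b': ['a', 'c'],
             'c': ['a', 'b', 'd'], 'd': ['c']}
domains = {v: ['r', 'g', 'b'] for v in neighbors}
solution = forward_checking_search(neighbors, domains)
```

The point of the early-failure test is exactly what the expected-complexity analyses quantify: work spent pruning domains is repaid by branches never explored.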
••
TL;DR: This paper presents a framework for problem solving in AI based on postulating a system of logic that allows new statements to be deduced from axioms and previously deduced statements; examples of this system include a simple theorem-proving program, among others.
146 citations
••
TL;DR: SEEK, as discussed by the authors, is a system that provides a unified design framework for building and empirically verifying an expert system knowledge base, using case experience, in the form of stored cases with known conclusions, to interactively guide the expert in refining the rules of a model.
144 citations
••
TL;DR: This paper examines and extends the existing search methods, and reports on empirical performance studies on trees with useful size and ordering properties that are strongly ordered, i.e., similar to those produced by many current game-playing programs.
125 citations
••
TL;DR: An extension of the alpha-beta tree pruning strategy to game trees with 'probability' nodes, whose values are defined as the (possibly weighted) average of their successors' values, is developed.
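The 'probability' nodes described above can be illustrated with a minimal Python sketch (not the paper's algorithm) that evaluates a game tree mixing MAX, MIN, and chance nodes, where a chance node's value is the weighted average of its successors' values. The pruning extension itself requires bounds on leaf values and is omitted here; this only shows the node semantics the pruning strategy targets. The tuple encoding is an assumption of the sketch.

```python
def evaluate(node):
    """Evaluate a tree of ('leaf', v), ('max', [...]), ('min', [...]),
    and ('chance', [(p, child), ...]) nodes."""
    kind = node[0]
    if kind == 'leaf':
        return node[1]
    if kind == 'max':
        return max(evaluate(c) for c in node[1])
    if kind == 'min':
        return min(evaluate(c) for c in node[1])
    if kind == 'chance':
        # Value of a chance node: probability-weighted average of successors.
        return sum(p * evaluate(c) for p, c in node[1])
    raise ValueError(f"unknown node kind: {kind}")

# Usage: a MAX root choosing between a sure 3 and a fair coin flip
# between 10 and -2 (expected value 4), so the root prefers the gamble.
tree = ('max', [
    ('leaf', 3),
    ('chance', [(0.5, ('leaf', 10)), (0.5, ('leaf', -2))]),
])
value = evaluate(tree)
```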
••
TL;DR: The accretion model of theory formation is presented, and many examples of its use in producing new discoveries in various fields are given, drawn from runs of a program called EURISKO, the successor to AM, that embodies the accretion model and uses a corpus of heuristics to guide its behavior.
••
TL;DR: A new basis for state-space learning systems is described which centers on a performance measure localized in feature space, and despite the absence of any objective function the parameter vector is locally optimal.
••
TL;DR: It is shown that whenever the evaluation function satisfies certain properties, pathology will occur on any game tree of high enough constant branching factor, and an investigation is made of a possible cure for pathology: a probabilistic decision procedure which does not use minimaxing.
••
TL;DR: This report focuses on how the BORIS program handles a complex story involving a divorce.
••
TL;DR: It is shown that for p > 1/2, every algorithm which guarantees finding an exact cheapest path, or even a path within a fixed cost ratio of the cheapest, must run in exponential average time.
••
TL;DR: The utility of the general formulation of B&B is illustrated by showing that through it some apparently very different algorithms for searching And/Or trees reveal the specific nature of their similarities and differences.
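The general branch-and-bound (B&B) formulation referred to above can be sketched as follows: keep a pool of subproblems with lower bounds, repeatedly expand the most promising one, and prune any subproblem whose bound cannot beat the incumbent. This minimal Python sketch (an illustration, not the paper's formulation) instantiates the schema on the toy problem of finding the cheapest root-to-leaf path in an explicit tree, where the path cost so far is a valid lower bound because edge costs are non-negative. The names `branch_and_bound`, `children`, and `costs` are invented for the example.

```python
import heapq

def branch_and_bound(children, costs, root):
    """Cheapest root-to-leaf path via best-first branch and bound."""
    best_cost, best_path = float('inf'), None
    pool = [(0, [root])]  # pool of subproblems: (lower bound, partial path)
    while pool:
        bound, path = heapq.heappop(pool)
        if bound >= best_cost:
            continue  # prune: this subproblem cannot improve the incumbent
        node = path[-1]
        kids = children.get(node, [])
        if not kids:
            best_cost, best_path = bound, path  # leaf: new incumbent
            continue
        for kid in kids:  # branch: split into one subproblem per child
            heapq.heappush(pool, (bound + costs[(node, kid)], path + [kid]))
    return best_cost, best_path

# Usage: leaves c, d, e have path costs 6, 3, 5; the search returns d's path.
children = {'r': ['a', 'b'], 'a': ['c', 'd'], 'b': ['e']}
costs = {('r', 'a'): 1, ('r', 'b'): 4,
         ('a', 'c'): 5, ('a', 'd'): 2, ('b', 'e'): 1}
cost, path = branch_and_bound(children, costs, 'r')
```

Different choices of branching rule, bound, and pool discipline recover the "apparently very different" algorithms the paper unifies.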
••
[...]
TL;DR: This paper describes an experiment in extending the parsing methods used by the Integrated Partial Parser, a computer system designed to read and generalize from large numbers of news stories.
••
TL;DR: This paper characterizes the nature of minimax distortion, quantifies its magnitude, determines the conditions under which its damage is curtailed, and explains why search-depth pathology (the extreme manifestation of minimax distortion) is rarely observed in common games.
••
TL;DR: A∗ is shown to make much greater use of its heuristic knowledge than a backtracking procedure would under similar conditions, and a necessary and sufficient condition for maintaining polynomial search complexity is that A∗ be guided by heuristics with logarithmic precision.
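For reference, the A∗ procedure analyzed above expands nodes in order of f = g + h, where g is the cost so far and h the heuristic estimate whose precision the complexity result concerns. The following is a minimal Python sketch of A∗ on an explicit graph (an illustration, not the paper's formulation); the graph, heuristic, and function name are invented for the example.

```python
import heapq

def a_star(graph, h, start, goal):
    """graph: node -> list of (neighbor, edge_cost); h: node -> estimate.
    Returns (cost, path) to goal, or None if unreachable."""
    frontier = [(h(start), 0, start, [start])]  # ordered by f = g + h
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nbr, edge_cost in graph.get(node, []):
            g2 = g + edge_cost
            if g2 < best_g.get(nbr, float('inf')):
                best_g[nbr] = g2  # better route to nbr found
                heapq.heappush(frontier, (g2 + h(nbr), g2, nbr, path + [nbr]))
    return None

# Usage: the direct-looking route s->a->g costs 6, but s->b->g costs 5.
graph = {'s': [('a', 1), ('b', 4)], 'a': [('g', 5)], 'b': [('g', 1)]}
cost, path = a_star(graph, lambda n: 0, 's', 'g')  # h ≡ 0 is admissible
```

With h ≡ 0 this degenerates to uniform-cost search; the paper's point is that only a sufficiently precise (logarithmically precise) h keeps the number of expansions polynomial.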
••
TL;DR: Numerical comparison of the expected complexities of the two algorithms is finally carried out over a wide spectrum of search depths and branching degrees and shows that the savings in the number of positions evaluated by SSS* relative to that of α-β is rather limited and is not enough to offset the increase in other computational resources.
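The α-β baseline against which SSS* is compared above can be sketched in a few lines of Python (an illustration, not either paper's code): a cutoff abandons the remaining children of a node as soon as the window [alpha, beta] closes, which is the source of the position-count savings both algorithms compete on.

```python
def alphabeta(node, alpha, beta, maximizing):
    """Evaluate a game tree given as nested lists with numeric leaves."""
    if isinstance(node, (int, float)):
        return node  # leaf: static evaluation
    if maximizing:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # cutoff: remaining siblings cannot affect the result
        return value
    value = float('inf')
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break  # cutoff at a MIN node
    return value

# Usage: a depth-2 tree; the leaf 9 is cut off and never evaluated.
tree = [[3, 5], [2, 9], [0, 1]]
value = alphabeta(tree, float('-inf'), float('inf'), True)
```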
••
TL;DR: Algorithms for performing strategic search using both deterministic and non-deterministic (multiple) strategies are examined and some examples are given which indicate that strategic search can outperform standard heuristic search methods.
••
TL;DR: The vertex partitioning method using the 2-superline graph transform preceding the squeeze tree search is so powerful that for all the graphs in the catalog it produces the automorphism partition, thereby making the tree search nothing more than a verification that the initial partition is indeed the automorphism partition.
••
TL;DR: The entire project is presented as illustrative of the nature and complexity of the text analysis process, rather than as providing definitive or optimal solutions to any aspects of the task.
••
TL;DR: This paper presents a quantitative study of the algorithm, deriving estimates for its efficiency based on the scoring scheme suggested by Newborn, and concludes from these estimates how the algorithm should be improved.