
Showing papers in "IEEE Transactions on Pattern Analysis and Machine Intelligence in 1985"


Journal ArticleDOI
TL;DR: For multiplicative noise, the adaptive noise smoothing filter amounts to a systematic derivation of Lee's algorithm with extensions that allow different estimators for the local image variance, and the derivation extends easily to various other types of signal-dependent noise.
Abstract: In this paper, we consider the restoration of images with signal-dependent noise. The filter is noise smoothing and adapts to local changes in image statistics based on a nonstationary mean, nonstationary variance (NMNV) image model. For images degraded by a class of uncorrelated, signal-dependent noise without blur, the adaptive noise smoothing filter becomes a point processor and is similar to Lee's local statistics algorithm [16]. The filter is able to adapt itself to the nonstationary local image statistics in the presence of different types of signal-dependent noise. For multiplicative noise, the adaptive noise smoothing filter is a systematic derivation of Lee's algorithm with some extensions that allow different estimators for the local image variance. The advantage of the derivation is its easy extension to deal with various types of signal-dependent noise. Film-grain and Poisson signal-dependent restoration problems are also considered as examples. All the nonstationary image statistical parameters needed for the filter can be estimated from the noisy image and no a priori information about the original image is required.

1,475 citations
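
A minimal sketch of the local-statistics filtering described above, assuming signal-independent additive noise with a known constant variance for simplicity; the window size and noise variance are illustrative, and the paper's full NMNV treatment of signal-dependent noise is not reproduced.

```python
# Lee-style local-statistics smoothing: the point-processor form that adaptive
# noise smoothing reduces to when there is no blur.
import numpy as np
from scipy.ndimage import uniform_filter

def local_statistics_filter(noisy, noise_var, win=7):
    mean = uniform_filter(noisy, win)                  # nonstationary local mean
    mean_sq = uniform_filter(noisy * noisy, win)
    var = np.maximum(mean_sq - mean * mean, 0.0)       # nonstationary local variance
    signal_var = np.maximum(var - noise_var, 0.0)      # estimated noise-free variance
    gain = signal_var / np.maximum(var, 1e-12)         # ~0 in flat areas, ~1 near edges
    return mean + gain * (noisy - mean)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.zeros((64, 64)); clean[:, 32:] = 100.0  # step-edge test image
    noisy = clean + rng.normal(0.0, 10.0, clean.shape)
    restored = local_statistics_filter(noisy, noise_var=100.0)
    print("noisy RMSE:", np.sqrt(np.mean((noisy - clean) ** 2)))
    print("filtered RMSE:", np.sqrt(np.mean((restored - clean) ** 2)))
```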


Journal ArticleDOI
TL;DR: This paper presents a stereo matching algorithm based on dynamic programming that uses edge-delimited intervals as the elements to be matched and employs two cooperating searches: an inter-scanline search for possible correspondences of connected edges in the right and left images, and an intra-scanline search for correspondences of edge-delimited intervals on each scanline pair.
Abstract: This paper presents a stereo matching algorithm using the dynamic programming technique. The stereo matching problem, that is, obtaining a correspondence between right and left images, can be cast as a search problem. When a pair of stereo images is rectified, pairs of corresponding points can be searched for within the same scanlines. We call this search intra-scanline search. This intra-scanline search can be treated as the problem of finding a matching path on a two-dimensional (2D) search plane whose axes are the right and left scanlines. Vertically connected edges in the images provide consistency constraints across the 2D search planes. Inter-scanline search in a three-dimensional (3D) search space, which is a stack of the 2D search planes, is needed to utilize this constraint. Our stereo matching algorithm uses edge-delimited intervals as elements to be matched, and employs the above mentioned two searches: one is inter-scanline search for possible correspondences of connected edges in right and left images and the other is intra-scanline search for correspondences of edge-delimited intervals on each scanline pair. Dynamic programming is used for both searches which proceed simultaneously: the former supplies the consistency constraint to the latter while the latter supplies the matching score to the former. An interval-based similarity metric is used to compute the score. The algorithm has been tested with different types of images including urban aerial images, synthesized images, and block scenes, and its computational requirement has been discussed.

913 citations
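
A small sketch of the intra-scanline part only: a dynamic-programming alignment of edge-delimited intervals on one scanline pair. The interval feature, the dissimilarity cost, and the occlusion penalty are illustrative, and the paper's inter-scanline consistency search is omitted.

```python
import numpy as np

def match_scanline(left_intervals, right_intervals, occlusion_cost=2.0):
    # intervals are summarized here by a single feature value (e.g. mean intensity)
    n, m = len(left_intervals), len(right_intervals)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, :] = np.arange(m + 1) * occlusion_cost
    D[:, 0] = np.arange(n + 1) * occlusion_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dissim = abs(left_intervals[i - 1] - right_intervals[j - 1])
            D[i, j] = min(D[i - 1, j - 1] + dissim,      # match the two intervals
                          D[i - 1, j] + occlusion_cost,  # interval occluded in right image
                          D[i, j - 1] + occlusion_cost)  # interval occluded in left image
    return D[n, m]

# toy scanline pair: three edge-delimited intervals each
print(match_scanline([10.0, 80.0, 30.0], [11.0, 78.0, 29.0]))
```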


Journal ArticleDOI
TL;DR: A new approach for the interpretation of optical flow fields is presented, where the flow field is partitioned into connected segments of flow vectors, where each segment is consistent with a rigid motion of a roughly planar surface.
Abstract: A new approach for the interpretation of optical flow fields is presented. The flow field, which can be produced by a sensor moving through an environment with several independently moving, rigid objects, is allowed to be sparse, noisy, and partially incorrect. The approach is based on two main stages. In the first stage, the flow field is partitioned into connected segments of flow vectors, where each segment is consistent with a rigid motion of a roughly planar surface. In the second stage, segments are grouped under the hypothesis that they are induced by a single, rigidly moving object. Each hypothesis is tested by searching for three-dimensional (3-D) motion parameters which are compatible with all the segments in the corresponding group. Once the motion parameters are recovered, the relative environmental depth can be estimated as well. Experiments based on real and simulated data are presented.

902 citations
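
A sketch of the first-stage consistency idea: fit the flow of a rigidly moving, roughly planar surface to a candidate segment by least squares and use the residual to accept or reject the segment. The 8-parameter quadratic flow used below is one common parameterization of planar-surface motion; the grouping stage and the recovery of 3-D motion parameters are not shown.

```python
import numpy as np

def fit_planar_flow(x, y, u, v):
    z, one = np.zeros_like(x), np.ones_like(x)
    # u = a1 + a2*x + a3*y + a7*x^2 + a8*x*y
    # v = a4 + a5*x + a6*y + a7*x*y + a8*y^2
    A_u = np.stack([one, x, y, z, z, z, x * x, x * y], axis=1)
    A_v = np.stack([z, z, z, one, x, y, x * y, y * y], axis=1)
    A = np.vstack([A_u, A_v])
    b = np.concatenate([u, v])
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    residual = np.sqrt(np.mean((A @ params - b) ** 2))
    return params, residual

rng = np.random.default_rng(1)
x, y = rng.uniform(-1, 1, 50), rng.uniform(-1, 1, 50)
u = 0.2 + 0.1 * x - 0.05 * y        # synthetic flow consistent with one planar motion
v = -0.1 + 0.03 * x + 0.08 * y
params, residual = fit_planar_flow(x, y, u + rng.normal(0, 1e-3, 50),
                                   v + rng.normal(0, 1e-3, 50))
print("RMS residual:", residual)    # small residual => segment consistent with the model
```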


Journal ArticleDOI
TL;DR: A version of the Marr-Poggio-Grimson stereo algorithm that embodies modifications suggested by psychophysical and computational experiments is presented, and its performance is illustrated on a series of natural images.
Abstract: Computational models of the human stereo system can provide insight into general information processing constraints that apply to any stereo system, either artificial or biological. In 1977 Marr and Poggio proposed one such computational model, which was characterized as matching certain feature points in difference-of-Gaussian filtered images and using the information obtained by matching coarser resolution representations to restrict the search space for matching finer resolution representations. An implementation of the algorithm and its testing on a range of images was reported in 1980. Since then a number of psychophysical experiments have suggested possible refinements to the model and modifications to the algorithm. As well, recent computational experiments applying the algorithm to a variety of natural images, especially aerial photographs, have led to a number of modifications. In this paper, we present a version of the Marr-Poggio-Grimson algorithm that embodies these modifications, and we illustrate its performance on a series of natural images.

601 citations
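
A sketch of the feature stage referred to above: difference-of-Gaussian filtering and zero-crossing extraction at two scales. The filter sizes are illustrative, and the coarse-to-fine matcher itself is only indicated by the closing comment.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def zero_crossings(img, sigma):
    dog = gaussian_filter(img, sigma) - gaussian_filter(img, 1.6 * sigma)
    sign = dog > 0
    zc = np.zeros_like(sign)
    zc[:, 1:] |= sign[:, 1:] != sign[:, :-1]   # horizontal sign changes
    zc[1:, :] |= sign[1:, :] != sign[:-1, :]   # vertical sign changes
    return zc

rng = np.random.default_rng(2)
image = rng.normal(size=(128, 128)).cumsum(axis=1)   # synthetic textured image
coarse = zero_crossings(image, sigma=4.0)            # coarse channel
fine = zero_crossings(image, sigma=1.0)              # fine channel
# A coarse-to-fine matcher would match the coarse features first and use the
# resulting disparities to restrict the search range when matching the fine ones.
print(coarse.sum(), fine.sum())
```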


Journal ArticleDOI
TL;DR: In this article, the authors describe the organization of a rule-based system, SPAM, that uses map and domain-specific knowledge to interpret airport scenes, and the results of the system's analysis are characterized by the labeling of individual regions in the image and the collection of these regions into consistent interpretations of the major components of an airport model.
Abstract: In this paper, we describe the organization of a rule-based system, SPAM, that uses map and domain-specific knowledge to interpret airport scenes. This research investigates the use of a rule-based system for the control of image processing and interpretation of results with respect to a world model, as well as the representation of the world model within an image/map database. We present results on the interpretation of a high-resolution airport scene where the image segmentation has been performed by a human, and by a region-based image segmentation program. The results of the system's analysis are characterized by the labeling of individual regions in the image and the collection of these regions into consistent interpretations of the major components of an airport model. These interpretations are ranked on the basis of their overall spatial and structural consistency. Some evaluations based on the results from three evolutionary versions of SPAM are presented.

420 citations


Journal ArticleDOI
TL;DR: A general-purpose performance measurement scheme for image segmentation algorithms is introduced; performance parameters that function in real time distinguish the method from previous approaches that depended on a priori knowledge of the correct segmentation.
Abstract: This paper introduces a general purpose performance measurement scheme for image segmentation algorithms. Performance parameters that function in real time distinguish this method from previous approaches that depended on a priori knowledge of the correct segmentation. A low level, context independent definition of segmentation is used to obtain a set of optimization criteria for evaluating performance. Uniformity within each region and contrast between adjacent regions serve as parameters for region analysis. Contrast across lines and connectivity between them represent measures for line analysis. Texture is depicted by the introduction of focus of attention areas as groups of regions and lines. The performance parameters are then measured separately for each area. The usefulness of this approach lies in the ability to adjust the strategy of a system according to the varying characteristics of different areas. This feedback path provides the means for more efficient and error-free processing. Results from areas with dissimilar properties show a diversity in the measurements that is utilized for dynamic strategy setting.

384 citations
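
A sketch of two of the region-analysis parameters described above: intra-region uniformity based on gray-level variance, and contrast between adjacent regions. The normalizations are illustrative rather than the paper's exact formulas, and the line measures and focus-of-attention areas are omitted.

```python
import numpy as np

def region_uniformity(image, labels, label):
    vals = image[labels == label]
    return 1.0 / (1.0 + vals.var())   # 1 for a perfectly uniform region

def adjacent_contrast(image, labels, a, b):
    ma, mb = image[labels == a].mean(), image[labels == b].mean()
    return abs(ma - mb) / (ma + mb + 1e-12)   # Weber-like contrast of region means

image = np.array([[10, 11, 60, 62],
                  [12, 10, 61, 59],
                  [11, 12, 63, 60]], dtype=float)
labels = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [0, 0, 1, 1]])
print(region_uniformity(image, labels, 0), adjacent_contrast(image, labels, 0, 1))
```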


Journal ArticleDOI
TL;DR: The proposed procedure speeds up the thinning transformation and yields a well-shaped skeleton; the algorithm becomes more advantageous the greater the width of the figure to be thinned.
Abstract: The skeleton of a digital figure can often be regarded as a convenient alternative to the figure itself. It is useful both to diminish drastically the amount of data to be handled, and to simplify the computational procedures required for description and classification purposes. Thinning a digital figure down to its skeleton is a time-consuming process when conventional sequential computers are employed. The procedure we propose allows one to speed up the thinning transformation, and to get a well-shaped skeleton. After cleaning of the input picture has been performed, the pixels of the figure are labeled according to their distance from the background, and a set, whose pixels are symmetrically placed with respect to distinct contour parts of the figure, is found. This set is then given a linear structure by applying topology preserving removal operations. Finally, a pruning step, regarding branches not relevant in the framework of the problem domain, completes the process. The resulting skeleton is a labeled set of pixels which is shown to possess all the required properties, particularly those concerning connectedness, topology, and shape. Moreover, the original figure can almost completely be recovered by means of a reverse distance transformation. Only a fixed and small number of sequential passes through the picture is necessary to achieve the goal. The computational effort is rather modest, and the use of the proposed algorithm turns out to be more advantageous the greater the width of the figure to be thinned.

333 citations


Journal ArticleDOI
TL;DR: To synthesize an ensemble of attributed graphs into the distribution of a random graph (or a set of distributions), this work proposes a distance measure between random graphs based on the minimum change of entropy before and after their merging.
Abstract: The notion of a random graph is formally defined. It deals with both the probabilistic and the structural aspects of relational data. By interpreting an ensemble of attributed graphs as the outcomes of a random graph, we can use its lower order distribution to characterize the ensemble. To reflect the variability of a random graph, Shannon's entropy measure is used. To synthesize an ensemble of attributed graphs into the distribution of a random graph (or a set of distributions), we propose a distance measure between random graphs based on the minimum change of entropy before and after their merging. When the ensemble contains more than one class of pattern graphs, the synthesis process yields distributions corresponding to various classes. This process corresponds to unsupervised learning in pattern classification. Using the maximum likelihood rule and the probability computed for the pattern graph, based on its matching with the random graph distributions of different classes, we can classify the pattern graph to a class.

307 citations
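
A heavily simplified sketch of the entropy-increment distance idea: here each "random graph" is summarized by a single discrete distribution over node-attribute values, whereas the paper uses lower-order distributions over both vertices and edges. The attribute alphabet and counts are illustrative.

```python
import numpy as np
from collections import Counter

def entropy(counts):
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log2(p))

def entropy_increment_distance(counts_a, counts_b):
    merged = Counter(counts_a) + Counter(counts_b)
    na, nb, n = sum(counts_a.values()), sum(counts_b.values()), sum(merged.values())
    # increase of total (count-weighted) entropy caused by merging the two ensembles
    return n * entropy(merged) - (na * entropy(counts_a) + nb * entropy(counts_b))

class_1 = Counter({"corner": 8, "edge": 2})   # attribute histograms of two ensembles
class_2 = Counter({"corner": 1, "blob": 9})
print(entropy_increment_distance(class_1, class_1))  # ~0: merging identical ensembles
print(entropy_increment_distance(class_1, class_2))  # larger: dissimilar ensembles
```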


Journal ArticleDOI
TL;DR: The role of moments in image normalization and invariant pattern recognition is addressed, and the relationship between moment-based normalization, moment invariants, and circular harmonics is established.
Abstract: The role of moments in image normalization and invariant pattern recognition is addressed. The classical idea of the principal axes is analyzed and extended to a more general definition. The relationship between moment-based normalization, moment invariants, and circular harmonics is established. Invariance properties of moments, as opposed to their recognition properties, are identified using a new class of normalization procedures. The application of moment-based normalization in pattern recognition is demonstrated by experiment.

256 citations
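
A sketch of moment-based normalization in its classical form: translate a point set to its centroid and rotate it so the principal axis, computed from the second-order central moments, is aligned with the x-axis. The paper's generalized definition and the link to circular harmonics are not reproduced.

```python
import numpy as np

def normalize_by_principal_axis(points):
    centered = points - points.mean(axis=0)            # remove translation
    mu20 = np.mean(centered[:, 0] ** 2)                # second-order central moments
    mu02 = np.mean(centered[:, 1] ** 2)
    mu11 = np.mean(centered[:, 0] * centered[:, 1])
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)    # principal-axis orientation
    c, s = np.cos(-theta), np.sin(-theta)
    R = np.array([[c, -s], [s, c]])
    return centered @ R.T                              # rotation-normalized coordinates

rng = np.random.default_rng(3)
shape = rng.normal(size=(200, 2)) * np.array([5.0, 1.0])     # elongated blob
angle = np.deg2rad(30.0)
rot = np.array([[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]])
rotated = shape @ rot.T + np.array([10.0, -4.0])             # rotated + translated copy
a, b = normalize_by_principal_axis(shape), normalize_by_principal_axis(rotated)
print(np.abs(np.cov(a.T) - np.cov(b.T)).max())               # near zero: same normalized shape
```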


Journal ArticleDOI
TL;DR: It is shown that the fuzzy perceptron, like its crisp counterpart, converges in the separable case.
Abstract: The perceptron algorithm, one of the class of gradient descent techniques, has been widely used in pattern recognition to determine linear decision boundaries. While this algorithm is guaranteed to converge to a separating hyperplane if the data are linearly separable, it exhibits erratic behavior if the data are not linearly separable. Fuzzy set theory is introduced into the perceptron algorithm to produce a ``fuzzy algorithm'' which ameliorates the convergence problem in the nonseparable case. It is shown that the fuzzy perceptron, like its crisp counterpart, converges in the separable case. A method of generating membership functions is developed, and experimental results comparing the crisp to the fuzzy perceptron are presented.

251 citations
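
A sketch of a fuzzy perceptron update: the standard perceptron correction is scaled by each sample's class membership so that ambiguous points move the boundary less. The membership function below, based on distances to the class means, is one simple choice and not necessarily the one from the paper; the data and learning rate are illustrative.

```python
import numpy as np

def memberships(X, y):
    m_pos, m_neg = X[y == 1].mean(axis=0), X[y == -1].mean(axis=0)
    d_own = np.linalg.norm(X - np.where(y[:, None] == 1, m_pos, m_neg), axis=1)
    d_other = np.linalg.norm(X - np.where(y[:, None] == 1, m_neg, m_pos), axis=1)
    return 0.5 + 0.5 * d_other / (d_own + d_other + 1e-12)   # in (0.5, 1]

def fuzzy_perceptron(X, y, epochs=100, lr=0.1):
    mu = memberships(X, y)
    w = np.zeros(X.shape[1] + 1)
    Xb = np.hstack([X, np.ones((len(X), 1))])                # append bias term
    for _ in range(epochs):
        for xi, yi, mi in zip(Xb, y, mu):
            if yi * (w @ xi) <= 0:                           # misclassified (or on boundary)
                w += lr * mi * yi * xi                       # membership-weighted update
    return w

rng = np.random.default_rng(4)
X = np.vstack([rng.normal([2, 2], 0.5, (50, 2)), rng.normal([-2, -2], 0.5, (50, 2))])
y = np.array([1] * 50 + [-1] * 50)
w = fuzzy_perceptron(X, y)
print("training accuracy:", np.mean(np.sign(np.hstack([X, np.ones((100, 1))]) @ w) == y))
```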


Journal ArticleDOI
TL;DR: An efficient template matching algorithm using templates weighted by boundary segment saliency is presented and employed to recognize partially occluded parts; experimental results illustrate the effectiveness of the new technique.
Abstract: The problem of recognizing an object from a partially occluded boundary image is considered, and the concept of saliency of a boundary segment is introduced. Saliency measures the extent to which the boundary segment distinguishes the object to which it belongs from other objects which might be present. An algorithm is presented which optimally determines the saliency of boundary segments of one object with respect to those of a set of other objects. An efficient template matching algorithm using templates weighted by boundary segment saliency is then presented and employed to recognize partially occluded parts. The results of these experiments illustrate the effectiveness of the new technique.
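
A sketch of matching with a saliency-weighted template: boundary pixels carry weights reflecting how distinctive they are, and the score at each offset is the weighted count of template boundary pixels landing on image edge pixels. The paper's optimal computation of the saliency weights is not shown; the weights below are hand-assigned for illustration.

```python
import numpy as np

def weighted_match_score(edge_image, template_pts, weights, offset):
    dy, dx = offset
    score = 0.0
    for (y, x), w in zip(template_pts, weights):
        yy, xx = y + dy, x + dx
        if 0 <= yy < edge_image.shape[0] and 0 <= xx < edge_image.shape[1]:
            score += w * edge_image[yy, xx]
    return score

# tiny example: an L-shaped boundary, with the corner pixel given high saliency
template_pts = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
weights = [0.2, 0.2, 1.0, 0.2, 0.2]

edges = np.zeros((10, 10))
for (y, x) in template_pts:
    edges[y + 4, x + 5] = 1.0            # the same boundary placed at offset (4, 5)
edges[0:3, 0] = 1.0                      # a distracting straight edge elsewhere

scores = {(dy, dx): weighted_match_score(edges, template_pts, weights, (dy, dx))
          for dy in range(8) for dx in range(8)}
print(max(scores, key=scores.get))       # -> (4, 5), the true offset
```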

Journal ArticleDOI
TL;DR: This paper is a tentative survey of quantitative approaches to the modeling of uncertainty and imprecision, covering recent theoretical proposals as well as more empirical techniques such as those developed in expert systems like MYCIN and PROSPECTOR; the management of uncertainty and imprecision in reasoning patterns is a key issue in artificial intelligence.
Abstract: The intended purpose of this paper is twofold: proposing a common basis for the modeling of uncertainty and imprecision, and discussing various kinds of approximate and plausible reasoning schemes in this framework. Together with probability, different kinds of uncertainty measures (credibility and plausibility functions in the sense of Shafer, possibility measures in the sense of Zadeh and the dual measures of necessity, Sugeno's gλ-fuzzy measures) are introduced in a unified way. The modeling of imprecision in terms of possibility distribution is then presented, and related questions such as the measure of the uncertainty of fuzzy events, the probability and possibility qualification of statements, the concept of a degree of truth, and the truth qualification of propositions, are discussed at length. Deductive inference from premises weighted by different kinds of measures of uncertainty, or by truth-values in the framework of various multivalued logics, is fully investigated. Then, deductive inferences from imprecise or fuzzy premises are dealt with; patterns of reasoning where both uncertainty and imprecision are present are also addressed. The last section is devoted to the combination of uncertain or imprecise pieces of information given by different sources. On the whole, this paper is a tentative survey of quantitative approaches in the modeling of uncertainty and imprecision including recent theoretical proposals as well as more empirical techniques such as the ones developed in expert systems such as MYCIN or PROSPECTOR, the management of uncertainty and imprecision in reasoning patterns being a key issue in artificial intelligence.
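
A small sketch of two of the measures surveyed above: given a possibility distribution over a finite universe, the possibility of an event is the supremum of the distribution over the event, and its necessity is one minus the possibility of the complement. The distribution below, for the vague statement "x is about 3", is purely illustrative.

```python
# possibility distribution for "x is about 3"
pi = {1: 0.1, 2: 0.6, 3: 1.0, 4: 0.6, 5: 0.1}

def possibility(event):
    return max(pi[x] for x in event)

def necessity(event):
    complement = [x for x in pi if x not in event]
    return 1.0 - (possibility(complement) if complement else 0.0)

event = {2, 3, 4}                        # "x is between 2 and 4"
print(possibility(event))                # 1.0: the event is fully possible
print(necessity(event))                  # 0.9: it is also almost certain
print(possibility({5}), necessity({5}))  # weakly possible, not at all necessary
```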

Journal ArticleDOI
TL;DR: In this paper, the authors describe knowledge acquisition strategies developed in the course of handcrafting a diagnostic system and report on their consequent implementation in MORE, an automated knowledge acquisition system.
Abstract: This paper describes knowledge acquisition strategies developed in the course of handcrafting a diagnostic system and reports on their consequent implementation in MORE, an automated knowledge acquisition system. We describe MORE in some detail, focusing on its representation of domain knowledge, rule generation capabilities, and interviewing techniques. MORE's approach is shown to embody methods which may prove fruitful to the development of knowledge acquisition systems in other domains.

Journal ArticleDOI
TL;DR: The process of finding the correspondence is formalized by defining a general relational distance measure that computes a numeric distance between any two relational descriptions: a model and an image description, two models, or two image descriptions.
Abstract: Relational models are frequently used in high-level computer vision. Finding a correspondence between a relational model and an image description is an important operation in the analysis of scenes. In this paper the process of finding the correspondence is formalized by defining a general relational distance measure that computes a numeric distance between any two relational descriptions: a model and an image description, two models, or two image descriptions. The distance measure is proved to be a metric, and is illustrated with examples of distance between object models. A variant measure used in our past studies is shown not to be a metric.
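
A sketch of a relational distance in the spirit of the description above, reduced to a single binary relation and descriptions with equal numbers of parts: the structural error of a mapping is the number of tuples of either description not preserved under the mapping, and the distance is the minimum error over all one-to-one mappings. Brute force over permutations, so only suitable for tiny descriptions.

```python
from itertools import permutations

def structural_error(rel_a, rel_b, mapping):
    inv = {v: k for k, v in mapping.items()}
    err = sum(1 for (p, q) in rel_a if (mapping[p], mapping[q]) not in rel_b)
    err += sum(1 for (p, q) in rel_b if (inv[p], inv[q]) not in rel_a)
    return err

def relational_distance(parts_a, rel_a, parts_b, rel_b):
    assert len(parts_a) == len(parts_b)      # simplification: same number of parts
    best = None
    for perm in permutations(parts_b):
        mapping = dict(zip(parts_a, perm))
        err = structural_error(rel_a, rel_b, mapping)
        best = err if best is None else min(best, err)
    return best

# two toy descriptions related by an "adjacent-to" relation
model = {("body", "wing_l"), ("body", "wing_r"), ("body", "tail")}
image = {("b", "w1"), ("b", "w2")}           # the tail relation is missing in the image
print(relational_distance(["body", "wing_l", "wing_r", "tail"], model,
                          ["b", "w1", "w2", "t"], image))   # -> 1
```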

Journal ArticleDOI
TL;DR: An edge detection algorithm sensitive to changes in flow fields likely to be associated with occlusion is derived, patterned after the Marr-Hildreth zero-crossing detectors currently used to locate boundaries in scalar fields.
Abstract: Optical flow can be used to locate dynamic occlusion boundaries in an image sequence. We derive an edge detection algorithm sensitive to changes in flow fields likely to be associated with occlusion. The algorithm is patterned after the Marr-Hildreth zero-crossing detectors currently used to locate boundaries in scalar fields. Zero-crossing detectors are extended to identify changes in direction and/or magnitude in a vector-valued flow field. As a result, the detector works for flow boundaries generated due to the relative motion of two overlapping surfaces, as well as the simpler case of motion parallax due to a sensor moving through an otherwise stationary environment. We then show how the approach can be extended to identify which side of a dynamic occlusion boundary corresponds to the occluding surface. The fundamental principle involved is that at an occlusion boundary, the image of the surface boundary moves with the image of the occluding surface. Such information is important in interpreting dynamic scenes. Results are demonstrated on optical flow fields automatically computed from real image sequences.

Journal ArticleDOI
TL;DR: A shape descriptor has been developed which can describe a shape independent of its translation, rotation, and scaling; with this technique, shape discrimination is possible by a simple EXCLUSIVE-OR operation on shape descriptions.
Abstract: A shape descriptor has been developed which can describe a shape independent of its translation, rotation, and scaling. The description is in the form of a matrix and it is obtained by a polar quantization of a shape. The quantization process takes into consideration not only a shape's outer geometry but its inner geometry as well. The descriptor is information preserving and if the quantization parameters are selected properly, it is possible to reconstruct an original shape from its description. In this technique, shape discrimination is possible by a simple EXCLUSIVE-OR operation on shape descriptions.
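
A sketch of the shape-matrix idea described above: quantize a binary shape on a polar grid centered at its centroid, with the radial axis normalized by the maximum radius, and compare two descriptions by XOR. Grid sizes are illustrative, and rotation invariance here holds only up to a cyclic column shift rather than through the paper's exact construction.

```python
import numpy as np

def shape_matrix(mask, n_r=8, n_theta=16):
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    r_max = np.hypot(ys - cy, xs - cx).max() + 1e-12
    M = np.zeros((n_r, n_theta), dtype=bool)
    for i in range(n_r):
        for j in range(n_theta):
            # sample the shape at one polar grid point (radius fraction, angle)
            rad = (i + 0.5) / n_r * r_max
            ang = 2 * np.pi * (j + 0.5) / n_theta
            y, x = int(round(cy + rad * np.sin(ang))), int(round(cx + rad * np.cos(ang)))
            if 0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]:
                M[i, j] = mask[y, x]
    return M

def dissimilarity(m1, m2):
    return np.logical_xor(m1, m2).sum()     # simple XOR-based discrimination

square = np.zeros((64, 64), dtype=bool); square[16:48, 16:48] = True
disk = np.zeros((64, 64), dtype=bool)
yy, xx = np.mgrid[:64, :64]; disk[(yy - 32) ** 2 + (xx - 32) ** 2 <= 16 ** 2] = True
print(dissimilarity(shape_matrix(square), shape_matrix(square)))   # 0: identical shapes
print(dissimilarity(shape_matrix(square), shape_matrix(disk)))     # > 0: different shapes
```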

Journal ArticleDOI
TL;DR: A rotationally invariant template matching scheme using normalized invariant moments is described, along with a speedup technique based on the idea of two-stage template matching.
Abstract: A rotationally invariant template matching using normalized invariant moments is described. It is shown that if normalized invariant moments in circular windows are used, then template matching in rotated images becomes similar to template matching in translated images. A speedup technique based on the idea of two-stage template matching is also described. In this technique, the zeroth-order moment is used in the first stage to determine the likely match positions, and the second and third-order moments are used in the second stage to determine the best match position among the likely ones.
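
A sketch of the two-stage idea: a cheap zeroth-order moment inside a circular window prunes unlikely positions, and only the survivors are scored with a rotation-invariant second-order moment. The paper uses second- and third-order normalized invariant moments in its second stage; the single invariant, window radius, and threshold below are simplifying assumptions.

```python
import numpy as np

def circular_mask(radius):
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    return (x * x + y * y) <= radius * radius

def invariant_moments(window, mask):
    vals = window * mask
    m00 = vals.sum() + 1e-12
    y, x = np.mgrid[:window.shape[0], :window.shape[1]]
    cy, cx = (vals * y).sum() / m00, (vals * x).sum() / m00
    mu20 = (vals * (x - cx) ** 2).sum() / m00
    mu02 = (vals * (y - cy) ** 2).sum() / m00
    return m00, mu20 + mu02      # zeroth moment and a rotation-invariant second-order one

def two_stage_match(image, template, radius=8, stage1_tol=0.05):
    mask = circular_mask(radius)
    t_m00, t_inv = invariant_moments(template, mask)
    best, best_score = None, np.inf
    for i in range(radius, image.shape[0] - radius):
        for j in range(radius, image.shape[1] - radius):
            window = image[i - radius:i + radius + 1, j - radius:j + radius + 1]
            m00 = (window * mask).sum()
            if abs(m00 - t_m00) > stage1_tol * t_m00:   # stage 1: cheap pruning
                continue
            _, inv = invariant_moments(window, mask)    # stage 2: only for survivors
            score = abs(inv - t_inv)
            if score < best_score:
                best, best_score = (i, j), score
    return best

rng = np.random.default_rng(5)
image = rng.uniform(0, 1, (64, 64))
template = image[20 - 8:20 + 9, 40 - 8:40 + 9].copy()   # template cut from (20, 40)
print(two_stage_match(image, template))                 # expected: (20, 40)
```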

Journal ArticleDOI
TL;DR: A method is developed by which images resulting from orthogonal projection of rigid planar-patch objects arbitrarily oriented in three-dimensional (3-D) space may be used to form systems of linear equations which are solved for the affine transform relating the images.
Abstract: A method is developed by which images resulting from orthogonal projection of rigid planar-patch objects arbitrarily oriented in three-dimensional (3-D) space may be used to form systems of linear equations which are solved for the affine transform relating the images. The technique is applicable to complete images and to unlabeled feature sets derived from images, and with small modification may be used to transform images of unknown objects such that they represent images of those objects from a known orientation, for use in object identification. No knowledge of point correspondence between images is required. Theoretical development of the method and experimental results are presented. The method is shown to be computationally efficient, requiring O(N) multiplications and additions where, depending on the computation algorithm, N may equal the number of object or edge picture elements.

Journal ArticleDOI
TL;DR: Blum's two-dimensional shape description method based on the symmetric axis transform (SAT) is generalized to three dimensions and uniquely decomposes an object into a collection of sub-objects each drawn from three separate, but not completely independent, primitive sets defined in the paper.
Abstract: Blum's two-dimensional shape description method based on the symmetric axis transform (SAT) is generalized to three dimensions. The method uniquely decomposes an object into a collection of sub-objects each drawn from three separate, but not completely independent, primitive sets defined in the paper: width primitives, based on radius function properties; axis primitives, based on symmetric axis curvatures; and boundary primitives, based on boundary surface curvatures. Width primitives are themselves comprised of two components: slope districts and curvature districts. Visualizing the radius function as if it were the height function of some mountainous terrain, each slope district corresponds to a mountain face together with the valley below it. Curvature districts further partition each slope district into regions that are locally convex, concave, or saddle-like. Similarly, axis (boundary) primitives are regions of the symmetric surface where the symmetric surface (boundary surfaces) are locally convex, concave, or saddle-like. Relations among the primitive sets are discussed.

Journal ArticleDOI
TL;DR: In Soar, a general problem-solving production-system architecture, knowledge is encoded within a set of problem spaces, yielding a system capable of reasoning from first principles.
Abstract: This paper presents an experiment in knowledge-intensive programming within a general problem-solving production-system architecture called Soar. In Soar, knowledge is encoded within a set of problem spaces, which yields a system capable of reasoning from first principles. Expertise consists of additional rules that guide complex problem-space searches and substitute for expensive problem-space operators. The resulting system uses both knowledge and search when relevant. Expertise knowledge is acquired either by having it programmed, or by a chunking mechanism that automatically learns new rules reflecting the results implicit in the knowledge of the problem spaces. The approach is demonstrated on the computer-system configuration task, the task performed by the expert system R1.

Journal ArticleDOI
TL;DR: Experimental results demonstrate the feasibility of the proposed attributed string matching with merging approach for general shape recognition; some possible extensions of the approach are also included.
Abstract: A new structural approach to shape recognition using attributed string matching with merging is proposed. After illustrating the disadvantages of conventional symbolic string matching using changes, deletions, and insertions, attributed strings are suggested for matching. Each attributed string is an ordered sequence of shape boundary primitives, each representing a basic boundary structural unit, line segment, with two types of numerical attributes, length and direction. A new type of primitive edit operation, called merge, is then introduced, which can be used to combine and then match any number of consecutive boundary primitives in one shape with those in another. The resulting attributed string matching with merging approach is shown useful for recognizing distorted shapes. Experimental results prove the feasibility of the proposed approach for general shape recognition. Some possible extensions of the approach are also included.
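
A sketch of attributed string matching with a merge operation: each boundary is an ordered list of (length, direction) line-segment primitives, stored here as 2-D vectors so that merging consecutive primitives is simply vector addition. The dynamic program below lets a run of up to K consecutive primitives in one string be merged and matched against a merged run in the other; the costs, K, and the delete/insert penalty are illustrative choices rather than the paper's.

```python
import numpy as np

def to_vectors(primitives):
    # primitives: list of (length, direction_in_radians)
    return [np.array([l * np.cos(a), l * np.sin(a)]) for l, a in primitives]

def match_with_merging(prims_a, prims_b, K=3, gap_cost=1.0):
    A, B = to_vectors(prims_a), to_vectors(prims_b)
    n, m = len(A), len(B)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if i > 0:
                D[i, j] = min(D[i, j], D[i - 1, j] + gap_cost)   # delete a primitive of A
            if j > 0:
                D[i, j] = min(D[i, j], D[i, j - 1] + gap_cost)   # insert a primitive of B
            for ka in range(1, min(K, i) + 1):                   # merge-and-change:
                for kb in range(1, min(K, j) + 1):               # merge ka primitives of A and
                    va = sum(A[i - ka:i])                        # kb primitives of B, then
                    vb = sum(B[j - kb:j])                        # compare the merged segments
                    D[i, j] = min(D[i, j], D[i - ka, j - kb] + np.linalg.norm(va - vb))
    return D[n, m]

# a square boundary vs. the same boundary with one side split into two collinear pieces
square = [(1.0, 0.0), (1.0, np.pi / 2), (1.0, np.pi), (1.0, 3 * np.pi / 2)]
split = [(0.5, 0.0), (0.5, 0.0), (1.0, np.pi / 2), (1.0, np.pi), (1.0, 3 * np.pi / 2)]
print(match_with_merging(square, split))   # ~0: merging absorbs the split segment
```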

Journal ArticleDOI
TL;DR: Using a pointerless (linear) quadtree representation, a general algorithm to compute geometric image properties such as the perimeter, the Euler number, and the connected components of an image is developed and analyzed; it differs from conventional approaches to images represented by quadtrees in that it does not make use of neighbor finding methods that require the location of a nearest common ancestor.
Abstract: The region quadtree is a hierarchical data structure that finds use in applications such as image processing, computer graphics, pattern recognition, robotics, and cartography. In order to save space, a number of pointerless quadtree representations (termed linear quadtrees) have been proposed. One representation maintains the nodes in a list ordered according to a preorder traversal of the quadtree. Using such an image representation and a graph definition of a quadtree, a general algorithm to compute geometric image properties such as the perimeter, the Euler number, and the connected components of an image is developed and analyzed. The algorithm differs from the conventional approaches to images represented by quadtrees in that it does not make use of neighbor finding methods that require the location of a nearest common ancestor. Instead, it makes use of a staircase-like data structure to represent the blocks that have been already processed. The worst-case execution time of the algorithm, when used to compute the perimeter, is proportional to the number of leaf nodes in the quadtree, which is optimal. For an image of size 2^n × 2^n, the perimeter algorithm requires only four arrays of 2^n positions each for working storage. This makes it well suited to processing linear quadtrees residing in secondary storage. Implementation experience has confirmed its superiority to existing approaches to computing geometric properties for images represented by quadtrees.

Journal ArticleDOI
TL;DR: This paper provides an integration of the underlying theories needed for modeling activities, using the domain of large computer design projects as an example, and describes the semantics of activity modeling.
Abstract: Representation of activity knowledge is important to any application which must reason about activities such as new product management, factory scheduling, robot control, vehicle control, software engineering, and air traffic control. This paper provides an integration of the underlying theories needed for modeling activities. Using the domain of large computer design projects as an example, the semantics of activity modeling is described. While the past research in knowledge representation has discovered most of the underlying concepts, our attempt is toward their integration. This includes the epistemological concepts for erecting the required knowledge structure; the concepts of activity, state, goal, and manifestation for the adequate description of the plan and the progress; and the concepts of time and causality to infer the progression among the activities. We also address the issues which arise due to the integration of aggregation, time, and causality among activities and states.

Journal ArticleDOI
TL;DR: The paper focuses on the principles underlying the design of VEXED, and on several lessons and research issues that have arisen from implementing and experimenting with this prototype.
Abstract: A framework is presented for constructing knowledge-based aids for design problems. In particular, we describe the organization of an interactive knowledge-based consultant for VLSI design (called VEXED, an acronym for VLSI expert editor), and a prototype implementation of VEXED. The paper focuses on the principles underlying the design of VEXED, and on several lessons and research issues that have arisen from implementing and experimenting with this prototype.

Journal ArticleDOI
TL;DR: A general matching procedure for comparing semantic network descriptions of images is developed, and an automatic segmentation and description system is used to produce the image representations so that the matching procedures must cope with variations in feature values, missing objects, and possible multiple matches.
Abstract: Many different relaxation schemes have been proposed for image analysis tasks. We have developed a general matching procedure for comparing semantic network descriptions of images, and we have implemented a variety of relaxation techniques. An automatic segmentation and description system is used to produce the image representations so that the matching procedures must cope with variations in feature values, missing objects, and possible multiple matches. This environment is used to test different relaxation matching schemes under a variety of conditions. The best performance (of those we compared), in terms of the number of iterations and the number of errors, is for the gradient-based optimization approach of Faugeras and Price. The related optimization approach of Hummel and Zucker performed almost as well, with differences primarily in difficult matches (i.e., where much of the evidence is against the match, for instance, poor segmentations). The product combination rule proposed by Peleg was extremely fast, indeed, too fast to work when global context is needed. The classical Rosenfeld, Hummel, and Zucker method is included for historical comparisons and performed only adequately, producing fewer correct matches and taking more iterations.

Journal ArticleDOI
TL;DR: New optimal (in the O-notational sense) mesh-connected computer algorithms for computing several geometric properties of figures are presented, including algorithms for determining the extreme points of the convex hull of each component.
Abstract: Although mesh-connected computers are used almost exclusively for low-level local image processing, they are also suitable for higher level image processing tasks. We illustrate this by presenting new optimal (in the O-notational sense) algorithms for computing several geometric properties of figures. For example, given a black/white picture stored one pixel per processing element in an n × n mesh-connected computer, we give Θ(n) time algorithms for determining the extreme points of the convex hull of each component, for deciding if the convex hull of each component contains pixels that are not members of the component, for deciding if two sets of processors are linearly separable, for deciding if each component is convex, for determining the distance to the nearest neighboring component of each component, for determining internal distances in each component, for counting and marking minimal internal paths in each component, for computing the external diameter of each component, for solving the largest empty circle problem, for determining internal diameters of components without holes, and for solving the all-points farthest point problem. Previous mesh-connected computer algorithms for these problems were either nonexistent or had worst case times of Ω(n²). Since any serial computer has a best case time of Ω(n²) when processing an n × n image, our algorithms show that the mesh-connected computer provides significantly better solutions to these problems.

Journal ArticleDOI
TL;DR: This paper presents a method of combining the two sensory sources, intensity and range, such that the time required for range sensing is considerably reduced and a graph structure representing the object in the scene is constructed.
Abstract: With the advent of devices that can directly sense and determine the coordinates of points in space, the goal of constructing and recognizing descriptors of three-dimensional (3-D) objects is attracting the attention of many researchers in the image processing community. Unfortunately, the time required to fully sense a range image is large relative to the time required to sense an intensity image. Conversely, a single intensity image lacks the depth information required to construct 3-D object descriptors. This paper presents a method of combining the two sensory sources, intensity and range, such that the time required for range sensing is considerably reduced. The approach is to extract potential points of interest from the intensity image and then selectively sense range at these feature points. After the range information is known at these points, a graph structure representing the object in the scene is constructed. This structure is compared to the stored graph models using an algorithm for partial matching. The results of applying the method to both synthetic data and real intensity/range images are presented.

Journal ArticleDOI
TL;DR: Experimental results are presented which show how such refinements as progressive deepening, narrow window searching, and the use of memory tables affect the performance of multiprocessor based chess playing programs.
Abstract: The design issues affecting a parallel implementation of the alpha-beta search algorithm are discussed with emphasis on a tree decomposition scheme that is intended for use on well ordered trees. In particular, the principal variation splitting method has been implemented, and experimental results are presented which show how such refinements as progressive deepening, narrow window searching, and the use of memory tables affect the performance of multiprocessor based chess playing programs. When dealing with parallel processing systems, communication delays are perhaps the greatest source of lost time. Therefore, an implementation of our tree decomposition based algorithm is presented, one that operates with a modest amount of message passing within a network of processors. Since our system has low search overhead, the principal basis for comparison is the communication overhead, which in turn is shown to have two components.

Journal ArticleDOI
TL;DR: Results show that the waveform correlation scheme has the capability of handling distortions that result from stretching or shrinking of intervals or from missing intervals.
Abstract: A waveform correlation scheme is presented. The scheme consists of four parts: 1) the representation of waveforms by trees, 2) the definition of basic operations on tree nodes and tree distance, 3) a tree matching algorithm, and 4) a backtracking procedure to find the best node-to-node correlation. This correlation scheme has been implemented. Results show that the scheme has the capability of handling distortions that result from stretching or shrinking of intervals or from missing intervals.

Journal ArticleDOI
TL;DR: The integrated diagnostic model (IDM) integrates two sources of knowledge: a shallow, reasoning-oriented, experiential knowledge base and a deep, functionally oriented, physical knowledge base.
Abstract: Existing expert systems have a high percentage agreement with experts in a particular field in many situations. However, in many ways their overall behavior is not like that of a human expert. These areas include the inability to give flexible, functional explanations of their reasoning processes, and the failure to degrade gracefully when dealing with problems at the periphery of their knowledge. These two important shortcomings can be improved when the right knowledge is available to the system. This paper presents an expert system design, called the integrated diagnostic model (IDM), that integrates two sources of knowledge, a shallow, reasoning-oriented, experiential knowledge base and a deep, functionally oriented, physical knowledge base. To demonstrate the IDM's usefulness in the problem area of diagnosis and repair, an implementation in the mechanical domain is described.