
Showing papers on "3D single-object recognition published in 1992"


Journal ArticleDOI
TL;DR: Basic Cartesian moment theory is reviewed and its application to object recognition and image analysis is presented; the geometric properties of low-order moments are discussed, along with the definition of several moment-space linear geometric transforms.

620 citations


ReportDOI
01 Feb 1992
TL;DR: It is proved that for any bilaterally symmetric 3D object one non-accidental 2D model view is sufficient for recognition, and that linear transformations can be learned exactly from a small set of examples in the case of "linear object classes".
Abstract: In this note we discuss how recognition can be achieved from a single 2D model view by exploiting prior knowledge of an object's structure (e.g. symmetry). We prove that for any bilaterally symmetric 3D object one non-accidental 2D model view is sufficient for recognition. Symmetries of higher order allow the recovery of structure from one 2D view. Linear transformations can be learned exactly from a small set of examples in the case of "linear object classes" and used to produce new views of an object from a single view.

192 citations


Proceedings ArticleDOI
30 Aug 1992
TL;DR: This paper proposes a face recognition method characterized by structural simplicity, trainability and high speed; shift-invariant higher order local autocorrelation features are linearly combined on the basis of multivariate analysis methods to provide new effective features for face recognition in learning from examples.
Abstract: The authors propose a face recognition method which is characterized by structural simplicity, trainability and high speed. The method consists of two stages of feature extraction: first, higher order local autocorrelation features, which are shift-invariant and additive, are extracted from an input image; then those features are linearly combined on the basis of multivariate analysis methods so as to provide new effective features for face recognition in learning from examples.
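The first of the two stages can be illustrated with a minimal sketch. The displacement set and image below are assumptions for illustration only; the paper's actual mask set is larger.

```python
def hlac_features(img):
    """Toy higher-order local autocorrelation features: shift-invariant
    sums of products of pixel values at fixed relative displacements.
    Only the order-0 feature and four order-1 displacements are shown."""
    h, w = len(img), len(img[0])
    feats = [sum(img[y][x] for y in range(h) for x in range(w))]  # order 0
    for dy, dx in [(0, 1), (1, 0), (1, 1), (1, -1)]:              # order 1
        s = 0.0
        for y in range(h):
            for x in range(w):
                y2, x2 = y + dy, x + dx
                if 0 <= y2 < h and 0 <= x2 < w:
                    s += img[y][x] * img[y2][x2]
        feats.append(s)
    return feats
```

Because each feature is a sum over all image positions, translating the pattern leaves the features unchanged (up to boundary effects), and features of disjoint patterns add — the two properties the abstract cites.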

126 citations


Book
21 Aug 1992
TL;DR: A new family of computationally efficient algorithms, based on matrix computations, is presented for the evaluation of both Euclidean and affine algebraic moment invariants of data sets, greatly reducing the computation required for matching and hence for initial object recognition.
Abstract: Toward the development of an object recognition and positioning system, able to deal with arbitrarily shaped objects in cluttered environments, we introduce methods for checking the match of two arbitrary curves in 2D or surfaces in 3D, when each of these subobjects (i.e., regions) is in arbitrary position, and we also show how to efficiently compute explicit expressions for the coordinate transformation which makes two matching subobjects (i.e., regions) coincide. This is to be used for comparing an arbitrarily positioned subobject of sensed data with objects in a data base, where each stored object is described in some "standard" position. In both cases, matching and positioning, results are invariant with respect to the viewer coordinate system, i.e., invariant to the arbitrary location and orientation of the object in the data set, or, more generally, to affine transformations of the objects in the data set, meaning translation, rotation, and different stretchings in two (or three) directions; these techniques apply to both 2D and 3D problems. The 3D Euclidean case is useful for the recognition and positioning of solid objects from range data, and the 2D affine case for the recognition and positioning of solid objects from projections, e.g., from curves in a single image, and in motion estimation. The matching of arbitrarily shaped regions is done by computing for each region a vector of centered moments. These vectors are viewpoint-dependent, but the dependence on the viewpoint is algebraic and well known. We then compute moment invariants, i.e., algebraic functions of the moments that are invariant to Euclidean or affine transformations of the data set. We present a new family of computationally efficient algorithms, based on matrix computations, for the evaluation of both Euclidean and affine algebraic moment invariants of data sets.
The use of moment invariants greatly reduces the computation required for the matching, and hence for initial object recognition. The approach to determining and computing these moment invariants is different from those used by the vision community previously. The method for computing the coordinate transformation which makes the two matching regions coincide provides an estimate of object position. The estimation of the matching transformation is based on the same matrix computation techniques introduced for the computation of invariants: it involves simple manipulations of the moment vectors, and it requires neither costly iterative methods nor going back to the data set. The use of geometric invariants in this application is equivalent to specifying a center and an orientation for an arbitrary data constellation in a region. These geometric invariant methods appear to be very important for dealing with a large number of different possible objects in the presence of occlusion and clutter. As we point out in this paper, each moment invariant also defines an algebraic invariant, i.e., an invariant algebraic function of the coefficients of the best-fitting polynomial to the data. Hence, this paper also introduces a new design and computation approach to algebraic invariants.
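The centered-moment vector at the core of the matching step can be sketched generically. This is an illustration of centered moments of a 2D point set, not the authors' matrix-based invariant computation; the point set below is assumed.

```python
def centered_moments(pts, max_order=2):
    """Centered moments m_pq = sum over points of (x-cx)^p * (y-cy)^q.
    Centering on the centroid (cx, cy) removes dependence on translation;
    rotation/affine dependence remains, but in a known algebraic form
    from which invariants can then be built."""
    n = len(pts)
    cx = sum(x for x, _ in pts) / n
    cy = sum(y for _, y in pts) / n
    return {(p, q): sum((x - cx) ** p * (y - cy) ** q for x, y in pts)
            for p in range(max_order + 1)
            for q in range(max_order + 1 - p)}
```

Translating the point set leaves every centered moment unchanged, which is the "standard position" idea used when comparing a sensed region against stored models.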

123 citations


Book ChapterDOI
19 May 1992
TL;DR: A canonical frame construction is presented for determining projectively invariant indexing functions for non-algebraic smooth plane curves that are semi-local rather than global, which promotes tolerance to occlusion.
Abstract: We present a canonical frame construction for determining projectively invariant indexing functions for non-algebraic smooth plane curves. These invariants are semi-local rather than global, which promotes tolerance to occlusion.

119 citations


Patent
08 May 1992
TL;DR: In this patent, the face of a person who is being fed by a robotic system is tracked by comparing a prestored object model image with the current image of the object using a square-distance criterion.
Abstract: Methods and apparatus for automatically tracking the position of a moving object in real time, particularly the face of a person who is being fed by a robotic system, are disclosed. The object can be tracked by comparing a prestored object model image with the current image of the object using a square-distance criterion. The search area can be limited to a region in which the face is most likely to be found, and the prestored object model image can be limited to robust points. The method can include motion prediction, covering both continuous motion and sudden motion, such as the motion caused by a person sneezing. Alternatively, a computationally efficient approach employing a one-dimensional algorithm can be used.
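The square-distance comparison can be sketched as an exhaustive SSD template search. This is a minimal illustration; the patent's restriction of the search to a likely region and to robust points, and its motion prediction, are omitted, and the image and template below are assumptions.

```python
def ssd_track(image, template):
    """Exhaustive template search minimizing the sum of squared
    differences (SSD); returns the best (row, col) and its score."""
    H, W = len(image), len(image[0])
    h, w = len(template), len(template[0])
    best_pos, best = None, float("inf")
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            ssd = sum((image[y + dy][x + dx] - template[dy][dx]) ** 2
                      for dy in range(h) for dx in range(w))
            if ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos, best
```

Limiting the `(y, x)` loops to a predicted window is what turns this brute-force scan into a real-time tracker.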

108 citations


Proceedings ArticleDOI
15 Jun 1992
TL;DR: The concept of active object recognition is introduced, and a proposal for its solution is described, which uses an efficient tree-based, probabilistic indexing scheme to find the model object that is likely to have generated the observed data.
Abstract: The concept of active object recognition is introduced, and a proposal for its solution is described. The camera is mounted on the end of a robot arm on a mobile base. The system exploits the mobility of the camera by using low-level image data to drive the camera to a standard viewpoint with respect to an unknown object. From such a viewpoint, the object recognition task is reduced to a two-dimensional pattern recognition problem. The system uses an efficient tree-based, probabilistic indexing scheme to find the model object that is likely to have generated the observed data, and for line tracking uses a modification of the token-based tracking scheme of J.L. Crowley et al. (1988). The system has been successfully tested on a set of origami objects. Given sufficiently accurate low-level data, recognition time is expected to grow only logarithmically with the number of objects stored.

103 citations


Book ChapterDOI
19 May 1992
TL;DR: It is shown that every function that is invariant to viewing position of all objects is the trivial (constant) function, which means that every consistent recognition scheme for recognizing all 3-D objects must in general be model based.
Abstract: Different approaches to visual object recognition can be divided into two general classes: model-based vs. non model-based schemes. In this paper we establish some limitations on the class of non model-based recognition schemes. We show that every function that is invariant to viewing position of all objects is the trivial (constant) function. It follows that every consistent recognition scheme for recognizing all 3-D objects must in general be model based. The result is extended to recognition schemes that are imperfect (allowed to make mistakes) or restricted to certain classes of objects.

86 citations


Journal ArticleDOI
Sumio Watanabe1, M. Yoneyama1
01 Apr 1992
TL;DR: By combining ultrasonic imaging with neural networks, the authors have developed a 3-D object recognition system for use in robotic vision; experiments show that quick and accurate recognition can be achieved using only a small set of transducers.
Abstract: By combining ultrasonic imaging with neural networks, the authors have developed a 3-D object recognition system for use in robotic vision. Ultrasonic imaging is used to calculate the initial 3-D images of the objects. These images are then passed to neural networks that identify object categories, estimate object locations, and improve 3-D images. The authors explain the 3-D ultrasonic imaging method, propose three neural network structures for 3-D image analysis, and demonstrate the practicability of the system through experimental results, which show that quick and accurate recognition can be achieved using only a small set of transducers.

74 citations


Proceedings ArticleDOI
01 Feb 1992
TL;DR: A novel approach to human face recognition is proposed, based on a statistical model in the optimal discriminant space; it shows very good recognition performance, with accuracies of 100 percent on the test set.
Abstract: Automatic recognition of human faces is a frontier topic in computer vision. In this paper, a novel approach to human face recognition is proposed, based on a statistical model in the optimal discriminant space. The singular value vector has been proposed to represent algebraic features of images; this kind of feature vector has important properties of algebraic and geometric invariance and is insensitive to noise. Because the singular value vector is usually of high dimensionality, and a recognition model based on such feature vectors poses a small-sample-size problem that has not been solved completely, dimensionality compression of the singular value vector is necessary. In our method, an optimal discriminant transformation is constructed to map the original space of singular value vectors into a new space of significantly lower dimensionality, and a recognition model is established in the new space. Experimental results show that our method has very good recognition performance: accuracies of 100 percent are obtained for all 64 facial images of 8 classes of human faces.

62 citations


Book
29 Jul 1992
TL;DR: A new approach to visual recognition is offered that avoids these limitations; it has been used to recognize trees, bushes, grass, and trails in ground-level scenes of a natural environment, and it improves its recognition abilities by exploiting the context provided by what it has previously recognized.
Abstract: An autonomous vehicle that is to operate outdoors must be able to recognize features of the natural world as they appear in ground-level imagery. Geometric reconstruction alone is insufficient for an agent to plan its actions intelligently--objects in the world must be recognized, and not just located. Most work in visual recognition by computer has focused on recognizing objects by their geometric shape, or by the presence or absence of some prespecified collection of locally measurable attributes (e.g., spectral reflectance, texture, or distinguished markings). On the other hand, most entities in the natural world defy compact description of their shapes, and have no characteristic features with discriminatory power. As a result, image-understanding research has achieved little success towards recognizing natural scenes. In this thesis we offer a new approach to visual recognition that avoids these limitations and has been used to recognize trees, bushes, grass, and trails in ground-level scenes of a natural environment. Reliable recognition is achieved by employing an architecture with a number of innovative aspects. These include: context-controlled generation of hypotheses instead of universal partitioning; a hypothesis comparison scheme that allows a linear growth in computational complexity as the recognition vocabulary is increased; recognition at the level of complete contexts instead of individual objects; and provisions for contextual information to guide processing at all levels. Recognition results are added to a persistent, labeled, three-dimensional model of the environment which is used as context for interpreting subsequent imagery. In this way, the system constructs a description of the objects it sees, and, at the same time, improves its recognition abilities by exploiting the context provided by what it has previously recognized.

Book ChapterDOI
19 May 1992
TL;DR: This work presents a general formulation of the problem of object recognition via local geometric feature matching and a polynomial-time algorithm which guarantees finding all geometrically feasible interpretations of the data, modulo uncertainty, in terms of the model.
Abstract: We consider the problem of object recognition via local geometric feature matching in the presence of sensor uncertainty, occlusion, and clutter. We present a general formulation of the problem and a polynomial-time algorithm which guarantees finding all geometrically feasible interpretations of the data, modulo uncertainty, in terms of the model. This formulation applies naturally to problems involving both 2D and 3D objects.

Book ChapterDOI
19 May 1992
TL;DR: An approach is presented that uses color as a cue to perform selection, either based solely on image data (data-driven) or based on knowledge of the color description of the model (model-driven).
Abstract: A key problem in model-based object recognition is selection, namely, the problem of determining which regions in an image are likely to come from a single object. In this paper we present an approach that uses color as a cue to perform selection either based solely on image data (data-driven), or based on knowledge of the color description of the model (model-driven). We present a method of color specification by color categories, which are used to design a fast segmentation algorithm to extract perceptual color regions. Data-driven selection is then achieved by selecting salient color regions, while model-driven selection is achieved by locating instances of the model in the image using the color region description of the model. The approach presented here tolerates some of the problems of occlusion, pose and illumination changes that make a model instance in an image appear different from its original description.

Patent
17 Nov 1992
TL;DR: In this article, a method and system in a data processing system for the establishment of relationships between reference objects in an object-oriented environment and an associated data object residing outside an object oriented environment is presented.
Abstract: A method and system in a data processing system for the establishment of relationships between reference objects in an object oriented environment and an associated data object residing outside an object oriented environment. A data object within an application outside an object oriented environment is identified. Multiple reference objects within an object oriented environment are then established. Each reference object has a unique identifier and is associated with one of multiple users. Each reference object is associated with the identified data object so that multiple users may concurrently access the data object utilizing an associated reference object. The associated data object may then be modified in response to a modification of any reference object. Similarly, the reference objects may be modified in response to a modification of the associated data object.

Book ChapterDOI
19 May 1992
TL;DR: The use of a Genetic Algorithm to match a flexible template model to image evidence is demonstrated, and its performance is assessed in quantitative terms.
Abstract: We demonstrate the use of a Genetic Algorithm (GA) to match a flexible template model to image evidence. The advantage of the GA is that plausible interpretations can be found in a relatively small number of trials; it is also possible to generate multiple distinct interpretation hypotheses. The method has been applied to the interpretation of ultrasound images of the heart and its performance has been assessed in quantitative terms.

Proceedings ArticleDOI
E.M. Petriu1, T. Bieseman, N. Trif, W.S. McMath, S.K. Yeung 
07 Jul 1992
TL;DR: This paper presents a new grid node indexing method based on "pseudo-random binary array" (PRBA) encoding that requires only one code bit per grid step, independent of the desired grid resolution.
Abstract: This paper presents a new grid node indexing method based on "pseudo-random binary array" (PRBA) encoding. This method requires only one code bit per grid step, independent of the desired grid resolution. Applications are discussed for 3-D object recognition and for 2-D absolute position recovery of a free-ranging mobile robot.
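The one-bit-per-step property rests on maximal-length pseudo-random binary sequences, in which every length-n window occurs exactly once per period. A one-dimensional sketch (the 2D array extends the same idea), using an assumed 4-bit LFSR rather than the paper's actual PRBA construction:

```python
def m_sequence():
    """Maximal-length binary sequence from a 4-bit Fibonacci LFSR.
    Every length-4 window of the cyclic output is distinct, so observing
    any 4 consecutive bits identifies absolute position along the grid,
    at a cost of one code bit per grid step."""
    n = 4
    state = [1] * n
    out = []
    for _ in range(2 ** n - 1):
        out.append(state[-1])
        fb = state[-1] ^ state[-2]     # taps of a primitive polynomial for n = 4
        state = [fb] + state[:-1]
    return out
```

A sensor that reads a short run of grid bits can therefore recover its absolute position without any per-node labels, which is the point of the PRBA encoding.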

01 Jan 1992
TL;DR: An interpretation of geometric hashing is presented that allows the geometric hashing algorithm to be viewed as a Bayesian approach to model-based object recognition, and leads to natural, well-justified formulas.
Abstract: The problem of model-based object recognition is a fundamental one in the field of computer vision, and represents a promising direction for practical applications. We describe the design, analysis, implementation and testing of a system that employs geometric hashing techniques, and can recognize three-dimensional objects from two-dimensional grayscale images. We examine the exploitation of parallelism in object recognition, and analyze the performance and sensitivity of the geometric hashing method in the presence of noise. We also present a Bayesian interpretation of the geometric hashing approach. Two parallel algorithms are outlined: one algorithm is designed for an SIMD hypercube-based machine whereas the other algorithm is more general, and relies on data broadcast capabilities. The first of the two algorithms regards geometric hashing as a connectionist algorithm. The second algorithm is inspired by the method of inverse indexing for data retrieval. We also determine the expected distribution of computed invariants over the hash space: formulas for the distributions of invariants are derived for the cases of rigid, similarity and affine transformations, and for two different distributions (Gaussian and Uniform over a disc) of point features. Formulas describing the dependency of the geometric invariants on Gaussian positional error are also derived for the similarity and affine transformation cases. Finally, we present an interpretation of geometric hashing that allows the geometric hashing algorithm to be viewed as a Bayesian approach to model-based object recognition. This interpretation is a new form of Bayesian-based model matching, and leads to natural, well-justified formulas. The interpretation also provides a precise weighted-voting method for the evidence-gathering phase of geometric hashing. A prototype object recognition system using these ideas has been implemented on a CM-2 Connection Machine. 
The system is scalable and can recognize aircraft and automobile models subjected to 2D rotation, translation, and scale changes in real-world digital imagery. This system is the first of its kind that is scalable, uses large databases, can handle noisy input data, works rapidly on an existing parallel architecture, and exhibits excellent performance with real world, natural scenes.
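The voting core of geometric hashing can be sketched for 2D point features under similarity transformations. The model, quantization step, and unweighted voting below are illustrative assumptions; the thesis's Bayesian weighting and parallel implementations are omitted.

```python
def invariant_coords(p, b0, b1):
    """Coordinates of p in the frame of the ordered basis pair (b0, b1);
    invariant to translation, rotation, and uniform scale."""
    ex = (b1[0] - b0[0], b1[1] - b0[1])
    ey = (-ex[1], ex[0])                      # perpendicular axis
    d = ex[0] ** 2 + ex[1] ** 2
    vx, vy = p[0] - b0[0], p[1] - b0[1]
    return ((vx * ex[0] + vy * ex[1]) / d, (vx * ey[0] + vy * ey[1]) / d)

def quantize(c, step=0.05):
    return (round(c[0] / step), round(c[1] / step))

def build_table(models):
    """Offline stage: hash every model point in the frame of every
    ordered basis pair of that model."""
    table = {}
    for name, pts in models.items():
        for i, bi in enumerate(pts):
            for j, bj in enumerate(pts):
                if i == j:
                    continue
                for p in pts:
                    if p is bi or p is bj:
                        continue
                    key = quantize(invariant_coords(p, bi, bj))
                    table.setdefault(key, []).append((name, (i, j)))
    return table

def recognize(table, scene_pts, basis):
    """Online stage: pick a scene basis and let the remaining points
    vote for (model, basis) entries; return the top-voted entry."""
    votes = {}
    b0, b1 = basis
    for p in scene_pts:
        if p is b0 or p is b1:
            continue
        for entry in table.get(quantize(invariant_coords(p, b0, b1)), []):
            votes[entry] = votes.get(entry, 0) + 1
    return max(votes, key=votes.get) if votes else None
```

Because the stored coordinates are similarity-invariant, the same table answers queries for any rotated, scaled, and translated instance of a model.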

Journal ArticleDOI
TL;DR: Geometrical criteria which define viewpoint-invariant features to be extracted from 2D line drawings of 3D objects are described, and examples of results obtained by applying these criteria to a typical line drawing are shown.

Proceedings ArticleDOI
30 Aug 1992
TL;DR: The construction of shape representations based on stored geometric constraints that are suitable for input to a 3D object recognition system is detailed, together with an evaluation of their efficacy based on experimental findings.
Abstract: Representation of arbitrary shape for the purposes of visual object recognition is an unsolved problem. This paper outlines some theoretical properties of shape representations based on stored geometric constraints that make them suitable for input to a 3D object recognition system. The construction of such representations is detailed together with an evaluation of their efficacy based on experimental findings.

Proceedings ArticleDOI
12 May 1992
TL;DR: The author proposes a method of feature-based camera-guided grasping of a known object by splitting the 3D movement into several successive 1D or 2D movements.
Abstract: The author proposes a method of feature-based camera-guided grasping of a known object by splitting the 3D movement into several successive 1D or 2D movements. Object recognition was achieved by extracting the 3D features of the object from image sequences while a camera mounted on a robot's hand was moving toward the object. After the recognition of the object, the camera approached the object by several camera-guided steps: motion in the xy-plane, rotations around the z-axis, and movements along the z-axis. These movements were controlled by data derived from the image features. Finally, the gripper had to perform a fine motion to reach the correct position for grasping the object.

ReportDOI
01 Jul 1992
TL;DR: This paper discusses how to compute the pose of a model from three corresponding points under weak-perspective projection, and proposes a new solution which, like previous solutions, involves solving a biquadratic equation.
Abstract: Model-based object recognition commonly involves using a minimal set of matched model and image points to compute the pose of a model in image coordinates. This paper discusses how to compute the pose of a model from three corresponding points under "weak-perspective" projection. A new solution to the problem is proposed which, like previous solutions, involves solving a biquadratic equation. Here the biquadratic is motivated geometrically, and its solutions are interpreted graphically. The final equations take a new form, which leads to a simple expression for the image position of any unmatched model point.

Proceedings ArticleDOI
12 May 1992
TL;DR: A method is presented for selecting viewpoints and sensing tasks that allow a multisensory perception machine to confirm a previously generated identification hypothesis, together with results of experiments integrating modules for environment modeling, path planning and execution control, and object recognition.
Abstract: The authors present a method for selecting viewpoints and sensing tasks to confirm, by a multisensory perception machine, an identification hypothesis previously generated. The determination relies on the use of a compiled knowledge base that links object and sensor models, and defines a priori the best sensing tasks to be performed. The method is fully detailed, and its use under dynamic constraints due to a real environment is explained, along with a control strategy to activate the search. Results of experiments integrating other modules for environment modeling, path planning and execution control, and object recognition are presented.

Proceedings ArticleDOI
30 Aug 1992
TL;DR: The authors present an approach to feature detection, a fundamental issue in many intermediate-level vision problems such as stereo, motion correspondence, and image registration, based on a scale-interaction model of the end-inhibition property exhibited by certain cells in the visual cortex of mammals.
Abstract: The authors present an approach to feature detection, which is a fundamental issue in many intermediate-level vision problems such as stereo, motion correspondence, image registration, etc. The approach is based on a scale-interaction model of the end-inhibition property exhibited by certain cells in the visual cortex of mammals. These feature detector cells are responsive to short lines, line endings, corners and other such sharp changes in curvature. In addition, this method also provides a compact representation of feature information which is useful in shape recognition problems. Applications to face recognition and motion correspondence are illustrated.

01 Jul 1992
TL;DR: The paradigm of appearance-based vision is discussed and two specific VACs are presented: one that computes feature values analytically, and a second that utilizes an appearance simulator to synthesize sample images.
Abstract: The generation of recognition programs by hand is a time-consuming, labor-intensive task that typically results in a special purpose program for the recognition of a single object or a small set of objects. Recent work in automatic code generation has demonstrated the feasibility of automatically generating object recognition programs from CAD-based descriptions of objects. Many of the programs which perform automatic code generation employ a common paradigm of utilizing explicit object and sensor models to predict object appearances; we refer to the paradigm as appearance-based vision, and refer to the programs as vision algorithm compilers (VACs). A CAD-like object model augmented with sensor-specific information like color and reflectance, in conjunction with a sensor model, provides all the information needed to predict the appearance of an object under any specified set of viewing conditions. Appearances, characterized in terms of feature values, can be predicted in two ways: analytically, or synthetically. In relatively simple domains, feature values can be analytically determined from model information. However, in complex domains, the analytic prediction method is impractical. An alternative method for appearance prediction is to use an appearance simulator to generate synthetic images of objects which can then be processed to extract feature values. In this paper, we discuss the paradigm of appearance-based vision and present in detail two specific VACs: one that computes feature values analytically, and a second that utilizes an appearance simulator to synthesize sample images.

Journal ArticleDOI
TL;DR: It is shown how globally salient structures can be extracted from a contour image based on geometrical attributes, including smoothness and contour length, and how the problem of recognition from different viewing positions can be overcome by using the linear combinations of a small number of two-dimensional object views.
Abstract: This paper discusses two problems related to three-dimensional object recognition. The first is segmentation and the selection of a candidate object in the image, the second is the recognition of a three-dimensional object from different viewing positions. Regarding segmentation, it is shown how globally salient structures can be extracted from a contour image based on geometrical attributes, including smoothness and contour length. This computation is performed by a parallel network of locally connected neuron-like elements. With respect to the effect of viewing, it is shown how the problem can be overcome by using the linear combinations of a small number of two-dimensional object views. In both problems the emphasis is on methods that are relatively low level in nature. Segmentation is performed using a bottom-up process, driven by the geometry of image contours. Recognition is performed without using explicit three-dimensional models, but by the direct manipulation of two-dimensional images.
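The linear-combinations idea can be checked numerically for the simple case of orthographic projection and rotation about the vertical axis: the x-coordinates of a novel view are a fixed linear combination of the coordinates in two stored views. The points, angles, and solver below are illustrative assumptions, not the paper's formulation.

```python
import math

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                for k in range(c, n + 1):
                    M[r][k] -= f * M[c][k]
    return [M[i][n] / M[i][i] for i in range(n)]

def view_x(pts, angle):
    """x-coordinates of an orthographic view after rotating the object
    about the y-axis (y-coordinates are unchanged by this rotation)."""
    return [x * math.cos(angle) + z * math.sin(angle) for x, y, z in pts]

pts = [(0.0, 0.0, 1.0), (1.0, 0.5, 0.2), (-0.5, 1.0, 0.7),
       (0.3, -0.8, -0.4), (0.9, 0.1, 0.6)]      # illustrative 3D points
x1 = view_x(pts, 0.0)                           # stored model view 1
x2 = view_x(pts, 0.4)                           # stored model view 2
ys = [p[1] for p in pts]
x3 = view_x(pts, 0.9)                           # novel view to predict
# coefficients a,b,c,d with x3 = a*x1 + b*ys + c*x2 + d, from 4 points
A = [[x1[i], ys[i], x2[i], 1.0] for i in range(4)]
coef = solve(A, x3[:4])
pred = sum(c * v for c, v in zip(coef, (x1[4], ys[4], x2[4], 1.0)))
```

The fifth point's position in the novel view is predicted exactly from the two stored views, with no explicit 3D model, which is the manipulation-of-2D-images point the abstract makes.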

Proceedings ArticleDOI
23 Mar 1992
TL;DR: A new approach to shape matching and knowledge-based signal processing, called functional template correlation (FTC), is presented in the context of 3D object recognition, which defines, for each template point, arbitrarily complex tolerances of input image values.
Abstract: A new approach to shape matching and knowledge-based signal processing, called functional template correlation (FTC), is presented in the context of 3D object recognition. Incorporating aspects of fuzzy set theory, functional templates (FTs) define, for each template point, arbitrarily complex tolerances of input image values. With this approach, object- and sensor-dependent knowledge is easily encoded in FTs, making it possible to deal effectively with uncertain or variable object appearance, noise, occlusion, and articulation. The output of FTC is a map of match scores, reflecting pixel-by-pixel belief of whether or not an encoded shape is present. Such maps can be combined using simple rules of arithmetic and are used to guide selective attention. The use of FTC is illustrated with examples from work in automatic recognition.

Book ChapterDOI
19 May 1992
TL;DR: An algorithm based on the hypothesize-and-verify paradigm registers two consecutive 3D frames and estimates their transformation/motion; grouping 3D line segments into clusters and planes effectively reduces the complexity of the hypothesis generation phase.
Abstract: We address in this paper how to find clusters based on proximity and planar facets based on coplanarity from 3D line segments obtained from stereo. The proposed methods are efficient and have been tested with many real stereo data. These procedures are indispensable in many applications including scene interpretation, object modeling and object recognition. We show their application to 3D motion determination. We have developed an algorithm based on the hypothesize-and-verify paradigm to register two consecutive 3D frames and estimate their transformation/motion. By grouping 3D line segments in each frame into clusters and planes, we can reduce effectively the complexity of the hypothesis generation phase.

Proceedings ArticleDOI
15 Jun 1992
TL;DR: A method of obtaining local projective and affine invariants that is more robust than existing methods is presented, and these shape descriptors are useful for object recognition because they eliminate the search for the unknown viewpoint.
Abstract: A method of obtaining local projective and affine invariants that is more robust than existing methods is presented. These shape descriptors are useful for object recognition because they eliminate the search for the unknown viewpoint. Being local, these invariants are much less sensitive to occlusion than the global ones used elsewhere. The basic ideas are (i) using an implicit curve representation without a curve parameter, thus increasing robustness; and (ii) using a canonical coordinate system which is defined by the intrinsic properties of the shape, regardless of any given coordinate system, and is thus invariant. Several configurations are treated: a general curve without any correspondence, and curves with known correspondence of feature points or lines.

01 Jan 1992
TL;DR: An effective method of surface characterization of 3D objects using surface curvature properties and an efficient approach to recognizing and localizing multiple 3D free-form objects (free-form object recognition and localization) are presented.
Abstract: An effective method of surface characterization of 3D objects using surface curvature properties and an efficient approach to recognizing and localizing multiple 3D free-form objects (free-form object recognition and localization) are presented. The approach is surface based and is therefore not sensitive to noise and occlusion; it forms hypotheses by local analysis of surface shapes, does not depend on the visibility of complete objects, and uses information from a CAD database in recognition and localization. A knowledge representation scheme for describing free-form surfaces is described. The data structure and procedures are well designed, so that the knowledge leads the system to intelligent behavior. Knowledge about surface shapes is abstracted from CAD models to direct the search in verification of vision hypotheses. The knowledge representation used eases the processes of knowledge acquisition, information retrieval, modification of the knowledge base, and reasoning for solution.

Journal ArticleDOI
TL;DR: In this article, the authors developed parallel algorithms for dynamic control of both processing and communication complexity during execution for intermediate and high levels of computer vision systems, and implemented algorithms for plane detection and object recognition on a flexible transputer network.
Abstract: Developing parallel algorithms for intermediate and high levels of computer vision systems is addressed. Because the algorithms are complex and the nature and size of the input and output data sets vary for each application, the authors have directly developed parallel algorithms for dynamic control of both processing and communication complexity during execution. They have also examined the merits of functional prototyping and transforming programs into imperative execution code for final implementation. To evaluate and give direction to their work, they have implemented algorithms for plane detection and object recognition on a flexible transputer network.