Showing papers on "Orientation (computer vision) published in 1986"


Journal ArticleDOI
TL;DR: The algorithm appears to be more effective than previous techniques for two key reasons: 1) the gradient orientation is used as the initial organizing criterion prior to the extraction of straight lines, and 2) the global context of the intensity variations associated with a straight line is determined prior to any local decisions about participating edge elements.
Abstract: This paper presents a new approach to the extraction of straight lines in intensity images. Pixels are grouped into line-support regions of similar gradient orientation, and then the structure of the associated intensity surface is used to determine the location and properties of the edge. The resulting regions and extracted edge parameters form a low-level representation of the intensity variations in the image that can be used for a variety of purposes. The algorithm appears to be more effective than previous techniques for two key reasons: 1) the gradient orientation (rather than gradient magnitude) is used as the initial organizing criterion prior to the extraction of straight lines, and 2) the global context of the intensity variations associated with a straight line is determined prior to any local decisions about participating edge elements.

742 citations
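A minimal sketch of the paper's organizing step, grouping pixels by quantized gradient orientation rather than gradient magnitude. This is an illustrative reconstruction, not the authors' implementation; the bin count and the synthetic step-edge image are arbitrary choices.

```python
import numpy as np

def orientation_bins(image, n_bins=8):
    """Quantize gradient orientation into n_bins fixed partitions,
    the first step toward forming line-support regions."""
    gy, gx = np.gradient(image.astype(float))
    theta = np.arctan2(gy, gx) % np.pi            # orientation, not direction
    bins = (np.floor(theta / (np.pi / n_bins)).astype(int)) % n_bins
    mag = np.hypot(gx, gy)
    bins[mag < 1e-6] = -1                          # no orientation where flat
    return bins, mag

# vertical step edge: all edge pixels fall into one orientation bin,
# so they would be grouped into a single line-support region
img = np.zeros((8, 8))
img[:, 4:] = 1.0
bins, mag = orientation_bins(img)
```

Pixels sharing a bin (and connected spatially) would then form the line-support regions from which edge location and properties are estimated.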


Journal ArticleDOI
TL;DR: This paper presents a comparative study and survey of model-based object-recognition algorithms for robot vision, and an evaluation and comparison of existing industrial part-recognition systems and algorithms is given, providing insights for progress toward future robot vision systems.
Abstract: This paper presents a comparative study and survey of model-based object-recognition algorithms for robot vision. The goal of these algorithms is to recognize the identity, position, and orientation of randomly oriented industrial parts. In one form this is commonly referred to as the "bin-picking" problem, in which the parts to be recognized are presented in a jumbled bin. The paper is organized according to 2-D, 2½-D, and 3-D object representations, which are used as the basis for the recognition algorithms. Three central issues common to each category, namely, feature extraction, modeling, and matching, are examined in detail. An evaluation and comparison of existing industrial part-recognition systems and algorithms is given, providing insights for progress toward future robot vision systems.

656 citations


Journal ArticleDOI
TL;DR: In this article, 2D Gabor filters, which model simple-cell receptive fields in the striate cortex, were applied to texture discrimination; the performance of the computer models suggests that cortical neurons with Gabor-like receptive fields may be involved in preattentive texture discrimination.
Abstract: A 2D Gabor filter can be realized as a sinusoidal plane wave of some frequency and orientation within a two-dimensional Gaussian envelope. Its spatial extent, frequency and orientation preferences as well as bandwidths are easily controlled by the parameters used in generating the filters. However, there is an "uncertainty relation" associated with linear filters which limits the resolution simultaneously attainable in space and frequency. Daugman (1985) has determined that 2D Gabor filters are members of a class of functions achieving optimal joint resolution in the 2D space and 2D frequency domains. They have also been found to be a good model for two-dimensional receptive fields of simple cells in the striate cortex (Jones 1985; Jones et al. 1985). The characteristic of optimal joint resolution in both space and frequency suggests that these filters are appropriate operators for tasks requiring simultaneous measurement in these domains. Texture discrimination is such a task. Computer application of a set of Gabor filters to a variety of textures found to be preattentively discriminable produces results in which differently textured regions are distinguished by first-order differences in the values measured by the filters. This ability to reduce the statistical complexity distinguishing differently textured regions, as well as the sensitivity of these filters to certain types of local features, suggests that Gabor functions can act as detectors of certain "texton" types. The performance of the computer models suggests that cortical neurons with Gabor-like receptive fields may be involved in preattentive texture discrimination.

525 citations
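The filter described above, a sinusoidal plane wave inside a Gaussian envelope, is straightforward to generate. A minimal sketch follows; the kernel size, wavelength, and sigma are illustrative parameter choices, not values from the paper.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """2-D Gabor filter: a cosine plane wave of the given wavelength and
    orientation theta, modulated by a circular Gaussian envelope."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # coordinate along the wave
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

g = gabor_kernel(size=15, wavelength=6.0, theta=0.0, sigma=3.0)
```

Convolving an image with a bank of such kernels at several orientations and frequencies yields the per-region response values on which the texture discrimination is based.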


Journal ArticleDOI
TL;DR: This paper presents the first mathematically rigorous scheme for labelling line drawings of the class of scenes described, cataloguing all local labelling possibilities for the different types of junctions in a line drawing.
Abstract: In this thesis, we study the problem of interpreting line drawings of scenes composed of opaque regular solid objects bounded by piecewise smooth surfaces with no markings or texture on them. It is assumed that the line drawing has been formed by orthographic projection of such a scene under general viewpoint, that the line drawing is error free, and that there are no lines due to shadows or specularities. Our definition implicitly excludes laminae, wires and the apices of cones. A major component of the interpretation of line drawings is line labelling. By line labelling we mean (a) classification of each image curve as corresponding to either a depth or orientation discontinuity in the scene, and (b) further subclassification of each kind of discontinuity. For a depth discontinuity we determine whether it is a limb--a locus of points on the surface where the line of sight is tangent to the surface--or an occluding edge--a tangent plane discontinuity of the surface. For an orientation discontinuity we determine whether it corresponds to a convex or concave edge. This thesis presents the first mathematically rigorous scheme for labelling line drawings of the class of scenes described. By analysing the projection of the neighborhoods of different kinds of points on a piecewise smooth surface, we are able to catalog all local labelling possibilities for the different types of junctions in a line drawing. An algorithm is developed which utilizes this catalog to determine all legal labellings of the line drawing. A local minimum complexity rule--at each vertex select those labellings which correspond to the minimum number of faces meeting at the vertex--is used in order to prune highly counter-intuitive interpretations. The labelling scheme was implemented and tested on a number of line drawings. The labellings obtained are few and by and large in accordance with human interpretations.

345 citations


Journal ArticleDOI
TL;DR: Test results indicate the ability of the technique developed in this work to recognize partially occluded objects and Processing-speed measurements show that the method is fast in the recognition mode.
Abstract: In this paper, a method of classifying objects is reported that is based on the use of autoregressive (AR) model parameters which represent the shapes of boundaries detected in digitized binary images of the objects. The object identification technique is insensitive to object size and orientation. Three pattern recognition algorithms that assign object names to unlabelled sets of AR model parameters were tested and the results compared. Isolated object tests were performed on five sets of shapes, including eight industrial shapes (mostly taken from the recognition literature), and recognition accuracies of 100 percent were obtained for all pattern sets at some model order in the range 1 to 10. Test results indicate the ability of the technique developed in this work to recognize partially occluded objects. Processing-speed measurements show that the method is fast in the recognition mode. The results of a number of object recognition tests are presented. The recognition technique was realized with Fortran programs, Imaging Technology, Inc. image-processing boards, and a PDP 11/60 computer. The computer algorithms are described.

236 citations
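The shape signature behind this method, AR model parameters fitted to a boundary sequence, can be sketched briefly. This is an illustrative reconstruction under simplifying assumptions (the boundary is reduced to a radius-versus-angle sequence and the AR fit is a plain circular least squares), not the paper's estimator; note that the fitted coefficients are unchanged when the starting point of the boundary traversal shifts.

```python
import numpy as np

def ar_coeffs(radii, order):
    """Least-squares AR fit to a closed-boundary radius sequence:
    r[t] is predicted from its previous `order` samples, with
    wrap-around because the boundary is closed."""
    r = np.asarray(radii, float)
    X = np.column_stack([np.roll(r, k) for k in range(1, order + 1)])
    a, *_ = np.linalg.lstsq(X, r, rcond=None)
    return a

# radii of a synthetic lobed boundary sampled at 64 angles
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
radii = 2.0 + 0.5 * np.cos(2 * t) + 0.25 * np.sin(5 * t)
a = ar_coeffs(radii, order=4)
rot = ar_coeffs(np.roll(radii, 16), order=4)   # same shape, rotated start
```

Because the fit is circular, a rotation of the object (a shift of the boundary sequence) leaves the parameter vector unchanged, which is the invariance the classifier relies on.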


Journal ArticleDOI
TL;DR: The necessary techniques are developed for optimal local parameter estimation and primitive boundary or surface type recognition for each small patch of data; these inaccurate locally derived parameter estimates are then optimally combined to arrive at a roughly globally optimum object-position estimate.
Abstract: New asymptotic methods are introduced that permit computationally simple Bayesian recognition and parameter estimation for many large data sets described by a combination of algebraic, geometric, and probabilistic models. The techniques introduced permit controlled decomposition of a large problem into small problems for separate parallel processing where maximum likelihood estimation or Bayesian estimation or recognition can be realized locally. These results can be combined to arrive at globally optimum estimation or recognition. The approach is applied to the maximum likelihood estimation of 3-D complex-object position. To this end, the surface of an object is modeled as a collection of patches of primitive quadrics, i.e., planar, cylindrical, and spherical patches, possibly augmented by boundary segments. The primitive surface-patch models are specified by geometric parameters, reflecting location, orientation, and dimension information. The object-position estimation is based on sets of range data points, each set associated with an object primitive. Probability density functions are introduced that model the generation of range measurement points. This entails the formulation of a noise mechanism in three-space accounting for inaccuracies in the 3-D measurements and possibly for inaccuracies in the 3-D modeling. We develop the necessary techniques for optimal local parameter estimation and primitive boundary or surface type recognition for each small patch of data, and then optimally combine these inaccurate locally derived parameter estimates to arrive at a roughly globally optimum object-position estimate.

171 citations


Journal ArticleDOI
TL;DR: This report describes a method of detecting the orientation of a set of components which are located along parallel lines by using the histogram of nearest neighbor directions.

156 citations
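The idea summarized above admits a very short sketch: for each component, take the direction to its nearest neighbour, fold directions into [0, pi), histogram them, and read off the peak. This is an illustrative reconstruction; the bin count and the synthetic row layout are arbitrary choices.

```python
import numpy as np

def dominant_direction(points, n_bins=18):
    """Peak of the histogram of nearest-neighbour directions, folded to
    [0, pi) since the layout orientation is undirected."""
    pts = np.asarray(points, float)
    d = pts[:, None, :] - pts[None, :, :]          # pairwise difference vectors
    dist = np.hypot(d[..., 0], d[..., 1])
    np.fill_diagonal(dist, np.inf)                 # ignore self-distance
    nn = dist.argmin(axis=1)                       # nearest neighbour of each point
    v = pts[nn] - pts
    ang = np.arctan2(v[:, 1], v[:, 0]) % np.pi     # undirected orientation
    hist, edges = np.histogram(ang, bins=n_bins, range=(0, np.pi))
    peak = hist.argmax()
    return (edges[peak] + edges[peak + 1]) / 2     # centre of the peak bin

# components spaced closely along horizontal rows, rows far apart
pts = [(x, 10 * y) for y in range(3) for x in range(5)]
theta = dominant_direction(pts)                    # near-horizontal orientation
```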


Patent
19 May 1986
TL;DR: In this article, a method and system are provided for automatically locating an object at a vision station by performing an edge-detecting algorithm on at least a portion of the gray-scale digitized image of the object.
Abstract: A method and system are provided for automatically locating an object at a vision station by performing an edge-detecting algorithm on at least a portion of the gray-scale digitized image of the object. Preferably, the algorithm comprises an implementation of the Hough transform which includes the iterative application of a direction-sensitive, edge-detecting convolution to the digital image. Each convolution is applied with a different convolution mask or filter, each of which is calculated to give maximum response to an edge of the object in a different direction. The method and system have the ability to extract edges from low contrast images. Also, preferably, a systolic array processor applies the convolutions. The implementation of the Hough transform also includes the steps of shifting the resulting edge-enhanced images by certain amounts in the horizontal and vertical directions, summing the shifted images together into an accumulator buffer to obtain an accumulator image and detecting the maximum response in the accumulator image which corresponds to the location of an edge. If the object to be found is permitted to rotate, at least one other feature, such as another edge, must be located in order to specify the location and orientation of the object. The location of the object when correlated with the nominal position of the object at the vision station provides the position and attitude of the object. The resultant data may be subsequently transformed into the coordinate frame of a peripheral device, such as a robot, programmable controller, numerical controlled machine, etc. for subsequent use by a controller of the peripheral device.

91 citations
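For intuition about the accumulator step, here is the classic rho-theta Hough line transform rather than the patent's convolve-shift-sum pipeline: each edge pixel votes for all (theta, rho) lines through it, and the accumulator peak gives the strongest straight edge. The resolution settings and the synthetic edge map are arbitrary choices for the sketch.

```python
import numpy as np

def hough_lines(edge_map, n_theta=180):
    """Vote in (theta, rho) space; return the (theta, rho) of the peak."""
    ys, xs = np.nonzero(edge_map)
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    diag = int(np.ceil(np.hypot(*edge_map.shape)))   # max possible |rho|
    acc = np.zeros((n_theta, 2 * diag + 1), int)
    for x, y in zip(xs, ys):
        rho = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[np.arange(n_theta), rho + diag] += 1     # one vote per theta
    t, r = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[t], r - diag

# a vertical edge at x = 5 produces a peak at theta = 0, rho = 5
edges = np.zeros((16, 16), bool)
edges[:, 5] = True
theta, rho = hough_lines(edges)
```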


Patent
22 Dec 1986
TL;DR: In this paper, the structural identity of stored reference patterns with image contents or portions is determined, irrespective of the position of said image content or portion in the image to be analyzed, where the image is subjected to a two-dimensional Fourier transformation operation and the separated amplitude distribution or power distribution is compared to amplitude or power distributions in respect of the reference patterns in the Fourier range.
Abstract: A process for analyzing a two-dimensional image, wherein the structural identity of stored reference patterns with image contents or portions is determined, irrespective of the position of said image content or portion in the image to be analyzed. The image is subjected to a two-dimensional Fourier transformation operation and the separated amplitude distribution or power distribution is compared to amplitude or power distributions in respect of the reference patterns in the Fourier range, while determining the respective probability of identity, the twist angle and the enlargement factor as between the reference pattern and the image content or portion. Storage and processing of the image and the reference patterns or the Fourier transforms thereof are effected in digital form. In order to locate an image content or portion in the original image, which is identical with a reference pattern, the respective reference pattern or the Fourier transform thereof is assimilated to said image content or portion, in respect of size and orientation, by inverse rotary extension, with the ascertained twist angle and enlargement factor, and finally the position or positions at which the reference pattern when converted in that way has maximum identity with a section of the image is established.

88 citations
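The key property the process exploits, that the separated amplitude (or power) spectrum is independent of where the pattern sits in the image, is easy to demonstrate. A minimal sketch under the assumption of a circular (wrap-around) shift, where the invariance is exact:

```python
import numpy as np

def spectrum_signature(image):
    """Magnitude of the 2-D Fourier transform. A spatial shift of the
    image changes only the phase, so the magnitude is unchanged."""
    return np.abs(np.fft.fft2(image))

rng = np.random.default_rng(0)
patch = rng.random((8, 8))
img = np.zeros((32, 32))
img[4:12, 4:12] = patch                      # pattern at one position
shifted = np.roll(np.roll(img, 7, axis=0), 11, axis=1)  # same pattern, moved
s1, s2 = spectrum_signature(img), spectrum_signature(shifted)
```

Comparing such signatures against stored reference spectra is the position-independent matching step; rotation and scale (the twist angle and enlargement factor) still have to be resolved separately, as the abstract describes.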


Journal ArticleDOI
TL;DR: It is shown that the interpretation for orientation and motion of planar surfaces is unique when either two successive image flows of one planar surface patch are given or one image flow of two planar patches moving as a rigid body is given; this is proved by deriving explicit expressions for the evolving solution of an image flow sequence with time.
Abstract: In this paper we consider the recovery of the 3-dimensional motion and orientation of a rigid planar surface from its image flow field. Closed form solutions are derived for the image flow equations formulated by Waxman and Ullman [1]. Also we give two important results relating to the uniqueness of solutions for the image flow equations. The first result concerns resolving the duality of interpretations that are generally associated with the instantaneous image flow of an evolving image sequence. It is shown that the interpretation for orientation and motion of planar surfaces is unique when either two successive image flows of one planar surface patch are given or one image flow of two planar patches moving as a rigid body is given. We have proved this by deriving explicit expressions for the evolving solution of an image flow sequence with time. These expressions can be used to resolve this ambiguity of interpretation in practical problems. The second result is the proof of uniqueness for the velocity of approach which satisfies the image flow equations for planar surfaces derived in [1]. In addition, it is shown that this velocity can be computed as the middle root of a cubic equation. These two results together suggest a new method for solving the image flow problem for planar surfaces in motion. We also describe a scheme to use first-order time derivatives of the image flow field in place of the second-order spatial derivatives to solve for the orientation and motion.

82 citations


Journal ArticleDOI
TL;DR: A 2-stage algorithm recognizes one or more 3-dimensional objects in an image that contains the perspective projections of those objects; a linear least squares algorithm is then applied in order to compute a better estimate.
Abstract: This paper presents a 2-stage algorithm that recognizes one or more 3-dimensional objects in an image that contains the perspective projections of those objects. In the first stage, the recognition scheme solves for estimates of the free rotational and translational parameters by first matching the individual edges, and then restricting these matches so that junctions are matched to vertices. A generalized Hough transform is used to record the computed matches. In the second stage, correspondences between model and image features are determined using the estimates of the first stage, and a linear least squares algorithm is applied in order to compute a better estimate. The effects of errors in the extraction of image data and in the computation of known parameters are considered. The technique is demonstrated with images containing single objects and multiple objects.

Patent
20 Mar 1986
TL;DR: In this paper, the first, second and third sensor lines consisting of parallel rows of photosensitive semiconductor elements arranged transversely of the flight path of the craft are arranged so that the lines are spaced apart from each other so that a first terrain image sensed by the lines during a first scanning period partially overlaps a second terrain image detected by the line during a second, successive scanning period.
Abstract: A device for use on aircraft or spacecraft provides data corresponding to the course and orientation of the craft, and a digital display of the terrain over which the craft is travelling. The device includes at least first, second and third sensor lines consisting of parallel rows of photosensitive semiconductor elements arranged transversely of the flight path of the craft. The sensor lines provide line images corresponding to terrain images directed onto the lines, and the lines are spaced apart from each other so that a first terrain image sensed by the lines during a first scanning period partially overlaps a second terrain image sensed by the lines during a second, successive scanning period. The device also includes a lens system for continuously directing the terrain images onto the lines, and systems for reading out and storing the line images from the sensor lines during each scanning period and for correlating certain picture reference points in the second terrain image with the same picture reference points in the first terrain image. A computer then operates to determine the orientation of the second terrain image in accordance with intersections, at each of the picture reference points on the terrain, of homologous image rays produced during the first and second scanning periods.

Patent
26 Jun 1986
TL;DR: In this article, an incremental surface is defined by at least three adjacent non-collinear pixel addresses from the transformed image, a circuit for determining whether or not a vector normal to the incremental surface points towards or away from a viewing point, and a device for selecting for display at one of the transformed pixel addresses the corresponding portion of the front or the back of the input image in dependence on whether the vector points towards the viewing point.
Abstract: A method of processing video signals to achieve a visual effect involving three-dimensional manipulation of an input image includes the use of a circuit for determining pixel address by pixel address whether in the resulting transformed image the front or the back of the input image should be displayed. The circuit comprises a device defining an incremental surface in the transformed image by at least three adjacent non-collinear pixel addresses from the transformed image, a circuit for determining from the transformed pixel addresses whether or not a vector normal to the incremental surface points towards or away from a viewing point, and a device for selecting for display at one of the transformed pixel addresses the corresponding portion of the front or the back of the input image in dependence on whether the vector points towards or away from the viewing point.
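The front/back decision described above reduces to one cross product and one dot product. A minimal sketch, with the viewing direction taken along +z as an illustrative assumption:

```python
import numpy as np

def shows_front(p0, p1, p2, view=np.array([0.0, 0.0, 1.0])):
    """Normal of the incremental surface spanned by three non-collinear
    transformed addresses; its sign against the viewing direction decides
    whether the front or the back of the input image is displayed."""
    n = np.cross(np.subtract(p1, p0), np.subtract(p2, p0))
    return float(np.dot(n, view)) > 0.0

# counter-clockwise winding faces the viewer; clockwise shows the back
front = shows_front([0, 0, 0], [1, 0, 0], [0, 1, 0])
back = shows_front([0, 0, 0], [0, 1, 0], [1, 0, 0])
```

In the patent this test is evaluated per pixel address over the transformed image; the sketch shows only the per-surface decision.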

Journal ArticleDOI
TL;DR: A computationally efficient technique, based on the Hough transform, for the detection of straight edges in industrial images, which permits objects to be located within 1 pixel and orientated within ∼1°.

Journal ArticleDOI
TL;DR: In this study, the grey level index (GLI) is measured with a TV-based image analyzer from routine histological sections, which is a biased estimate of the volume density of Nissl-positive structures.

Patent
Nobuyuki Hirose1, Naoki Ota1
22 Apr 1986
TL;DR: In this paper, a histogram is used to identify the area of the image which contains the potentially interfering edge mark, and a detector is also provided responsive to the identification of that area for analyzing the image signal for only that portion of image signal which represents the image outside the area in which the edge mark was detected, to determine the orientation of the postal material.
Abstract: A postal material reading apparatus is provided with a mechanism for obtaining an image signal which represents a visual image of the surface of postal material. A detector is provided responsive to that image signal for identifying an area of that image which contains an edge mark that may interfere with orientation analysis of the image. A detector is also provided responsive to the identification of that area for analyzing the image signal for only that portion of the image signal which represents the image outside the area in which the edge mark was detected, to determine the orientation of the postal material. In the preferred embodiment, a histogram is used to identify the area of the image which contains the potentially interfering edge mark.

Journal ArticleDOI
TL;DR: This article describes some initial steps in the field of computer-aided neuroanatomy; an algorithm for unfolding and flattening cortical surfaces and a measurement of the differential geometric aspects of these surfaces are presented.
Abstract: In a variety of species including monkeys and humans, the surface of the retina is mapped in an accurate manner to the surface of primary visual cortex. In a real sense there is an image, expressed in the firing density of neurons, impressed on the surface of the brain. The various images found in the brain have complicated natures: They are "distorted" by nonlinear map functions, and contain submodality information expressed spatially in the form of columnar systems representing stereo, orientation, motion, and other forms of data. The detailed study of such maps represents a difficult series of problems in the areas of computer graphics, image processing, numerical analysis, and neuroanatomy. This article describes some initial steps in the field of computer-aided neuroanatomy. An algorithm for unfolding and flattening cortical surfaces and a measurement of the differential geometric aspects of these surfaces are presented. Models of the structure of images as they would appear mapped to the surface of primate striate cortex are also shown.

Patent
17 Mar 1986
TL;DR: In this article, a system for determining the attitude of an airborne platform such as a terrain image sensor includes a digital image correlator for comparing successive overlapping, instantaneous images of the terrain which are recorded by a second, two-dimensional image sensor whose image plane is oriented parallel to that of the first sensor.
Abstract: A system for determining the attitude of an airborne platform such as a terrain image sensor includes a digital image correlator for comparing successive overlapping, instantaneous images of the terrain which are recorded by a second, two-dimensional image sensor whose image plane is oriented parallel to that of the terrain image sensor. The second sensor generates an instantaneous master image and a subsequent slave image which at least partially overlaps the master image in terms of the terrain which is viewed. The master and slave images are approximately registered and a correlation is performed. A plurality of points on the slave image are correlated with the corresponding terrain points on the master image. The correlation is performed by selecting a plurality of spatially distributed patches of pixel arrays which are mathematically superimposed on and are moved about the slave image to determine the locations where the maximum of gray scale correlation occurs. These correlation points on the slave image are recorded and the coplanarity condition of photogrammetry determines the relative orientation of the slave with respect to the master. The relative orientation of the slave image with respect to the master image characterizes the attitude change of the platform. The technique also reveals changes in altitude and velocity of the platform when mean altitude and velocity are known. The image produced by the terrain image sensor can be altered on a real time basis using the information relating to the changes in platform attitude, or the master and slave image data can be recorded for subsequent use in modifying the image data recorded by the terrain image sensor.
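The core operation, sliding a patch of the slave image over the master and locating the maximum gray-scale correlation, can be sketched as follows. This is an illustrative brute-force normalized cross-correlation under the assumption of a pure circular translation between the two images, not the patented system; the image sizes and search window are arbitrary.

```python
import numpy as np

def best_match(master_patch, slave, search=4):
    """Exhaustively slide the patch over a small search window in the
    slave image; return the offset of maximum normalized correlation."""
    p = master_patch - master_patch.mean()
    h, w = p.shape
    best, best_dxy = -np.inf, (0, 0)
    for dy in range(search + 1):
        for dx in range(search + 1):
            win = slave[dy:dy + h, dx:dx + w]
            win = win - win.mean()
            score = (p * win).sum() / (np.linalg.norm(p) * np.linalg.norm(win) + 1e-12)
            if score > best:
                best, best_dxy = score, (dy, dx)
    return best_dxy

rng = np.random.default_rng(1)
master = rng.random((16, 16))
slave = np.roll(np.roll(master, 2, axis=0), 3, axis=1)  # platform moved by (2, 3)
offset = best_match(master[:8, :8], slave)
```

In the system, several such correlated point pairs feed the coplanarity condition of photogrammetry to recover the relative orientation of slave with respect to master.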

Journal ArticleDOI
TL;DR: In this article, a Laue indexing method for diffractometers equipped with a position and energy-sensitive detector at a pulsed neutron source or synchrotron X-ray source is described.
Abstract: A Laue indexing method is described which employs an orientation-matrix approach; it is especially suited for use in the case of a diffractometer equipped with a position- and energy-sensitive detector at a pulsed neutron source or synchrotron X-ray source. The crystal can be in any orientation and lattice constants need not be known. Using this method positional and wavelength information can be easily converted into a form suitable for an automatic indexing procedure in order to obtain indices and an orientation matrix, or with knowledge of the latter to predict positions of reflections incident on a detector centered at any desired 2θ.

Journal ArticleDOI
TL;DR: The display system facilitates operator interactivity, e.g., the user can point at structures within the volume image, remove selected image regions to more clearly visualize underlying structure, and control the orientation of brightened oblique planes through the volume.
Abstract: Described is a system for the multidimensional display and analysis of tomographic images utilizing the principle of variable focal (varifocal) length optics. The display system uses a vibrating mirror in the form of an aluminized membrane stretched over a loudspeaker, coupled with a cathode ray tube (CRT) display monitor suspended face down over the mirror, plus the associated digital hardware to generate a space filling display. The mirror is made to vibrate back and forth, as a spherical cap, by exciting the loudspeaker with a 30 Hz sine wave. "Stacks" of 2-D tomographic images are displayed, one image at a time, on the CRT in synchrony with the mirror motion. Because of the changing focal length of the mirror and the integrating nature of the human eye-brain combination, the time sequence of 2-D images, displayed on the CRT face, appears as a 3-D image in the mirror. The system simplifies procedures such as: reviewing large amounts of 3-D image information, exploring volume images in three dimensions, and gaining an appreciation or understanding of three-dimensional shapes and spatial relationships. The display system facilitates operator interactivity, e.g., the user can point at structures within the volume image, remove selected image regions to more clearly visualize underlying structure, and control the orientation of brightened oblique planes through the volume.

Proceedings ArticleDOI
01 Apr 1986
TL;DR: This report will review some of the algorithms presented earlier and show the latest experimental results on scenes that are more complex than before, as well as some manipulation experiments that for sensory feedback use 3-D vision data analyzed according to the algorithm presented here.
Abstract: In a recent report [7], we proposed algorithms for segmenting out the visible part of the topmost object from a pile of planar and curved objects. Planar objects considered were of convex polyhedral type, such as prisms, boxes, wedges, etc; and the curved objects were of the type that could be recognized uniquely by using the Extended Gaussian Image, such object-types being cylinders, cones, spheres, ellipsoids, toruses, etc. For input these algorithms used 3-D vision data acquired with a structured light scanner. In this report, we will review some of the algorithms presented earlier and show our latest experimental results on scenes that are more complex than before. In our presentation at the conference, we also plan to show some manipulation experiments that for sensory feedback use 3-D vision data analyzed according to the algorithms presented here.

Book ChapterDOI
TL;DR: In this article, local measurements of three-dimensional positions and surface normals are used to identify and locate objects, from among a set of known objects, where objects are modeled as polyhedra having up to six degrees of freedom relative to the sensors.
Abstract: This paper discusses how local measurements of three-dimensional positions and surface normals may be used to identify and locate objects, from among a set of known objects. The objects are modeled as polyhedra having up to six degrees of freedom relative to the sensors. We show that inconsistent hypotheses about pairings between sensed points and object surfaces can be discarded efficiently by using local constraints on: distances between faces, angles between face normals, and angles (relative to the surface normals) of vectors between sensed points. We show by simulation that the number of hypotheses consistent with these constraints is small. We also show how to recover the position and orientation of the object from the sense data. The algorithm's performance on data obtained from a triangulation range sensor is illustrated.
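The pruning idea, discarding hypothesized pairings of sensed points to model faces whenever a measured inter-point distance is infeasible for that face pair, can be illustrated with a toy constraint table. The face names and distance ranges below are made-up stand-ins (loosely modeled on a unit cube) for the paper's precomputed constraints, not values from the paper.

```python
import itertools

# Feasible (min, max) distance between points on a pair of faces of a
# hypothetical unit-cube model.
RANGES = {
    ("top", "top"): (0.0, 2 ** 0.5),       # both points on the top face
    ("top", "bottom"): (1.0, 3 ** 0.5),    # points on opposite faces
    ("top", "side"): (0.0, 3 ** 0.5),
}

def consistent(assignment, distances):
    """Reject an assignment of sensed points to faces if any measured
    inter-point distance falls outside the feasible range for its pair."""
    for (i, fi), (j, fj) in itertools.combinations(enumerate(assignment), 2):
        lo, hi = RANGES.get((fi, fj)) or RANGES[(fj, fi)]
        if not lo <= distances[(i, j)] <= hi:
            return False
    return True

# two sensed points measured 1.5 apart cannot both lie on the top face,
# but are consistent with lying on opposite faces
d = {(0, 1): 1.5}
same_face = consistent(["top", "top"], d)
opposite_ok = consistent(["top", "bottom"], d)
```

Such cheap pairwise checks prune the interpretation tree before any expensive pose computation, which is why the number of surviving hypotheses stays small.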

Patent
07 Feb 1986
TL;DR: In this article, a medical embodiment replaces the cameos with frames of signals representing tomographic sections of a patient and uses a LUT to provide a window for certain ranges of signals.
Abstract: A processing system and method which produces and manipulates images of a three dimensional object. Patches of video signals (called cameos) representing different parallel planes of an object are stored in a frame store from which each cameo can be accessed individually and manipulated as desired. When the effect of a change in orientation or position of the object is required in the output image each cameo is accessed and manipulated in the same manner and combined using key and priority signals. The manipulation is achieved by writing incoming video signals into addresses in a store determined by the manipulation required and provided as address maps. The medical embodiment replaces the cameos with frames of signals representing tomographic sections of a patient and uses a LUT to provide a window for certain ranges of signals.

01 Jun 1986
TL;DR: The coupled depth/slope model developed here provides a novel computational solution to the surface reconstruction problem and explicitly computes dense slope representations as well as dense depth representations.
Abstract: : Reconstructing a surface from sparse sensory data is a well known problem in computer vision. Early vision modules typically supply sparse depth, orientation and discontinuity information. The surface reconstruction module incorporates these sparse and possibly conflicting measurements of a surface into a consistent, dense depth map. The coupled depth/slope model developed here provides a novel computational solution to the surface reconstruction problem. This method explicitly computes dense slope representations as well as dense representations. This marked change from previous surface reconstruction algorithms allows a natural integration of orientation constraints into the surface description, a feature not easily incorporated into earlier algorithms. In addition, the coupled depth/slope model generalizes to allow for varying amounts of smoothness at different locations on the surface. This computational model helps conceptualize the problem and leads to two possible implementations-analog and digital. The model can be implemented as an electrical or biological analog network since the only computations required at each locally connected node are averages, additions and subtractions. A parallel digital algorithm can be derived by using finite difference approximations.

Patent
01 Dec 1986
TL;DR: In this paper, an average is formed from the environment pixels of an image pixel and this average is then compared with each environment pixel so as to obtain an output signal having one of three values (1, 0, -1) depending on whether the luminance or chrominance value of the particular environment pixel is above, within or below a given tolerance range.
Abstract: A method of detecting edge structures in a video signal, with a decision criterion being derived from the environment pixels of an image pixel so as to realize image coding with the least number of bits possible. The object of detecting all oblique edges, i.e. edges which are not horizontal or vertical, is accomplished in that an average is formed from the environment pixels of an image pixel and this average is then compared with each environment pixel so as to obtain an output signal having one of three values (1, 0, -1) depending on whether the luminance or chrominance value of the particular environment pixel is above, within or below a given tolerance range. A conclusion as to the existence of an edge and its orientation is then drawn from the number of immediately consecutive identical positive or negative values (1 or -1) of this three-valued signal for an image pixel and from the position of the changes in value within the sequence of environment pixels.
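
The three-valued decision criterion can be sketched directly. Names and the run-length interpretation are illustrative; the patent draws its edge/orientation conclusion from runs of identical +1/-1 labels and from where the value changes fall in the neighbour sequence.

```python
import numpy as np

def edge_signal(env, tol):
    """Label each environment (neighbourhood) pixel against the
    neighbourhood average: +1 above, -1 below, 0 within +/- tol."""
    env = np.asarray(env, dtype=float)
    avg = env.mean()
    out = np.zeros(env.size, dtype=int)
    out[env > avg + tol] = 1
    out[env < avg - tol] = -1
    return out

def longest_run(signal):
    """Longest run of immediately consecutive identical +1 or -1
    labels, treating the neighbour sequence as circular."""
    n = len(signal)
    doubled = list(signal) * 2
    best = cur = 0
    for i in range(2 * n):
        if doubled[i] != 0 and i > 0 and doubled[i] == doubled[i - 1]:
            cur += 1
        else:
            cur = 1 if doubled[i] != 0 else 0
        best = max(best, min(cur, n))
    return best
```

For an 8-neighbour ring split between dark and bright pixels, a run of four identical labels marks an oblique edge crossing the neighbourhood.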

Patent
10 Oct 1986
TL;DR: In this paper, a target member (20) mounted on a stationary object such as a pallet (15) comprises at least three reflective elements (52, 53, 54), which form images of an identification means (35) that define a plane (70) oriented other than normally with respect to the identification means (35), the images also defining a circle (82) that does not contain the identification means (35).
Abstract: A target member (20) mounted on a stationary object such as a pallet (15) comprises at least three reflective elements (52, 53, 54). An identification means (35), such as a high-intensity light source, and an image sensor (40) are carried by another, mobile object such as a lift truck (30). The reflective elements (52, 53, 54) are configured to form images of the identification means (35) that define a plane (70) having an orientation other than normal with respect to the identification means (35), the images also defining a circle (82) which does not contain the identification means (35). The target member (20) may be in the form of a vertically oriented flat support member (50) on which are mounted a pair of convex mirrors (52, 54) and a concave mirror (53). Images of the identification means (35) in the mirrors (52, 53, 54) are detected by an image sensor (40) such as a television camera (40), and the directions of each of the images seen by the camera (40) are used to determine all six degrees of freedom of the position and orientation of the sensor (40) relative to the target member (20). This information can be used to guide the lift truck (30) and position it with respect to a pallet (15).

Patent
25 Jul 1986
TL;DR: In this article, a map display apparatus for travelers and drivers presenting geographical regions (the world, continents, countries, states, cities etc.) as a combination of sequential zones illustrated on pocket size cards with each zone repeated four times for North, East, West, and South orientation.
Abstract: Map display apparatus for travelers and drivers presenting geographical regions (the world, continents, countries, states, cities, etc.) as a combination of sequential zones illustrated on pocket-size cards, with each zone repeated four times for North, East, West, and South (N E W S) orientation. It is supplemented by a set of accessories for tracing directions in an erasable or washable form, allowing multiple uses of the cards, and a portable holder for their storage.

Journal ArticleDOI
TL;DR: This paper considers the problems caused by the discretisation in solid modelling and offers compression and interpolation techniques to reduce them.
Abstract: The purpose of any technique for modelling structures is to store a representation of them, and to produce two-dimensional images such that a viewer correctly perceives the three-dimensional nature of the structures. There are many methods for including three-dimensional visual cues in a two-dimensional image, but probably the most important one is that of shading. The intensity of light at any point on the model depends mainly on the orientation of the surface at that point with respect to the direction of the light source. In solid voxel modelling this information has to be extracted from the model. The discretisation inherent to modelling techniques has to be allowed for if successful shading is to be achieved. This paper considers the problems caused by the discretisation in solid modelling and offers compression and interpolation techniques to reduce them.
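
Lambert's cosine law is the standard way to turn the surface orientation mentioned above into an intensity; a minimal sketch follows. The normal-estimation step for voxel data, which is where the paper's discretisation problems arise, is assumed to happen elsewhere.

```python
import numpy as np

def lambert_shade(normals, light_dir, ambient=0.1):
    """Intensity from surface orientation via Lambert's cosine law:
    I = ambient + (1 - ambient) * max(0, n . l).

    normals   -- array of unit surface normals, shape (..., 3)
    light_dir -- vector pointing toward the light source
    """
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)
    ndotl = np.tensordot(normals, l, axes=([-1], [0]))
    return ambient + (1.0 - ambient) * np.clip(ndotl, 0.0, None)
```

With normals quantised to the six axis directions of a raw voxel model, neighbouring surface patches jump between a handful of intensity values, which is exactly the artefact that interpolation of normals is meant to smooth out.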

Proceedings ArticleDOI
01 Apr 1986
TL;DR: The accuracy with which objects that generate images of line segments may be located in the presence of quantization noise is investigated and expressions for the standard deviation of errors in orientation and translation are derived.
Abstract: The accuracy with which objects that generate images of line segments may be located in the presence of quantization noise is investigated. Expressions for the standard deviation of errors in orientation and translation are derived. The accuracy of these expressions is substantiated by a number of Monte Carlo simulations and experimental tests.
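
A Monte Carlo experiment in the spirit of the paper's simulations can be set up in a few lines; the sampling scheme, fitting method and parameters here are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def orientation_error_std(theta, length, trials=2000, seed=0):
    """Monte Carlo estimate of the standard deviation (radians) of
    the orientation error of a line segment under unit-grid
    quantization. Points along a segment of the given length and
    true angle theta are rounded to the nearest pixel centre, a line
    is refit by total least squares, and the angular error recorded."""
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(trials):
        x0, y0 = rng.uniform(0, 1, 2)          # random sub-pixel offset
        t = np.linspace(0, length, int(length) + 1)
        x = x0 + t * np.cos(theta)
        y = y0 + t * np.sin(theta)
        xq, yq = np.round(x), np.round(y)      # quantize to the grid
        # total least squares fit: principal axis of the point cloud
        pts = np.stack([xq - xq.mean(), yq - yq.mean()])
        u, _, _ = np.linalg.svd(pts)
        est = np.arctan2(u[1, 0], u[0, 0])
        # wrap into (-pi/2, pi/2]: line direction is ambiguous by pi
        errs.append((est - theta + np.pi / 2) % np.pi - np.pi / 2)
    return float(np.std(errs))
```

Running this for increasing segment lengths shows the expected trend: longer segments yield smaller orientation error, consistent with the derived expressions being functions of segment length.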

Patent
30 Jan 1986
TL;DR: In this article, a reader printer with an image rotation prism for adapting the direction of a microfilm image to a suitable direction is described, and the reader printer also has a prism orientation detecting means for automatically eliminating any difference between the position of the image and the position of the fed paper.
Abstract: Disclosed is a reader printer having an image rotation prism for adapting the direction of a microfilm image to a suitable direction. The reader printer has an image direction detecting means for forming the microfilm image correctly onto copy paper regardless of the direction of the image. The reader printer also has a prism orientation detecting means for automatically eliminating any difference between the position of the image and the position of the fed paper.