
Showing papers on "Orientation (computer vision) published in 1983"


Journal ArticleDOI
TL;DR: The implementation of a theory for the detection of intensity changes, proposed by Marr and Hildreth, where the image is first processed independently through a set of different size operators, whose shape is the Laplacian of a Gaussian, ∇²G(x, y).
Abstract: This article describes the implementation of a theory for the detection of intensity changes, proposed by Marr and Hildreth (Proc. R. Soc. London, Ser. B 207, 1980, 187–217). According to this theory, the image is first processed independently through a set of different size operators, whose shape is the Laplacian of a Gaussian, ∇²G(x, y). The loci along which the convolution outputs cross zero mark the positions of intensity changes at different resolutions. These zero-crossings can be described by their position, the slope of the convolution output across zero, and their two-dimensional orientation. The set of descriptions from different operator sizes forms the input for later visual processes, such as stereopsis and motion analysis. There are close parallels between this theory and the early processing of information by the human visual system.
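The pipeline this abstract describes can be sketched in a few lines of Python: convolve with a Laplacian-of-Gaussian at some scale, then mark sign changes in the output. Everything concrete below (the operator scale, the synthetic step-edge image, the function name) is illustrative, not taken from the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def marr_hildreth(image, sigma):
    """Convolve with the Laplacian of a Gaussian at scale `sigma`,
    then mark sign changes between adjacent convolution outputs:
    the zero-crossing loci of the Marr-Hildreth theory."""
    out = gaussian_laplace(image.astype(float), sigma)
    zc = np.zeros(out.shape, dtype=bool)
    # a zero-crossing lies between neighbours whose outputs differ in sign
    zc[:, :-1] |= np.signbit(out[:, :-1]) != np.signbit(out[:, 1:])
    zc[:-1, :] |= np.signbit(out[:-1, :]) != np.signbit(out[1:, :])
    return out, zc

# synthetic vertical step edge: the zero-crossings should trace it
img = np.zeros((32, 32))
img[:, 16:] = 1.0
out, edges = marr_hildreth(img, sigma=2.0)
```

Running the detector at several values of `sigma` gives the multi-resolution set of descriptions the theory feeds to later processes such as stereopsis.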

204 citations


Patent
10 Jan 1983
TL;DR: In this article, an image composition system is described that includes framestores 30, 31 for receiving information from first and second picture sources; a processor is controlled by picture shape information made available from a third framestore 32.
Abstract: The image composition system includes framestores 30, 31 for receiving information from first and second picture sources. A processor 33 provides the composed image by using information from these sources. The processor is controlled by picture shape information made available from a third framestore 32. This shape information may be provided by a camera 26 receiving an image of a silhouette for example, or the shape can be manually generated via a touch tablet 38. The instantaneous value of the shape controls the blending of the pictures such that portions of the picture can be taken from a scene and inserted without noticeable degradation. Manipulation of the position, orientation or size of the inserted picture portion for example can also be effected.

165 citations


Journal ArticleDOI
TL;DR: In this paper, a method of two-dimensional shape-fabric analysis is presented that is based on the projection of lines or ellipses onto the x-axis; the average length of projection of the lines (e.g., outlines of ooides, pressure-solution seams) is evaluated while the fabric is rotated by small increments about the origin of the x-y plane.

99 citations


Journal ArticleDOI
TL;DR: The theory governing shadow constraints under orthography is described and extended to shadows cast by polyhedra and curved surfaces, and some methods are presented for combining shadow geometry with other gradient space techniques for 3D shape inference.
Abstract: Given a line drawing from an image with shadow regions identified, the shapes of the shadows can be used to generate constraints on the orientations of the surfaces involved. This paper describes the theory which governs those constraints under orthography. A “Basic Shadow Problem” is first posed, in which there is a single light source, and a single surface casts a shadow on another (background) surface. There are six parameters to determine: the orientation (two parameters) for each surface, and the direction of the vector (two parameters) pointing at the light source. If some set of three of these are given in advance, the remaining three can then be determined geometrically. The solution method consists of identifying “illumination surfaces” consisting of illumination vectors, assigning Huffman-Clowes line labels to their edges, and applying the corresponding constraints in gradient space. The analysis is extended to shadows cast by polyhedra and curved surfaces; in both cases, the constraints provided by shadows can be analyzed in a manner analogous to the Basic Shadow Problem, and similar techniques apply when the shadow falls upon a polyhedron or curved surface. The consequences of varying the position and number of light sources are also discussed. Finally, some methods are presented for combining shadow geometry with other gradient space techniques for 3D shape inference.

78 citations


Journal ArticleDOI
Chih-Shing Ho1
TL;DR: The precision of geometric features such as the centroid, area, perimeter, and orientation measured by a 2-D digital vision system is analyzed and tested experimentally and the sampling process can be simplified or eliminated.
Abstract: The precision of geometric features such as the centroid, area, perimeter, and orientation measured by a 2-D digital vision system is analyzed and tested experimentally. The digitizing error of various geometric features can be expressed in terms of the dimensionless perimeter of the object. As a result of this work, the sampling process can be simplified or eliminated. The analysis is also expanded to cover 3-D digital vision systems.
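As a rough illustration of the kind of features the paper's error analysis concerns, here is a generic moment-based sketch of centroid, area, perimeter and orientation for a binary blob. The function name, the perimeter estimate (boundary-pixel count) and the test rectangle are made up for illustration; this is not the paper's precision model:

```python
import numpy as np

def blob_features(mask):
    """Centroid, area, crude perimeter, and orientation of a binary
    blob, from raw and central image moments."""
    ys, xs = np.nonzero(mask)
    area = xs.size
    cx, cy = xs.mean(), ys.mean()
    # central second moments give the axis of least inertia
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    orientation = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    # crude perimeter: count pixels with at least one 4-neighbour outside
    padded = np.pad(mask, 1)
    has_all_neighbours = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                          & padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = int((mask & ~has_all_neighbours).sum())
    return area, (cx, cy), perimeter, orientation

mask = np.zeros((20, 20), dtype=bool)
mask[8:12, 5:15] = True   # 4 x 10 rectangle, long axis horizontal
area, (cx, cy), per, theta = blob_features(mask)
```

A digitizing-error study of the paper's kind would compare these measured values against the known continuous shape as the sampling grid varies.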

75 citations


Journal ArticleDOI
TL;DR: A relaxation method based on patterns of local features is used to find matches between pairs of images or subimages that differ in position or orientation, yielding good results for TV images of objects such as tools and industrial parts, as well as for aerial images of terrain.

71 citations


Patent
27 Dec 1983
TL;DR: Contour Radiography (CONRAD) as mentioned in this paper reconstructs the 3D coordinates of an identifiable contour on an object without relying on markers or pre-existing knowledge of the geometry of the object.
Abstract: A method and apparatus for reconstructing the three-dimensional coordinates of an identifiable contour on an object without relying on markers or pre-existing knowledge of the geometry of the object is described. The technique is defined as Contour Radiography. In the preferred embodiment two X-ray sources irradiate an object possessing a radiographically identifiable contour and then the two images of the contour are projected onto an X-ray film at spaced locations on the film plane. These images are digitized by the tracing of the image curves with a cursor or some other means thereby establishing the coordinates of an arbitrary number of image points. The digital data thus obtained is processed in accordance with a Contour Radiography (CONRAD) algorithm to identify corresponding points on the two curves which originate from the same point on the physical contour. The spatial coordinates of the X-ray sources are determined using a special calibration system. Then the coordinates of each corresponding point pair are processed with the spatial coordinates of the X-ray source to determine the three-dimensional coordinates of their originating space-point on the contour. In this way the three-dimensional coordinates of the contour are determined. The three-dimensional coordinates are then processed in a commercially available graphics system to visually display the reconstructed contour. The technique has particular application in medicine for determining the undistorted shape, position, size and orientation of selected internal organs, such as bone, which have a radiographically identifiable contour.
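The geometric core of the final step — recovering a 3-D contour point from a pair of corresponding image points and the two known source positions — is ordinary two-ray triangulation. A sketch with invented coordinates (this is not the CONRAD algorithm itself, which also solves the correspondence and calibration problems):

```python
import numpy as np

def triangulate(src_a, img_a, src_b, img_b):
    """Midpoint of the common perpendicular between the two rays
    source -> image point; with exact data the rays intersect at the
    original contour point."""
    da = (img_a - src_a) / np.linalg.norm(img_a - src_a)
    db = (img_b - src_b) / np.linalg.norm(img_b - src_b)
    # least-squares ray parameters t, s:  src_a + t*da  ~=  src_b + s*db
    A = np.stack([da, -db], axis=1)
    (t, s), *_ = np.linalg.lstsq(A, src_b - src_a, rcond=None)
    return 0.5 * (src_a + t * da + src_b + s * db)

# a known contour point, two X-ray source positions, film plane z = 0
P = np.array([1.0, 2.0, 3.0])
src_a, src_b = np.array([0.0, 0.0, 10.0]), np.array([5.0, 0.0, 10.0])

def project(src):
    """Intersect the ray src -> P with the film plane z = 0."""
    t = src[2] / (src[2] - P[2])
    return src + t * (P - src)

recovered = triangulate(src_a, project(src_a), src_b, project(src_b))
```

With noisy digitized curves the midpoint-of-common-perpendicular form degrades gracefully, which is why it is the usual choice over an exact intersection.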

68 citations


Patent
01 Sep 1983
TL;DR: In this article, a three-dimensional triangulation-type sensor-illumination system adapted for connection to a machine vision computer is presented, which comprises a unique cross hair light pattern which provides sufficient image data to enable the computer to make three dimensional measurements of a wide variety of features including edges, corners, holes, studs, designated portions of a surface and intersections of surfaces.
Abstract: A three-dimensional triangulation-type sensor-illumination system adapted for connection to a machine vision computer. The illumination source in the preferred system comprises a unique cross hair light pattern which provides sufficient image data to the computer to enable the computer to make three-dimensional measurements of a wide variety of features, including edges, corners, holes, studs, designated portions of a surface and intersections of surfaces. The illumination source and sensor are both mounted within a single housing in a specific position and orientation relative to one another, thereby permitting the system to be internally calibrated. In addition, the sensor-illuminator unit is preferably mounted in a test fixture so that the light source is substantially normal to the surface of the part to be examined and the sensor is thereby positioned at a perspective angle relative thereto.

58 citations


Patent
19 Apr 1983
TL;DR: In this article, a vision correction system for workpiece sensing is used to detect the deviation between the first repeat pass welding path and the actual welding path of the workpiece, and the vision system in response to the deviation data provided by the image processor is utilized to provide corrected welding path data for the welding of the actual workpiece during a second repeat pass.
Abstract: Control apparatus for manipulator welding apparatus is provided that includes a vision correction system for workpiece sensing. During an initial teach mode, the manipulator is taught the desired welding path on a workpiece and data is recorded representing the welding path relative to a frame of reference defined by the workpiece. In addition to the data representing the taught welding path and the frame of reference, data representing one or more reference images or templates are also recorded in the teach mode. As successive workpieces are presented to the manipulator for performing the desired welding path, the control apparatus in a repeat work cycle mode is first controlled to measure and define the new location and orientation of the frame of reference in accordance with the workpiece. The new frame of reference data is utilized to modify the path data. In a preferred arrangement, a first repeat pass is performed by controlling the manipulator to move over the weld path in accordance with modified weld path data as determined from the new frame of reference data. The vision system utilizing an image processor detects the deviation between the first repeat pass welding path and the actual welding path of the workpiece. The control apparatus in response to the deviation data provided by the image processor in the first repeat pass modifies the first repeat pass weld path data to provide corrected welding path data for the welding of the actual workpiece during a second repeat pass with the weld being initiated at a predetermined point relative to the frame of reference.

48 citations


Journal ArticleDOI
TL;DR: It is argued that surface shape is implicitly encoded in the positions of the zero-crossings of the convolved images, and further, that the encoding can be at least partially inverted to construct an approximation to the viewed surface from the correspondence information.
Abstract: Most computational theories of early visual processing require, as a first stage, the extraction of a symbolic representation (called the primal sketch) of noticeable changes in image irradiance. For example, the zero-crossings of a Laplacian of a Gaussian operator applied to the image form the basis of one possible representation. Computational theories of stereo or motion correspondence only specify the computation of three-dimensional surface information at such points. Yet, the visual perception, consistent for different viewers, is clearly of complete surfaces. Since in principle the class of surfaces which could pass through the known boundary points provided by feature point correspondence is infinite and contains widely varying surfaces, the visual system must incorporate some additional constraints besides the known points to compute the complete surface. Using the image irradiance equation, a surface consistency constraint, referred to informally as “no news is good news”, is derived. The constraint implies that the surface must agree with the correspondence information, and not vary radically between these points. An explicit form of this surface consistency constraint is derived, by relating the probability of a zero-crossing in a region of the image to the variation in the local surface orientation, provided that the surface albedo and the illumination are roughly constant. The surface consistency constraint is informally supported by Logan's theorem, which essentially states that all the critical information of a signal is generally contained in its zero-crossings, and by the demonstration that the transformation from surface shape to image irradiance generally preserves zero-crossings.
Hence, it is argued that surface shape is implicitly encoded in the positions of the zero-crossings of the convolved images, and further, that the encoding can be at least partially inverted to construct an approximation to the viewed surface from the correspondence information.

47 citations


Patent
Atushi Hisano1
28 Sep 1983
TL;DR: In this article, the orientation of an article is determined based on the direction through the two identified feature points, and the two points are identified by checking the data relating to the distances of the two feature points from other feature points.
Abstract: Data relating to the distances between each of all the feature points of the article to be checked and the other feature points thereof is stored in a memory in advance. The image of the article is picked up by a camera, and the resulting image data is stored in another memory. With use of a computer, data relating to the distances of at least two of the feature points of the article from other feature points thereof is obtained from the image data. The two feature points are identified by checking the data relating to the distances of the two feature points from other feature points with reference to the stored data relating to the distances between each feature point and the other feature points. The orientation of the article is determined based on the direction through the two identified feature points.
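The matching idea — identify feature points by their inter-point distance profiles, which are invariant to position and orientation, then read the orientation off the direction through two identified points — can be sketched as follows (point coordinates, function names and the error metric are illustrative, not from the patent):

```python
import numpy as np

def distance_signature(points, i):
    """Sorted distances from point i to every other feature point."""
    d = np.linalg.norm(points - points[i], axis=1)
    return np.sort(np.delete(d, i))

def identify(observed, model):
    """Match each observed point to the model point with the most
    similar distance signature; distances are invariant to the
    article's position and orientation."""
    ids = []
    for i in range(len(observed)):
        sig = distance_signature(observed, i)
        errs = [np.abs(sig - distance_signature(model, j)).sum()
                for j in range(len(model))]
        ids.append(int(np.argmin(errs)))
    return ids

model = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 3.0], [0.0, 5.0]])
angle = np.deg2rad(30)
R = np.array([[np.cos(angle), -np.sin(angle)],
              [np.sin(angle),  np.cos(angle)]])
observed = model @ R.T + np.array([2.0, 1.0])   # rotated and shifted article

ids = identify(observed, model)
# orientation from the direction through two identified feature points
v_obs = observed[ids.index(1)] - observed[ids.index(0)]
v_mod = model[1] - model[0]
theta = np.arctan2(v_obs[1], v_obs[0]) - np.arctan2(v_mod[1], v_mod[0])
```

The scheme works as long as the distance signatures of the chosen feature points are distinct, which is the implicit assumption in the patent's stored-distance table.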

Proceedings Article
22 Aug 1983
TL;DR: This work presents an iterative method for reconstructing convex polyhedra from their Extended Gaussian Images, where the objects are restricted to convex polyhedra.
Abstract: In computing a scene description from an image, a useful intermediate representation of a scene object is given by the orientation and area of the constituent surface facets, termed the Extended Gaussian Image (EGI) of the object. The EGI of a convex object uniquely represents that object. We are concerned with the computational task of reconstructing the shape of scene objects from their Extended Gaussian Images, where the objects are restricted to convex polyhedra. We present an iterative method for reconstructing convex polyhedra from their Extended Gaussian Images.
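The representation itself is straightforward to construct for a simple convex solid. A sketch for an axis-aligned box, which also checks the closure condition that the area-weighted normals of a closed polyhedron sum to the zero vector (the paper's iterative reconstruction algorithm is not reproduced here):

```python
import numpy as np

def box_egi(lx, ly, lz):
    """Extended Gaussian Image of an axis-aligned box: one
    (unit normal, facet area) pair per face."""
    faces = []
    for axis, face_area in ((0, ly * lz), (1, lx * lz), (2, lx * ly)):
        for sign in (+1.0, -1.0):
            n = np.zeros(3)
            n[axis] = sign
            faces.append((n, face_area))
    return faces

egi = box_egi(2.0, 3.0, 4.0)
# closure condition: area-weighted normals of a closed surface sum to zero
resultant = sum(a * n for n, a in egi)
```

The closure condition is also what makes reconstruction well posed: by Minkowski's theorem, a valid EGI of this kind determines a convex polyhedron uniquely.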

Journal ArticleDOI
TL;DR: A method of taking 3D information into account in the segmentation process is introduced, where the image intensities are adjusted to compensate for the effects of estimated surface orientation; the adjusted intensities can be regarded as reflectivity estimates.

01 Oct 1983
TL;DR: Here it is shown how results in machine vision provide techniques for automatically directing a mechanical manipulator to pick one object at a time out of a pile.
Abstract: One of the remaining obstacles to the widespread application of industrial robots is their inability to deal with parts that are not precisely positioned. In the case of manual assembly, components are often presented in bins. Current automated systems, on the other hand, require separate feeders which present the parts with carefully controlled position and attitude. Here we show how results in machine vision provide techniques for automatically directing a mechanical manipulator to pick one object at a time out of a pile. The attitude of the object to be picked up is determined using a histogram of the orientations of visible surface patches. Surface orientation, in turn, is determined using photometric stereo applied to multiple images. These images are taken with the same camera but differing lighting. The resulting needle map, giving the orientations of surface patches, is used to create an orientation histogram which is a discrete approximation to the extended Gaussian image. This can be matched against a synthetic orientation histogram obtained from prototypical models of the objects to be manipulated. Such models may be obtained from computer aided design (CAD) databases. The method thus requires that the shape of the objects be described, but it is not restricted to particular types of objects. (Author)
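The photometric-stereo step the method relies on can be sketched for a single Lambertian surface patch under three known light directions: three intensity measurements give a 3x3 linear system whose solution is the albedo-scaled normal. The light directions, albedo and normal below are invented test values, not from the report:

```python
import numpy as np

# three known, non-coplanar light directions (rows), normalized
L = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
L = L / np.linalg.norm(L, axis=1, keepdims=True)

def photometric_stereo(intensities):
    """Lambertian model: I_k = albedo * (l_k . n). Solving the 3x3
    system recovers g = albedo * n; its norm is the albedo and its
    direction the surface normal."""
    g = np.linalg.solve(L, intensities)
    albedo = np.linalg.norm(g)
    return albedo, g / albedo

# simulate one surface patch illuminated by each source in turn
true_n = np.array([0.3, -0.2, 0.9])
true_n /= np.linalg.norm(true_n)
true_albedo = 0.8
I = true_albedo * np.clip(L @ true_n, 0, None)   # no negative irradiance
albedo, n = photometric_stereo(I)
```

Applying this per pixel yields the needle map; binning the recovered normals over the visible surface gives the orientation histogram matched against the model EGI.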


Journal ArticleDOI
01 Jan 1983-Robotica
TL;DR: In this paper, a curvilinear robot constructed from a number of modular flexible sections of fixed length and diameter but independently controlled radius and direction of curvature has been equipped with an optical fiber image guide transmitting images from between the gripper jaws to the remote TV camera of Microvision-100, a microcomputer controlled real-time DMA-based vision system that is easily trained to recognize the shape, position and orientation of components.
Abstract: SUMMARY A curvilinear robot constructed from a number of modular flexible sections of fixed length and diameter but independently controlled radius and direction of curvature has been equipped with an optical fibre image guide transmitting images from between the gripper jaws to the remote TV camera of Microvision-100, a microcomputer controlled real-time DMA-based vision system that is easily trained to recognise the shape, position and orientation of components. The gripper position and orientation is controlled by feedback from the vision system, the action taken depending on component recognition and inspection for defects. Redundant degrees of freedom enable the curvilinear robot to avoid obstacles and work in confined spaces.

Patent
19 Dec 1983
TL;DR: In this paper, the orientation and position of a first body, such as the inboard end 42 of a helicopter blade 21, with respect to a second body such as a rotor hub arm 36, is sensed by determining the position of the images 93, 95-97, of light from sources 80, 81 passed through slits 89, 90 on detector arrays 91, 92.
Abstract: Orientation and/or position of a first body, such as the inboard end 42 of a helicopter blade 21, with respect to a second body, such as a rotor hub arm 36, is sensed by determining the position of the images 93, 95-97, of light from sources 80, 81 passed through slits 89, 90 on detector arrays 91, 92. Signal processing means (FIG. 5, FIG. 11) provide signals indicative of such image positions, which can be used to determine, from simple trigonometry, the orientation and/or position of the first body.

Journal ArticleDOI
TL;DR: The amount and type of orientation information identified and discriminated at brief exposures is investigated as a function of (radial) frequency with two-dimensional grey-scaled stochastic textures; the results suggest that no more than 18 orientation classes are detected with the low frequency components of such textures.

Journal ArticleDOI
TL;DR: Results indicate a monotonic relationship between image contrast and energy processed and demonstrate that the upper limits (or resolution) for discriminable spectral orientation and spatial frequency are approximately 5 deg and 1/8 octave, respectively.
Abstract: In this paper two specific questions are considered: (1) When image energy is defined in terms of orientation and frequency components of the image spectrum, how much energy is detected as a function of contrast? (2) Does the visual system have a limited resolution for image orientation and frequency components which defines the psychophysical upper limits for two-dimensional image coding units? To examine these two questions, experiments were conducted with both black/white and gray-scaled images. The results indicate a monotonic relationship between image contrast and energy processed. Finally, results demonstrate that the upper limits (or resolution) for discriminable spectral orientation and spatial frequency are approximately 5 deg and 1/8 octave, respectively. Image domain demonstrations confirm these results.

Journal ArticleDOI
TL;DR: The notion is developed that grids are also likely to differ in the phenotypic variation among the plants in each grid, and a simple adjustment for plant yields is suggested, viz. standardization per grid and truncation selection for the adjusted yield.
Abstract: Sometimes the response to grid selection is disappointing. Reasons suggested for this are: (i) selection of a fixed instead of a variable number of plants per grid, (ii) the arbitrary ways of choosing the size, shape and orientation of the grids. The first reason is considered somewhat more. In addition to the common assumption that grids differ in the average growing conditions offered by them, the notion is developed that the grids are also likely to differ in the phenotypic variation among the plants in each grid. It is suggested to calculate a simple adjustment for plant yields, viz. standardization per grid, and to apply truncation selection for the adjusted yield. Some tentative results are more or less in favour of the proposed method.
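The suggested adjustment — standardize yields within each grid, then truncation-select on the adjusted yield — is simple to state in code. Grid layout, yield values and the selected fraction below are invented for illustration:

```python
import numpy as np

def grid_standardized_selection(yields, grid_ids, fraction):
    """Standardize plant yields within each grid (subtract the grid
    mean, divide by the grid standard deviation), then
    truncation-select the top `fraction` of plants on the adjusted
    yield. Returns the indices of the selected plants."""
    adjusted = np.empty_like(yields, dtype=float)
    for g in np.unique(grid_ids):
        m = grid_ids == g
        adjusted[m] = (yields[m] - yields[m].mean()) / yields[m].std()
    k = int(round(fraction * len(yields)))
    return np.argsort(adjusted)[::-1][:k]

# two grids differing in both mean growing conditions and spread
yields = np.array([10.0, 12.0, 11.0, 9.0, 30.0, 31.0, 29.0, 50.0])
grids = np.array([0, 0, 0, 0, 1, 1, 1, 1])
chosen = grid_standardized_selection(yields, grids, 0.25)
```

Because each grid is reduced to the same scale, a plant that merely sits in a favourable grid no longer displaces a genuinely superior plant from a poorer one.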

Journal ArticleDOI
TL;DR: It is shown that the proposed algorithm successfully enhances the outlines of desired items as small as 1-2 cm (7-11 pixels) to the exclusion of remaining material under the conditions expected in actual use.
Abstract: It is desired to enhance the outline of agricultural contraband in X-ray images of passenger luggage. Agricultural contraband consists of fruit, meats, animals, and plants. We suggest most contraband can be distinguished from other material by an elliptic, rather than rectangular, cross section. An algorithm is proposed to recognize such a cross section using the erosion of the absolute gradient of the image. Only local convolution calculations are required. This algorithm is tested on a number of computed images as well as on X-ray images of model objects, isolated contraband, and contraband contained and obscured in baggage. The effect of image noise, object size, orientation, and obscuration is tested. It is shown that the proposed algorithm successfully enhances the outlines of desired items as small as 1-2 cm (7-11 pixels) to the exclusion of remaining material under the conditions expected in actual use.
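The core operations the abstract names — the absolute gradient followed by grey-scale erosion — can be sketched with SciPy. The synthetic images and the property they illustrate (erosion suppresses thin one-pixel gradient ridges, as produced by sharp steps, while keeping broad ones) are for illustration only, not the paper's X-ray data:

```python
import numpy as np
from scipy.ndimage import grey_erosion, sobel

def gradient_erosion(image, size=3):
    """Absolute gradient magnitude (Sobel) followed by grey-scale
    erosion with a size x size structuring element."""
    gx = sobel(image, axis=1)
    gy = sobel(image, axis=0)
    grad = np.hypot(gx, gy)
    return grey_erosion(grad, size=(size, size))

# a sharp step yields a thin gradient ridge that erosion removes;
# a broad ramp yields a wide gradient plateau that survives
img = np.zeros((16, 32))
img[:, 16:] = 1.0                               # sharp step edge
ramp = np.tile(np.linspace(0, 1, 32), (16, 1))  # gradual ramp
```

In the paper's setting the broad ridges come from the smoothly varying density profile of an elliptic cross section, which is what lets the operator separate produce from rectangular-sectioned baggage contents.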

01 Jan 1983
TL;DR: This thesis describes the theory which governs constraints in the gradient space and shows how it can be applied to polyhedra and certain types of generalized cylinders, a volumetric representation for solid shapes, and a representation scheme for 3D orientation.
Abstract: Given a line drawing from an image with shadow regions identified, the shapes of the shadows can be used to generate constraints on the orientations of the surfaces involved. This thesis describes the theory which governs those constraints and shows how it can be applied to polyhedra and certain types of generalized cylinders. One of the topics explored is the use of shadows to determine 3D surface orientation. A "Basic Shadow Problem" is posed, in which a polygon casts its shadow on a flat surface. There are six parameters to determine: the orientation (2 parameters) for each surface and the direction of the illumination (2 parameters). If some set of 3 of these is given in advance, the remaining 3 can be determined geometrically. The solution method consists of identifying "illumination surfaces" consisting of illumination vectors, assigning Huffman-Clowes line labels to their edges, and applying the resulting constraints in the gradient space. The analysis is extended to shadows cast by and upon polyhedra and curved surfaces, and the consequences of varying the number and position of light sources are discussed. Another topic addressed is the analysis of images of solids of revolution. When the angle between the line of sight and the solid's axis is known, the contours in the image can be analyzed to precisely determine the shape description. This angle can be determined from the image when shadows are present by applying the results obtained for curved surfaces in general. The thesis also includes a collection of properties of the gradient space, a representation scheme for 3D orientation, and a taxonomy and analysis of properties of generalized cylinders, a volumetric representation for solid shapes.

01 Jan 1983
TL;DR: The relationship between relative surface curvature and the second derivatives of image irradiance is independent of other scene parameters but insufficient to determine surface shape; as this paper shows, this places in perspective the difficulty encountered in previous attempts to recover surface orientation from image shading.
Abstract: Previous efforts to recover surface shape from image irradiance are reviewed in order to assess what can and cannot be accomplished. The informational requirements and restrictions of these approaches are considered. In dealing with the question of what surface parameters can be recovered locally from image shading, it is shown that, at most, shading determines relative surface curvature, i.e., the ratio of surface curvature measured in orthogonal image directions. The relationship between relative surface curvature and the second derivatives of image irradiance is independent of other scene parameters, but insufficient to determine surface shape. This result places in perspective the difficulty encountered in previous attempts to recover surface orientation from image shading.

Patent
Sean Amour1
17 Nov 1983
TL;DR: In this article, a visual target simulator for field training in which a target image appears to a trainee via a beam-splitter display so as to be superimposed on a "real world" background is presented.
Abstract: A visual target simulator for field training in which a target image appears to a trainee via a beam-splitter display so as to be superimposed on a "real world" background. The simulator includes a central processing unit coupled to a position sensor system for detecting the orientation of the beam-splitter, to a video disc player containing video images of a target together with information regarding the attitude and trajectory of the target, to a video processing unit for placing the target image at a particular point on the trainee's beam-splitter, to a memory containing data regarding the sector at which images are stored in the video disc player, and to an instructor interface. An instructor can input a target scenario by means of the interface, whereafter the central processing unit accesses the corresponding target images from the video disc based upon data from the memory. The image is displayed to the trainee at a location on the beam-splitter calculated from the display's orientation, the target trajectory and the image size. An audio system is also provided for delivering stereo audio to the trainee indicative of the orientation of the target relative to the trainee to assist in pointing the weapon in the direction of the target image.

Book ChapterDOI
01 Jan 1983
TL;DR: In this article, a 64 grey level vision system was developed for real-time robot control and inspection, which is capable of processing up to 12 TV frames per second which is an order of magnitude faster than any commercially available grey scale vision system.
Abstract: A 64 grey level vision system has been developed for real-time robot control and inspection. It is capable of processing up to 12 TV frames per second which is an order of magnitude faster than any commercially available grey scale vision system. The system consists of a programmable pipelined hardwired preprocessor for image conditioning, filtering, edge extraction, segmentation, and blob tracking and orientation, as well as for interpreting visual data output for a robot or external systems control and communications. The set-up for assembly of an electromechanical relay under vision control is described and the results are presented.

01 Feb 1983
TL;DR: In this article, a feasibility model of an advanced visual display system for flight simulation is described, which is comprised of a video projector mounted on a pilot's helmet which projects a computer generated image onto a spherical screen.
Abstract: A feasibility model of an advanced visual display system for flight simulation is described. The feasibility model is comprised of a video projector mounted on a pilot's helmet which projects a computer generated image onto a spherical screen. The video projector utilizes a laser light source in forming the projected video raster. The display is slaved to the viewer's head pointing direction via a magnetic head tracking device, and results in imagery that is generated and displayed for the instantaneous viewing direction of the observer. Since the computer image generator requires a measurable period of time to create an image for a specific head pointing direction, an undesirable display orientation error is induced each time the viewer moves his head. A method of continuously compensating for this image display error was provided and is described. This feasibility model has demonstrated successfully, on a small scale, the helmet mounted display concept. This concept will be utilized in a full scale development model scheduled for delivery under contract in 1985. (Author)

Journal Article
TL;DR: Simultaneous dual B-modes, the tomographic planes of which can be moved freely while keeping the intersecting angle always perpendicular, were devised based on simultaneous multifrequency ultrasonography and have proved to be a great help for understanding the 3-D structure of the heart.
Abstract: Simultaneous dual B-modes, the tomographic planes of which can be moved freely while keeping the intersecting angle always perpendicular, were devised based on simultaneous multifrequency ultrasonography (Miwa et al., 1981). Dual probes, the ends of which are linked by two arms and three nodes with potentiometers, are used to display the intersecting line on both B-mode images by computation. Both images can be displayed together on one CRT, or separately on different CRTs. This has proved to be a great help for understanding the 3-D structure of the heart. Quantitative measurement of the dimensions and displacement speeds of 3-D structures of the heart, such as myocardium thickness, the inner diameter of the ventricle, and their rates of change, can be performed very exactly at an accurate three-dimensional orientation to the heart. We have also recognized that the measuring beam for M-mode must be guided to an exact orientation by this technology; otherwise, unexpected errors are commonly introduced when guidance relies only on conventional single B-mode, owing to the lack of information on the three-dimensional incident angle to the wall or valve.

Proceedings ArticleDOI
26 Oct 1983
TL;DR: The author has pieced together various fragments of data to devise a computer simulation of these preperceptual processes, discusses in some depth the components of interactive processing used in the simulation, and demonstrates the variety of forms of 'perceptual' data available.
Abstract: It is considered reasonable to suppose that highly developed sensory systems such as the human visual system will have become roughly optimised by evolution. It is therefore desirable to attempt to understand the mechanisms of such systems and to consider their application in digital image processing. Over the last few years new techniques have been employed by neurophysiologists and anatomists to explore the building bricks of the 'image processing' which precedes the act of perception. The author has pieced together various fragments of data to devise a computer simulation of these preperceptual processes. The simulation produces fragmentary line and edge information which is fully coded in terms of location, strength and orientation. It also associates connected groups of fragmentary lines and edges and analyses them statistically in terms of number, mean strength and fluctuation of strength. The coding of the fragmentary perceptual input data is such that virtually any question may be addressed to facilitate recognition of partially obscured or complex objects. The perceptual input plane is basically quiet, containing only profile data in many situations. It is therefore admirably suited to extraction of dynamic behaviour of associated profiles. Spectral coding may also be incorporated, if desired. The paper discusses in some depth the components of interactive processing which have been used in the simulation and demonstrates the variety of forms of 'perceptual' data available. © (1983) COPYRIGHT SPIE--The International Society for Optical Engineering. Downloading of the abstract is permitted for personal use only.
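Edge fragments "fully coded in terms of location, strength and orientation" can be illustrated with a simple gradient operator. This is a loose sketch, not the author's simulation: Sobel gradients stand in for the biologically inspired operators, and the threshold is an assumed parameter.

```python
import numpy as np

def edge_fragments(image, threshold):
    """Extract fragmentary edge data, each coded by location, strength and
    orientation: a list of (row, col, strength, orientation_radians)."""
    img = np.asarray(image, float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # Sobel x
    ky = kx.T                                                   # Sobel y
    h, w = img.shape
    fragments = []
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            patch = img[r - 1:r + 2, c - 1:c + 2]
            gx, gy = np.sum(patch * kx), np.sum(patch * ky)
            strength = np.hypot(gx, gy)
            if strength >= threshold:
                fragments.append((r, c, strength, np.arctan2(gy, gx)))
    return fragments
```

Grouping connected fragments and computing their count, mean strength and strength fluctuation, as the abstract describes, is then a straightforward pass over this list.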

Patent
14 Oct 1983
TL;DR: In this article, a parallel projection optical system is used to identify and locate features on components to be assembled in position and orientation so that a gripper mechanism can pick up a component and move it into a correct position relative to the other component for assembly.
Abstract: In automatic assembly apparatus operating under electronic vision control, features on components to be assembled must be recognized and located in position and orientation so that a gripper mechanism may be directed to one component to pick it up and move it into correct position and orientation relative to the other component for assembly. The invention provides a known parallel projection optical system 8, 9, at the location of each component which provides a plan view for an electronic camera 11 of the components 20, 22 at constant scale regardless of lateral or axial component movements. The grey-level picture of each component provided by the camera is thresholded into a binarized picture at a threshold level which selects a primary component feature 27 within a part of the camera field of view which is certain to contain this feature. From the known location of secondary features 25, 26 and 38, 39, 40 of the component relative to the primary feature, successively limited search areas 35 and 41, 42 are set up within the camera field of view to select these secondary features when thresholded at levels related to the primary threshold. Sufficient features are thereby located in the camera field of view to provide position and orientation information on the component to a computer which directs the gripper 15 to assemble the components.
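The primary-then-secondary search strategy can be sketched in a few lines. This is an illustrative reconstruction of the idea, not the patented method: features are taken as centroids of thresholded pixels, the secondary threshold is simply reused from the primary one, and all names and the square search margin are hypothetical.

```python
import numpy as np

def locate_features(gray, primary_window, threshold, secondary_offsets, margin):
    """Find a primary feature inside a window certain to contain it, then
    find secondary features in limited search areas predicted from their
    known (row, col) offsets relative to the primary feature.
    Returns (primary_centroid, [secondary_centroid, ...])."""
    img = np.asarray(gray, float)

    def centroid_in(window):
        (r0, r1), (c0, c1) = window
        rows, cols = np.nonzero(img[r0:r1, c0:c1] >= threshold)
        if rows.size == 0:
            return None
        return (r0 + rows.mean(), c0 + cols.mean())

    primary = centroid_in(primary_window)
    secondaries = []
    for dr, dc in secondary_offsets:
        r, c = primary[0] + dr, primary[1] + dc
        # Successively limited search area around the predicted location.
        window = ((int(r - margin), int(r + margin + 1)),
                  (int(c - margin), int(c + margin + 1)))
        secondaries.append(centroid_in(window))
    return primary, secondaries
```

With two or more features located, component position follows from any one centroid and orientation from the angle between feature pairs.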

Proceedings ArticleDOI
26 Oct 1983
TL;DR: In this article, a vision system was developed at North Carolina State University to identify the orientation and three dimensional location of steam turbine blades that are stacked in an industrial A-frame cart.
Abstract: A vision system has been developed at North Carolina State University to identify the orientation and three dimensional location of steam turbine blades that are stacked in an industrial A-frame cart. The system uses a controlled light source for structured illumination and a single camera to extract the information required by the image processing software to calculate the position and orientation of a turbine blade in real time. © (1983) COPYRIGHT SPIE--The International Society for Optical Engineering.
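The core geometry of single-camera structured-light ranging is the intersection of a camera ray with the known plane of projected light. A minimal sketch under stated assumptions (the calibration mapping a pixel to its ray direction is assumed to exist; this is generic triangulation, not the NCSU system's specific implementation):

```python
import numpy as np

def triangulate_structured_light(ray_dir, camera_center, plane_normal, plane_point):
    """Return the 3-D point where the camera ray through an image pixel on the
    light stripe meets the known light plane."""
    d = np.asarray(ray_dir, float)
    c = np.asarray(camera_center, float)
    n = np.asarray(plane_normal, float)
    p = np.asarray(plane_point, float)
    t = n @ (p - c) / (n @ d)   # ray parameter at the plane
    return c + t * d
```

Repeating this for every stripe pixel yields a 3-D profile of the blade surface, from which position and orientation can be fitted.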