
Showing papers on "Orientation (computer vision)" published in 1979


Journal ArticleDOI
TL;DR: In this paper, an algorithm for solving the stereoscopic matching problem is proposed, which consists of five steps: (1) each image is filtered at different orientations with bar masks of four sizes that increase with eccentricity.
Abstract: An algorithm is proposed for solving the stereoscopic matching problem. The algorithm consists of five steps: (1) Each image is filtered at different orientations with bar masks of four sizes that increase with eccentricity; the equivalent filters are one or two octaves wide. (2) Zero-crossings in the filtered images, which roughly correspond to edges, are localized. Positions of the ends of lines and edges are also found. (3) For each mask orientation and size, matching takes place between pairs of zero-crossings or terminations of the same sign in the two images, for a range of disparities up to about the width of the mask’s central region. (4) Wide masks can control vergence movements, thus causing small masks to come into correspondence. (5) When a correspondence is achieved, it is stored in a dynamic buffer, called the 2½-D sketch.
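The matching steps (2)–(3) can be sketched in one dimension. Everything below — the sigmoid edge standing in for an image feature, a second-derivative-of-Gaussian filter standing in for one bar mask, the disparity limit — is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

def zero_crossings(f):
    """Indices where f changes sign, with the direction of the change
    (+1 for negative-to-positive, -1 for the reverse) -- step 2."""
    return [(i, 1 if f[i + 1] > 0 else -1)
            for i in range(len(f) - 1) if f[i] * f[i + 1] < 0]

def match(left_zc, right_zc, max_disp):
    """Step 3: pair zero-crossings of the same sign across the two
    views, within the disparity range allowed by the mask's central
    width; keep only unambiguous pairings."""
    pairs = []
    for li, ls in left_zc:
        cands = [ri for ri, rs in right_zc
                 if rs == ls and abs(li - ri) <= max_disp]
        if len(cands) == 1:
            pairs.append((li, cands[0], li - cands[0]))
    return pairs

# Toy 1-D "images": one smooth edge, shifted 3 pixels between views.
x = np.arange(64, dtype=float)
left = 1.0 / (1.0 + np.exp(-(x - 30.5)))
right = 1.0 / (1.0 + np.exp(-(x - 27.5)))

# Stand-in for one bar mask: a second-derivative-of-Gaussian filter,
# whose zero-crossings localize the edge (step 1).
t = np.arange(-8, 9, dtype=float)
g = np.exp(-t**2 / 8.0)
mask = np.gradient(np.gradient(g))

fl = np.convolve(left, mask, mode="same")
fr = np.convolve(right, mask, mode="same")
pairs = match(zero_crossings(fl), zero_crossings(fr), max_disp=4)
```

The recovered disparity of 3 pixels for the edge would, in the full algorithm, be written into the 2½-D sketch.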

1,666 citations


Journal ArticleDOI
TL;DR: Methods are presented for hidden-surface removal and shading of computer-displayed surfaces; when the surface to be displayed is approximated by a large number of square faces of restricted orientation, they work at least an order of magnitude faster than previously published methods.

438 citations


Proceedings ArticleDOI
09 Jan 1979
TL;DR: In photometric stereo, the direction of the incident illumination is varied between successive views while the viewing direction is held constant, which provides enough information to determine surface orientation at each picture element.
Abstract: This paper introduces a novel technique called photometric stereo. The idea of photometric stereo is to vary the direction of the incident illumination between successive views while holding the viewing direction constant. This provides enough information to determine surface orientation at each picture element. Traditional stereo techniques determine range by relating two images of an object viewed from different directions. If the correspondence between picture elements is known, then distance to the object can be calculated by triangulation. Unfortunately, it is difficult to determine this correspondence. In photometric stereo, the imaging geometry does not change. Therefore, the correspondence between picture elements is known a priori. This stereo technique is photometric because it uses the intensity values recorded at a single picture element, in successive views, rather than the relative positions of features.
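For a Lambertian surface under three known distant light sources, the per-pixel recovery reduces to a 3×3 linear solve. A minimal sketch — the light directions, albedo, and surface normal below are made-up test values, not data from the paper:

```python
import numpy as np

# Three distant light sources with known directions (rows of L) and one
# Lambertian surface point with unit normal n_true (all hypothetical).
L = np.array([[0.0, 0.0, 1.0],
              [0.7, 0.0, 0.714],
              [0.0, 0.7, 0.714]])
rho = 0.8                                  # surface albedo (assumed)
n_true = np.array([0.2, -0.1, 0.97])
n_true = n_true / np.linalg.norm(n_true)

# One intensity per light source at the same pixel: the imaging
# geometry never changes, so there is no correspondence problem.
I = rho * L @ n_true

# Recover the albedo-scaled normal by solving the 3x3 linear system.
g = np.linalg.solve(L, I)
albedo = np.linalg.norm(g)
normal = g / albedo
```

Both the orientation and the albedo fall out of the same solve, which is why three views (rather than the minimum two) are convenient.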

248 citations


Patent
29 May 1979
TL;DR: A robot assembly for acquiring unorientated workpieces from a bin is described: a computer analyzes sensor data to determine candidate holdsites on a workpiece, and the hand of the robot assembly engages the workpiece at a selected holdsite.
Abstract: A robot assembly for acquiring unorientated workpieces from a bin. A sensing system views the bin and collects data. A computer analyzes the data to determine candidate holdsites on the workpiece. The hand of the robot assembly then engages a workpiece at a selected holdsite. The workpiece is moved to a pose where the position and orientation of the workpiece are determined. After this determination, the workpiece may be disengaged, or moved to an intermediate or final goal site.

104 citations


Journal ArticleDOI
TL;DR: A model is presented to predict human dynamic spatial orientation in response to multisensory stimuli and computer implementation of the model has shown agreement with several important qualitative characteristics of human spatial orientation.
Abstract: A model is being developed to predict pilot dynamic spatial orientation in response to multisensory stimuli. Motion stimuli are first processed by dynamic models of the visual, vestibular, tactile, and proprioceptive sensors. Central nervous system function is then modeled as a steady-state Kalman filter which blends information from the various sensors to form an estimate of spatial orientation. Where necessary, this linear central estimator has been augmented with nonlinear elements to reflect more accurately some highly nonlinear human response characteristics. Computer implementation of the model has shown agreement with several important qualitative characteristics of human spatial orientation, and it is felt that with further modification and additional experimental data the model can be improved and extended. Possible means are described for extending the model to better represent the active pilot with varying skill and work load levels.
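The blending idea behind the central estimator can be shown in a deliberately reduced form: steady-state fusion of two noisy readings of a constant orientation angle, with gains set by the sensors' noise variances. The angle and variances are invented; the paper's model is dynamic and uses more sensor channels:

```python
import numpy as np

rng = np.random.default_rng(0)

theta = 5.0                              # true orientation angle, degrees (assumed)
var_visual, var_vestibular = 4.0, 1.0    # hypothetical sensor noise variances

n = 10000
visual = theta + rng.normal(0.0, np.sqrt(var_visual), n)
vestibular = theta + rng.normal(0.0, np.sqrt(var_vestibular), n)

# Steady-state optimal blend: weights inversely proportional to variance,
# which is what the Kalman gain reduces to in this static case.
w_vis = (1 / var_visual) / (1 / var_visual + 1 / var_vestibular)
fused = w_vis * visual + (1 - w_vis) * vestibular

# The fused estimate has lower variance than either sensor alone:
# 1 / (1/4 + 1/1) = 0.8.
```

The noisier visual channel receives the smaller weight (0.2 here), exactly the behaviour the steady-state gain encodes.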

86 citations


Proceedings Article
20 Aug 1979
TL;DR: A theorem is proved which relates this Hough-like transform to the gradient space, showing that the transform also directly indicates local surface orientation and has many properties that make it an appealing substitute for some other current image transforms.
Abstract: A new approach to obtaining shape information from textural information in static monocular images is outlined. Also presented is a new aggregation transform which determines (under certain conditions) vanishing points and lines. A theorem is proved which relates this Hough-like transform to the gradient space, showing that the transform also directly indicates local surface orientation. Additionally, the transform is shown to have many properties that make it an appealing substitute for some other current image transforms. An example is given of its application to a synthetic textured image.

83 citations


Journal ArticleDOI
TL;DR: A class of feature-detection operations using orthonormal basis functions is introduced in this paper, where the basis functions are derived from the Karhunen-Loeve expansion of the local image data.

73 citations


Journal ArticleDOI
Terry Caelli1, Bela Julesz1
TL;DR: A psychophysical law is derived and it is shown that differences in the variance of orientation, but not of dipole length, result in texture discrimination.
Abstract: By defining texture as a global feature attained by integration over the image domain, we show that texture discrimination can be predicted for a special class of visual textures (composed of paired dots) as a function of such global features. We derive a psychophysical law based on these global features and show that differences in the variance of orientation, but not of dipole length, result in texture discrimination.
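The global features on which the proposed law operates can be computed directly for synthetic dot-pair textures. The construction below is hypothetical, not the paper's stimuli: two textures whose dipole orientations have different spreads but whose dipole lengths are identical:

```python
import numpy as np

rng = np.random.default_rng(1)

def dipoles(n, ori_std, length=5.0):
    """n dot-pair (dipole) orientations and lengths: orientations are
    normally spread, lengths fixed (an invented construction)."""
    return rng.normal(0.0, ori_std, n), np.full(n, length)

ori_a, len_a = dipoles(2000, ori_std=0.1)   # tightly oriented texture
ori_b, len_b = dipoles(2000, ori_std=0.8)   # broadly oriented texture

# The global features of the proposed law: variances taken over the
# whole image domain.
ori_var_gap = abs(np.var(ori_a) - np.var(ori_b))
len_var_gap = abs(np.var(len_a) - np.var(len_b))
```

A large orientation-variance gap with a zero length-variance gap is the configuration the paper predicts to be discriminable.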

56 citations


19 Jan 1979
TL;DR: An iterative relaxation algorithm is presented for solving a first-order, non-linear partial differential equation on a grid of points; it has the distinct advantage of being capable of handling any reflectance function, whether analytically or empirically specified.
Abstract: The shape of an object can be determined from the shading in a single image by solving a first-order, non-linear partial differential equation. The method of characteristics can be used to do this, but it suffers from a number of theoretical difficulties and implementation problems. This thesis presents an iterative relaxation algorithm for solving this equation on a grid of points. Here, repeated local computations eventually lead to a global solution. The algorithm solves for the surface orientation at each point by employing an iterative relaxation scheme. The constraint of surface smoothness is achieved while simultaneously satisfying the constraints imposed by the equation of image illumination. The algorithm has the distinct advantage of being capable of handling any reflectance function, whether analytically or empirically specified. Included are brief overviews of some of the more important shape-from-shading algorithms in existence and a list of potential applications of this iterative approach to several image domains, including scanning electron microscopy, remote sensing of topography and industrial inspection.
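A reduced version of such a relaxation scheme alternates neighbour averaging (the smoothness constraint) with a step toward the image-irradiance equation. The Lambertian reflectance map with the source at the viewer and the synthetic sphere are both assumptions made here so the example can be checked against known gradients:

```python
import numpy as np

def R(p, q):
    """Lambertian reflectance map, light source at the viewer (assumed)."""
    return 1.0 / np.sqrt(1.0 + p**2 + q**2)

# Synthetic image of a sphere of radius 30 seen from above; the true
# surface gradients (p, q) are known, so convergence can be checked.
y, x = np.mgrid[-16:17, -16:17].astype(float)
z = np.sqrt(30.0**2 - x**2 - y**2)
p_true, q_true = -x / z, -y / z
E = R(p_true, q_true)

inside = x**2 + y**2 < 200.0    # relax only here; clamp the rim to truth
p = np.where(inside, 0.0, p_true)
q = np.where(inside, 0.0, q_true)

for _ in range(200):
    # Smoothness: replace each estimate by the average of its neighbours.
    pbar = (np.roll(p, 1, 0) + np.roll(p, -1, 0)
            + np.roll(p, 1, 1) + np.roll(p, -1, 1)) / 4.0
    qbar = (np.roll(q, 1, 0) + np.roll(q, -1, 0)
            + np.roll(q, 1, 1) + np.roll(q, -1, 1)) / 4.0
    Rb = R(pbar, qbar)
    # Image-irradiance constraint: gradient step toward R(p, q) = E,
    # using dR/dp = -p R^3 and dR/dq = -q R^3.
    p = np.where(inside, pbar + (E - Rb) * (-pbar * Rb**3), p_true)
    q = np.where(inside, qbar + (E - Rb) * (-qbar * Rb**3), q_true)

err = np.abs(R(p, q) - E)[inside].mean()   # residual brightness error
```

Because only R and its derivatives appear in the update, swapping in an empirically tabulated reflectance map changes nothing structural — the advantage the abstract emphasizes.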

43 citations


Book ChapterDOI
20 Aug 1979
TL;DR: An overall framework is suggested for extracting shape information from images, in which the analysis proceeds through three representations: the primal sketch, which makes explicit the intensity changes and local two-dimensional geometry of an image; the 2 1/2-D sketch, a viewer-centred representation of the depth, orientation and discontinuities of the visible surfaces; and the 3-D model representation, which allows an object-centred description of the three-dimensional structure and organization of a viewed shape.
Abstract: For human vision to be explained by a computational theory, the first question is plain: What are the problems the brain solves when we see? It is argued that vision is the construction of efficient symbolic descriptions from images of the world. An important aspect of vision is therefore the choice of representations for the different kinds of information in a visual scene. An overall framework is suggested for extracting shape information from images, in which the analysis proceeds through three representations: (1) the primal sketch, which makes explicit the intensity changes and local two-dimensional geometry of an image, (2) the 2 1/2-D sketch, which is a viewer-centred representation of the depth, orientation and discontinuities of the visible surfaces, and (3) the 3-D model representation, which allows an object-centred description of the three-dimensional structure and organization of a viewed shape. The critical act in formulating computational theories for processes capable of constructing these representations is the discovery of valid constraints on the way the world behaves, that provide sufficient additional information to allow recovery of the desired characteristic. Finally, once a computational theory for a process has been formulated, algorithms for implementing it may be designed, and their performance compared with that of the human visual processor.

41 citations


Journal Article
TL;DR: The role that the orientation of observed features has on the grey tone of the resulting positive image is discussed, with emphasis on the angle difference between the radar azimuth and the street pattern, and especially on the orientation of the walls of the structures imaged.
Abstract: Emphasis is placed on the role that the orientation of observed features has on the grey tone of the resulting positive image. As an example it is shown that in the Los Angeles urbanized region, large areas have significantly lower grey tones than adjacent areas having similar land cover. It is determined that this effect is the result of the angle difference between the radar azimuth and the street pattern, and especially the orientation of the walls of the structures imaged. Therefore, knowledge of this information is essential in order to ensure accurate interpretation of radar imagery. It is concluded that for radar systems operated from platforms which have fixed azimuth angles (e.g., satellite systems such as Seasat-A), an interpretation methodology which considers street patterns is especially critical for proper and accurate interpretation of SAR imagery.

Journal ArticleDOI
TL;DR: The present study is devoted to the statistical analysis of edges in still monochrome TV pictures; it concerns orientation, edge length, edge width, runlength between edges and edge slope probability distributions, as well as the measure of orientation continuity along an edge and the relative frequencies of edge pixels and contrasted isolated pixels.
Abstract: The present study is devoted to the statistical analysis of edges in still monochrome TV pictures. The visual information carried by the edges is especially important both for image interpretation and for subjective image quality evaluation. Statistical knowledge on edges is helpful to improve image coding techniques significantly as well as processing techniques for scene analysis. After an introduction on nonstationary local statistical models, we describe the parameters of edges and the methods used to measure them. Statistical data collected on these parameters are then presented. The data concern orientation, edge length, edge width, runlength between edges and edge slope probability distributions as well as the measure of orientation continuity along an edge and the relative frequencies of edge pixels and contrasted isolated pixels.
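Two of these edge parameters — the edge-pixel fraction and the orientation distribution — can be measured with simple difference operators. The operators, threshold, and toy image here are generic choices, not the study's measurement procedure:

```python
import numpy as np

def edge_stats(img, thresh):
    """Gradient magnitude and orientation from central differences;
    returns the edge-pixel fraction and a normalized 8-bin orientation
    histogram over the detected edge pixels."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ori = np.arctan2(gy, gx)
    edges = mag > thresh
    hist, _ = np.histogram(ori[edges], bins=8, range=(-np.pi, np.pi))
    return edges.mean(), hist / max(hist.sum(), 1)

# Toy image: one vertical step edge, so every edge pixel shares a
# single orientation and the histogram collapses into one bin.
img = np.zeros((32, 32))
img[:, 16:] = 1.0
frac, ori_dist = edge_stats(img, thresh=0.25)
```

On real TV pictures the same two quantities become the probability distributions the study tabulates.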

Proceedings ArticleDOI
19 Jun 1979
TL;DR: Multiple frame analysis techniques are being investigated that involve, on the one hand, averaging stenosis measurements from adjacent frames, and on the other hand, averaging adjacent frame images directly and then measuring stenosis from the averaged image.
Abstract: A computer technique is being developed at the Jet Propulsion Laboratory to automate the measurement of coronary stenosis. A Vanguard 35mm film transport is optically coupled to a Spatial Data System vidicon/digitizer which in turn is controlled by a DEC PDP 11/55 computer. Programs have been developed to track the edges of the arterial shadow, to locate normal and atherosclerotic vessel sections and to measure percent stenosis. Multiple frame analysis techniques are being investigated that involve on the one hand, averaging stenosis measurements from adjacent frames, and on the other hand, averaging adjacent frame images directly and then measuring stenosis from the averaged image. For the latter case, geometric transformations are used to force registration of vessel images whose spatial orientation changes.
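The two multiple-frame strategies can be contrasted on synthetic data. The vessel profile, noise levels, and diameter-based stenosis measure below are all invented for illustration; the actual system works on digitized cine frames:

```python
import numpy as np

rng = np.random.default_rng(2)

true_stenosis = 40.0                          # percent (hypothetical vessel)

# Strategy 1: measure percent stenosis on each frame, then average
# the per-frame measurements.
frame_measurements = true_stenosis + rng.normal(0.0, 3.0, 8)
est_measurements = frame_measurements.mean()

# Strategy 2: average the registered adjacent-frame images, then
# measure once.  1-D "diameter profiles" stand in for registered
# vessel images: a 4 mm vessel narrowed to 2.4 mm (40%) midsection.
clean = np.r_[np.full(10, 4.0), np.full(5, 2.4), np.full(10, 4.0)]
profiles = clean + rng.normal(0.0, 0.2, (8, clean.size))
avg_profile = profiles.mean(axis=0)           # noise falls by sqrt(8)
est_image = 100.0 * (1.0 - avg_profile.min() / np.median(avg_profile))
```

Strategy 2 is the case that requires the geometric transformations mentioned in the abstract, since the vessel's spatial orientation changes between frames before the images can be averaged.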

Proceedings Article
20 Aug 1979
TL;DR: For several classes of surfaces, image analysis simplifies; this result was already established for planar surfaces, and similar simplification is demonstrated for singly curved surfaces and for the subclass of doubly curved surfaces known as generalized cones.
Abstract: Reflectance map techniques make explicit the relationship between image intensity and surface orientation. In general, however, trade-offs between image intensity and surface shape emerge which cannot be resolved in a single view. Existing methods for determining shape from a single view embody assumptions about surface curvature. The image Hessian matrix is introduced as a convenient viewer-centered representation of surface curvature. Properties of surface curvature are expressed as properties of the Hessian matrix. For several classes of surfaces, image analysis simplifies. This result has already been established for planar surfaces. Similar simplification is demonstrated for singly curved surfaces and for the subclass of doubly curved surfaces known as generalized cones. These studies help to delineate shape information that can be determined from object boundaries and shape information that can be determined from shading.
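The image Hessian itself is straightforward to form from finite differences, and on synthetic intensity surfaces its rank separates the planar and singly curved cases. The toy surfaces below are illustrative, not the paper's imaging model:

```python
import numpy as np

y, x = np.mgrid[0:32, 0:32].astype(float)
plane = 0.3 * x + 0.1 * y    # linear intensity: the Hessian vanishes
cyl = 0.05 * x**2            # singly curved intensity: rank-1 Hessian

ranks = []
for f in (plane, cyl):
    fy, fx = np.gradient(f)              # first differences per axis
    fyy, _ = np.gradient(fy)
    fxy, fxx = np.gradient(fx)
    H = np.array([[fxx[16, 16], fxy[16, 16]],   # 2x2 Hessian at one pixel
                  [fxy[16, 16], fyy[16, 16]]])
    ranks.append(int(np.linalg.matrix_rank(H, tol=1e-8)))
```

Rank 0, 1, or 2 of this viewer-centered matrix is what encodes the planar / singly curved / doubly curved distinction the paper exploits.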

Book ChapterDOI
A. D. Gara1
01 Jan 1979
TL;DR: Recent results on 2-dimensional correlation by optical matched filtering demonstrate the ability to determine the location and orientation of objects in low contrast scenes.
Abstract: The application of optical computing to sensor-based robots will be described in terms of processing image information in real-time by the methods of spatial Fourier transform filtering. The basic requirements will be reviewed and advances in optical device technology which have led to real-time operation will be described. In particular, recent results on 2-dimensional correlation by optical matched filtering demonstrate the ability to determine the location and orientation of objects in low contrast scenes. These advances have resulted in a laboratory demonstration of a vision system for robot control.
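The digital analogue of the optical operation — correlation computed as a pointwise product in the Fourier domain — locates a template in a noisy, low-contrast scene. The template shape, noise level, and offset below are invented test values:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical low-contrast scene containing one shifted template copy.
template = np.zeros((8, 8))
template[2:6, 3:5] = 1.0
scene = 0.05 * rng.normal(size=(64, 64))
scene[30:38, 40:48] += template

# Matched filtering: cross-correlation is a conjugate product in the
# Fourier domain, which is what the optical system computes with lenses.
T = np.fft.fft2(template - template.mean(), s=scene.shape)
S = np.fft.fft2(scene - scene.mean())
corr = np.fft.ifft2(S * np.conj(T)).real
peak = np.unravel_index(corr.argmax(), corr.shape)   # template location
```

Object orientation can be handled the same way by correlating against a bank of rotated templates and keeping the strongest peak.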

Patent
26 Jun 1979
TL;DR: A pattern recognition system is described for locating the geometric center lines or center points of distinguishable areas on a generally planar surface; center lines are detected with respect to four separate scanning directions, the outputs of which are synchronized with each other.
Abstract: The present invention relates to pattern recognition systems for locating the geometric center lines or center points of distinguishable areas on a generally planar surface. In the past, especially when locating center points of integrated circuit pads, the locating operation was done manually. The present invention provides an automatic device in the form of circuitry for analyzing a serial digital data stream representative of the image of the scanned field, and for determining the positions of center lines or center points of distinguishable areas in the field falling within a selected size range. The circuitry includes separate sections for analyzing the scanned image in order to detect center lines with respect to four separate scanning directions, the outputs of these sections being synchronized with each other, although delayed from the original serial data stream, and being representative of center lines of areas falling within the selected size range. The outputs of the four separate sections may be logically combined in any desired manner to produce a serial output signal indicative of center points of areas falling within the selected size range. Selected center points may then be used in conjunction with a wire bonding machine to provide information indicative of the position and orientation of the chip.
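One scanning direction of such circuitry can be mimicked in software: find runs of foreground along a scan line, keep those whose length falls in the selected size range, and report their midpoints. This is a software sketch of the idea, not the patented circuit:

```python
import numpy as np

def center_points(row, min_run, max_run):
    """Midpoints of runs of 1s whose length lies in [min_run, max_run]
    -- one scanning direction of the patent's four."""
    padded = np.r_[0, row, 0]
    starts = np.nonzero(np.diff(padded) == 1)[0]   # run start indices
    ends = np.nonzero(np.diff(padded) == -1)[0]    # one past run end
    return [(s + e - 1) // 2 for s, e in zip(starts, ends)
            if min_run <= e - s <= max_run]

# Toy scan line: a pad of width 5 at columns 10..14, plus a 2-wide
# streak (too small) and a 12-wide region (too large) to be rejected.
row = np.zeros(40, dtype=int)
row[10:15] = 1
row[20:22] = 1
row[25:37] = 1
centers = center_points(row, min_run=3, max_run=8)
```

Intersecting the outputs of four such scans (horizontal, vertical, both diagonals) is what isolates the two-dimensional center points.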

Proceedings ArticleDOI
09 Jan 1979
TL;DR: A simple "blind" algorithm is developed that locates edges in the resulting binary map and is therefore highly independent of operator interaction.
Abstract: Discontinuities in a noisy image can be detected by thresholding a differentiator adapted to the general spatial quality of the image. This paper develops a simple "blind" algorithm which can be used on the resulting binary map to locate edges. The data is first subjected to a thinning operation and the map is processed to yield clusters of points which are best represented by straight lines. It is not necessary to identify the subsets, and the algorithm is therefore highly independent of operator interaction. A mathematical basis for the technique and several examples are presented.
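The final step — representing a cluster of thinned edge points by a straight line — can be sketched with a total-least-squares fit. This is one generic way to fit the line; the paper's exact procedure may differ:

```python
import numpy as np

def fit_line(points):
    """Total-least-squares line through a point cluster: the centroid
    plus the principal direction of the point scatter."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]        # a point on the line, unit direction

# Toy cluster: thinned edge points scattered about the line y = 2x + 1.
rng = np.random.default_rng(4)
x = np.arange(10, dtype=float)
pts = np.c_[x, 2.0 * x + 1.0 + rng.normal(0.0, 0.05, x.size)]
centroid, direction = fit_line(pts)
slope = direction[1] / direction[0]
```

Total least squares (rather than ordinary regression) is the natural choice here because edge points have noise in both coordinates and the lines may be near-vertical.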

Book ChapterDOI
01 Jan 1979
TL;DR: This chapter will illustrate how modern data processing technology can be applied to automate visual inspection with speeds and accuracies far beyond human capabilities.
Abstract: Visual inspection can be modeled as a data processing procedure involving analysis of distributed image information. This chapter will discuss the nature of the data processing involved, and will outline procedures for instrumenting visual recognition, orientation, and measurement for industrial process control. It will illustrate how modern data processing technology can be applied to automate visual inspection with speeds and accuracies far beyond human capabilities.