
Showing papers on "Orientation (computer vision) published in 1995"


Journal ArticleDOI
TL;DR: A reliable method for extracting structural features from fingerprint images is presented and a “goodness index” (GI) which compares the results of automatic extraction with manually extracted ground truth is evaluated.

635 citations


Journal ArticleDOI
01 Feb 1995
TL;DR: A survey of research in the area of vision sensor planning is presented, along with brief descriptions of representative sensing strategies for the tasks of object recognition and scene reconstruction.
Abstract: A survey of research in the area of vision sensor planning is presented. The problem can be summarized as follows: given information about the environment as well as about the task that the vision system is to accomplish, develop strategies to automatically determine sensor parameter values that achieve this task with a certain degree of satisfaction. With such strategies, sensor parameter values can be selected and purposefully changed in order to perform the task at hand effectively. The focus here is on vision sensor planning for the task of robustly detecting object features. For this task, camera and illumination parameters such as position, orientation, and optical settings are determined so that object features are, for example, visible, in focus, within the sensor field of view, magnified as required, and imaged with sufficient contrast. References to, and brief descriptions of, representative sensing strategies for the tasks of object recognition and scene reconstruction are also presented. For these tasks, sensor configurations are sought that will prove most useful when trying to identify an object or reconstruct a scene.

493 citations


Proceedings ArticleDOI
20 Jun 1995
TL;DR: The algorithm is based on a nonlinear combination of linear filters and searches for elongated, symmetric line structures, while suppressing the response to edges, leading to an efficient, parameter-free implementation.
Abstract: Presents a novel, parameter-free technique for the segmentation and local description of line structures on multiple scales, both in 2D and in 3D. The algorithm is based on a nonlinear combination of linear filters and searches for elongated, symmetric line structures, while suppressing the response to edges. The filtering process creates one sharp maximum across the line-feature profile and across the scale-space. The multi-scale response reflects local contrast and is independent of the local width. The filter is steerable in both the orientation and scale domains, leading to an efficient, parameter-free implementation. A local description is obtained that describes the contrast, the position of the center-line, the width, the polarity, and the orientation of the line. Examples of images from different application domains demonstrate the generic nature of the line segmentation scheme. The 3D filtering is applied to magnetic resonance volume data in order to segment cerebral blood vessels.
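The paper's specific nonlinear filter combination is not reproduced here; as a rough illustration of multi-scale line detection with Gaussian-derivative filters, the sketch below scores ridge-like structures by the dominant eigenvalue of the scale-normalized Hessian and takes the maximum response over scales. The Hessian-based measure and the parameter values are illustrative assumptions, not the authors' filter.

```python
# Hypothetical multi-scale ridge detector: an illustration of line detection
# with Gaussian-derivative filters, not the paper's nonlinear filter combination.
import numpy as np
from scipy.ndimage import gaussian_filter

def ridge_response(image, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Return the per-pixel maximum scale-normalized ridge strength."""
    image = image.astype(float)
    best = np.zeros_like(image)
    for sigma in sigmas:
        # Second-order Gaussian derivatives (Hessian components).
        hxx = gaussian_filter(image, sigma, order=(0, 2))
        hyy = gaussian_filter(image, sigma, order=(2, 0))
        hxy = gaussian_filter(image, sigma, order=(1, 1))
        # Eigenvalue of the 2x2 Hessian with the largest magnitude.
        tmp = np.sqrt(((hxx - hyy) * 0.5) ** 2 + hxy ** 2)
        lam1 = (hxx + hyy) * 0.5 + tmp
        lam2 = (hxx + hyy) * 0.5 - tmp
        lam = np.where(np.abs(lam1) > np.abs(lam2), lam1, lam2)
        # Scale normalization (sigma^2) makes responses comparable across scales;
        # strongly negative eigenvalues mark bright lines on a dark background.
        best = np.maximum(best, sigma ** 2 * np.maximum(-lam, 0.0))
    return best

if __name__ == "__main__":
    img = np.zeros((64, 64))
    img[30:33, :] = 1.0               # a bright horizontal line
    print(ridge_response(img).max())  # peak response comes from the bright line
```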

372 citations


Patent
31 Mar 1995
TL;DR: In this article, a method for accurately surveying and determining the physical location of objects in a scene is described. It uses image data captured by one or more cameras and three points from the scene, which may either be measured after the images are captured or included in a calibrated target placed in the scene at the time of image capture.
Abstract: Methods and apparatus for accurately surveying and determining the physical location of objects in a scene are disclosed which use image data captured by one or more cameras and three points from the scene which may either be measured after the images are captured or may be included in the calibrated target placed in the scene at the time of image capture. Objects are located with respect to a three dimensional coordinate system defined with reference to the three points. The methods and apparatus permit rapid set up and capture of precise location data using simple apparatus and simple image processing. The precise location and orientation of the camera utilized to capture each scene is determined from image data, from the three point locations and from optical parameters of the camera.

202 citations


Patent
05 Apr 1995
TL;DR: In this paper, a digital motion analyzer for training and simulating physical skills is described; using a programmable digital signal processor and a universal accelerometer, it measures the acceleration and calculates the linear velocity, the angular velocity, the orientation, and the position of a moving object, and stores and plays back the motion using audiovisual means.
Abstract: A method of training and simulating physical skills using a digital motion analyzing device that measures the necessary and sufficient information to describe uniquely a rigid body motion. The device, comprising a programmable digital signal processor and a universal accelerometer, measures the acceleration and calculates the linear velocity, the angular velocity, the orientation, and the position of a moving object, and stores and plays back the motion using audiovisual means and compares it with other pre-recorded motions. The student can choose a model and try to imitate the model with the help of audiovisual means and biofeedback means. The device is portable. It can also be connected to a computer where the motion can be further analyzed by comparing it with a database comprising many other characteristic motions. If a projectile is involved, such as in a golf swing, the trajectory of the projectile is calculated.
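The patent's "universal accelerometer" design is not modeled here. As a rough sketch of how a rigid-body motion can be reconstructed from inertial samples, the following assumes an idealized sensor providing body-frame acceleration and angular velocity and integrates orientation, velocity, and position by dead reckoning; all names and values are illustrative.

```python
# Illustrative rigid-body dead reckoning, assuming an idealized sensor that
# provides body-frame acceleration and angular velocity (not the patent's
# "universal accelerometer" design).
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])  # world-frame gravity (m/s^2)

def skew(w):
    """Skew-symmetric matrix so that skew(w) @ v == np.cross(w, v)."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def integrate(accel_body, gyro_body, dt):
    """Integrate orientation R, velocity v and position p over the samples."""
    R = np.eye(3)                 # body-to-world rotation
    v = np.zeros(3)
    p = np.zeros(3)
    trajectory = []
    for a_b, w_b in zip(accel_body, gyro_body):
        # First-order update of the rotation matrix, then re-orthonormalize.
        R = R @ (np.eye(3) + skew(w_b) * dt)
        u, _, vt = np.linalg.svd(R)
        R = u @ vt
        # Rotate the specific force to the world frame and add gravity back.
        a_w = R @ a_b + GRAVITY
        v = v + a_w * dt
        p = p + v * dt
        trajectory.append((R.copy(), v.copy(), p.copy()))
    return trajectory

if __name__ == "__main__":
    n, dt = 100, 0.01
    accel = np.tile([0.0, 0.0, 9.81], (n, 1))   # stationary sensor reads +g
    gyro = np.zeros((n, 3))
    print(integrate(accel, gyro, dt)[-1][2])    # position stays near zero
```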

196 citations


Proceedings ArticleDOI
20 Jun 1995
TL;DR: This work develops techniques for computing a prototype trajectory of an ensemble of trajectories, for defining configuration states along the prototype, and for recognizing gestures from an unsegmented, continuous stream of sensor data.
Abstract: We define a gesture to be a sequence of states in a measurement or configuration space. For a given gesture, these states are used to capture both the repeatability and variability evidenced in a training set of example trajectories. The states are positioned along a prototype of the gesture, and shaped such that they are narrow in the directions in which the ensemble of examples is tightly constrained, and wide in directions in which a great deal of variability is observed. We develop techniques for computing a prototype trajectory of an ensemble of trajectories, for defining configuration states along the prototype, and for recognizing gestures from an unsegmented, continuous stream of sensor data. The approach is illustrated by application to a range of gesture-related sensory data: the two-dimensional movements of a mouse input device, the movement of the hand measured by a magnetic spatial position and orientation sensor, and, lastly, the changing eigenvector projection coefficients computed from an image sequence.
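For illustration only, the sketch below computes a prototype trajectory as the pointwise mean of resampled example trajectories and shapes one "state" per sample from the local covariance (narrow where examples agree, wide where they vary). The recognition stage on unsegmented streams is not shown, and the resampling scheme is an assumption, not the authors' procedure.

```python
# Sketch of a prototype trajectory and per-point "state" shapes from example
# gesture trajectories; a simplified stand-in for the paper's state model.
import numpy as np

def resample(traj, n):
    """Resample a (T, D) trajectory to n points by linear interpolation in time."""
    t_old = np.linspace(0.0, 1.0, len(traj))
    t_new = np.linspace(0.0, 1.0, n)
    return np.column_stack([np.interp(t_new, t_old, traj[:, d])
                            for d in range(traj.shape[1])])

def prototype_states(examples, n_states=20):
    """Return the mean prototype and a covariance per state position."""
    stacked = np.stack([resample(e, n_states) for e in examples])  # (N, n, D)
    prototype = stacked.mean(axis=0)                               # (n, D)
    covariances = [np.cov(stacked[:, i, :].T) for i in range(n_states)]
    return prototype, covariances

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Noisy 2D examples of the same "gesture": a quarter circle.
    t = np.linspace(0.0, np.pi / 2, 50)
    examples = [np.column_stack([np.cos(t), np.sin(t)]) + rng.normal(0, 0.02, (50, 2))
                for _ in range(10)]
    proto, covs = prototype_states(examples)
    # Wide/narrow directions of each state come from the covariance eigenvectors.
    widths = [np.sqrt(np.linalg.eigvalsh(c)) for c in covs]
    print(proto.shape, widths[0])
```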

189 citations


Patent
28 Mar 1995
TL;DR: In this paper, a symbol is represented as a square array of data cells surrounded by a border 30 of orientation and timing data cells, from which the orientation and timing for sampling each data cell can be determined.
Abstract: The present invention is a symbol 10 that includes a square array 12 of data cells 14 surrounded by a border 30 of orientation and timing data cells. The border 30 can be surrounded by an external data field 18 also including information data cells 20. The orientation and timing for sampling each data cell can be determined from the border 30 or from additional orientation and timing cells in the internal data field 12 or external data field 18. A system 40 and 42 is also included that captures an image of the symbol, determines symbol orientation, decodes the contents of the symbol and outputs the decoded contents to a display or other device. The present invention also includes a device 48 that can produce symbols on a substrate such as a label.

183 citations


Journal ArticleDOI
TL;DR: An algorithm for pose estimation based on the volume measurement of tetrahedra composed of feature-point triplets extracted from an arbitrary quadrangular target and the lens center of the vision system is proposed.
Abstract: Pose estimation is an important operation for many vision tasks. In this paper, the authors propose an algorithm for pose estimation based on the volume measurement of tetrahedra composed of feature-point triplets extracted from an arbitrary quadrangular target and the lens center of the vision system. The inputs to this algorithm are the six distances joining all feature pairs and the image coordinates of the quadrangular target. The outputs of this algorithm are the effective focal length of the vision system, the interior orientation parameters of the target, the exterior orientation parameters of the camera with respect to an arbitrary coordinate system if the target coordinates are known in this frame, and the final pose of the camera. The authors have also developed a shape restoration technique which is applied prior to pose recovery in order to reduce the effects of inaccuracies caused by image projection. An evaluation of the method has shown that this pose estimation technique is accurate and robust. Because it is based on a unique and closed form solution, its speed makes it a potential candidate for solving a variety of landmark-based tracking problems.
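The closed-form pose recovery itself is not reproduced here; the sketch below only shows the basic measurement the method builds on: the volume of the tetrahedron spanned by the lens center and a triplet of feature points, computed with the scalar triple product.

```python
# Volume of the tetrahedron spanned by the lens centre and a triplet of
# feature points -- the basic measurement the pose algorithm builds on (the
# closed-form pose recovery itself is not reproduced here).
import numpy as np

def tetra_volume(lens_center, p1, p2, p3):
    """Volume = |det([p1-c, p2-c, p3-c])| / 6 (scalar triple product)."""
    c = np.asarray(lens_center, dtype=float)
    m = np.column_stack([np.asarray(p1) - c,
                         np.asarray(p2) - c,
                         np.asarray(p3) - c])
    return abs(np.linalg.det(m)) / 6.0

if __name__ == "__main__":
    c = np.zeros(3)  # lens centre at the origin of the camera frame
    # A unit right-angled tetrahedron has volume 1/6.
    print(tetra_volume(c, [1, 0, 0], [0, 1, 0], [0, 0, 1]))
```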

179 citations


Journal ArticleDOI
TL;DR: Unlike earlier non-feature-based, curved surface shape-from-texture approaches, the assumption that the surface texture is isotropic is not required; surface texture homogeneity can be assumed instead.
Abstract: Presents a non-feature-based solution to the problem of computing the shape of curved surfaces from texture information. First, the use of local spatial-frequency spectra and their moments to describe texture is discussed and motivated. A new, more accurate method for measuring the local spatial-frequency moments of an image texture using Gabor elementary functions and their derivatives is presented. Also described is a technique for separating shading from texture information, which makes the shape-from-texture algorithm robust to the shading effects found in real imagery. Second, a detailed model for the projection of local spectra and spectral moments of any surface reflectance patterns (not just textures) is developed. Third, the conditions under which the projection model can be solved for the orientation of the surface at each point are explored. Unlike earlier non-feature-based, curved surface shape-from-texture approaches, the assumption that the surface texture is isotropic is not required; surface texture homogeneity can be assumed instead. The algorithm's ability to operate on anisotropic and nondeterministic textures, and on both smooth- and rough-textured surfaces, is demonstrated.
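As a rough illustration of local spatial-frequency moments estimated with a Gabor-style filter bank (not the paper's derivative-based estimator, and without the shape-from-texture model), the sketch below demodulates the image at a few centre frequencies, low-passes, and forms first moments of the local spectral energy.

```python
# Rough illustration of local spatial-frequency moments from a small Gabor
# filter bank; the paper's derivative-based moment estimator and the full
# shape-from-texture model are not reproduced.
import numpy as np
from scipy.ndimage import gaussian_filter

def gabor_response(image, fx, fy, sigma):
    """Complex Gabor-like response at centre frequency (fx, fy)."""
    y, x = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    carrier = np.exp(-2j * np.pi * (fx * x + fy * y))
    # Demodulate then low-pass: equivalent (up to a phase factor) to
    # filtering with a complex Gabor kernel.
    demod = image * carrier
    return gaussian_filter(demod.real, sigma) + 1j * gaussian_filter(demod.imag, sigma)

def local_frequency_moments(image, freqs, sigma=4.0):
    """First moments (mean fx, mean fy) of the local spectrum, per pixel."""
    energy_sum = np.zeros(image.shape)
    mx = np.zeros(image.shape)
    my = np.zeros(image.shape)
    for fx, fy in freqs:
        e = np.abs(gabor_response(image, fx, fy, sigma)) ** 2
        energy_sum += e
        mx += fx * e
        my += fy * e
    return mx / (energy_sum + 1e-12), my / (energy_sum + 1e-12)

if __name__ == "__main__":
    y, x = np.mgrid[0:128, 0:128]
    texture = np.sin(2 * np.pi * 0.1 * x)            # horizontal frequency 0.1
    bank = [(f, 0.0) for f in (0.05, 0.1, 0.2)] + [(0.0, f) for f in (0.05, 0.1, 0.2)]
    mean_fx, mean_fy = local_frequency_moments(texture, bank)
    print(mean_fx[64, 64], mean_fy[64, 64])          # close to (0.1, 0.0)
```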

147 citations


Proceedings ArticleDOI
23 Oct 1995
TL;DR: Results show that the selected Gabor elements are precisely localized on the edges of the images and give a local decomposition as linear combinations of "textons" in the textured regions.
Abstract: A crucial problem in image analysis is to construct efficient low-level representations of an image, providing precise characterization of features which compose it, such as edges and texture components. An image usually contains very different types of features, which have been successfully modelled by the very redundant family of 2D Gabor oriented wavelets, describing the local properties of the image: localization, scale, preferred orientation, amplitude and phase of the discontinuity. However, this model generates representations of very large size. Instead of decomposing a given image over this whole set of Gabor functions, we use an adaptive algorithm (called matching pursuit) to select the Gabor elements which approximate at best the image, corresponding to the main features of the image. This produces compact representation in terms of few features that reveal the local image properties. Results proved that the elements are precisely localized on the edges of the images, and give a local decomposition as linear combinations of "textons" in the textured regions. We introduce a fast algorithm to compute the matching pursuit decomposition.
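The paper's fast 2D implementation and dictionary design are not reproduced here; the following is a minimal 1D matching pursuit over a small Gabor dictionary, showing only the greedy selection-and-subtraction step. The dictionary parameters are illustrative.

```python
# Minimal 1D matching pursuit over a small Gabor dictionary -- an illustration
# of the greedy selection step, not the paper's fast 2D implementation.
import numpy as np

def gabor_atom(n, center, scale, freq):
    """Unit-norm 1D Gabor atom (Gaussian envelope times cosine)."""
    x = np.arange(n) - center
    g = np.exp(-0.5 * (x / scale) ** 2) * np.cos(2 * np.pi * freq * x)
    return g / np.linalg.norm(g)

def matching_pursuit(signal, dictionary, n_iter=10):
    """Greedily pick the atom with the largest inner product and subtract it."""
    residual = signal.astype(float).copy()
    decomposition = []
    for _ in range(n_iter):
        inner = dictionary @ residual          # correlation with every atom
        k = int(np.argmax(np.abs(inner)))
        decomposition.append((k, inner[k]))
        residual -= inner[k] * dictionary[k]   # atoms are unit-norm
    return decomposition, residual

if __name__ == "__main__":
    n = 256
    atoms = np.array([gabor_atom(n, c, s, f)
                      for c in range(16, n, 32)
                      for s in (4.0, 8.0, 16.0)
                      for f in (0.05, 0.1, 0.2)])
    signal = 3.0 * atoms[10] + 0.5 * atoms[40]
    decomp, res = matching_pursuit(signal, atoms, n_iter=5)
    print(decomp[0], np.linalg.norm(res))      # atom 10 is recovered first
```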

135 citations


Proceedings ArticleDOI
09 May 1995
TL;DR: A new measure of perceptual image quality based on a multiple channel human visual system (HVS) model for use in digital image compression that correlates better with perceptual image quality than the conventional SNR measure.
Abstract: We propose a new measure of perceptual image quality based on a multiple channel human visual system (HVS) model for use in digital image compression. The model incorporates the HVS light sensitivity, spatial frequency and orientation sensitivity, and masking effects. The model is based on the concept of local band-limited contrast (LBC) in oriented spatial frequency bands. This concept leads to a simple masking function. The model has the flexibility to account for the changes in frequency sensitivity as a function of local luminance and is consistent with masking experiments using gratings and edges. Numerical scaling experiments with a test panel and a set of test images that were coded using different coding algorithms showed that the proposed measure correlates better with perceptual image quality than the conventional SNR measure.
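The full HVS model is not reproduced here. As a rough illustration of a band-limited contrast measure, the sketch below divides a band-pass version of the image by the low-pass content below that band (a Peli-style definition); whether this matches the authors' exact formulation, including the oriented decomposition, is an assumption.

```python
# Rough illustration of local band-limited contrast: a band-pass component
# divided by the low-pass content below that band (a Peli-style definition;
# treating this as the paper's exact formulation is an assumption).
import numpy as np
from scipy.ndimage import gaussian_filter

def band_limited_contrast(image, sigma_fine, sigma_coarse, eps=1e-6):
    """Per-pixel contrast of the band between two Gaussian scales."""
    image = image.astype(float)
    fine = gaussian_filter(image, sigma_fine)
    coarse = gaussian_filter(image, sigma_coarse)
    bandpass = fine - coarse          # content in the band
    lowpass = coarse                  # local mean luminance below the band
    return bandpass / (lowpass + eps)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img = 100.0 + 20.0 * rng.standard_normal((128, 128))
    c = band_limited_contrast(img, sigma_fine=1.0, sigma_coarse=4.0)
    print(float(np.abs(c).mean()))    # typical local contrast in the band
```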

01 Jan 1995
TL;DR: In this article, a technique for characterization and segmentation of anisotropic patterns that exhibit a single local orientation is described, based on a gradient-square tensor built from Gaussian derivatives.
Abstract: This paper describes a technique for characterization and segmentation of anisotropic patterns that exhibit a single local orientation. Using Gaussian derivatives we construct a gradient-square tensor at a selected scale. Smoothing of this tensor allows us to combine information in a local neighborhood without canceling vectors pointing in opposite directions: whereas opposite vectors would cancel, their tensors reinforce. Consequently, the tensor characterizes orientation rather than direction. Usually this local neighborhood is at least a few times larger than the scale parameter of the gradient operators. The eigenvalues yield a measure for anisotropy, whereas the eigenvectors indicate the local orientation. In addition to these measures we can detect anomalies in textured patterns.

1. Introduction: Information from subsurface structures may help geologists in their search for hydrocarbons (oil and gas). In addition to seismic measurements, which are performed at the earth's surface, important information can be extracted from a borehole. This can be done either by downhole imaging of the borehole wall or by analyzing the removed borehole material, "the core". Core imaging requires careful drilling with a hollow drill bit. The cores are transported to the surface for further analysis. Apart from physical measurements, geologists are interested in the spatial organization of the acquired rock formations. We show that this can be done with the help of quantitative image analysis. The cylindrical cores can be cut longitudinally (slabbed), and digitization of the flat surface yields a 2D slabbed-core image. Quantitative information about the layer structure in a borehole may help geologists improve their interpretation. The approach to be followed is guided by a simple layer model of the earth's subsurface. These layers can be described by a number of parameters which may all vary as a function of depth. Some of these parameters have a direct geometric meaning (dip and azimuth), whereas others are much more difficult to express quantitatively in a unique way. In this paper we focus on orientation and anisotropy measurements applied to slabbed-core images.
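A compact sketch of the gradient-square (structure) tensor approach described above: Gaussian-derivative gradients, smoothing of the tensor components over a larger neighborhood, then orientation and anisotropy from the eigen-decomposition. The parameter values and the synthetic test pattern are illustrative.

```python
# Sketch of the gradient-square (structure) tensor: Gaussian-derivative
# gradients, tensor smoothing over a larger neighbourhood, then orientation
# and anisotropy from the eigen-decomposition. Parameter values are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

def orientation_and_anisotropy(image, grad_sigma=1.0, tensor_sigma=4.0):
    image = image.astype(float)
    gx = gaussian_filter(image, grad_sigma, order=(0, 1))  # d/dx
    gy = gaussian_filter(image, grad_sigma, order=(1, 0))  # d/dy
    # Smooth the tensor components: opposite gradient vectors reinforce here
    # instead of cancelling, so the result characterizes orientation.
    jxx = gaussian_filter(gx * gx, tensor_sigma)
    jyy = gaussian_filter(gy * gy, tensor_sigma)
    jxy = gaussian_filter(gx * gy, tensor_sigma)
    # Angle of the dominant eigenvector (the local gradient direction; the
    # stripe/layer orientation is perpendicular to it).
    orientation = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)
    tmp = np.sqrt(((jxx - jyy) * 0.5) ** 2 + jxy ** 2)
    lam1 = (jxx + jyy) * 0.5 + tmp
    lam2 = (jxx + jyy) * 0.5 - tmp
    anisotropy = (lam1 - lam2) / (lam1 + lam2 + 1e-12)
    return orientation, anisotropy

if __name__ == "__main__":
    y, x = np.mgrid[0:128, 0:128]
    stripes = np.sin(2 * np.pi * (x + y) / 16.0)   # layering at 45 degrees
    ori, ani = orientation_and_anisotropy(stripes)
    print(np.degrees(ori[64, 64]), ani[64, 64])    # ~45 deg gradient direction, anisotropy near 1
```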

Journal ArticleDOI
TL;DR: In this paper, the tendency toward the smallest possible surface energy is considered as the driving force for texture evolution, and simple expressions are obtained for crucial characteristics of that evolution, for instance the typical time (or thickness) at which the system switches from a random to a peaked distribution, and the time dependence of the spread of the orientational distribution of the crystallites in the growing film.

Journal ArticleDOI
TL;DR: It was found that the deformations of shading and/or highlights produced levels of performance similar to those obtained for the optical deformation of textured surfaces, suggesting that the human visual system utilizes a much richer array of optical information to support its perception of shape than is typically appreciated.
Abstract: One of the fundamental issues in the study of human perception concerns how the shapes of objects in the environment are visually specified from the measurable properties of optical stimulation. There are many different aspects of optical structure that are known to provide perceptually salient information about an object's three-dimensional form. Some of these properties, the so-called pictorial depth cues, are available within individual static images. These include texture gradients, linear perspective, and patterns of shading. Others are defined by the systematic transformations among a sequence of multiple images, and include the disparity between each eye's view in binocular vision, and the optical deformations that occur when objects are observed in motion. In the theoretical analysis of motion or binocular disparity, two distinct classes of optical phenomena need to be considered. One involves the optical transformations of identifiable image features, such as surface texture or the vertices of a polyhedron, for which it is possible to establish a point-to-point correspondence over multiple views. The ability to match corresponding features in different images is a necessary condition for most existing computational models for the analysis of 3-D shape from motion or stereo, but there are other types of optical transformations that occur frequently in natural vision for which this condition cannot be satisfied. These include the optical deformations of occlusion contours and smooth gradients of image shading. Patterns of shading in an image arise because of systematic changes in local surface orientation. Patches that are oriented perpendicularly to the prevailing direction of illumination reflect the greatest amount of light, while those that are parallel to the direction of illumination reflect the least.

Patent
28 Feb 1995
TL;DR: In this article, an ultrasound probe is positioned at a selected probe position and orientation relative to the region of interest, and echo signals representative of ultrasound echoes received from the region of interest are generated.
Abstract: A method and apparatus for ultrasound imaging utilize tissue-centered scan conversion to compensate for transducer motion. A reference point, typically located in a region of interest, is selected. An ultrasound probe is positioned at a selected probe position and orientation relative to the region of interest. The region of interest is ultrasonically scanned, and echo signals representative of ultrasound echoes received from the region of interest are generated. The echo signals are referenced to the probe position and orientation. The probe position and orientation relative to the reference point are determined, typically by a sensing device. The echo signals and the probe position and orientation are transformed to image signals for display. The process is repeated for different probe positions and orientations to obtain a plurality of images. Each of the images is referenced to the selected reference point as the probe position and orientation changes, thereby compensating for transducer motion.

Journal Article
TL;DR: In this article, the authors developed an automated method for alignment, sizing and quantification of images using three-dimensional reference templates to optimize the interpretation of myocardial SPECT.
Abstract: To optimize the interpretation of myocardial SPECT, we developed an automated method for alignment, sizing and quantification of images using three-dimensional reference templates. METHODS: Stress and rest reference templates were built using a hybrid three-dimensional image registration scheme based on principal-axes and simplex-minimization techniques. Normal patient studies were correlated to a common orientation, position and size. Aligned volumes were added to each other to create amalgamated templates. Separate templates were built for normal stress and rest SPECT 99mTc-sestamibi scans of 23 men and 15 women. The same algorithm was used to correlate abnormal test-patient studies with respective normal templates. The robustness of the fitting algorithm was evaluated by registering data with simulated defects and by repeated registrations after arbitrary misalignment of images. To quantify regional count distribution, 18 three-dimensional segments were outlined on the templates, and counts in the segment were evaluated for all test patients. RESULTS: Our technique provided accurate and reproducible alignment of the images and compensated for varying dimensions of the myocardium by adjusting scaling parameters. The algorithm successfully registered both normal and abnormal studies. The mean registration errors caused by simulated defects were 1.5 mm for position, 1.3 degrees for tilt and 5.3% for sizing (stress images), and 1.4 mm, 2.0 degrees and 3.7% (rest images); these errors were below the limits of visual assessment. CONCLUSION: Automated three-dimensional image fitting to normal templates can be used for reproducible quantification of myocardial SPECT, eliminating operator-dependence of the results.

Patent
06 Jun 1995
TL;DR: In this paper, the CAD design system is used in designing the targets as well as for the assembly process using the targets, and a plurality of targets can also be used to monitor and inspect a forming process such as on a sheet panel.
Abstract: Methods for assembling, handling, and fabricating are disclosed in which targets are used on objects. The targets can be specifically applied to the object, or can be an otherwise normal feature of the object. Conveniently, the targets are removable from the object or covered by another object during an assembling process. One or more robots and imaging devices for the targets are used. The robots can be used to handle or assemble a part, or a fixture may be used in conjunction with the robots. Conveniently, the CAD design system is used in designing the targets as well as for the assembly process using the targets. A plurality of targets can also be used to monitor and inspect a forming process such as on a sheet panel.

Patent
14 Sep 1995
TL;DR: In this paper, a three-dimensional image display apparatus has an image display device whose display surface is held fixed relative to the viewer, a transparent body for accommodating the image display device therein, an encoder for detecting rotation angles of the device with respect to the transparent body, an orientation detector for calculating the difference between the orientation of the transparent body and the orientation of the image display device, and an image generator for generating an image for display on the device in accordance with the output of the orientation detector.
Abstract: The orientation of an image display device is maintained fixed relative to the viewer, the image display device is enclosed in a transparent body, and by detecting rotation angles of the image display device with respect to the transparent body, an image that is to be viewed when the transparent body is held in a hand or the like is displayed on the image display device, thereby enabling the viewer to view a displayed object from any desired direction as if he is holding the object in his hand. The three-dimensional image display apparatus has an image display device whose display surface is held fixed relative to the viewer, a transparent body for accommodating the image display device therein, an encoder for detecting rotation angles of the image display device with respect to the transparent body, an orientation detector for calculating the difference between the orientation of the transparent body and the orientation of the image display device on the basis of the output of the encoder, and an image generator for generating an image for display on the image display device in accordance with the output of the orientation detector.

Patent
30 Jan 1995
TL;DR: In this paper, a flat display can display an image corresponding to text, data and graphic information in several orientations, and a plurality of switches enable the user to select an orientation for the image relative to the orientation of the flat display.
Abstract: A pen-based computer including a housing and a flat display integral therewith. The flat display can display an image corresponding to text, data and graphic information in several orientations. A plurality of switches enables the user to select an orientation for the image relative to the orientation of the flat display. In one embodiment the switches are mercury switches positioned to automatically align the orientation of the image relative to motion of the flat panel display relative to the force of gravity. An optional additional switch resets the orientation of the image to a predetermined orientation or prevents the reorientation of the image responsive to the mercury switches.

Journal ArticleDOI
TL;DR: A new method is presented for visualizing data as they are generated from real-time applications that allows viewers to perform simple data analysis tasks such as detection of data groups and boundaries, target detection, and estimation on a dynamic sequence of data frames.
Abstract: A new method is presented for visualizing data as they are generated from real-time applications. These techniques allow viewers to perform simple data analysis tasks such as detection of data groups and boundaries, target detection, and estimation. The goal is to do this rapidly and accurately on a dynamic sequence of data frames. Our techniques take advantage of an ability of the human visual system called preattentive processing. Preattentive processing refers to an initial organization of the visual system based on operations believed to be rapid, automatic, and spatially parallel. Examples of visual features that can be detected in this way include hue, orientation, intensity, size, curvature, and line length. We believe that studies from preattentive processing should be used to assist in the design of visualization tools, especially those for which high speed target, boundary, and region detection are important. Previous work has shown that results from research in preattentive processing can be used to build visualization tools that allow rapid and accurate analysis of individual, static data frames. We extend these techniques to a dynamic real-time environment. This allows users to perform similar tasks on dynamic sequences of frames, exactly like those generated by real-time systems such as visual interactive simulation. We studied two known preattentive features, hue and curvature. The primary question investigated was whether rapid and accurate target and boundary detection in dynamic sequences is possible using these features. Behavioral experiments were run that simulated displays from our preattentive visualization tools. Analysis of the results of the experiments showed that rapid and accurate target and boundary detection is possible with both hue and curvature. A second question, whether interactions occur between the two features in a real-time environment, was answered positively.

Journal ArticleDOI
TL;DR: The current model and algorithm are more accurate, yet substantially simpler, than earlier versions of this approach, and are tested on photographs of real-world surfaces.

Patent
31 Jul 1995
TL;DR: A display device for a camera finder includes a display for displaying data, a position sensor which outputs a signal that depends on the orientation of the camera finder, and a controller for controlling the display such that the data is displayed along one of several predetermined directions, as discussed by the authors.
Abstract: A display device for a camera finder includes a display for displaying data, a position sensor which outputs a signal that is dependent on an orientation of the camera finder, and a controller for controlling the display such that the data is displayed along one of several predetermined directions. The predetermined direction that is selected depends on the output signal of the position sensor.

Patent
18 Jul 1995
TL;DR: In this paper, a curved band image that includes an image of an orientation feature is formed and transformed into a straight band image; the longitudinal position of the orientation feature is then determined in the coordinate system of the straight band image and converted into an angular displacement to provide the orientation of the wafer.
Abstract: The invention can be used to find the orientation of a semiconductor wafer without wafer handling, i.e., in a non-contact manner. The invention uses knowledge of the position of a semiconductor wafer, and the position of an orientation feature of the wafer, to find the orientation of the wafer. According to the invention, a curved band image is formed that includes an image of an orientation feature. The curved band image is then transformed into a straight band image. The longitudinal position of the orientation feature is then determined in the coordinate system of the straight band image, which longitudinal position is then converted into an angular displacement in the coordinate system of the curved band image to provide the orientation of the wafer.
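A rough sketch of the curved-band-to-straight-band idea: sample an annular band around a known wafer center into polar coordinates, locate the orientation feature along the unwrapped band, and convert its column index back to an angle. The feature model used here (the darkest column, i.e. a notch) and the parameter names are illustrative assumptions; sub-pixel interpolation and the patent's specific transformation are omitted.

```python
# Rough sketch of unwrapping a curved band around the wafer centre into a
# straight band and converting the notch column back to an angle. The notch
# model (darkest column) and parameter names are illustrative assumptions.
import numpy as np

def unwrap_band(image, center, r_inner, r_outer, n_theta=720, n_r=20):
    """Sample an annular band into a (n_r, n_theta) straight band image."""
    cy, cx = center
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    radii = np.linspace(r_inner, r_outer, n_r)
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, image.shape[0] - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, image.shape[1] - 1)
    return image[ys, xs], thetas

def wafer_orientation(image, center, r_inner, r_outer):
    """Angle (radians) of the orientation feature, taken as the darkest column."""
    band, thetas = unwrap_band(image, center, r_inner, r_outer)
    column_means = band.mean(axis=0)
    return thetas[int(np.argmin(column_means))]

if __name__ == "__main__":
    img = np.zeros((200, 200))
    yy, xx = np.mgrid[0:200, 0:200]
    img[(yy - 100) ** 2 + (xx - 100) ** 2 <= 80 ** 2] = 1.0   # bright wafer disc
    # Cut a small "notch" at 30 degrees on the wafer edge.
    notch = (np.hypot(yy - (100 + 78 * np.sin(np.pi / 6)),
                      xx - (100 + 78 * np.cos(np.pi / 6))) < 5)
    img[notch] = 0.0
    print(np.degrees(wafer_orientation(img, (100, 100), 70, 79)))  # ~30
```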

Journal ArticleDOI
TL;DR: An orientation-adaptive boundary estimation process, embedded in a multiresolution pyramidal structure, that allows the use of different clustering procedures without spatial connectivity constraints is proposed.

Proceedings ArticleDOI
20 Mar 1995
TL;DR: Results of some of the experiments designed to compare various similarity measures for application to image databases are reported; the authors are currently working with texture images and intend to work with face images in the near future.
Abstract: Similarity between images is used for storage and retrieval in image databases. In the literature, several similarity measures have been proposed that may be broadly categorized as: (1) metric based, (2) set-theoretic based, and (3) decision-theoretic based measures. In each category, measures based on crisp logic as well as fuzzy logic are available. In some applications such as image databases, measures based on fuzzy logic would appear to be naturally better suited, although so far no comprehensive experimental study has been undertaken. In this paper, we report results of some of the experiments designed to compare various similarity measures for application to image databases. We are currently working with texture images and intend to work with face images in the near future. As a first step for comparison, the similarity matrix for each of the similarity measures is computed over a set of selected textures, and the matrices are presented as visual images. Comparative analysis of these images reveals the relative characteristics of each of these measures. Further experiments are needed to study their sensitivity to small changes in images such as illumination, magnification, orientation, etc. We describe these experiments (sensitivity analysis, transition analysis, etc.) that are currently in progress.
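The specific measures compared in the paper are not reproduced here; as one minimal example of a fuzzy set-theoretic similarity between feature histograms, the sketch below uses a fuzzy Jaccard-style ratio of minimum to maximum memberships. This is a stand-in, not necessarily one of the measures studied by the authors.

```python
# Minimal example of a fuzzy set-theoretic similarity between two feature
# histograms (a fuzzy Jaccard-style min/max ratio) -- a stand-in, not
# necessarily one of the measures compared in the paper.
import numpy as np

def fuzzy_similarity(h1, h2):
    """Ratio of fuzzy intersection to fuzzy union of two membership vectors."""
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    return np.minimum(h1, h2).sum() / np.maximum(h1, h2).sum()

if __name__ == "__main__":
    # Normalized texture-feature histograms (illustrative numbers).
    a = np.array([0.1, 0.4, 0.3, 0.2])
    b = np.array([0.2, 0.3, 0.3, 0.2])
    print(fuzzy_similarity(a, b))   # 1.0 means identical, 0.0 means disjoint
```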

Journal ArticleDOI
TL;DR: The parallel processing of acoustic information in grasshoppers corresponds to the evolution of acoustic communication in Acridids, as song evolved only when the ability to hear and localize sound was already present.
Abstract: In grasshoppers the acoustic information for pattern recognition and directional analysis is processed via parallel channels and not serially. This can be concluded from the following results established by behavioural experiments:

Patent
24 Jan 1995
TL;DR: In this article, a method and an apparatus are disclosed that extract digital elevation data from a pair of stereo images with two corresponding sets of airborne control data associated with each image of the stereo image pair.
Abstract: A method and an apparatus are disclosed that extract digital elevation data from a pair of stereo images with two corresponding sets of airborne control data associated with each image of the stereo image pair. Point and edge features are identified from the stereo image pair and projected from the respective camera stations of the stereo images onto a horizontal projection plane using the respective set of airborne control data, including the position of the camera station, the interior orientation and calibration data of the camera, and the respective roll, pitch, yaw, and flight bearing angles of each image. The space coordinates of each projection are compared in object space to obtain an object-space parallax. The topographic elevation of each feature is derived autonomously from said object-space parallax, a base length, and the altitude of a camera station, eliminating the need for a stereoscopic viewing device or ground control data.
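The patent's projection-plane parallax procedure is not reproduced here; as a textbook illustration of the underlying geometry, the sketch below recovers depth from disparity for a normalized vertical stereo pair and derives elevation as flying height minus depth. All parameter names and numbers are illustrative.

```python
# Textbook relation between image parallax (disparity) and elevation for a
# normalized vertical stereo pair -- an illustration of the underlying
# geometry, not the patent's projection-plane procedure.
def elevation_from_parallax(disparity_px, focal_px, base_m, flying_height_m):
    """Depth Z = f*B/d; terrain elevation = flying height - Z (flat datum)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    depth = focal_px * base_m / disparity_px
    return flying_height_m - depth

if __name__ == "__main__":
    # Illustrative numbers: 10000-px focal length, 600 m base, 3000 m altitude.
    print(elevation_from_parallax(disparity_px=2100.0, focal_px=10000.0,
                                  base_m=600.0, flying_height_m=3000.0))
```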

Journal ArticleDOI
TL;DR: In this paper, the authors developed an algorithm based on correlation analysis that detects and associates events with similar waveforms, which can provide useful insights into geometry and style of faulting at depth within the crust.
Abstract: Highly similar waveforms of different earthquakes are due to similar focal mechanisms and common propagation paths. The relative hypocentral locations of events in clusters of similar earthquakes can provide useful insights into geometry and style of faulting at depth within the crust. The detection of such earthquake clusters within large data sets can only be accomplished efficiently by means of an automatic procedure. Therefore, we developed an algorithm based on correlation analysis that detects and associates events with similar waveforms. The algorithm has been applied to a data set recorded in the western Swiss Alps: 619 out of a total of 1497 events exhibit similarities with other events. Based on a more detailed investigation of two selected clusters with known focal mechanisms, it could be shown that the active fault planes correspond to neotectonic structures mapped in the study area. Due to their oblique orientation relative to the larger-scale epicentral alignment, these faults have been interpreted as Riedel shears.
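A compact sketch of the correlation-based association step: the maximum normalized cross-correlation between event waveforms over all lags, with events linked when the coefficient exceeds a threshold. The threshold value and the single-link grouping rule are illustrative assumptions, not the authors' exact procedure.

```python
# Sketch of correlation-based event association: maximum normalized
# cross-correlation over lags, linking events above a threshold. The
# threshold and the simple grouping rule are illustrative assumptions.
import numpy as np

def max_norm_xcorr(a, b):
    """Maximum normalized cross-correlation between two waveforms over all lags."""
    a = (a - a.mean()) / (np.linalg.norm(a - a.mean()) + 1e-12)
    b = (b - b.mean()) / (np.linalg.norm(b - b.mean()) + 1e-12)
    return float(np.max(np.correlate(a, b, mode="full")))

def cluster_events(waveforms, threshold=0.8):
    """Group events whose pairwise correlation exceeds the threshold (single link)."""
    n = len(waveforms)
    labels = list(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if max_norm_xcorr(waveforms[i], waveforms[j]) >= threshold:
                old, new = labels[j], labels[i]
                labels = [new if lab == old else lab for lab in labels]
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    t = np.linspace(0, 1, 200)
    template = np.sin(2 * np.pi * 8 * t) * np.exp(-4 * t)
    events = [np.roll(template, 5) + 0.05 * rng.standard_normal(200),
              template + 0.05 * rng.standard_normal(200),
              rng.standard_normal(200)]                   # an unrelated event
    print(cluster_events(events))   # the first two events share a label
```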

Journal ArticleDOI
TL;DR: It is shown that there are a number of trade-offs in the density with which information can be displayed using texture, and a trade-off between the size of the texture elements and the precision with which the location can be specified.
Abstract: Results from vision research are applied to the synthesis of visual texture for the purposes of information display. The literature surveyed suggests that the human visual system processes spatial information by means of parallel arrays of neurons that can be modeled by Gabor functions. Based on the Gabor model, it is argued that the fundamental dimensions of texture for human perception are orientation, size (1/frequency), and contrast. It is shown that there are a number of trade-offs in the density with which information can be displayed using texture. Two of these are (1) a trade-off between the size of the texture elements and the precision with which their location can be specified, and (2) a trade-off between the precision with which texture orientation can be specified and the precision with which texture size can be specified. Two algorithms for generating texture are included.
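The paper's two texture-generation algorithms are not reproduced here; the following is a minimal sketch of a single texture element as a Gabor patch whose orientation, size (1/frequency) and contrast are the display dimensions discussed above. Parameter names are illustrative.

```python
# Minimal Gabor texture element whose orientation, size (1/frequency) and
# contrast are the display dimensions discussed above; the paper's own
# texture-generation algorithms are not reproduced here.
import numpy as np

def gabor_patch(size_px, orientation_rad, wavelength_px, contrast, sigma_px):
    """Return a (size_px, size_px) Gabor patch in the range [-contrast, contrast]."""
    half = size_px // 2
    y, x = np.mgrid[-half:size_px - half, -half:size_px - half]
    # Rotate coordinates so the carrier grating has the requested orientation.
    xr = x * np.cos(orientation_rad) + y * np.sin(orientation_rad)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma_px ** 2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength_px)
    return contrast * envelope * carrier

if __name__ == "__main__":
    patch = gabor_patch(size_px=64, orientation_rad=np.pi / 4,
                        wavelength_px=8.0, contrast=0.5, sigma_px=10.0)
    print(patch.shape, float(patch.max()))   # peak equals the contrast value
```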

Proceedings ArticleDOI
25 Sep 1995
TL;DR: This paper presents the vision-based road detection system currently operative on the MOB-LAB land vehicle, based on a full-custom, low-cost, massively parallel system that achieves real-time performance in the processing of image sequences thanks to an extremely efficient implementation of the algorithm.
Abstract: This paper presents the vision-based road detection system currently operative on the MOB-LAB land vehicle. Based on a full-custom, low-cost, massively parallel system, it achieves real-time performance (≈17 Hz) in the processing of image sequences thanks to an extremely efficient implementation of the algorithm. Assuming a flat road and that the complete set of acquisition parameters (camera position, orientation, optics) is known, the system is capable of detecting road markings on structured roads even in extremely severe shadow conditions.
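The MOB-LAB implementation is not reproduced here. As a rough sketch of what the flat-road assumption buys geometrically, the following back-projects a pixel onto the road plane given camera height, pitch, and intrinsics, which is the kind of mapping that lets lane markings be reasoned about in road coordinates. All parameter values are illustrative assumptions.

```python
# Rough sketch of the flat-road assumption: back-project a pixel onto the road
# plane given camera height, pitch and intrinsics. Parameter values are
# illustrative; this is not the MOB-LAB system's implementation.
import numpy as np

def pixel_to_road(u, v, focal_px, cx, cy, cam_height_m, pitch_rad):
    """Road-plane coordinates (x right, y forward, in metres) of pixel (u, v)."""
    # Ray direction in the camera frame (x right, y down, z forward).
    xc = (u - cx) / focal_px
    yc = (v - cy) / focal_px
    zc = 1.0
    # Intersect the ray with the ground plane of a camera pitched down by pitch_rad.
    denom = yc * np.cos(pitch_rad) + zc * np.sin(pitch_rad)
    if denom <= 0:
        raise ValueError("pixel is at or above the horizon")
    t = cam_height_m / denom                 # ray length to the ground plane
    x_world = t * xc
    y_world = t * (-yc * np.sin(pitch_rad) + zc * np.cos(pitch_rad))
    return x_world, y_world

if __name__ == "__main__":
    # A pixel near the bottom-centre of a 640x480 image maps to a point a few
    # metres ahead of the camera.
    print(pixel_to_road(u=320, v=400, focal_px=500.0, cx=320.0, cy=240.0,
                        cam_height_m=1.2, pitch_rad=np.deg2rad(5.0)))
```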