scispace - formally typeset

Orientation (computer vision)

About: Orientation (computer vision) is a research topic. Over its lifetime, 17,196 publications have been published within this topic, receiving 358,181 citations.


Papers
Patent
04 Nov 2003
TL;DR: In this paper, a method is presented that correlates time-stamped tracking data from identification tags with images captured by a camera of known location, orientation, and field of view, so that the tracking data for each image can be reduced to the subset of people and/or objects actually visible within it.
Abstract: A method for correlating tracking data associated with an activity occurring in a three-dimensional space with images captured within the space comprises the steps of: (a) locating a camera with respect to the three-dimensional space, wherein the camera at a given location has a determinable orientation and field of view that encompasses at least a portion of the space; (b) capturing a plurality of images with the camera and storing data corresponding to the images, including a capture time for each image; (c) capturing tracking data from identification tags attached to the people and/or objects within the space and storing the tracking data, including a tag capture time for each time that a tag is remotely accessed; (d) correlating each image and the tracking data by interrelating tracking data having a tag capture time in substantial correspondence with the capture time of each image, thereby generating track data corresponding to each image; (e) utilizing the track data to determine positions of the people and/or objects within the three dimensional space at the capture time of each image; and (f) utilizing the location and orientation of the camera to determine the portion of the space captured in each image and thereby reduce the track data to a track data subset corresponding to people and/or objects positioned within the portion of space captured in each image.

198 citations
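The patented steps (a)-(f) above amount to a time-window join between tag reads and image capture times, followed by a geometric field-of-view filter. A minimal sketch with invented data, where the image list, tag reads, 0.5 s tolerance, and the axis-aligned field-of-view footprint are all illustrative assumptions rather than anything specified in the patent:

```python
# Hypothetical records: each image has a capture time; each tag read has a
# tag id, a 3D position, and a tag capture time.
images = [{"id": "img1", "t": 10.0}, {"id": "img2", "t": 12.0}]
tag_reads = [
    {"tag": "A", "pos": (1.0, 2.0, 0.0), "t": 10.1},
    {"tag": "B", "pos": (4.0, 0.5, 1.0), "t": 11.9},
    {"tag": "C", "pos": (9.0, 9.0, 0.0), "t": 15.0},
]

def correlate(images, tag_reads, tol=0.5):
    """Step (d): pair each image with the tag reads whose capture time is in
    substantial correspondence (here, within tol seconds) with the image's."""
    return {
        img["id"]: [r for r in tag_reads if abs(r["t"] - img["t"]) <= tol]
        for img in images
    }

def in_field_of_view(pos, fov_x=(0.0, 5.0), fov_y=(0.0, 5.0)):
    """Step (f), heavily simplified: an axis-aligned ground footprint stands
    in for the camera's true location, orientation, and field of view."""
    x, y, _ = pos
    return fov_x[0] <= x <= fov_x[1] and fov_y[0] <= y <= fov_y[1]

track = correlate(images, tag_reads)
subset = {k: [r for r in v if in_field_of_view(r["pos"])] for k, v in track.items()}
```

Tag C is dropped because no image was captured near its read time; the remaining reads survive the footprint filter, giving the per-image track-data subset.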

Journal ArticleDOI
TL;DR: A novel face representation and recognition approach by exploring information jointly in image space, scale and orientation domains by convolving multiscale and multi-orientation Gabor filters is proposed.
Abstract: Information jointly contained in image space, scale and orientation domains can provide rich important clues not seen in either individual of these domains. The position, spatial frequency and orientation selectivity properties are believed to have an important role in visual perception. This paper proposes a novel face representation and recognition approach by exploring information jointly in image space, scale and orientation domains. Specifically, the face image is first decomposed into different scale and orientation responses by convolving multiscale and multi-orientation Gabor filters. Second, local binary pattern analysis is used to describe the neighboring relationship not only in image space, but also in different scale and orientation responses. This way, information from different domains is explored to give a good face representation for recognition. Discriminant classification is then performed based upon weighted histogram intersection or conditional mutual information with linear discriminant analysis techniques. Extensive experimental results on FERET, AR, and FRGC ver 2.0 databases show the significant advantages of the proposed method over the existing ones.

197 citations
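The pipeline described in the abstract (multiscale, multi-orientation Gabor decomposition, then local binary pattern analysis of each response, then histogram-based matching) can be sketched with NumPy alone. The kernel size, scales, orientation count, and the naive FFT convolution below are illustrative choices, not the paper's exact parameters, and the discriminant-classification stage is omitted:

```python
import numpy as np

def gabor_kernel(sigma, theta, lam, size=9):
    # Real part of a Gabor filter at orientation theta and wavelength lam.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def lbp(img):
    # Basic 8-neighbour local binary pattern for interior pixels.
    c = img[1:-1, 1:-1]
    code = np.zeros_like(c, dtype=np.uint8)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def face_descriptor(img, scales=(2.0, 4.0), orientations=4):
    # Convolve with each (scale, orientation) Gabor filter, take the LBP of
    # the response, and concatenate the 256-bin histograms.
    hists = []
    for sigma in scales:
        for k in range(orientations):
            theta = k * np.pi / orientations
            kern = gabor_kernel(sigma, theta, lam=4 * sigma)
            # Naive circular convolution via FFT, for brevity.
            resp = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kern, img.shape)))
            hists.append(np.bincount(lbp(resp).ravel(), minlength=256))
    return np.concatenate(hists)

rng = np.random.default_rng(0)
face = rng.random((32, 32))
d = face_descriptor(face)  # 2 scales x 4 orientations x 256 bins = 2048 values
```

Histograms from the different scale/orientation responses could then be compared by weighted histogram intersection, as the paper does, or fed to a linear classifier.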

Proceedings ArticleDOI
26 Jun 2006
TL;DR: This work presents a new volumetric method for reconstructing watertight triangle meshes from arbitrary, unoriented point clouds that efficiently produces solid models of low genus even for noisy and highly irregular data containing large holes, without losing fine details in densely sampled regions.
Abstract: We present a new volumetric method for reconstructing watertight triangle meshes from arbitrary, unoriented point clouds. While previous techniques usually reconstruct surfaces as the zero level-set of a signed distance function, our method uses an unsigned distance function and hence does not require any information about the local surface orientation. Our algorithm estimates local surface confidence values within a dilated crust around the input samples. The surface which maximizes the global confidence is then extracted by computing the minimum cut of a weighted spatial graph structure. We present an algorithm, which efficiently converts this cut into a closed, manifold triangle mesh with a minimal number of vertices. The use of an unsigned distance function avoids the topological noise artifacts caused by misalignment of 3D scans, which are common to most volumetric reconstruction techniques. Due to a hierarchical approach, our method efficiently produces solid models of low genus even for noisy and highly irregular data containing large holes, without losing fine details in densely sampled regions. We show several examples for different application settings such as model generation from raw laser-scanned data, image-based 3D reconstruction, and mesh repair.

197 citations
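One ingredient of the method, the unsigned distance function evaluated on a regular grid and the "dilated crust" of nodes near the input samples, is easy to illustrate; the weighted-graph minimum cut and mesh extraction are not shown. The grid resolution, sample points, and crust threshold below are arbitrary assumptions:

```python
import numpy as np

# Unoriented 2D samples standing in for a scanned point cloud; no normals
# or inside/outside labels are needed, unlike a signed distance function.
points = np.array([[0.5, 0.5], [0.5, 0.6], [0.6, 0.5]])
n = 16
xs = np.linspace(0.0, 1.0, n)
gx, gy = np.meshgrid(xs, xs, indexing="ij")
grid = np.stack([gx.ravel(), gy.ravel()], axis=1)

# Unsigned distance: minimum distance from each grid node to any sample.
dists = np.min(np.linalg.norm(grid[:, None, :] - points[None, :, :], axis=2), axis=1)
udf = dists.reshape(n, n)

# "Dilated crust": the band of nodes close to the samples. The actual surface
# would later be extracted inside this band via a weighted-graph minimum cut.
crust = udf <= 0.15
```

Because the distance is unsigned, misaligned scans cannot flip the sign of the field, which is the source of the topological noise artifacts the abstract mentions.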

Patent
11 Jun 2007
TL;DR: In this paper, a method of analyzing data based on the physiological orientation of a driver is provided, in which data describing the driver's gaze direction is processed and criteria defining a location of driver interest are determined.
Abstract: A method of analyzing data based on the physiological orientation of a driver is provided. Data descriptive of a driver's gaze direction is processed, and criteria defining a location of driver interest are determined. Based on the determined criteria, gaze-direction instances are classified as either on-location or off-location. The classified instances can then be used for further analysis, generally relating to times of elevated driver workload rather than driver drowsiness. The classified instances are transformed into one of two binary values (e.g., 1 and 0) representative of whether the respective classified instance is on or off location. The use of a binary value makes processing and analysis of the data faster and more efficient. Furthermore, classification of at least some of the off-location gaze-direction instances can be inferred from the failure to meet the determined criteria for being classified as an on-location driver gaze-direction instance.

196 citations
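The classification-to-binary step described above can be sketched in a few lines. The yaw/pitch samples, the target location, and the 10-degree criterion are invented for illustration; as in the patent, anything failing the on-location criterion is inferred to be off-location:

```python
import math

# Hypothetical gaze samples as (yaw, pitch) angles in degrees.
samples = [(2.0, 1.0), (25.0, -3.0), (4.0, 4.0), (40.0, 10.0)]
TARGET = (0.0, 0.0)      # assumed location of driver interest (straight ahead)
THRESHOLD_DEG = 10.0     # assumed angular criterion

def on_location(yaw, pitch):
    # Criterion check; a sample failing it is inferred to be off-location.
    return math.hypot(yaw - TARGET[0], pitch - TARGET[1]) <= THRESHOLD_DEG

# Transform each classified instance into a binary value (1 = on-location,
# 0 = off-location), keeping downstream workload analysis fast and simple.
binary = [1 if on_location(y, p) else 0 for y, p in samples]
```

Working with a stream of 0/1 values rather than raw angles is what makes the later aggregation (e.g., fraction of time on-location during high-workload intervals) cheap.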

Journal ArticleDOI
TL;DR: The randomness and mean orientation angle maps generated using the adaptive decomposition significantly improve the physical interpretation of the scattering observed at the three different frequencies.
Abstract: Previous model-based decomposition techniques are applicable to a limited range of vegetation types because of their specific assumptions about the volume scattering component. Furthermore, most of these techniques use the same model, or just a few models, to characterize the volume scattering component in the decomposition for all pixels in an image. In this paper, we extend the model-based decomposition idea by creating an adaptive model-based decomposition technique, allowing us to estimate both the mean orientation angle and a degree of randomness for the canopy scattering for each pixel in an image. No scattering reflection symmetry assumption is required to determine the volume contribution. We examined the usefulness of the proposed decomposition technique by decomposing the covariance matrix using the National Aeronautics and Space Administration/Jet Propulsion Laboratory Airborne Synthetic Aperture Radar data at the C-, L-, and P-bands. The randomness and mean orientation angle maps generated using our adaptive decomposition significantly improve the physical interpretation of the scattering observed at the three different frequencies.

196 citations
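The two canopy parameters the adaptive decomposition estimates, a mean orientation angle and a degree of randomness, can be illustrated by averaging a rotated-dipole coherency matrix over an orientation distribution. This is a generic volume-scattering construction, not the authors' exact model, and the per-pixel fit to the measured covariance matrix is omitted:

```python
import numpy as np

def dipole_coherency(theta):
    # Pauli-basis coherency matrix of a thin dipole rotated by theta
    # (a standard single-scatterer model; normalization is illustrative).
    k = np.array([1.0, np.cos(2 * theta), np.sin(2 * theta)]) / np.sqrt(2)
    return np.outer(k, k)

def volume_coherency(mean_angle, sigma, n=361):
    # Average the dipole over a Gaussian-weighted orientation distribution:
    # mean_angle is the mean orientation, sigma the degree of randomness.
    # As sigma grows, the average approaches the uniform-orientation volume.
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n)
    w = np.exp(-0.5 * ((thetas - mean_angle) / sigma) ** 2)
    w /= w.sum()
    return sum(wi * dipole_coherency(t) for wi, t in zip(w, thetas))

T_low = volume_coherency(0.0, 0.05)   # nearly deterministic orientation
T_high = volume_coherency(0.0, 10.0)  # near-uniform: high randomness
```

Fitting (mean_angle, sigma) per pixel, instead of assuming one fixed volume model for the whole image, is the adaptive step the abstract describes.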


Network Information

Related Topics (5)
Segmentation: 63.2K papers, 1.2M citations, 82% related
Pixel: 136.5K papers, 1.5M citations, 79% related
Image segmentation: 79.6K papers, 1.8M citations, 78% related
Image processing: 229.9K papers, 3.5M citations, 77% related
Feature (computer vision): 128.2K papers, 1.7M citations, 76% related
Performance

Metrics: number of papers in the topic in previous years

Year  Papers
2022  12
2021  535
2020  771
2019  830
2018  727
2017  691