Topic
Orientation (computer vision)
About: Orientation (computer vision) is a research topic. Over its lifetime, 17,196 publications have been published within this topic, receiving 358,181 citations.
Papers published on a yearly basis
Papers
21 May 1992
TL;DR: In this article, an occurrence of a predefined object in an image is recognized by receiving image data, convolving the image data with a set of predefined functions to resolve them into occurrences of predefined elementary features, and examining those occurrences for a combination of the elementary features that is characteristic of the predefined object.
Abstract: An occurrence of a predefined object in an image is recognized by receiving image data, convolving the image data with a set of predefined functions to analyze the image data into occurrences of predefined elementary features, and examining the occurrences for an occurrence of a predefined combination of the elementary features that is characteristic of the predefined object. Preferably the image data are convolved directly with a first predefined function to determine blob responses, and a second predefined function to determine ganglia responses indicating edges of objects. Then the ganglia responses are convolved with a third predefined function to determine simple responses indicating lines in the image, and the simple responses are combined with the ganglia responses to determine complex responses indicating terminated line segments in the image. A pointing finger, for example, is recognized from the combination of a blob response and a complex response. The method, for example, permits a data input terminal to recognize in real time the presence, position, and orientation of a pointing finger, to eliminate the need for data input devices such as "mice" or "joysticks." Therefore a user can direct an application program in the most natural way, without the distraction of manipulating a data input device.
173 citations
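The cascade of convolutions described in the abstract above can be sketched in a few lines. The kernels, thresholds, and toy image below are purely illustrative stand-ins; the patent's actual predefined functions (blob, ganglia, simple, and complex responses) are not specified here.

```python
import numpy as np

def convolve2d(image, kernel):
    """Naive 'valid' 2-D convolution (no padding), for illustration only."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel[::-1, ::-1])
    return out

# Illustrative kernels: a centre-surround "blob" kernel and a
# horizontal-edge kernel standing in for a ganglion-like response.
blob_kernel = np.array([[-1, -1, -1],
                        [-1,  8, -1],
                        [-1, -1, -1]], dtype=float)
edge_kernel = np.array([[-1, -1, -1],
                        [ 0,  0,  0],
                        [ 1,  1,  1]], dtype=float)

image = np.zeros((8, 8))
image[3:5, 3:5] = 1.0          # a small bright square as a toy scene

blob_resp = convolve2d(image, blob_kernel)
edge_resp = convolve2d(image, edge_kernel)

# An "object" is declared where both elementary responses co-occur
# above illustrative thresholds, echoing the combination step above.
detected = (np.abs(blob_resp) > 1) & (np.abs(edge_resp) > 1)
```

The point of the sketch is the structure of the method (parallel convolutions followed by a combination test), not the specific kernels, which would be tuned per feature in a real system.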
23 Jun 1999
TL;DR: The compass operator detects step edges without assuming that the regions on either side have constant color and finds the orientation of a diameter that maximizes the difference between two halves of a circular window.
Abstract: The compass operator detects step edges without assuming that the regions on either side have constant color. Using distributions of pixel colors rather than the mean, the operator finds the orientation of a diameter that maximizes the difference between two halves of a circular window. Junctions can also be detected by exploiting their lack of bilateral symmetry. This approach is superior to a multi-dimensional gradient method in situations that often result in false negatives, and it localizes edges better as scale increases.
173 citations
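The half-window comparison at the heart of the compass operator can be sketched as follows. This toy version works on grayscale values and scores the two halves with a plain L1 histogram distance; the published operator compares full colour distributions, so treat the window size, bin count, and orientation sampling here as assumptions made for the sketch.

```python
import numpy as np

def compass_orientation(window, n_orient=4, bins=8):
    """Toy compass-style operator on a grayscale window in [0, 1].

    For each candidate diameter orientation, split a circular window
    into two halves, histogram the pixel values of each half, and score
    the orientation by the L1 distance between the two histograms.
    """
    h, w = window.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    in_circle = (ys - cy) ** 2 + (xs - cx) ** 2 <= (min(h, w) / 2.0) ** 2

    best_score, best_theta = -1.0, None
    for k in range(n_orient):
        theta = np.pi * k / n_orient
        # Signed distance from the diameter at angle theta splits the circle.
        side = (ys - cy) * np.cos(theta) - (xs - cx) * np.sin(theta)
        half_a = window[in_circle & (side > 0)]
        half_b = window[in_circle & (side < 0)]
        ha, _ = np.histogram(half_a, bins=bins, range=(0, 1), density=True)
        hb, _ = np.histogram(half_b, bins=bins, range=(0, 1), density=True)
        score = np.abs(ha - hb).sum()
        if score > best_score:
            best_score, best_theta = score, theta
    return best_theta, best_score

# A vertical step edge: left half dark, right half bright.  The winning
# diameter should be the vertical one (theta = pi/2).
win = np.zeros((9, 9))
win[:, 5:] = 1.0
theta, score = compass_orientation(win)
```

Because each half is summarized by a distribution rather than a mean, a textured-but-consistent region on either side of the edge does not dilute the score, which is the property the abstract emphasizes.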
12 Oct 2008
TL;DR: This paper presents an imaging system that enables one to control the depth of field in new and powerful ways, and describes extended DOF, where a large depth range is captured with a very wide aperture but with nearly depth-independent defocus blur.
Abstract: The range of scene depths that appear focused in an image is known as the depth of field (DOF). Conventional cameras are limited by a fundamental trade-off between depth of field and signal-to-noise ratio (SNR). For a dark scene, the aperture of the lens must be opened up to maintain SNR, which causes the DOF to reduce. Also, today's cameras have DOFs that correspond to a single slab that is perpendicular to the optical axis. In this paper, we present an imaging system that enables one to control the DOF in new and powerful ways. Our approach is to vary the position and/or orientation of the image detector, during the integration time of a single photograph. Even when the detector motion is very small (tens of microns), a large range of scene depths (several meters) is captured both in and out of focus.
Our prototype camera uses a micro-actuator to translate the detector along the optical axis during image integration. Using this device, we demonstrate three applications of flexible DOF. First, we describe extended DOF, where a large depth range is captured with a very wide aperture (low noise) but with nearly depth-independent defocus blur. Applying deconvolution to a captured image gives an image with extended DOF and yet high SNR. Next, we show the capture of images with discontinuous DOFs. For instance, near and far objects can be imaged with sharpness while objects in between are severely blurred. Finally, we show that our camera can capture images with tilted DOFs (Scheimpflug imaging) without tilting the image detector. We believe flexible DOF imaging can open a new creative dimension in photography and lead to new capabilities in scientific imaging, vision, and graphics.
172 citations
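The extended-DOF recovery step (deconvolving a nearly depth-independent blur) can be illustrated with a generic 1-D Wiener deconvolution. The box kernel, noise level, and regularisation constant below are invented for the sketch and are not the paper's calibrated point spread function.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 256
sharp = np.zeros(n)
sharp[100:140] = 1.0                      # a toy "scene": one bright bar
kernel = np.zeros(n)
kernel[:9] = 1.0 / 9.0                    # assumed depth-independent box blur

# Simulate capture: circular convolution plus sensor noise.
blurred = np.real(np.fft.ifft(np.fft.fft(sharp) * np.fft.fft(kernel)))
noisy = blurred + 0.01 * rng.standard_normal(n)

# Wiener filter: H* / (|H|^2 + K), with K an assumed noise-to-signal ratio.
H = np.fft.fft(kernel)
K = 1e-3
wiener = np.conj(H) / (np.abs(H) ** 2 + K)
restored = np.real(np.fft.ifft(np.fft.fft(noisy) * wiener))

# The restored signal should be closer to the sharp one than the raw capture.
err_blur = np.mean((noisy - sharp) ** 2)
err_rest = np.mean((restored - sharp) ** 2)
```

The design point mirrors the abstract: because the blur is (nearly) the same at every depth, a single deconvolution kernel suffices for the whole scene, trading a small amount of restoration noise for a much larger in-focus range.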
TL;DR: An integrated real-time processing chain that exploits multiple occurrences of objects in images is described; it has been verified using image sections from two different flights and manually extracted ground-truth data from the inner city of Munich.
Abstract: Vehicle detection has been an important research field for years, as there are many valuable applications, ranging from support of traffic planners to real-time traffic management. Detection of cars in dense urban areas is of particular interest due to the high traffic volume and the limited space. In city areas many car-like objects (e.g., dormers) appear, which can lead to confusion. Additionally, the inaccuracy of road databases supporting the extraction process has to be handled properly. This paper describes an integrated real-time processing chain which exploits the multiple occurrences of objects in images. At least two subsequent images, exterior orientation data, a global DEM, and a road database are used as input. The segments of the road database are projected into the non-geocoded image using the corresponding height information from the global DEM. From amply masked road areas in both images a disparity map is calculated. This map is used to exclude elevated objects above a certain height (e.g., buildings and vegetation). Additionally, homogeneous areas are excluded by a fast region-growing algorithm. The remaining parts of one input image are classified based on Histogram of Oriented Gradients (HoG) features. The implemented approach has been verified using image sections from two different flights and manually extracted ground-truth data from the inner city of Munich. The evaluation shows a quality of up to 70 percent.
172 citations
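The HoG-based classification step at the end of the pipeline can be sketched with a minimal gradient-orientation histogram. This strips away the cell/block structure and block normalisation of the full descriptor, and the toy patch below is only a placeholder for the masked road-area candidates the paper classifies.

```python
import numpy as np

def hog_descriptor(patch, n_bins=9):
    """Bare-bones gradient-orientation histogram for a grayscale patch.

    A simplified stand-in for HoG features: gradient magnitudes are
    accumulated into unsigned-orientation bins and L2-normalised,
    without the cell/block layout of the full descriptor.
    """
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)       # unsigned orientation
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# Candidate image regions surviving the masking steps would each be
# described this way and scored by a classifier trained on car examples.
patch = np.zeros((16, 16))
patch[4:12, 4:12] = 1.0                           # toy car-like blob
feat = hog_descriptor(patch)
```

In the paper's chain this descriptor is computed only on the image parts left after the disparity and homogeneity filters, which is what keeps the approach real-time despite the dense urban scenes.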
TL;DR: Tests demonstrate that the uniformity of images of any orientation can be improved significantly with a correction matrix from just one orientation, and still further with two matrices, one axial and the other either coronal or sagittal.
171 citations
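The multiplicative correction the TL;DR refers to can be sketched generically: the image is divided by a smooth sensitivity map. The map below is synthetic and the numbers are invented; real correction matrices would come from calibration measurements at the stated orientations.

```python
import numpy as np

h, w = 64, 64
ys, xs = np.mgrid[0:h, 0:w]

# Synthetic smooth sensitivity map (bright in the centre, darker at edges),
# standing in for a measured correction matrix.
sensitivity = 0.5 + 0.5 * np.exp(-((ys - h / 2) ** 2 + (xs - w / 2) ** 2)
                                 / (2 * 30.0 ** 2))

truth = np.full((h, w), 100.0)        # an ideally uniform object
measured = truth * sensitivity        # shading introduced by the acquisition
corrected = measured / sensitivity    # apply the correction matrix

# Uniformity, measured as coefficient of variation, improves after correction.
cv_before = measured.std() / measured.mean()
cv_after = corrected.std() / corrected.mean()
```

The claim in the TL;DR is that a map measured in one orientation transfers usefully to images of other orientations, and that combining two orthogonal maps improves uniformity further; the sketch only shows the correction mechanics, not that transfer.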