Topic
Orientation (computer vision)
About: Orientation (computer vision) is a research topic. Over its lifetime, 17,196 publications on this topic have received 358,181 citations.
Papers published on a yearly basis
Papers
TL;DR: In this paper, a robust convolutional network is introduced for simultaneous vehicle detection, part localization, visibility characterization and 3D dimension estimation, based on a new coarse-to-fine object proposal that boosts vehicle detection.
Abstract: In this paper, we present a novel approach, called Deep MANTA (Deep Many-Tasks), for many-task vehicle analysis from a given image. A robust convolutional network is introduced for simultaneous vehicle detection, part localization, visibility characterization and 3D dimension estimation. Its architecture is based on a new coarse-to-fine object proposal that boosts vehicle detection. Moreover, the Deep MANTA network is able to localize vehicle parts even if these parts are not visible. At inference time, the network's outputs are used by a real-time, robust pose-estimation algorithm for fine orientation estimation and 3D vehicle localization. We show in experiments that our method outperforms monocular state-of-the-art approaches on vehicle detection, orientation and 3D location tasks on the very challenging KITTI benchmark.
192 citations
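The final pose-refinement step described above recovers orientation by aligning detected 2D parts to a vehicle template. A minimal sketch of that idea, assuming a hypothetical top-down part layout (e.g. wheel positions) and a least-squares rigid alignment (2D Kabsch) in place of the paper's full 2D/3D matching:

```python
import numpy as np

def estimate_yaw(template, observed):
    """Recover the planar rotation that best aligns a part template
    to observed part locations (least-squares, Kabsch in 2D)."""
    # Center both point sets on their centroids.
    t = template - template.mean(axis=0)
    o = observed - observed.mean(axis=0)
    # Cross-covariance and SVD give the optimal rotation R with R @ t_i ≈ o_i.
    u, _, vt = np.linalg.svd(t.T @ o)
    r = vt.T @ u.T
    # Guard against a reflection solution.
    if np.linalg.det(r) < 0:
        vt[-1] *= -1
        r = vt.T @ u.T
    return np.arctan2(r[1, 0], r[0, 0])

# Hypothetical part layout (four wheels, top-down view), rotated by 30 degrees.
template = np.array([[1.0, 2.0], [-1.0, 2.0], [1.0, -2.0], [-1.0, -2.0]])
theta = np.deg2rad(30)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
observed = template @ rot.T
print(np.rad2deg(estimate_yaw(template, observed)))  # ≈ 30
```

The function names and the part layout here are illustrative, not from the paper; Deep MANTA itself matches detected 2D parts against full 3D vehicle templates.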
TL;DR: A reflectance map makes the relationship between image intensity and surface orientation explicit and shows that this provides sufficient information to determine surface orientation at each image point.
190 citations
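The reflectance map mentioned above expresses image intensity as a function of surface gradient (p, q). A minimal sketch, assuming the standard Lambertian form of the map (the function name is illustrative):

```python
import numpy as np

def reflectance(p, q, ps, qs):
    """Lambertian reflectance map R(p, q): image intensity as a function of
    surface gradient (p, q), for a distant source with gradient (ps, qs)."""
    # Cosine of the angle between surface normal (-p, -q, 1)
    # and source direction (-ps, -qs, 1).
    num = 1.0 + p * ps + q * qs
    den = np.sqrt(1 + p**2 + q**2) * np.sqrt(1 + ps**2 + qs**2)
    return max(num / den, 0.0)  # clamp: self-shadowed facets reflect nothing

# A facet facing straight up, lit from directly overhead, has maximal intensity.
print(reflectance(0, 0, 0, 0))  # 1.0
```

A single intensity measurement constrains (p, q) to an iso-brightness contour of R, which is why recovering orientation pointwise needs additional assumptions such as surface smoothness.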
TL;DR: In this article, a complete and detailed map of the ice-velocity field on mountain glaciers is obtained by cross-correlating SPOT5 optical images, without ground control points.
190 citations
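Image cross-correlation, as used above to map glacier flow, recovers per-patch displacement by locating the correlation peak between two acquisitions. A minimal sketch with synthetic data, assuming integer-pixel shifts and zero-mean normalized cross-correlation (the real pipeline works on orthorectified SPOT5 scenes with sub-pixel refinement):

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def find_shift(ref, moved, patch=16, search=5):
    """Recover the integer displacement of `moved` relative to `ref` by
    scanning a small search window around a central patch."""
    cy, cx = ref.shape[0] // 2, ref.shape[1] // 2
    tpl = ref[cy:cy + patch, cx:cx + patch]
    best, best_dyx = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = moved[cy + dy:cy + dy + patch, cx + dx:cx + dx + patch]
            score = ncc(tpl, cand)
            if score > best:
                best, best_dyx = score, (dy, dx)
    return best_dyx

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, (3, -2), axis=(0, 1))  # simulate surface motion
print(find_shift(img, shifted))  # (3, -2)
```

Repeating this over a grid of patches yields the dense velocity field; dividing displacement by the time between acquisitions gives speed.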
20 Jun 1995
TL;DR: This work develops techniques for computing a prototype trajectory of an ensemble of trajectories, for defining configuration states along the prototype, and for recognizing gestures from an unsegmented, continuous stream of sensor data.
Abstract: We define a gesture to be a sequence of states in a measurement or configuration space. For a given gesture, these states are used to capture both the repeatability and variability evidenced in a training set of example trajectories. The states are positioned along a prototype of the gesture, and shaped such that they are narrow in the directions in which the ensemble of examples is tightly constrained, and wide in directions in which a great deal of variability is observed. We develop techniques for computing a prototype trajectory of an ensemble of trajectories, for defining configuration states along the prototype, and for recognizing gestures from an unsegmented, continuous stream of sensor data. The approach is illustrated by application to a range of gesture-related sensory data: the two-dimensional movements of a mouse input device, the movement of the hand measured by a magnetic spatial position and orientation sensor, and, lastly, the changing eigenvector projection coefficients computed from an image sequence.
189 citations
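The "narrow in constrained directions, wide in variable directions" states described above are naturally encoded as anisotropic Gaussians, with membership scored by Mahalanobis distance. A minimal sketch under that assumption (the state parameters here are invented for illustration):

```python
import numpy as np

def mahalanobis_sq(x, mean, cov):
    """Squared Mahalanobis distance of an observation from a state's Gaussian."""
    d = x - mean
    return float(d @ np.linalg.inv(cov) @ d)

# A hypothetical state on the prototype: loose along the trajectory (x),
# tight across it (y), encoded by an anisotropic covariance.
mean = np.array([0.0, 0.0])
cov = np.diag([4.0, 0.01])  # wide in x, narrow in y

along = np.array([1.0, 0.0])    # drifting along the prototype is cheap
across = np.array([0.0, 0.3])   # a smaller Euclidean step across is expensive
print(mahalanobis_sq(along, mean, cov))   # 0.25
print(mahalanobis_sq(across, mean, cov))  # 9.0
```

Recognition then amounts to checking that an incoming trajectory passes through the gesture's states in order, each observation staying within some Mahalanobis threshold of the current state.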
••
TL;DR: The role of focused attention in vision is examined. Experiments show that a mixture of a few horizontal and vertical line segments embedded in an aggregate of diagonal line segments can be rapidly counted by a parallel (preattentive) process, while discrimination between horizontal and vertical orientation requires serial search by shifting focal attention to each line segment.
Abstract: The role of focused attention in vision is examined. Recent theories of attention hypothesize that serial search by focal attention is required for discrimination between different combinations of features. Experiments are reported which show that the mixture of a few (less than five) horizontal and vertical line segments embedded in an aggregate of diagonal line segments can be rapidly counted (also called ‘subitizing’) by a parallel (preattentive) process, while the discrimination between horizontal and vertical orientation requires serial search by shifting focal attention to each line segment. Thus detecting and counting targets that differ in orientation can be done in parallel by a preattentive process, whereas knowing ‘what’ the orientation of a target is (horizontal or vertical, i.e. of a single conspicuous feature) requires a serial search by focal attention.
189 citations