Topic

Orientation (computer vision)

About: Orientation (computer vision) is a research topic. Over its lifetime, 17,196 publications have appeared within this topic, receiving 358,181 citations.


Papers
Posted Content
TL;DR: In this paper, a robust convolutional network is introduced for simultaneous vehicle detection, part localization, visibility characterization and 3D dimension estimation, based on a new coarse-to-fine object proposal that boosts vehicle detection.
Abstract: In this paper, we present a novel approach, called Deep MANTA (Deep Many-Tasks), for many-task vehicle analysis from a given image. A robust convolutional network is introduced for simultaneous vehicle detection, part localization, visibility characterization and 3D dimension estimation. Its architecture is based on a new coarse-to-fine object proposal that boosts vehicle detection. Moreover, the Deep MANTA network is able to localize vehicle parts even if these parts are not visible. At inference, the network's outputs are used by a real-time robust pose estimation algorithm for fine orientation estimation and 3D vehicle localization. We show in experiments that our method outperforms monocular state-of-the-art approaches on vehicle detection, orientation and 3D location tasks on the very challenging KITTI benchmark.
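The fine orientation step above recovers a rotation aligning predicted 3D part coordinates with a vehicle template. A minimal sketch of rotation recovery from point correspondences using the Kabsch algorithm (an illustration of pose-from-correspondences in general, not Deep MANTA's exact method; all names and values here are illustrative):

```python
import numpy as np

def estimate_rotation(template, observed):
    """Recover the rotation R best mapping template points onto observed
    points in the least-squares sense (Kabsch algorithm)."""
    # Center both point sets on their centroids
    A = template - template.mean(axis=0)
    B = observed - observed.mean(axis=0)
    # SVD of the 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(A.T @ B)
    # Correct a possible reflection so det(R) = +1
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

# Rotate a toy "part template" 30 degrees about z and recover the rotation
theta = np.deg2rad(30)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
pts = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.], [1., 1., 1.]])
R_est = estimate_rotation(pts, pts @ Rz.T)  # recovers Rz
```

In practice a monocular pipeline would solve a 2D-to-3D (PnP-style) problem rather than this 3D-to-3D alignment, but the orientation output has the same form: a rotation matrix per detected vehicle.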

192 citations

Journal ArticleDOI
TL;DR: A reflectance map makes the relationship between image intensity and surface orientation explicit; the paper shows that this relationship provides sufficient information to determine surface orientation at each image point.
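As an illustration of the intensity-orientation relationship (not this paper's specific formulation), a Lambertian reflectance map can be written in gradient-space coordinates (p, q), where the surface normal is proportional to (-p, -q, 1); a hedged numpy sketch:

```python
import numpy as np

def lambertian_reflectance(p, q, light=(0.0, 0.0, 1.0), albedo=1.0):
    """Predicted image intensity for a surface patch with gradient (p, q)
    under a distant light source: R(p, q) = albedo * max(0, n . s)."""
    n = np.array([-p, -q, 1.0])
    n /= np.linalg.norm(n)
    s = np.asarray(light, dtype=float)
    s /= np.linalg.norm(s)
    return albedo * max(0.0, float(n @ s))

# A patch facing the light directly is brightest...
flat = lambertian_reflectance(0.0, 0.0)    # → 1.0
# ...while a tilted patch is darker, so measured intensity constrains (p, q)
tilted = lambertian_reflectance(1.0, 0.0)  # → 1/sqrt(2)
```

Since many orientations can produce the same intensity, shape-from-shading methods combine the reflectance map with additional constraints (smoothness, boundary conditions, or extra light sources) to pin down (p, q) per pixel.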

190 citations

Journal ArticleDOI
TL;DR: In this article, a complete and detailed map of the ice-velocity field on mountain glaciers is obtained by cross-correlating SPOT5 optical images, without ground control points.
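Cross-correlation recovers the displacement between two image patches by locating the correlation peak; the displacement field over a glacier then gives ice velocity. A minimal integer-pixel sketch using FFT-based correlation (the paper works on SPOT5 satellite imagery; the synthetic image here is purely illustrative):

```python
import numpy as np

def correlation_shift(ref, moved):
    """Estimate the integer (row, col) shift between two same-size images
    by locating the peak of their circular cross-correlation."""
    corr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(moved)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peaks past the midpoint back to negative shifts
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
moved = np.roll(ref, shift=(3, -5), axis=(0, 1))  # displace by (3, -5)
shift = correlation_shift(ref, moved)             # → (3, -5)
```

Real pipelines refine the peak to sub-pixel precision (e.g. by local interpolation around the maximum) and correlate small windows across the image to build a dense displacement field.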

190 citations

Proceedings ArticleDOI
20 Jun 1995
TL;DR: This work develops techniques for computing a prototype trajectory of an ensemble of trajectories, for defining configuration states along the prototype, and for recognizing gestures from an unsegmented, continuous stream of sensor data.
Abstract: We define a gesture to be a sequence of states in a measurement or configuration space. For a given gesture, these states are used to capture both the repeatability and variability evidenced in a training set of example trajectories. The states are positioned along a prototype of the gesture, and shaped such that they are narrow in the directions in which the ensemble of examples is tightly constrained, and wide in directions in which a great deal of variability is observed. We develop techniques for computing a prototype trajectory of an ensemble of trajectories, for defining configuration states along the prototype, and for recognizing gestures from an unsegmented, continuous stream of sensor data. The approach is illustrated by application to a range of gesture-related sensory data: the two-dimensional movements of a mouse input device, the movement of the hand measured by a magnetic spatial position and orientation sensor, and, lastly, the changing eigenvector projection coefficients computed from an image sequence.
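The anisotropic states described above can be modeled as Gaussians along the prototype, with incoming measurements assigned to the nearest state by Mahalanobis distance, so that "narrow" and "wide" directions are weighted differently. A toy sketch (all names and values illustrative, not the paper's implementation):

```python
import numpy as np

def assign_state(sample, means, covs):
    """Assign a measurement to the prototype state with the smallest
    squared Mahalanobis distance (x - m)^T C^-1 (x - m)."""
    dists = [float((sample - m) @ np.linalg.inv(C) @ (sample - m))
             for m, C in zip(means, covs)]
    return int(np.argmin(dists))

# Two states along a 2-D prototype: loose in x, tight in y
means = [np.array([0.0, 0.0]), np.array([4.0, 0.0])]
covs = [np.diag([1.0, 0.04]), np.diag([1.0, 0.04])]
state = assign_state(np.array([3.2, 0.1]), means, covs)  # nearer state 1
```

Recognition over an unsegmented stream then amounts to tracking which state each new sample falls into and checking that the state sequence traverses the prototype in order.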

189 citations

Journal ArticleDOI
Dov Sagi, Bela Julesz
TL;DR: The role of focused attention in vision is examined, and experiments show that a mixture of a few horizontal and vertical line segments embedded in an aggregate of diagonal line segments can be rapidly counted by a parallel (preattentive) process, while the discrimination between horizontal and vertical orientation requires serial search by shifting focal attention to each line segment.
Abstract: The role of focused attention in vision is examined. Recent theories of attention hypothesize that serial search by focal attention is required for discrimination between different combinations of features. Experiments are reported which show that a mixture of a few (fewer than five) horizontal and vertical line segments embedded in an aggregate of diagonal line segments can be rapidly counted (also called ‘subitizing’) by a parallel (preattentive) process, while discrimination between horizontal and vertical orientation requires serial search by shifting focal attention to each line segment. Thus detecting and counting targets that differ in orientation can be done in parallel by a preattentive process, whereas knowing ‘what’ the orientation of a target is (horizontal or vertical, i.e. of a single conspicuous feature) requires a serial search by focal attention.

189 citations


Network Information
Related Topics (5)
Segmentation
63.2K papers, 1.2M citations
82% related
Pixel
136.5K papers, 1.5M citations
79% related
Image segmentation
79.6K papers, 1.8M citations
78% related
Image processing
229.9K papers, 3.5M citations
77% related
Feature (computer vision)
128.2K papers, 1.7M citations
76% related
Performance
Metrics
No. of papers in the topic in previous years
Year   Papers
2022   12
2021   535
2020   771
2019   830
2018   727
2017   691