scispace - formally typeset
Topic

Orientation (computer vision)

About: Orientation (computer vision) is a research topic. Over the lifetime, 17196 publications have been published within this topic receiving 358181 citations.


Papers
Journal ArticleDOI
01 Jan 1993
TL;DR: A software procedure for fully automated detection of brain contours from single-echo 3-D MRI data, developed initially for scans with coronal orientation, and the potential of the technique for generalization to other problems is discussed.
Abstract: A software procedure is presented for fully automated detection of brain contours from single-echo 3-D MRI data, developed initially for scans with coronal orientation. The procedure detects structures in a head data volume in a hierarchical fashion. Automatic detection starts with a histogram-based thresholding step, preceded whenever necessary by an image intensity correction procedure. This step is followed by a morphological procedure which refines the binary threshold mask images. Anatomical knowledge, essential for the discrimination between desired and undesired structures, is implemented in this step through a sequence of conventional and novel morphological operations, using both 2-D and 3-D operations. A final step of the procedure performs overlap tests on candidate brain regions of interest in neighboring slice images to propagate coherent 2-D brain masks through the third dimension. Results are presented for test runs of the procedure on 23 coronal whole-brain data sets and one sagittal whole-brain data set. Finally, the potential of the technique for generalization to other problems is discussed, as well as its limitations.
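The abstract's pipeline — a histogram-based threshold followed by morphological refinement of the binary mask — can be sketched in a few lines. This is a minimal illustration, not the paper's actual procedure: the specific intensity correction, the anatomical-knowledge operations, and the slice-overlap propagation are not reproduced here, and the Otsu threshold and largest-component heuristic are assumptions standing in for steps the abstract leaves unspecified.

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(volume, nbins=256):
    """Histogram-based threshold (Otsu): pick the level that maximizes
    between-class variance.  A stand-in for the paper's (unspecified)
    histogram-based thresholding step."""
    hist, edges = np.histogram(volume.ravel(), bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)               # voxels at or below each candidate level
    w1 = w0[-1] - w0                   # voxels above it
    m0 = np.cumsum(hist * centers)
    mu0 = m0 / np.maximum(w0, 1)       # mean intensity of the lower class
    mu1 = (m0[-1] - m0) / np.maximum(w1, 1)
    between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(between)]

def brain_mask(volume):
    """Threshold, then morphologically refine the binary mask in 3-D and
    keep the largest connected component as the brain candidate."""
    mask = volume > otsu_threshold(volume)
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3, 3)))
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)
```

On a synthetic volume with one bright block, `brain_mask` keeps the block's interior and rejects the background — the same coarse-to-fine idea, minus the anatomical knowledge that does the real work in the paper.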

300 citations

Journal ArticleDOI
TL;DR: Cue theory, which states that the visual system computes the distances of objects in the environment based on information from the posture of the eyes and from the patterns of light projected onto the retinas by the environment, is presented.
Abstract: The sources of visual information that must be present to correctly interpret spatial relations in images, the relative importance of different visual information sources with regard to metric judgments of spatial relations in images, and the ways that the task in which the images are used affects the visual information's usefulness are discussed. Cue theory, which states that the visual system computes the distances of objects in the environment based on information from the posture of the eyes and from the patterns of light projected onto the retinas by the environment, is presented. Three experiments in which the influence of pictorial cues on perceived spatial relations in computer-generated images was assessed are discussed. Each experiment examined the accuracy with which subjects matched the position, orientation, and size of a test object with a standard by interactively translating, rotating, and scaling the test object.

300 citations

Journal ArticleDOI
TL;DR: This paper focuses on the presentation of APERO, the orientation software, which has a large library of parametric distortion models allowing precise modelling of every kind of pinhole camera the authors know of, including several fish-eye models.
Abstract: IGN has developed a set of photogrammetric tools, APERO and MICMAC, for computing 3D models from sets of images. This software, developed initially for IGN's internal needs, is now delivered as open-source code. This paper focuses on the presentation of APERO, the orientation software. Compared to some other free-software initiatives, it is probably more complex but also more complete; its target users are professionals (architects, archaeologists, geomorphologists) rather than the general public. APERO uses a computer vision approach to estimate an initial solution and photogrammetry for a rigorous compensation of the total error; it has a large library of parametric distortion models allowing precise modelling of every kind of pinhole camera the authors know of, including several fish-eye models; there are also several tools for geo-referencing the results. The results are illustrated on various applications, including the data-set of the 3D-Arch workshop.
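The abstract mentions a library of parametric distortion models but does not spell any of them out. As an illustration only — not APERO's actual model library — here is the widely used Brown radial-tangential model, one common parametric form for pinhole-camera distortion; the coefficient names (`k1..k3`, `p1`, `p2`) follow the usual convention and are an assumption, not something taken from the paper.

```python
import numpy as np

def brown_distort(xy, k1, k2, k3, p1, p2):
    """Apply Brown radial-tangential distortion to normalized image
    coordinates xy of shape (N, 2).  Illustrative only: APERO's own
    parametric models are not specified in the abstract."""
    x, y = xy[:, 0], xy[:, 1]
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return np.stack([x_d, y_d], axis=1)
```

With all coefficients zero the model is the identity (an ideal pinhole camera); fitting such coefficients per camera during bundle adjustment is what a "precise modelling of pinhole cameras" amounts to in practice.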

298 citations

Journal ArticleDOI
TL;DR: The results show that the best fitting model function for single-unit orientation tuning curves is a von Mises circular function with a variable degree of skewness, but other functions, such as a wrapped Gaussian, fit the data nearly as well.
Abstract: This paper compares the ability of some simple model functions to describe orientation tuning curves obtained in extracellular single-unit recordings from area 17 of the cat visual cortex. It also investigates the relationships between three methods currently used to estimate preferred orientation from tuning curve data: (a) least-squares curve fitting, (b) the vector sum method and (c) the Fourier transform method (Worgotter and Eysel 1987). The results show that the best fitting model function for single-unit orientation tuning curves is a von Mises circular function with a variable degree of skewness. However, other functions, such as a wrapped Gaussian, fit the data nearly as well. A cosine function provides a poor description of tuning curves in almost all instances. It is demonstrated that the vector sum and Fourier methods of determining preferred orientation are equivalent, and identical to calculating a least-squares fit of a cosine function to the data. Least-squares fitting of a better model function, such as a von Mises function or a wrapped Gaussian, is therefore likely to be a better method for estimating preferred orientation. Monte Carlo simulations confirmed this, although for broad orientation tuning curves sampled at 45° intervals, as is typical in optical recording experiments, all the methods gave similarly accurate estimates of preferred orientation. The sampling interval, the estimated error in the response measurements and the probable shape of the underlying response function all need to be taken into account in deciding on the best method of estimating preferred orientation from physiological measurements of orientation tuning data.
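The vector sum estimate and the von Mises tuning curve discussed in the abstract are both compact enough to sketch. Because orientation is 180°-periodic, angles are doubled before summing; the paper's point is that this vector sum is equivalent to a least-squares cosine fit. The skew term of the paper's best-fitting model is omitted here, and the parameter names are illustrative.

```python
import numpy as np

def preferred_orientation(theta_deg, responses):
    """Vector-sum estimate of preferred orientation: double the angles
    (orientation is 180-degree periodic), sum response-weighted unit
    vectors, and halve the resultant's angle."""
    theta = np.deg2rad(np.asarray(theta_deg))
    z = np.sum(np.asarray(responses) * np.exp(2j * theta))
    return np.rad2deg(np.angle(z) / 2) % 180

def von_mises_tuning(theta_deg, pref_deg, kappa, amp=1.0):
    """Von Mises orientation tuning curve (without the paper's variable
    skew term), peaking at pref_deg with concentration kappa."""
    d = np.deg2rad(np.asarray(theta_deg) - pref_deg)
    return amp * np.exp(kappa * (np.cos(2 * d) - 1))
```

Sampling a von Mises curve with a preferred orientation of 60° at 15° intervals and feeding the responses to `preferred_orientation` recovers the peak closely; with the coarse 45° sampling typical of optical recording, the estimate is biased by higher harmonics, which is exactly the regime the paper's Monte Carlo simulations examine.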

296 citations

Proceedings ArticleDOI
10 Dec 2015
TL;DR: This paper proposes to use Deep Convolutional Neural Network features from combined layers to perform orientation-robust aerial object detection, explores the inherent characteristics of DCNNs, and relates the extracted features to the principle of disentangling feature learning.
Abstract: Detecting objects in aerial images is challenged by variance of object colors, aspect ratios, cluttered backgrounds, and, in particular, undetermined orientations. In this paper, we propose to use Deep Convolutional Neural Network (DCNN) features from combined layers to perform orientation-robust aerial object detection. We explore the inherent characteristics of DCNNs as well as relate the extracted features to the principle of disentangling feature learning. An image segmentation based approach is used to localize ROIs of various aspect ratios, and ROIs are further classified into positives or negatives using an SVM classifier trained on DCNN features. With experiments on two datasets collected from Google Earth, we demonstrate that the proposed aerial object detection approach is simple but effective.
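The final classification stage — an SVM over per-ROI feature vectors — can be sketched with a minimal linear SVM trained by sub-gradient descent on the hinge loss. This is a stand-in under assumptions: the paper does not specify its SVM solver, the DCNN feature extraction and segmentation-based ROI localization are not reproduced, and the synthetic two-feature data below merely plays the role of DCNN features.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Minimal linear SVM (hinge loss + L2 regularization) trained by
    sub-gradient descent.  X: (n, d) feature matrix, y: labels in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                     # examples inside the margin
        if viol.any():
            grad_w = lam * w - (y[viol][:, None] * X[viol]).mean(axis=0)
            grad_b = -y[viol].mean()
        else:
            grad_w, grad_b = lam * w, 0.0
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def classify_rois(features, w, b):
    """Label ROI feature vectors as positive (+1, object) or negative (-1)."""
    return np.sign(features @ w + b)
```

On linearly separable features the trained hyperplane labels every ROI correctly; in the paper's pipeline the same step runs on high-dimensional features pooled from several DCNN layers.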

294 citations


Network Information
Related Topics (5)
Segmentation
63.2K papers, 1.2M citations
82% related
Pixel
136.5K papers, 1.5M citations
79% related
Image segmentation
79.6K papers, 1.8M citations
78% related
Image processing
229.9K papers, 3.5M citations
77% related
Feature (computer vision)
128.2K papers, 1.7M citations
76% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2022    12
2021    535
2020    771
2019    830
2018    727
2017    691