scispace - formally typeset
Topic

Orientation (computer vision)

About: Orientation (computer vision) is a research topic. Over the lifetime, 17196 publications have been published within this topic receiving 358181 citations.


Papers
Journal ArticleDOI
TL;DR: A new scheme that applies a filter mask (or convolution filter) to orientation data, yielding time-domain filters that are computationally efficient and satisfy coordinate invariance, time invariance, and symmetry.
Abstract: Capturing live motion has gained considerable attention in computer animation as an important motion generation technique. Canned motion data consist of both position and orientation components. Although a great number of signal processing methods are available for manipulating position data, the majority of these methods cannot be generalized easily to orientation data due to the inherent nonlinearity of the orientation space. In this paper, we present a new scheme that enables us to apply a filter mask (or a convolution filter) to orientation data. The key idea is to transform the orientation data into their analogues in a vector space, to apply a filter mask on them, and then to transform the results back to the orientation space. This scheme gives time-domain filters for orientation data that are computationally efficient and satisfy such important properties as coordinate invariance, time invariance and symmetry. Experimental results indicate that our scheme is useful for various purposes, including smoothing and sharpening.
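The transform–filter–transform idea described above can be sketched with unit quaternions: lift each neighbour into the tangent space at the window's centre via the quaternion logarithm, average with the mask, and map back via the exponential. This is a minimal sketch under my own simplifications (function names and the choice of tangent point are illustrative, not the paper's exact construction):

```python
import numpy as np

def quat_mul(a, b):
    # Hamilton product of quaternions given as (w, x, y, z) arrays.
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

def quat_log(q):
    # Map a unit quaternion to a rotation vector in R^3 (axis * angle).
    w, v = q[0], q[1:]
    n = np.linalg.norm(v)
    if n < 1e-12:
        return np.zeros(3)
    return 2.0 * np.arctan2(n, w) * v / n

def quat_exp(r):
    # Inverse of quat_log: rotation vector back to a unit quaternion.
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    return np.concatenate([[np.cos(theta / 2.0)],
                           np.sin(theta / 2.0) * r / theta])

def filter_orientations(quats, mask):
    # Apply a filter mask (assumed to sum to 1) to a quaternion
    # sequence: lift each neighbour into the tangent space at the
    # window's centre sample, average there, and map back.
    half = len(mask) // 2
    out = []
    for i, qc in enumerate(quats):
        qc_inv = np.array([qc[0], -qc[1], -qc[2], -qc[3]])  # conjugate
        acc = np.zeros(3)
        for k, w in enumerate(mask):
            j = min(max(i + k - half, 0), len(quats) - 1)  # clamp at ends
            acc += w * quat_log(quat_mul(qc_inv, quats[j]))
        out.append(quat_mul(qc, quat_exp(acc)))
    return out
```

A box mask such as [0.25, 0.5, 0.25] smooths the sequence; a mask like [-0.5, 2.0, -0.5] sharpens it in the same frame. Both preserve constant sequences because the mask weights sum to one.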

107 citations

Book ChapterDOI
14 Sep 2014
TL;DR: This paper proposes a framework for image quality transfer, using random forest regression to relate patches in a low-quality data set to voxel values in a high-quality data set, and demonstrates efficacy on a standard data set.
Abstract: This paper introduces image quality transfer. The aim is to learn the fine structural detail of medical images from high-quality data sets acquired with long acquisition times or from bespoke devices, and to transfer that information to enhance lower-quality data sets from standard acquisitions. We propose a framework for solving this problem using random forest regression to relate patches in the low-quality data set to voxel values in the high-quality data set. Two examples in diffusion MRI demonstrate the idea. In both cases, we learn from the Human Connectome Project (HCP) data set, which uses an hour of acquisition time per subject, just for diffusion imaging, using custom-built scanner hardware and rapid imaging techniques. The first example, super-resolution of diffusion tensor images (DTIs), enhances spatial resolution of standard data sets with information from the high-resolution HCP data. The second, parameter mapping, constructs neurite orientation dispersion and density imaging (NODDI) parameter maps, which usually require specialist data sets with two b-values, from standard single-shell high angular resolution diffusion imaging (HARDI) data sets with b = 1000 s mm⁻². Experiments quantify the improvement against alternative image reconstructions in comparison to ground truth from the HCP data set in both examples and demonstrate efficacy on a standard data set.
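The patch-to-voxel regression at the core of the framework can be sketched as follows. To keep the sketch dependency-light, ordinary least squares stands in for the paper's random forest, the images are 2D, and all function names are illustrative:

```python
import numpy as np

def extract_patches(img, radius=1):
    # Flatten a (2r+1) x (2r+1) patch of the low-quality image
    # around every interior pixel into one feature vector.
    r = radius
    H, W = img.shape
    return np.array([img[y - r:y + r + 1, x - r:x + r + 1].ravel()
                     for y in range(r, H - r)
                     for x in range(r, W - r)])

def train_patch_regressor(low, high, radius=1):
    # Fit a map from low-quality patches to the corresponding
    # high-quality centre voxel values. The paper uses random forest
    # regression here; plain least squares keeps the sketch small.
    X = extract_patches(low, radius)
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # bias column
    y = high[radius:-radius, radius:-radius].ravel()
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def enhance(low, w, radius=1):
    # Predict a high-quality centre value for every interior pixel.
    X = extract_patches(low, radius)
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    H, W = low.shape
    return (Xb @ w).reshape(H - 2 * radius, W - 2 * radius)
```

In the paper the same pattern is trained on HCP data and applied to standard acquisitions; swapping the least-squares fit for a random forest only changes the regressor, not the patch-in, voxel-out structure.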

107 citations

Journal ArticleDOI
TL;DR: This work uses ridges, characterized by ridgeness and ridge orientation, as image features instead of the commonly used edges, which the authors claim addresses detection reliability better, and adapts RANSAC, a generic robust estimation method, to fit a parametric model of a pair of lane lines to those features.
Abstract: Detection of lane markings based on a camera sensor can be a low-cost solution to lane departure and curve-over-speed warnings. A number of methods and implementations have been reported in the literature. However, reliable detection is still an issue because of cast shadows, worn and occluded markings, and variable ambient lighting conditions, for example. We focus on increasing detection reliability in two ways. First, we employed an image feature other than the commonly used edges: ridges, which we claim addresses this problem better. Second, we adapted RANSAC, a generic robust estimation method, to fit a parametric model of a pair of lane lines to the image features, based on both ridgeness and ridge orientation. In addition, the model was fitted for the left and right lane lines simultaneously to enforce a consistent result. Four measures of interest for driver assistance applications were directly computed from the fitted parametric model at each frame: lane width, lane curvature, and vehicle yaw angle and lateral offset with regard to the lane medial axis. We qualitatively assessed our method in video sequences captured on several road types and under very different lighting conditions. We also quantitatively assessed it on synthetic but realistic video sequences for which road geometry and vehicle trajectory ground truth are known.
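The sample-score-keep loop of RANSAC can be illustrated with a single straight line standing in for the paper's parametric pair-of-lanes model (the paper additionally weights candidates by ridgeness and ridge orientation and fits both lane lines jointly; names and thresholds below are illustrative):

```python
import random

def fit_line(p, q):
    # Line through two points as (a, b, c) with a*x + b*y + c = 0,
    # normalized so that |a*x + b*y + c| is point-to-line distance.
    (x1, y1), (x2, y2) = p, q
    a, b = y2 - y1, x1 - x2
    n = (a * a + b * b) ** 0.5
    if n == 0.0:
        return None  # degenerate sample: identical points
    return a / n, b / n, -(a * x1 + b * y1) / n

def ransac_line(points, iters=200, thresh=0.1, seed=0):
    # Classic RANSAC: draw minimal samples, count inliers within the
    # distance threshold, and keep the model with the largest support.
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        model = fit_line(*rng.sample(points, 2))
        if model is None:
            continue
        a, b, c = model
        inliers = [p for p in points
                   if abs(a * p[0] + b * p[1] + c) <= thresh]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    return best_model, best_inliers
```

The robustness comes from the consensus step: gross outliers (shadows, worn markings) rarely lie near a line sampled from the true lane points, so they never accumulate support.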

106 citations

Patent
24 Jan 2013
TL;DR: In this paper, the authors present methods, devices, systems, circuits and associated computer-executable code for detecting and predicting the position and trajectory of surgical tools using a radiographic imaging system.
Abstract: The present invention includes methods, devices, systems, circuits and associated computer executable code for detecting and predicting the position and trajectory of surgical tools. According to some embodiments of the present invention, images of a surgical tool within or in proximity to a patient may be captured by a radiographic imaging system. The images may be processed by associated processing circuitry to determine and predict position, orientation and trajectory of the tool based on 3D models of the tool, geometric calculations and mathematical models describing the movement and deformation of surgical tools within a patient body.

106 citations

Journal ArticleDOI
TL;DR: In this article, the authors present a system that takes as input an astronomical image, and returns as output the pointing, scale, and orientation of that image (the astrometric calibration or WCS information).
Abstract: We have built a reliable and robust system that takes as input an astronomical image, and returns as output the pointing, scale, and orientation of that image (the astrometric calibration or WCS information). The system requires no first guess, and works with the information in the image pixels alone; that is, the problem is a generalization of the "lost in space" problem in which nothing--not even the image scale--is known. After robust source detection is performed in the input image, asterisms (sets of four or five stars) are geometrically hashed and compared to pre-indexed hashes to generate hypotheses about the astrometric calibration. A hypothesis is only accepted as true if it passes a Bayesian decision theory test against a background hypothesis. With indices built from the USNO-B Catalog and designed for uniformity of coverage and redundancy, the success rate is 99.9% for contemporary near-ultraviolet and visual imaging survey data, with no false positives. The failure rate is consistent with the incompleteness of the USNO-B Catalog; augmentation with indices built from the 2MASS Catalog brings the completeness to 100% with no false positives. We are using this system to generate consistent and standards-compliant meta-data for digital and digitized imaging from plate repositories, automated observatories, individual scientific investigators, and hobbyists. This is the first step in a program of making it possible to trust calibration meta-data for astronomical data of arbitrary provenance.

106 citations


Network Information
Related Topics (5)
Segmentation: 63.2K papers, 1.2M citations (82% related)
Pixel: 136.5K papers, 1.5M citations (79% related)
Image segmentation: 79.6K papers, 1.8M citations (78% related)
Image processing: 229.9K papers, 3.5M citations (77% related)
Feature (computer vision): 128.2K papers, 1.7M citations (76% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2022    12
2021    535
2020    771
2019    830
2018    727
2017    691