Topic

Orientation (computer vision)

About: Orientation (computer vision) is a research topic. Over the lifetime, 17,196 publications have been published within this topic, receiving 358,181 citations.


Papers
Patent
14 Nov 2000
TL;DR: In this paper, a user can automatically calibrate a projector-camera system to recover the mapping between a given point in the source image and its corresponding point in the camera image, and vice-versa.
Abstract: The present invention enables a user to automatically calibrate a projector-camera system (14/10) to recover the mapping between a given point in the source (pre-projection) image and its corresponding point in the camera image, and vice-versa. One or more calibration patterns are projected onto a flat surface (18) with possibly unknown location and orientation by a projector (14) with possibly unknown location, orientation and focal length. Images of these patterns are captured by a camera (10) mounted at a possibly unknown location and orientation, and with possibly unknown focal length. Parameters for mapping between the source image and the camera image are computed (22). The present invention can become an essential component of a projector-camera system, enabling applications such as automatic keystone correction and vision-based control of computer systems.

133 citations
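The core of this calibration is a plane-to-plane (homography) mapping between the source image and the camera image. Below is a minimal sketch of that mapping using OpenCV's standard homography estimation; the point coordinates and the use of OpenCV are illustrative assumptions, not details taken from the patent.

```python
# Hedged sketch of the plane-to-plane mapping the patent recovers, using
# OpenCV's homography estimation (an assumption -- the patent does not
# specify OpenCV). Corners of a projected calibration pattern are matched
# against their detections in the camera image.
import numpy as np
import cv2

# Hypothetical correspondences: pattern corners in the source (pre-projection)
# image and the same corners as detected in the camera image.
src_pts = np.array([[0, 0], [640, 0], [640, 480], [0, 480]], dtype=np.float32)
cam_pts = np.array([[102, 87], [598, 110], [575, 430], [90, 401]], dtype=np.float32)

# Homography from source image to camera image (flat projection surface).
H, _ = cv2.findHomography(src_pts, cam_pts)

# Map a source-image point into the camera image ...
p = cv2.perspectiveTransform(np.array([[[320.0, 240.0]]], dtype=np.float32), H)
# ... and back again with the inverse mapping ("vice-versa" in the abstract).
p_back = cv2.perspectiveTransform(p, np.linalg.inv(H))
print(p.ravel(), p_back.ravel())
```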

Journal ArticleDOI
TL;DR: Results are promising: not only is the absolute image orientation significantly enhanced when the RTK option is used, but block deformation is also reduced; however, remaining offsets originating from time synchronization or camera event triggering should be considered during flight planning.
Abstract: Unmanned aerial vehicles (UAVs) are increasingly used for topographic mapping. Despite the flexibility gained when using these devices, one has to invest more effort in ground control measurements than in conventional photogrammetric airborne data acquisition, because positioning devices on UAVs are generally less accurate. Additionally, the limited quality of the end-user cameras employed calls for self-calibration, which can cause problems as well. A good distribution of ground control points (GCPs) is needed not only to solve for the absolute orientation of the image block in the desired coordinate frame, but also to mitigate block deformation effects, which result mainly from remaining systematic errors in the camera calibration. In this paper, recent developments in the UAV hardware market are taken up: some providers equip fixed-wing UAVs with RTK-GNSS-enabled dual-frequency receivers and set up a processing pipeline which allows them to promise absolute block orientation in an accuracy range similar to that of traditional indirect sensor orientation. Besides analysing the accuracy actually obtainable when one of these systems is used, we examine the effect that different flight directions and altitudes (cross flight) have on the bundle adjustment. For this purpose, two test areas were prepared and flown with a fixed-wing UAV. Results are promising: not only is the absolute image orientation significantly enhanced when the RTK option is used, but block deformation is also reduced. However, remaining offsets originating from time synchronization or camera event triggering should be considered during flight planning. In flat terrain, a cross-flight pattern helps to enhance results because of better and more reliable self-calibration.

132 citations
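The "remaining offsets originating from time synchronization" mentioned above have a simple geometric model: a constant lag dt between the GNSS time tags and the actual camera exposures shifts every projection centre by v·dt along the flight path. A hedged sketch of estimating such a lag by least squares follows; all variable names and numbers are illustrative assumptions, not the authors' pipeline.

```python
# Hedged sketch: a constant time-synchronization offset dt between RTK-GNSS
# time tags and the actual exposures shifts each projection centre along the
# flight path by v * dt. Given reference centres (e.g. from an indirectly
# oriented block), dt can be estimated by least squares.
import numpy as np

rng = np.random.default_rng(1)
gnss_pos = rng.random((50, 3)) * 100            # tagged projection centres [m]
velocity = np.tile([15.0, 0.0, 0.0], (50, 1))   # platform velocity [m/s]
true_pos = gnss_pos + velocity * 0.12           # centres with a 0.12 s lag

# Least-squares estimate of dt from residuals r = true - gnss = v * dt:
r = (true_pos - gnss_pos).ravel()
v = velocity.ravel()
dt = (v @ r) / (v @ v)
corrected = gnss_pos + velocity * dt
print(f"estimated dt = {dt:.3f} s")              # -> 0.120 s
```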

Patent
12 Feb 2010
TL;DR: In this paper, a method is presented for determining the pose of a camera with respect to at least one object of a real environment, for use in an authoring/augmented-reality application. The method includes generating a first image by the camera capturing a real object of the real environment, and generating first orientation data from at least one orientation sensor associated with the camera, or from an algorithm which analyses the first image to find and determine features indicative of the camera's orientation.
Abstract: A method for determining the pose of a camera with respect to at least one object of a real environment, for use in an authoring/augmented reality application, that includes: generating a first image by the camera capturing a real object of the real environment; generating first orientation data from at least one orientation sensor associated with the camera, or from an algorithm which analyses the first image to find and determine features indicative of an orientation of the camera; allocating a distance of the camera to the real object; generating distance data indicative of the allocated distance; and determining the pose of the camera with respect to a coordinate system related to the real object of the real environment using the distance data and the first orientation data. The method may be performed with reduced processing requirements and/or at higher processing speed in mobile devices, such as mobile phones, having a display, camera and orientation sensor.

132 citations
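The final step of the method combines the sensed orientation with the allocated distance into a full six-degree-of-freedom pose. Below is a minimal sketch under the simplifying assumption that the object origin lies on the camera's optical axis; this convention and all names are chosen here for illustration and are not the patent's notation.

```python
# Hedged sketch of the fusion step the patent describes: combine a rotation R
# from an orientation sensor with an allocated camera-to-object distance d
# into a pose (R, t). Assumes the object origin sits on the optical axis.
import numpy as np

def pose_from_orientation_and_distance(R, d):
    """Pose mapping object coordinates to camera coordinates,
    x_cam = R @ x_obj + t, with the object origin d metres ahead."""
    t = np.array([0.0, 0.0, d])    # object origin on the optical axis
    return R, t

R = np.eye(3)                      # orientation sensor reading (identity here)
R_pose, t = pose_from_orientation_and_distance(R, 2.5)
C = -R_pose.T @ t                  # camera centre in object coordinates
print("camera centre:", C)         # -> [ 0.  0. -2.5]
```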

Journal ArticleDOI
TL;DR: A method based on three neural networks of the local linear map type which enables a computer to identify the head orientation of a user by learning from examples is presented.
Abstract: Humans easily recognize where another person is looking and often use this information for interspeaker coordination. We present a method based on three neural networks of the local linear map type which enables a computer to identify the head orientation of a user by learning from examples. One network is used for color segmentation, a second for localization of the face, and the third for the final recognition of the head orientation. The system works at a frame rate of one image per second on a common workstation. We analyze the accuracy achieved at different processing steps and discuss the usability of the approach in the context of a visual human-machine interface.

132 citations
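A local linear map network stores, per unit, an input prototype, an output vector, and a local Jacobian; prediction selects the best-matching unit and applies its local linear model. The sketch below shows one such unit bank; the unit count, dimensions, and values are toy assumptions, not the paper's configuration.

```python
# Hedged sketch of a single local linear map (LLM) unit bank, the network
# type used in the paper's three processing stages. Each unit k stores an
# input prototype w_k, an output vector o_k, and a local Jacobian A_k;
# prediction picks the best-matching unit and applies its local linear model.
import numpy as np

class LocalLinearMap:
    def __init__(self, w, o, A):
        self.w, self.o, self.A = w, o, A   # prototypes, outputs, Jacobians

    def predict(self, x):
        k = np.argmin(np.linalg.norm(self.w - x, axis=1))  # best-matching unit
        return self.o[k] + self.A[k] @ (x - self.w[k])

# Toy example: map 2-D features to a 1-D head-orientation angle.
rng = np.random.default_rng(0)
llm = LocalLinearMap(w=rng.random((8, 2)),
                     o=rng.random((8, 1)),
                     A=rng.random((8, 1, 2)))
print(llm.predict(np.array([0.4, 0.7])))
```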

Patent
09 Jan 1985
TL;DR: In this paper, the three-dimensional universe is represented by a tree structure having a plurality of nodes, one for each volume in the 3D universe which is at least partially occupied by objects in the scene.
Abstract: An image generator for generating two-dimensional images of three-dimensional solid objects at high speed defines a scene to be displayed within a cuboid three-dimensional universe which has been hierarchically subdivided into a plurality of discrete volumes of uniform size and similar orientation. The three-dimensional universe is represented by a tree structure having a plurality of nodes, one for each volume in the three-dimensional universe which is at least partially occupied by objects in the scene. A user may select a point of view for viewing the object. Nodes in the tree structure representing the three-dimensional universe are visited in a sequence determined by the point of view selected by the user, so that nodes corresponding to volumes which are unobstructed by other volumes are visited first. Each visited node which is enclosed by the object is projected onto a subdivided view plane organized into a hierarchy of a plurality of discrete areas. Areas of the view plane which are completely enclosed by the projection are painted onto a display screen. Areas which intersect but are not enclosed by the projection are further subdivided to locate those areas which are enclosed. A representation of the hierarchically subdivided view plane, arranged in a tree structure, is stored in a store. Each time an area of the view plane is painted, an entry in the representation of the view plane corresponding to that area is marked. The corresponding entry is checked before an area is painted, to ensure that no area is painted more than once, so that hidden surfaces are not displayed. To create sectional views, a user may define a region of the three-dimensional universe, and volumes outside of that region are not projected. Due to the hierarchical structure of the three-dimensional universe and the two-dimensional view plane, the symmetry of the subdivisions, and the resulting simplicity of the calculations needed to create an image, real-time image generation is possible, with the calculations performed by hard-wired digital logic elements to increase speed.

132 citations
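The key to the hidden-surface behaviour described above is visiting octree children front-to-back with respect to the selected point of view. A common bit-mask formulation of that ordering is sketched below; it illustrates the traversal idea and is not necessarily the patent's hard-wired implementation.

```python
# Hedged sketch of front-to-back octree traversal: children of each node are
# visited in an order derived from the view direction, so nearer octants are
# reached before the octants they can occlude.
def front_to_back_order(view_dir):
    """Child indices 0..7 encode octants as bits (x, y, z). The child
    nearest the viewer has each bit chosen toward the view origin; XORing
    0..7 with that index yields a valid front-to-back visiting order."""
    nearest = sum((1 << i) for i, d in enumerate(view_dir) if d < 0)
    return [nearest ^ k for k in range(8)]

def traverse(node, view_dir, visit):
    if node is None:
        return
    visit(node)
    if node.get("children"):
        for idx in front_to_back_order(view_dir):
            traverse(node["children"][idx], view_dir, visit)

# Toy two-level octree, viewed along (-x, +y, +z):
root = {"label": "root",
        "children": [{"label": f"child{k}", "children": None}
                     for k in range(8)]}
traverse(root, (-1.0, 1.0, 1.0), lambda n: print(n["label"]))
```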


Network Information
Related Topics (5)
Topic	Papers	Citations	Related
Segmentation	63.2K	1.2M	82%
Pixel	136.5K	1.5M	79%
Image segmentation	79.6K	1.8M	78%
Image processing	229.9K	3.5M	77%
Feature (computer vision)	128.2K	1.7M	76%
Performance
Metrics
No. of papers in the topic in previous years
Year	Papers
2022	12
2021	535
2020	771
2019	830
2018	727
2017	691