Topic

Orientation (computer vision)

About: Orientation (computer vision) is a research topic. Over its lifetime, 17,196 publications have been published on this topic, receiving 358,181 citations.


Papers
Journal ArticleDOI
TL;DR: A methodology was developed to delineate buildings from a point cloud and classify the gaps present, and two learning algorithms, SVM and Random Forests, were tested for mapping the damaged regions based on radiometric descriptors.
Abstract: Point clouds generated from airborne oblique images have become a suitable source for detailed building damage assessment after a disaster event, since they provide the essential geometric and radiometric features of both the roof and facades of the building. However, they often contain gaps that result either from physical damage or from a range of image artefacts or data acquisition conditions. A clear understanding of those causes, and accurate classification of gap type, are critical for 3D geometry-based damage assessment. In this study, a methodology was developed to delineate buildings from a point cloud and classify the gaps present. The building delineation process was carried out by identifying and merging the roof segments of single buildings from the pre-segmented 3D point cloud. This approach detected 96% of the buildings from a point cloud generated using airborne oblique images. The gap detection and classification methods were tested using two other data sets obtained from Unmanned Aerial Vehicle (UAV) images with a ground resolution of around 1–2 cm. The methods detected all significant gaps and correctly identified the gaps due to damage. The gaps due to damage were identified based on the surrounding damage pattern, applying Gabor wavelets and histogram-of-gradient-orientation features. Two learning algorithms, SVM and Random Forests, were tested for mapping the damaged regions based on radiometric descriptors. The learning model based on Gabor features with Random Forests performed best, identifying 95% of the damaged regions. The generalization performance of the supervised model, however, was less successful: quality measures decreased by around 15–30%.

154 citations
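The best-performing classifier above pairs Gabor texture features with a Random Forest. A minimal sketch of that kind of pipeline, assuming scikit-image and scikit-learn; the filter-bank settings, patch size, and training data here are illustrative stand-ins, not the paper's actual configuration:

```python
import numpy as np
from skimage.filters import gabor
from sklearn.ensemble import RandomForestClassifier

def gabor_features(patch, frequencies=(0.1, 0.2, 0.3), n_orientations=4):
    """Mean and variance of Gabor response magnitudes over a small
    filter bank; a common texture descriptor (the paper's exact bank
    is not specified here)."""
    feats = []
    for freq in frequencies:
        for k in range(n_orientations):
            real, imag = gabor(patch, frequency=freq,
                               theta=k * np.pi / n_orientations)
            mag = np.hypot(real, imag)
            feats.extend([mag.mean(), mag.var()])
    return np.array(feats)

# Stand-in training data: image patches labelled damaged (1) or intact (0).
rng = np.random.default_rng(0)
patches = rng.random((50, 32, 32))       # would be real radiometric patches
labels = rng.integers(0, 2, size=50)     # would be ground-truth damage labels

X = np.stack([gabor_features(p) for p in patches])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)

# Classify a new patch.
new_patch = rng.random((32, 32))
print("damaged" if clf.predict(gabor_features(new_patch)[None])[0] else "intact")
```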

Journal ArticleDOI
TL;DR: In this article, the authors analyse profile-view images with attention to the fine details of the contrasts in both the metal particles and the outer support layers, and show that the complex contrasts which often appear in the images can be interpreted in terms of the structural features of the catalysts and the recording conditions in the microscope.

153 citations

Posted Content
TL;DR: A Rotation-equivariant Detector (ReDet) is proposed, which explicitly encodes rotation equivariance and rotation invariance; it incorporates rotation-equivariant networks into the detector to extract rotation-equivariant features, which can accurately predict the orientation and lead to a huge reduction in model size.
Abstract: Recently, object detection in aerial images has gained much attention in computer vision. Unlike objects in natural images, aerial objects are often distributed with arbitrary orientation. Therefore, the detector requires more parameters to encode the orientation information, which are often highly redundant and inefficient. Moreover, as ordinary CNNs do not explicitly model the orientation variation, large amounts of rotation-augmented data are needed to train an accurate object detector. In this paper, we propose a Rotation-equivariant Detector (ReDet) to address these issues, which explicitly encodes rotation equivariance and rotation invariance. More precisely, we incorporate rotation-equivariant networks into the detector to extract rotation-equivariant features, which can accurately predict the orientation and lead to a huge reduction of model size. Based on the rotation-equivariant features, we also present Rotation-invariant RoI Align (RiRoI Align), which adaptively extracts rotation-invariant features from equivariant features according to the orientation of the RoI. Extensive experiments on several challenging aerial image datasets, DOTA-v1.0, DOTA-v1.5 and HRSC2016, show that our method achieves state-of-the-art performance on the task of aerial object detection. Compared with previous best results, our ReDet gains 1.2, 3.5 and 2.6 mAP on DOTA-v1.0, DOTA-v1.5 and HRSC2016 respectively, while reducing the number of parameters by 60% (313 Mb vs. 121 Mb). The code is available at this https URL.

153 citations
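The core of RiRoI Align is to resample each RoI's features in the RoI's own rotated frame, so the extracted descriptor does not change when the object rotates. A minimal sketch of that orientation-aligned cropping in plain PyTorch; this illustrates only the spatial alignment step, not the authors' full implementation, which also circularly shifts the orientation channels of the equivariant features:

```python
import math
import torch
import torch.nn.functional as F

def rotated_roi_align(feat, cx, cy, w, h, angle, out_size=7):
    """Resample a w-by-h RoI centred at (cx, cy) and rotated by `angle`
    (radians) from a feature map `feat` of shape (1, C, H, W), producing
    an out_size x out_size crop expressed in the RoI's canonical frame."""
    _, C, H, W = feat.shape
    cos, sin = math.cos(angle), math.sin(angle)
    # Affine matrix mapping the output grid into the rotated RoI, in the
    # normalised [-1, 1] coordinates that affine_grid expects.
    theta = torch.tensor([[
        [cos * w / W, -sin * h / W, 2 * cx / W - 1],
        [sin * w / H,  cos * h / H, 2 * cy / H - 1],
    ]], dtype=feat.dtype)
    grid = F.affine_grid(theta, (1, C, out_size, out_size), align_corners=False)
    return F.grid_sample(feat, grid, align_corners=False)

feat = torch.randn(1, 256, 64, 64)   # toy feature map
crop = rotated_roi_align(feat, cx=32.0, cy=20.0, w=24.0, h=12.0, angle=0.5)
print(crop.shape)                    # torch.Size([1, 256, 7, 7])
```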

Patent
04 Jan 2008
TL;DR: In this paper, a computer-implemented method is performed at a portable multifunction device with a rectangular touch screen display that includes a portrait view and a landscape view; the method includes detecting the device in a first orientation and, while the device is in the first orientation, displaying an application in a first mode on the touch screen display in a first view.
Abstract: In accordance with some embodiments, a computer-implemented method is performed at a portable multifunction device with a rectangular touch screen display that includes a portrait view and a landscape view. The method includes detecting the device in a first orientation, and while the device is in the first orientation, displaying an application in a first mode on the touch screen display in a first view. The method also includes detecting the device in a second orientation, and in response to detecting the device in the second orientation, displaying the application in a second mode on the touch screen display in a second view. The first mode of the application differs from the second mode of the application by more than a change in display orientation.

153 citations
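The claimed behaviour reduces to a small state machine: an orientation event selects not just the display rotation but a different application mode. A hypothetical sketch of that logic; the app, mode names, and event source below are illustrative, not taken from the patent:

```python
from enum import Enum

class Orientation(Enum):
    PORTRAIT = 1
    LANDSCAPE = 2

class MediaApp:
    """Hypothetical application: a track list in portrait and an album
    cover browser in landscape, i.e. modes that differ by more than a
    change in display orientation."""

    def on_orientation_change(self, orientation: Orientation) -> str:
        if orientation is Orientation.PORTRAIT:
            return self.show_track_list()     # first mode, first view
        return self.show_cover_browser()      # second mode, second view

    def show_track_list(self) -> str:
        return "rendering scrollable track list (portrait)"

    def show_cover_browser(self) -> str:
        return "rendering album cover browser (landscape)"

app = MediaApp()
# Events as they might arrive from an accelerometer-based orientation sensor.
for event in (Orientation.PORTRAIT, Orientation.LANDSCAPE):
    print(app.on_orientation_change(event))
```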

Patent
28 Oct 2002
TL;DR: In this paper, the tracking light is scanned into the real world environment, which includes at least one detector pair, and the horizontal location of the first detector is determined within a specific scan line by inferring the scan-line edge time.
Abstract: A virtual image is registered among a perceived real world background. Tracking light is scanned into the real world environment, which includes at least one detector pair. A first time and a second time at which the tracking light impinges on the first detector are detected, where the first time and second time occur within adjacent scan lines. The time at which a horizontal scan line edge (e.g., the beginning or end of a scan line) is encountered is derived as occurring halfway between the first time and the second time. The horizontal location of the first detector is then determined within a specific scan line using the inferred scan-line edge time. The vertical location of the detector is determined within a scan frame by measuring the time elapsed since the beginning of the frame. By determining the location independently of the temporal resolution of the augmented imaging system, the detector's location is identified to sub-pixel/sub-line precision. The augmented image is registered either to a 3D real world spatial coordinate system or to a time domain coordinate system, based upon the tracked position and orientation of the user.

153 citations
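The timing arithmetic in this claim is easy to make concrete. A minimal sketch, assuming a bidirectional raster scan with a known line period; the variable names and toy timestamps are illustrative, not from the patent:

```python
def detector_location(t1, t2, t_frame_start, line_period):
    """Recover a detector's sub-pixel location from beam arrival times.
    t1, t2: times the scanned beam hit the detector in two adjacent
    scan lines of a bidirectional raster.  The scan-line edge is crossed
    halfway between the two hits; the offset from that edge gives the
    horizontal position, and time elapsed since the frame start gives
    the vertical (scan line) position."""
    t_edge = 0.5 * (t1 + t2)                   # turnaround between the lines
    dt = t_edge - t1                           # first hit to line edge
    x_frac = 1.0 - dt / line_period            # horizontal fraction, 0..1
    line = (t1 - t_frame_start) / line_period  # vertical position, sub-line
    return x_frac, line

# Toy numbers: 60 us per scan line, frame starts at t = 0, detector hit
# at 100 us and again 40 us later on the return sweep.
x, y = detector_location(t1=100e-6, t2=140e-6, t_frame_start=0.0,
                         line_period=60e-6)
print(f"horizontal fraction {x:.3f}, scan line {y:.2f}")
```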


Network Information
Related Topics (5)
Segmentation: 63.2K papers, 1.2M citations (82% related)
Pixel: 136.5K papers, 1.5M citations (79% related)
Image segmentation: 79.6K papers, 1.8M citations (78% related)
Image processing: 229.9K papers, 3.5M citations (77% related)
Feature (computer vision): 128.2K papers, 1.7M citations (76% related)
Performance Metrics
No. of papers in the topic in previous years:

Year  Papers
2022      12
2021     535
2020     771
2019     830
2018     727
2017     691