scispace - formally typeset
Topic

Corner detection

About: Corner detection is a computer-vision research topic concerned with identifying point features (corners) in images, used in tasks such as matching, tracking and registration. Over the lifetime, 2,912 publications have been published within this topic, receiving 60,946 citations.


Papers
Proceedings Article
01 Jan 2003
TL;DR: A new corner and edge detector developed from the phase congruency model of feature detection is described, which results in reliable feature detection under varying illumination conditions with fixed thresholds.
Abstract: There are many applications such as stereo matching, motion tracking and image registration that require so called 'corners' to be detected across image sequences in a reliable manner. The Harris corner detector is widely used for this purpose. However, the response from the Harris operator, and other corner operators, varies considerably with image contrast. This makes the setting of thresholds that are appropriate for extended image sequences difficult, if not impossible. This paper describes a new corner and edge detector developed from the phase congruency model of feature detection. The new operator uses the principal moments of the phase congruency information to determine corner and edge information. The resulting corner and edge operator is highly localized and has responses that are invariant to image contrast. This results in reliable feature detection under varying illumination conditions with fixed thresholds. An additional feature of the operator is that the corner map is a strict subset of the edge map. This facilitates the cooperative use of corner and edge information.
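For context, the standard Harris response the paper compares against can be sketched in a few lines. A minimal numpy-only illustration (the 3x3 box smoothing and k = 0.04 are conventional choices, not taken from the paper; the paper's own phase-congruency operator is not shown here):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def harris_response(img, k=0.04, window=3):
    """Harris corner response R = det(M) - k * trace(M)^2 per pixel,
    where M is the structure tensor of the image gradients averaged
    over a small window (a box filter here; a Gaussian is more usual)."""
    img = np.asarray(img, dtype=float)
    # Central-difference gradients (border rows/columns left at zero).
    Ix = np.zeros_like(img)
    Iy = np.zeros_like(img)
    Ix[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0
    Iy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0

    def box(a):
        # Mean over a window x window neighbourhood, edge-padded.
        pad = window // 2
        ap = np.pad(a, pad, mode="edge")
        return sliding_window_view(ap, (window, window)).mean(axis=(2, 3))

    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace * trace
```

On a synthetic image containing a single bright square, the response is strongly positive at the square's corners, negative along its edges, and near zero in flat regions; note that R grows with the square's intensity, which is exactly the contrast dependence the paper sets out to remove.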

436 citations

Journal ArticleDOI
TL;DR: Different implementations of adaptive smoothing are presented, first on a serial machine, for which a multigrid algorithm is proposed to speed up the smoothing effect, then on a single instruction multiple data (SIMD) parallel machine such as the Connection Machine.
Abstract: A method to smooth a signal while preserving discontinuities is presented. This is achieved by repeatedly convolving the signal with a very small averaging mask weighted by a measure of the signal continuity at each point. Edge detection can be performed after a few iterations, and features extracted from the smoothed signal are correctly localized (hence, no tracking is needed). This last property allows the derivation of a scale-space representation of a signal using the adaptive smoothing parameter k as the scale dimension. The relation of this process to anisotropic diffusion is shown. A scheme to preserve higher-order discontinuities and results on range images are proposed. Different implementations of adaptive smoothing are presented, first on a serial machine, for which a multigrid algorithm is proposed to speed up the smoothing effect, then on a single instruction multiple data (SIMD) parallel machine such as the Connection Machine. Various applications of adaptive smoothing such as edge detection, range image feature extraction, corner detection, and stereo matching are discussed.
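The iteration the abstract describes — convolving with a very small averaging mask whose weights come from a continuity measure — can be sketched in one dimension. A hypothetical numpy illustration (the Gaussian weight on the gradient and the 3-tap mask follow the general adaptive-smoothing idea, not the authors' exact implementation):

```python
import numpy as np

def adaptive_smooth(signal, k=0.1, iterations=20):
    """1-D adaptive smoothing sketch: each sample is replaced by a
    weighted average of itself and its two neighbours, where a point's
    weight w = exp(-g^2 / (2 k^2)) shrinks with its gradient g, so
    large jumps (discontinuities) are preserved while noise is smoothed."""
    s = np.asarray(signal, dtype=float).copy()
    for _ in range(iterations):
        g = np.zeros_like(s)
        g[1:-1] = (s[2:] - s[:-2]) / 2.0           # gradient estimate
        w = np.exp(-(g * g) / (2.0 * k * k))       # continuity weights
        # Normalized weighted 3-tap average at every position.
        num = np.convolve(w * s, np.ones(3), mode="same")
        den = np.convolve(w, np.ones(3), mode="same")
        s = num / den
    return s
```

Run on a noisy step signal, the flat regions converge toward their means while the step itself stays sharp, which is why features extracted afterwards remain correctly localized, as the abstract notes.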

436 citations

Book
14 Feb 2020
TL;DR: A textbook covering the full 3D vision pipeline — image formation and camera calibration, low-level processing for image matching (including corner detection), scale-space methods, stereo matching algorithms, and 3D reconstruction — together with supporting mathematics, programming techniques, and case studies.
Abstract: Preface. Acknowledgements. Notation and Abbreviations.
Part I.
1 Introduction. 1.1 Stereo-pair Images and Depth Perception. 1.2 3D Vision Systems. 1.3 3D Vision Applications. 1.4 Contents Overview: The 3D Vision Task in Stages.
2 Brief History of Research on Vision. 2.1 Abstract. 2.2 Retrospective of Vision Research. 2.3 Closure.
Part II.
3 2D and 3D Vision Formation. 3.1 Abstract. 3.2 Human Visual System. 3.3 Geometry and Acquisition of a Single Image. 3.4 Stereoscopic Acquisition Systems. 3.5 Stereo Matching Constraints. 3.6 Calibration of Cameras. 3.7 Practical Examples. 3.8 Appendix: Derivation of the Pin-hole Camera Transformation. 3.9 Closure.
4 Low-level Image Processing for Image Matching. 4.1 Abstract. 4.2 Basic Concepts. 4.3 Discrete Averaging. 4.4 Discrete Differentiation. 4.5 Edge Detection. 4.6 Structural Tensor. 4.7 Corner Detection. 4.8 Practical Examples. 4.9 Closure.
5 Scale-space Vision. 5.1 Abstract. 5.2 Basic Concepts. 5.3 Constructing a Scale-space. 5.4 Multi-resolution Pyramids. 5.5 Practical Examples. 5.6 Closure.
6 Image Matching Algorithms. 6.1 Abstract. 6.2 Basic Concepts. 6.3 Match Measures. 6.4 Computational Aspects of Matching. 6.5 Diversity of Stereo Matching Methods. 6.6 Area-based Matching. 6.7 Area-based Elastic Matching. 6.8 Feature-based Image Matching. 6.9 Gradient-based Matching. 6.10 Method of Dynamic Programming. 6.11 Graph Cut Approach. 6.12 Optical Flow. 6.13 Practical Examples. 6.14 Closure.
7 Space Reconstruction and Multiview Integration. 7.1 Abstract. 7.2 General 3D Reconstruction. 7.3 Multiview Integration. 7.4 Closure.
8 Case Examples. 8.1 Abstract. 8.2 3D System for Vision-Impaired Persons. 8.3 Face and Body Modelling. 8.4 Clinical and Veterinary Applications. 8.5 Movie Restoration. 8.6 Closure.
Part III.
9 Basics of the Projective Geometry. 9.1 Abstract. 9.2 Homogeneous Coordinates. 9.3 Point, Line and the Rule of Duality. 9.4 Point and Line at Infinity. 9.5 Basics on Conics. 9.6 Group of Projective Transformations. 9.7 Projective Invariants. 9.8 Closure.
10 Basics of Tensor Calculus for Image Processing. 10.1 Abstract. 10.2 Basic Concepts. 10.3 Change of a Base. 10.4 Laws of Tensor Transformations. 10.5 The Metric Tensor. 10.6 Simple Tensor Algebra. 10.7 Closure.
11 Distortions and Noise in Images. 11.1 Abstract. 11.2 Types and Models of Noise. 11.3 Generating Noisy Test Images. 11.4 Generating Random Numbers with Normal Distributions. 11.5 Closure.
12 Image Warping Procedures. 12.1 Abstract. 12.2 Architecture of the Warping System. 12.3 Coordinate Transformation Module. 12.4 Interpolation of Pixel Values. 12.5 The Warp Engine. 12.6 Software Model of the Warping Schemes. 12.7 Warp Examples. 12.8 Finding the Linear Transformation from Point Correspondences. 12.9 Closure.
13 Programming Techniques for Image Processing and Computer Vision. 13.1 Abstract. 13.2 Useful Techniques and Methodology. 13.3 Design Patterns. 13.4 Object Lifetime and Memory Management. 13.5 Image Processing Platforms. 13.6 Closure.
14 Image Processing Library. References. Index.

365 citations

Proceedings ArticleDOI
21 May 2001
TL;DR: Results from an actual flight test show the vision-based state estimates are accurate to within 5 cm in each axis of translation, and 5 degrees in each axis of rotation, making vision a viable sensor to be placed in the control loop of a hierarchical flight management system.
Abstract: We present the design and implementation of a real-time computer vision system for a rotorcraft unmanned aerial vehicle to land onto a known landing target. This vision system consists of customized software and off-the-shelf hardware which perform image processing, segmentation, feature point extraction, camera pan/tilt control, and motion estimation. We introduce the design of a landing target which significantly simplifies the computer vision tasks such as corner detection and correspondence matching. Customized algorithms are developed to allow for realtime computation at a frame rate of 30 Hz. Such algorithms include certain linear and nonlinear optimization schemes for model-based camera pose estimation. We present results from an actual flight test which show the vision-based state estimates are accurate to within 5 cm in each axis of translation, and 5 degrees in each axis of rotation, making vision a viable sensor to be placed in the control loop of a hierarchical flight management system.
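A landing target of known geometry reduces correspondence matching to a small point-set problem: once the target's corners are detected, a planar homography links the target's known layout to its image, and pose can then be decomposed from it. As an illustration (a generic direct-linear-transform sketch in numpy, not the authors' optimization scheme):

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H mapping planar points src -> dst
    via the direct linear transform. Four or more detected corner
    correspondences (no three collinear) determine H up to scale.
    src, dst: (N, 2) arrays with N >= 4."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear equations in h.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(A, dtype=float)
    # h is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pts):
    """Apply homography H to (N, 2) points, with perspective division."""
    p = np.c_[pts, np.ones(len(pts))] @ H.T
    return p[:, :2] / p[:, 2:3]
```

In practice the recovered homography would be refined against noisy corner detections (e.g. by the nonlinear least-squares schemes the abstract mentions) before decomposing it into camera rotation and translation.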

358 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: This work claims that recognizing objects and predicting contours are two mutually related tasks, and shows that it can invert the commonly established pipeline: instead of detecting contours with low-level cues for a higher-level recognition task, it exploits object-related features as high-level cues for contour detection.
Abstract: Contour detection has been a fundamental component in many image segmentation and object detection systems. Most previous work utilizes low-level features such as texture or saliency to detect contours and then use them as cues for a higher-level task such as object detection. However, we claim that recognizing objects and predicting contours are two mutually related tasks. Contrary to traditional approaches, we show that we can invert the commonly established pipeline: instead of detecting contours with low-level cues for a higher-level recognition task, we exploit object-related features as high-level cues for contour detection.

354 citations


Network Information
Related Topics (5)
Image segmentation
79.6K papers, 1.8M citations
87% related
Feature extraction
111.8K papers, 2.1M citations
87% related
Feature (computer vision)
128.2K papers, 1.7M citations
86% related
Image processing
229.9K papers, 3.5M citations
84% related
Support vector machine
73.6K papers, 1.7M citations
83% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    8
2022    40
2021    78
2020    95
2019    143
2018    146