Author

Eam Khwang Teoh

Bio: Eam Khwang Teoh is an academic researcher from Nanyang Technological University. The author has contributed to research on topics including image segmentation and active shape models, has an h-index of 23, and has co-authored 121 publications receiving 3,110 citations. Previous affiliations of Eam Khwang Teoh include Johns Hopkins University and the Institute for Infocomm Research, Singapore.


Papers
Journal ArticleDOI
TL;DR: A robust algorithm, called CHEVP, is presented for providing a good initial position for the B-Snake lane model, and a minimum-mean-square-error (MMSE) method is proposed to determine the control points of the B-Snake model from the overall image forces on the two sides of the lane.

812 citations
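
The CHEVP and B-Snake details are not reproduced on this page; as a loose, hypothetical illustration of the minimum-mean-square-error idea in the entry above, the sketch below fits a quadratic lane-boundary curve to made-up edge points with ordinary least squares. The quadratic model, the sample points, and the printed diagnostics are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

# Hypothetical edge points (row, column) from one side of a lane; in practice
# these would come from an edge detector and an initialisation step.
rows = np.array([400, 380, 360, 340, 320, 300, 280], dtype=float)
cols = np.array([210, 222, 236, 252, 270, 290, 312], dtype=float)

# Model the boundary as col = a*row**2 + b*row + c and choose (a, b, c) to
# minimise the mean squared error over the observed edge points.
A = np.column_stack([rows**2, rows, np.ones_like(rows)])
coeffs, _, _, _ = np.linalg.lstsq(A, cols, rcond=None)

mse = np.mean((A @ coeffs - cols) ** 2)
print("fitted coefficients:", coeffs, "training MSE:", round(mse, 3))

# Evaluate the fitted boundary on a dense set of image rows for drawing.
dense_rows = np.linspace(rows.min(), rows.max(), 50)
dense_cols = np.polyval(coeffs, dense_rows)
print("boundary column at the bottom-most row:", round(dense_cols[-1], 1))
```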

Journal ArticleDOI
TL;DR: A Catmull–Rom spline-based lane model that describes the perspective effect of parallel lines is proposed for generic lane boundaries; its coarse-to-fine matching offers an acceptable solution at an affordable computational cost and thus speeds up lane detection.

228 citations
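
As a small illustration of the spline machinery named in the entry above, the sketch below evaluates a uniform Catmull–Rom curve through hypothetical lane-boundary control points. The control points, segment chaining, and sample count are assumptions; the paper's lane model and its coarse-to-fine matching are not reproduced.

```python
import numpy as np

def catmull_rom_segment(p0, p1, p2, p3, n=20):
    """Evaluate one uniform Catmull-Rom segment between control points p1 and p2."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t**2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t**3)

# Hypothetical lane-boundary control points in image coordinates (x, y).
pts = np.array([[100, 480], [140, 400], [190, 320], [250, 240], [320, 160]], float)

# Chain the interior segments to trace a smooth curve through the control points.
curve = np.vstack([catmull_rom_segment(*pts[i:i + 4]) for i in range(len(pts) - 3)])
print(curve.shape)  # (2 segments * 20 samples, 2) -> (40, 2)
```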

Journal ArticleDOI
TL;DR: A new approach to corner detection, the gradient-direction corner detector, is presented; it is developed from the popular Plessey corner detector and is based on a measure of the gradient magnitude of the image's gradient-direction field together with constraints for suppressing false corner responses.

199 citations

Journal Article
TL;DR: A new approach to corner detection, the Gradient-Direction corner detector, is presented; it is developed from the popular Plessey corner detector and is based on a measure of the gradient magnitude of the image's gradient-direction field together with constraints for suppressing false corner responses.
Abstract: In this paper, an analysis of gray-level corner detection is carried out, and the performance of cornerness measures for several corner detection algorithms is discussed. The paper presents a new approach to corner detection, called the Gradient-Direction corner detector, which is developed from the popular Plessey corner detector. The Gradient-Direction corner detector is based on a measure of the gradient magnitude of the image's gradient-direction field and on constraints for suppressing false corner responses.

198 citations
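
Both entries above refer to a detector derived from the Plessey (Harris) corner detector. For background only, here is a minimal NumPy/SciPy sketch of that baseline cornerness measure; it is not the proposed gradient-direction variant, and the smoothing scale, the constant k, and the threshold are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def plessey_response(img, sigma=1.5, k=0.04):
    """Baseline Plessey/Harris cornerness measure (the detector the paper's
    gradient-direction variant starts from, not the variant itself)."""
    img = img.astype(float)
    ix = sobel(img, axis=1)  # horizontal intensity gradient
    iy = sobel(img, axis=0)  # vertical intensity gradient

    # Smoothed products of gradients form the local structure matrix.
    ixx = gaussian_filter(ix * ix, sigma)
    iyy = gaussian_filter(iy * iy, sigma)
    ixy = gaussian_filter(ix * iy, sigma)

    det = ixx * iyy - ixy**2
    trace = ixx + iyy
    return det - k * trace**2  # large positive values indicate corners

# Toy example: a bright square on a dark background has four strong corners.
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0
response = plessey_response(img)
corners = np.argwhere(response > 0.01 * response.max())
print(len(corners), "candidate corner pixels")
```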

Journal ArticleDOI
TL;DR: The proposed Generalized 2D Principal Component Analysis (G2DPCA) overcomes the limitations of the recently proposed 2DPCA and shows excellent performance in face image representation and recognition.

130 citations
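
For context on the entry above, the sketch below implements plain 2DPCA, the method that G2DPCA generalises (the additional left-side projection of G2DPCA is not shown). The toy data, image size, and the dimension d are assumptions for illustration.

```python
import numpy as np

def two_d_pca(images, d):
    """Plain 2DPCA: project each h x w image onto the top-d eigenvectors of the
    image covariance matrix (G2DPCA adds a second, left-side projection)."""
    A = np.asarray(images, dtype=float)          # shape (n, h, w)
    centered = A - A.mean(axis=0)
    # Image covariance matrix: average of (A_i - mean)^T (A_i - mean), size w x w.
    G = np.einsum('nhw,nhv->wv', centered, centered) / len(A)
    eigvals, eigvecs = np.linalg.eigh(G)          # eigenvalues in ascending order
    X = eigvecs[:, ::-1][:, :d]                   # top-d eigenvectors, w x d
    return A @ X, X                               # each image becomes h x d

# Toy data standing in for face images: 10 images of size 16 x 12.
rng = np.random.default_rng(0)
features, X = two_d_pca(rng.standard_normal((10, 16, 12)), d=4)
print(features.shape)  # (10, 16, 4)
```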


Cited by
Journal ArticleDOI
TL;DR: A review of recent as well as classic image registration methods, intended to provide a comprehensive reference source for researchers involved in image registration, regardless of particular application areas.

6,842 citations

Book ChapterDOI
07 May 2006
TL;DR: It is shown that machine learning can be used to derive a feature detector which can fully process live PAL video using less than 7% of the available processing time.
Abstract: Where feature points are used in real-time frame-rate applications, a high-speed feature detector is necessary. Feature detectors such as SIFT (DoG), Harris and SUSAN are good methods which yield high-quality features; however, they are too computationally intensive for use in real-time applications of any complexity. Here we show that machine learning can be used to derive a feature detector which can fully process live PAL video using less than 7% of the available processing time. By comparison, neither the Harris detector (120%) nor the detection stage of SIFT (300%) can operate at full frame rate. Clearly a high-speed detector is of limited use if the features produced are unsuitable for downstream processing. In particular, the same scene viewed from two different positions should yield features which correspond to the same real-world 3D locations [1]. Hence the second contribution of this paper is a comparison of corner detectors based on this criterion applied to 3D scenes. This comparison supports a number of claims made elsewhere concerning existing corner detectors. Further, contrary to our initial expectations, we show that despite being principally constructed for speed, our detector significantly outperforms existing feature detectors according to this criterion.

3,828 citations
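
A minimal usage sketch related to the entry above, calling OpenCV's FAST implementation. It shows only the detection step, not the machine-learning stage the paper uses to derive the detector, and the threshold and synthetic image are arbitrary choices.

```python
import cv2
import numpy as np

# Synthetic textured image so the detector has something to respond to.
img = np.random.default_rng(0).integers(0, 256, (240, 320)).astype(np.uint8)

# Detection only; the learning stage that derives the detector is in the paper.
fast = cv2.FastFeatureDetector_create(threshold=25, nonmaxSuppression=True)
keypoints = fast.detect(img, None)
print(len(keypoints), "FAST keypoints detected")
```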

Journal ArticleDOI
TL;DR: This paper describes a new approach to low-level image processing, in particular edge and corner detection and structure-preserving noise reduction; the resulting methods are accurate, noise-resistant and fast.
Abstract: This paper describes a new approach to low-level image processing; in particular, edge and corner detection and structure-preserving noise reduction. Non-linear filtering is used to define which parts of the image are closely related to each individual pixel; each pixel has associated with it a local image region which is of similar brightness to that pixel. The new feature detectors are based on the minimization of this local image region, and the noise reduction method uses this region as the smoothing neighbourhood. The resulting methods are accurate, noise-resistant and fast. Details of the new feature detectors and of the new noise reduction method are described, along with test results.

3,669 citations
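
The entry above summarises the SUSAN approach. Below is a simplified, unoptimised sketch of a SUSAN-style corner response, computing the USAN area with a smooth brightness-similarity function and comparing it to a geometric threshold; the mask radius, brightness threshold t, and toy image are illustrative assumptions rather than the paper's exact parameters.

```python
import numpy as np

def susan_corner_response(img, radius=3, t=25.0):
    """Simplified SUSAN-style cornerness: measure the USAN area (mask pixels
    with brightness similar to the nucleus) and respond where it falls below
    the geometric threshold."""
    img = img.astype(float)
    h, w = img.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    offsets = np.argwhere(ys**2 + xs**2 <= radius**2) - radius  # circular mask
    g = len(offsets) / 2.0                       # geometric threshold for corners

    response = np.zeros_like(img)
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            patch = img[y + offsets[:, 0], x + offsets[:, 1]]
            usan = np.sum(np.exp(-((patch - img[y, x]) / t) ** 6))
            if usan < g:
                response[y, x] = g - usan
    return response

# Toy example: the corners of a bright square yield small USAN areas.
img = np.zeros((48, 48))
img[16:32, 16:32] = 255.0
resp = susan_corner_response(img)
print("strongest response at", np.unravel_index(resp.argmax(), resp.shape))
```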

Journal Article
TL;DR: The requirement that the same scene viewed from two different positions should yield features corresponding to the same real-world 3D locations is used as an evaluation criterion, and a comparison of corner detectors based on this criterion applied to 3D scenes is made.
Abstract: Where feature points are used in real-time frame-rate applications, a high-speed feature detector is necessary. Feature detectors such as SIFT (DoG), Harris and SUSAN are good methods which yield high-quality features; however, they are too computationally intensive for use in real-time applications of any complexity. Here we show that machine learning can be used to derive a feature detector which can fully process live PAL video using less than 7% of the available processing time. By comparison, neither the Harris detector (120%) nor the detection stage of SIFT (300%) can operate at full frame rate. Clearly a high-speed detector is of limited use if the features produced are unsuitable for downstream processing. In particular, the same scene viewed from two different positions should yield features which correspond to the same real-world 3D locations [1]. Hence the second contribution of this paper is a comparison of corner detectors based on this criterion applied to 3D scenes. This comparison supports a number of claims made elsewhere concerning existing corner detectors. Further, contrary to our initial expectations, we show that despite being principally constructed for speed, our detector significantly outperforms existing feature detectors according to this criterion.

3,413 citations

Journal ArticleDOI
TL;DR: It is shown how the proposed methodology applies to the problems of blob detection, junction detection, edge detection, ridge detection and local frequency estimation and how it can be used as a major mechanism in algorithms for automatic scale selection, which adapt the local scales of processing to the local image structure.
Abstract: The fact that objects in the world appear in different ways depending on the scale of observation has important implications if one aims at describing them. It shows that the notion of scale is of utmost importance when processing unknown measurement data by automatic methods. In their seminal works, Witkin (1983) and Koenderink (1984) proposed to approach this problem by representing image structures at different scales in a so-called scale-space representation. Traditional scale-space theory building on this work, however, does not address the problem of how to select local appropriate scales for further analysis. This article proposes a systematic methodology for dealing with this problem. A framework is presented for generating hypotheses about interesting scale levels in image data, based on a general principle stating that local extrema over scales of different combinations of γ-normalized derivatives are likely candidates to correspond to interesting structures. Specifically, it is shown how this idea can be used as a major mechanism in algorithms for automatic scale selection, which adapt the local scales of processing to the local image structure. Support for the proposed approach is given in terms of a general theoretical investigation of the behaviour of the scale selection method under rescalings of the input pattern and by integration with different types of early visual modules, including experiments on real-world and synthetic data. Support is also given by a detailed analysis of how different types of feature detectors perform when integrated with a scale selection mechanism and then applied to characteristic model patterns. Specifically, it is described in detail how the proposed methodology applies to the problems of blob detection, junction detection, edge detection, ridge detection and local frequency estimation. In many computer vision applications, the poor performance of the low-level vision modules constitutes a major bottleneck. It is argued that the inclusion of mechanisms for automatic scale selection is essential if we are to construct vision systems to automatically analyse complex unknown environments.

2,942 citations
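
As a small illustration of the scale-selection principle in the entry above, the sketch below computes scale-normalised Laplacian-of-Gaussian responses over a range of scales and takes the extremum over position and scale for a synthetic blob. The scale range, the γ = 1 normalisation shown here, and the test image are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_scale_selection(img, sigmas):
    """Scale-normalised Laplacian-of-Gaussian blob detection: multiplying the
    LoG by sigma**2 (gamma = 1) makes responses comparable across scales, and
    the extremum over (scale, position) selects both the blob and its size."""
    img = img.astype(float)
    stack = np.array([(s**2) * np.abs(gaussian_laplace(img, s)) for s in sigmas])
    k, y, x = np.unravel_index(stack.argmax(), stack.shape)
    return sigmas[k], (y, x)

# Toy image: a single Gaussian blob with standard deviation 8 pixels.
yy, xx = np.mgrid[0:128, 0:128]
img = np.exp(-((yy - 64)**2 + (xx - 64)**2) / (2 * 8.0**2))

sigma_hat, centre = log_scale_selection(img, sigmas=np.arange(2.0, 16.0, 1.0))
print("selected scale:", sigma_hat, "at", centre)  # expected near sigma = 8, (64, 64)
```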