
Showing papers on "Object-class detection published in 1992"


Proceedings ArticleDOI
30 Aug 1992
TL;DR: A robust facial feature detector based on a generalized symmetry interest operator that was tested on a large face database with a success rate of over 95%.
Abstract: Locating facial features is crucial for various face recognition schemes. The authors suggest a robust facial feature detector based on a generalized symmetry interest operator. No special tuning is required if the face occupies 15-60% of the image. The operator was tested on a large face database with a success rate of over 95%.
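The abstract does not spell out the operator itself, so the snippet below is only a minimal sketch of a symmetry-style interest map: pairs of strong, roughly opposing gradients vote for their midpoint, which tends to highlight features such as eyes and mouth corners. The pairing directions, radius, and smoothing width are illustrative choices, not the paper's parameters.

```python
import numpy as np
from scipy import ndimage

def symmetry_map(image, radius=8, sigma=4.0):
    """Toy symmetry-style interest map (illustrative only, not the paper's
    operator): each pixel is paired with its horizontal and vertical
    neighbours at a fixed radius, and pairs of strong, roughly anti-parallel
    gradients vote for their midpoint."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    theta = np.arctan2(gy, gx)

    score = np.zeros_like(mag)
    # axis 1 pairs lie on a horizontal line (alpha = 0), axis 0 on a vertical one
    for axis, alpha in ((1, 0.0), (0, np.pi / 2)):
        m1, m2 = np.roll(mag, radius, axis), np.roll(mag, -radius, axis)
        t1, t2 = np.roll(theta, radius, axis), np.roll(theta, -radius, axis)
        # phase term is largest when the two gradients face each other
        phase = (1 - np.cos(t1 + t2 - 2 * alpha)) * (1 - np.cos(t1 - t2))
        score += m1 * m2 * phase
    # smooth so nearby votes reinforce one another
    return ndimage.gaussian_filter(score, sigma)
```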

144 citations


Proceedings ArticleDOI
30 Aug 1992
TL;DR: The proposed scheme is characterized by four aspects: facial feature detection using color image segmentation; target image extraction using a sub-space classification method; robust feature extraction based on K-L expansion of an invariant feature space; and face classifier training based on 3D CG modeling of the human face.
Abstract: Proposes a scheme that offers accurate and robust identification of human faces. The scheme is characterized by four aspects: facial feature detection using color image segmentation; target image extraction using a sub-space classification method; robust feature extraction based on K-L expansion of an invariant feature space; and face classifier training based on 3D CG modeling of the human face. The scheme's flexibility under a wide range of image acquisition conditions has been confirmed through the assessment of an experimental face identification system.
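The K-L expansion mentioned above is, in essence, principal components analysis of image vectors. The sketch below fits a generic eigen-subspace and scores a candidate by reconstruction error; the component count and the use of flattened grey-level patches are assumptions, not the paper's invariant feature space or its 3D CG-trained classifier.

```python
import numpy as np

def fit_kl_subspace(train_vectors, n_components=20):
    """Fit a Karhunen-Loeve (PCA) subspace to flattened training patches.
    `train_vectors` is an (n_samples, n_pixels) array; `n_components` is an
    illustrative choice."""
    mean = train_vectors.mean(axis=0)
    centered = train_vectors - mean
    # SVD yields the principal axes without forming the covariance matrix
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def subspace_distance(vector, mean, basis):
    """Reconstruction error of `vector` with respect to the subspace;
    small values indicate the vector is well explained by the face model."""
    coeffs = basis @ (vector - mean)
    reconstruction = mean + basis.T @ coeffs
    return float(np.linalg.norm(vector - reconstruction))
```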

30 citations


Journal ArticleDOI
TL;DR: TIC is shown to be better suited than other reported clutter measures because of its ability to accurately quantify perceptual effects and to serve as a robust indicator of the object detection and false alarm rates as a function of image clutter.
Abstract: Automatic object detection is one of the basic tasks performed by an image understanding system. Object detection approaches need to perform accurately and robustly over a wide range of scenes. Although a number of detection approaches have been developed and reported, a need remains for standards by which to judge the relative merits of such approaches. Image characteristics, object characteristics, and detection methodology are recognized as the main variables affecting object detection. A basis for their quantitative analysis and evaluation is developed. This research keeps object detection methodology constant while varying image and object characteristics to develop a set of quantitative standards. This requires an ability to derive a quantitative measure for the "clutter" observed in an image. A performance index for object detection approaches, as a function of scene nature, is valuable. Current approaches to image clutter or quality characterization are studied and a new measure based on image texture content and object characteristics is proposed. An extensive set of experimental studies is utilized to evaluate this texture-based image clutter (TIC) measure. TIC is shown to be better suited than other reported clutter measures because of its ability to accurately quantify perceptual effects and to serve as a robust indicator of the object detection and false alarm rates as a function of image clutter.
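The abstract does not reproduce the TIC formula, so the snippet below implements only a generic block-based clutter measure of the kind TIC is compared against: the RMS of local grey-level standard deviations over blocks scaled to the target size. The factor of two and the RMS pooling are assumptions for illustration.

```python
import numpy as np

def block_clutter(image, target_size):
    """Generic block-based clutter measure (not the paper's TIC formula):
    RMS of the grey-level standard deviations of non-overlapping blocks
    roughly twice the expected target size."""
    block = 2 * target_size
    h, w = image.shape
    stds = [image[r:r + block, c:c + block].std()
            for r in range(0, h - block + 1, block)
            for c in range(0, w - block + 1, block)]
    return float(np.sqrt(np.mean(np.square(stds))))
```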

24 citations


Proceedings ArticleDOI
30 Nov 1992
TL;DR: A system for detecting human-like moving objects in time-varying images, consisting of three subprocesses (changing region detection, moving object tracking, and movement interpretation) that together ensure reliable detection of trajectories in difficult cases such as movement across complicated backgrounds.
Abstract: Reports a system for detecting human-like moving objects in time-varying images. The authors show how it is possible to detect the image trajectories of people moving in ordinary indoor scenes. The system consists of three subprocesses: changing region detection, moving object tracking, and movement interpretation. The processes are executed in parallel so that each one can recover from the others' errors. This ensures the reliable detection of the trajectories in difficult cases such as movement across complicated backgrounds. The authors have built a trial detection system using a parallel image processing system. The details of the trial system and experimental results of walking person detection are described.
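Of the three subprocesses, only the changing-region step lends itself to a compact sketch. The following is a hedged illustration using simple frame differencing and connected-component filtering; the thresholds and minimum region area are placeholder values, and the paper's parallel tracking and interpretation stages are not shown.

```python
import numpy as np
from scipy import ndimage

def changing_regions(prev_frame, frame, diff_thresh=25, min_area=200):
    """Changing-region detection only (the paper adds tracking and movement
    interpretation on top): threshold the inter-frame difference and keep
    connected regions large enough to plausibly be a person."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    labels, n = ndimage.label(diff > diff_thresh)
    boxes = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        if ys.size >= min_area:
            boxes.append((xs.min(), ys.min(), xs.max(), ys.max()))  # x0, y0, x1, y1
    return boxes
```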

20 citations


Proceedings ArticleDOI
30 Aug 1992
TL;DR: The authors present an approach to feature detection, a fundamental issue in many intermediate-level vision problems such as stereo, motion correspondence, and image registration, based on a scale-interaction model of the end-inhibition property exhibited by certain cells in the visual cortex of mammals.
Abstract: The authors present an approach to feature detection, which is a fundamental issue in many intermediate-level vision problems such as stereo, motion correspondence, and image registration. The approach is based on a scale-interaction model of the end-inhibition property exhibited by certain cells in the visual cortex of mammals. These feature detector cells are responsive to short lines, line endings, corners, and other such sharp changes in curvature. In addition, this method also provides a compact representation of feature information which is useful in shape recognition problems. Applications to face recognition and motion correspondence are illustrated.
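A hedged sketch of the scale-interaction idea, assuming Gabor filters as the underlying receptive-field model: subtracting a coarse-scale response from a fine-scale one cancels long edges while leaving line endings and corners. The frequencies, orientation, and gain factor below are illustrative values, not the paper's tuning.

```python
import numpy as np
from skimage.filters import gabor

def scale_interaction_features(image, theta=0.0, f_fine=0.25, f_coarse=0.0625, gain=1.0):
    """Toy scale-interaction feature map: subtract a coarse-scale Gabor
    magnitude response from a fine-scale one so that long edges cancel while
    line endings, corners, and other sharp curvature changes survive.
    Frequencies, orientation, and gain are illustrative values."""
    fine = np.hypot(*gabor(image, frequency=f_fine, theta=theta))
    coarse = np.hypot(*gabor(image, frequency=f_coarse, theta=theta))
    return np.abs(fine - gain * coarse)
```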

18 citations


Book
03 Jan 1992
TL;DR: This work superposes the images from two successive times for an observer translating relative to a planar road and shows the displacement field, that is, the transformation of the image points between the successive time points.
Abstract: When a visual observer moves forward, the projections of the objects in the scene will move over the visual image. If an object extends vertically from the ground, its image will move differently from the immediate background. This difference is called motion parallax [1, 2]. Much work in automatic visual navigation and obstacle detection has been concerned with computing motion fields or more or less complete 3-D information about the scene [3–5]. These approaches, in general, assume a very unconstrained environment and motion. If the environment is constrained, for example, motion occurs on a planar road, then this information can be exploited to give more direct solutions to, for example, obstacle detection [6]. Figure 6.1 shows, superposed, the images from two successive times for an observer translating relative to a planar road. The arrows show the displacement field, that is, the transformation of the image points between the successive time points.
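A minimal sketch of how the planar-road constraint can be exploited, assuming a calibrated camera and known motion between the two times: image points on the road must move according to the plane-induced homography, so a large residual between predicted and observed position flags a candidate obstacle. The function and parameter names are hypothetical.

```python
import numpy as np

def planar_obstacle_residuals(pts1, pts2, K, R, t, n, d):
    """Residuals of matched points against the road-plane homography
    (hypothetical helper, assuming calibrated camera and known motion).
    pts1, pts2 : (N, 2) matched image points at the two times
    K          : 3x3 intrinsics;  R, t : rotation / translation between views
    n, d       : road-plane normal and distance in the first camera frame
    Points on the road should have near-zero residual; obstacles stand out."""
    H = K @ (R - np.outer(t, n) / d) @ np.linalg.inv(K)
    ones = np.ones((pts1.shape[0], 1))
    proj = (H @ np.hstack([pts1, ones]).T).T
    predicted = proj[:, :2] / proj[:, 2:3]
    return np.linalg.norm(pts2 - predicted, axis=1)
```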

11 citations


Proceedings ArticleDOI
15 Jun 1992
TL;DR: A unified decision-theoretic framework for automating the establishment of feature point correspondences in a temporally dense sequence of images is discussed, which provides robust feature correspondences for the estimation of three-dimensional structure and motion over an extended number of image frames.
Abstract: A unified decision-theoretic framework for automating the establishment of feature point correspondences in a temporally dense sequence of images is discussed. The approach extends a recent sequential detection algorithm to guide the detection and tracking of object feature points through an image sequence. The resulting extended feature tracks provide robust feature correspondences for the estimation of three-dimensional structure and motion over an extended number of image frames.
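The abstract leaves the decision rule unspecified, so the sketch below only illustrates the flavor of sequential acceptance applied to tracking: an SSD template tracker accumulates evidence frame by frame and commits to or abandons a track once the score crosses a threshold. All thresholds and the scoring rule are assumptions, not the paper's test.

```python
import numpy as np

def track_feature(frames, start_xy, patch=7, search=5,
                  accept=5.0, reject=-3.0, mse_ok=20.0):
    """Hedged sketch of sequential track acceptance (not the paper's test):
    follow a template by SSD search from frame to frame and accumulate a
    score; commit to the track when the score reaches `accept`, abandon it
    below `reject`, otherwise keep gathering frames."""
    x, y = start_xy
    r = patch // 2
    template = frames[0][y - r:y + r + 1, x - r:x + r + 1].astype(float)
    score, path = 0.0, [(x, y)]
    for frame in frames[1:]:
        best, best_xy = np.inf, (x, y)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                top, left = y + dy - r, x + dx - r
                if top < 0 or left < 0:
                    continue                    # window falls off the image
                cand = frame[top:top + patch, left:left + patch]
                if cand.shape != template.shape:
                    continue
                ssd = float(((cand - template) ** 2).sum())
                if ssd < best:
                    best, best_xy = ssd, (x + dx, y + dy)
        x, y = best_xy
        path.append((x, y))
        # +1 for a good match, -1 for a poor one (mse_ok is an arbitrary per-pixel bound)
        score += 1.0 if best < mse_ok * template.size else -1.0
        if score >= accept:
            return "accepted", path
        if score <= reject:
            return "rejected", path
    return "undecided", path
```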

5 citations


Proceedings ArticleDOI
30 Aug 1992
TL;DR: The authors propose an approach to define objects qualitatively and hierarchically by generic shapes (primitives) arranged by generic relations, so that a class of objects shares the same description, and to recognize them in the image in a parallel, bottom-up way.
Abstract: The authors propose an approach to define objects qualitatively and hierarchically by generic shapes (primitives) arranged by generic relations, so that a class of objects shares the same description, and to recognize them in the image in a parallel, bottom-up way.

3 citations


Book ChapterDOI
01 Jan 1992
TL;DR: A model-based object detection algorithm for separating objects from the background in the scale-space domain; the results indicate that the multiscale approach is better than the single-scale approach and that the degradation in performance is greater with clutter than with white noise.
Abstract: Scale-space representation is a topic of active research in computer vision. Several researchers have studied the behavior of signals in the scale-space domain and how a signal can be reconstructed from its scale-space. However, not much work has been done on the signal detection problem, i.e. detecting the presence or absence of signal models from a given scale-space representation. In this paper we propose a model-based object detection algorithm for separating the objects from the background in the scale-space domain. There are a number of unresolved issues, some of which are discussed here. The algorithm is used to detect an infrared image of a tank in a noisy background. The performance of a multiscale approach is compared with that of a single scale approach by using a synthetic image and adding controlled amounts of noise. A synthetic image of randomly placed blobs of different sizes is used as the clean image. Two classes of noisy images are considered. The first class is obtained by adding clutter (i.e. colored noise) and the second class by adding an equivalent amount of white noise. The multiscale and single scale algorithms are applied to detect the blobs, and performance indices such as number of detections, number of false alarms, and delocalization errors are computed. The results indicate that (i) the multiscale approach is better than the single scale approach and (ii) the degradation in performance is greater with clutter than with white noise.
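The paper's model-based algorithm is not detailed in the abstract; as a stand-in, the sketch below contrasts a generic multiscale detector with a single-scale one using scale-normalized Laplacian-of-Gaussian responses, which is an assumption about the flavor of detector rather than the authors' method. A single-scale run corresponds to passing one sigma.

```python
import numpy as np
from scipy import ndimage

def multiscale_blob_response(image, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Generic multiscale detector (a stand-in, not the paper's algorithm):
    scale-normalized Laplacian-of-Gaussian responses are computed at several
    scales and, per pixel, the strongest magnitude over scale is kept.
    Passing a single sigma reproduces a single-scale detector."""
    img = image.astype(float)
    stack = np.stack([(s ** 2) * ndimage.gaussian_laplace(img, s) for s in sigmas])
    best = np.argmax(np.abs(stack), axis=0)
    response = np.take_along_axis(stack, best[None], axis=0)[0]
    return response, np.array(sigmas)[best]  # per-pixel response and winning scale
```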

2 citations


Proceedings ArticleDOI
30 Aug 1992
TL;DR: Presents preliminary results from an investigation of edge detection with a principal components analysis, showing that the parameters of an edge detector can be derived from data about edges in images.
Abstract: Presents preliminary results from an investigation of edge detection with a principal components analysis. In this way the parameters of an edge detector can be derived from data about edges in images. The preliminary investigations study how the optimal region of interest varies with the size of objects in the image.
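One plausible reading of deriving detector parameters from data, sketched under the assumption that the training data are small patches centered on labeled edges: run PCA over the patches and use a leading component as a convolution kernel. The patch handling and kernel choice are illustrative, not the paper's procedure.

```python
import numpy as np
from scipy import ndimage

def edge_kernel_from_examples(edge_patches):
    """Derive a convolution kernel from example data: PCA over small patches
    centered on labeled edges, keeping the leading principal component.
    Illustrative reading of the idea, not the paper's procedure."""
    X = np.stack([p.ravel().astype(float) for p in edge_patches])
    X -= X.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[0].reshape(edge_patches[0].shape)

def edge_response(image, kernel):
    """Apply the learned kernel to an image."""
    return ndimage.convolve(image.astype(float), kernel)
```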