
Showing papers on "Object-class detection published in 1997"


Journal ArticleDOI
TL;DR: Simulated and real images have been tested in a variety of formats, and the results show that the symmetry can be determined using the Gaussian image.
Abstract: Symmetry detection is important in the area of computer vision. A 3D symmetry detection algorithm is presented in this paper. The symmetry detection problem is converted to the correlation of the Gaussian image. Once the Gaussian image of the object has been obtained, the algorithm is independent of the input format. The algorithm can handle different kinds of images or objects. Simulated and real images have been tested in a variety of formats, and the results show that the symmetry can be determined using the Gaussian image.

257 citations


Proceedings ArticleDOI
21 Apr 1997
TL;DR: A rule-based face detection algorithm for frontal views is developed and applied to frontal views extracted from the European ACTS M2VTS database, which contains the video sequences of 37 different persons; the algorithm is found to provide a correct facial candidate in all cases.
Abstract: Face detection is a key problem in building automated systems that perform face recognition. A very attractive approach for face detection is based on multiresolution images (also known as mosaic images). Motivated by the simplicity of this approach, a rule-based face detection algorithm in frontal views is developed that extends the work of G. Yang and T.S. Huang (see Pattern Recognition, vol. 27, no. 1, pp. 53-63, 1994). The proposed algorithm has been applied to frontal views extracted from the European ACTS M2VTS database, which contains the video sequences of 37 different persons. It has been found that the algorithm provides a correct facial candidate in all cases. However, the success rate of the detected facial features (e.g. eyebrows/eyes, nostrils/nose, and mouth) that validate the choice of a facial candidate is found to be 86.5% under the most strict evaluation conditions.

214 citations
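
The mosaic idea above can be sketched as follows. This is an illustrative reconstruction, not the authors' exact rule set: the cell size and the uniformity tolerance below are hypothetical, and real mosaic-based detectors apply many more rules over several resolutions.

```python
def mosaic(image, cell):
    """Block-average a gray-level image (list of lists) so that each
    mosaic cell holds the mean intensity of one cell x cell block."""
    rows, cols = len(image), len(image[0])
    out = []
    for r in range(0, rows - rows % cell, cell):
        out_row = []
        for c in range(0, cols - cols % cell, cell):
            block = [image[r + i][c + j]
                     for i in range(cell) for j in range(cell)]
            out_row.append(sum(block) / len(block))
        out.append(out_row)
    return out

def uniform_cells(mosaic_row, tol):
    """One typical mosaic rule: cells across a candidate face band should
    have roughly uniform intensity (tolerance `tol` is illustrative)."""
    return max(mosaic_row) - min(mosaic_row) <= tol
```

A detector computes the mosaic once and then tests each candidate window with a cascade of such cell-level rules, which is what makes the approach cheap.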


Proceedings Article
01 Jan 1997
TL;DR: An integrated system for the acquisition, normalisation and recognition of moving faces in dynamic scenes using mixture models and the use of Gaussian colour mixtures for face detection and tracking is introduced.
Abstract: An integrated system for the acquisition, normalisation and recognition of moving faces in dynamic scenes is introduced. Four face recognition tasks are defined and it is argued that modelling person-specific probability densities in a generic face space using mixture models provides a technique applicable to all four tasks. The use of Gaussian colour mixtures for face detection and tracking is also described. Results are presented using data from the integrated system.

56 citations


Journal ArticleDOI
01 Dec 1997
TL;DR: The proposed face detection method first uses a neural network to classify the images and then segments the candidate face regions, and an energy thresholding method which can take the shape, colour and edge characteristics of the face features into the extraction process is devised to extract the lips.
Abstract: In modern multimedia systems, video and image signals usually need to be indexed or retrieved according to their contents. Colour characteristics are proposed for use in detection of human faces in colour images with complex backgrounds. The proposed face detection method first uses a neural network to classify the images and then segments the candidate face regions. Then, an energy thresholding method which can take the shape, colour and edge characteristics of the face features into the extraction process is devised to extract the lips. Finally, three shape descriptors of the lip feature are used to further verify the existence of the face in the candidate face regions. The experimental results show that this method can detect faces in the images from different sources in an accurate and efficient manner. Since faces are common elements in video and image signals, the proposed face detection method is an advance towards the goal of content-based video and image indexing and retrieval.

48 citations


01 Jan 1997
TL;DR: A new technique for a faster computation of the activities of the hidden layer units is proposed and has been demonstrated on face detection examples.
Abstract: We propose a new technique for a faster computation of the activities of the hidden layer units. This has been demonstrated on face detection examples.

44 citations


Book
13 Apr 1997
TL;DR: This book covers statistical linear models for image analysis, including line, edge and object detection, image segmentation, radial masks in line and edge detection, performance analysis, and approaches to image restoration.
Abstract: Preface 1. Introduction 2. Statistical linear models 3. Line detection 4. Edge detection 5. Object detection 6. Image segmentation 7. Radial masks in line and edge detection 8. Performance analysis 9. Some approaches to image restoration References Index.

38 citations


Proceedings ArticleDOI
17 Jun 1997
TL;DR: A novel object detection algorithm that combines template matching methods with feature-based methods via hierarchical MRF and MAP estimation is presented, which helps to achieve robustness against complex backgrounds and partial occlusions in object detection.
Abstract: This paper presents a new scale, position and orientation invariant approach to object detection. The proposed method first chooses attention regions in an image based on the region detection result on the image. Within the attention regions, the method then detects targets using a novel object detection algorithm that combines template matching methods with feature-based methods via hierarchical MRF and MAP estimation. Hierarchical MRF and MAP estimation provide a flexible framework to incorporate various visual clues. The combination of template matching and feature detection helps to achieve robustness against complex backgrounds and partial occlusions in object detection. Experimental results are given in the paper.

35 citations


Journal ArticleDOI
TL;DR: A phase-only vector filter is designed based on the surface normal of a range image of a face, which allows face recognition to be performed between range face and range face, or between range face and intensity face.
Abstract: The surface normal of a range image of a face can be decomposed into three components. The combinations of these three weighted components produce 2-D intensity images with different illuminations. A phase-only vector filter is designed based on these normal components. With such a vector filter, the face recognition can be performed between range face and range face, or between range face and intensity face. This kind of recognition is less sensitive to the changes of illumination of the input face.

25 citations


Proceedings ArticleDOI
10 Jan 1997
TL;DR: Khosravi et al. as mentioned in this paper used a deformable template model to describe the human face and used a probabilistic framework to extract frontal frames from a video sequence, which can be passed to recognition and classifications systems for further processing.
Abstract: Mehdi Khosravi, NCR Human Interface Technology Center, Atlanta, Georgia, 30309
Monson H. Hayes, Georgia Institute of Technology, Department of Electrical Engineering, Atlanta, Georgia, 30332

ABSTRACT
This paper presents an approach for the detection of human face and eyes in real time and in uncontrolled environments. The system has been implemented on a PC platform with the aid of simple commercial devices such as an NTSC video camera and a monochrome frame grabber. The approach is based on a probabilistic framework that uses a deformable template model to describe the human face. The system has been tested on both head-and-shoulder sequences as well as complex scenes with multiple people and random motion. The system is able to locate the eyes from different head poses (rotations in the image plane as well as in depth). The information provided by the location of the eyes is used to extract faces with frontal pose from a video sequence. The extracted frontal frames can be passed to recognition and classification systems for further processing.

Keywords: Face Detection, Eye Detection, Face Segmentation, Ellipse Fitting

1. INTRODUCTION
In recent years, face detection from video data has become a popular research area. There are numerous commercial applications of face detection in face recognition, verification, classification, identification as well as security access and multimedia. To extract the human faces in an uncontrolled environment, most of these applications must deal with the difficult problems of variations in lighting, variations in pose, occlusion of people by other people, and cluttered or non-uniform backgrounds. A review of the approaches to face detection that have been proposed is given in [1]. In [2], Sung and Poggio presented an example-based learning approach for locating unoccluded human frontal faces. The approach measures a distance between the local image and a few view-based "face" and "non-face" pattern prototypes at each image location to locate the face. In [3], Turk and Pentland used the distance to a "face space", defined by "eigenfaces", to locate and track frontal human faces. In [4], human faces were detected by searching for significant facial features at each location in the image. In [5]

21 citations


Journal Article
TL;DR: This paper proposes a new and simple algorithm for face extraction from a color image that detects the lip region in a face object by evaluating the values of seven pattern variables of those face candidates.
Abstract: This paper proposes a new and simple algorithm for face extraction from a color image. The approach detects the lip region in a face object. First, the lip- and skin-color pixels in an image are extracted on the basis of statistical probability analysis. These lip- and skin-color pixels are segmented separately by using binary image processing techniques to produce lip- and skin-color regions. Each region that has a skin-color region and one or more lip-color regions as its subset regions is nominated as a face candidate. To detect only the face objects from the face candidates, the algorithm evaluates the values of seven pattern variables of those face candidates. A face candidate with all its seven pattern variable values within the valid range of the face object class is detected as a face object. The proposed algorithm was verified by experimental results that gave 91.8% detection of face objects from 104 sample images. This is significant for locating or detecting faces in color images.

9 citations
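
A minimal sketch of the pixel-classification stage, assuming simple RGB threshold rules; the paper derives its lip- and skin-color classifiers from a statistical probability analysis of training pixels, so the constants below are purely illustrative:

```python
def is_skin(r, g, b):
    # Hypothetical skin-color rule: bright, red-dominant pixels.
    return (r > 95 and g > 40 and b > 20 and
            r > g and r > b and r - min(g, b) > 15)

def is_lip(r, g, b):
    # Hypothetical lip-color rule: lips are markedly redder than skin,
    # so demand a larger red/green gap.
    return r > 110 and r - g > 40 and r > b
```

The classified pixels would then be grouped into regions with binary image processing, and skin regions containing lip regions become face candidates.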


Proceedings ArticleDOI
21 Apr 1997
TL;DR: This work investigates a new approach to detect human face from monocular image sequences using genetic algorithms and has developed two models to be used as a tool to calculate the fitness for each observation in the search procedure.
Abstract: This work investigates a new approach to detect human face from monocular image sequences. Our method consists of two main search procedures, both using genetic algorithms. The first one is to find a head inside the scene and the second one is to identify the existence of face within the extracted head area. For this purpose, we have developed two models to be used as a tool to calculate the fitness for each observation in the search procedure: one is a head model which is approached by an ellipse and the other is a face template the size of which is adjustable. The procedures work sequentially. The head search is activated first, and after the head area is found, the face identification is activated. The experiment demonstrates the effectiveness of the method.

01 Jan 1997
TL;DR: A model-based 3D object recognition system is realized that can extract the relational face graph of the object in the original image; then, by matching the relational face graph with the one in the model base, the system can recognize what kind of object is in the image.
Abstract: A model-based 3D object recognition system is realized. The system can extract the relational face graph of the object in the original image; then, by matching the relational face graph with the one in the model base, the system can recognize what kind of object is in the image. If the stereo images of one scene are given, the system can accurately establish the correspondence between the two images using higher-level knowledge about the scene. In addition, a characteristic matching method has been proposed. The main idea of this method is to recognize the object by using the faces which best represent the characteristics of the object. The robustness of the system has been proven by experiments.

Proceedings ArticleDOI
12 Oct 1997
TL;DR: The objective is to verify face locations hypothesized in photographs such as the ones typified by those found in newspapers by confirming the face images while rejecting the non-face images.
Abstract: The human face is an object that is easily located in complex scenes by infants and adults alike. Our objective is to verify face locations hypothesized in photographs such as the ones typified by those found in newspapers. Our approach to face verification is based on a methodology of a hierarchical rule-based system. The rules themselves are derived by a fuzzy model of assigning scores to the "goodness" of match between image features and model features. Scores of the match computed with rules are further refined by a relaxation process. Face candidates are generated by a face locator and include face images as well as non-face images. Our objective is to confirm the face images while rejecting the non-face images. In an experiment on 80 face candidates returned by a face locator applied to newspaper photographs, the face verifier correctly identified all the faces as faces with a false positive rate of about 10%.
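
The fuzzy scoring of match "goodness" can be sketched with triangular membership functions; the membership shapes, the averaging step and the spread parameter here are assumptions, since the abstract does not publish the exact rule base:

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 outside [a, c], 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def match_score(feature_ratios, peaks, spread=0.5):
    """Aggregate the fuzzy goodness of several image-to-model feature
    ratios into one score (simple averaging; illustrative only)."""
    scores = [triangular(x, p - spread, p, p + spread)
              for x, p in zip(feature_ratios, peaks)]
    return sum(scores) / len(scores)
```

In the paper such scores are further refined by a relaxation process before a candidate is accepted or rejected.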

Journal ArticleDOI
Geoff West1
TL;DR: Evaluation techniques are discussed for assessing arc and line detection algorithms and for features in the context of verification and pose refinement strategies that can then be used for the design and integration of indexing and verification stages of object recognition.
Abstract: A popular paradigm in computer vision is based on dividing the vision problem into three stages, namely segmentation, feature extraction and recognition: for example, edge detection followed by line detection followed by planar object recognition. It can be argued that each of these stages needs to be thoroughly described to enable vision systems to be configured with predictable performance. However, an alternative view is that the performance of each stage is not in itself important as long as the overall performance is acceptable. This paper discusses feature performance, concentrating on the assessment of edge-based feature detection and object recognition. Evaluation techniques are discussed for assessing arc and line detection algorithms and for features in the context of verification and pose refinement strategies. These techniques can then be used for the design and integration of indexing and verification stages of object recognition. A theme of the paper is the need to assess feature extraction in the context of the chosen task.

Proceedings ArticleDOI
09 Sep 1997
TL;DR: A novel method for model-based counting of multi-colored objects is presented and the instances of a known object are counted based on color similarity.
Abstract: A novel method for model-based counting of multi-colored objects is presented. The instances of a known object are counted based on color similarity. Active search for color is employed for fast object search. Experimental results are presented.

Journal ArticleDOI
TL;DR: A series of strategies is described to achieve a system that enables face recognition under varying pose, including multi-view face modeling, threshold-image-based face feature detection, affine-transformation-based face posture normalization and template-matching-based face identification.
Abstract: In many automatic face recognition systems, posture constraining is a key factor preventing them from application. In this paper, a series of strategies will be described to achieve a system which enables face recognition under varying pose. These approaches include the multi-view face modeling, the threshold image based face feature detection, the affine transformation based face posture normalization and the template matching based face identification. Combining all of these strategies, a face recognition system with pose invariance is designed successfully. Using a 75 MHz Pentium PC and with a database of 75 individuals, 15 images for each person, and 225 test images with various postures, a very good recognition rate of 96.89% is obtained.
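
The posture normalization step can be illustrated with the common special case of a similarity transform computed from two detected eye positions; the canonical eye coordinates below are hypothetical, and the paper's full affine normalization may use more feature points:

```python
import math

def eye_alignment(left_eye, right_eye,
                  target_left=(30.0, 40.0), target_right=(70.0, 40.0)):
    """Similarity transform (x, y) -> (a*x - b*y + ox, b*x + a*y + oy)
    mapping the detected eye pair onto canonical positions."""
    dx, dy = right_eye[0] - left_eye[0], right_eye[1] - left_eye[1]
    tx, ty = target_right[0] - target_left[0], target_right[1] - target_left[1]
    scale = math.hypot(tx, ty) / math.hypot(dx, dy)
    theta = math.atan2(ty, tx) - math.atan2(dy, dx)
    a, b = scale * math.cos(theta), scale * math.sin(theta)
    # Choose the translation so the left eye lands exactly on target_left.
    ox = target_left[0] - (a * left_eye[0] - b * left_eye[1])
    oy = target_left[1] - (b * left_eye[0] + a * left_eye[1])
    return a, b, ox, oy

def transform(a, b, ox, oy, p):
    """Apply the similarity transform to a point p = (x, y)."""
    return (a * p[0] - b * p[1] + ox, b * p[0] + a * p[1] + oy)
```

After this warp, templates can be matched at a fixed scale and orientation.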

Proceedings ArticleDOI
12 Oct 1997
TL;DR: A genetic algorithm is used to construct human face templates, evolving them to the characteristics which human faces commonly have; the human face region is then confirmed among the candidate regions using these templates.
Abstract: Describes a computer vision system to locate a human face in an image sequence. The system operates on an image sequence captured during a time period and locates the human face region on each image in the sequence. It is the first important step in a human face recognition system. In this system we use a genetic algorithm to construct human face templates. We evolve our templates to adapt to the characteristics which human faces commonly have. The system consists of the following image processing modules: (1) noise reduction, (2) moving object region detection, (3) human face candidate region detection, (4) human face region verification. In the last step, we confirm the human face region among the candidate regions using the human face templates constructed by the genetic algorithm.

Proceedings ArticleDOI
20 Jun 1997
TL;DR: This paper describes an automatic face component detection algorithm that is able to detect face components from face images under size variation, complex background, and skew angle variation.
Abstract: Summary form only given, substantially as follows. This paper describes an automatic face component detection algorithm for natural office scenes. The input image is the frontal face of a person seated in a chair at the office. The algorithm does not use a special capturing environment such as controlled lighting, a fixed background or a fixed pose; subjects seated in the chair look at the camera without any restriction. Namely, the proposed algorithm is able to detect face components from face images under size variation, complex background, and skew angle variation. The face component detection algorithm has several steps. The first step is an adaptive Sobel edge detection algorithm and the second step is a 2-pass labeling algorithm. The third step is verification of size, shape and symmetry using face model knowledge for eye detection. The fourth step is skew normalization using the eye locations. The final step is the detection of the mouth and nose.
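
The 2-pass labeling step is a standard algorithm and can be sketched directly (4-connectivity, with union-find to track label equivalences):

```python
def two_pass_label(binary):
    """Classic two-pass connected-component labeling on a binary image
    given as a list of lists of 0/1; returns an image of component ids."""
    rows, cols = len(binary), len(binary[0])
    labels = [[0] * cols for _ in range(rows)]
    parent = {}                     # union-find over provisional labels

    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x

    next_label = 1
    for r in range(rows):           # pass 1: assign provisional labels
        for c in range(cols):
            if not binary[r][c]:
                continue
            up = labels[r - 1][c] if r else 0
            left = labels[r][c - 1] if c else 0
            if up and left:
                small, big = sorted((find(up), find(left)))
                labels[r][c] = small
                parent[big] = small         # record the equivalence
            elif up or left:
                labels[r][c] = find(up or left)
            else:
                parent[next_label] = next_label
                labels[r][c] = next_label
                next_label += 1
    for r in range(rows):           # pass 2: resolve equivalences
        for c in range(cols):
            if labels[r][c]:
                labels[r][c] = find(labels[r][c])
    return labels
```

Each labeled component can then be checked against the size, shape and symmetry rules mentioned above.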

Book ChapterDOI
12 Mar 1997
TL;DR: A system for automatic face recognition from images of faces is presented, based on a hybrid iconic approach in which a first recognition score is obtained by matching a person's face against an eigen-space obtained from an image ensemble of known individuals.
Abstract: The automatic detection of a person's identity is a very interesting issue in both social and industrial environments. In this paper a system for automatic face recognition from images of faces is presented. The proposed approach is based on a hybrid iconic approach, where a first recognition score is obtained by matching a person's face against an eigen-space obtained from an image ensemble of known individuals. The identity is verified by computing the correlation of the gray level histograms of the new face image and the one in the database.
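
The histogram-correlation verification step can be sketched as follows; the bin count and the use of a plain normalized correlation coefficient are assumptions:

```python
def gray_histogram(image, bins=8):
    """Gray-level histogram (pixel values 0-255) with `bins` buckets."""
    hist = [0] * bins
    for row in image:
        for v in row:
            hist[v * bins // 256] += 1
    return hist

def correlation(h1, h2):
    """Normalized correlation coefficient between two histograms."""
    n = len(h1)
    m1, m2 = sum(h1) / n, sum(h2) / n
    num = sum((a - m1) * (b - m2) for a, b in zip(h1, h2))
    d1 = sum((a - m1) ** 2 for a in h1) ** 0.5
    d2 = sum((b - m2) ** 2 for b in h2) ** 0.5
    return num / (d1 * d2) if d1 and d2 else 0.0
```

A high correlation between the probe histogram and the stored one confirms the eigen-space match; a low value rejects it.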

Proceedings ArticleDOI
28 Oct 1997
TL;DR: The algorithm that adaptively thresholds the normalized range differences is shown to give rapid and robust results for edge detection of obstacles in real outdoor range images, and is useful for obstacle detection in a mobile robot.
Abstract: This paper presents a new method for rapidly detecting obstacles in a spherical coordinate system using range images sensed by a Laser Imaging Range Sensor (LIRS). The algorithm that adaptively thresholds the normalized range differences is shown to give rapid and robust results for edge detection of obstacles in real outdoor range images, and is useful for obstacle detection in a mobile robot.
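
The normalized range-difference test can be sketched on a single scanline of range readings; the paper thresholds adaptively, whereas the fixed threshold below is a simplification:

```python
def obstacle_edges(scanline, threshold=0.1):
    """Indices where the normalized difference between neighbouring range
    readings exceeds a threshold (fixed here; adaptive in the paper)."""
    edges = []
    for i in range(1, len(scanline)):
        a, b = scanline[i - 1], scanline[i]
        if abs(a - b) / max(a, b) > threshold:
            edges.append(i)
    return edges
```

Normalizing by the larger of the two readings makes the test scale with distance, so nearby surfaces and far-away obstacles are judged by the same relative criterion.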

Book ChapterDOI
01 Jan 1997
TL;DR: The robustness and efficiency of the proposed method has been extensively tested in offline simulations as well as in online processing of numerous scenes, and can deal with complex textured scenes and temporarily varying image signal statistics.
Abstract: Publisher Summary This chapter presents an algorithm for detection of moving objects in image sequences. The proposed algorithm uses texture features which are obtained blockwise from the frames of the image sequence. The object detection itself is essentially a temporal change detection algorithm, detecting changes of corresponding texture features between successive frames. The subsequent change detection algorithm operates on a small number of simple features computed per block. This approach results in an efficient object detection scheme with low computational complexity. The method is insensitive to small movements of strongly textured areas like trees moving in the wind. An object entering or leaving a block will, however, cause a change of the feature in almost all cases. Thus the reliability of the object detection can be increased by using suitable texture features. Sharing the low complexity with earlier detection methods, the presented algorithm can deal with complex textured scenes and temporarily varying image signal statistics. The robustness and efficiency of the proposed method has been extensively tested in offline simulations as well as in online processing of numerous scenes.
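
The blockwise change-detection scheme can be sketched with intensity variance standing in for the texture feature; the chapter computes a small number of simple features per block, so variance here is just one illustrative choice:

```python
def block_variance(frame, r0, c0, size):
    """Intensity variance of one size x size block of a frame."""
    vals = [frame[r][c]
            for r in range(r0, r0 + size)
            for c in range(c0, c0 + size)]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def changed_blocks(prev, curr, size, threshold):
    """Top-left corners of blocks whose texture feature changed by more
    than `threshold` between two successive frames."""
    changed = []
    for r0 in range(0, len(prev) - size + 1, size):
        for c0 in range(0, len(prev[0]) - size + 1, size):
            if abs(block_variance(prev, r0, c0, size) -
                   block_variance(curr, r0, c0, size)) > threshold:
                changed.append((r0, c0))
    return changed
```

Because only a few scalar features are compared per block, the scheme stays cheap, and feature choice controls its tolerance to small motions of textured areas.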

Proceedings ArticleDOI
07 Feb 1997
TL;DR: A face detection system that automatically locates faces in gray-level images and a system which matches a given face image with faces in a database, performed by template matching using templates derived from a selected set of normalized faces.
Abstract: A face detection system that automatically locates faces in gray-level images is described. Also described is a system which matches a given face image with faces in a database. Face detection in an image is performed by template matching using templates derived from a selected set of normalized faces. Instead of using original gray level images, vertical gradient images were calculated and used to make the system more robust against variations in lighting conditions and skin color. Faces of different sizes are detected by processing the image at several scales. Further, a coarse-to-fine strategy is used to speed up the processing, and a combination of whole-face and face-component templates is used to ensure low false detection rates. The input to the face recognition system is a normalized vertical gradient image of a face, which is compared against a database using a set of pretrained feedforward neural networks with a winner-take-all fuser. The training is performed by using an adaptation of the backpropagation algorithm. This system has been developed and tested using images from the FERET database and a set of images obtained from Rowley et al. and Sung and Poggio. © 1997 SPIE--The International Society for Optical Engineering.
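
The vertical-gradient preprocessing can be sketched with a simple finite difference; the abstract does not specify the exact gradient operator, so this is a minimal stand-in:

```python
def vertical_gradient(image):
    """Absolute vertical finite difference of a gray-level image.
    Emphasises horizontal structure (eyes, mouth) and suppresses
    slow illumination changes along columns."""
    rows, cols = len(image), len(image[0])
    return [[abs(image[r + 1][c] - image[r][c]) for c in range(cols)]
            for r in range(rows - 1)]
```

Templates derived from such gradient images are less sensitive to lighting and skin-color variation than raw gray-level templates.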

Proceedings ArticleDOI
28 Oct 1997
TL;DR: This scheme utilizes qualitative components of a 3D object in an image as the basis for object representation, and performs partial object matching for object detection.
Abstract: The paper proposes a 3D object structure representation and detection scheme for object-based image retrieval. Based on findings in psychological research on visual cognition, this scheme utilizes qualitative components of a 3D object in an image as the basis for object representation, and performs partial object matching for object detection. During this process, the contextual information is used. This technique plays an important role in the 3D object-based image retrieval system under development.

Proceedings ArticleDOI
09 Sep 1997
TL;DR: A novel method of using automatically segmented facial image data for facial feature detection, with a quality measure to identify the image data in a large training set that best describe the feature.
Abstract: In conventional image-based feature detection a time-consuming pre-processing step is required to manually segment the training features from the unsegmented face images. We present a novel method of using automatically segmented facial image data for facial feature detection. A quality measure is defined to identify those image data from a large training set that best describe the feature. The best quality subset is then extracted and used to train the feature detector. The detection performance obtained with the automatically segmented data set after refinement is almost as high as that obtained by the feature detector trained on a manually segmented set.