
Showing papers by "Stan Z. Li published in 1998"


Proceedings ArticleDOI
23 Jun 1998
TL;DR: This paper proposes a novel pattern classification approach, called the nearest linear combination (NLC) approach, for eigenface based face recognition, using a linear combination of prototypical vectors to extend the representational capacity of the prototypes by generalization through interpolation and extrapolation.
Abstract: This paper proposes a novel pattern classification approach, called the nearest linear combination (NLC) approach, for eigenface based face recognition. Assume that multiple prototypical vectors are available per class, each vector being a point in an eigenface space. A linear combination of prototypical vectors belonging to a face class is used to define a measure of distance from the query vector to the class, the measure being defined as the Euclidean distance from the query to the linear combination nearest to the query vector (hence NLC). This contrasts with nearest neighbor (NN) classification, where a query vector is compared with each prototypical vector individually. Using a linear combination of prototypical vectors, instead of each of them individually, extends the representational capacity of the prototypes by generalization through interpolation and extrapolation. Experiments show that it leads to better results than existing classification methods.
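The NLC distance can be sketched as a least-squares projection of the query onto the span of a class's prototypes. The following is a minimal illustration, not the authors' code; the function names and the assumption of unconstrained combination coefficients are mine:

```python
import numpy as np

def nlc_distance(query, prototypes):
    """Euclidean distance from `query` to the nearest (unconstrained)
    linear combination of the class prototypes."""
    P = np.column_stack(prototypes)              # prototypes as columns
    # least-squares coefficients of the combination closest to the query
    coeffs, *_ = np.linalg.lstsq(P, query, rcond=None)
    nearest = P @ coeffs                         # projection onto span(P)
    return np.linalg.norm(query - nearest)

def nlc_classify(query, class_prototypes):
    """Assign the query to the class with the smallest NLC distance."""
    return min(class_prototypes,
               key=lambda c: nlc_distance(query, class_prototypes[c]))
```

Because the distance is measured to the whole span rather than to each prototype, a query lying "between" two prototypes of the same class can still be close to that class even if it is far from every individual prototype.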

98 citations


Journal ArticleDOI
TL;DR: This work presents a new approach for the analysis of convex minimization-based edge-preserving image smoothing and the parameter selection therein, and a scheme is proposed for selecting parameters to achieve desirable responses at edges.
Abstract: This work presents a new approach for the analysis of convex minimization-based edge-preserving image smoothing and the parameter selection therein. The global solution, that is, the response of a convex smoothing model to the ideal step edge, is derived in closed form. By analyzing the closed-form solution, insights are drawn into how the optimal solution responds to edges in the data and how the parameter values affect the resultant edges in the solution. Based on this, a scheme is proposed for selecting parameters to achieve desirable responses at edges. The theoretical results are substantiated by experiments.
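The kind of model analyzed here can be illustrated numerically: a 1-D convex smoothing energy (quadratic data term plus a Huber-type convex penalty on neighboring differences) applied to an ideal step edge, showing how the smoothing weight controls the edge response. This toy sketch uses plain gradient descent and is my own illustration under those assumptions, not the paper's closed-form derivation:

```python
import numpy as np

def smooth_step(d, lam=1.0, delta=0.5, iters=4000, lr=0.02):
    """Minimize sum_i (f_i - d_i)^2 + lam * sum_i huber(f_{i+1} - f_i)
    by gradient descent; `delta` is the Huber threshold."""
    def huber_grad(t):
        # derivative of huber(t): quadratic inside |t|<=delta, linear outside
        return np.where(np.abs(t) <= delta, 2 * t, 2 * delta * np.sign(t))
    f = d.astype(float).copy()
    for _ in range(iters):
        g = 2 * (f - d)                 # gradient of the data term
        hg = lam * huber_grad(np.diff(f))
        g[:-1] -= hg                    # d/df_i of huber(f_{i+1}-f_i)
        g[1:] += hg                     # d/df_{i+1} of the same term
        f -= lr * g
    return f

d = np.concatenate([np.zeros(20), np.ones(20)])   # ideal step edge
f_small = smooth_step(d, lam=0.1)                 # weak smoothing
f_large = smooth_step(d, lam=5.0)                 # strong smoothing
```

Comparing the jump at the step location for the two settings reproduces the qualitative behavior the paper analyzes: a larger smoothing weight flattens the edge response, which is exactly what a parameter-selection scheme must account for.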

64 citations


Journal ArticleDOI
TL;DR: The combinatorial optimization problem of MAP estimation is converted to one of constrained real optimization and then solved by using the augmented Lagrange-Hopfield (ALH) method, which effectively overcomes instabilities that are inherent in the penalty method or the Lagrange multiplier method in constrained optimization.
Abstract: The combinatorial optimization problem of MAP estimation is converted to one of constrained real optimization and then solved by using the augmented Lagrange-Hopfield (ALH) method, which combines augmented Lagrangian multipliers with the graded Hopfield network proposed in 1984. The ALH method effectively overcomes instabilities that are inherent in the penalty method or the Lagrange multiplier method in constrained optimization, and produces good solutions at reasonable cost.

28 citations


Proceedings ArticleDOI
16 Aug 1998
TL;DR: This work proposes a second-level linear combination (LC), over the prototypes belonging to the same face class, that treats the prototypes coherently and improves face recognition under new conditions not captured by any individual prototype.
Abstract: A hierarchical representation consisting of two level linear combinations (LC) is proposed for face recognition. At the first level, a face image is represented as a linear combination (LC) of a set of basis vectors, i.e. eigenfaces. Thereby a face image corresponds to a feature vector (prototype) in the eigenface space. Normally several such prototypes are available for a face class, each representing the face under a particular condition such as in viewpoint, illumination, and so on. We propose to use the second level LC, that of the prototypes belonging to the same face class, to treat the prototypes coherently. The purpose is to improve face recognition under a new condition not captured by the prototypes by using a linear combination of them. A new distance measure called nearest LC (NLC) is proposed as opposed to the NN. Experiments show that our method yields significantly better results than the one level eigenface methods.

16 citations


Journal ArticleDOI
TL;DR: Experimental results comparing the LS, M- and AM-estimators for angle estimation show that, in the presence of outliers, the M-estimator outperforms the LS estimator and the AM-estimator outperforms the M-estimator.
Abstract: A robust method is presented for computing rotation angles of image sequences from a set of corresponding points containing outliers. Assuming a known rotation axis, a least-squares (LS) solution is derived to compute the rotation angle from a clean data set of point correspondences. Since clean data is not guaranteed, we introduce a robust solution, based on the M-estimator, to deal with outliers. Then we present an enhanced robust algorithm, called the annealing M-estimator (AM-estimator), for reliable robust estimation. The AM-estimator has several attractive advantages over the traditional M-estimator: by definition, the AM-estimator involves neither a scale estimate nor free parameters and hence avoids the instabilities therein; algorithmically, it uses a deterministic annealing technique to approximate the global solution regardless of the initialization. Experimental results are presented to compare the performance of the LS, M- and AM-estimators for the angle estimation. Experiments show that in the presence of outliers, the M-estimator outperforms the LS estimator and the AM-estimator outperforms the M-estimator.
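The annealing idea can be sketched on a toy version of the problem: robustly estimating a single scalar (standing in for the rotation angle) from samples contaminated by outliers, using iteratively reweighted least squares with a redescending Welsch-type weight whose "temperature" is gradually lowered. This is my own minimal sketch of the deterministic-annealing principle, not the paper's estimator:

```python
import numpy as np

def am_location(x, T0=10.0, T_min=1e-3, cool=0.8, inner=10):
    """Annealed M-estimation of a 1-D location: IRLS with weight
    exp(-r^2 / T), lowering T on a geometric annealing schedule so
    outliers are progressively down-weighted."""
    theta = x.mean()                    # LS estimate as the starting point
    T = T0
    while T > T_min:
        for _ in range(inner):
            r = x - theta
            w = np.exp(-r**2 / T)       # redescending weight; no scale estimate
            theta = np.sum(w * x) / np.sum(w)
        T *= cool                       # cool down: weights become selective
    return theta

# 20 inliers near 0.51 plus 5 gross outliers at 3.0
angles = np.concatenate([0.5 + 0.001 * np.arange(20), np.full(5, 3.0)])
```

At high temperature the weights are nearly uniform (so the estimate starts near the LS solution); as the temperature drops, the outliers' weights vanish and the estimate locks onto the inlier cluster, without needing a scale estimate or an initial guess near the answer.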

13 citations


Book ChapterDOI
02 Jun 1998
TL;DR: A probabilistic approach for solving the problem of matching and recognizing jigsaw objects under partial occlusion, rotation, translation and scaling using the maximum a posteriori (MAP) principle.
Abstract: Assume that some objects are present in an image but can be seen only partially and are overlapping each other. To recognize the objects, we must first separate the objects from one another and then match them against the modeled objects using partial observations. This paper presents a probabilistic approach for solving this problem. Firstly, the task is formulated as a two-stage optimal estimation process. The first stage, matching, separates different objects and finds feature correspondences between the scene and each potential model object. The second stage, recognition, resolves inconsistencies among the results of matching to different objects and identifies object categories. Both the matching and recognition are formulated in terms of the maximum a posteriori (MAP) principle. Secondly, contextual constraints, which play an important role in solving the problem, are incorporated in the probabilistic formulation. Specifically, between-object constraints are encoded in the prior distribution modeled as a Markov random field, and within-object constraints are encoded in the likelihood distribution modeled as a Gaussian. They are combined into the posterior distribution which defines the MAP solution. Experimental results are presented for matching and recognizing jigsaw objects under partial occlusion, rotation, translation and scaling.

12 citations


Proceedings ArticleDOI
14 Apr 1998
TL;DR: A novel method for generalizing the representational capacity of available face database using the feature line representation, which covers more of the face space than the feature points and thus expands the capacity of the available database.
Abstract: A face image can be represented by a point in a feature space such as one spanned by a number of eigenfaces. In methods based on nearest neighbor classification, the representational capacity of a face database depends on how prototypical face images are chosen to account for possible image variations and also on how many prototypical images or their feature points are available. We propose a novel method for generalizing the representational capacity of an available face database. Any two feature points of the same class (individual) are generalized by the feature line passing through the points. The feature line covers more of the face space than the feature points and thus expands the capacity of the available database. In the feature line representation, the classification is based on the distance between the feature point of the query image and each of the feature lines of the prototypical images. Experiments are presented using a data set from five databases: the MIT, Cambridge, Bern, Yale and our own. There are 620 images of 124 individuals subject to varying viewpoint, illumination, and expression. The results show that the error rate of the proposed method is about 55%-60% of that of the standard eigenface method of M.A. Turk and A.P. Pentland (1991). They also demonstrate that the recognition result can be used for inferring the position of the input face relative to the two retrieved faces.
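The feature-line distance reduces to projecting the query point onto the line through two prototypes; the projection parameter also tells whether the match is an interpolation (between the prototypes) or an extrapolation (beyond them). A minimal sketch under those definitions, with names of my own choosing:

```python
import numpy as np

def feature_line_distance(q, p1, p2):
    """Distance from query point q to the feature line through prototypes
    p1 and p2. Returns (distance, mu); mu in [0, 1] means the projection
    interpolates between the prototypes, outside means extrapolation."""
    d = p2 - p1
    mu = np.dot(q - p1, d) / np.dot(d, d)   # position along the line
    proj = p1 + mu * d                      # foot of the perpendicular
    return np.linalg.norm(q - proj), mu
```

The returned mu is what allows the recognition result to indicate the position of the input face relative to the two retrieved faces, as the abstract notes.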

12 citations


Proceedings ArticleDOI
16 Aug 1998
TL;DR: A scheme that combines wavelet decomposition with Markov random field models, using the parameters of each Markov random field model as features for texture classification.
Abstract: We present a new scheme to classify different textures. In recent years, wavelet decomposition has been used in texture classification, with energy and entropy as the usual classification features. In this paper, we propose a scheme that uses wavelet decomposition together with Markov random field models; the parameters of each Markov random field model are used as features for texture classification. Thus we can analyze textures with Markov random field models at different scales via the wavelet decomposition.
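The pipeline can be sketched end to end: decompose the image with a wavelet transform, then fit a simple Gaussian MRF to each subband and collect the fitted parameters as the feature vector. The sketch below is my own illustration of that idea, using a hand-rolled Haar decomposition and a 4-neighbor GMRF fitted by least squares; it is not the paper's exact model:

```python
import numpy as np

def haar_level(img):
    """One level of a 2-D Haar decomposition -> (LL, LH, HL, HH) subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2      # vertical average
    d = (img[0::2, :] - img[1::2, :]) / 2      # vertical detail
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH

def gmrf_params(band):
    """Least-squares fit of a 4-neighbour Gaussian MRF:
    band[i,j] ~ b1*(left+right) + b2*(up+down)."""
    c = band[1:-1, 1:-1].ravel()                       # interior pixels
    h = (band[1:-1, :-2] + band[1:-1, 2:]).ravel()     # horizontal neighbours
    v = (band[:-2, 1:-1] + band[2:, 1:-1]).ravel()     # vertical neighbours
    beta, *_ = np.linalg.lstsq(np.column_stack([h, v]), c, rcond=None)
    return beta

def texture_features(img, levels=2):
    """MRF parameters of every detail subband (plus the final LL) over
    `levels` wavelet scales, concatenated into one feature vector."""
    feats, band = [], img.astype(float)
    for _ in range(levels):
        LL, LH, HL, HH = haar_level(band)
        for s in (LH, HL, HH):
            feats.extend(gmrf_params(s))
        band = LL
    feats.extend(gmrf_params(band))
    return np.array(feats)
```

For two levels this yields a 14-dimensional feature vector (two MRF parameters for each of the six detail subbands plus the coarsest approximation band), which a standard classifier can then consume.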

11 citations


Journal ArticleDOI
TL;DR: A novel relaxation labeling method based on augmented Lagrangian multipliers and the graded Hopfield neural network (ALH) yields results comparable to the best existing RL algorithms in terms of the optimized objective values, yet is more suitable for analog neural implementation.

8 citations


Book ChapterDOI
08 Jan 1998
TL;DR: This work presents a new approach for analyzing the problem of edge-preserving image smoothing using convex minimization and for selecting the smoothing parameters, with a scheme proposed for selecting parameters to achieve desirable responses at edges.
Abstract: This work presents a new approach for analyzing the problem of edge-preserving image smoothing using convex minimization and for selecting smoothing parameters. The closed-form (global) solution is derived as the response of a convex smoothing model to the ideal step edge. Insights into how the minimal solution responds to edges in the data and how the parameter values affect the resultant edges in the solution are drawn from the analytic expression of the closed-form solution. Based on this, a scheme is proposed for selecting parameters to achieve desirable responses at edges.

3 citations


Journal ArticleDOI
TL;DR: A Bayesian approach to object matching is presented, where each feature in the scene is assigned a feature of the known model objects, and an optimal solution is defined as a maximum a posteriori estimate.
Abstract: A Bayesian approach to object matching is presented. An object and a scene are each represented by features, such as critical points, line segments and surface patches, constrained by unary properties and contextual relations. The matching is posed as a labeling problem, where each feature in the scene is assigned (associated with) a feature of the known model objects. The prior distribution of a scene's labeling is modeled as a Markov random field, which encodes the between-object constraints. The conditional distribution of the observed features, given the labeling, is assumed to be Gaussian, which encodes the within-object constraints. An optimal solution is defined as a maximum a posteriori estimate. Relationships with previous work are discussed. Experimental results are shown.
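The MAP labeling in this formulation minimizes a posterior energy with two kinds of terms: unary Gaussian-likelihood terms comparing feature properties, and pairwise MRF-prior terms comparing contextual relations. A minimal sketch of such an energy, with scalar properties and relations and weights of my own choosing (the paper's actual features and potentials are richer):

```python
def map_energy(labels, scene_props, model_props, scene_rel, model_rel,
               sigma=1.0, v2=1.0):
    """Posterior energy of a labeling: scene feature i -> model feature
    labels[i]. Unary terms (Gaussian likelihood) penalize property
    mismatches; pairwise terms (MRF prior) penalize relational
    inconsistency between matched pairs."""
    E = 0.0
    n = len(labels)
    for i, fi in enumerate(labels):
        E += (scene_props[i] - model_props[fi]) ** 2 / (2 * sigma ** 2)
    for i in range(n):
        for j in range(i + 1, n):
            # relation between scene features i,j vs. their model matches
            E += v2 * (scene_rel[i][j] - model_rel[labels[i]][labels[j]]) ** 2
    return E
```

For a small number of features the MAP labeling can be found by exhaustive search over labelings; larger problems need the relaxation or Hopfield-style optimizers discussed in the other entries on this page.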

Journal Article
TL;DR: The right lateral thoracotomy is a safe and effective alternative to a median sternotomy for correction of cardiac defects and advantages of this approach include less injury, maintaining the continuity and the integrity of the bony thorax, and preventing postoperative pigeon breast.
Abstract: OBJECTIVE: To review the experience of correction of congenital cardiac defects through a right minithoracotomy. METHOD: 319 patients underwent correction of congenital heart malformations through a right lateral thoracotomy under cardiopulmonary bypass. The average age was 3.44 +/- 1.59 years (range, 5 months-8 years). The average body weight was 13.66 +/- 3.98 kg (range, 6-26 kg). Cardiac defects repaired included atrial septal defect in 87 patients (1 patient associated with left superior vena cava (LSVC), 6 pulmonary stenosis, 5 partial anomalous pulmonary venous connection), ventricular septal defect in 200 (7 patients with coexisting patent ductus arteriosus, 7 mitral insufficiency, 3 LSVC, 11 right ventricular outflow tract obstruction), tetralogy of Fallot in 19 (3 patients associated with LSVC, 1 single coronary malformation), partial endocardial cushion defect in 2, and other defects in 11. The mean cardiopulmonary bypass time was 56.07 +/- 24.90 min (range, 20-176 min) and the mean aortic cross-clamping time was 32.97 +/- 20.38 min (range, 6-140 min). The average mechanical ventilation time after operation was 18.75 +/- 24.57 hr (range, 2-140.72 hr), and the mean postoperative hospital stay was 7.08 +/- 0.69 days (range, 7-17 days). RESULT: No operative mortality or severe postoperative complications were noted. CONCLUSION: The right lateral thoracotomy is a safe and effective alternative to a median sternotomy for correction of cardiac defects. Advantages of this approach include less injury, maintaining the continuity and integrity of the bony thorax, and preventing postoperative pigeon breast. The cosmetic result is superior to that of a median sternotomy or bilateral submammary incision.

Book ChapterDOI
08 Jan 1998
TL;DR: This paper presents an active object recognition algorithm using MAP-MRF matching and a sensor planning strategy that determines the next viewpoint of the camera when the matching result is ambiguous.
Abstract: This paper presents an active object recognition algorithm using MAP-MRF matching and a sensor planning strategy. The matching between the sensed and model objects is based on surface properties. A new measure of surface distinguishability is defined for sensor planning. A MAP-MRF framework is used to generate the matching label set, and a measure of confidence in a correct match is determined from the posterior energy. An active object recognition algorithm determines the next viewpoint of the camera if ambiguity exists in the matching result; the next viewpoint is chosen based on the surface with the highest distinguishability. Experimental results on images under perfect and imperfect segmentation are presented.