
Showing papers on "Point (geometry)" published in 2007


Proceedings ArticleDOI
29 Jul 2007
TL;DR: In this paper, the Hidden Point Removal (HPR) operator is proposed to determine the visible points in a point cloud, as viewed from a given viewpoint, without reconstructing a surface or estimating normals.
Abstract: This paper proposes a simple and fast operator, the "Hidden" Point Removal operator, which determines the visible points in a point cloud, as viewed from a given viewpoint. Visibility is determined without reconstructing a surface or estimating normals. It is shown that extracting the points that reside on the convex hull of a transformed point cloud, amounts to determining the visible points. This operator is general - it can be applied to point clouds at various dimensions, on both sparse and dense point clouds, and on viewpoints internal as well as external to the cloud. It is demonstrated that the operator is useful in visualizing point clouds, in view-dependent reconstruction and in shadow casting.
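The spherical-flip construction behind the operator is simple enough to sketch in a few lines. Below is a minimal NumPy/SciPy sketch of the idea; the function name and the radius choice (`radius_factor`) are our own assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.spatial import ConvexHull

def hpr_visible(points, viewpoint, radius_factor=100.0):
    """Indices of points deemed visible from `viewpoint`.

    Spherical flip: each point is reflected to the far side of a large
    sphere centred on the viewpoint; points whose flipped images lie on
    the convex hull of the flipped cloud plus the viewpoint are visible.
    """
    p = points - viewpoint                        # viewpoint at the origin
    norms = np.linalg.norm(p, axis=1, keepdims=True)
    R = radius_factor * norms.max()               # sphere enclosing all points
    flipped = p + 2.0 * (R - norms) * p / norms   # spherical flip
    hull = ConvexHull(np.vstack([flipped, np.zeros(points.shape[1])]))
    return sorted(set(hull.vertices) - {len(points)})  # drop the viewpoint vertex
```

The same code runs unchanged on 2D and 3D clouds, matching the paper's claim that the operator applies in various dimensions; a larger `radius_factor` better approximates visibility on dense clouds.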

324 citations


Journal ArticleDOI
TL;DR: Since images of faces often belong to a manifold of intrinsically low dimension, the LLTSA algorithm for effective face manifold learning and recognition is developed, which achieves much higher recognition rates than a few competing methods.

258 citations


Proceedings ArticleDOI
04 Jul 2007
TL;DR: An algorithm for reconstructing watertight surfaces from unoriented point sets using the Voronoi diagram of the input point set to deduce a tensor field whose principal axes and eccentricities locally represent respectively the most likely direction of the normal to the surface, and the confidence in this direction estimation.
Abstract: We introduce an algorithm for reconstructing watertight surfaces from unoriented point sets. Using the Voronoi diagram of the input point set, we deduce a tensor field whose principal axes and eccentricities locally represent respectively the most likely direction of the normal to the surface, and the confidence in this direction estimation. An implicit function is then computed by solving a generalized eigenvalue problem such that its gradient is most aligned with the principal axes of the tensor field, providing a best-fitting isosurface reconstruction. Our approach possesses a number of distinguishing features. In particular, the implicit function optimization provides resilience to noise, adjustable fitting to the data, and controllable smoothness of the reconstructed surface. Finally, the use of simplicial meshes (possibly restricted to a thin crust around the input data) and (an)isotropic Laplace operators renders the numerical treatment simple and robust.

242 citations


Journal ArticleDOI
TL;DR: This relation affords a rapid approximation to B point measurement that, in noisy or degraded signals, is superior to visual B point identification and to a derivative-based estimate.
Abstract: The B point on the impedance cardiograph waveform corresponds to the opening of the aortic valve and is an important parameter for calculating systolic time intervals, stroke volume, and cardiac output. Identifying the location of the B point is sometimes problematic because the characteristic upstroke that serves as a marker of this point is not always apparent. Here is presented a reliable method for B point identification, based on the consistent relationship between the R to B interval (RB) and the interval between the R-wave and the peak of the dZ/dt function (RZ). The polynomial function relating RB to RZ (RB = 1.233RZ - 0.0032RZ(2) - 31.59) accounts for 90%-95% of the variance in the B point location across ages and gender and across baseline and stress conditions. This relation affords a rapid approximation to B point measurement that, in noisy or degraded signals, is superior to visual B point identification and to a derivative-based estimate.
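The reported polynomial can be applied directly. A one-line sketch (the intervals are presumably in milliseconds, the usual unit for these measures, though the abstract does not state it):

```python
def estimate_rb(rz):
    """R-to-B interval estimated from the R-to-dZ/dt-peak interval RZ,
    using the polynomial reported in the paper:
    RB = 1.233*RZ - 0.0032*RZ^2 - 31.59."""
    return 1.233 * rz - 0.0032 * rz ** 2 - 31.59
```

For example, an RZ interval of 100 yields an estimated RB of about 59.7.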

206 citations


Journal ArticleDOI
TL;DR: In this article, the inverse kinematics of a general 6R serial kinematic chain are computed using the Study model of Euclidean displacements, which identifies a displacement with a point on a six-dimensional quadric S_6^2 in seven-dimensional projective space P^7.

170 citations


Patent
21 Mar 2007
TL;DR: In this article, a POD module system includes a housing, a coaxial cable connector formed on the housing and connectable to a first device to receive a cable signal, a port formed on the housing to receive a wire or wireless signal from a second device, and a module unit to process at least one of the cable signal and the wire or wireless signal to generate at least one of a copy protection signal and video and audio signals, respectively.
Abstract: A POD module system includes a housing, a coaxial cable connector formed on the housing and connectable to a first device to receive a cable signal, a port formed on the housing to receive a wire or wireless signal from a second device, a module unit to process at least one of the cable signal and the at least one of the wireless signal to generate at least one of a copy protection signal and one of video and audio signals, respectively, and a connector formed on the housing and connectable to a third device to transmit the at least one of the copy protection signal and the one of video and audio signals to the third device such that the third device generates at least one of an image and a sound to correspond to the at least one of the copy protection signal and the one of video and audio signals.

159 citations


Proceedings ArticleDOI
17 Jun 2007
TL;DR: This work presents a new approach to reconstruct the shape of a 3D object or scene from a set of calibrated images that combines the topological flexibility of a point-based geometry representation with the robust reconstruction properties of scene-aligned planar primitives.
Abstract: We present a new approach to reconstruct the shape of a 3D object or scene from a set of calibrated images. The central idea of our method is to combine the topological flexibility of a point-based geometry representation with the robust reconstruction properties of scene-aligned planar primitives. This can be achieved by approximating the shape with a set of surface elements (surfels) in the form of planar disks which are independently fitted such that their footprint in the input images matches. Instead of using an artificial energy functional to promote the smoothness of the recovered surface during fitting, we use the smoothness assumption only to initialize planar primitives and to check the feasibility of the fitting result. After an initial disk has been found, the recovered region is iteratively expanded by growing further disks in tangent direction. The expansion stops when a disk rotates by more than a given threshold during the fitting step. A global sampling strategy guarantees that eventually the whole surface is covered. Our technique does not depend on a shape prior or silhouette information for the initialization and it can automatically and simultaneously recover the geometry, topology, and visibility information which makes it superior to other state-of-the-art techniques. We demonstrate with several high-quality reconstruction examples that our algorithm performs highly robustly and is tolerant to a wide range of image capture modalities.

151 citations


Proceedings ArticleDOI
13 Jun 2007
TL;DR: This work presents a robust method that identifies sharp features in a point cloud by returning a set of smooth curves aligned along the edges by leveraging the concept of robust moving least squares to locally fit surfaces to potential features.
Abstract: Defining sharp features in a given 3D model facilitates a better understanding of the surface and aids visualizations, reverse engineering, filtering, simplification, non-photo realism, reconstruction and other geometric processing applications. We present a robust method that identifies sharp features in a point cloud by returning a set of smooth curves aligned along the edges. Our feature extraction is a multi-step refinement method that leverages the concept of robust moving least squares to locally fit surfaces to potential features. Using Newton's method, we project points to the intersections of multiple surfaces then grow polylines through the projected cloud. After resolving gaps, connecting corners, and relaxing the results, the algorithm returns a set of complete and smooth curves that define the features. We demonstrate the benefits of our method with two applications: surface meshing and point-based geometry compression.

144 citations


Proceedings ArticleDOI
01 Sep 2007
TL;DR: This paper presents a 3D offline path planner for unmanned aerial vehicles (UAVs) using multiobjective evolutionary algorithms for finding solutions corresponding to the conflicting goals of minimizing path length and maximizing margin of safety, using the commonly-used NSGA-II algorithm.
Abstract: In this paper, we present a 3D offline path planner for unmanned aerial vehicles (UAVs) using multiobjective evolutionary algorithms for finding solutions corresponding to conflicting goals of minimizing length of path and maximizing margin of safety. In particular, we have chosen the commonly-used NSGA-II algorithm for this purpose. The algorithm generates a curved path which is represented using B-spline curves. The control points of the B-spline curve are the decision variables in the genetic algorithm. In particular, we solve two problems, assuming the normal flight envelope restriction: (i) path planning for a UAV when no other constraint is assumed to be present and (ii) path planning for a UAV if the vehicle has to necessarily pass through a particular point in the space. The use of a multiobjective evolutionary algorithm helps in generating a number of feasible paths with different trade-offs between the objective functions. The availability of a number of trade-off solutions allows the user to choose a path according to his/her needs easily, thereby making the approach more pragmatic. Although an automated decision-making aid is the next immediate need of research, we defer it for another study.
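The two objectives can be evaluated for any candidate set of B-spline control points. The sketch below is ours, not the paper's: the obstacle model (point obstacles), the clamped uniform knot vector, and the sampling density are all assumptions the abstract does not fix.

```python
import numpy as np
from scipy.interpolate import BSpline

def path_objectives(ctrl_pts, obstacles, degree=3, samples=200):
    """Evaluate (path length, negated safety margin) for a B-spline path
    whose control points are the decision variables of the evolutionary
    search. Obstacles are modelled as points purely for illustration."""
    ctrl_pts = np.asarray(ctrl_pts, float)
    n = len(ctrl_pts)                      # requires n >= degree + 1
    # clamped uniform knot vector: the curve interpolates both endpoints
    knots = np.concatenate([np.zeros(degree),
                            np.linspace(0.0, 1.0, n - degree + 1),
                            np.ones(degree)])
    path = BSpline(knots, ctrl_pts, degree)(np.linspace(0.0, 1.0, samples))
    length = np.linalg.norm(np.diff(path, axis=0), axis=1).sum()
    margin = min(np.linalg.norm(path - ob, axis=1).min()
                 for ob in np.asarray(obstacles, float))
    return length, -margin                 # NSGA-II minimises both objectives
```

NSGA-II then evolves the control-point vectors against this pair of objectives, producing the trade-off front the abstract describes.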

139 citations


Journal ArticleDOI
TL;DR: In this article, an improved meshless method is proposed, based on the combination of the natural neighbour finite element method with the radial point interpolation method: the natural neighbour radial point interpolation method (NNRPIM).

137 citations


Journal ArticleDOI
TL;DR: Linearly conforming point interpolation method (LC-PIM) as mentioned in this paper is formulated for three-dimensional elasticity problems, where shape functions are generated using point-interpolation method by adopting polynomial basis functions and local supporting nodes are selected based on the background cells.
Abstract: Linearly conforming point interpolation method (LC-PIM) is formulated for three-dimensional elasticity problems. In this method, shape functions are generated using point interpolation method by adopting polynomial basis functions and local supporting nodes are selected based on the background cells. The shape functions so constructed have the Kronecker delta function property, which allows straightforward imposition of point essential boundary conditions. Galerkin weak form is used for creating discretized system equations, and a nodal integration scheme with strain-smoothing operation is used to perform the numerical integration. The present LC-PIM can guarantee linear exactness and monotonic convergence for the numerical results. Numerical examples are used to examine the present method in terms of accuracy, convergence, and efficiency. Compared with the finite element method using linear elements, the LC-PIM can achieve better efficiency, and higher accuracy especially for stresses. Copyright © 2007 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this paper, the Hidden Point Removal (HPPR) operator is proposed to determine the visible points in a point cloud, as viewed from a given viewpoint, using a simple and fast operator.
Abstract: This paper proposes a simple and fast operator, the "Hidden" Point Removal operator, which determines the visible points in a point cloud, as viewed from a given viewpoint. Visibility is determined...

Journal ArticleDOI
TL;DR: In this article, the authors consider the problem of optimal truss design, where cross-sectional areas and the positions of joints are simultaneously optimized from a general point of view. But they focus on the difference between simultaneous and alternating optimization of geometry and topology.
Abstract: The paper addresses the classical problem of optimal truss design where cross-sectional areas and the positions of joints are simultaneously optimized. Several approaches are discussed from a general point of view. In particular, we focus on the difference between simultaneous and alternating optimization of geometry and topology. We recall a rigorously mathematical approach based on the implicit programming technique which considers the classical single load minimum compliance problem subject to a volume constraint. This approach is refined leading to three new problem formulations which can be treated by methods of Mathematical Programming. In particular, these formulations cover the effect of melting end nodes, i.e., vanishing potential bars due to changes in the geometry. In one of these new problem formulations, the objective function is a polynomial of degree three and the constraints are bilinear or just sign constraints. Because heuristics are avoided, certain optimality properties can be proven for resulting structures. The paper closes with two numerical test examples.

Journal ArticleDOI
TL;DR: This work shows that a natural geometric description of continuous repetitive hand trajectories is not Euclidean but equi-affine, and develops a mathematical framework based on differential geometry, Lie group theory and Cartan’s moving frame method for the analysis of humanhand trajectories.
Abstract: Humans interact with their environment through sensory information and motor actions. These interactions may be understood via the underlying geometry of both perception and action. While the motor space is typically considered by default to be Euclidean, persistent behavioral observations point to a different underlying geometric structure. These observed regularities include the “two-thirds power law”, which connects path curvature with velocity, and “local isochrony”, which prescribes the relation between movement time and its extent. Starting with these empirical observations, we have developed a mathematical framework based on differential geometry, Lie group theory and Cartan’s moving frame method for the analysis of human hand trajectories. We also use this method to identify possible motion primitives, i.e., elementary building blocks from which more complicated movements are constructed. We show that a natural geometric description of continuous repetitive hand trajectories is not Euclidean but equi-affine. Specifically, equi-affine velocity is piecewise constant along movement segments, and movement execution time for a given segment is proportional to its equi-affine arc-length. Using this mathematical framework, we then analyze experimentally recorded drawing movements. To examine movement segmentation and classification, the two fundamental equi-affine differential invariants, equi-affine arc-length and curvature, are calculated for the recorded movements. We also discuss the possible role of conic sections, i.e., curves with constant equi-affine curvature, as motor primitives and focus in more detail on parabolas, the equi-affine geodesics. Finally, we explore possible schemes for the internal neural coding of motor commands by showing that the equi-affine framework is compatible with the common model of population coding of the hand velocity vector when combined with a simple assumption on its dynamics.
We then discuss several alternative explanations for the role that the equi-affine metric may play in internal representations of motion perception and production.
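The central quantity, equi-affine arc-length, has a simple closed form for sampled planar curves: sigma = integral of |x'y'' - y'x''|^(1/3) dt. A discrete sketch follows; finite differences and the trapezoidal rule are our choices, not necessarily the authors' pipeline:

```python
import numpy as np

def equi_affine_arclength(x, y, dt):
    """Equi-affine arc-length of a sampled planar trajectory:
    sigma = integral of |x'y'' - y'x''|**(1/3) dt."""
    xd, yd = np.gradient(x, dt), np.gradient(y, dt)
    xdd, ydd = np.gradient(xd, dt), np.gradient(yd, dt)
    v = np.abs(xd * ydd - yd * xdd) ** (1.0 / 3.0)     # equi-affine speed
    return float(((v[:-1] + v[1:]) * 0.5 * dt).sum())  # trapezoidal rule
```

For a circle of radius r traversed at constant angular rate, the equi-affine speed is constant and the arc-length over one period is 2*pi*r**(2/3), consistent with the piecewise-constant equi-affine velocity the paper reports for movement segments.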

Journal ArticleDOI
TL;DR: In this article, Scheuren (1989) comments on “The Social Policy Simulation Database and Model: An Example of Survey and Administrative Data Integration,” by M. Wolfson, S. Gribble, M. Bordt, B. Murphy, and G. Rowe, Survey of Current Business, 69, 40–41.
Abstract: Scheuren, F. (1989), Comment on “The Social Policy Simulation Database and Model: An Example of Survey and Administrative Data Integration,” by M. Wolfson, S. Gribble, M. Bordt, B. Murphy, and G. Rowe, Survey of Current Business, 69, 40–41. (2005), “Paradata From Concept to Completion,” in Symposium 2005: Methodological Challenges for Future Information Needs, Ottawa: Statistics Canada, available at www.statcan.ca/english/freepub/11-522-XIE/ 2005001/9432.pdf.

Dissertation
19 Feb 2007
TL;DR: This thesis introduces a method for measuring melodic similarity for notated music such as MIDI files and the creation of a ground truth for a large music collection (RISM) is described, along with a performance measure and the application of both the ground truth and the measure for the MIREX algorithm competition.
Abstract: This thesis introduces a method for measuring melodic similarity for notated music such as MIDI files. This music search algorithm views music as sets of notes that are represented as weighted points in the two-dimensional space of time and pitch. Two point sets can be compared by calculating how much effort it would take to convert one into the other; effort is measured by determining how much weight has to be moved over what distances. To make these point set comparisons efficient enough for searching large databases, the distances between every item (point set) in the database and a small, fixed set of special (vantage) point sets can be pre-calculated. Whenever a new query needs to be compared to the items in the database, one can restrict the search to those items with similar distances to the special point sets. For studying the performance of the transportation-based search algorithm and other, similar ones, the creation of a ground truth for a large music collection (RISM) is described, along with a performance measure and the application of both the ground truth and the measure for the MIREX algorithm competition.
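The vantage-object pruning step relies only on the triangle inequality, so it can be sketched independently of the transportation distance itself. Names and tolerance semantics below are ours; the pruning is exact only when the underlying distance is a true metric:

```python
def vantage_filter(query_to_vantage, item_to_vantage, tol):
    """Candidate items whose distance to the query may be <= tol.

    By the triangle inequality, |d(q, v) - d(item, v)| <= d(q, item) for
    every vantage object v, so any item violating that bound for some v
    can be pruned without computing the expensive distance d(q, item).

    query_to_vantage: distances from the query to each vantage object.
    item_to_vantage: {item_id: [distances to each vantage object]}.
    """
    return [item for item, dists in item_to_vantage.items()
            if all(abs(q - d) <= tol
                   for q, d in zip(query_to_vantage, dists))]
```

Only the surviving candidates then need the full transportation-distance comparison against the query.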

01 Jan 2007
TL;DR: The Coherent Point Drift (CPD) method as discussed by the authors is a probabilistic method for non-rigid registration of point sets, where the registration is treated as a maximum likelihood estimation problem with motion coherence constraint over the velocity field such that one point set moves coherently to align with the second set.
Abstract: We introduce Coherent Point Drift (CPD), a novel probabilistic method for nonrigid registration of point sets. The registration is treated as a Maximum Likelihood (ML) estimation problem with motion coherence constraint over the velocity field such that one point set moves coherently to align with the second set. We formulate the motion coherence constraint and derive a solution of regularized ML estimation through the variational approach, which leads to an elegant kernel form. We also derive the EM algorithm for the penalized ML optimization with deterministic annealing. The CPD method simultaneously finds both the non-rigid transformation and the correspondence between two point sets without making any prior assumption of the transformation model except that of motion coherence. This method can estimate complex non-linear non-rigid transformations, and is shown to be accurate on 2D and 3D examples and robust in the presence of outliers and missing points.
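The "elegant kernel form" mentioned above expresses the coherent displacement field as v(z) = sum_j G(z, y_j) w_j with a Gaussian kernel; building that kernel matrix is straightforward. A sketch, with beta as the smoothness bandwidth (the interpretation follows the CPD formulation, but this is only the kernel construction, not the full EM registration):

```python
import numpy as np

def gaussian_kernel(Y, beta):
    """Kernel matrix G with G[i, j] = exp(-||y_i - y_j||^2 / (2 beta^2)),
    the Gaussian kernel in which the coherent displacement field lives."""
    sq = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq / (2.0 * beta ** 2))
```

A larger beta couples distant points more strongly, enforcing a smoother (more coherent) motion of the point set.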

Journal ArticleDOI
TL;DR: A variant of Jarratt's method for solving non-linear equations is presented, where per iteration the new method adds an evaluation of the function at another point in the procedure iterated by Jarratt's method.

Journal ArticleDOI
Devrim Akca1
TL;DR: Gruen et al. as mentioned in this paper proposed an extension to the basic algorithm, which can simultaneously match surface geometry and its attribute information, e.g. intensity, colour, temperature, etc. under a combined estimation model.
Abstract: 3D surface matching would be an ill conditioned problem when the curvature of the object surface is either homogenous or isotropic, e.g. for plane or spherical types of objects. A reliable solution can only be achieved if supplementary information or functional constraints are introduced. In a previous paper, an algorithm for the least squares matching of overlapping 3D surfaces, which were digitized/sampled point by point using a laser scanner device, by the photogrammetric method or other techniques, was proposed [Gruen, A., and Akca, D., 2005. Least squares 3D surface and curve matching. ISPRS Journal of Photogrammetry and Remote Sensing 59 (3), 151–174.]. That method estimates the transformation parameters between two or more fully 3D surfaces, minimizing the Euclidean distances instead of z-differences between the surfaces by least squares. In this paper, an extension to the basic algorithm is given, which can simultaneously match surface geometry and its attribute information, e.g. intensity, colour, temperature, etc. under a combined estimation model. Three experimental results based on terrestrial laser scanner point clouds are presented. The experiments show that the basic algorithm can solve the surface matching problem provided that the object surface has at least the minimal information. If not, the laser scanner derived intensities are used as supplementary information to find a reliable solution. The method derives its mathematical strength from the least squares image matching concept and offers a high level of flexibility for many kinds of 3D surface correspondence problems.

Journal ArticleDOI
TL;DR: An improved response surface based optimization technique is presented for two-dimensional airfoil design at transonic speed by adding the actual function value to the data set used to construct the polynomials.
Abstract: An improved response surface based optimization technique is presented for two-dimensional airfoil design at transonic speed. The method is based on an iterative scheme where least-square fitted quadratic polynomials of objective function and constraints are repeatedly corrected locally, about the current minimum, by adding the actual function value to the data set used to construct the polynomials. When no further cost function reduction is achieved, the design domain upon which the optimization is initially performed is changed, preserving its initial size, by updating the center point with the position of the last minimum found. The optimization is then conducted by using the same approximations built over the initial design space, which are again iteratively corrected until convergence to a given tolerance. To construct the response surfaces, the design space is explored by using a uniform Latin hypercube, aiming at reducing the bias error, in contrast with previous techniques based on D-optimality criterion. The geometry is modeled by using the PARSEC parameterization.
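The repeated local correction amounts to re-solving a quadratic least-squares fit each time a new sample is appended to the data set. The fit itself can be sketched as follows; the polynomial basis ordering and the function interface are our own conventions:

```python
import numpy as np

def fit_quadratic_surface(X, y):
    """Least-squares quadratic response surface in d variables:
    y ~ c0 + sum_i b_i x_i + sum_{i<=j} a_ij x_i x_j.
    Returns a predictor callable; refit after appending new samples."""
    X = np.asarray(X, float)
    n, d = X.shape

    def basis(x):
        x = np.asarray(x, float)
        return np.concatenate([[1.0], x,
                               [x[i] * x[j] for i in range(d)
                                            for j in range(i, d)]])

    A = np.array([basis(row) for row in X])          # design matrix
    coef, *_ = np.linalg.lstsq(A, np.asarray(y, float), rcond=None)
    return lambda x: float(coef @ basis(x))
```

In the scheme described above, each new evaluated design point is appended to `X`/`y` and the surface is refit about the current minimum.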

Proceedings ArticleDOI
10 Dec 2007
TL;DR: A qualitative experimental evaluation in an indoor lab environment is presented, which demonstrates that the suggested system is able to register and detect changes in spatial 3D data and also to detect changes that occur in colour space and are not observable using range values only.
Abstract: This paper presents a system for autonomous change detection with a security patrol robot. In an initial step a reference model of the environment is created and changes are then detected with respect to the reference model as differences in coloured 3D point clouds, which are obtained from a 3D laser range scanner and a CCD camera. The suggested approach introduces several novel aspects, including a registration method that utilizes local visual features to determine point correspondences (thus essentially working without an initial pose estimate) and the 3D-NDT representation with adaptive cell size to efficiently represent both the spatial and colour aspects of the reference model. Apart from a detailed description of the individual parts of the difference detection system, a qualitative experimental evaluation in an indoor lab environment is presented, which demonstrates that the suggested system is able to register and detect changes in spatial 3D data and also to detect changes that occur in colour space and are not observable using range values only.

Journal ArticleDOI
TL;DR: It was found that occasionally updating the correlation function leads to more accurate predictions than using external surrogates alone and in the case of high imaging rates during treatment the aggressive update methods are more accurate than the conservative ones.
Abstract: In this work we develop techniques that can derive the tumor position from external respiratory surrogates (abdominal surface motion) through periodically updated internal/external correlation. A simple linear function is used to express the correlation between the tumor and surrogate motion. The function parameters are established during a patient setup session with the tumor and surrogate positions simultaneously measured at a 30 Hz rate. During treatment, the surrogate position, constantly acquired at 30 Hz, is used to derive the tumor position. Occasionally, a pair of radiographic images is acquired to enable the updating of the linear correlation function. Four update methods, two aggressive and two conservative, are investigated: (A1) shift line through the update point; (A2) re-fit line through the update point; (C1) re-fit line with extra weight to the update point; (C2) minimize the distances to the update point and previous line fit point. In the present study of eight lung cancer patients, tumor and external surrogate motion demonstrate a high degree of linear correlation which changes dynamically over time. It was found that occasionally updating the correlation function leads to more accurate predictions than using external surrogates alone. In the case of high imaging rates during treatment (greater than 2 Hz) the aggressive update methods (A1 and A2) are more accurate than the conservative ones (C1 and C2). The opposite is observed in the case of low imaging rates.
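On a linear model tumor ≈ a·surrogate + b, the update rules are simple to state. Sketches of A1 (shift the line through the new point) and C1 (re-fit with extra weight on the new point) follow; the weight value in C1 is an illustrative assumption, not a number from the paper:

```python
import numpy as np

def update_shift(a, b, s_new, t_new):
    """A1: keep the slope, translate the line through the new
    radiographic measurement (s_new, t_new)."""
    return a, t_new - a * s_new

def update_refit_weighted(s_hist, t_hist, s_new, t_new, w_new=10.0):
    """C1: re-fit the line over the history with extra weight on the
    new point (w_new is an illustrative choice)."""
    s = np.append(s_hist, s_new)
    t = np.append(t_hist, t_new)
    w = np.append(np.ones(len(s_hist)), w_new)
    a, b = np.polyfit(s, t, 1, w=w)
    return a, b
```

A1 trusts each new radiograph completely (aggressive), while C1 balances it against the accumulated history (conservative), mirroring the imaging-rate trade-off reported in the abstract.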

Patent
19 Sep 2007
TL;DR: In this article, a data recording apparatus has a receiver unit that receives a stream of encoded digital data; an analyzer that detects change in an attribute of the received stream and that outputs the detection information; a controller that generates management information containing the detection information output by the analyzer and time information indicating detection time of the change as a first entry point; a drive that records the management information and the received stream to a data storage medium; and an input unit that defines a second entry point.
Abstract: Entry points are managed so they are easy for users to understand. The data recording apparatus has a receiver unit that receives a stream of encoded digital data; an analyzer that detects change in an attribute of the received stream and that outputs the detection information; a controller that generates management information containing the detection information output by the analyzer and time information indicating detection time of the change as a first entry point; a drive that records the management information and the received stream to a data storage medium; and an input unit that defines a second entry point. This second entry point is set relative to the playback path of the stream and is used to access and read from a particular point in the stream. The controller further generates the management information containing the first entry point and the second entry point separately identified.

Journal ArticleDOI
TL;DR: This paper proposes to find the optimal approximation of a cyclically extended closed curve of double size, and to select the best possible starting point by search in the extended search space for the curve.

Proceedings ArticleDOI
01 Dec 2007
TL;DR: It is shown that any algorithm that computes a tour for the Dubins' vehicle following an ordering of points optimal for the Euclidean TSP cannot have an approximation ratio better than Ω(n), where n is the number of points.
Abstract: We consider algorithms for the curvature-constrained traveling salesman problem, when the nonholonomic constraint is described by Dubins' model. We indicate a proof of the NP-hardness of this problem. In the case of low point densities, i.e., when the Euclidean distances between the points are larger than the turning radius of the vehicle, various heuristics based on the Euclidean traveling salesman problem are expected to perform well. In this paper we do not put a constraint on the minimum Euclidean distance. We show that any algorithm that computes a tour for the Dubins' vehicle following an ordering of points optimal for the Euclidean TSP cannot have an approximation ratio better than Ω(n), where n is the number of points. We then propose an algorithm that is not based on the Euclidean solution and seems to behave differently. For this algorithm, we obtain an approximation guarantee of O(min{(1 + ρ/ε) log n, (1 + ρ/ε)²}), where ρ is the minimum turning radius, and ε is the minimum Euclidean distance between any two points.

Journal ArticleDOI
TL;DR: In this paper, it was shown that such a set has patch counting and topological entropy 0 if it has uniform cluster frequencies and is pure point diffractive, and that the patch counting entropy vanishes whenever the repetitivity function satisfies a certain growth restriction.
Abstract: Delone sets of finite local complexity in Euclidean space are investigated. We show that such a set has patch counting and topological entropy 0 if it has uniform cluster frequencies and is pure point diffractive. We also note that the patch counting entropy vanishes whenever the repetitivity function satisfies a certain growth restriction.

Proceedings ArticleDOI
10 Sep 2007
TL;DR: A flexible algorithmic pipeline that combines accurate line detection techniques with robust statistical candidate initialization and refinement stages is proposed that compares favorably with the state of the art in vanishing point detection.
Abstract: In this paper, we describe the components of a robust algorithm for the detection of vanishing points in man-made environments. We designed our approach to work under quite general conditions (e.g., uncalibrated camera); and in contrast to several other approaches, the assumption of a dominant line-alignment w.r.t. the orthogonal axes of the world coordinate frame (Manhattan world) is not explicitly exploited. Our only premise is that if a significant number of the imaged line segments meet very accurately in a point, this point is very likely to be a good candidate for a real vanishing point. For finding such points under a wide range of conditions, we propose a flexible algorithmic pipeline that combines accurate line detection techniques with robust statistical candidate initialization and refinement stages. The method was evaluated on a set of images exhibiting largely varying characteristics concerning image quality and scene complexity. Experiments show that the method, despite the variations, works in a stable manner and that its performance compares favorably with the state of the art.
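The "segments meet very accurately in a point" premise reduces to a least-squares intersection in homogeneous coordinates. A generic sketch of that core test (not the paper's full pipeline, which adds robust initialization and refinement):

```python
import numpy as np

def vanishing_point(segments):
    """Least-squares meeting point of 2D line segments.

    Each segment is ((x1, y1), (x2, y2)). Every segment yields a
    homogeneous line l = p1 x p2; the point best satisfying l . v = 0
    for all lines is the smallest right singular vector of the stack.
    """
    lines = []
    for (x1, y1), (x2, y2) in segments:
        l = np.cross([x1, y1, 1.0], [x2, y2, 1.0])  # line through endpoints
        lines.append(l / np.linalg.norm(l[:2]))     # normalise for fair weighting
    _, _, Vt = np.linalg.svd(np.array(lines))
    v = Vt[-1]
    return v[:2] / v[2]    # inhomogeneous coords (assumes a finite point)
```

The residual of the smallest singular value indicates how accurately the segments actually meet, which is the quantity a candidate-scoring stage would threshold.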

Journal ArticleDOI
TL;DR: In this paper, the authors discuss the problem of finding an upper bound for the number of equilibrium points of a potential of several fixed point charges in R^n. This question goes back to J. C. Maxwell [10] and M. Morse [11].
Abstract: We discuss the problem of finding an upper bound for the number of equilibrium points of a potential of several fixed point charges in R^n. This question goes back to J. C. Maxwell [10] and M. Morse ...

Proceedings ArticleDOI
20 Jun 2007
TL;DR: This paper proposes a dimensionality reduction algorithm that leads to the projection with the minimal local estimation error, and indicates that LLP keeps the local information in the sense that the projection value of each point can be well estimated based on its neighbors and their projection values.
Abstract: This paper presents a Local Learning Projection (LLP) approach for linear dimensionality reduction. We first point out that the well known Principal Component Analysis (PCA) essentially seeks the projection that has the minimal global estimation error. Then we propose a dimensionality reduction algorithm that leads to the projection with the minimal local estimation error, and elucidate its advantages for classification tasks. We also indicate that LLP keeps the local information in the sense that the projection value of each point can be well estimated based on its neighbors and their projection values. Experimental results are provided to validate the effectiveness of the proposed algorithm.
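The paper's starting observation, that PCA is the linear projection with minimal global reconstruction error, is easy to verify numerically. A minimal PCA sketch for that baseline (LLP itself replaces the global criterion with a local one):

```python
import numpy as np

def pca_project(X, k):
    """Project centred data onto the top-k principal directions; the
    resulting rank-k reconstruction minimises the global squared error
    among all k-dimensional linear projections."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt[:k].T                       # (d, k) orthonormal basis
    Z = (X - mu) @ W                   # low-dimensional coordinates
    return Z, Z @ W.T + mu             # coordinates, reconstruction
```

LLP's departure is to demand that each projected value be predictable from its neighbours' projections, a local rather than global estimation criterion.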

Proceedings ArticleDOI
12 Nov 2007
TL;DR: A new algorithm called Orthogonal Neighborhood Preserving Embedding (ONPE) for face recognition, which overcomes the metric distortion problem of NPE, while metric distortion usually leads to performance degradation.
Abstract: In this paper, we propose a new algorithm called Orthogonal Neighborhood Preserving Embedding (ONPE) for face recognition. ONPE can preserve local geometry information and is based on the local linearity assumption that each data point and its k nearest neighbors lie on a linear manifold locally embedded in the image space. ONPE is based on Neighborhood Preserving Embedding (NPE), but overcomes the metric distortion problem of NPE, while metric distortion usually leads to performance degradation. In addition, we propose a classification method (ONPC) based on ONPE, which uses a local label propagation method in the reduced space for face recognition. ONPC is based on the natural assumption that the local neighborhood information is also preserved in the reduced space, and the label of a data point can be obtained in the reduced space from the labels of its neighbors. Experimental results on two face databases demonstrate the effectiveness of our proposed method.
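The local linearity assumption behind NPE/ONPE is operationalised by reconstruction weights expressing each point as a combination of its k nearest neighbours (as in LLE-style methods). A regularised sketch of that step only, with names and the regularisation constant as our assumptions:

```python
import numpy as np

def reconstruction_weights(x, neighbors, reg=1e-3):
    """Weights w (summing to 1) minimising ||x - sum_j w_j n_j||^2,
    solved via the regularised local Gram matrix."""
    Z = np.asarray(neighbors, float) - np.asarray(x, float)
    G = Z @ Z.T + reg * np.eye(len(Z))   # regularise: G is often singular
    w = np.linalg.solve(G, np.ones(len(Z)))
    return w / w.sum()
```

These per-point weights define the neighbourhood-preserving objective that the embedding (orthogonalised in ONPE) then seeks to respect in the reduced space.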