Institution

Michigan State University

Education · East Lansing, Michigan, United States
About: Michigan State University is an education organization based in East Lansing, Michigan, United States. It is known for research contributions in the topics: Population & Poison control. The organization has 60109 authors who have published 137074 publications receiving 5633022 citations. The organization is also known as MSU and Michigan State.


Papers
Journal ArticleDOI
TL;DR: In this article, a non-iterative solution is given that finds a global minimum; it is shown that in certain configurations local minima occur, which are avoided by the new method.
Abstract: In this paper, we consider the problem of finding the position of a point in space given its position in two images taken with cameras with known calibration and pose. This process requires the intersection of two known rays in space, and is commonly known as triangulation. In the absence of noise, this problem is trivial. When noise is present, the two rays will not generally meet, in which case it is necessary to find the best point of intersection. This problem is especially critical in affine and projective reconstruction, in which there is no meaningful metric information about the object space. It is desirable to find a triangulation method that is invariant to projective transformations of space. This paper solves that problem by assuming a Gaussian noise model for perturbation of the image coordinates. The triangulation problem may then be formulated as a least-squares minimization problem. In this paper a non-iterative solution is given that finds a global minimum. It is shown that in certain configurations, local minima occur, which are avoided by the new method. Extensive comparisons of the new method with several other methods show that it consistently gives superior results.

1 The Triangulation Problem

We suppose that a point x in R^3 is visible in two images. The two camera matrices P and P' corresponding to the two images are supposed known. Let u and u' be the projections of the point x in the two images. From this data, the two rays in space corresponding to the two image points may easily be computed. The triangulation problem is to find the intersection of the two rays in space. At first sight this is a trivial problem, since intersecting two lines in space does not present significant difficulties. Unfortunately, in the presence of noise these rays cannot be guaranteed to cross, and we need to find the best solution under some assumed noise model.
A commonly suggested method ([2]) is to choose the mid-point of the common perpendicular to the two rays (the mid-point method). Perhaps a better choice would be to divide the common perpendicular in proportion to the distance from the two camera centres, since this would more closely equalize the angular error. Nevertheless, this method will not give optimal results, because of various approximations (for instance, the angles will not be precisely equal in the two cases). In the case of projective or affine reconstruction, however, the camera matrices will be known only in a projective frame of reference, in which concepts such as common perpendicular or mid-point (in the projective case) have no sense. In this case, the simple mid-point method will not work. The importance of a good method for triangulation is clearly shown by Beardsley et al., who demonstrate that the mid-point method gives bad results. In [2, 3] they suggest an alternative method based on "quasi-Euclidean" reconstruction. In this method, an approximation to the correct Euclidean frame is selected and the mid-point method is carried out in this frame. The disadvantage of this method is that an approximate calibration of the camera is needed. It is also clearly sub-optimal. In this paper a new algorithm is described that gives an optimal global solution to the triangulation problem, equally valid in both the affine and projective reconstruction cases. The solution relies on the concepts of epipolar correspondence and the fundamental matrix ([4]). The algorithm is non-iterative and simple in concept, relying on techniques of elementary calculus to minimize the chosen cost function. It is also moderate in computation requirements. In a series of experiments, the algorithm is extensively tested against many other methods of triangulation, and found to give consistently superior performance. No knowledge of camera calibration is needed.
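The mid-point method discussed above is easy to sketch. The following is an illustrative reconstruction, not code from the paper; the function name and the numpy-based least-squares formulation are my own assumptions:

```python
import numpy as np

def midpoint_triangulate(c0, d0, c1, d1):
    """Mid-point method: return the midpoint of the common
    perpendicular between the rays c0 + s*d0 and c1 + t*d1.
    c0, c1 are camera centres; d0, d1 are ray directions."""
    d0 = d0 / np.linalg.norm(d0)
    d1 = d1 / np.linalg.norm(d1)
    # The closest points on the two rays satisfy a 3x2 linear
    # system A @ [s, t] = c1 - c0; solve it in the least-squares sense.
    A = np.stack([d0, -d1], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, c1 - c0, rcond=None)
    return 0.5 * ((c0 + s * d0) + (c1 + t * d1))
```

As the text observes, this construction depends on Euclidean notions (perpendicularity, mid-point) and is therefore meaningless in a purely projective frame.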
The triangulation problem is a small cog in the machinery of computer vision, but in many applications of scene reconstruction it is a critical one, on which ultimate accuracy depends ([2]).

2 Transformational Invariance

In the last few years, there has been considerable interest in the subject of affine or projective reconstruction ([4, 5, 9, 11, 15, 12, 14]). In such reconstruction methods, a 3D scene is to be reconstructed up to an unknown transformation from the given class. Normally, in such a situation, instead of knowing the correct pair of camera matrices P and P', one has a pair PH^-1 and P'H^-1, where H is an unknown transformation. A desirable feature of a triangulation method is that it should be invariant under transformations of the appropriate class. Thus, denote by τ a triangulation method used to compute a 3D space point x from a point correspondence u ↔ u' and a pair of camera matrices P and P'. We write

x = τ(u, u', P, P').

The triangulation is said to be invariant under a transformation H if

τ(u, u', P, P') = H^-1 τ(u, u', PH^-1, P'H^-1).

This means that triangulation using the transformed cameras results in the transformed point. If the camera matrices are known only up to an affine (or projective) transformation, then it is clearly desirable to use an affine (resp. projective) invariant triangulation method to compute the 3D space points.

3 The Minimization Criterion

We assume that the camera matrices, and hence the fundamental matrix, are known exactly, or at least with great accuracy compared with a pair of matching points in the two images. A formula is given in [6] for computing the fundamental matrix given a pair of camera matrices. The two rays corresponding to a matching pair of points u ↔ u' will meet in space if and only if the points satisfy the familiar ([10]) relationship

u'^T F u = 0 .  (1)

It is clear, particularly for projective reconstruction, that it is inappropriate to minimize errors in the 3D projective space P^3.
For instance, the method that finds the mid-point of the common perpendicular to the two rays in space is not suitable for projective reconstruction, since concepts such as distance and perpendicularity are not valid in the context of projective geometry. In fact, in projective reconstruction, this method will give different results depending on which particular projective reconstruction is considered; the method is not projective-invariant. Normally, errors occur not in the placement of a feature in space, but in its location in the two images, due to digitization errors or the inexact identification of a feature in the image. It is common to assume that features in the images are subject to Gaussian noise which displaces the feature from its correct location in the image. We assume that noise model in this paper. A typical observation consists of a noisy point correspondence u ↔ u' which does not in general satisfy the epipolar constraint (1). In reality, the correct values of the corresponding image points should be points û ↔ û' lying close to the measured points u ↔ u' and satisfying the equation û'^T F û = 0 exactly. We seek the points û and û' that minimize the function

d(u, û)² + d(u', û')² ,  (2)

where d(·,·) represents Euclidean distance, subject to the epipolar constraint û'^T F û = 0. Assuming a Gaussian error distribution, the points û and û' are the most likely values for the true image point correspondence. Once û and û' are found, the point x may be found by any triangulation method, since the corresponding rays will meet precisely in space.

4 An Optimal Method of Triangulation

In this section, we describe a method of triangulation that finds the global minimum of the cost function (2) using a non-iterative algorithm. If the Gaussian noise model can be assumed to be correct, this triangulation method is then provably optimal. This new method will be referred to as the Polynomial method, since it requires the solution of a sixth-order polynomial.
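Once corrected points û and û' satisfying the epipolar constraint are in hand, any triangulation method recovers x, since the rays meet exactly. A standard choice is the linear (DLT) method; the sketch below is a generic illustration of that final step, not the paper's own code, and the function name is hypothetical:

```python
import numpy as np

def dlt_triangulate(P0, P1, u0, u1):
    """Linear (DLT) triangulation: given 3x4 camera matrices P0, P1
    and matched image points u0, u1, build the homogeneous system
    A X = 0 from the projection equations and solve it by SVD."""
    A = np.array([
        u0[0] * P0[2] - P0[0],
        u0[1] * P0[2] - P0[1],
        u1[0] * P1[2] - P1[0],
        u1[1] * P1[2] - P1[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                # right singular vector of the smallest singular value
    return X[:3] / X[3]       # dehomogenize
```

When the input points only approximately satisfy (1), this minimizes an algebraic rather than a geometric error, which is exactly why the paper corrects the points first.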
4.1 Reformulation of the Minimization Problem

Given a measured correspondence u ↔ u', we seek a pair of points û and û' that minimize the sum of squared distances (2) subject to the epipolar constraint û'^T F û = 0. Any pair of points satisfying the epipolar constraint must lie on a pair of corresponding epipolar lines in the two images. Thus, in particular, the optimum point û lies on an epipolar line λ and û' lies on the corresponding epipolar line λ'. On the other hand, any other pair of points lying on the lines λ and λ' will also satisfy the epipolar constraint. This is true in particular for the point u⊥ on λ lying closest to the measured point u, and the correspondingly defined point u'⊥ on λ'. Of all pairs of points on the lines λ and λ', the points u⊥ and u'⊥ minimize the squared distance sum (2). It follows that û = u⊥ and û' = u'⊥, where u⊥ and u'⊥ are defined with respect to a pair of matching epipolar lines λ and λ'. Consequently, we may write d(u, û) = d(u, λ), where d(u, λ) represents the perpendicular distance from the point u to the line λ. A similar expression holds for d(u', û').

In view of the previous paragraph, we may formulate the minimization problem differently as follows. We seek to minimize

d(u, λ)² + d(u', λ')² ,  (3)

where λ and λ' range over all choices of corresponding epipolar lines. The point û is then the closest point on the line λ to the point u, and the point û' is similarly defined. Our strategy for minimizing (3) is as follows:

1. Parametrize the pencil of epipolar lines in the first image by a parameter t. Thus an epipolar line in the first image may be written as λ(t).
2. Using the fundamental matrix F, compute the corresponding epipolar line λ'(t) in the second image.
3. Express the distance function d(u, λ(t))² + d(u', λ'(t))² explicitly as a function of t.
4. Find the value of t that minimizes this function.

In this way, the problem is reduced to that of finding the minimum of a function of a single variable, t.
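The four steps above can be sketched numerically. The paper solves step 4 in closed form via a degree-6 polynomial; the sketch below instead parametrizes the pencil through the epipole e (assumed finite) by an angle t and brute-forces the minimum, so it illustrates only the reduction to one variable, not the paper's exact algorithm:

```python
import numpy as np

def epipolar_cost(t, F, u, u_p, e):
    """d(u, lam(t))^2 + d(u', lam'(t))^2, where lam(t) is the epipolar
    line through the epipole e of image 1 with direction angle t, and
    lam'(t) is its corresponding line in image 2."""
    d = np.array([np.cos(t), np.sin(t)])
    n = np.array([-d[1], d[0]])                   # line normal
    lam = np.array([n[0], n[1], -n @ e])          # homogeneous line in image 1
    # A second point on lam (other than e) maps to the matching
    # epipolar line in image 2 via the fundamental matrix F.
    x = np.array([e[0] + d[0], e[1] + d[1], 1.0])
    lam_p = F @ x

    def dist2(pt, line):                          # squared point-line distance
        return (line @ np.append(pt, 1.0)) ** 2 / (line[0] ** 2 + line[1] ** 2)

    return dist2(u, lam) + dist2(u_p, lam_p)

def minimize_over_t(F, u, u_p, e, n_samples=3600):
    """Step 4 by brute force: sample t over the pencil of epipolar lines."""
    ts = np.linspace(0.0, np.pi, n_samples, endpoint=False)
    costs = [epipolar_cost(t, F, u, u_p, e) for t in ts]
    return ts[int(np.argmin(costs))]
```

For an exact correspondence (one already satisfying the epipolar constraint), the minimized cost is zero; for a noisy one, the minimizing t yields the corrected points û and û' as the feet of the perpendiculars onto λ(t) and λ'(t).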
It will be seen that for a suitable parametrization of the pencil of epipolar lines the distance function is a rational polynomial function of t. Using techniques of elementary calculus, the minimization problem reduces to finding the real roots of a polynomial of degree 6.

1,121 citations

Journal ArticleDOI
TL;DR: In this article, the authors report data from a faculty development design seminar in which faculty members worked together with master's students to develop online courses, and find that participants perceived working in design teams to solve authentic problems of practice to be useful, challenging, and fun.
Abstract: We introduce Technological Pedagogical Content Knowledge (TPCK) as a way of representing what teachers need to know about technology, and argue for the role of authentic design-based activities in the development of this knowledge. We report data from a faculty development design seminar in which faculty members worked together with master's students to develop online courses. We developed and administered a survey that assessed the evolution of student- and faculty-participants' learning and perceptions about the learning environment, theoretical and practical knowledge of technology, course content (the design of online courses), group dynamics, and the growth of TPCK. Analyses focused on observed changes between the beginning and end of the semester. Results indicate that participants perceived working in design teams to solve authentic problems of practice to be useful, challenging, and fun. More importantly, the participants, both as individuals and as a group, appeared to have developed significantly in their knowledge of technology application, as well as in their TPCK. In brief, learning by design appears to be an effective instructional technique to develop deeper understandings of the complex web of relationships between content, pedagogy, and technology and the contexts in which they function.

1,120 citations

Journal ArticleDOI
22 Nov 2002 · Science
TL;DR: This work analyzed the flow of matter to extract pressures in excess of 10^34 pascals, the highest recorded under laboratory-controlled conditions, and ruled out strongly repulsive nuclear equations of state from relativistic mean field theory and weakly repulsive equations of state with phase transitions at densities less than three times that of stable nuclei.
Abstract: Nuclear collisions can compress nuclear matter to densities achieved within neutron stars and within core-collapse supernovae. These dense states of matter exist momentarily before expanding. We analyzed the flow of matter to extract pressures in excess of 10^34 pascals, the highest recorded under laboratory-controlled conditions. Using these analyses, we rule out strongly repulsive nuclear equations of state from relativistic mean field theory and weakly repulsive equations of state with phase transitions at densities less than three times that of stable nuclei, but not equations of state softened at higher densities because of a transformation to quark matter.

1,119 citations

Journal ArticleDOI
TL;DR: This work presents a high-level categorization of the various vulnerabilities of a biometric system and discusses countermeasures that have been proposed to address these vulnerabilities.
Abstract: Biometric recognition offers a reliable solution to the problem of user authentication in identity management systems. With the widespread deployment of biometric systems in various applications, there are increasing concerns about the security and privacy of biometric technology. Public acceptance of biometric technology will depend on the ability of system designers to demonstrate that these systems are robust, have low error rates, and are tamper-proof. We present a high-level categorization of the various vulnerabilities of a biometric system and discuss countermeasures that have been proposed to address these vulnerabilities. In particular, we focus on biometric template security, an important issue because, unlike passwords and tokens, compromised biometric templates cannot be revoked and reissued. Protecting the template is a challenging task due to intra-user variability in the acquired biometric traits. We present an overview of various biometric template protection schemes and discuss their advantages and limitations in terms of security, revocability, and impact on matching accuracy. A template protection scheme with provable security and acceptable recognition performance has thus far remained elusive. Development of such a scheme is crucial as biometric systems are beginning to proliferate into the core physical and information infrastructure of our society.

1,119 citations

Journal ArticleDOI
16 Feb 2017
TL;DR: Results of a large-scale simulation study substantiate that PLS and generalized structured component analysis are consistent estimators when the underlying population is composite model-based, and while both methods outperform sum scores regression in terms of parameter recovery, PLS achieves slightly greater statistical power.
Abstract: Composite-based structural equation modeling (SEM), and especially partial least squares path modeling (PLS), has gained increasing dissemination in marketing. To fully exploit the potential of these methods, researchers must know about their relative performance and the settings that favor each method’s use. While numerous simulation studies have aimed to evaluate the performance of composite-based SEM methods, practically all of them defined populations using common factor models, thereby assessing the methods on erroneous grounds. This study is the first to offer a comprehensive assessment of composite-based SEM techniques on the basis of composite model data, considering a broad range of model constellations. Results of a large-scale simulation study substantiate that PLS and generalized structured component analysis are consistent estimators when the underlying population is composite model-based. While both methods outperform sum scores regression in terms of parameter recovery, PLS achieves slightly greater statistical power.

1,113 citations


Authors

Showing all 60636 results

Name | H-index | Papers | Citations
David Miller | 203 | 2573 | 204840
Anil K. Jain | 183 | 1016 | 192151
D. M. Strom | 176 | 3167 | 194314
Feng Zhang | 172 | 1278 | 181865
Derek R. Lovley | 168 | 582 | 95315
Donald G. Truhlar | 165 | 1518 | 157965
Donald E. Ingber | 164 | 610 | 100682
J. E. Brau | 162 | 1949 | 157675
Murray F. Brennan | 161 | 925 | 97087
Peter B. Reich | 159 | 790 | 110377
Wei Li | 158 | 1855 | 124748
Timothy C. Beers | 156 | 934 | 102581
Claude Bouchard | 153 | 1076 | 115307
Mercouri G. Kanatzidis | 152 | 1854 | 113022
James J. Collins | 151 | 669 | 89476
Network Information
Related Institutions (5)
University of California, Davis
180K papers, 8M citations

97% related

University of Illinois at Urbana–Champaign
225.1K papers, 10.1M citations

97% related

University of Minnesota
257.9K papers, 11.9M citations

97% related

University of Wisconsin-Madison
237.5K papers, 11.8M citations

97% related

Cornell University
235.5K papers, 12.2M citations

97% related

Performance
Metrics
No. of papers from the Institution in previous years
Year | Papers
2023 | 250
2022 | 752
2021 | 7,041
2020 | 6,870
2019 | 6,548
2018 | 5,779