Topic

Orthonormal basis

About: Orthonormal basis is a research topic. Over its lifetime, 6,014 publications have been published within this topic, receiving 174,416 citations.


Papers
Journal ArticleDOI
TL;DR: In this article, the Hamilton-Jacobi equation for the geodesics is solved by separation of variables in such a way that a certain natural canonical orthonormal tetrad is determined.
Abstract: This paper contains an investigation of spaces with a two-parameter Abelian isometry group in which the Hamilton-Jacobi equation for the geodesics is soluble by separation of variables in such a way that a certain natural canonical orthonormal tetrad is determined. The spaces satisfying the stronger condition that the corresponding Schrödinger equation is separable are isolated in a canonical form for which Einstein's vacuum equations and the source-free Einstein-Maxwell equations (with or without a Λ term) can be solved explicitly. A fairly extensive family of new solutions is obtained, including the previously known solutions of de Sitter, Kasner, Taub-NUT, and Kerr as special cases.

1,149 citations
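
For a concrete sense of what "soluble by separation of variables" means here, the following sketch uses illustrative notation (the sign on m^2 depends on the metric signature convention; the paper's canonical form and its orthonormal tetrad are more specific). For a spacetime with two commuting Killing vectors ∂_t and ∂_φ, one seeks an additively separated solution of the Hamilton-Jacobi equation:

    g^{\mu\nu}\,\partial_\mu S\,\partial_\nu S = -m^2,
    \qquad
    S = -E\,t + L\,\varphi + S_r(r) + S_\theta(\theta),

with the remaining dependence decoupling into ordinary differential equations

    \Bigl(\frac{dS_r}{dr}\Bigr)^{2} = R(r;\,E,L,m,K),
    \qquad
    \Bigl(\frac{dS_\theta}{d\theta}\Bigr)^{2} = \Theta(\theta;\,E,L,m,K),

linked only through a separation constant K; E and L are the conserved energy and angular momentum associated with the two Killing vectors.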

Journal ArticleDOI
TL;DR: In this paper, a closed-form solution to the least-squares problem for three or more points is presented; it requires the computation of the square root of a symmetric matrix, and the best scale is equal to the ratio of the root-mean-square deviations of the coordinates in the two systems from their respective centroids.
Abstract: Finding the relationship between two coordinate systems by using pairs of measurements of the coordinates of a number of points in both systems is a classic photogrammetric task. The solution has applications in stereophotogrammetry and in robotics. We present here a closed-form solution to the least-squares problem for three or more points. Currently, various empirical, graphical, and numerical iterative methods are in use. Derivation of a closed-form solution can be simplified by using unit quaternions to represent rotation, as was shown in an earlier paper [J. Opt. Soc. Am. A 4, 629 (1987)]. Since orthonormal matrices are used more widely to represent rotation, we now present a solution in which 3 × 3 matrices are used. Our method requires the computation of the square root of a symmetric matrix. We compare the new result with that obtained by an alternative method in which orthonormality is not directly enforced. In this other method a best-fit linear transformation is found, and then the nearest orthonormal matrix is chosen for the rotation. We note that the best translational offset is the difference between the centroid of the coordinates in one system and the rotated and scaled centroid of the coordinates in the other system. The best scale is equal to the ratio of the root-mean-square deviations of the coordinates in the two systems from their respective centroids. These exact results are to be preferred to approximate methods based on measurements of a few selected points.

1,101 citations
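
The closed-form recipe summarized in the abstract translates into a short NumPy sketch. This is a hedged illustration rather than the paper's exact algorithm: the rotation is obtained here via an SVD of the cross-covariance matrix (one standard way to realize the symmetric-square-root step, with a reflection guard the abstract does not discuss), the scale is the ratio of root-mean-square deviations from the centroids, and the translation is the centroid offset. The function name absolute_orientation and the synthetic test data are illustrative.

import numpy as np

def absolute_orientation(A, B):
    """Closed-form least-squares fit of B ~ s * R @ A + t for corresponding 3 x n point sets."""
    a0 = A.mean(axis=1, keepdims=True)              # centroid in the first system
    b0 = B.mean(axis=1, keepdims=True)              # centroid in the second system
    Ac, Bc = A - a0, B - b0                         # centered coordinates

    M = Bc @ Ac.T                                   # 3 x 3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])   # guard against reflections
    R = U @ D @ Vt                                  # orthonormal rotation

    s = np.sqrt((Bc ** 2).sum() / (Ac ** 2).sum())  # ratio of root-mean-square deviations
    t = b0 - s * R @ a0                             # best translational offset
    return s, R, t

# Quick check on noise-free synthetic correspondences.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 10))
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R_true = Q @ np.diag([1.0, 1.0, np.sign(np.linalg.det(Q))])   # proper rotation
B = 2.0 * R_true @ A + np.array([[1.0], [2.0], [3.0]])
s, R, t = absolute_orientation(A, B)
print(np.allclose(s * R @ A + t, B))                # True: the transformation is recovered exactly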

Journal ArticleDOI
TL;DR: A nonlinear method is developed which works in the wavelet domain by simple nonlinear shrinkage of the empirical wavelet coefficients; variants of this method based on simple threshold nonlinear estimators are nearly minimax.
Abstract: We attempt to recover an unknown function from noisy, sampled data. Using orthonormal bases of compactly supported wavelets, we develop a nonlinear method which works in the wavelet domain by simple nonlinear shrinkage of the empirical wavelet coefficients. The shrinkage can be tuned to be nearly minimax over any member of a wide range of Triebel- and Besov-type smoothness constraints and asymptotically minimax over Besov bodies with $p \leq q$. Linear estimates cannot achieve even the minimax rates over Triebel and Besov classes with $p<2$, so the method can significantly outperform every linear method (e.g., kernel, smoothing spline, sieve) in a minimax sense. Variants of our method based on simple threshold nonlinear estimators are nearly minimax. Our method possesses the interpretation of spatial adaptivity; it reconstructs using a kernel which may vary in shape and bandwidth from point to point, depending on the data. Least favorable distributions for certain of the Triebel and Besov scales generate objects with sparse wavelet transforms. Many real objects have similarly sparse transforms, which suggests that these minimax results are relevant for practical problems. Sequels to this paper, which was first drafted in November 1990, discuss practical implementation, spatial adaptation properties, universal near minimaxity and applications to inverse problems.

1,066 citations
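
The shrinkage idea can be illustrated with a self-contained NumPy sketch: transform the noisy samples into an orthonormal wavelet basis (a plain Haar transform here), soft-threshold the empirical detail coefficients, and invert. The Haar filters, the universal threshold sigma * sqrt(2 log n), and the test signal are assumptions for illustration; the paper's estimators, thresholds, and compactly supported wavelet families are more refined.

import numpy as np

def haar_forward(x, levels):
    """Orthonormal Haar wavelet transform of a length-2^J signal (one common convention)."""
    coeffs, approx = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        coeffs.append((even - odd) / np.sqrt(2.0))   # detail (wavelet) coefficients
        approx = (even + odd) / np.sqrt(2.0)         # coarse approximation
    coeffs.append(approx)
    return coeffs

def haar_inverse(coeffs):
    approx = coeffs[-1]
    for detail in reversed(coeffs[:-1]):
        even = (approx + detail) / np.sqrt(2.0)
        odd = (approx - detail) / np.sqrt(2.0)
        approx = np.empty(2 * len(detail))
        approx[0::2], approx[1::2] = even, odd
    return approx

def soft_threshold(c, t):
    """Soft shrinkage: pull coefficients toward zero by t, zeroing anything smaller than t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def wavelet_shrink(y, sigma, levels=5):
    """Denoise y by soft-thresholding its empirical wavelet coefficients."""
    t = sigma * np.sqrt(2.0 * np.log(len(y)))        # universal threshold (one standard choice)
    coeffs = haar_forward(y, levels)
    shrunk = [soft_threshold(d, t) for d in coeffs[:-1]] + [coeffs[-1]]   # keep the coarse level
    return haar_inverse(shrunk)

# Usage: recover a piecewise-constant signal from noisy samples.
rng = np.random.default_rng(1)
n = 1024
f = np.where(np.arange(n) < n // 3, 1.0, np.where(np.arange(n) < 2 * n // 3, -0.5, 2.0))
y = f + 0.2 * rng.normal(size=n)
fhat = wavelet_shrink(y, sigma=0.2)
print(np.mean((fhat - f) ** 2) < np.mean((y - f) ** 2))   # shrinkage reduces the error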

Journal ArticleDOI
TL;DR: It is proved that the result of Donoho and Huo, concerning the replacement of the ℓ0 optimization problem with a linear programming problem when searching for sparse representations, has an analog for dictionaries that may be highly redundant.
Abstract: The purpose of this correspondence is to generalize a result by Donoho and Huo and by Elad and Bruckstein on sparse representations of signals in a union of two orthonormal bases for R^N. We consider general (redundant) dictionaries for R^N and derive sufficient conditions for having unique sparse representations of signals in such dictionaries. The special case where the dictionary is given by the union of L ≥ 2 orthonormal bases for R^N is studied in more detail. In particular, it is proved that the result of Donoho and Huo, concerning the replacement of the ℓ0 optimization problem with a linear programming problem when searching for sparse representations, has an analog for dictionaries that may be highly redundant.

1,049 citations
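
The ℓ0-to-linear-programming replacement mentioned in the TL;DR is basis pursuit: minimize the ℓ1 norm of the coefficients subject to reproducing the signal. The sketch below builds a toy dictionary from the union of two orthonormal bases for R^N (the identity and a random orthonormal basis) and solves the resulting LP with SciPy's generic solver via the standard split x = u - v with u, v ≥ 0; the dictionary, the signal, and the solver choice are illustrative assumptions, not the paper's setup.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
N = 32

# Dictionary: union of two orthonormal bases for R^N (identity and a random orthonormal basis).
Q, _ = np.linalg.qr(rng.normal(size=(N, N)))
D = np.hstack([np.eye(N), Q])                        # N x 2N, redundant dictionary

# A signal with a sparse representation in the dictionary.
x_true = np.zeros(2 * N)
support = rng.choice(2 * N, size=3, replace=False)
x_true[support] = rng.normal(size=3)
y = D @ x_true

# Basis pursuit: minimize ||x||_1 subject to D x = y, as an LP with x = u - v and u, v >= 0.
c = np.ones(4 * N)                                   # objective sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([D, -D])                            # D u - D v = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat = res.x[:2 * N] - res.x[2 * N:]

print(res.status == 0)                                        # LP solved
print(np.linalg.norm(D @ x_hat - y) < 1e-6)                   # x_hat reproduces the signal
print(np.abs(x_hat).sum() <= np.abs(x_true).sum() + 1e-6)     # with minimal l1 norm
# When the representation is sparse enough (the uniqueness regime studied in the paper),
# x_hat coincides with x_true.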

Proceedings ArticleDOI
Yu, Shi
13 Oct 2003
TL;DR: This work proposes a principled account of multiclass spectral clustering, solving a relaxed continuous optimization problem by eigen-decomposition and clarifying the role of eigenvectors as a generator of all optimal solutions through orthonormal transforms.
Abstract: We propose a principled account of multiclass spectral clustering. Given a discrete clustering formulation, we first solve a relaxed continuous optimization problem by eigen-decomposition. We clarify the role of eigenvectors as a generator of all optimal solutions through orthonormal transforms. We then solve an optimal discretization problem, which seeks a discrete solution closest to the continuous optima. The discretization is efficiently computed in an iterative fashion using singular value decomposition and non-maximum suppression. The resulting discrete solutions are nearly global-optimal. Our method is robust to random initialization and converges faster than other clustering methods. Experiments on real image segmentation are reported.

1,028 citations
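
A compact sketch of the relax-then-discretize pipeline described in the abstract: take the top-k eigenvectors of a normalized affinity matrix as the continuous optimum, then alternate a row-wise argmax ("non-maximum suppression") with an orthonormal alignment obtained from an SVD to reach a nearby discrete solution. The normalization, the initialization of the orthonormal transform, and the toy data are illustrative guesses rather than the paper's exact formulation.

import numpy as np

def multiclass_spectral_clustering(W, k, iters=20, seed=0):
    """Sketch: relaxed spectral solution, then iterative discretization via orthonormal transforms.

    W: (n, n) symmetric affinity matrix with positive row sums; k: number of clusters.
    """
    n = W.shape[0]
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
    M = W * np.outer(d_inv_sqrt, d_inv_sqrt)          # normalized affinity D^{-1/2} W D^{-1/2}
    _, vecs = np.linalg.eigh(M)
    X = vecs[:, -k:]                                  # top-k eigenvectors: continuous optimum
    X = X / np.linalg.norm(X, axis=1, keepdims=True)  # project rows onto the unit sphere

    # Initialize the orthonormal transform from near-orthogonal rows of X
    # (a guess at a reasonable initialization, not the paper's exact scheme).
    rng = np.random.default_rng(seed)
    cols = [X[rng.integers(n)]]
    overlap = np.zeros(n)
    for _ in range(1, k):
        overlap += np.abs(X @ cols[-1])
        cols.append(X[np.argmin(overlap)])
    R = np.array(cols).T

    for _ in range(iters):
        labels = np.argmax(X @ R, axis=1)             # non-maximum suppression: snap rows to one-hot
        Xd = np.eye(k)[labels]                        # discrete indicator matrix
        U, _, Vt = np.linalg.svd(Xd.T @ X)            # best orthonormal alignment (Procrustes via SVD)
        R = (U @ Vt).T
    return np.argmax(X @ R, axis=1)

# Usage on two well-separated 2-D Gaussian blobs.
rng = np.random.default_rng(3)
pts = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(3.0, 0.3, (20, 2))])
sq_dist = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
W = np.exp(-sq_dist / (2 * 0.5 ** 2))
labels = multiclass_spectral_clustering(W, k=2)
print(labels)   # expected: the first 20 and last 20 points form two distinct constant blocks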


Network Information
Related Topics (5)
Matrix (mathematics)
105.5K papers, 1.9M citations
90% related
Bounded function
77.2K papers, 1.3M citations
87% related
Nonlinear system
208.1K papers, 4M citations
87% related
Differential equation
88K papers, 2M citations
86% related
Partial differential equation
70.8K papers, 1.6M citations
84% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    170
2022    361
2021    222
2020    251
2019    237
2018    206