Showing papers on "Singular value decomposition published in 1996"


Book ChapterDOI
15 Apr 1996
TL;DR: A practical method for the recovery of projective shape and motion from multiple images of a scene by the factorization of a matrix containing the images of all points in all views, using only fundamental matrices and epipoles estimated from the image data.
Abstract: We propose a method for the recovery of projective shape and motion from multiple images of a scene by the factorization of a matrix containing the images of all points in all views. This factorization is only possible when the image points are correctly scaled. The major technical contribution of this paper is a practical method for the recovery of these scalings, using only fundamental matrices and epipoles estimated from the image data. The resulting projective reconstruction algorithm runs quickly and provides accurate reconstructions. Results are presented for simulated and real images.
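
The factorization step the abstract refers to can be illustrated with a truncated SVD once the recovered scalings have been applied. The sketch below is ours, not the paper's implementation: all names are illustrative, and it assumes the rescaled measurement matrix has already been assembled.

```python
# Minimal sketch: with the projective depths applied, the 3m x n measurement matrix W of
# m views and n points has rank 4 and factors into stacked cameras P and points X.
import numpy as np

def factor_projective(W):
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    P = U[:, :4] * s[:4]   # 3m x 4 stacked projective cameras
    X = Vt[:4, :]          # 4 x n homogeneous points
    return P, X            # W ~= P @ X, up to an unknown 4x4 projective transform

# Toy check with exact rank-4 data (5 views, 20 points).
rng = np.random.default_rng(0)
P_true = rng.standard_normal((15, 4))
X_true = rng.standard_normal((4, 20))
P, X = factor_projective(P_true @ X_true)
print(np.allclose(P @ X, P_true @ X_true))   # True
```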

686 citations


Journal ArticleDOI
TL;DR: In this article, the Singular Value Decomposition (SVD) of the response matrix is used for the unfolding of high energy physics distributions, and a relatively simple, yet quite efficient unfolding procedure is described.
Abstract: Distributions measured in high energy physics experiments are usually distorted and/or transformed by various detector effects. A regularization method for unfolding these distributions is re-formulated in terms of the Singular Value Decomposition (SVD) of the response matrix. A relatively simple, yet quite efficient unfolding procedure is explained in detail. The concise linear algorithm results in a straightforward implementation with full error propagation, including the complete covariance matrix and its inverse. Several improvements upon widely used procedures are proposed, and recommendations are given how to simplify the task by the proper choice of the matrix. Ways of determining the optimal value of the regularization parameter are suggested and discussed, and several examples illustrating the use of the method are presented.
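
As a rough illustration of the idea, not the paper's exact prescription, an SVD-based unfolding damps the small singular values of the response matrix when inverting it. The damping form, the toy response matrix, and all names below are our own assumptions.

```python
import numpy as np

def svd_unfold(A, b, tau):
    """Regularized estimate x of the true distribution from measured b ~ A @ x."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    d = s / (s**2 + tau**2)          # damped inverse singular values (Tikhonov-style)
    return Vt.T @ (d * (U.T @ b))

# Toy example: Gaussian smearing of a smooth spectrum plus measurement noise.
rng = np.random.default_rng(1)
i = np.arange(30)
A = np.exp(-0.5 * (np.subtract.outer(i, i) / 2.0) ** 2)
x_true = np.sin(np.linspace(0, np.pi, 30)) ** 2
b = A @ x_true + 0.01 * rng.standard_normal(30)
x_hat = svd_unfold(A, b, tau=0.05)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))   # relative reconstruction error
```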

521 citations


Journal ArticleDOI
TL;DR: An expository account of the singular value decomposition of a matrix, covering its basic properties and a range of applications.
Abstract: (1996). A Singularly Valuable Decomposition: The SVD of a Matrix. The College Mathematics Journal: Vol. 27, No. 1, pp. 2-23.

425 citations


Journal ArticleDOI
TL;DR: A new subspace identification algorithm for the identification of multi-input multi-output linear time-invariant continuous-time systems from measured frequency response data is presented and it is shown that, when the error distribution on the measurements is given, the algorithm can be made asymptotically unbiased.

159 citations


Journal ArticleDOI
TL;DR: In this paper, the relationship between singular value decomposition analysis (SVD) and canonical correlation analysis (CCA) is clarified and some problems associated with interpreting results of both SVD and CCA are discussed.
Abstract: The goal of singular value decomposition analysis (SVD) and canonical correlation analysis (CCA) is to isolate important coupled modes between two geophysical fields of interest. In this paper the relationship between SVD and CCA is clarified. They should be considered as two distinct methods (possibly) suitable for answering two different questions. Some problems associated with interpreting results of both SVD and CCA are discussed. Both methods have a high potential to produce spurious spatial patterns. Caution is always called for in interpreting results from either method.
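
For readers unfamiliar with the SVD method discussed here (often called maximum covariance analysis), a minimal sketch follows; the synthetic anomaly fields, sizes, and names are illustrative assumptions, not data from the paper.

```python
import numpy as np

def coupled_modes(X, Y, k):
    """Leading coupled patterns of two fields sampled as (time x space) matrices."""
    Xa = X - X.mean(axis=0)
    Ya = Y - Y.mean(axis=0)
    C = Xa.T @ Ya / (X.shape[0] - 1)       # cross-covariance matrix between the fields
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    return U[:, :k], Vt[:k].T, s[:k]       # left patterns, right patterns, covariances

# Synthetic fields sharing one common time series.
rng = np.random.default_rng(2)
t = rng.standard_normal(200)
X = np.outer(t, rng.standard_normal(10)) + 0.1 * rng.standard_normal((200, 10))
Y = np.outer(t, rng.standard_normal(8)) + 0.1 * rng.standard_normal((200, 8))
left, right, cov = coupled_modes(X, Y, k=1)
print(cov)   # the shared mode dominates the cross-covariance
```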

154 citations


Journal ArticleDOI
TL;DR: In this paper, two new methods were proposed for solving the eigenvalue assignment problem associated with a second-order control system; the methods construct feedback matrices F1 and F2 such that the closed-loop quadratic pencil has a desired set of eigenvalues and the associated eigenvectors are well conditioned.
Abstract: We propose two new methods for solution of the eigenvalue assignment problem associated with the second-order control system. Specifically, the methods construct feedback matrices F1 and F2 such that the closed-loop quadratic pencil has a desired set of eigenvalues and the associated eigenvectors are well conditioned. Method 1 is a modification of the singular value decomposition-based method proposed by Juang and Maghami, which is a second-order adaptation of the well-known robust eigenvalue assignment method by Kautsky et al. for first-order systems. Method 2 is an extension of the recent non-modal approach of Datta and Rincon for feedback stabilization of second-order systems. Robustness to numerical round-off errors is achieved by minimizing the condition numbers of the eigenvectors of the closed-loop second-order pencil. Control robustness to large plant uncertainty will not be explicitly considered in this paper. Numerical results for both methods are favourable. A compara...

151 citations


Journal ArticleDOI
TL;DR: In this paper, a method for computing near-and far-field patterns of an antenna from its near-field measurements taken over an arbitrary geometry is presented, where an electric field integral equation is developed to relate the near fields to the equivalent magnetic current, and a moment method procedure is employed to solve the integral equation by transforming it into a matrix equation.
Abstract: A method is presented for computing near- and far-field patterns of an antenna from its near-field measurements taken over an arbitrary geometry. This method utilizes near-field data to determine an equivalent magnetic current source over a fictitious surface which encompasses the antenna. This magnetic current, once determined, can be used to ascertain the near and the far fields. This method demonstrates that once the values of the electromagnetic field are known over an arbitrary geometry, its values for any other region can be obtained. An electric field integral equation is developed to relate the near fields to the equivalent magnetic current. A moment method procedure is employed to solve the integral equation by transforming it into a matrix equation. A least squares solution via singular value decomposition is used to solve the matrix equation. Computations with both synthetic and experimental data, where the near fields of several antenna configurations are measured over various geometric surfaces, illustrate the accuracy of this method.
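
The final least-squares step described above can be sketched as a truncated-SVD pseudo-inverse solve of the moment-method matrix equation. The symbols (Z for the matrix, e for the measured near-field data, m for the magnetic-current coefficients) and the toy sizes are our illustrative assumptions.

```python
import numpy as np

def lstsq_svd(Z, e, rcond=1e-8):
    """Least-squares solution of Z @ m = e, discarding negligible singular values."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    keep = s > rcond * s[0]
    return Vt[keep].T @ ((U[:, keep].T @ e) / s[keep])

# Toy overdetermined system standing in for the moment-method matrix equation.
rng = np.random.default_rng(3)
Z = rng.standard_normal((120, 40))
m_true = rng.standard_normal(40)
e = Z @ m_true + 1e-3 * rng.standard_normal(120)
m = lstsq_svd(Z, e)
print(np.linalg.norm(m - m_true))   # small residual error, set by the added noise
```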

134 citations


Journal ArticleDOI
TL;DR: In this article, two suboptimal data assimilation schemes for stable and unstable dynamics are introduced, which rely on iterative procedures like the Lanczos algorithm to compute the relevant modes.
Abstract: Two suboptimal data assimilation schemes for stable and unstable dynamics are introduced. The first scheme, the partial singular value decomposition filter, is based on the most dominant singular modes of the tangent linear propagator. The second scheme, the partial eigendecomposition filter, is based on the most dominant eigenmodes of the propagated analysis error covariance matrix. Both schemes rely on iterative procedures like the Lanczos algorithm to compute the relevant modes. The performance of these schemes is evaluated for a shallow-water model linearized about an unstable Bickley jet. The results are contrasted against those of a reduced resolution filter, in which the gains used to update the state vector are calculated from a lower-dimensional dynamics than the dynamics that evolve the state itself. The results are also contrasted against the exact results given by the Kalman filter. These schemes are validated for the case of stable dynamics as well. The two new approximate assimilation schemes are shown to perform well with relatively few modes computed. Adaptive tuning of a modeled trailing error covariance for all three of these low-rank approximate schemes enhances performance and compensates for the approximation employed.
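
The "partial SVD" ingredient, computing only the dominant singular modes with an iterative Lanczos-type routine, can be sketched as follows. The explicit random stand-in for the tangent linear propagator and the mode count are assumptions for illustration only.

```python
import numpy as np
from scipy.sparse.linalg import svds   # Lanczos-type iterative partial SVD

rng = np.random.default_rng(4)
M = rng.standard_normal((500, 500)) / np.sqrt(500)   # stand-in for the propagator
U, s, Vt = svds(M, k=10)                             # 10 dominant singular modes
order = np.argsort(s)[::-1]                          # svds returns them in ascending order
U, s, Vt = U[:, order], s[order], Vt[order]
M_low = (U * s) @ Vt                                 # low-rank surrogate used by such filters
print(s[:3])
```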

120 citations


Journal ArticleDOI
TL;DR: The lateral resolution of the new system can exceed the diffraction limit imposed on conventional imaging systems utilizing delay-and-sum beamformers and the range resolution is compared to that of conventional pulse-echo systems with resolution enhancement (the PIO behaves as a pseudo-inverse Wiener filter in the range direction).
Abstract: A new approach to ultrasound imaging with coded-excitation is presented. The imaging is performed by reconstruction of the scatterer strength on an assumed grid covering the region of interest (ROI). Our formulation is based on an assumed discretized signal model which represents the received sampled data vector as a superposition of impulse responses of all scatterers in the ROI. The reconstruction operator is derived from the pseudo-inverse of the linear operator (system matrix) that produces the received data vector. The singular value decomposition (SVD) method with appropriate regularization techniques is used for obtaining a robust realization of the pseudo-inverse. Under simplifying (but realistic) assumptions, the pseudo-inverse operator (PIO) can be implemented using a bank of transversal filters with each filter designed to extract echoes from a specified image line. This approach allows for the simultaneous acquisition of a large number of image lines. This could be useful in increasing frame rates for two-dimensional imaging systems or allowing for real-time implementation of three-dimensional imaging systems. When compared to the matched filtering approach to similar coded-excitation systems, our approach eliminates correlation artifacts that are known to plague such systems. Furthermore, the lateral resolution of the new system can exceed the diffraction limit imposed on conventional imaging systems utilizing delay-and-sum beamformers. The range resolution is compared to that of conventional pulse-echo systems with resolution enhancement (our PIO behaves as a pseudo-inverse Wiener filter in the range direction). Both simulation and experimental verification of these statements are given.
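
A regularized pseudo-inverse operator of the kind described can be sketched from the SVD of the system matrix. The Tikhonov-style weighting and the matrix sizes below are assumptions, and the rows of the resulting operator play the role of the transversal filters mentioned in the abstract.

```python
import numpy as np

def pseudo_inverse_operator(H, alpha):
    """Regularized pseudo-inverse of the system matrix H (samples x grid points)."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    w = s / (s**2 + alpha**2)              # damped reciprocal singular values
    return Vt.T @ np.diag(w) @ U.T         # each row acts like one reconstruction filter

rng = np.random.default_rng(5)
H = rng.standard_normal((256, 64))         # received samples vs. scatterer grid (toy sizes)
PIO = pseudo_inverse_operator(H, alpha=1e-2)
f_true = rng.standard_normal(64)
f_hat = PIO @ (H @ f_true)                 # reconstruct scatterer strengths from clean data
print(np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true))
```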

117 citations


Journal ArticleDOI
TL;DR: In this paper, the symmetric, positive semidefinite, and positive definite real solutions of the matrix equations A^T X A = D and (A^T X A, XA − YAD) = (D, 0) are considered.

104 citations


Journal ArticleDOI
TL;DR: The relationship between a previously published sensor placement technique, called Effective Independence, and system-realization methods of modal identification is presented and it is shown that the Effective Independence sensor configuration provides superior modal identification results, as predicted by the analytical work.
Abstract: The relationship between a previously published sensor placement technique, called Effective Independence, and system-realization methods of modal identification is presented. The sensor placement method maximizes spatial independence and signal strength of targeted mode shapes by maximizing the determinant of an associated Fisher information matrix. It is shown that the sensor placement method also enhances modal identification using system realization techniques by minimizing the size of the required test data matrix, maximizing the modal observability, enhancing the separation of target modes from computational or noise modes, and optimizing the estimation of the discrete system plant matrix. Three currently popular system-realization methods are considered in the analysis, including the Eigensystem Realization Algorithm, the Q-Markov COVER method, and the Polyreference method. A numerical example is also presented using the Polyreference modal identification technique in conjunction with several sensor configurations selected using differing placement methods. The corresponding test data is from a modal survey performed on the Controls-Structures-Interaction Evolutionary Model testbed at the NASA LaRC Space Structures Research Laboratory. It is shown that the Effective Independence sensor configuration provides superior modal identification results as predicted by the analytical work.
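
A common formulation of the Effective Independence ranking discussed above is sketched below. It follows the usual published recipe of iteratively dropping the candidate location that contributes least to the Fisher information matrix, and it may differ in detail from this paper's variant; all sizes are illustrative.

```python
import numpy as np

def effective_independence(Phi, n_keep):
    """Phi: (candidate locations x target modes) mode-shape matrix; returns kept locations."""
    idx = np.arange(Phi.shape[0])
    while len(idx) > n_keep:
        P = Phi[idx]
        # contribution of each remaining location: diag(P @ inv(P.T @ P) @ P.T)
        Ed = np.einsum('ij,jk,ik->i', P, np.linalg.inv(P.T @ P), P)
        idx = np.delete(idx, np.argmin(Ed))   # drop the least informative location
    return idx

rng = np.random.default_rng(6)
Phi = rng.standard_normal((50, 6))            # 50 candidate locations, 6 target modes
print(effective_independence(Phi, n_keep=12))
```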

Proceedings ArticleDOI
05 May 1996
TL;DR: A parallel and recursive total least squares algorithm (PRTLS) for solving a time-variant TLS problem is proposed, based on an SVD updating technique that uses the GMRQI-JKL eigensystem solver.
Abstract: In this paper, a new technique for updating the SVD is described. It starts from the fact that the SVD can be reduced to a corresponding symmetric eigenvalue problem and utilizes an efficient eigensystem solver, called the GMRQI-JKL, to update the SVD. Based on the updating technique, a parallel and recursive total least squares algorithm (PRTLS) for solving a time variant TLS problem is proposed. Some numerical examples are given to confirm the performance of the algorithms.

Patent
23 Sep 1996
TL;DR: In this article, a multi-dimensional processing and display system is used with textual data to provide a system by which large volumes of such textual data may be efficiently sorted and searched.
Abstract: A multi-dimensional processing and display system that is used with textual data to provide a system by which large volumes of such textual data may be efficiently sorted and searched. Textual data that is input to the multi-dimensional processing and display system is from one or more documents that are reformatted and translated into one or more numeric matrices. The matrices are modified to enhance and/or suppress certain words, phrases, subjects, etc. Thereafter, a single two-dimensional matrix is formed by concatenating the numeric matrices. The multi-dimensional processing and display system creates and maintains a historical database which is also concatenated in the two-dimensional matrix. Once the textual data is in the form of a two-dimensional matrix, the data can be analyzed efficiently, for example, using singular value decomposition (SVD). In doing so, the two-dimensional concatenated matrix is decomposed to obtain a compressed form of the numeric matrix. Certain data elements in the two-dimensional matrix may be enhanced, while certain other data elements may be suppressed. After data enhancement and/or suppression, the two-dimensional matrix is partitioned and rearranged to form an enhanced multi-dimensional matrix. All or portions of the enhanced multi-dimensional matrix are then visually displayed. Lexical, semantic, and/or textual constructs of interest may be displayed as opaque objects within a three-dimensional transparent cube, enabling a user to review many documents quickly and easily.
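
The SVD analysis step in this pipeline can be sketched on a toy term-by-document count matrix. The vocabulary, documents, and rank below are purely illustrative and are not taken from the patent.

```python
import numpy as np

docs = ["svd decomposes a matrix",
        "a matrix has singular values",
        "the display shows documents"]
vocab = sorted({w for d in docs for w in d.split()})
A = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)   # compressed form of the numeric matrix
k = 2
doc_coords = (np.diag(s[:k]) @ Vt[:k]).T           # each document in a reduced space
print(doc_coords.round(2))                         # nearby rows indicate related documents
```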

Journal ArticleDOI
TL;DR: In this paper, it is shown that the strain energy norm can be used to define optimal projections of both loads and displacements onto the exactly represented subspace of a given reduced model.
Abstract: A fundamental property of the Rayleigh-Ritz method is that a reduced model gives exact predictions if the solutions lie in the range of the reduction basis. For cases with and without rigid-body modes, it is shown that the strain energy norm can be used to define optimal projections of both loads and displacements onto the exactly represented subspace of a given reduced model. In practice, multiple loads or displacements are considered, and variants of the singular value decomposition based on the strain and kinetic energy norms are shown to provide ways to select important contributions. The proposed framework is used to define two new methods for optimal reduction and reduced model correction. The validity and usefulness of the proposed methods are illustrated for the example of two square plates assembled at a right angle with two very different plate thickness configurations.

Journal ArticleDOI
TL;DR: A fivefold speedup over the fastest alternative approach can be achieved in the basic SVD process when 3-D and 4-D extensions of the CORDIC algorithm for plane rotations are used.
Abstract: The singular value decomposition (SVD) of complex matrices is computed in a highly parallel fashion on a square array of processors using Kogbetliantz's analog of Jacobi's eigenvalue decomposition method. To gain further speed, new algorithms for the basic SVD operations are proposed and their implementation as specialized processors is presented. The algorithms are 3-D and 4-D extensions of the CORDIC algorithm for plane rotations. When these extensions are used in concert with an additive decomposition of 2/spl times/2 complex matrices, which enhances parallelism, and with low resolution rotations early on in the SVD process, which reduce operation count, a fivefold speedup can be achieved over the fastest alternative approach.

01 Aug 1996
TL;DR: The novel definition of approximate polynomial gcds is related to the older and weaker ones, based on perturbation of the coefficients of the input polynomials, some deficiencies of the latter definitions are demonstrated, and new effective sequential and parallel (RNC and NC) algorithms for computing approximate gcds and extended gcds are proposed.
Abstract: In the first part of this paper, we define approximate polynomial gcds (greatest common divisors) and extended gcds provided that approximations to the zeros of the input polynomials are available. We relate our novel definition to the older and weaker ones, based on perturbation of the coefficients of the input polynomials, we demonstrate some deficiency of the latter definitions (which our definition avoids), and we propose new effective sequential and parallel (RNC and NC) algorithms for computing approximate gcds and extended gcds. Our stronger results are obtained with no increase of the asymptotic bounds on the computational cost. This is partly due to application of our recent nearly optimal algorithms for approximating polynomial zeros. In the second part of our paper, working under the older and more customary definition of approximate gcds, we modify and develop an alternative approach, which was previously based on the computation of the Singular Value Decomposition (SVD) of the associated Sylvester (resultant) matrix. We observe that only a small part of the SVD computation is needed in our case, and we also yield further simplification by using the techniques of Pade approximation and computations with Hankel and Bezout matrices. Finally, in the last part of our paper, we show an extension of the numerical computation of the gcd to the problem of computing numerical rank of a Hankel matrix, which is a bottleneck of Pade and Berlekamp-Massey computations, having important applications to coding and transmission of information.
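
The SVD-based route that the paper simplifies can be sketched as follows: the degree of an approximate gcd equals deg p + deg q minus the numerical rank of the Sylvester matrix. The construction and tolerance below are standard textbook choices, not the authors' refined algorithm.

```python
import numpy as np

def sylvester(p, q):
    """Sylvester (resultant) matrix of polynomials p, q, coefficients highest degree first."""
    m, n = len(p) - 1, len(q) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):
        S[i, i:i + m + 1] = p
    for i in range(m):
        S[n + i, i:i + n + 1] = q
    return S

def approx_gcd_degree(p, q, tol=1e-8):
    s = np.linalg.svd(sylvester(p, q), compute_uv=False)
    rank = int(np.sum(s > tol * s[0]))         # numerical rank from the singular values
    return (len(p) - 1) + (len(q) - 1) - rank

p = np.polymul([1.0, -1.0], [1.0, 2.0])   # (x - 1)(x + 2)
q = np.polymul([1.0, -1.0], [1.0, 3.0])   # (x - 1)(x + 3)
print(approx_gcd_degree(p, q))            # 1: the common factor (x - 1)
```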

Journal ArticleDOI
TL;DR: The effectiveness of this algorithm for structural system identification is demonstrated using the MIT Middeck Active Control Experiment (MACE) and results of ground experiments using this algorithm will be discussed.
Abstract: This paper presents the Frequency Domain Observability Range Space Extraction (FORSE) identification algorithm. FORSE is a singular value decomposition based identification algorithm which constructs a state space model directly from frequency domain data. The concept of system identification by observability range space extraction was developed by generalizing the Q-Markov Covariance Equivalent Realization and Eigensystem Realization Algorithm. The numerical properties of FORSE are well behaved when applied to multi-variable and high dimensional structural systems. It can achieve high modeling accuracy by properly overparameterizing the system. The effectiveness of this algorithm for structural system identification is demonstrated using the MIT Middeck Active Control Experiment (MACE). MACE is an active structural control experiment to be conducted in the Space Shuttle middeck. Results of ground experiments using this algorithm will be discussed.

Journal ArticleDOI
TL;DR: The HTLS method is extended to the HTLS-PK method, which has the capability to accommodate prior knowledge of some known signal poles, and simulated and real-world NMR signals are processed using the HTLS-PK method to demonstrate the advantage of the new method.

Proceedings ArticleDOI
08 Sep 1996
TL;DR: The proposed SVD-QR method selects subsets of independent basis functions which are sufficient to represent a given system, through operations on a nonsingleton fuzzy basis function matrix, and provides an estimate of the number of necessary basis functions.
Abstract: Nonsingleton fuzzy logic systems (NSFLSs) are generalizations of singleton fuzzy logic systems (FLSs), that are capable of handling set-valued input. In this paper, we extend the theory of NSFLSs by presenting an algorithm to design and train such systems. Since they generalize singleton FLSs, the algorithm is equally applicable to both types of systems. The proposed SVD-QR method selects subsets of independent basis functions which are sufficient to represent a given system, through operations on a nonsingleton fuzzy basis function matrix. In addition, it provides an estimate of the number of necessary basis functions. We present examples to illustrate the ability of the SVD-QR method to operate in uncertain environments.
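
An SVD-QR subset-selection step of the kind described can be sketched with Golub-style subset selection: estimate the needed number of basis functions from the singular values, then pick that many independent columns with a pivoted QR of the leading right singular vectors. The matrix sizes and tolerance below are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import qr

def svd_qr_select(Phi, tol=1e-6):
    """Select independent columns (basis functions) of the basis-function matrix Phi."""
    _, s, Vt = np.linalg.svd(Phi, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))          # estimated number of needed basis functions
    _, _, piv = qr(Vt[:r], pivoting=True)    # pivoted QR picks r well-conditioned columns
    return np.sort(piv[:r])

rng = np.random.default_rng(7)
B = rng.standard_normal((200, 5))
Phi = np.hstack([B, B @ rng.standard_normal((5, 3))])   # last 3 columns are redundant
print(svd_qr_select(Phi))                                # 5 selected column indices
```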

Journal ArticleDOI
TL;DR: An approach to non-linear principal components using radially symmetric kernel basis functions is described and can be related to the homogeneity analysis approach of Gifi through the minimization of a loss function.
Abstract: An approach to non-linear principal components using radially symmetric kernel basis functions is described. The procedure consists of two steps. The first is a projection of the data set to a reduced dimension using a non-linear transformation whose parameters are determined by the solution of a generalized symmetric eigenvector equation. This is achieved by demanding a maximum variance transformation subject to a normalization condition (Hotelling's approach) and can be related to the homogeneity analysis approach of Gifi through the minimization of a loss function. The transformed variables are the principal components whose values define contours, or more generally hypersurfaces, in the data space. The second stage of the procedure defines the fitting surface, the principal surface, in the data space (again as a weighted sum of kernel basis functions) using the definition of self-consistency of Hastie and Stuetzle. The parameters of this principal surface are determined by a singular value decomposition and cross-validation is used to obtain the kernel bandwidths. The approach is assessed on four data sets.

Journal ArticleDOI
TL;DR: It is demonstrated that, for this type of application, the SVD encoding technique adequately follows dynamic changes with even a small number of encodes, and is shown to provide improved spatial resolution of dynamic events when updating with the same number ofencodes.
Abstract: In the paper, the results of a fast gradient-echo implementation of the singular value decomposition (SVD) encoding technique for dynamic imaging are presented. The method used is an adaptation, with several critical modifications, of a keyhole-type approach previously proposed but not implemented. The method was tested by imaging the events following injection of a contrast agent into a phantom, producing a series of dynamic image updates. It is demonstrated that, for this type of application, the SVD encoding technique adequately follows dynamic changes with even a small number of encodes. The result is compared qualitatively to that obtained by standard Fourier-based keyhole imaging and is shown to provide improved spatial resolution of dynamic events when updating with the same number of encodes.

Journal ArticleDOI
TL;DR: In this article, a design technique for satisfying the global reaching condition when applying variable structure control to a linear system with output feedback is presented, and a specific algorithm is given along with a numerical example.
Abstract: This paper develops a design technique for satisfying the global reaching condition when applying variable structure control to a linear system with output feedback. A specific algorithm is given along with a numerical example.

Book ChapterDOI
13 Apr 1996
TL;DR: It is shown that when the integrability constraint is applied to objects with varying albedo it leads to an ambiguity in depth estimation similar to the bas relief ambiguity.
Abstract: Realistic representation of objects requires models which can synthesize the image of an object under all possible viewing conditions. We propose to learn these models from examples. Methods for learning surface geometry and albedo from one or more images under fixed pose and varying lighting conditions are described. Singular value decomposition (SVD) is used to determine shape, albedo, and lighting conditions up to an unknown 3×3 matrix, which is sufficient for recognition. The use of class-specific knowledge and the integrability constraint to determine this matrix is explored. We show that when the integrability constraint is applied to objects with varying albedo it leads to an ambiguity in depth estimation similar to the bas relief ambiguity. The integrability constraint, however, is useful for resolving ambiguities which arise in current photometric theories.
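
The SVD step described here can be sketched as a rank-3 factorization of the matrix of images. The Lambertian toy data and the square-root split of the singular values are illustrative choices, and the factors are only determined up to the unknown 3×3 matrix the abstract mentions.

```python
import numpy as np

def factor_images(I):
    """I: (pixels x images) matrix taken under fixed pose and varying lighting."""
    U, s, Vt = np.linalg.svd(I, full_matrices=False)
    B = U[:, :3] * np.sqrt(s[:3])            # albedo-scaled surface normals, up to A
    L = np.sqrt(s[:3])[:, None] * Vt[:3]     # lighting directions, up to inv(A)
    return B, L

# Toy Lambertian data (no shadows): the image matrix is exactly rank 3.
rng = np.random.default_rng(8)
B_true = rng.standard_normal((1000, 3))      # 1000 pixels
L_true = rng.standard_normal((3, 12))        # 12 lighting conditions
B, L = factor_images(B_true @ L_true)
print(np.allclose(B @ L, B_true @ L_true))   # True: same images, different 3x3 gauge
```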

Journal ArticleDOI
TL;DR: In this article, the form and parameters of force fields are optimized for molecular dynamics simulations through utilizing information about properties such as the geometry, Hessian, polarizability, stress (crystals), and elastic constants.
Abstract: We present methodology (HBFF/SVD) for optimizing the form and parameters of force fields (FF) for molecular dynamics simulations through utilizing information about properties such as the geometry, Hessian, polarizability, stress (crystals), and elastic constants (crystals). This method is based on singular value decomposition (SVD) of the Jacobian describing the partial derivatives in various properties with respect to FF parameters. HBFF/SVD is effective for optimizing the parameters for accurate FFs of organic, inorganic, and transition metal compounds. In addition it provides information on the validity of the functional form of the FF for describing the properties of interest. This method is illustrated by application to organic molecules (CH2O, C2H4, C4H6, C6H8, C6H6, and naphthalene) and inorganic molecules (Cl2CrO2 and Cl2MoO2).

Journal ArticleDOI
TL;DR: This paper describes a much simpler generalized Schur-type algorithm to compute similar low-rank approximants $\hat{H}$ of a matrix $H$ such that $H - \hat{H}$ has 2-norm less than $\epsilon$.
Abstract: The usual way to compute a low-rank approximant of a matrix $H$ is to take its singular value decomposition (SVD) and truncate it by setting the small singular values equal to 0. However, the SVD is computationally expensive. This paper describes a much simpler generalized Schur-type algorithm to compute similar low-rank approximants. For a given matrix $H$ which has $d$ singular values larger than $\epsilon$, we find all rank-$d$ approximants $\hat{H}$ such that $H - \hat{H}$ has 2-norm less than $\epsilon$. The set of approximants includes the truncated SVD approximation. The advantages of the Schur algorithm are that it has a much lower computational complexity (similar to a QR factorization), and directly produces a description of the column space of the approximants. This column space can be updated and downdated in an on-line scheme, amenable to implementation on a parallel array of processors.
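
For reference, the truncated-SVD approximant that the Schur-type algorithm is compared against can be sketched in a few lines; the matrix sizes and the value of epsilon are arbitrary.

```python
import numpy as np

def truncated_svd(H, eps):
    """Rank-d approximant keeping singular values larger than eps."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    d = int(np.sum(s > eps))
    return (U[:, :d] * s[:d]) @ Vt[:d], d

rng = np.random.default_rng(9)
H = rng.standard_normal((60, 40))
Ha, d = truncated_svd(H, eps=3.0)
print(d, np.linalg.norm(H - Ha, 2) <= 3.0)   # 2-norm error is the first discarded singular value
```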

Journal ArticleDOI
TL;DR: In this paper, the authors presented a new matrix decomposition for a matrix pair (A, B) with A Hermitian, where the hyperbolic SVD comes as a special case of the decomposition with A set to be the signature matrix.

Journal ArticleDOI
Susan E. Minkoff
TL;DR: In this article, the conjugate gradient and Lanczos algorithms are used to construct an extremely inexpensive approximation to the model resolution matrix, which is very good in the vicinity of large events in the data.
Abstract: Seismic inversion produces model estimates which are at most unique in an average sense. The model resolution matrix quantifies the spatial extent over which the estimate averages the true model. Although the resolution matrix has traditionally been defined in terms of the singular value decomposition of the discretized forward problem, this computation is prohibitive for inverse problems of realistic size. Inversion requires one to solve a large normal matrix system which is best tackled by an iterative technique such as the conjugate gradient method. The close connection between the conjugate gradient and Lanczos algorithms allows us to construct an extremely inexpensive approximation to the model resolution matrix. Synthetic experiments indicate the data dependence of this particular approximation. The approximation is very good in the vicinity of large events in the data. Two large linear viscoelastic inversion experiments on p-τ marine data from the Gulf of Mexico provide estimates of the elastic parameter reflectivities corresponding to two different seismic sources. Traditionally, one evaluates the accuracy of the two reflectivity estimates by comparing them with measured well logs. The approximate model resolution matrices agree with the well-log ranking of the two models and provide us with a way to compare different model estimates when, for example, such well-log measurements are not available.
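
The object being approximated can be sketched directly from a truncated SVD for a small problem: the model resolution matrix R = V_p V_p^T, whose rows describe how the estimate averages the true model. The paper's point is precisely that forming this via the SVD is too expensive at realistic size, hence the conjugate-gradient/Lanczos approximation; the operator below is a random stand-in.

```python
import numpy as np

def resolution_matrix(G, p):
    """Model resolution matrix of a truncated-SVD inverse keeping p singular values."""
    _, _, Vt = np.linalg.svd(G, full_matrices=False)
    Vp = Vt[:p].T
    return Vp @ Vp.T            # m_estimated = R @ m_true for noise-free data

rng = np.random.default_rng(10)
G = rng.standard_normal((80, 50))   # stand-in for the discretized forward operator
R = resolution_matrix(G, p=20)
print(np.trace(R))                  # = 20, the number of resolved model-space directions
```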

Journal ArticleDOI
TL;DR: In this paper, a singular value decomposition of the tomographic matrix enables one to recognize the causes of the mathematical ambiguity in the model space, and allows one to control the non-uniqueness of solutions by modifying the pixel distribution or the acquisition geometry.
Abstract: Tomographic inversion of traveltimes is often carried out by discretizing the Earth as a grid of regular pixels. This choice simplifies the related ray-tracing algorithms, but contributes significantly to the non-uniqueness of the estimated velocity distribution. A singular value decomposition of the tomographic matrix enables one to recognize the causes of this mathematical ambiguity in the model space. This is more intuitive than introducing arbitrary damping factors and spatial filters, and allows one to control the non-uniqueness of solutions by modifying the pixel distribution or the acquisition geometry. This approach lends itself to the adoption of irregular grids and to the definition of a new ray-tracing algorithm, based on Fermat's principle of minimum time, which is able to simulate transmitted, reflected, refracted and diffracted waves. The joint tomographic inversion of these different types of waves potentially provides an additional improvement to the quality and reliability of the estimated velocities.
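
The diagnostic described, identifying unconstrained directions in model space from the SVD of the tomographic matrix, can be sketched as below; the random ray-path matrix and its sizes are placeholders.

```python
import numpy as np

rng = np.random.default_rng(11)
G = rng.standard_normal((40, 100))      # 40 traveltime rays, 100 velocity pixels (toy)
U, s, Vt = np.linalg.svd(G)             # full SVD so Vt spans all of model space
rank = int(np.sum(s > 1e-10 * s[0]))
null_modes = Vt[rank:]                  # rows span the null space: pixel patterns
print(null_modes.shape)                 # the data cannot constrain, here (60, 100)
```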

Journal ArticleDOI
TL;DR: In this paper, a downdating algorithm is proposed that preserves the structure in the downdated matrix to the extent possible, and strong stability results are proven for these algorithms based upon a new perturbation theory.
Abstract: An alternative to performing the singular value decomposition is to factor a matrix $A$ into $A = U \begin{pmatrix} C \\ 0 \end{pmatrix} V^T$, where $U$ and $V$ are orthogonal matrices and $C$ is a lower triangular matrix which indicates a separation between two subspaces by the size of its columns. These subspaces are denoted by $V = (V_1, V_2)$, where the columns of $C$ are partitioned conformally into $C = (C_1, C_2)$ with $\|C_2\|_F \le \epsilon$. Here $\epsilon$ is some tolerance. In recent years, this has been called the ULV decomposition (ULVD). If the matrix $A$ results from statistical observations, it is often desired to remove old observations, thus deleting a row from $A$ and its ULVD. In matrix terms, this is called a downdate. A downdating algorithm is proposed that preserves the structure in the downdated matrix $\bar{C}$ to the extent possible. Strong stability results are proven for these algorithms based upon a new perturbation theory.

Journal ArticleDOI
TL;DR: A Radon-based invariant image analysis method is introduced to achieve features invariant to translation, rotation, and scaling, which are classified by a back-propagation neural network followed by a maximum-output selector.
Abstract: A Radon-based invariant image analysis method is introduced. The linearity, shift, rotation, and scaling properties of the Radon transform are utilized to achieve features invariant to translation, rotation, and scaling. The singular values of a matrix, constructed by row-stacking of projections, are used to construct the invariant feature vector. This feature vector is used as input to a classifier, which here is the back-propagation neural network followed by a maximum-output selector. A performance function is introduced to evaluate the performance of the recognition system. This performance function can also be used to indicate how closely the pattern matches the decision template. The effectiveness of this method is illustrated by a simulation example and it is compared with the method of Zernike moments.