
Showing papers on "Singular value decomposition published in 2004"


Proceedings ArticleDOI
04 Jul 2004
TL;DR: It is proved that principal components are the continuous solutions to the discrete cluster membership indicators for K-means clustering, which indicates that unsupervised dimension reduction is closely related to unsupervised learning.
Abstract: Principal component analysis (PCA) is a widely used statistical technique for unsupervised dimension reduction. K-means clustering is a commonly used data clustering method for performing unsupervised learning tasks. Here we prove that principal components are the continuous solutions to the discrete cluster membership indicators for K-means clustering. New lower bounds for the K-means objective function are derived; they equal the total variance minus the leading eigenvalues of the data covariance matrix. These results indicate that unsupervised dimension reduction is closely related to unsupervised learning. Several implications are discussed. On dimension reduction, the result provides new insights into the observed effectiveness of PCA-based data reductions, beyond the conventional noise-reduction explanation that PCA, via singular value decomposition, provides the best low-dimensional linear approximation of the data. On learning, the result suggests effective techniques for K-means data clustering. DNA gene expression and Internet newsgroups are analyzed to illustrate our results. Experiments indicate that the new bounds are within 0.5-1.5% of the optimal values.
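As a rough numerical check of this bound (our own sketch, not the authors' code; assumes NumPy and scikit-learn), the K-means objective should never fall below the total variance minus the sum of the top k − 1 eigenvalues of the centered data's scatter matrix:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two synthetic 5-D clusters (illustrative data only).
X = np.vstack([rng.normal(m, 1.0, size=(100, 5)) for m in (0.0, 4.0)])
Xc = X - X.mean(axis=0)                        # center the data

k = 2
j_kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_

# Eigenvalues of the scatter matrix Xc^T Xc, via the singular values of Xc.
eigvals = np.linalg.svd(Xc, compute_uv=False) ** 2
lower_bound = eigvals.sum() - eigvals[: k - 1].sum()

print(j_kmeans, lower_bound)                   # expect j_kmeans >= lower_bound
```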

1,431 citations


Journal ArticleDOI
TL;DR: An algorithm is developed that is qualitatively faster, provided the authors may sample the entries of the matrix in accordance with a natural probability distribution, and implies that in constant time, it can be determined if a given matrix of arbitrary size has a good low-rank approximation.
Abstract: We consider the problem of approximating a given m × n matrix A by another matrix of specified rank k, which is smaller than m and n. The Singular Value Decomposition (SVD) can be used to find the "best" such approximation. However, it takes time polynomial in m, n which is prohibitive for some modern applications. In this article, we develop an algorithm that is qualitatively faster, provided we may sample the entries of the matrix in accordance with a natural probability distribution. In many applications, such sampling can be done efficiently. Our main result is a randomized algorithm to find the description of a matrix D* of rank at most k so that ||A − D*||_F^2 ≤ min_{D, rank(D) ≤ k} ||A − D||_F^2 + ε||A||_F^2 holds with probability at least 1 − δ (where ||·||_F denotes the Frobenius norm). The algorithm takes time polynomial in k, 1/ε, log(1/δ) only and is independent of m and n. In particular, this implies that in constant time, it can be determined if a given matrix of arbitrary size has a good low-rank approximation.
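The flavor of the sampling idea can be sketched as follows (illustrative only: rows are sampled with probability proportional to their squared norms, a natural distribution in this line of work; this is not the paper's exact algorithm or guarantee):

```python
import numpy as np

def sampled_rank_k(A, k, s, seed=0):
    """Rank-k approximation of A built from s sampled, rescaled rows."""
    rng = np.random.default_rng(seed)
    row_norms = (A ** 2).sum(axis=1)
    p = row_norms / row_norms.sum()            # sampling distribution
    idx = rng.choice(A.shape[0], size=s, p=p)
    S = A[idx] / np.sqrt(s * p[idx])[:, None]  # rescaled sample matrix
    _, _, Vt = np.linalg.svd(S, full_matrices=False)
    V = Vt[:k].T                               # approximate top right singular vectors
    return (A @ V) @ V.T                       # project A onto their span

A = np.random.default_rng(1).normal(size=(500, 300))
D = sampled_rank_k(A, k=10, s=50)
print(np.linalg.norm(A - D, "fro"))            # compare against the truncated SVD of A
```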

613 citations


Journal ArticleDOI
TL;DR: This paper considers the problem of partitioning a set of m points in the n-dimensional Euclidean space into k clusters, and considers a continuous relaxation of this discrete problem: find the k-dimensional subspace V that minimizes the sum of squared distances to V of the m points, and argues that the relaxation provides a generalized clustering which is useful in its own right.
Abstract: We consider the problem of partitioning a set of m points in the n-dimensional Euclidean space into k clusters (usually m and n are variable, while k is fixed), so as to minimize the sum of squared distances between each point and its cluster center. This formulation is usually the objective of the k-means clustering algorithm (Kanungo et al. (2000)). We prove that this problem is NP-hard even for k = 2, and we consider a continuous relaxation of this discrete problem: find the k-dimensional subspace V that minimizes the sum of squared distances to V of the m points. This relaxation can be solved by computing the Singular Value Decomposition (SVD) of the m × n matrix A that represents the m points; this solution can be used to get a 2-approximation algorithm for the original problem. We then argue that in fact the relaxation provides a generalized clustering which is useful in its own right. Finally, we show that the SVD of a random submatrix—chosen according to a suitable probability distribution—of a given matrix provides an approximation to the SVD of the whole matrix, thus yielding a very fast randomized algorithm. We expect this algorithm to be the main contribution of this paper, since it can be applied to problems of very large size which typically arise in modern applications.
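A minimal sketch of the relaxation pipeline described above (our own code, assuming NumPy and scikit-learn; the paper's 2-approximation analysis concerns the SVD projection step, and the discrete clustering in the subspace is shown only for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# m points in n = 50 dimensions, drawn around three synthetic centers.
A = np.vstack([rng.normal(c, 1.0, size=(200, 50)) for c in (-3.0, 0.0, 3.0)])

k = 3
_, _, Vt = np.linalg.svd(A, full_matrices=False)
V = Vt[:k].T            # basis of the k-dim subspace minimizing squared distances
A_proj = A @ V          # the m points expressed in that subspace

labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(A_proj)
print(np.bincount(labels))
```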

523 citations


Journal ArticleDOI
TL;DR: A new fast multipole method for particle simulations that does not require the implementation of multipole expansions of the underlying kernel; it is based only on kernel evaluations and matches its potential to the potential of the original sources at a surface, in the far field.

501 citations


Journal ArticleDOI
TL;DR: This work examines a number of optimization criteria, and extends their applicability by using the generalized singular value decomposition to circumvent the nonsingularity requirement.
Abstract: Discriminant analysis has been used for decades to extract features that preserve class separability. It is commonly defined as an optimization problem involving covariance matrices that represent the scatter within and between clusters. The requirement that one of these matrices be nonsingular limits its application to data sets with certain relative dimensions. We examine a number of optimization criteria, and extend their applicability by using the generalized singular value decomposition to circumvent the nonsingularity requirement. The result is a generalization of discriminant analysis that can be applied even when the sample size is smaller than the dimension of the sample data. We use classification results from the reduced representation to compare the effectiveness of this approach with some alternatives, and conclude with a discussion of their relative merits.

358 citations


Journal ArticleDOI
TL;DR: The concept of a quaternionic signal is introduced, and the SVDQ allows one to calculate the best rank-α approximation of a quaternion matrix; it can be used in subspace methods for wave separation over a vector-sensor array.

353 citations


Proceedings ArticleDOI
04 Jul 2004
TL;DR: Extensive experiments on face image data evaluate the effectiveness of the proposed algorithm, comparing the computed low rank approximations with those obtained from the traditional Singular Value Decomposition based method.
Abstract: We consider the problem of computing low rank approximations of matrices. The novelty of our approach is that the low rank approximations are computed over a sequence of matrices. Unlike the problem of low rank approximation of a single matrix, which was well studied in the past, the algorithm proposed in this paper does not admit a closed form solution in general. We conducted extensive experiments on face image data to evaluate the effectiveness of the proposed algorithm and compare the computed low rank approximations with those obtained from the traditional Singular Value Decomposition based method.

344 citations


Book ChapterDOI
01 Jan 2004
TL;DR: This paper presents Lingo—a novel algorithm for clustering search results, which emphasizes cluster description quality, and describes methods used in the algorithm: algebraic transformations of the term-document matrix and frequent phrase extraction using suffix arrays.
Abstract: The search results clustering problem is defined as an automatic, on-line grouping of similar documents in a search results list returned from a search engine. In this paper we present Lingo—a novel algorithm for clustering search results, which emphasizes cluster description quality. We describe the methods used in the algorithm: algebraic transformations of the term-document matrix and frequent phrase extraction using suffix arrays. Finally, we discuss results acquired from an empirical evaluation of the algorithm.
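The algebraic part can be sketched in a few lines (assumes scikit-learn's TfidfVectorizer; the suffix-array phrase extraction and label-matching steps that make Lingo distinctive are omitted): the left singular vectors of the term-document matrix act as abstract concepts whose dominant terms suggest cluster label candidates.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["svd matrix factorization", "matrix factorization methods",
        "search results clustering", "clustering search engines"]
vec = TfidfVectorizer()
X = vec.fit_transform(docs).T.toarray()    # term-document matrix (terms x docs)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                                      # number of abstract concepts to keep
terms = np.array(vec.get_feature_names_out())
for i in range(k):                         # dominant terms per concept
    top = np.argsort(-np.abs(U[:, i]))[:3]
    print(i, terms[top])
```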

309 citations


Proceedings ArticleDOI
22 Aug 2004
TL;DR: This work furnishes a clear, information-theoretic criterion to choose a good cross-association as well as its parameters, namely, the number of row and column groups, and provides scalable algorithms to approach the optimal.
Abstract: Large, sparse binary matrices arise in numerous data mining applications, such as the analysis of market baskets, web graphs, social networks, co-citations, as well as information retrieval, collaborative filtering, sparse matrix reordering, etc. Virtually all popular methods for the analysis of such matrices---e.g., k-means clustering, METIS graph partitioning, SVD/PCA and frequent itemset mining---require the user to specify various parameters, such as the number of clusters, number of principal components, number of partitions, and "support." Choosing suitable values for such parameters is a challenging problem. Cross-association is a joint decomposition of a binary matrix into disjoint row and column groups such that the rectangular intersections of groups are homogeneous. Starting from first principles, we furnish a clear, information-theoretic criterion to choose a good cross-association as well as its parameters, namely, the number of row and column groups. We provide scalable algorithms to approach the optimal. Our algorithm is parameter-free, and requires no user intervention. In practice it scales linearly with the problem size, and is thus applicable to very large matrices. Finally, we present experiments on multiple synthetic and real-life datasets, where our method gives high-quality, intuitive results.

296 citations


Proceedings ArticleDOI
24 Oct 2004
TL;DR: These algorithms first construct a secondary image, derived from input image by pseudo-randomly extracting features that approximately capture semi-global geometric characteristics, and propose novel hashing algorithms employing transforms that are based on matrix invariants.
Abstract: In this paper we suggest viewing images (as well as attacks on them) as a sequence of linear operators and propose novel hashing algorithms employing transforms that are based on matrix invariants. To derive this sequence, we simply cover a two dimensional representation of an image by a sequence of (possibly overlapping) rectangles R_i whose sizes and locations are chosen randomly from a suitable distribution. The restriction of the image (representation) to each R_i gives rise to a matrix A_i. The fact that the A_i's overlap and are random makes the sequence (respectively) a redundant and non-standard representation of images, but is crucial for our purposes. Our algorithms first construct a secondary image, derived from the input image by pseudo-randomly extracting features that approximately capture semi-global geometric characteristics. From the secondary image (which does not perceptually resemble the input), we further extract the final features which can be used as a hash value (and can be further suitably quantized). In this paper, we use spectral matrix invariants as embodied by the singular value decomposition. Surprisingly, formation of the secondary image turns out to be quite important since it not only introduces further robustness (i.e., resistance against standard signal processing transformations), but also enhances the security properties (i.e., resistance against intentional attacks). Indeed, our experiments reveal that our hashing algorithms extract most of the geometric information from the images and hence are robust to severe perturbations (e.g., up to 50% cropping by area with 20 degree rotations) on images while avoiding misclassification. Our methods are general enough to yield a watermark embedding scheme, which will be studied in another paper.
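A simplified sketch of the feature-extraction core (rectangle count, block size, and keeping only the top singular value are our illustrative choices, not the paper's exact parameters; the secondary-image and quantization stages are omitted):

```python
import numpy as np

def svd_features(img, n_rects=16, size=64, seed=0):
    """Top singular value of pseudo-randomly placed sub-rectangles."""
    rng = np.random.default_rng(seed)      # the seed plays the role of the secret key
    h, w = img.shape
    feats = []
    for _ in range(n_rects):
        y = int(rng.integers(0, h - size))
        x = int(rng.integers(0, w - size))
        block = img[y:y + size, x:x + size].astype(float)
        feats.append(np.linalg.svd(block, compute_uv=False)[0])
    return np.array(feats)

img = np.random.default_rng(1).integers(0, 256, size=(256, 256))
print(svd_features(img))                   # perturbation-robust hash features
```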

274 citations


Journal ArticleDOI
TL;DR: An optimization criterion is presented for discriminant analysis that extends the optimization criteria of the classical Linear Discriminant Analysis through the use of the pseudoinverse when the scatter matrices are singular, overcoming a limitation of classical LDA.
Abstract: An optimization criterion is presented for discriminant analysis. The criterion extends the optimization criteria of the classical Linear Discriminant Analysis (LDA) through the use of the pseudoinverse when the scatter matrices are singular. It is applicable regardless of the relative sizes of the data dimension and sample size, overcoming a limitation of classical LDA. The optimization problem can be solved analytically by applying the Generalized Singular Value Decomposition (GSVD) technique. The pseudoinverse has been suggested and used for undersampled problems in the past, where the data dimension exceeds the number of data points. The criterion proposed in this paper provides a theoretical justification for this procedure. An approximation algorithm for the GSVD-based approach is also presented. It reduces the computational complexity by finding subclusters of each cluster and uses their centroids to capture the structure of each cluster. This reduced problem yields much smaller matrices to which the GSVD can be applied efficiently. Experiments on text data, with up to 7,000 dimensions, show that the approximation algorithm produces results that are close to those produced by the exact algorithm.
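The pseudoinverse idea can be sketched directly (a plain eigendecomposition for clarity, not the paper's GSVD-based algorithm, which is the numerically sound way to do this; assumes NumPy):

```python
import numpy as np

def pinv_lda(X, y, dim):
    """Discriminant directions from pinv(St) Sb, per the extended criterion."""
    classes = np.unique(y)
    mean = X.mean(axis=0)
    Sb = np.zeros((X.shape[1], X.shape[1]))    # between-cluster scatter
    Sw = np.zeros_like(Sb)                     # within-cluster scatter
    for c in classes:
        Xc = X[y == c]
        d = (Xc.mean(axis=0) - mean)[:, None]
        Sb += len(Xc) * (d @ d.T)
        Sw += (Xc - Xc.mean(axis=0)).T @ (Xc - Xc.mean(axis=0))
    St = Sb + Sw                               # total scatter (may be singular)
    vals, vecs = np.linalg.eig(np.linalg.pinv(St) @ Sb)
    order = np.argsort(-vals.real)[:dim]
    return vecs[:, order].real

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 100))                 # undersampled: 30 points, 100 dims
y = np.repeat([0, 1, 2], 10)
G = pinv_lda(X, y, dim=2)
print((X @ G).shape)                           # reduced representation
```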

Journal ArticleDOI
TL;DR: Kruskal's permutation lemma is revisited, and a similar necessary and sufficient condition for unique bilinear factorization under constant modulus (CM) constraints is derived, thus providing an interesting link to (and unification with) CP.
Abstract: CANDECOMP/PARAFAC (CP) analysis is an extension of low-rank matrix decomposition to higher-way arrays, which are also referred to as tensors. CP extends and unifies several array signal processing tools and has found applications ranging from multidimensional harmonic retrieval and angle-carrier estimation to blind multiuser detection. The uniqueness of CP decomposition is not fully understood yet, despite its theoretical and practical significance. Toward this end, we first revisit Kruskal's permutation lemma, which is a cornerstone result in the area, using an accessible basic linear algebra and induction approach. The new proof highlights the nature and limits of the identification process. We then derive two equivalent necessary and sufficient uniqueness conditions for the case where one of the component matrices involved in the decomposition is full column rank. These new conditions explain a curious example provided recently in a previous paper by Sidiropoulos, who showed that Kruskal's condition is in general sufficient but not necessary for uniqueness and that uniqueness depends on the particular joint pattern of zeros in the (possibly pretransformed) component matrices. As another interesting application of the permutation lemma, we derive a similar necessary and sufficient condition for unique bilinear factorization under constant modulus (CM) constraints, thus providing an interesting link to (and unification with) CP.

Journal ArticleDOI
01 Aug 2004
TL;DR: An algorithm called TensorTextures is developed that learns a parsimonious model of the bidirectional texture function (BTF) from observational data and is computed through a decomposition known as the N-mode SVD, an extension to tensors of the conventional matrix singular value decomposition (SVD).
Abstract: This paper introduces a tensor framework for image-based rendering. In particular, we develop an algorithm called TensorTextures that learns a parsimonious model of the bidirectional texture function (BTF) from observational data. Given an ensemble of images of a textured surface, our nonlinear, generative model explicitly represents the multifactor interaction implicit in the detailed appearance of the surface under varying photometric angles, including local (per-texel) reflectance, complex mesostructural self-occlusion, interreflection and self-shadowing, and other BTF-relevant phenomena. Mathematically, TensorTextures is based on multilinear algebra, the algebra of higher-order tensors, hence its name. It is computed through a decomposition known as the N-mode SVD, an extension to tensors of the conventional matrix singular value decomposition (SVD). We demonstrate the application of TensorTextures to the image-based rendering of natural and synthetic textured surfaces under continuously varying viewpoint and illumination conditions.

Journal ArticleDOI
TL;DR: A multilinear generalization of the singular value decomposition and the best rank-(R1, R2, …, RN) approximation of higher-order tensors are reviewed.
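For concreteness, here is a minimal truncated N-mode SVD (HOSVD) in NumPy. Note that truncation gives a good but generally suboptimal rank-(R1, R2, …, RN) approximation; the best approximation reviewed in the paper requires further iterations (e.g., alternating least squares) starting from this point.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: the chosen mode becomes the row index."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated N-mode SVD: per-mode factor matrices plus core tensor."""
    Us = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        Us.append(U[:, :r])
    core = T
    for mode, U in enumerate(Us):              # contract U^T along each mode
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, Us

T = np.random.default_rng(0).normal(size=(6, 7, 8))
core, Us = hosvd(T, ranks=(3, 3, 3))
print(core.shape, [U.shape for U in Us])       # (3, 3, 3) and [(6,3), (7,3), (8,3)]
```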

Proceedings ArticleDOI
19 Jul 2004
TL;DR: This paper presents a simple but robust visual tracking algorithm based on representing the appearances of objects using affine warps of learned linear subspaces of the image space, and argues that a variant of the traditional error norm, the uniform L2-reconstruction error norm, is the right one for tracking.
Abstract: This paper presents a simple but robust visual tracking algorithm based on representing the appearances of objects using affine warps of learned linear subspaces of the image space. The tracker adaptively updates this subspace while tracking by finding a linear subspace that best approximates the observations made in the previous frames. Instead of the traditional L2-reconstruction error norm, which leads to subspace estimation using PCA or SVD, we argue that a variant of it, the uniform L2-reconstruction error norm, is the right one for tracking. Under this framework we provide a simple and computationally inexpensive algorithm for finding a subspace whose uniform L2-reconstruction error norm for a given collection of data samples is below some threshold, and a simple tracking algorithm is an immediate consequence. We show experimental results on a variety of image sequences of people and man-made objects moving under challenging imaging conditions, which include drastic illumination variation, partial occlusion and extreme pose variation.

Journal ArticleDOI
TL;DR: New noniterative algorithms for the identification of (multivariable) block-oriented nonlinear models consisting of the interconnection of linear time invariant systems and static nonlinearities are presented.

Journal ArticleDOI
TL;DR: It is proved that the rank one left and right singular vectors, that is, the vectors associated with the largest singular value, yield theoretically justified weights, and it is suggested that an inconsistency measure for these weights is the Frobenius norm of the difference between the original pairwise comparison matrix and the one formed by the SVD-determined weights.
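A small sketch of the scheme (the exact rule for combining the left and right singular vectors is our plausible reading of the idea, not a quote from the paper):

```python
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])               # reciprocal pairwise comparison matrix

U, s, Vt = np.linalg.svd(A)
u, v = np.abs(U[:, 0]), np.abs(Vt[0])         # vectors for the largest singular value
wl = u / u.sum()                              # weights implied by the left vector
wr = (1 / v) / (1 / v).sum()                  # weights implied by the right vector
w = (wl + wr) / 2                             # combined weight vector (our choice)

A_hat = w[:, None] / w[None, :]               # consistent matrix implied by w
print(w, np.linalg.norm(A - A_hat, "fro"))    # weights and inconsistency measure
```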

Journal ArticleDOI
01 Feb 2004
TL;DR: The proposed method generates a Takagi-Sugeno fuzzy model, characterized with transparency, high accuracy and a small number of rules, which uses numerical data as a starting point for neuro-fuzzy identification.
Abstract: The paper describes a neuro-fuzzy identification approach, which uses numerical data as a starting point. The proposed method generates a Takagi-Sugeno fuzzy model characterized by transparency, high accuracy and a small number of rules. The process of self-organization of the identification model consists of three phases: clustering of the input-output space using a self-organized neural network; determination of the parameters of the consequent part of a rule from an over-determined batch least-squares formulation of the problem, using a singular value decomposition algorithm; and on-line adaptation of these parameters using a recursive least-squares method. The verification of the proposed identification approach is provided using four different problems: two benchmark identification problems, speed estimation for a DC motor drive, and estimation of the temperature in a tunnel furnace for clay baking.
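The second phase reduces, per rule, to an over-determined weighted least-squares problem that an SVD-based solver handles robustly. A toy sketch with synthetic data and a single illustrative rule (NumPy's lstsq is itself SVD-based):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(200, 2))                 # input samples
y = 1.5 * x[:, 0] - 0.7 * x[:, 1] + 0.2 + rng.normal(0, 0.05, 200)

mu = np.exp(-x[:, 0] ** 2)                            # firing strengths of one rule
Phi = np.sqrt(mu)[:, None] * np.hstack([x, np.ones((200, 1))])  # weighted regressors
t = np.sqrt(mu) * y                                   # weighted targets

theta, *_ = np.linalg.lstsq(Phi, t, rcond=None)       # SVD-based batch solution
print(theta)                                          # consequent parameters [a1, a2, b]
```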

Patent
27 Oct 2004
TL;DR: In this article, a low rank approximation for an m×n matrix A is described, which can be used in connection with Singular Value Decomposition techniques to greatly benefit the processing of high-dimensional data sets in terms of storage, transmission and computation.
Abstract: Methods and systems for finding a low rank approximation for an m×n matrix A are described. The described embodiments can independently sample and/or quantize the entries of an input matrix A, and can thus speed up computation by reducing the number of non-zero entries and/or their representation length. The embodiments can be used in connection with Singular Value Decomposition techniques to greatly benefit the processing of high-dimensional data sets in terms of storage, transmission and computation.

Proceedings ArticleDOI
19 Jul 2004
TL;DR: This work describes an algorithm that uses an extremely simple form of prior knowledge to decompose a single input image into two images that minimize the total amount of edges and corners, and shows that this simple prior is surprisingly powerful.
Abstract: When we take a picture through a window, the image we obtain is often a linear superposition of two images: the image of the scene beyond the window plus the image of the scene reflected by the window. Decomposing the single input image into two images is a massively ill-posed problem: in the absence of additional knowledge about the scene being viewed, there is an infinite number of valid decompositions. We describe an algorithm that uses an extremely simple form of prior knowledge to perform the decomposition. Given a single image as input, the algorithm searches for a decomposition into two images that minimize the total amount of edges and corners. The search is performed using belief propagation on a patch representation of the image. We show that this simple prior is surprisingly powerful: our algorithm obtains "correct" separations on challenging reflection scenes using only a single image.

Proceedings ArticleDOI
25 Oct 2004
TL;DR: In this paper, the authors propose a model for representing and predicting distances in large-scale networks by matrix factorization, which is useful for network distance sensitive applications, such as content distribution networks, topology-aware overlays, and server selections.
Abstract: In this paper, we propose a model for representing and predicting distances in large-scale networks by matrix factorization. The model is useful for network distance sensitive applications, such as content distribution networks, topology-aware overlays, and server selections. Our approach overcomes several limitations of previous coordinates-based mechanisms, which cannot model sub-optimal routing or asymmetric routing policies. We describe two algorithms --- singular value decomposition (SVD) and nonnegative matrix factorization (NMF)---for representing a matrix of network distances as the product of two smaller matrices. With such a representation, we build a scalable system--- Internet Distance Estimation Service (IDES)---that predicts large numbers of network distances from limited numbers of measurements. Extensive simulations on real-world data sets show that IDES leads to more accurate, efficient and robust predictions of latencies in large-scale networks than previous approaches.
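The SVD variant can be sketched in a few lines (a synthetic, fully observed distance matrix for illustration; the deployed system estimates the factors from a limited set of measurements to landmark nodes): factor D into outgoing coordinates X and incoming coordinates Y, so that even asymmetric distances are representable as inner products.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 5
Xtrue, Ytrue = rng.uniform(1, 2, (n, d)), rng.uniform(1, 2, (n, d))
D = Xtrue @ Ytrue.T                        # synthetic, generally asymmetric distances

U, s, Vt = np.linalg.svd(D)
X = U[:, :d] * np.sqrt(s[:d])              # "outgoing" coordinates of each host
Y = Vt[:d].T * np.sqrt(s[:d])              # "incoming" coordinates of each host

# Predicted distance between hosts i and j is the inner product X[i] . Y[j].
print(np.abs(D - X @ Y.T).max())           # near-zero reconstruction error
```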

Book
01 Jan 2004
TL;DR: A numerical methods textbook covering nonlinear equations, linear systems, iterative methods, polynomial interpolation, numerical integration, differential equations, nonlinear optimization, and approximation methods, including the singular value decomposition and the effects of finite precision arithmetic.
Abstract: 1. Nonlinear Equations. Bisection and Inverse Linear Interpolation. Newton's Method. The Fixed Point Theorem. Quadratic Convergence of Newton's Method. Variants of Newton's Method. Brent's Method. Effects of Finite Precision Arithmetic. Newton's Method for Systems. Broyden's Method. 2. Linear Systems. Gaussian Elimination with Partial Pivoting. The LU Decomposition. The LU Decomposition with Pivoting. The Cholesky Decomposition. Condition Numbers. The QR Decomposition. Householder Triangularization and the QR Decomposition. Gram-Schmidt Orthogonalization and the QR Decomposition. The Singular Value Decomposition. 3. Iterative Methods. Jacobi and Gauss-Seidel Iteration. Sparsity. Iterative Refinement. Preconditioning. Krylov Space Methods. Numerical Eigenproblems. 4. Polynomial Interpolation. Lagrange Interpolating Polynomials. Piecewise Linear Interpolation. Cubic Splines. Computation of the Cubic Spline Coefficients. 5. Numerical Integration. Closed Newton-Cotes Formulas. Open Newton-Cotes Formulas and Undetermined Coefficients. Gaussian Quadrature. Gauss-Chebyshev Quadrature. Radau and Lobatto Quadrature. Adaptivity and Automatic Integration. Romberg Integration. 6. Differential Equations. Numerical Differentiation. Euler's Method. Improved Euler's Method. Analysis of Explicit One-Step Methods. Taylor and Runge-Kutta Methods. Adaptivity and Stiffness. Multi-Step Methods. 7. Nonlinear Optimization. One-Dimensional Searches. The Method of Steepest Descent. Newton Methods for Nonlinear Optimization. Multiple Random Start Methods. Direct Search Methods. The Nelder-Mead Method. Conjugate Direction Methods. 8. Approximation Methods. Linear and Nonlinear Least Squares. The Best Approximation Problem. Best Uniform Approximation. Applications of the Chebyshev Polynomials. Afterword. Bibliography. Answers. Index.

01 Jan 2004
TL;DR: In this article, a new time-frequency-based EEG seizure detection technique was proposed, which uses an estimate of the distribution function of the singular vectors associated with the timefrequency distribution of an EEG epoch to characterise the patterns embedded in the signal.
Abstract: The nonstationary and multicomponent nature of newborn EEG seizures tends to increase the complexity of the seizure detection problem. In dealing with this type of problem, time-frequency-based techniques were shown to outperform classical techniques. This paper presents a new time-frequency-based EEG seizure detection technique. The technique uses an estimate of the distribution function of the singular vectors associated with the time-frequency distribution of an EEG epoch to characterise the patterns embedded in the signal. The estimated distribution functions related to seizure and nonseizure epochs were used to train a neural network to discriminate between seizure and nonseizure patterns.

Journal ArticleDOI
TL;DR: A new watermarking method which combines the singular value decomposition (SVD) and the discrete cosine transform (DCT) is presented; the local peak signal-to-noise ratio (LPSNR) is adopted in this method to achieve the highest possible robustness without losing transparency.
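An illustrative sketch of combining the two transforms (block size, embedding rule, and strength are our own choices; the LPSNR-based strength tuning and the extraction procedure are omitted; assumes NumPy and SciPy):

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed(img, bits, block=8, alpha=4.0):
    """Embed bits by shifting the top singular value of each block's DCT."""
    out = img.astype(float).copy()
    nbx = img.shape[1] // block
    for k, bit in enumerate(bits):
        r, c = (k // nbx) * block, (k % nbx) * block
        B = dctn(out[r:r + block, c:c + block], norm="ortho")
        U, s, Vt = np.linalg.svd(B)
        s[0] += alpha if bit else -alpha   # illustrative embedding rule
        out[r:r + block, c:c + block] = idctn(U @ np.diag(s) @ Vt, norm="ortho")
    return out

img = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(float)
wm = embed(img, bits=[1, 0, 1, 1])
print(np.abs(wm - img).max())              # distortion stays small
```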

Journal ArticleDOI
TL;DR: Covariance NMR is demonstrated for homonuclear 2D NMR data collected using the hypercomplex and TPPI methods and an efficient method is introduced for the calculation of the square root of the covariance spectrum by applying a singular value decomposition (SVD) directly to the mixed time-frequency domain data matrix.

Journal ArticleDOI
TL;DR: It is shown that the problem of designing one-dimensional (1-D) variable fractional-delay (VFD) digital filters can be elegantly reduced to easier subproblems that involve one-dimensional constant filter (subfilter) designs and 1-D polynomial approximations.
Abstract: This paper shows that the problem of designing a one-dimensional (1-D) variable fractional-delay (VFD) digital filter can be elegantly reduced to easier subproblems that involve 1-D constant filter (subfilter) designs and 1-D polynomial approximations. By utilizing the singular value decomposition (SVD) of the variable design specification, we prove that both the 1-D constant filters and the 1-D polynomials possess either symmetry or anti-symmetry simultaneously. Therefore, a VFD filter can be efficiently obtained by designing 1-D constant filters with symmetrical or antisymmetrical coefficients and performing 1-D symmetrical or antisymmetrical approximations. To perform the weighted-least-squares (WLS) VFD filter design, a new WLS-SVD method is also developed. Moreover, an objective criterion is proposed for selecting appropriate subfilter orders and polynomial degrees. Our computer simulations have shown that the SVD-based design and WLS-SVD design can achieve much higher design accuracy with significantly reduced filter complexity than the existing WLS design method. Another important part of the paper proposes two new structures for efficiently implementing the resulting VFD filter, which require less computational complexity than the so-called Farrow structure.
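The central SVD step can be sketched as follows (the grids and the ideal fractional-delay specification are illustrative): sampling the variable specification on a grid gives a matrix whose SVD splits it into a few separable terms, each pairing a 1-D frequency response (to be realized by a constant subfilter) with a 1-D function of the delay parameter (to be fit by a polynomial).

```python
import numpy as np

w = np.linspace(0, 0.9 * np.pi, 64)        # frequency grid
p = np.linspace(-0.5, 0.5, 33)             # fractional-delay parameter grid
H = np.exp(-1j * np.outer(w, p))           # variable specification H(w, p) = e^{-jwp}

U, s, Vt = np.linalg.svd(H, full_matrices=False)
r = int(np.sum(s > 1e-8 * s[0]))           # effective number of separable terms
H_r = (U[:, :r] * s[:r]) @ Vt[:r]          # rank-r separable approximation
print(r, np.abs(H - H_r).max())            # few terms suffice to high accuracy
```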

Journal ArticleDOI
TL;DR: Two complete parametric methods for the proposed eigenstructure assignment problem are presented and both give simple completeParametric expressions for the feedback gains and the closed-loop eigenvector matrices.
Abstract: This note considers eigenstructure assignment in second-order descriptor linear systems via proportional plus derivative feedback. It is shown that the problem is closely related to a class of so-called second-order Sylvester matrix equations. Through establishing two general parametric solutions to this type of matrix equation, two complete parametric methods for the proposed eigenstructure assignment problem are presented. Both methods give simple complete parametric expressions for the feedback gains and the closed-loop eigenvector matrices. The first one mainly depends on a series of singular value decompositions, and is thus numerically simple and reliable. The second one utilizes the right factorization of the system, and allows the closed-loop eigenvalues to be set undetermined and sought via certain optimization procedures. An example shows the effect of the proposed approaches.

Journal ArticleDOI
TL;DR: This paper introduces a new tracking technique that is designed for rectangular sliding window data matrices that shows excellent performance in the context of frequency estimation and an ultra-fast tracking algorithm with comparable performance is proposed.
Abstract: The singular value decomposition (SVD) is an important tool for subspace estimation. In adaptive signal processing, we are especially interested in tracking the SVD of a recursively updated data matrix. This paper introduces a new tracking technique that is designed for rectangular sliding window data matrices. This approach, which is derived from the classical bi-orthogonal iteration SVD algorithm, shows excellent performance in the context of frequency estimation. It proves to be very robust to abrupt signal changes, due to the use of a sliding window. Finally, an ultra-fast tracking algorithm with comparable performance is proposed.
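For orientation, here is the naive reference computation that fast trackers approximate: recompute the SVD of the rectangular sliding-window data matrix at each step (the paper's bi-orthogonal-iteration updates avoid exactly this cost; the signal model and sizes below are illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)
n, L, r = 8, 20, 2                         # sensors, window length, subspace rank
t = np.arange(400)
S = np.vstack([np.exp(2j * np.pi * 0.11 * t),   # two complex exponentials
               np.exp(2j * np.pi * 0.23 * t)])
A = rng.normal(size=(n, r)) @ S + 0.1 * rng.normal(size=(n, t.size))

for k in range(L, t.size, 100):
    window = A[:, k - L:k]                 # rectangular sliding-window data matrix
    U, s, _ = np.linalg.svd(window, full_matrices=False)
    basis = U[:, :r]                       # current signal-subspace estimate
    print(k, s[:r].round(2))
```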

Journal ArticleDOI
TL;DR: Sampling results for certain classes of two-dimensional (2-D) signals that are not bandlimited but have a parametric representation with a finite number of degrees of freedom are presented.
Abstract: We present sampling results for certain classes of two-dimensional (2-D) signals that are not bandlimited but have a parametric representation with a finite number of degrees of freedom. While there are many such parametric signals, it is often difficult to propose practical sampling schemes; therefore, we will concentrate on those classes for which we are able to give exact sampling algorithms and reconstruction formulas. We analyze in detail a set of 2-D Diracs and extend the results to more complex objects such as lines and polygons. Unlike most multidimensional sampling schemes, the methods we propose perfectly reconstruct such signals from a finite number of samples in the noiseless case. Some of the techniques we use are already encountered in the context of harmonic retrieval and error correction coding. In particular, singular value decomposition (SVD)-based methods and the annihilating filter approach are both explored as inherent parts of the developed algorithms. Potentials and limitations of the algorithms in the noisy case are also pointed out. Applications of our results can be found in astronomical signal processing, image processing, and in some classes of identification problems.
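A tiny 1-D instance of the annihilating-filter machinery mentioned above (the paper develops the 2-D case; the values here are illustrative): the exponential moments of K Diracs obey a linear recursion whose filter appears as an SVD null vector, and the filter's roots encode the Dirac locations.

```python
import numpy as np

K = 2
locs, amps = np.array([0.2, 0.7]), np.array([1.0, 0.5])
m = np.arange(2 * K + 1)
# Moments tau[m] = sum_k a_k exp(-2j*pi*m*t_k) of the Dirac stream.
tau = (amps[None, :] * np.exp(-2j * np.pi * np.outer(m, locs))).sum(axis=1)

# Rows [tau[m], tau[m-1], ..., tau[m-K]] for m = K..2K; the annihilating
# filter h satisfies T h = 0, so it is the right singular vector of T
# associated with the zero singular value.
T = np.array([tau[mm::-1][:K + 1] for mm in range(K, 2 * K + 1)])
h = np.linalg.svd(T)[2][-1].conj()

# Roots of the filter polynomial are exp(-2j*pi*t_k): read off the locations.
t_hat = np.sort(np.mod(-np.angle(np.roots(h)) / (2 * np.pi), 1))
print(t_hat)                               # ~ [0.2, 0.7]
```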

Book
13 Apr 2004
TL;DR: A numerical analysis textbook covering functional analysis ideas, number representations, sequences and series, linear and nonlinear systems of equations, orthogonal polynomials, interpolation, unconstrained optimization, numerical integration and differentiation, ordinary and partial differential equations, eigenproblems, and an introduction to MATLAB.
Abstract: Preface. 1 Functional Analysis Ideas. 1.1 Introduction. 1.2 Some Sets. 1.3 Some Special Mappings: Metrics, Norms, and Inner Products. 1.3.1 Metrics and Metric Spaces. 1.3.2 Norms and Normed Spaces. 1.3.3 Inner Products and Inner Product Spaces. 1.4 The Discrete Fourier Series (DFS). Appendix 1.A Complex Arithmetic. Appendix 1.B Elementary Logic. References. Problems. 2 Number Representations. 2.1 Introduction. 2.2 Fixed-Point Representations. 2.3 Floating-Point Representations. 2.4 Rounding Effects in Dot Product Computation. 2.5 Machine Epsilon. Appendix 2.A Review of Binary Number Codes. References. Problems. 3 Sequences and Series. 3.1 Introduction. 3.2 Cauchy Sequences and Complete Spaces. 3.3 Pointwise Convergence and Uniform Convergence. 3.4 Fourier Series. 3.5 Taylor Series. 3.6 Asymptotic Series. 3.7 More on the Dirichlet Kernel. 3.8 Final Remarks. Appendix 3.A COordinate Rotation DIgital Computing (CORDIC). 3.A.1 Introduction. 3.A.2 The Concept of a Discrete Basis. 3.A.3 Rotating Vectors in the Plane. 3.A.4 Computing Arctangents. 3.A.5 Final Remarks. Appendix 3.B Mathematical Induction. Appendix 3.C Catastrophic Cancellation. References. Problems. 4 Linear Systems of Equations. 4.1 Introduction. 4.2 Least-Squares Approximation and Linear Systems. 4.3 Least-Squares Approximation and Ill-Conditioned Linear Systems. 4.4 Condition Numbers. 4.5 LU Decomposition. 4.6 Least-Squares Problems and QR Decomposition. 4.7 Iterative Methods for Linear Systems. 4.8 Final Remarks. Appendix 4.A Hilbert Matrix Inverses. Appendix 4.B SVD and Least Squares. References. Problems. 5 Orthogonal Polynomials. 5.1 Introduction. 5.2 General Properties of Orthogonal Polynomials. 5.3 Chebyshev Polynomials. 5.4 Hermite Polynomials. 5.5 Legendre Polynomials. 5.6 An Example of Orthogonal Polynomial Least-Squares Approximation. 5.7 Uniform Approximation. References. Problems. 6 Interpolation. 6.1 Introduction. 6.2 Lagrange Interpolation. 6.3 Newton Interpolation. 6.4 Hermite Interpolation. 6.5 Spline Interpolation. References. Problems. 7 Nonlinear Systems of Equations. 7.1 Introduction. 7.2 Bisection Method. 7.3 Fixed-Point Method. 7.4 Newton-Raphson Method. 7.4.1 The Method. 7.4.2 Rate of Convergence Analysis. 7.4.3 Breakdown Phenomena. 7.5 Systems of Nonlinear Equations. 7.5.1 Fixed-Point Method. 7.5.2 Newton-Raphson Method. 7.6 Chaotic Phenomena and a Cryptography Application. References. Problems. 8 Unconstrained Optimization. 8.1 Introduction. 8.2 Problem Statement and Preliminaries. 8.3 Line Searches. 8.4 Newton's Method. 8.5 Equality Constraints and Lagrange Multipliers. Appendix 8.A MATLAB Code for Golden Section Search. References. Problems. 9 Numerical Integration and Differentiation. 9.1 Introduction. 9.2 Trapezoidal Rule. 9.3 Simpson's Rule. 9.4 Gaussian Quadrature. 9.5 Romberg Integration. 9.6 Numerical Differentiation. References. Problems. 10 Numerical Solution of Ordinary Differential Equations. 10.1 Introduction. 10.2 First-Order ODEs. 10.3 Systems of First-Order ODEs. 10.4 Multistep Methods for ODEs. 10.4.1 Adams-Bashforth Methods. 10.4.2 Adams-Moulton Methods. 10.4.3 Comments on the Adams Families. 10.5 Variable-Step-Size (Adaptive) Methods for ODEs. 10.6 Stiff Systems. 10.7 Final Remarks. Appendix 10.A MATLAB Code for Example 10.8. Appendix 10.B MATLAB Code for Example 10.13. References. Problems. 11 Numerical Methods for Eigenproblems. 11.1 Introduction. 11.2 Review of Eigenvalues and Eigenvectors. 11.3 The Matrix Exponential. 11.4 The Power Methods. 11.5 QR Iterations. References. Problems. 12 Numerical Solution of Partial Differential Equations. 12.1 Introduction. 12.2 A Brief Overview of Partial Differential Equations. 12.3 Applications of Hyperbolic PDEs. 12.3.1 The Vibrating String. 12.3.2 Plane Electromagnetic Waves. 12.4 The Finite-Difference (FD) Method. 12.5 The Finite-Difference Time-Domain (FDTD) Method. Appendix 12.A MATLAB Code for Example 12.5. References. Problems. 13 An Introduction to MATLAB. 13.1 Introduction. 13.2 Startup. 13.3 Some Basic Operators, Operations, and Functions. 13.4 Working with Polynomials. 13.5 Loops. 13.6 Plotting and M-Files. References. Index.