
Showing papers on "Sparse approximation" published in 2005


Journal ArticleDOI
TL;DR: The fused lasso is proposed, a generalization that is designed for problems with features that can be ordered in some meaningful way, and is especially useful when the number of features p is much greater than N, the sample size.
Abstract: The lasso penalizes a least squares regression by the sum of the absolute values (L1-norm) of the coefficients. The form of this penalty encourages sparse solutions (with many coefficients equal to 0). We propose the ‘fused lasso’, a generalization that is designed for problems with features that can be ordered in some meaningful way. The fused lasso penalizes the L1-norm of both the coefficients and their successive differences. Thus it encourages sparsity of the coefficients and also sparsity of their differences—i.e. local constancy of the coefficient profile. The fused lasso is especially useful when the number of features p is much greater than N, the sample size. The technique is also extended to the ‘hinge’ loss function that underlies the support vector classifier. We illustrate the methods on examples from protein mass spectroscopy and gene expression data.
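
One common way to write the fused lasso (a paraphrase of the penalty described above, not quoted from the paper): for ordered features, the coefficients solve

```latex
\hat{\beta} = \arg\min_{\beta} \sum_{i=1}^{N}\Big(y_i - \sum_{j=1}^{p} x_{ij}\beta_j\Big)^{2}
\quad \text{subject to} \quad
\sum_{j=1}^{p}\lvert\beta_j\rvert \le s_1
\;\;\text{and}\;\;
\sum_{j=2}^{p}\lvert\beta_j - \beta_{j-1}\rvert \le s_2,
```

where the first bound induces sparsity of the coefficients and the second induces local constancy of the coefficient profile.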

2,760 citations


Journal ArticleDOI
TL;DR: This work presents a source localization method based on a sparse representation of sensor measurements with an overcomplete basis composed of samples from the array manifold, which has a number of advantages over other source localization techniques, including increased resolution and improved robustness to noise, to limitations in data quantity, and to correlation of the sources.
Abstract: We present a source localization method based on a sparse representation of sensor measurements with an overcomplete basis composed of samples from the array manifold. We enforce sparsity by imposing penalties based on the ℓ1-norm. A number of recent theoretical results on the sparsifying properties of ℓ1 penalties justify this choice. Explicitly enforcing the sparsity of the representation is motivated by a desire to obtain a sharp estimate of the spatial spectrum that exhibits super-resolution. We propose to use the singular value decomposition (SVD) of the data matrix to summarize multiple time or frequency samples. Our formulation leads to an optimization problem, which we solve efficiently in a second-order cone (SOC) programming framework by an interior point implementation. We propose a grid refinement method to mitigate the effects of limiting estimates to a grid of spatial locations and introduce an automatic selection criterion for the regularization parameter involved in our approach. We demonstrate the effectiveness of the method on simulated data by plots of spatial spectra and by comparing the estimator variance to the Cramér-Rao bound (CRB). We observe that our approach has a number of advantages over other source localization techniques, including increased resolution and improved robustness to noise, to limitations in data quantity, and to correlation of the sources, as well as not requiring an accurate initialization.
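
To make the setup concrete (a paraphrase, with A the overcomplete matrix of steering vectors sampled from the array manifold and s the spatial spectrum on the grid), the single-snapshot problem solved in the SOC framework has the shape

```latex
\min_{s}\; \|y - A s\|_2^2 + \lambda \|s\|_1,
```

and the multi-snapshot case applies the same penalty jointly to the handful of singular vectors retained from the SVD of the data matrix.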

2,288 citations


Journal ArticleDOI
TL;DR: A new unifying view covering all existing proper probabilistic sparse approximations for Gaussian process regression is provided; it relies on expressing the effective prior that each method uses, and highlights the relationships between existing methods.
Abstract: We provide a new unifying view, including all existing proper probabilistic sparse approximations for Gaussian process regression. Our approach relies on expressing the effective prior which the methods are using. This allows new insights to be gained, and highlights the relationship between existing methods. It also allows for a clear, theoretically justified ranking of the closeness of the known approximations to the corresponding full GPs. Finally, we point directly to designs of new, better sparse approximations, combining the best of the existing strategies, within attractive computational constraints.
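
In the now-standard inducing-variable notation (an assumption of this sketch; the paper's own notation may differ), the effective priors being compared replace the exact covariance K_ff by a low-rank surrogate built from M inducing variables u:

```latex
Q_{ff} \;=\; K_{fu}\, K_{uu}^{-1}\, K_{uf},
```

with the individual approximations distinguished by where Q_ff is substituted for K_ff in the joint prior and by any diagonal corrections applied.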

1,881 citations


Proceedings Article
05 Dec 2005
TL;DR: It is shown that this new Gaussian process (GP) regression model can match full GP performance with small M, i.e. very sparse solutions, and it significantly outperforms other approaches in this regime.
Abstract: We present a new Gaussian process (GP) regression model whose covariance is parameterized by the locations of M pseudo-input points, which we learn by gradient-based optimization. We take M ≪ N, where N is the number of real data points, and hence obtain a sparse regression method which has O(M²N) training cost and O(M²) prediction cost per test case. We also find hyperparameters of the covariance function in the same joint optimization. The method can be viewed as a Bayesian regression model with particular input-dependent noise. The method turns out to be closely related to several other sparse GP approaches, and we discuss the relation in detail. We finally demonstrate its performance on some large data sets, and make a direct comparison to other sparse GP methods. We show that our method can match full GP performance with small M, i.e. very sparse solutions, and it significantly outperforms other approaches in this regime.
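
A minimal numpy sketch of the predictive mean under this pseudo-input style of covariance, assuming a squared-exponential kernel with fixed hyperparameters (the helper names rbf and spgp_mean are illustrative, not from the paper's code):

```python
import numpy as np

def rbf(A, B, ell=1.0, sf2=1.0):
    # squared-exponential kernel matrix between row sets A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sf2 * np.exp(-0.5 * d2 / ell ** 2)

def spgp_mean(X, y, Xbar, Xstar, sn2=0.01):
    # Predictive mean under the pseudo-input covariance
    # Q + diag(K - Q) + sn2*I, where Q = Knm Kmm^{-1} Kmn.
    # The only dense solves are M x M, hence the O(M^2 N) training cost.
    M = len(Xbar)
    Kmm = rbf(Xbar, Xbar) + 1e-8 * np.eye(M)
    Knm = rbf(X, Xbar)
    q_diag = np.einsum('ij,ji->i', Knm, np.linalg.solve(Kmm, Knm.T))
    lam = rbf(X[:1], X[:1])[0, 0] - q_diag + sn2   # diag(K - Q) + noise
    B = Kmm + Knm.T @ (Knm / lam[:, None])          # M x M system
    w = np.linalg.solve(B, Knm.T @ (y / lam))
    return rbf(Xstar, Xbar) @ w

X = np.linspace(-3, 3, 200)[:, None]
y = np.sin(X[:, 0])
Xbar = np.linspace(-3, 3, 10)[:, None]   # M = 10 pseudo-inputs
mu = spgp_mean(X, y, Xbar, X)
```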

1,708 citations


Journal ArticleDOI
TL;DR: Data Streams: Algorithms and Applications surveys the emerging area of algorithms for processing data streams and associated applications, which rely on metric embeddings, pseudo-random computations, sparse approximation theory and communication complexity.
Abstract: In the data stream scenario, input arrives very rapidly and there is limited memory to store the input. Algorithms have to work with one or few passes over the data, space less than linear in the input size or time significantly less than the input size. In the past few years, a new theory has emerged for reasoning about algorithms that work within these constraints on space, time, and number of passes. Some of the methods rely on metric embeddings, pseudo-random computations, sparse approximation theory and communication complexity. The applications for this scenario include IP network traffic analysis, mining text message streams and processing massive data sets in general. Researchers in Theoretical Computer Science, Databases, IP Networking and Computer Systems are working on the data stream challenges. This article is an overview and survey of data stream algorithmics and is an updated version of [1].

1,598 citations


Journal ArticleDOI
TL;DR: A novel method for separating images into texture and piecewise smooth (cartoon) parts, exploiting both the variational and the sparsity mechanisms, is presented, combining the basis pursuit denoising (BPDN) algorithm and the total-variation (TV) regularization scheme.
Abstract: The separation of image content into semantic parts plays a vital role in applications such as compression, enhancement, restoration, and more. In recent years, several pioneering works suggested such a separation, some based on a variational formulation and others on independent component analysis and sparsity. This paper presents a novel method for separating images into texture and piecewise smooth (cartoon) parts, exploiting both the variational and the sparsity mechanisms. The method combines the basis pursuit denoising (BPDN) algorithm and the total-variation (TV) regularization scheme. The basic idea presented in this paper is the use of two appropriate dictionaries, one for the representation of textures and the other for the natural scene parts, assumed to be piecewise smooth. Both dictionaries are chosen such that they lead to sparse representations over one type of image content (either texture or piecewise smooth). The use of BPDN with the two amalgamated dictionaries leads to the desired separation, along with noise removal as a by-product. As the need to choose proper dictionaries is generally hard, a TV regularization is employed to better direct the separation process and reduce ringing artifacts. We present a highly efficient numerical scheme to solve the combined optimization problem posed by our model and show several experimental results that validate the algorithm's performance.
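
The combined objective has roughly this shape (a paraphrase: T_t and T_c denote the texture and cartoon dictionaries, α_t and α_c their coefficients, and f the observed image):

```latex
\min_{\alpha_t,\,\alpha_c}\;
\|\alpha_t\|_1 + \|\alpha_c\|_1
+ \lambda\,\|f - T_t\alpha_t - T_c\alpha_c\|_2^2
+ \gamma\, \mathrm{TV}(T_c\alpha_c),
```

so BPDN supplies the sparsity and fidelity terms while the TV term steers the cartoon part toward piecewise smoothness.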

1,032 citations


Journal ArticleDOI
TL;DR: A novel inpainting algorithm is presented that is capable of filling in holes in overlapping texture and cartoon image layers, using a direct extension of a recently developed sparse-representation-based image decomposition method called MCA (morphological component analysis).

974 citations


Journal ArticleDOI
TL;DR: A new class of bases is introduced, called bandelet bases, which decompose the image along multiscale vectors that are elongated in the direction of a geometric flow; this leads to optimal approximation rates for geometrically regular images.
Abstract: This paper introduces a new class of bases, called bandelet bases, which decompose the image along multiscale vectors that are elongated in the direction of a geometric flow. This geometric flow indicates directions in which the image gray levels have regular variations. The image decomposition in a bandelet basis is implemented with a fast subband-filtering algorithm. Bandelet bases lead to optimal approximation rates for geometrically regular images. For image compression and noise removal applications, the geometric flow is optimized with fast algorithms so that the resulting bandelet basis produces minimum distortion. Comparisons are made with wavelet image compression and noise-removal algorithms.

922 citations


Journal ArticleDOI
01 Jan 2005
TL;DR: An overview of OSKI is provided, which is based on research on automatically tuned sparse kernels for modern cache-based superscalar machines, and the primary aim of this interface is to hide the complex decision-making process needed to tune the performance of a kernel implementation for a particular user's sparse matrix and machine.
Abstract: The Optimized Sparse Kernel Interface (OSKI) is a collection of low-level primitives that provide automatically tuned computational kernels on sparse matrices, for use by solver libraries and applications. These kernels include sparse matrix-vector multiply and sparse triangular solve, among others. The primary aim of this interface is to hide the complex decision-making process needed to tune the performance of a kernel implementation for a particular user's sparse matrix and machine, while also exposing the steps and potentially non-trivial costs of tuning at run time. This paper provides an overview of OSKI, which is based on our research on automatically tuned sparse kernels for modern cache-based superscalar machines.
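
OSKI itself is a C library, so no Python binding is implied here; purely as a reference point, the two kernels named above look like this in plain (untuned) SciPy:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve_triangular

n = 1000
A = sp.random(n, n, density=0.01, format='csr', random_state=0)
x = np.ones(n)
y = A @ x                                  # sparse matrix-vector multiply (SpMV)

# Sparse triangular solve: L z = y, with a nonsingular lower-triangular L.
L = sp.tril(A, format='csr') + sp.eye(n, format='csr')
z = spsolve_triangular(L, y, lower=True)
```

OSKI's contribution is choosing, at run time, a data structure and kernel implementation tuned to the nonzero pattern of A and to the machine, which generic CSR code like the above does not attempt.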

546 citations


Proceedings ArticleDOI
TL;DR: A new class of multidimensional representation systems, called shearlets, is described; they are obtained by applying the actions of dilation, shear transformation, and translation to a fixed function, and exhibit the geometric and mathematical properties (e.g., directionality, elongated shapes, scales, oscillations) recently advocated for sparse image processing.
Abstract: In this paper we describe a new class of multidimensional representation systems, called shearlets. They are obtained by applying the actions of dilation, shear transformation and translation to a fixed function, and exhibit the geometric and mathematical properties, e.g., directionality, elongated shapes, scales, oscillations, recently advocated by many authors for sparse image processing applications. These systems can be studied within the framework of a generalized multiresolution analysis. This approach leads to a recursive algorithm for the implementation of these systems, that generalizes the classical cascade algorithm.
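
In later-standard notation (an assumption of this sketch), a shearlet system is generated from a single function ψ by parabolic dilations A_a, shears S_s, and translations t:

```latex
\psi_{a,s,t}(x) = a^{-3/4}\, \psi\!\left(A_a^{-1} S_s^{-1}(x - t)\right),
\qquad
A_a = \begin{pmatrix} a & 0\\ 0 & \sqrt{a} \end{pmatrix},
\quad
S_s = \begin{pmatrix} 1 & s\\ 0 & 1 \end{pmatrix},
```

which is what produces the elongated, direction-sensitive atoms described above.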

530 citations


Journal ArticleDOI
TL;DR: This paper proposes an alternating projection method that is versatile enough to solve a huge class of inverse eigenvalue problems (IEPs), which includes the frame design problem, and addresses the most basic design problem: constructing tight frames with prescribed vector norms.
Abstract: Tight frames, also known as general Welch-bound-equality sequences, generalize orthonormal systems. Numerous applications - including communications, coding, and sparse approximation - require finite-dimensional tight frames that possess additional structural properties. This paper proposes an alternating projection method that is versatile enough to solve a huge class of inverse eigenvalue problems (IEPs), which includes the frame design problem. To apply this method, one needs only to solve a matrix nearness problem that arises naturally from the design specifications. Therefore, it is fast and easy to develop versions of the algorithm that target new design problems. Alternating projection will often succeed even if algebraic constructions are unavailable. To demonstrate that alternating projection is an effective tool for frame design, the paper studies some important structural properties in detail. First, it addresses the most basic design problem: constructing tight frames with prescribed vector norms. Then, it discusses equiangular tight frames, which are natural dictionaries for sparse approximation. Finally, it examines tight frames whose individual vectors have low peak-to-average-power ratio (PAR), which is a valuable property for code-division multiple-access (CDMA) applications. Numerical experiments show that the proposed algorithm succeeds in each of these three cases. The appendices investigate the convergence properties of the algorithm.
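
A minimal sketch of the alternating-projection idea for the most basic problem above, unit-norm tight frames (this simplified loop is illustrative; the paper's algorithm handles a much wider class of constraints):

```python
import numpy as np

def unit_norm_tight_frame(d, n, iters=500, seed=0):
    # Alternate between two constraint sets:
    #   S1: matrices with unit-norm columns (the structural constraint)
    #   S2: alpha-tight frames, X @ X.T = (n/d) * I (the spectral constraint)
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((d, n))
    alpha = np.sqrt(n / d)
    for _ in range(iters):
        X = X / np.linalg.norm(X, axis=0, keepdims=True)  # project onto S1
        U, _, Vt = np.linalg.svd(X, full_matrices=False)
        X = alpha * (U @ Vt)         # nearest alpha-tight frame (polar factor)
    return X

F = unit_norm_tight_frame(4, 7)
print(np.linalg.norm(F @ F.T - (7 / 4) * np.eye(4)))  # small after convergence
```

Each projection is itself a matrix nearness problem, which is the ingredient the paper says one must supply in order to retarget the method at a new design problem.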

Journal ArticleDOI
TL;DR: The purpose of this contribution is to extend some recent results on sparse representations of signals in redundant bases developed in the noise-free case to the case of noisy observations, finding a bound on the number of nonzero entries in xo.
Abstract: The purpose of this contribution is to extend some recent results on sparse representations of signals in redundant bases developed in the noise-free case to the case of noisy observations. The type of question addressed so far is as follows: given an (n,m)-matrix A with m>n and a vector b=Axo, i.e., admitting a sparse representation xo, find a sufficient condition for b to have a unique sparsest representation. The answer is a bound on the number of nonzero entries in xo. We consider the case b=Axo+e, where xo satisfies the sparsity conditions requested in the noise-free case and e is a vector of additive noise or modeling errors, and seek conditions under which xo can be recovered from b in a sense to be defined. The conditions we obtain relate the noise energy to the signal level as well as to a parameter of the quadratic program we use to recover the unknown sparsest representation. When the signal-to-noise ratio is large enough, all the components of the signal are still present when the noise is deleted; otherwise, the smallest components of the signal are themselves erased in a quite rational and predictable way.
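
The quadratic program referred to is of the familiar penalized form (notation paraphrased):

```latex
\min_{x}\; \tfrac{1}{2}\,\|A x - b\|_2^2 + h\,\|x\|_1,
```

where the conditions in the paper tie the choice of the parameter h to the noise energy and the signal level.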

Proceedings ArticleDOI
20 Jun 2005
TL;DR: A "parts and structure" model for object category recognition that can be learnt efficiently and in a semi-supervised manner is presented, learnt from example images containing category instances, without requiring segmentation from background clutter.
Abstract: We present a "parts and structure" model for object category recognition that can be learnt efficiently and in a semi-supervised manner: the model is learnt from example images containing category instances, without requiring segmentation from background clutter. The model is a sparse representation of the object, and consists of a star topology configuration of parts modeling the output of a variety of feature detectors. The optimal choice of feature types (whose repertoire includes interest points, curves and regions) is made automatically. In recognition, the model may be applied efficiently in an exhaustive manner, bypassing the need for feature detectors, to give the globally optimal match within a query image. The approach is demonstrated on a wide variety of categories, and delivers both successful classification and localization of the object within the image.

Proceedings ArticleDOI
18 Mar 2005
TL;DR: A greedy pursuit algorithm called simultaneous orthogonal matching pursuit is presented, which proves that the algorithm calculates simultaneous approximations whose error is within a constant factor of the optimal simultaneous approximation error.
Abstract: A simple sparse approximation problem requests an approximation of a given input signal as a linear combination of T elementary signals drawn from a large, linearly dependent collection. An important generalization is simultaneous sparse approximation. Now one must approximate several input signals at once using different linear combinations of the same T elementary signals. This formulation appears, for example, when analyzing multiple observations of a sparse signal that have been contaminated with noise. A new approach to this problem is presented here: a greedy pursuit algorithm called simultaneous orthogonal matching pursuit. The paper proves that the algorithm calculates simultaneous approximations whose error is within a constant factor of the optimal simultaneous approximation error. This result requires that the collection of elementary signals be weakly correlated, a property that is also known as incoherence. Numerical experiments demonstrate that the algorithm often succeeds, even when the inputs do not meet the hypotheses of the proof.
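
A compact numpy sketch of the greedy rule (assuming a dictionary Phi with unit-norm columns and a signal matrix Y with one observation per column; the variable names are illustrative, not the paper's):

```python
import numpy as np

def somp(Phi, Y, T):
    """Simultaneous OMP: at each step pick the atom whose summed absolute
    correlation with all residual channels is largest, then refit every
    channel on the selected atoms by least squares."""
    residual, support, coef = Y.copy(), [], None
    for _ in range(T):
        score = np.abs(Phi.T @ residual).sum(axis=1)   # one score per atom
        score[support] = 0.0                           # never reselect an atom
        support.append(int(np.argmax(score)))
        coef, *_ = np.linalg.lstsq(Phi[:, support], Y, rcond=None)
        residual = Y - Phi[:, support] @ coef          # joint residual
    return support, coef
```

The summed-correlation selection is what couples the channels; running the loop on a single column recovers ordinary orthogonal matching pursuit.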

Proceedings ArticleDOI
18 Mar 2005
TL;DR: This work describes a homotopy continuation-based algorithm to find and trace efficiently all solutions of basis pursuit as a function of the regularization parameter, and shows the effectiveness of this algorithm in accurately and efficiently generating entire solution paths for basis pursuit.
Abstract: We explore the application of a homotopy continuation-based method for sparse signal representation in overcomplete dictionaries. Our problem setup is based on the basis pursuit framework, which involves a convex optimization problem consisting of terms enforcing data fidelity and sparsity, balanced by a regularization parameter. Choosing a good regularization parameter in this framework is a challenging task. We describe a homotopy continuation-based algorithm to find and trace efficiently all solutions of basis pursuit as a function of the regularization parameter. In addition to providing an attractive alternative to existing optimization methods for solving the basis pursuit problem, this algorithm can also be used to provide an automatic choice for the regularization parameter, based on prior information about the desired number of non-zero components in the sparse representation. Our numerical examples demonstrate the effectiveness of this algorithm in accurately and efficiently generating entire solution paths for basis pursuit, as well as producing reasonable regularization parameter choices. Furthermore, exploring the resulting solution paths in various operating conditions reveals insights about the nature of basis pursuit solutions.
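
The path computed by such a homotopy scheme can be reproduced with the LARS/lasso routine in scikit-learn, shown here as an illustrative stand-in (not the authors' implementation):

```python
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256))            # overcomplete dictionary
x0 = np.zeros(256)
x0[[10, 50, 200]] = [1.0, -0.7, 0.4]          # 3-sparse ground truth
b = A @ x0 + 0.01 * rng.standard_normal(64)

# Trace the entire piecewise-linear regularization path at once.
alphas, active, coefs = lars_path(A, b, method='lasso')

# Automatic parameter choice from prior sparsity knowledge: take the
# first point on the path with at least the desired 3 nonzero components.
k = next(i for i, c in enumerate(coefs.T) if np.count_nonzero(c) >= 3)
print(alphas[k], np.nonzero(coefs[:, k])[0])
```

This mirrors the selection rule described above: the regularization parameter is read off the path rather than fixed in advance.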

Journal ArticleDOI
01 Jun 2005
TL;DR: This paper addresses the problem of human-action recognition by introducing a sparse representation of image sequences as a collection of spatiotemporal events that are localized at points that are salient both in space and time.
Abstract: This paper addresses the problem of human-action recognition by introducing a sparse representation of image sequences as a collection of spatiotemporal events that are localized at points that are salient both in space and time. The spatiotemporal salient points are detected by measuring the variations in the information content of pixel neighborhoods not only in space but also in time. An appropriate distance metric between two collections of spatiotemporal salient points is introduced, which is based on the chamfer distance and an iterative linear time-warping technique that deals with time expansion or time-compression issues. A classification scheme that is based on relevance vector machines and on the proposed distance measure is proposed. Results on real image sequences from a small database depicting people performing 19 aerobic exercises are presented.

Journal ArticleDOI
TL;DR: Experimental timings of an actual parallel sparse matrix-vector multiplication on an SGI Origin 3800 computer show that a sufficiently large reduction in communication volume leads to savings in execution time.
Abstract: A new method is presented for distributing data in sparse matrix-vector multiplication. The method is two-dimensional, tries to minimize the true communication volume, and also tries to spread the computation and communication work evenly over the processors. The method starts with a recursive bipartitioning of the sparse matrix, each time splitting a rectangular matrix into two parts with a nearly equal number of nonzeros. The communication volume caused by the split is minimized. After the matrix partitioning, the input and output vectors are partitioned with the objective of minimizing the maximum communication volume per processor. Experimental results of our implementation, Mondriaan, for a set of sparse test matrices show a reduction in communication volume compared to one-dimensional methods, and in general a good balance in the communication work. Experimental timings of an actual parallel sparse matrix-vector multiplication on an SGI Origin 3800 computer show that a sufficiently large reduction in communication volume leads to savings in execution time.

Proceedings ArticleDOI
17 Oct 2005
TL;DR: It is argued that considerable computational benefits can be gained by substituting the sparse Levenberg-Marquardt algorithm in the implementation of bundle adjustment with a sparse variant of Powell's dog leg non-linear least squares technique.
Abstract: In order to obtain optimal 3D structure and viewing parameter estimates, bundle adjustment is often used as the last step of feature-based structure and motion estimation algorithms. Bundle adjustment involves the formulation of a large scale, yet sparse minimization problem, which is traditionally solved using a sparse variant of the Levenberg-Marquardt optimization algorithm that avoids storing and operating on zero entries. This paper argues that considerable computational benefits can be gained by substituting the sparse Levenberg-Marquardt algorithm in the implementation of bundle adjustment with a sparse variant of Powell's dog leg non-linear least squares technique. Detailed comparative experimental results provide strong evidence supporting this claim.

Journal ArticleDOI
TL;DR: For functions that are uniformly regular outside a set of edge curves that are geometrically regular, the main theorem proves that bandelet approximations satisfy an optimal asymptotic error decay rate.
Abstract: Finding efficient geometric representations of images is a central issue to improving image compression and noise removal algorithms. We introduce bandelet orthogonal bases and frames that are adapted to the geometric regularity of an image. Images are approximated by finding a best bandelet basis or frame that produces a sparse representation. For functions that are uniformly regular outside a set of edge curves that are geometrically regular, the main theorem proves that bandelet approximations satisfy an optimal asymptotic error decay rate. A bandelet image compression scheme is derived. For computational applications, a fast discrete bandelet transform algorithm is introduced, with a fast best basis search which preserves asymptotic approximation and coding error decay rates.
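
Concretely, for images that are uniformly C^α away from C^α edge curves, the optimal decay referred to takes the form (constants omitted)

```latex
\|f - f_M\|^2 \;\le\; C\, M^{-\alpha},
```

where f_M is the best M-term bandelet approximation; plain wavelet bases are limited to an M^{-1} decay on such images because of the edges.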

Journal Article
TL;DR: In this paper, the authors use performance profiles as a tool for evaluating and comparing the performance of serial sparse direct solvers on an extensive set of symmetric test problems taken from a range of practical applications.
Abstract: In recent years a number of solvers for the direct solution of large sparse symmetric linear systems of equations have been developed. These include solvers that are designed for the solution of positive definite systems as well as those that are principally intended for solving indefinite problems. In this study, we use performance profiles as a tool for evaluating and comparing the performance of serial sparse direct solvers on an extensive set of symmetric test problems taken from a range of practical applications.
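
For readers unfamiliar with the tool: a performance profile in the usual Dolan-Moré sense plots, for each solver s over a problem set P, the fraction of problems solved within a factor τ of the best solver,

```latex
\rho_s(\tau) \;=\; \frac{1}{|P|}\,
\Big|\big\{\, p \in P \;:\; t_{p,s} \le \tau \cdot \min_{s'} t_{p,s'} \,\big\}\Big|,
```

so ρ_s(1) is the fraction of problems on which s is fastest, and ρ_s(τ) for large τ measures robustness.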

Proceedings ArticleDOI
TL;DR: A simple and yet efficient variation of the K-SVD that handles the extraction of non-negative dictionaries is presented, and its generalization to the nonnegative matrix factorization problem, which suits signals generated under an additive model with positive atoms, is described.
Abstract: In recent years there is a growing interest in the study of sparse representation for signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described as sparse linear combinations of these atoms. Recent activity in this field has concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Designing dictionaries to better fit the above model can be done by either selecting pre-specified transforms, or by adapting the dictionary to a set of training signals. Both these techniques have been considered in recent years; however, this topic is largely still open. In this paper we address the latter problem of designing dictionaries, and introduce the K-SVD algorithm for this task. We show how this algorithm could be interpreted as a generalization of the K-Means clustering process, and demonstrate its behavior in both synthetic tests and in applications on real data. Finally, we turn to describe its generalization to the nonnegative matrix factorization problem, which suits signals generated under an additive model with positive atoms. We present a simple and yet efficient variation of the K-SVD that handles such extraction of non-negative dictionaries.
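
A minimal numpy sketch of the two alternating stages (sparse coding, then one SVD-based rank-1 refit per atom); it follows the published structure but omits refinements such as replacing unused atoms:

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def ksvd(Y, n_atoms, sparsity, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(iters):
        X = orthogonal_mp(D, Y, n_nonzero_coefs=sparsity)  # sparse coding stage
        for k in range(n_atoms):                           # dictionary update stage
            users = np.nonzero(X[k])[0]                    # signals using atom k
            if users.size == 0:
                continue
            X[k, users] = 0.0
            E = Y[:, users] - D @ X[:, users]              # error without atom k
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, k] = U[:, 0]                              # best rank-1 refit
            X[k, users] = s[0] * Vt[0]
    return D, X
```

The rank-1 SVD step updates the atom and its coefficients jointly, which is what distinguishes K-SVD from a K-means-style assignment followed by averaging.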

Proceedings ArticleDOI
TL;DR: This paper presents the first 3D discrete curvelet transform, an extension of the 2D transform described in Candes et al. [1], and describes three different implementations: in-core, out-of-core, and MPI-based parallel implementations.
Abstract: In this paper, we present the first 3D discrete curvelet transform. This transform is an extension of the 2D transform described in Candes et al. [1]. The resulting curvelet frame preserves the important properties, such as parabolic scaling, tightness, and sparse representation for singularities of codimension one. We describe three different implementations: in-core, out-of-core, and MPI-based parallel implementations. Numerical results verify the desired properties of the 3D curvelets and demonstrate the efficiency of our implementations.

Book ChapterDOI
19 Jun 2005
TL;DR: A new variant of the NMF method for learning spatially localized, sparse, part-based subspace representations of visual patterns, based on positively constrained projections and related both to NMF and to the conventional SVD or PCA decomposition, is proposed.
Abstract: In image compression and feature extraction, linear expansions are standardly used. It was recently pointed out by Lee and Seung that the positivity or non-negativity of a linear expansion is a very powerful constraint that seems to lead to sparse representations for the images. Their technique, called Non-negative Matrix Factorization (NMF), was shown to be a useful technique in approximating high-dimensional data where the data are comprised of non-negative components. We propose here a new variant of the NMF method, called P-NMF, for learning spatially localized, sparse, part-based subspace representations of visual patterns. The algorithm is based on positively constrained projections and is related both to NMF and to the conventional SVD or PCA decomposition. Two iterative positive projection algorithms are suggested, one based on minimizing Euclidean distance and the other on minimizing the divergence of the original data matrix and its non-negative approximation. Experimental results show that P-NMF derives bases which are somewhat better suited for a localized representation than NMF.
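
For context, the base factorization being constrained is Lee and Seung's NMF with multiplicative updates for the Euclidean cost; the sketch below is plain NMF, not the P-NMF variant (which, roughly, ties the encoding to the basis through a projective form):

```python
import numpy as np

def nmf(V, r, iters=200, seed=0):
    # Multiplicative updates for min ||V - W H||_F^2 with W, H >= 0.
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W, H = rng.random((m, r)), rng.random((r, n))
    eps = 1e-12                                # guard against division by zero
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Non-negativity is preserved automatically because every update multiplies by a non-negative ratio, which is the "powerful constraint" the abstract refers to.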

Journal ArticleDOI
TL;DR: The LDL software package is a set of short, concise routines for factorizing symmetric positive-definite sparse matrices, with some applicability to symmetric indefinite matrices.
Abstract: The LDL software package is a set of short, concise routines for factorizing symmetric positive-definite sparse matrices, with some applicability to symmetric indefinite matrices. Its primary purpose is to illustrate much of the basic theory of sparse matrix algorithms in as concise a code as possible, including an elegant method of sparse symmetric factorization that computes the factorization row-by-row but stores it column-by-column. The entire symbolic and numeric factorization consists of less than 50 executable lines of code. The package is written in C, and includes a MATLAB interface.
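
The package is written in C with a MATLAB interface; as a quick dense analogue of what it computes, SciPy exposes an LDL^T factorization for symmetric matrices:

```python
import numpy as np
from scipy.linalg import ldl

# Dense analogue of the sparse LDL^T factorization the package computes.
A = np.array([[4.0, 1.0, 2.0],
              [1.0, 3.0, 0.0],
              [2.0, 0.0, 5.0]])
L, D, perm = ldl(A, lower=True)        # A == L @ D @ L.T (up to pivoting)
print(np.allclose(L @ D @ L.T, A))
```

The sparse case adds the symbolic analysis (elimination tree, column counts) that the LDL package condenses, together with the numeric factorization, into fewer than 50 executable lines.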

Journal ArticleDOI
TL;DR: In this paper, the relative Newton algorithm, previously proposed for blind deconvolution of one-dimensional signals, is generalized to blind deconvolution of images; a sparsification method allows blind deconvolution of arbitrary sources, and optimal sparsifying transformations are found by supervised learning.
Abstract: The relative Newton algorithm, previously proposed for quasi-maximum likelihood blind source separation and blind deconvolution of one-dimensional signals is generalized for blind deconvolution of images. Smooth approximation of the absolute value is used as the nonlinear term for sparse sources. In addition, we propose a method of sparsification, which allows blind deconvolution of arbitrary sources, and show how to find optimal sparsifying transformations by supervised learning.

Proceedings ArticleDOI
TL;DR: In this article, a tree-structured sparse representation is proposed as additional prior information for linear inverse problems with limited numbers of measurements, which leads to better reconstruction while requiring less time compared to methods that only assume sparse representations.
Abstract: Recent studies in linear inverse problems have recognized the sparse representation of an unknown signal in a certain basis as a useful and effective prior for solving those problems. In many multiscale bases (e.g. wavelets), signals of interest (e.g. piecewise-smooth signals) not only have few significant coefficients, but those significant coefficients are also well-organized in trees. We propose to exploit the tree-structured sparse representation as additional prior information for linear inverse problems with limited numbers of measurements. We present numerical results showing that exploiting the sparse tree representations leads to better reconstruction while requiring less time compared to methods that only assume sparse representations.

Journal ArticleDOI
TL;DR: This work describes an unsupervised algorithm for learning both localized features and their transformations directly from images using a sparse bilinear generative model and shows that from an arbitrary set of natural images, the algorithm produces oriented basis filters that can simultaneously represent features in an image and their transformation.
Abstract: Recent algorithms for sparse coding and independent component analysis (ICA) have demonstrated how localized features can be learned from natural images. However, these approaches do not take image transformations into account. We describe an unsupervised algorithm for learning both localized features and their transformations directly from images using a sparse bilinear generative model. We show that from an arbitrary set of natural images, the algorithm produces oriented basis filters that can simultaneously represent features in an image and their transformations. The learned generative model can be used to translate features to different locations, thereby reducing the need to learn the same feature at multiple locations, a limitation of previous approaches to sparse coding and ICA. Our results suggest that by explicitly modeling the interaction between local image features and their transformations, the sparse bilinear approach can provide a basis for achieving transformation-invariant vision.
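
Schematically (notation mine, not the paper's), a bilinear generative model of this kind represents an image patch through interacting feature and transformation codes,

```latex
I \;=\; \sum_{i}\sum_{j} w_{ij}\, x_i\, y_j,
```

where the x_i form a sparse feature code, the y_j encode the transformation (e.g., a translation), and the w_{ij} are learned basis vectors; fixing y recovers an ordinary linear sparse-coding model.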

Book ChapterDOI
18 May 2005
TL;DR: A modification of the published algorithm to solve the sparsity problem that occurs in text clustering is presented, based on a new subspace clustering algorithm that automatically calculates the feature weights in the k-means clustering process.
Abstract: This paper presents a new method to solve the problem of clustering large and complex text data. The method is based on a new subspace clustering algorithm that automatically calculates the feature weights in the k-means clustering process. In clustering sparse text data the feature weights are used to discover clusters from subspaces of the document vector space and identify key words that represent the semantics of the clusters. We present a modification of the published algorithm to solve the sparsity problem that occurs in text clustering. Experimental results on real-world text data have shown that the new method outperformed the Standard KMeans and Bisection-KMeans algorithms, while still maintaining efficiency of the k-means clustering process.

Journal ArticleDOI
TL;DR: It is shown how the modification in the Cholesky factorization associated with this rank-2 modification of C can be computed efficiently using a sparse rank-1 technique developed in [T. A. Davis and W. W. Hager, SIAM J. Matrix Anal. Appl., 20 (1999), pp. 606--627].
Abstract: Given a sparse, symmetric positive definite matrix C and an associated sparse Cholesky factorization LDL^T, we develop sparse techniques for updating the factorization after a symmetric modification of a row and column of C. We show how the modification in the Cholesky factorization associated with this rank-2 modification of C can be computed efficiently using a sparse rank-1 technique developed in [T. A. Davis and W. W. Hager, SIAM J. Matrix Anal. Appl., 20 (1999), pp. 606--627]. We also determine how the solution of a linear system Lx = b changes after changing a row and column of C or after a rank-r change in C.

Journal ArticleDOI
TL;DR: The computational details of a variant of the classical Gram-Schmidt algorithm, called the quasi-Gram-Schmidt algorithm, used to obtain two kinds of low-rank approximations, are treated, and a MATLAB implementation is described.
Abstract: In many applications---latent semantic indexing, for example---it is required to obtain a reduced rank approximation to a sparse matrix A. Unfortunately, the approximations based on traditional decompositions, like the singular value and QR decompositions, are not in general sparse. Stewart [(1999), 313--323] has shown how to use a variant of the classical Gram-Schmidt algorithm, called the quasi-Gram-Schmidt algorithm, to obtain two kinds of low-rank approximations. The first, the SPQR approximation, is a pivoted, Q-less QR approximation of the form (X R11^{-1})(R11 R12), where X consists of columns of A. The second, the SCR approximation, is of the form A ≈ X T Y^T, where X and Y consist of columns and rows of A and T is small. In this article we treat the computational details of these algorithms and describe a MATLAB implementation.
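
A dense stand-in for the SPQR construction using SciPy's pivoted QR (the quasi-Gram-Schmidt algorithm obtains the same R factors without ever forming Q, which is what keeps the computation sparse):

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 10)) @ rng.standard_normal((10, 60))  # rank 10
k = 10

Q, R, piv = qr(A, mode='economic', pivoting=True)        # A[:, piv] = Q @ R
X = A[:, piv[:k]]                                        # k actual columns of A
T = solve_triangular(R[:k, :k], R[:k, :], lower=False)   # R11^{-1} [R11 R12]

A_hat = np.empty_like(A)
A_hat[:, piv] = X @ T                                    # SPQR-style approximation
print(np.linalg.norm(A - A_hat))                         # ~0 for this rank-10 A
```

Because the factor X consists of actual columns of A, it stays as sparse as A itself, which is the point of the approximation.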