
Showing papers on "Sparse approximation" published in 2001


Journal ArticleDOI
TL;DR: It is proved that if S is representable as a highly sparse superposition of atoms from this time-frequency dictionary, then there is only one such highly sparse representation of S, and it can be obtained by solving the convex optimization problem of minimizing the ℓ1 norm of the coefficients among all decompositions.
Abstract: Suppose a discrete-time signal S(t), 0 ≤ t
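As a rough illustration of the ℓ1 recovery step described in the TL;DR (the dictionary Phi, the signal s, and the LP solver choice below are illustrative assumptions, not the paper's setup), basis pursuit can be posed as a linear program by splitting the coefficients into positive and negative parts:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, s):
    """Minimize ||c||_1 subject to Phi @ c = s by writing c = u - v with u, v >= 0."""
    n_atoms = Phi.shape[1]
    cost = np.ones(2 * n_atoms)          # sum(u) + sum(v) equals ||c||_1 at the optimum
    A_eq = np.hstack([Phi, -Phi])        # Phi @ (u - v) = s
    res = linprog(cost, A_eq=A_eq, b_eq=s, bounds=(0, None))
    uv = res.x
    return uv[:n_atoms] - uv[n_atoms:]
```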

2,207 citations


Journal ArticleDOI
TL;DR: This work suggests a two-stage separation process: a priori selection of a possibly overcomplete signal dictionary in which the sources are assumed to be sparsely representable, followed by unmixing the sources by exploiting their sparse representability.
Abstract: The blind source separation problem is to extract the underlying source signals from a set of linear mixtures, where the mixing matrix is unknown. This situation is common in acoustics, radio, medical signal and image processing, hyperspectral imaging, and other areas. We suggest a two-stage separation process: a priori selection of a possibly overcomplete signal dictionary (for instance, a wavelet frame or a learned dictionary) in which the sources are assumed to be sparsely representable, followed by unmixing the sources by exploiting their sparse representability. We consider the general case of more sources than mixtures, but also derive a more efficient algorithm in the case of a nonovercomplete dictionary and an equal number of sources and mixtures. Experiments with artificial signals and musical sounds demonstrate significantly better separation than other known techniques.
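A minimal sketch of the two-stage idea for the square two-mixture, two-source case, assuming a wavelet decomposition (via PyWavelets) as the sparsifying dictionary and a naive angle-histogram heuristic for estimating the mixing directions; the function, thresholds, and clustering step are illustrative choices, not the authors' algorithm:

```python
import numpy as np
import pywt  # PyWavelets; the dictionary choice here is an assumption

def separate_two_mixtures(x1, x2, wavelet="db4", level=4):
    """Toy 2x2 sparse-domain separation: estimate mixing directions from
    the angles of large wavelet-coefficient pairs, then unmix."""
    # Stage 1: sparsifying transform of each mixture.
    c1 = np.concatenate(pywt.wavedec(x1, wavelet, level=level))
    c2 = np.concatenate(pywt.wavedec(x2, wavelet, level=level))
    C = np.vstack([c1, c2])

    # Keep only the largest coefficient pairs, where one source tends to dominate.
    mag = np.linalg.norm(C, axis=0)
    strong = C[:, mag > np.percentile(mag, 95)]

    # Stage 2: mixing columns show up as clusters of coefficient angles.
    angles = np.arctan2(strong[1], strong[0]) % np.pi
    hist, edges = np.histogram(angles, bins=180)
    order = np.argsort(hist)[::-1]
    a1 = edges[order[0]]
    a2 = next(edges[k] for k in order if abs(edges[k] - a1) > 0.3)
    A_est = np.array([[np.cos(a1), np.cos(a2)],
                      [np.sin(a1), np.sin(a2)]])

    # Unmix in the time domain with the inverse of the estimated mixing matrix.
    S = np.linalg.solve(A_est, np.vstack([x1, x2]))
    return S, A_est
```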

829 citations


Proceedings ArticleDOI
21 May 2001
TL;DR: This paper gives a short introduction to some new developments related to support vector machines (SVM), a new class of kernel-based techniques introduced within statistical learning theory and structural risk minimization; the approach leads to solving convex optimization problems, and the model complexity follows from the solution.
Abstract: Neural networks such as multilayer perceptrons and radial basis function networks have been very successful in a wide range of problems. In this paper we give a short introduction to some new developments related to support vector machines (SVM), a new class of kernel-based techniques introduced within statistical learning theory and structural risk minimization. This new approach leads to solving convex optimization problems, and the model complexity also follows from this solution. We especially focus on a least squares support vector machine formulation (LS-SVM), which makes it possible to solve highly nonlinear and noisy black-box modelling problems, even in very high-dimensional input spaces. While standard SVMs have basically been applied only to static problems like classification and function estimation, LS-SVM models have been extended to recurrent models and to use in optimal control problems. Moreover, using weighted least squares and special pruning techniques, LS-SVMs can be employed for robust nonlinear estimation and sparse approximation. Applications of (LS)-SVMs to a large variety of artificial and real-life data sets indicate the huge potential of these methods.
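For concreteness, a minimal LS-SVM regression sketch with an RBF kernel (the dense, unpruned textbook formulation; function names and hyperparameters are illustrative):

```python
import numpy as np

def rbf_kernel(X, Z, sigma=1.0):
    """Gaussian (RBF) kernel matrix between row-sample sets X and Z."""
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Z**2, 1)[None, :] - 2 * X @ Z.T
    return np.exp(-d2 / (2 * sigma**2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Solve the LS-SVM KKT system [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[1:], sol[0]          # alpha, b

def lssvm_predict(X_train, alpha, b, X_test, sigma=1.0):
    return rbf_kernel(X_test, X_train, sigma) @ alpha + b
```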

275 citations


Journal Article
TL;DR: In this article, the authors exploit the property of the sources to have a sparse representation in a corresponding signal dictionary, which can consist of wavelets, wavelet packets, etc., or be obtained by learning from a given family of signals.
Abstract: The blind source separation problem is to extract the underlying source signals from a set of their linear mixtures, where the mixing matrix is unknown. This situation is common, e.g., in acoustics, radio, and medical signal processing. We exploit the property of the sources to have a sparse representation in a corresponding signal dictionary. Such a dictionary may consist of wavelets, wavelet packets, etc., or be obtained by learning from a given family of signals. Starting from the maximum a posteriori framework, which is applicable to the case of more sources than mixtures, we derive a few other categories of objective functions, which provide faster and more robust computations when there are an equal number of sources and mixtures. Our experiments with artificial signals and with musical sounds demonstrate significantly better separation than other known techniques.

253 citations


Journal ArticleDOI
TL;DR: An overview of the algorithm, performance results, and the integration of the solver into complex industrial simulation tools are given, and an example is discussed that inherently (due to its design goal) produces linear systems close to singularity.

186 citations


Book ChapterDOI
28 May 2001
TL;DR: Experience indicates that for matrices arising in scientific simulations, register-level optimizations are critical; this work focuses on the optimizations and parameter selection techniques used in Sparsity at the register level.
Abstract: Sparse matrix-vector multiplication is an important computational kernel that tends to perform poorly on modern processors, largely because of its high ratio of memory operations to arithmetic operations. Optimizing this algorithm is difficult, both because of the complexity of memory systems and because the performance is highly dependent on the nonzero structure of the matrix. The Sparsity system is designed to address these problems by allowing users to automatically build sparse matrix kernels that are tuned to their matrices and machines. The most difficult aspect of optimizing these algorithms is selecting among a large set of possible transformations and choosing parameters, such as block size. In this paper we discuss the optimization of two operations: a sparse matrix times a dense vector and a sparse matrix times a set of dense vectors. Our experience indicates that for matrices arising in scientific simulations, register-level optimizations are critical, and we focus here on the optimizations and parameter selection techniques used in Sparsity for register-level optimizations. We demonstrate speedups of up to 2× for the single vector case and 5× for the multiple vector case.
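The register-blocking transformation at the heart of Sparsity can be imitated at a high level with SciPy's block sparse row (BSR) format: the matrix is stored as small dense r x c blocks (with explicit zeros filled in where the pattern is irregular), which is the structure register-level code generators exploit. A rough sketch, with the block size as the tunable parameter (this stands in for, and does not reproduce, Sparsity's generated kernels):

```python
import numpy as np
from scipy import sparse

def blocked_spmv(A_csr, x, block=(2, 2)):
    """Convert CSR to BSR with a chosen r x c block size, then multiply.
    The block size must divide the matrix dimensions; picking it well is
    exactly the parameter-selection problem discussed above."""
    A_bsr = sparse.bsr_matrix(A_csr, blocksize=block)
    return A_bsr @ x
```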

164 citations


Journal ArticleDOI
TL;DR: The method scales only linearly with the number of instances, i.e. the amount of data to be classified, and achieves correctness rates which are competitive with those of the best existing methods.
Abstract: Only O(h_n^{-1} n^{d-1}) instead of O(h_n^{-d}) grid points and unknowns are involved. Here d denotes the dimension of the feature space and h_n = 2^{-n} gives the mesh size. To be precise, we suggest using the sparse grid combination technique [42] where the classification problem is discretized and solved on a certain sequence of conventional grids with uniform mesh sizes in each coordinate direction. The sparse grid solution is then obtained from the solutions on these different grids by linear combination. In contrast to other sparse grid techniques, the combination method is simpler to use and can be parallelized in a natural and straightforward way. We describe the sparse grid combination technique for the classification problem in terms of the regularization network approach. We then give implementational details and discuss the complexity of the algorithm. It turns out that the method scales only linearly with the number of instances, i.e. the amount of data to be classified. Finally we report on the quality of the classifier built by our new method. Here we consider standard test problems from the UCI repository and problems with huge synthetic data sets in up to 9 dimensions. It turns out that our new method achieves correctness rates which are competitive with those of the best existing methods.
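In this notation, the standard combination technique (the formulation the paper cites as [42]) assembles the sparse grid solution from partial solutions computed on anisotropic grids of level l = (l_1, ..., l_d); a sketch of the usual formula:

```latex
u^{(c)}_n(x) \;=\; \sum_{q=0}^{d-1} (-1)^q \binom{d-1}{q}
    \sum_{l_1 + \cdots + l_d \,=\, n + (d-1) - q} u_{l_1,\ldots,l_d}(x)
```

where each partial solution u_{l_1,...,l_d} is obtained on a conventional grid with mesh size 2^{-l_i} in coordinate direction i; the partial problems are independent, which is what makes the method straightforward to parallelize.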

135 citations


Proceedings Article
03 Jan 2001
TL;DR: The property of multiscale transforms, such as wavelet or wavelet packets, is used to decompose signals into sets of local features with various degrees of sparsity, for selecting the best (most sparse) subsets of features for further separation.
Abstract: We consider a problem of blind source separation from a set of instantaneous linear mixtures, where the mixing matrix is unknown. It was discovered recently that exploiting the sparsity of sources in an appropriate representation, according to some signal dictionary, dramatically improves the quality of separation. In this work we use the property of multiscale transforms, such as wavelet or wavelet packets, to decompose signals into sets of local features with various degrees of sparsity. We use this intrinsic property for selecting the best (most sparse) subsets of features for further separation. The performance of the algorithm is verified on noise-free and noisy data. Experiments with simulated signals, musical sounds and images demonstrate significant improvement of separation quality over previously reported results.

96 citations


Journal ArticleDOI
TL;DR: This work considers the problem of computing low-rank approximations of matrices in a factorized form with sparse factors and presents numerical examples arising from some application areas to illustrate the efficiency and accuracy of the proposed algorithms.
Abstract: We consider the problem of computing low-rank approximations of matrices. The novel aspects of our approach are that we require the low-rank approximations to be written in a factorized form with sparse factors, and the degree of sparsity of the factors can be traded off for reduced reconstruction error by certain user-determined parameters. We give a detailed error analysis of our proposed algorithms and compare the computed sparse low-rank approximations with those obtained from singular value decomposition. We present numerical examples arising from some application areas to illustrate the efficiency and accuracy of our algorithms.
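One crude way to picture the sparsity-versus-error trade-off (a thresholded truncated SVD, not the authors' algorithm; the threshold tau plays the role of a user-determined parameter):

```python
import numpy as np

def sparse_low_rank(A, rank, tau):
    """Truncated SVD followed by hard-thresholding of the factors:
    larger tau gives sparser factors X, Y at the cost of a larger
    reconstruction error ||A - X @ Y||."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    X = U[:, :rank] * s[:rank]      # left factor, columns scaled by singular values
    Y = Vt[:rank]                   # right factor
    X[np.abs(X) < tau] = 0.0
    Y[np.abs(Y) < tau] = 0.0
    return X, Y                     # A is approximated by X @ Y
```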

85 citations



Journal ArticleDOI
31 May 2001
TL;DR: This paper analyzes the performance of the sparse matrix–vector product with symmetric matrices originating from the FEM and describes techniques that lead to a fast implementation.
Abstract: The sparse matrix–vector product is an important computational kernel that performs inefficiently on many computers with super-scalar RISC processors. In this paper we analyse the performance of the sparse matrix–vector product with symmetric matrices originating from the FEM and describe techniques that lead to a fast implementation. It is shown how these optimisations can be incorporated into an efficient parallel implementation using message-passing. We conduct numerical experiments on many different machines and show that our optimisations speed up the sparse matrix–vector multiplication substantially.
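One standard device for symmetric matrices is to store only the upper triangle and let every stored entry contribute twice during the product; a plain Python sketch of the idea (the paper's implementation is, of course, an optimized compiled kernel):

```python
import numpy as np

def symmetric_spmv(rowptr, colind, val, x):
    """y = A @ x for a symmetric A stored as its upper triangle in CSR:
    each stored a_ij updates both y_i and y_j."""
    y = np.zeros_like(x, dtype=float)
    n = len(rowptr) - 1
    for i in range(n):
        for k in range(rowptr[i], rowptr[i + 1]):
            j, a = colind[k], val[k]
            y[i] += a * x[j]
            if j != i:              # do not double-count the diagonal
                y[j] += a * x[i]
    return y
```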

Proceedings ArticleDOI
07 Jul 2001
TL;DR: The formulation of S-PCA is novel in that multi-scale representations emerge for a variety of ensembles including face images, images from outdoor scenes and a database of optical flow vectors representing a motion class.
Abstract: Sparse Principal Component Analysis (S-PCA) is a novel framework for learning a linear, orthonormal basis representation for structure intrinsic to an ensemble of images. S-PCA is based on the discovery that natural images exhibit structure in a low-dimensional subspace in a sparse, scale-dependent form. The S-PCA basis optimizes an objective function which trades off correlations among output coefficients for sparsity in the description of basis vector elements. This objective function is minimized by a simple, robust and highly scalable adaptation algorithm, consisting of successive planar rotations of pairs of basis vectors. The formulation of S-PCA is novel in that multi-scale representations emerge for a variety of ensembles including face images, images from outdoor scenes and a database of optical flow vectors representing a motion class.

Book
30 Nov 2001
TL;DR: In this book, the authors present data structures for sparse matrix computation, sparse symmetric linear system solvers, sparse QR decomposition, and sparse LP/QP solvers, with applications to power system analysis including load flow, short circuit analysis, state estimation, optimal power flow, and power system dynamics.
Abstract: Preface. Acknowledgments. 1. Introduction. 2. Object Orientation for Modeling Computations. 3. Data Structure for Sparse Matrix Computation. 4. Sparse Symmetric Linear System Solver. 5. Sparse QR Decomposition. 6. Optimization Methods. 7. Sparse LP and QP Solvers. 8. Load Flow Analysis. 9. Short Circuit Analysis. 10. Power System State Estimation. 11. Optimal Power Flow. 12. Power System Dynamics. Appendices. References. Index.

Patent
13 Apr 2001
TL;DR: In this paper, a foveal reconstruction processor is proposed to reconstruct a signal approximation which has the same geometrical structures as the input signal along the trajectories and which is regular away from these trajectories.
Abstract: Methods and apparatus for processing n-dimensional digitized signals with a foveal processing which constructs a sparse representation by taking advantage of the geometrical regularity of the signal structures. This invention can compress, restore, match and classify signals. Foveal coefficients are computed with one-dimensional inner products along trajectories of an n-directional trajectory list. The invention includes a trajectory finder which computes an n-directional trajectory list from the input n-dimensional signal, in order to choose optimal locations to compute the foveal coefficients. From foveal coefficients, a foveal reconstruction processor recovers a signal approximation which has the same geometrical structures as the input signal along the trajectories and which is regular away from these trajectories. A foveal residue can be calculated as a difference with the input signal. A bandelet processor decorrelates the foveal coefficients by applying invertible linear operators along each trajectory. Bandelet coefficients are inner products between the signal and n-dimensional bandelet vectors elongated along the trajectories. A geometric processor computes geometric coefficients by decorrelating the coordinates of these trajectories with linear operators, to take advantage of their geometrical regularity. Setting to zero small bandelet coefficients and small geometric coefficients yields a sparse signal representation.

Proceedings ArticleDOI
02 Apr 2001
TL;DR: Sparse matrix-vector multiplication algorithms are applied to text storage and retrieval as a means of supporting efficient and accurate text processing; to improve accuracy, a novel matrix-based relevance feedback technique and a proximity search algorithm are developed.
Abstract: Documents, both internal and related publicly available, are now considered a corporate asset. The potential to efficiently and accurately search such documents is of great significance. We demonstrate the application of sparse matrix-vector multiplication algorithms for text storage and retrieval as a means of supporting efficient and accurate text processing. As many parallel sparse matrix-vector multiplication algorithms exist, such an information retrieval approach lends itself to parallelism. This enables us to attack the problem of parallel information retrieval, which has resisted good scalability. We use sparse matrix compression algorithms and compare the storage of a subcollection of the commonly used NIST TREC corpus with a traditional inverted index. We demonstrate query processing using a sparse matrix-vector multiplication algorithm. Our results indicate that our approach saves approximately 35% of the total storage requirements for the inverted index. Additionally, to improve accuracy, we develop a novel matrix-based relevance feedback technique as well as a proximity search algorithm. The results of our experiment to incorporate proximity search capability into the system also indicate 35% less storage for the sparse matrix over the inverted index.
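A toy illustration of the matrix view of retrieval (tokenization and weighting are placeholders, not the paper's TREC setup): the corpus becomes a sparse term-by-document matrix, and scoring a query against every document is a single sparse matrix-vector product.

```python
import numpy as np
from scipy import sparse

def build_term_doc_matrix(docs, vocab):
    """Term-by-document count matrix in CSR form; vocab maps word -> row index."""
    rows, cols, vals = [], [], []
    for d, text in enumerate(docs):
        for word in text.lower().split():
            if word in vocab:
                rows.append(vocab[word]); cols.append(d); vals.append(1.0)
    return sparse.csr_matrix((vals, (rows, cols)),
                             shape=(len(vocab), len(docs)))

def score_query(A, query, vocab):
    """Relevance scores for all documents via one sparse matrix-vector product."""
    q = np.zeros(A.shape[0])
    for word in query.lower().split():
        if word in vocab:
            q[vocab[word]] = 1.0
    return A.T @ q
```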

Journal ArticleDOI
TL;DR: This work is the first to provide sparse and parallel sparse support for the array intrinsics of Fortran 90, and it includes a complete complexity analysis of the sparse implementation.
Abstract: Fortran 90 provides a rich set of array intrinsic functions. Each of these array intrinsic functions operates on the elements of multi-dimensional array objects concurrently. They provide a rich source of parallelism and play an increasingly important role in automatic support of data parallel programming. However, there is no such support if these intrinsic functions are applied to sparse data sets. In this paper, we address this open gap by presenting an efficient library for parallel sparse computations with Fortran 90 array intrinsic operations. Our method provides both compression schemes and distribution schemes on distributed memory environments applicable to higher-dimensional sparse arrays. This way, programmers need not worry about low-level system details when developing sparse applications. Sparse programs can be expressed concisely using array expressions, and parallelized with the help of our library. Our sparse libraries are built for array intrinsics of Fortran 90, and they include an extensive set of array operations such as CSHIFT, EOSHIFT, MATMUL, MERGE, PACK, SUM, RESHAPE, SPREAD, TRANSPOSE, UNPACK, and section moves. Our work, to the best of our knowledge, is the first to provide sparse and parallel sparse support for the array intrinsics of Fortran 90. In addition, we provide a complete complexity analysis for our sparse implementation. The complexity of our algorithms is proportional to the number of nonzero elements in the arrays, which is consistent with the conventional design criteria for sparse algorithms and data structures. Our current testbed is an IBM SP2 workstation cluster. Preliminary experimental results with numerical routines, numerical applications, and data-intensive applications related to OLAP (on-line analytical processing) show that our approach is promising in speeding up sparse matrix computations on both sequential and distributed memory environments if the programs are expressed with Fortran 90 array expressions.

Proceedings Article
03 Jan 2001
TL;DR: The adaptive TAP Gibbs free energy for a general densely connected probabilistic model with quadratic interactions and arbitrary single-site constraints is derived, and it is shown how a specific sequential minimization of the free energy leads to a generalization of Minka's expectation propagation.
Abstract: The adaptive TAP Gibbs free energy for a general densely connected probabilistic model with quadratic interactions and arbitrary single-site constraints is derived. We show how a specific sequential minimization of the free energy leads to a generalization of Minka's expectation propagation. Lastly, we derive a sparse representation version of the sequential algorithm. The usefulness of the approach is demonstrated on classification and density estimation with Gaussian processes and on an independent component analysis problem.

Book ChapterDOI
12 Sep 2001
TL;DR: A new approach to scale-space which is derived from the 3D Laplace equation instead of the heat equation is presented and a scale adaptive filtering which is used for denoising images is presented.
Abstract: In this paper, we present a new approach to scale-space which is derived from the 3D Laplace equation instead of the heat equation. The resulting lowpass and bandpass filters are discussed and they are related to the monogenic signal. As an application, we present a scale adaptive filtering which is used for denoising images. The adaptivity is based on the local energy of spherical quadrature filters and can also be used for sparse representation of images.

Proceedings ArticleDOI
TL;DR: A method to predict surface-related multiples in a true 3D sense that takes the sparse (cross-line) sampling of the seismic data into account and extracts the 3D multiple information present in the sparsely sampled cross-line multiple contributions, which is then translated into (3D) predicted multiple traces.
Abstract: Although the theory of iterative surface-related multiple elimination applies to both 2D and 3D wave-fields, the procedure is in practice only applied to 2D (or 2D subsets of 3D) datasets. The method involves (Kirchhoff) summations of the extrapolated input data (multiple contributions), which requires alias (and edge effect) protection, a requirement not easily met by 3D datasets. With current marine acquisition geometries the seismic data are acquired densely sampled in the in-line direction but (very) sparsely in the cross-line direction. Therefore, in this paper, we propose a method to predict surface-related multiples in a true 3D sense that takes the sparse (cross-line) sampling of the seismic data into account. In this method the (sparse) cross-line Kirchhoff summation is replaced with a sparse parametric inversion that extracts the 3D multiple information (i.e. the Fresnel zones at the surface of the extrapolated wave-field) present in the sparsely sampled cross-line multiple contributions, which is then translated into (3D) predicted multiple traces.

Journal ArticleDOI
TL;DR: It is shown that an efficient solution to the problem of approximating det(A)^{1/n} for a large sparse symmetric positive definite matrix A of order n is obtained by using a sparse approximate inverse of A.
Abstract: This paper is concerned with the problem of approximating det(A)^{1/n} for a large sparse symmetric positive definite matrix A of order n. It is shown that an efficient solution of this problem is obtained by using a sparse approximate inverse of A. The method is explained and theoretical properties are discussed. The method is ideal for implementation on a parallel computer. Numerical experiments are described that illustrate the performance of this new method and provide a comparison with Monte Carlo--type methods from the literature.

Book ChapterDOI
Y. Saad1
TL;DR: An overview of parallel algorithms and their implementations for solving large sparse linear systems which arise in scientific and engineering applications and Variants of Schwarz procedures and Schur complement techniques will be discussed.
Abstract: This paper presents an overview of parallel algorithms and their implementations for solving large sparse linear systems which arise in scientific and engineering applications. Preconditioners constitute the most important ingredient in solving such systems. As will be seen, the most common preconditioners used for sparse linear systems adapt domain decomposition concepts to the more general framework of “distributed sparse linear systems”. Variants of Schwarz procedures and Schur complement techniques will be discussed. We will also report on our own experience in the parallel implementation of a fairly complex simulation of solid-liquid flows.

Journal ArticleDOI
TL;DR: It is demonstrated that for established signal expansions like the Karhunen-Loeve transform, the lapped orthogonal transform, and the biorthogonal 7/9 wavelet, it is possible to improve the approximation capabilities by up to 30% by fine tuning of the expansion vectors.
Abstract: Traditional signal decompositions such as transforms, filterbanks, and wavelets generate signal expansions using the analysis-synthesis setting: the expansion coefficients are found by taking the inner product of the signal with the corresponding analysis vector. In this paper, we try to free ourselves from the analysis-synthesis paradigm by concentrating on the synthesis or reconstruction part of the signal expansion. Ignoring the analysis issue completely, we construct sets of synthesis vectors, which are denoted waveform dictionaries, for efficient signal representation. Within this framework, we present an algorithm for designing waveform dictionaries that allow sparse representations: the objective is to approximate a training signal using a small number of dictionary vectors. Our algorithm optimizes the dictionary vectors with respect to the average nonlinear approximation error, i.e., the error resulting when keeping a fixed number n of expansion coefficients but not necessarily the first n coefficients. Using signals from a Gaussian, autoregressive process with correlation factor 0.95, it is demonstrated that for established signal expansions like the Karhunen-Loeve transform, the lapped orthogonal transform, and the biorthogonal 7/9 wavelet, it is possible to improve the approximation capabilities by up to 30% by fine tuning of the expansion vectors.
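To make the notion of nonlinear approximation error concrete for the orthonormal case (a definitional sketch, not the paper's dictionary-design algorithm): the n-term approximation keeps the n largest-magnitude coefficients rather than the first n.

```python
import numpy as np

def n_term_error(basis, signal, n):
    """Nonlinear n-term approximation error for an orthonormal basis
    given as a matrix whose columns are the basis vectors."""
    coeffs = basis.T @ signal                  # analysis coefficients
    keep = np.argsort(np.abs(coeffs))[-n:]     # n largest-magnitude coefficients
    approx = basis[:, keep] @ coeffs[keep]     # synthesis from the kept terms
    return np.linalg.norm(signal - approx)
```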

Patent
10 Sep 2001
TL;DR: In this patent, adaptive algorithms are proposed for calculating adaptive equalizer coefficients for sparse transmission channels, in the form of a Sparse Least Mean Squares (LMS) algorithm and a Sparse Constant Modulus Algorithm (CMA).
Abstract: An apparatus (320) and method are disclosed for using adaptive algorithms to exploit sparsity in target weight vectors in an adaptive channel equalizer (300). An adaptive algorithm comprises a selected value of a prior and a selected value of a cost function. The present invention comprises algorithms adapted for calculating adaptive equalizer coefficients for sparse transmission channels. It provides sparse algorithms in the form of a Sparse Least Mean Squares (LMS) algorithm, a Sparse Constant Modulus Algorithm (CMA), and a Sparse Decision Directed (DD) algorithm.
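For orientation only, a generic LMS equalizer update with a simple zero-attracting term standing in for a sparsity prior; this is a common sparse-LMS variant, not necessarily the patent's algorithm, and all names and constants are illustrative:

```python
import numpy as np

def sparse_lms(x, d, taps=32, mu=0.01, rho=1e-4):
    """LMS adaptation of equalizer taps w with an extra shrinkage term
    that pulls small taps toward zero (the 'sparse' part of the update)."""
    w = np.zeros(taps)
    for n in range(taps, len(x)):
        u = x[n - taps:n][::-1]                 # tapped delay line (most recent first)
        e = d[n] - w @ u                        # error against the desired symbol
        w += mu * e * u - rho * np.sign(w)      # LMS step plus zero-attracting term
    return w
```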

Proceedings ArticleDOI
07 Oct 2001
TL;DR: The main contribution in this paper is the improvement of an important result due to Donoho and Huo (1999) concerning the replacement of the ℓ0 optimization problem by a linear programming minimization when searching for the unique sparse representation.
Abstract: An elementary proof of a basic uncertainty principle concerning pairs of representations of R^N vectors in different orthonormal bases is provided. The result, slightly stronger than stated before, has a direct impact on the uniqueness property of the sparse representation of such vectors using pairs of orthonormal bases as overcomplete dictionaries. The main contribution in this paper is the improvement of an important result due to Donoho and Huo (1999) concerning the replacement of the ℓ0 optimization problem by a linear programming minimization when searching for the unique sparse representation.

Proceedings ArticleDOI
29 Nov 2001
TL;DR: This paper presents an enhanced method for data compression using a wavelet transform, to be applied to power system signals for quality evaluation, based on a prior estimation of the sinusoidal components of the signal under analysis.
Abstract: This paper presents an enhanced method for data compression using a wavelet transform, to be applied to power system signals for quality evaluation. The proposed approach is based on a prior estimation of the sinusoidal components of the signal under analysis, which are subtracted from the original data in order to generate a transient-type signal that is subsequently passed to the compression techniques. The approach employs the Kalman filter and the adaptive notch filter techniques to provide the estimation of the sinusoidal components. By taking into account the sparse-representation property of the wavelet transform, an improvement in the compression rate and in the signal degradation is attained. Finally, a proposed frame format to store the coded signal is presented.

Journal ArticleDOI
TL;DR: In this paper, a recursive method for the LU factorization of sparse matrices is described, and performance results show that the recursive approach may perform comparably to leading software packages for sparse matrix factorization in terms of execution time, memory usage and error estimates of the solution.
Abstract: This paper describes a recursive method for the LU factorization of sparse matrices. The recursive formulation of common linear algebra codes has been proven very successful in dense matrix computations. An extension of the recursive technique for sparse matrices is presented. Performance results given here show that the recursive approach may perform comparably to leading software packages for sparse matrix factorization in terms of execution time, memory usage, and error estimates of the solution.

Proceedings ArticleDOI
26 Aug 2001
TL;DR: It turns out that the new method achieves correctness rates which are competitive with those of the best existing methods and scales linearly with the number of given data points.
Abstract: Recently we presented a new approach [18] to the classification problem arising in data mining. It is based on the regularization network approach but, in contrast to other methods which employ ansatz functions associated with data points, we use a grid in the usually high-dimensional feature space for the minimization process. To cope with the curse of dimensionality, we employ sparse grids [49]. Thus, only O(h_n^{-1} n^{d-1}) instead of O(h_n^{-d}) grid points and unknowns are involved. Here d denotes the dimension of the feature space and h_n = 2^{-n} gives the mesh size. We use the sparse grid combination technique [28] where the classification problem is discretized and solved on a sequence of conventional grids with uniform mesh sizes in each dimension. The sparse grid solution is then obtained by linear combination. In contrast to our former work, where d-linear functions were used, we now apply linear basis functions based on a simplicial discretization. This allows us to handle more dimensions, and the algorithm needs fewer operations per data point. We describe the sparse grid combination technique for the classification problem, give implementational details and discuss the complexity of the algorithm. It turns out that the method scales linearly with the number of given data points. Finally we report on the quality of the classifier built by our new method on data sets with up to 10 dimensions. It turns out that our new method achieves correctness rates which are competitive with those of the best existing methods.

Proceedings ArticleDOI
13 Jul 2001
TL;DR: In this paper, the authors jointly estimate the supports of unknown sparse objects in the image and pixel values on these supports using the level set method and the conjugate gradient method, respectively.
Abstract: We address image estimation from sparse Fourier samples. The problem is formulated as joint estimation of the supports of unknown sparse objects in the image, and pixel values on these supports. The domain and the pixel values are alternately estimated using the level-set method and the conjugate gradient method, respectively. Our level-set evolution shows a unique switching behavior, which stabilizes the level-set evolution. Furthermore, the trade-off between the stability and the speed of evolution can be easily controlled by the number of the conjugate gradient steps, hence removing the re-initialization steps in conventional level set approaches.

01 Jan 2001
TL;DR: This work is concerned with preconditioning sparse matrices for computing eigenvalues and for solving linear systems of equations.
Abstract: Preconditioning Sparse Matrices for Computing Eigenvalues and Solving Linear Systems of Equations

01 Jan 2001
TL;DR: A new type of representation for medium level vision operations is explored, and it is shown how sparse, monopolar representations can be used to speed up and improve template matching.
Abstract: In this thesis a new type of representation for medium level vision operations is explored. We focus on representations that are sparse and monopolar. The word sparse signifies that information in the feature sets used is not necessarily present at all points. On the contrary, most features will be inactive. The word monopolar signifies that all features have the same sign, e.g. are either positive or zero. A zero feature value denotes "no information", and for non-zero values, the magnitude signifies the relevance. A sparse scale-space representation of local image structure (lines and edges) is developed. A method known as the channel representation is used to generate sparse representations, and its ability to deal with multiple hypotheses is described. It is also shown how these hypotheses can be extracted in a robust manner. The connection of soft histograms (i.e. histograms with overlapping bins) to the channel representation, as well as to the use of dithering in relaxation of quantisation errors, is shown. The use of soft histograms for estimation of unknown probability density functions (PDF), and estimation of image rotation are demonstrated. The advantage of using sparse, monopolar representations in associative learning is demonstrated. Finally, we show how sparse, monopolar representations can be used to speed up and improve template matching.