Showing papers on "Sparse approximation published in 2002"


Journal ArticleDOI
TL;DR: The methods of this paper, illustrated for RBF kernels, show how to obtain robust estimates, with selection of an appropriate number of hidden units, in the case of outliers or non-Gaussian error distributions with heavy tails.

1,197 citations


Proceedings ArticleDOI
07 Nov 2002
TL;DR: A simple yet efficient multiplicative algorithm is given for finding the optimal values of the hidden components in non-negative sparse coding, and it is shown how the basis vectors can be learned from the observed data.
Abstract: Non-negative sparse coding is a method for decomposing multivariate data into non-negative sparse components. We briefly describe the motivation behind this type of data representation and its relation to standard sparse coding and non-negative matrix factorization. We then give a simple yet efficient multiplicative algorithm for finding the optimal values of the hidden components. In addition, we show how the basis vectors can be learned from the observed data. Simulations demonstrate the effectiveness of the proposed method.
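
A minimal NumPy sketch of a multiplicative update of this kind, assuming the objective 0.5*||x - A s||^2 + lam*sum(s); the names A, x, s, lam and the toy data are ours for illustration, not the paper's code:

```python
import numpy as np

def nnsc_update_s(A, x, s, lam, n_iter=100):
    """Multiplicative update for the non-negative hidden components s in
    non-negative sparse coding, assuming the objective
    0.5*||x - A s||^2 + lam*sum(s). Elementwise multiplication keeps s >= 0."""
    AtX = A.T @ x
    AtA = A.T @ A
    for _ in range(n_iter):
        s = s * AtX / (AtA @ s + lam + 1e-12)  # small epsilon avoids division by zero
    return s

# Toy usage with a random non-negative basis and signal
rng = np.random.default_rng(0)
A = rng.random((64, 128))
x = rng.random(64)
s = nnsc_update_s(A, x, rng.random(128), lam=0.1)
```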

871 citations


Journal ArticleDOI
TL;DR: The main contribution of this paper is the improvement of an important result due to Donoho and Huo (2001) concerning the replacement of the l0 optimization problem by a linear programming minimization when searching for the unique sparse representation.
Abstract: An elementary proof of a basic uncertainty principle concerning pairs of representations of R^N vectors in different orthonormal bases is provided. The result, slightly stronger than stated before, has a direct impact on the uniqueness property of the sparse representation of such vectors using pairs of orthonormal bases as overcomplete dictionaries. The main contribution of this paper is the improvement of an important result due to Donoho and Huo (2001) concerning the replacement of the l0 optimization problem by a linear programming (LP) minimization when searching for the unique sparse representation.
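
The appeal of the result is that the l1 problem is a linear program. A minimal sketch of the standard reduction (split the coefficient vector into non-negative parts u and v) using scipy; the dictionary D, signal s, and toy data are our illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(D, s):
    """Solve min ||g||_1 subject to D g = s as a linear program via the
    standard split g = u - v with u, v >= 0 (a sketch, not the paper's code)."""
    n, m = D.shape
    c = np.ones(2 * m)            # sum(u) + sum(v) equals ||g||_1 at the optimum
    A_eq = np.hstack([D, -D])     # D u - D v = s
    res = linprog(c, A_eq=A_eq, b_eq=s, bounds=(0, None))
    u, v = res.x[:m], res.x[m:]
    return u - v

# Toy usage: a dictionary made of two orthonormal bases, as in the paper's setting
rng = np.random.default_rng(1)
Q = np.linalg.qr(rng.standard_normal((16, 16)))[0]
D = np.hstack([np.eye(16), Q])
g_true = np.zeros(32)
g_true[[3, 20]] = [1.0, -2.0]
g_hat = basis_pursuit(D, s=D @ g_true)
```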

693 citations


Book ChapterDOI
28 May 2002
TL;DR: An approach is presented for learning to detect objects in still gray images, based on a sparse, part-based representation; it achieves high detection accuracy on a difficult test set of real-world images and is highly robust to partial occlusion and background variation.
Abstract: We present an approach for learning to detect objects in still gray images that is based on a sparse, part-based representation of objects. A vocabulary of information-rich object parts is automatically constructed from a set of sample images of the object class of interest. Images are then represented using parts from this vocabulary, along with spatial relations observed among them. Based on this representation, a feature-efficient learning algorithm is used to learn to detect instances of the object class. The framework developed can be applied to any object with distinguishable parts in a relatively fixed spatial configuration. We report experiments on images of side views of cars. Our experiments show that the method achieves high detection accuracy on a difficult test set of real-world images, and is highly robust to partial occlusion and background variation. In addition, we discuss and offer solutions to several methodological issues that are significant for the research community to be able to evaluate object detection approaches.

605 citations


Proceedings Article
01 Jan 2002
TL;DR: A framework for sparse Gaussian process (GP) methods which uses forward selection with criteria based on information-theoretic principles, which allows for Bayesian model selection and is less complex in implementation is presented.
Abstract: We present a framework for sparse Gaussian process (GP) methods which uses forward selection with criteria based on information-theoretic principles, previously suggested for active learning. Our goal is not only to learn d-sparse predictors (which can be evaluated in O(d) rather than O(n), d ≪ n, n the number of training points), but also to perform training under strong restrictions on time and memory requirements. The scaling of our method is at most O(n · d^2), and in large real-world classification experiments we show that it can match prediction performance of the popular support vector machine (SVM), yet can be significantly faster in training. In contrast to the SVM, our approximation produces estimates of predictive probabilities ('error bars'), allows for Bayesian model selection and is less complex in implementation.
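
To make the O(d) evaluation concrete, here is a generic subset-of-regressors style sparse GP in NumPy. This is a sketch of d-sparse prediction only; the paper's actual contribution, the information-theoretic forward selection of the active set, is replaced here by a naive fixed subset:

```python
import numpy as np

def rbf(X1, X2, ell=1.0):
    """Squared-exponential kernel matrix between two point sets."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def sparse_gp_fit(X, y, idx, noise=0.1):
    """Fit a subset-of-regressors style sparse GP on the d active points X[idx].
    A generic sketch, not the paper's selection criterion."""
    Xd = X[idx]
    Kdn = rbf(Xd, X)                            # d x n cross-covariance
    Kdd = rbf(Xd, Xd)
    A = Kdn @ Kdn.T + noise**2 * Kdd            # d x d system
    alpha = np.linalg.solve(A + 1e-10 * np.eye(len(idx)), Kdn @ y)
    return Xd, alpha

def sparse_gp_predict(x_star, Xd, alpha):
    return rbf(x_star, Xd) @ alpha              # O(d) per test point

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
Xd, alpha = sparse_gp_fit(X, y, idx=np.arange(0, 200, 20))  # naive active set
print(sparse_gp_predict(np.array([[0.5]]), Xd, alpha))
```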

590 citations


Proceedings ArticleDOI
10 Dec 2002
TL;DR: The contourlet transform can be designed to satisfy the anisotropy scaling relation for curves, and thus offers a fast and structured curvelet-like decomposition, and provides a sparse representation for two-dimensional piecewise smooth signals resembling images.
Abstract: We propose a new scheme, named contourlet, that provides a flexible multiresolution, local and directional image expansion. The contourlet transform is realized efficiently via a double iterated filter bank structure. Furthermore, it can be designed to satisfy the anisotropy scaling relation for curves, and thus offers a fast and structured curvelet-like decomposition. As a result, the contourlet transform provides a sparse representation for two-dimensional piecewise smooth signals resembling images. Finally, we show some numerical experiments demonstrating the potential of contourlets in several image processing tasks.

440 citations


Journal ArticleDOI
TL;DR: The basic ideas of ℋ- and ℋ²-matrices are introduced, and an algorithm that adaptively computes approximations of general matrices in the latter format is presented.
Abstract: A class of matrices (H²-matrices) has recently been introduced for storing discretisations of elliptic problems and integral operators from the BEM. These matrices have the following properties: (i) They are sparse in the sense that only a few data are needed for their representation. (ii) The matrix-vector multiplication is of linear complexity. (iii) In general, sums and products of these matrices are no longer in the same set, but after truncation to the H²-matrix format these operations are again of quasi-linear complexity. We introduce the basic ideas of H- and H²-matrices and present an algorithm that adaptively computes approximations of general matrices in the latter format.

247 citations


01 Jan 2002
TL;DR: This paper extends previous results and proves a similar relationship for the most general dictionary D, showing that previous results emerge as special cases of the new extended theory.
Abstract: Finding a sparse representation of signals is desired in many applications. For a representation dictionary D and a given signal S ∈ span{D}, we are interested in finding the sparsest vector γ such that Dγ = S. Previous results have shown that if D is composed of a pair of unitary matrices, then under some restrictions dictated by the nature of the matrices involved, one can find the sparsest representation using an l1 minimization rather than using the l0 norm of the required decomposition. Obviously, such a result is highly desired since it leads to a convex linear programming form. In this paper we extend previous results and prove a similar relationship for the most general dictionary D. We also show that previous results emerge as special cases of the new extended theory. In addition, we show that the above results can be markedly improved if an ensemble of such signals is given, and higher order moments are used.

217 citations


Journal ArticleDOI
01 Feb 2002
TL;DR: The block partitioning and scheduling problem for sparse parallel factorization without pivoting is considered, with two major aims: the scalability of the parallel solver and the compromise between memory overhead and efficiency.
Abstract: Solving large sparse symmetric positive definite systems of linear equations is a crucial and time-consuming step, arising in many scientific and engineering applications. The block partitioning and scheduling problem for sparse parallel factorization without pivoting is considered. There are two major aims to this study: the scalability of the parallel solver, and the compromise between memory overhead and efficiency. Parallel experiments on a large collection of irregular industrial problems validate our approach.

211 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose a sparse minimum-variance reconstructor for a conventional natural guide star AO system, using a sparse approximation for turbulence statistics and recognizing that the nonsparse matrix terms arising from LGS position uncertainty are low-rank adjustments that can be evaluated by using the matrix inversion lemma.
Abstract: The complexity of computing conventional matrix multiply wave-front reconstructors scales as O(n^3) for most adaptive optical (AO) systems, where n is the number of deformable mirror (DM) actuators. This is impractical for proposed systems with extremely large n. It is known that sparse matrix methods improve this scaling for least-squares reconstructors, but sparse techniques are not immediately applicable to the minimum-variance reconstructors now favored for multiconjugate adaptive optical (MCAO) systems with multiple wave-front sensors (WFSs) and DMs. Complications arise from the nonsparse statistics of atmospheric turbulence, and the global tip/tilt WFS measurement errors associated with laser guide star (LGS) position uncertainty. A description is given of how sparse matrix methods can still be applied by use of a sparse approximation for turbulence statistics and by recognizing that the nonsparse matrix terms arising from LGS position uncertainty are low-rank adjustments that can be evaluated by using the matrix inversion lemma. Sample numerical results for AO and MCAO systems illustrate that the approximation made to turbulence statistics has negligible effect on estimation accuracy, the time to compute the sparse minimum-variance reconstructor for a conventional natural guide star AO system scales as O(n^(3/2)) and is only a few seconds for n = 3500, and sparse techniques reduce the reconstructor computations by a factor of 8 for sample MCAO systems with 2417 DM actuators and 4280 WFS subapertures. With extrapolation to 9700 actuators and 17,120 subapertures, a reduction by a factor of approximately 30 or 40 to 1 is predicted.
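
The low-rank device mentioned here is the matrix inversion (Sherman-Morrison-Woodbury) lemma: (A + U Uᵀ)⁻¹ b = A⁻¹b − A⁻¹U (I + Uᵀ A⁻¹ U)⁻¹ Uᵀ A⁻¹ b. A generic scipy sketch, assuming a sparse matrix A plus a rank-k term U Uᵀ (toy names and data, not the reconstructor's actual matrices):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def solve_sparse_plus_lowrank(A, U, b):
    """Solve (A + U U^T) x = b with A sparse and U of shape (n, k), k small,
    via the matrix inversion lemma. Only sparse solves and a tiny k x k
    dense solve are needed."""
    lu = splu(A.tocsc())                   # sparse factorization of A
    Ainv_b = lu.solve(b)
    Ainv_U = lu.solve(U)                   # k sparse solves
    k = U.shape[1]
    S = np.eye(k) + U.T @ Ainv_U           # small "capacitance" matrix
    return Ainv_b - Ainv_U @ np.linalg.solve(S, U.T @ Ainv_b)

# Toy usage: sparse tridiagonal A plus a rank-2 update
n = 500
A = sp.diags([np.full(n - 1, -1.0), np.full(n, 4.0), np.full(n - 1, -1.0)],
             [-1, 0, 1])
U = np.random.default_rng(3).standard_normal((n, 2))
x = solve_sparse_plus_lowrank(A, U, np.ones(n))
```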

178 citations


Journal ArticleDOI
TL;DR: The interface design for the Sparse Basic Linear Algebra Subprograms (BLAS), the kernels in the recent standard that are concerned with unstructured sparse matrices, is discussed, along with how this interface can shield one from concern over the specific storage scheme for the sparse matrix.
Abstract: We discuss the interface design for the Sparse Basic Linear Algebra Subprograms (BLAS), the kernels in the recent standard from the BLAS Technical Forum that are concerned with unstructured sparse matrices. The motivation for such a standard is to encourage portable programming while allowing for library-specific optimizations. In particular, we show how this interface can shield one from concern over the specific storage scheme for the sparse matrix. This design makes it easy to add further functionality to the sparse BLAS in the future. We illustrate the use of the Sparse BLAS with examples in the three supported programming languages, Fortran 95, Fortran 77, and C.
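
To make "the specific storage scheme" concrete, here is compressed sparse row (CSR) storage, one of the layouts such an interface hides, with a plain-Python matrix-vector product over it (an illustrative sketch, not the Sparse BLAS API itself):

```python
import numpy as np

def csr_matvec(values, col_idx, row_ptr, x):
    """y = A @ x for A stored in compressed sparse row (CSR) form: one
    concrete storage scheme of the kind the interface shields users from."""
    n = len(row_ptr) - 1
    y = np.zeros(n)
    for i in range(n):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# The 3x3 matrix [[1,0,2],[0,3,0],[4,0,5]] in CSR form:
values  = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
col_idx = np.array([0, 2, 1, 0, 2])
row_ptr = np.array([0, 2, 3, 5])
print(csr_matvec(values, col_idx, row_ptr, np.array([1.0, 1.0, 1.0])))
```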

Dissertation
01 Mar 2002
TL;DR: This thesis proposes a two-step solution to construct a probabilistic approximation to the posterior of Gaussian processes, and combines the sparse approximation with an extension to the Bayesian online algorithm that allows multiple iterations for each input, thus approximating a batch solution.
Abstract: In recent years there has been an increased interest in applying non-parametric methods to real-world problems. Significant research has been devoted to Gaussian processes (GPs) due to their increased flexibility when compared with parametric models. These methods use Bayesian learning, which generally leads to analytically intractable posteriors. This thesis proposes a two-step solution to construct a probabilistic approximation to the posterior. In the first step we adapt the Bayesian online learning to GPs: the final approximation to the posterior is the result of propagating the first and second moments of intermediate posteriors obtained by combining a new example with the previous approximation. The propagation of functional forms is solved by showing the existence of a parametrisation of posterior moments that uses combinations of the kernel function at the training points, transforming the Bayesian online learning of functions into a parametric formulation. The drawback is the prohibitive quadratic scaling of the number of parameters with the size of the data, making the method inapplicable to large datasets. The second step solves the problem of the exploding parameter size and makes GPs applicable to arbitrarily large datasets. The approximation is based on a measure of distance between two GPs, the KL-divergence between GPs. This second approximation is with a constrained GP in which only a small subset of the whole training dataset is used to represent the GP. This subset is called the Basis Vector (BV) set, and the resulting GP is a sparse approximation to the true posterior. As this sparsity is based on the KL-minimisation, it is probabilistic and independent of the way the posterior approximation from the first step is obtained. We combine the sparse approximation with an extension to the Bayesian online algorithm that allows multiple iterations for each input, thus approximating a batch solution. The resulting sparse learning algorithm is a generic one: for different problems we only change the likelihood. The algorithm is applied to a variety of problems, and we examine its performance both on more classical regression and classification tasks and on data assimilation and a simple density estimation problem.

Patent
11 Jan 2002
TL;DR: In this paper, a new data structure and algorithms are proposed that offer at least equal performance in common sparse matrix tasks and improved performance in many, and are applied to a word-document index to produce fast build and query times for document retrieval.
Abstract: A new data structure and accompanying algorithms offer at least equal performance in common sparse matrix tasks, and improved performance in many. This is applied to a word-document index to produce fast build and query times for document retrieval.

Journal ArticleDOI
Anshul Gupta1
TL;DR: The experiments show that the algorithmic choices made in WSMP enable it to run more than twice as fast as the best among similar solvers and that WSMP can factor some of the largest sparse matrices available from real applications in only a few seconds on a 4-CPU workstation.
Abstract: During the past few years, algorithmic improvements alone have reduced the time required for the direct solution of unsymmetric sparse systems of linear equations by almost an order of magnitude. This paper compares the performance of some well-known software packages for solving general sparse systems. In particular, it demonstrates the consistently high level of performance achieved by WSMP, the most recent of such solvers. It compares the various algorithmic components of these solvers and discusses their impact on solver performance. Our experiments show that the algorithmic choices made in WSMP enable it to run more than twice as fast as the best among similar solvers and that WSMP can factor some of the largest sparse matrices available from real applications in only a few seconds on a 4-CPU workstation. Thus, the combination of advances in hardware and algorithms makes it possible to solve quickly and easily general sparse linear systems that might have been considered too large until recently.

Proceedings ArticleDOI
13 May 2002
TL;DR: A method of sparse decomposition of stereo audio signals is developed, and its application to blind separation of more than two sources from only two linear mixtures is tested.
Abstract: We develop a method of sparse decomposition of stereo audio signals, and test its application to blind separation of more than two sources from only two linear mixtures. The decomposition is done in a stereo dictionary which we can define based on any standard time-frequency or time-scale dictionary, such as the multiscale Gabor dictionary. A decomposition of a stereo mixture in the dictionary is computed with a Matching Pursuit type algorithm called Stereo Matching Pursuit. We experiment with an application to blind source separation with three (mono) sources mixed on two channels. We cluster the parameters of the stereo atoms of the decomposition to estimate the mixing parameters, and recover estimates of the sources by a partial reconstruction using only the appropriate atoms of the decomposition. The method outperforms the best achievable linear demixing by 3 dB to more than 7 dB in our preliminary experiments, and its performance should increase as we let the number of iterations of the pursuit increase. Sample sound files can be found here: http://www.irisa.fr/metiss/gribonva/
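
The core of the decomposition is a matching pursuit loop. A minimal NumPy sketch of generic (mono) matching pursuit, assuming unit-norm atoms; the paper's Stereo Matching Pursuit extends this to stereo atoms with per-atom mixing parameters:

```python
import numpy as np

def matching_pursuit(D, s, n_iter=50):
    """Plain matching pursuit: greedily pick the dictionary atom most
    correlated with the residual and subtract its contribution.
    Atoms (columns of D) are assumed to have unit norm."""
    r = s.copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ r                   # correlations with all atoms
        k = np.argmax(np.abs(corr))
        coeffs[k] += corr[k]
        r -= corr[k] * D[:, k]           # update residual
    return coeffs, r

# Toy usage: recover a 2-atom signal from a random normalized dictionary
rng = np.random.default_rng(4)
D = rng.standard_normal((128, 512))
D /= np.linalg.norm(D, axis=0)
s = 2.0 * D[:, 7] - 1.5 * D[:, 300]
coeffs, residual = matching_pursuit(D, s)
```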

Journal ArticleDOI
TL;DR: Simulations with synthetic evoked responses mixed into natural 122-channel MEG data show significant improvement in the accuracy of signal restoration; the convex optimization problem is solved by a Newton-type method.

Journal ArticleDOI
TL;DR: An improved method is presented for achieving sparsity in least-squares support vector machines, which takes into account the residuals for all training patterns rather than only those incorporated in the sparse kernel expansion.

Journal ArticleDOI
TL;DR: A new multilayered representation technique for images is described, consisting of a cascade of compressions applied successively to the image itself and to the residuals that resulted from the previous compressions.
Abstract: The main contribution of this work is a new paradigm for image representation and image compression. We describe a new multilayered representation technique for images. An image is parsed into a superposition of coherent layers: piecewise smooth regions layer, textures layer, etc. The multilayered decomposition algorithm consists of a cascade of compressions applied successively to the image itself and to the residuals that resulted from the previous compressions. During each iteration of the algorithm, we code the residual part in a lossy way: we only retain the most significant structures of the residual part, which results in a sparse representation. Each layer is encoded independently with a different transform, or basis, at a different bitrate, and the combination of the compressed layers can always be reconstructed in a meaningful way. The strength of the multilayer approach comes from the fact that different sets of basis functions complement each other: some of the basis functions will give a reasonable account of the large trend of the data, while others will catch the local transients, or the oscillatory patterns. This multilayered representation has many applications in image understanding, and in image and video coding. We have implemented the algorithm and we have studied its capabilities.
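
The cascade is easy to sketch: each layer lossily codes the current residual in its own basis and hands what it cannot capture to the next layer. A schematic NumPy version, assuming hard thresholding as the lossy step and a 2-D DCT plus the identity as the two "bases" (both are our illustrative choices, not the paper's transforms or coder):

```python
import numpy as np
from scipy.fft import dctn, idctn

def multilayer_code(image, transforms, inverses, keep=0.05):
    """Cascade of lossy codings applied to successive residuals: each layer
    keeps only its largest coefficients (a sparse layer), and the residual
    is passed on to the next basis. Schematic sketch of the multilayered idea."""
    residual = image.astype(float).copy()
    layers = []
    for fwd, inv in zip(transforms, inverses):
        c = fwd(residual)
        c[np.abs(c) < np.quantile(np.abs(c), 1 - keep)] = 0.0  # keep top fraction
        layers.append(c)
        residual = residual - inv(c)   # what this basis failed to capture
    return layers, residual

# Toy usage on a random "image"
img = np.random.default_rng(5).random((64, 64))
layers, res = multilayer_code(
    img,
    transforms=[lambda z: dctn(z, norm="ortho"), lambda z: z.copy()],
    inverses=[lambda z: idctn(z, norm="ortho"), lambda z: z],
)
```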

Journal ArticleDOI
TL;DR: The new feature presented here is to construct the basis in a hierarchical decomposition of the three-space and not, as in previous approaches, in a parameter space of the boundary manifold, which leads to sparse representations of the operator.
Abstract: A multilevel transform is introduced to represent discretizations of integral operators from potential theory by nearly sparse matrices. The new feature presented here is to construct the basis in a hierarchical decomposition of the three-space and not, as in previous approaches, in a parameter space of the boundary manifold. This construction leads to sparse representations of the operator even for geometrically complicated, multiply connected domains. We will demonstrate that the numerical cost of applying the operator to a vector using the nonstandard form is essentially equal to that of performing the same operation with the fast multipole method. With a second compression scheme the multiscale approach can be further optimized. The diagonal blocks of the transformed matrix can be used as an inexpensive preconditioner, which is empirically shown to reduce the condition number of discretizations of the single layer operator so that it is independent of the mesh size.

Journal ArticleDOI
TL;DR: A universal piecewise-linear (PWL) CNN coupling cell, the simplicial cell, is proposed; it is intended to work with binary as well as gray-level inputs and is based on the theory of canonical simplicial PWL representations.
Abstract: In this paper, we propose a universal piecewise-linear (PWL) CNN coupling cell, the simplicial cell, which is intended to work with binary as well as gray-level inputs. The construction of the cell is based on the theory of canonical simplicial PWL representations. As a consequence, the coupling function is endowed with important numerical features, namely: the representation of the characteristic cell function is sparse; the family of coupling functions constitutes a Hilbert space; powerful solution algorithms have been developed for the approximation of nonlinear functions, which is particularly useful when the CNN parameters need to be tuned from examples; the parameters can be extracted from a truth table when the CNN is specified analytically.

Journal ArticleDOI
01 Dec 2002
TL;DR: It turns out that the method scales linearly with the number of given data points and is well suited for data mining applications where the amount of data is very large, but where the dimension of the feature space is moderately high.
Abstract: Recently we presented a new approach [20] to the classification problem arising in data mining. It is based on the regularization network approach but, in contrast to other methods which employ ansatz functions associated to data points, we use a grid in the usually high-dimensional feature space for the minimization process. To cope with the curse of dimensionality, we employ sparse grids [52]. Thus, only O(h_n^{-1} n^{d-1}) instead of O(h_n^{-d}) grid points and unknowns are involved. Here d denotes the dimension of the feature space and h_n = 2^{-n} gives the mesh size. We use the sparse grid combination technique [30], where the classification problem is discretized and solved on a sequence of conventional grids with uniform mesh sizes in each dimension. The sparse grid solution is then obtained by linear combination. The method computes a nonlinear classifier but scales only linearly with the number of data points and is well suited for data mining applications where the amount of data is very large, but where the dimension of the feature space is moderately high. In contrast to our former work, where d-linear functions were used, we now apply linear basis functions based on a simplicial discretization. This allows us to handle more dimensions, and the algorithm needs fewer operations per data point. We further extend the method to so-called anisotropic sparse grids, where different a priori chosen mesh sizes can be used for the discretization of each attribute. This can improve the run time of the method and the approximation results for data sets in which the attributes differ in importance. We describe the sparse grid combination technique for the classification problem, give implementation details and discuss the complexity of the algorithm. It turns out that the method scales linearly with the number of given data points. Finally we report on the quality of the classifier built by our new method on data sets with up to 14 dimensions. We show that our new method achieves correctness rates which are competitive to those of the best existing methods.
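
For reference, the combination technique referred to above combines solutions computed on a sequence of anisotropic full grids. In its standard d-dimensional form (our notation, with u_l the solution on the grid of multi-index l):

```latex
u_n^{c}(\mathbf{x}) \;=\; \sum_{q=0}^{d-1} (-1)^{q} \binom{d-1}{q}
\sum_{|\mathbf{l}|_{1} = n+(d-1)-q} u_{\mathbf{l}}(\mathbf{x})
```

Each partial solution is cheap because its grid is fine in only some directions; the alternating signs cancel the overlap between the partial solutions.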

01 Jan 2002
TL;DR: This paper addresses the problem of building high-performance uniprocessor implementations of sparse triangular solve (SpTS) automatically and describes fully automatic hybrid off-line/on-line heuristics for selecting the key tuning parameters: the register block size and the point at which to use the dense algorithm.
Abstract: We address the problem of building high-performance uniprocessor implementations of sparse triangular solve (SpTS) automatically. This computational kernel is often the bottleneck in a variety of scientific and engineering applications that require the direct solution of sparse linear systems. Performance tuning of SpTS, and of sparse matrix kernels in general, is a tedious and time-consuming task, because performance depends on the complex interaction of many factors: the performance gap between processors and memory, the limits on the scope of compiler analyses and transformations, and the overhead of manipulating sparse data structures. Consequently, it is not unusual to see kernels such as SpTS run at under 10% of peak uniprocessor floating point performance. Our approach to automatic tuning of SpTS builds on prior experience with building tuning systems for sparse matrix-vector multiply (SpM×V) [21, 22, 40], and dense matrix kernels [8, 41]. In particular, we adopt the two-step methodology of previous approaches: (1) we identify and generate a set of reasonable candidate implementations, and (2) search this set for the fastest implementation by some combination of performance modeling and actually executing the implementations. In this paper, we consider the solution of the sparse lower triangular system Lx = y for a single dense vector x, given the lower triangular sparse matrix L and dense vector y. We refer to x as the solution vector and y as the right-hand side (RHS). Many of the lower triangular factors we have observed from sparse LU factorization have a large, dense triangle in the lower right-hand corner of the matrix; this trailing triangle can account for as much as 90% of the matrix non-zeros. Therefore, we consider both algorithmic and data structure reorganizations which partition the solve into a sparse phase and a dense phase. To the sparse phase, we adapt the register blocking optimization, previously proposed for sparse matrix-vector multiply (SpM×V) in the Sparsity system [21, 22], to the SpTS kernel; to the dense phase, we make judicious use of highly tuned BLAS routines by switching to a dense implementation (switch-to-dense optimization). We describe fully automatic hybrid off-line/on-line heuristics for selecting the key tuning parameters: the register block size and the point at which to use the dense algorithm. (See Section 2.) We then evaluate the performance of our optimized implementations relative to the fundamental limits on performance. Specifically, we first derive simple models of the upper bounds on the execution rate (Mflop/s) of our implementations. Using hardware counter data collected with the PAPI library [10], we then verify our models on three hardware platforms (Table 1) and a set of triangular factors from applications (Table 2). We observe that our optimized implementations can achieve 80% or more of these bounds; furthermore, we observe speedups of up to 1.8x when both register blocking and switch-to-dense optimizations are applied. We also present preliminary results confirming that our heuristics choose reasonable values for the tuning parameters. These results support our prior findings with SpM×V [40], suggesting two new directions for performance enhancements: (1) the use of higher-level matrix structures (e.g., matrix reordering and multiple register block sizes), and (2) optimizing kernels with more opportunities for data reuse (e.g., multiplication and solve with multiple vectors, multiplication of AᵀA by a vector).
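
A schematic NumPy/scipy version of the sparse-phase/dense-phase split described above. This is a sketch of the switch-to-dense idea only: register blocking and the automatic choice of the switch point, the paper's actual tuning machinery, are omitted, and the data layout below is our illustrative assumption:

```python
import numpy as np
from scipy.linalg import solve_triangular

def hybrid_sptrsv(sparse_rows, L_tail, y, split):
    """Solve L x = y by forward substitution: rows [0, split) are sparse
    (each a list of (value, col) pairs with the diagonal entry last), and
    rows [split, n) form a dense block L_tail of shape (n - split, n)
    whose trailing square part is lower triangular."""
    n = len(y)
    x = np.asarray(y, dtype=float).copy()
    for i in range(split):                          # sparse phase
        for val, j in sparse_rows[i][:-1]:
            x[i] -= val * x[j]
        x[i] /= sparse_rows[i][-1][0]               # divide by the diagonal
    # Dense phase: fold in the coupling with the solved part, then use a
    # tuned dense triangular solve on the trailing triangle (BLAS territory).
    rhs = x[split:] - L_tail[:, :split] @ x[:split]
    x[split:] = solve_triangular(L_tail[:, split:], rhs, lower=True)
    return x

# Toy 4x4 lower triangular system, switching to dense after row 2
sparse_rows = [[(2.0, 0)], [(1.0, 0), (3.0, 1)]]    # rows 0 and 1
L_tail = np.array([[0.5, 0.0, 4.0, 0.0],
                   [1.0, 2.0, 1.5, 5.0]])           # rows 2 and 3, all columns
print(hybrid_sptrsv(sparse_rows, L_tail, np.array([2.0, 4.0, 6.0, 8.0]), split=2))
```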

Journal ArticleDOI
TL;DR: A generative-model framework is used to analyze the requirements, the difficulty, and current approaches to sparse image coding; overcomplete expansions are found to be often desirable, as they are better models of the image-generation process.
Abstract: Linear expansions of images find many applications in image processing and computer vision. Overcomplete expansions are often desirable, as they are better models of the image-generation process. Such expansions lead to the use of sparse codes. However, minimizing the number of non-zero coefficients of linear expansions is an unsolved problem. In this article, a generative-model framework is used to analyze the requirements, the difficulty, and current approaches to sparse image coding.

Proceedings ArticleDOI
13 May 2002
TL;DR: This work considers the problem of designing a sparse FIR filter and shows that it can be cast into a problem of determining a sparse solution of a linear system of equations.
Abstract: We consider the problem of designing a sparse FIR filter and show that it can be cast into a problem of determining a sparse solution of a linear system of equations. Previously proposed design algorithms for sparse FIR filters rely on an intelligent search over all possible sparse filter structures. We propose a new filter design method based on a simpler algorithm for finding a sparse solution of the linear system. Simulation experiments show significant improvements over classical nonsparse methods.

Journal ArticleDOI
TL;DR: A class of parallel multistep successive preconditioning strategies is developed to enhance the efficiency and robustness of standard sparse approximate inverse preconditioning techniques.
Abstract: We develop a class of parallel multistep successive preconditioning strategies to enhance efficiency and robustness of standard sparse approximate inverse preconditioning techniques. The key idea is to compute a series of simple sparse matrices to approximate the inverse of the original matrix. Studies are conducted to show the advantages of such an approach in terms of both improving preconditioning accuracy and reducing computational cost, compared to the standard sparse approximate inverse preconditioners. Numerical experiments using one prototype implementation to solve a few sparse matrices on a distributed memory parallel computer are reported.
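
One generic way to apply a series of approximate inverses M_1, ..., M_k as a single preconditioner is successive residual correction. The sketch below shows only that pattern; it is not the paper's construction of the individual M_i, which is the actual contribution:

```python
import numpy as np

def multistep_apply(Ms, A, r):
    """Apply a series of (sparse) approximate inverses successively:
    each step corrects z using the current residual r - A z.
    A generic residual-correction sketch, not the paper's algorithm."""
    z = np.zeros_like(r)
    for M in Ms:
        z = z + M @ (r - A @ z)
    return z

# Toy usage: two crude approximate inverses of a diagonally dominant A
n = 100
A = (np.eye(n) * 4.0
     + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1))
M1 = np.diag(1.0 / np.diag(A))    # Jacobi-like stand-in for a sparse approximate inverse
z = multistep_apply([M1, M1], A, np.ones(n))
```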

Patent
17 Dec 2002
TL;DR: A warped wavelet packet transform is proposed for processing or compressing an n-dimensional digital signal by constructing a sparse representation which takes advantage of the signal's geometrical regularity.
Abstract: A method and apparatus for processing or compressing an n-dimensional digital signal by constructing a sparse representation which takes advantage of the signal geometrical regularity. The invention comprises a warped wavelet packet transform which performs a cascade of warped subband filtering along warping grids of sampling points adapted to the signal geometry. It also comprises a bandeletisation which decorrelates the warped wavelet packet coefficients to produce a sparse representation. An inverse warped wavelet packet transform and an inverse bandeletisation reconstruct a signal from its bandelet representation. The invention comprises a compression system which quantizes and codes the bandelet representation, a decompression system, a restoration system which enhances a signal by filtering its bandelet representation, and a feature vector extraction system for pattern recognition applications of a bandelet representation.

Dissertation
01 Jan 2002
TL;DR: This thesis solves step (2) of the general frame design problem using the compact notation of linear algebra and proposes a new method for texture classification, denoted the Frame Texture Classification Method (FTCM).
Abstract: Signal expansions using frames may be considered as generalizations of signal representations based on transforms and filter banks. Frames for sparse signal representations may be designed using an iterative method with two main steps: (1) frame vector selection and expansion coefficient determination for signals in a training set (selected to be representative of the signals for which compact representations are desired), using the frame designed in the previous iteration; (2) update of the frame vectors with the objective of improving the representation of step (1). In this thesis we solve step (2) of the general frame design problem using the compact notation of linear algebra. This makes the solution both conceptually and computationally easy, especially for the non-block-oriented frames (for short, overlapping frames) that may be viewed as generalizations of critically sampled filter banks. Also, the solution is more general than those presented earlier, facilitating the imposition of constraints, such as symmetry, on the designed frame vectors. We also take a closer look at step (1) in the design method. Some of the available vector selection algorithms are reviewed, and adaptations to some of these are given. These adaptations make the algorithms better suited for both the frame design method and the sparse representation of signals problem, both for block-oriented and overlapping frames. The performance of the improved frame design method is shown in extensive experiments. The sparse representation capabilities are illustrated both for one-dimensional and two-dimensional signals, and in both cases the new possibilities in frame design give better results. Also a new method for texture classification, denoted the Frame Texture Classification Method (FTCM), is presented. The main idea is that a frame trained for making sparse representations of a certain class of signals is a model for this signal class. The FTCM is applied to nine test images, yielding excellent overall performance: for many test images the number of wrongly classified pixels is more than halved in comparison with the state-of-the-art texture classification methods presented in [59]. Finally, frames are analyzed from a practical viewpoint rather than from a mathematical-theoretic perspective. As a result of this, some new frame properties are suggested. So far, the new insight this has given has been moderate, but we think that this approach may be useful in frame analysis in the future.
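
In the unconstrained block-oriented case, step (2) has a compact closed-form least-squares solution: minimizing ||X - F W||_F over the frame F, given the coefficients W from step (1), yields the normal equations F W Wᵀ = X Wᵀ. A NumPy sketch under that assumption (the thesis's constrained and overlapping variants generalize this):

```python
import numpy as np

def frame_update(X, W):
    """Step (2) of the iterative design loop: given training signals X
    (column-wise) and sparse coefficients W from step (1), the frame
    minimizing ||X - F W||_F is F = X W^T (W W^T)^{-1}. Sketch of the
    unconstrained block-oriented case only."""
    F = X @ W.T @ np.linalg.pinv(W @ W.T)
    return F / np.linalg.norm(F, axis=0)   # renormalize the frame vectors

# Toy usage: random training set and a sparse coefficient matrix
rng = np.random.default_rng(6)
X = rng.standard_normal((16, 1000))
W = rng.standard_normal((48, 1000)) * (rng.random((48, 1000)) < 0.05)
F = frame_update(X, W)
```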

Journal ArticleDOI
TL;DR: This paper develops irregular wavelet representations for complex domains with the goal of demonstrating their potential for three-dimensional (3D) scientific and engineering computing applications and shows how these new constructions can be applied to partial differential equations cast in the integral form.
Abstract: In this paper we develop irregular wavelet representations for complex domains with the goal of demonstrating their potential for three-dimensional (3D) scientific and engineering computing applications. We show existence and construction of a large class of continuous spatially adapted multiwavelets in $R^n$ with vanishing moments over complex geometries. These wavelets share all of the major advantages of conventional wavelets in that they provide an analytical tool for studying data, functions, and operators at different scales. However, unlike conventional wavelets, which are restricted to uniform grids, spatially adapted multiwavelets allow fast transform, localization, and decorrelation on complex meshes, such as those encountered in finite element modeling. We show how these new constructions can be applied to partial differential equations cast in the integral form. We implement the wavelet approach for several model two-dimensional (2D) and 3D potential problems. It is shown that the optimal convergence rate is achieved, with only $\mathcal{O}(N(\log N)^{\alpha})$ entries of the discrete operator matrix, where $\alpha$ is a small number and $N$ is the number of unknowns.