
Showing papers on "Singular value decomposition published in 2001"


Journal ArticleDOI
TL;DR: This work introduces a new dimensionality reduction technique called Piecewise Aggregate Approximation (PAA), theoretically and empirically compares it to the other techniques, and demonstrates its superiority.
Abstract: The problem of similarity search in large time series databases has attracted much attention recently. It is a non-trivial problem because of the inherent high dimensionality of the data. The most promising solutions involve first performing dimensionality reduction on the data, and then indexing the reduced data with a spatial access method. Three major dimensionality reduction techniques have been proposed: Singular Value Decomposition (SVD), the Discrete Fourier transform (DFT), and more recently the Discrete Wavelet Transform (DWT). In this work we introduce a new dimensionality reduction technique which we call Piecewise Aggregate Approximation (PAA). We theoretically and empirically compare it to the other techniques and demonstrate its superiority. In addition to being competitive with or faster than the other methods, our approach has numerous other advantages. It is simple to understand and to implement, it allows more flexible distance measures, including weighted Euclidean queries, and the index can be built in linear time.
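A minimal sketch of the PAA idea in NumPy (illustrative, not the authors' code; the function name and the divisibility assumption are mine): each series is reduced to the means of equal-length segments.

```python
import numpy as np

def paa(series, m):
    """Piecewise Aggregate Approximation: reduce a length-n series to the
    means of m equal-length segments (simple case where m divides n)."""
    series = np.asarray(series, dtype=float)
    n = len(series)
    assert n % m == 0, "this sketch assumes the segment length divides n"
    return series.reshape(m, n // m).mean(axis=1)
```

For example, `paa([1, 2, 3, 4, 5, 6, 7, 8], 4)` averages pairs of points into four segment means, which is why the index can be built in linear time.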

1,550 citations


Journal ArticleDOI
TL;DR: The concept vectors produced by the spherical k-means algorithm are localized in the word space, sparse, and tend towards orthonormality, and so constitute a powerful sparse and localized “basis” for text data sets.
Abstract: Unlabeled document collections are becoming increasingly common and available; mining such data sets represents a major contemporary challenge. Using words as features, text documents are often represented as high-dimensional and sparse vectors–a few thousand dimensions and a sparsity of 95 to 99% is typical. In this paper, we study a certain spherical k-means algorithm for clustering such document vectors. The algorithm outputs k disjoint clusters each with a concept vector that is the centroid of the cluster normalized to have unit Euclidean norm. As our first contribution, we empirically demonstrate that, owing to the high-dimensionality and sparsity of the text data, the clusters produced by the algorithm have a certain “fractal-like” and “self-similar” behavior. As our second contribution, we introduce concept decompositions to approximate the matrix of document vectors; these decompositions are obtained by taking the least-squares approximation onto the linear subspace spanned by all the concept vectors. We empirically establish that the approximation errors of the concept decompositions are close to the best possible, namely, to truncated singular value decompositions. As our third contribution, we show that the concept vectors are localized in the word space, are sparse, and tend towards orthonormality. In contrast, the singular vectors are global in the word space and are dense. Nonetheless, we observe the surprising fact that the linear subspaces spanned by the concept vectors and the leading singular vectors are quite close in the sense of small principal angles between them. In conclusion, the concept vectors produced by the spherical k-means algorithm constitute a powerful sparse and localized “basis” for text data sets.
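A hedged sketch of the spherical k-means step described above (illustrative names and initialization, not the paper's implementation): assignments use cosine similarity, and each centroid is renormalized to unit Euclidean norm to give a concept vector.

```python
import numpy as np

def spherical_kmeans(X, k, iters=20, seed=0):
    """X: rows are L2-normalized document vectors. Returns unit-norm
    concept vectors C and cluster labels."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]   # initial concept vectors
    for _ in range(iters):
        labels = np.argmax(X @ C.T, axis=1)       # cosine-similarity assignment
        for j in range(k):
            members = X[labels == j]
            if len(members):
                v = members.sum(axis=0)
                C[j] = v / np.linalg.norm(v)      # centroid normalized to unit norm
    return C, labels
```

On two well-separated directions this recovers the two groups, and every concept vector has unit norm by construction.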

1,398 citations


Proceedings ArticleDOI
01 May 2001
TL;DR: This work introduces a new dimensionality reduction technique called Adaptive Piecewise Constant Approximation (APCA), shows how APCA can be indexed using a multidimensional index structure, and proposes two distance measures in the indexed space that exploit the high fidelity of APCA for fast searching.
Abstract: Similarity search in large time series databases has attracted much research interest recently. It is a difficult problem because of the typically high dimensionality of the data. The most promising solutions involve performing dimensionality reduction on the data, then indexing the reduced data with a multidimensional index structure. Many dimensionality reduction techniques have been proposed, including Singular Value Decomposition (SVD), the Discrete Fourier transform (DFT), and the Discrete Wavelet Transform (DWT). In this work we introduce a new dimensionality reduction technique which we call Adaptive Piecewise Constant Approximation (APCA). While previous techniques (e.g., SVD, DFT and DWT) choose a common representation for all the items in the database that minimizes the global reconstruction error, APCA approximates each time series by a set of constant value segments of varying lengths such that their individual reconstruction errors are minimal. We show how APCA can be indexed using a multidimensional index structure. We propose two distance measures in the indexed space that exploit the high fidelity of APCA for fast searching: a lower bounding Euclidean distance approximation, and a non-lower bounding, but very tight Euclidean distance approximation and show how they can support fast exact searching, and even faster approximate searching on the same index structure. We theoretically and empirically compare APCA to all the other techniques and demonstrate its superiority.
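A simple illustration of the APCA idea, constant segments of varying lengths per series; this greedy top-down splitter is an assumption of mine, not the paper's (Haar-based) construction:

```python
import numpy as np

def apca(series, num_segments):
    """Greedy sketch: repeatedly split the segment whose constant (mean)
    approximation has the largest sum of squared errors."""
    series = np.asarray(series, dtype=float)
    segments = [(0, len(series))]

    def sse(a, b):
        seg = series[a:b]
        return ((seg - seg.mean()) ** 2).sum()

    while len(segments) < num_segments:
        a, b = max(segments, key=lambda s: sse(*s))
        if b - a < 2:
            break
        mid = (a + b) // 2
        segments.remove((a, b))
        segments += [(a, mid), (mid, b)]
    return sorted((a, b, series[a:b].mean()) for a, b in segments)
```

A step function such as `[0, 0, 0, 0, 10, 10, 10, 10]` with two segments is represented exactly by two constant pieces, which a fixed global basis cannot guarantee.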

849 citations


Journal ArticleDOI
TL;DR: The singular value decomposition, extensively used in engineering and statistical applications, is generalized to low-rank approximation of high-order tensors (the multidimensional SVD), and certain properties of this decomposition are investigated along with numerical algorithms.
Abstract: The singular value decomposition (SVD) has been extensively used in engineering and statistical applications. This method was originally discovered by Eckart and Young in [Psychometrika, 1 (1936), pp. 211--218], where they considered the problem of low-rank approximation to a matrix. A natural generalization of the SVD is the problem of low-rank approximation to high order tensors, which we call the multidimensional SVD. In this paper, we investigate certain properties of this decomposition as well as numerical algorithms.

461 citations


Journal ArticleDOI
TL;DR: The orthogonal decomposition of tensors (also known as multidimensional arrays or n-way arrays) is explored using two different definitions of orthogonality, concluding with a counterexample to a tensor extension of the Eckart--Young SVD approximation theorem.
Abstract: We explore the orthogonal decomposition of tensors (also known as multidimensional arrays or n-way arrays) using two different definitions of orthogonality. We present numerous examples to illustrate the difficulties in understanding such decompositions. We conclude with a counterexample to a tensor extension of the Eckart--Young SVD approximation theorem by Leibovici and Sabatier [Linear Algebra Appl., 269 (1998), pp. 307--329].

421 citations


Journal ArticleDOI
TL;DR: It is shown that a truncated matrix linking just a few modes is a good approximation of the full time translation matrix, which suggests that the number of essential connections among the genes is small.
Abstract: We describe the time evolution of gene expression levels by using a time translational matrix to predict future expression levels of genes based on their expression levels at some initial time. We deduce the time translational matrix for previously published DNA microarray gene expression data sets by modeling them within a linear framework by using the characteristic modes obtained by singular value decomposition. The resulting time translation matrix provides a measure of the relationships among the modes and governs their time evolution. We show that a truncated matrix linking just a few modes is a good approximation of the full time translation matrix. This finding suggests that the number of essential connections among the genes is small.
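A hedged sketch of the modeling step described above (function name and interface are illustrative, not the authors' code): the characteristic modes come from the SVD, and a truncated time-translation matrix is fitted so that mode amplitudes at time t+1 are predicted from those at t.

```python
import numpy as np

def time_translation_matrix(X, r):
    """X: genes x timepoints expression matrix; r: number of modes kept.
    Returns M such that mode amplitudes a(t+1) ~= M @ a(t)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    A = (s[:r, None] * Vt[:r]).T               # timepoints x r mode amplitudes
    M, *_ = np.linalg.lstsq(A[:-1], A[1:], rcond=None)
    return M.T                                 # maps amplitudes at t to t+1
```

For a rank-1 data set whose single mode decays by a factor of 0.5 per step, the fitted 1x1 translation matrix is exactly [[0.5]].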

334 citations


Proceedings ArticleDOI
W. Brand
08 Dec 2001
TL;DR: The resulting integrated algorithm can track and reconstruct in 3D nonrigid surfaces that have very little texture, for example the smooth parts of the face, via structured matrix decompositions.
Abstract: Nonrigid 3D structure-from-motion and 2D optical flow can both be formulated as tensor factorization problems. The two problems can be made equivalent through a noisy affine transform, yielding a combined nonrigid structure-from-intensities problem that we solve via structured matrix decompositions. Often the preconditions for this factorization are violated by image noise and deficiencies of the data vis-à-vis the sample complexity of the problem. Both issues are remediated with careful use of rank constraints, norm constraints, and integration over uncertainty in the intensity values, yielding novel solutions for SVD under uncertainty, factorization under uncertainty, nonrigid factorization, and subspace optical flow. The resulting integrated algorithm can track and reconstruct in 3D nonrigid surfaces having very little texture, for example the smooth parts of the face. Working with low-resolution low-texture "found video," these methods produce good tracking and 3D reconstruction results where prior algorithms fail.

244 citations


Proceedings ArticleDOI
01 Dec 2001
TL;DR: It is found that for regression the tensor-rank coding, as a dimensionality reduction technique, significantly outperforms other techniques like PCA.
Abstract: Given a collection of images (matrices) representing a "class" of objects, we present a method for extracting the commonalities of the image space directly from the matrix representations (rather than from the vectorized representation, as one would normally do in a PCA approach, for example). The general idea is to consider the collection of matrices as a tensor and to look for an approximation of its tensor-rank. The tensor-rank approximation is designed such that the SVD decomposition emerges in the special case where all the input matrices are the repetition of a single matrix. We evaluate the coding technique both in terms of regression, i.e., the efficiency of the technique for functional approximation, and classification. We find that for regression the tensor-rank coding, as a dimensionality reduction technique, significantly outperforms other techniques like PCA. As for classification, the tensor-rank coding is at its best when the number of training examples is very small.

231 citations


Patent
07 Dec 2001
TL;DR: In this article, a time-domain implementation is provided which uses frequency-domain singular value decomposition and water-pouring results to derive time-domain pulse-shaping and beam-steering solutions at the transmitter and receiver.
Abstract: Techniques for processing a data transmission at the transmitter and receiver. In an aspect, a time-domain implementation is provided which uses frequency-domain singular value decomposition and “water-pouring” results to derive time-domain pulse-shaping and beam-steering solutions at the transmitter and receiver. The singular value decomposition is performed at the transmitter to determine eigen-modes (i.e., spatial subchannels) of the MIMO channel and to derive a first set of steering vectors used to “precondition” modulation symbols. The singular value decomposition is also performed at the receiver to derive a second set of steering vectors used to precondition the received signals such that orthogonal symbol streams are recovered at the receiver, which can simplify the receiver processing. Water-pouring analysis is used to more optimally allocate the total available transmit power to the eigen-modes, which then determines the data rate and the coding and modulation scheme to be used for each eigen-mode.
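The eigen-mode idea can be illustrated numerically (a toy example, not the patent's implementation): the SVD of the channel matrix H gives steering vectors V at the transmitter and U at the receiver, and the preconditioned link becomes a set of orthogonal scalar subchannels.

```python
import numpy as np

# Toy 4x4 complex MIMO channel (randomly generated for illustration).
rng = np.random.default_rng(1)
H = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

U, s, Vh = np.linalg.svd(H)

x = rng.normal(size=4) + 1j * rng.normal(size=4)  # modulation symbols
# Precondition with V at the transmitter, pass through H, then apply U^H
# at the receiver: the streams decouple into independent eigen-modes.
y = U.conj().T @ (H @ (Vh.conj().T @ x))
```

Because U^H H V = diag(s), the received vector y equals the transmitted symbols scaled by the singular values, i.e. orthogonal symbol streams; water-pouring would then allocate power across the entries of s.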

214 citations


Journal ArticleDOI
01 Dec 2001
TL;DR: The results showed that the truncated SVD method can provide efficient coding with high compression ratios, demonstrating the method as an effective technique for ECG data storage or signal transmission.
Abstract: The method of truncated singular value decomposition (SVD) is proposed for electrocardiogram (ECG) data compression. The signal decomposition capability of SVD is exploited to extract the significant feature components of the ECG by decomposing the ECG into a set of basic patterns with associated scaling factors. The signal information is mostly concentrated within a certain number of singular values with related singular vectors due to the strong interbeat correlation among ECG cycles. Therefore, only the relevant parts of the singular triplets need to be retained as the compressed data for retrieving the original signals. The insignificant remainder can be truncated to eliminate the redundancy of the ECG data. The Massachusetts Institute of Technology-Beth Israel Hospital arrhythmia database was used to evaluate the compression performance and the recoverability of the retrieved ECG signals. Compression achieved an average data rate of 143.2 b/s with a relatively low reconstruction error. These results showed that the truncated SVD method can provide efficient coding with high compression ratios. The computational efficiency of the SVD method compared with other techniques demonstrates it as an effective technique for ECG data storage or signal transmission.
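The retained-triplet idea reduces to a few lines (a sketch with illustrative names, assuming beats have already been aligned into a matrix):

```python
import numpy as np

def compress(beats, k):
    """Keep the leading k singular triplets of an n_beats x n_samples matrix
    of aligned ECG cycles as the compressed representation."""
    U, s, Vt = np.linalg.svd(beats, full_matrices=False)
    return U[:, :k], s[:k], Vt[:k]

def reconstruct(U, s, Vt):
    """Rebuild the beat matrix from the retained singular triplets."""
    return (U * s) @ Vt
```

Strong interbeat correlation means the beat matrix is close to low rank, so a small k recovers the cycles with low reconstruction error; a rank-1 matrix is recovered exactly with k = 1.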

194 citations


Journal ArticleDOI
TL;DR: Experiments show that the factors of positive tensor factorization (PTF) are easier to interpret than those produced by methods based on the singular value decomposition, which might contain negative values.

Journal ArticleDOI
Paul D. Fiore
TL;DR: An efficient algorithm for the exterior orientation problem: orthogonal decompositions first isolate the unknown depths of feature points in the camera reference frame, reducing the problem to an absolute orientation with scale problem that is solved using the singular value decomposition.
Abstract: This paper concerns an efficient algorithm for the solution of the exterior orientation problem. Orthogonal decompositions are used to first isolate the unknown depths of feature points in the camera reference frame, allowing the problem to be reduced to an absolute orientation with scale problem, which is solved using the singular value decomposition (SVD). The key feature of this approach is the low computational cost compared to existing approaches.
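The absolute-orientation step it reduces to has a standard SVD solution (the Kabsch construction; this sketch recovers only the rotation and omits the scale, and the function name is mine):

```python
import numpy as np

def absolute_orientation_rotation(P, Q):
    """SVD solution for the rotation R minimizing ||R P^T - Q^T|| after
    centering both point sets (rows of P, Q are corresponding 3D points)."""
    Pc = P - P.mean(axis=0)
    Qc = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
```

Given exact correspondences related by a rotation and translation, the recovered R matches the true rotation; the low cost comes from the SVD acting only on a 3x3 cross-covariance matrix.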

Proceedings Article
01 Jan 2001
TL;DR: In this paper, a tensor factorization problem for non-rigid 3D structure-from-motion and 2D optical flow is formulated as tensor decomposition problems and solved via structured matrix decompositions.
Abstract: Nonrigid 3D structure-from-motion and 2D optical flow can both be formulated as tensor factorization problems. The two problems can be made equivalent through a noisy affine transform, yielding a combined nonrigid structure-from-intensities problem that we solve via structured matrix decompositions. Often the preconditions for this factorization are violated by image noise and deficiencies of the data vis-a-vis the sample complexity of the problem. Both issues are remediated with careful use of rank constraints, norm constraints, and integration over uncertainty in the intensity values, yielding novel solutions for SVD under uncertainty, factorization under uncertainty, nonrigid factorization, and subspace optical flow. The resulting integrated algorithm can track and 3D-reconstruct nonrigid surfaces that have very little texture, for example the smooth parts of the face. Working with low-resolution low-texture “found video,” these methods produce good tracking and 3D reconstruction results where prior algorithms fail.

Proceedings ArticleDOI
01 Aug 2001
TL;DR: This work presents a numerical technique, homomorphic factorization, that can decompose arbitrary BRDFs into products of two or more factors of lower dimensionality, each factor dependent on a different interpolated geometric parameter.
Abstract: A bidirectional reflectance distribution function (BRDF) describes how a material reflects light from its surface. To use arbitrary BRDFs in real-time rendering, a compression technique must be used to represent BRDFs using the available texture-mapping and computational capabilities of an accelerated graphics pipeline. We present a numerical technique, homomorphic factorization, that can decompose arbitrary BRDFs into products of two or more factors of lower dimensionality, each factor dependent on a different interpolated geometric parameter. Compared to an earlier factorization technique based on the singular value decomposition, this new technique generates a factorization with only positive factors (which makes it more suitable for current graphics hardware accelerators), provides control over the smoothness of the result, minimizes relative rather than absolute error, and can deal with scattered, sparse data without a separate resampling and interpolation algorithm.

Journal ArticleDOI
TL;DR: A novel singular value decomposition (SVD)- and vector quantization (VQ)-based scheme for hiding image data is presented, showing a good compression ratio and satisfactory image quality.

Journal ArticleDOI
TL;DR: A novel frequency-domain framework is presented for the identification of a multiple-input multiple-output (MIMO) system driven by white, mutually independent, unobservable inputs; the freedom to select the polyspectra slices allows the frequency-dependent permutation ambiguity to be bypassed.
Abstract: We present a novel frequency-domain framework for the identification of a multiple-input multiple-output (MIMO) system driven by white, mutually independent, unobservable inputs. The system frequency response is obtained based on singular value decomposition (SVD) of a matrix constructed from the power spectrum and slices of polyspectra of the system output. By appropriately selecting the polyspectra slices, we can create a set of such matrices, each of which could independently yield the solution, or they could all be combined in a joint diagonalization scheme to yield a solution with improved statistical performance. The freedom to select the polyspectra slices allows us to bypass the frequency-dependent permutation ambiguity that is usually associated with frequency-domain SVD, while at the same time allowing us to compute and cancel the phase ambiguity. An asymptotic consistency analysis of the system magnitude response estimate is performed.

Journal ArticleDOI
TL;DR: A multiresolution form of the singular value decomposition (SVD) is proposed and it is shown how it may be used for signal analysis and approximation and has linear computational complexity.
Abstract: This paper proposes a multiresolution form of the singular value decomposition (SVD) and shows how it may be used for signal analysis and approximation. It is well-known that the SVD has optimal decorrelation and subrank approximation properties. The multiresolution form of SVD proposed here retains those properties, and moreover, has linear computational complexity. By using the multiresolution SVD, the following important characteristics of a signal may be measured, at each of several levels of resolution: isotropy, sphericity of principal components, self-similarity under scaling, and resolution of mean-squared error into meaningful components. Theoretical calculations are provided for simple statistical models to show what might be expected. Results are provided with real images to show the usefulness of the decomposition.

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the factorization of causal finite impulse response (FIR) paraunitary filterbanks with given filter length, and developed a new structure for the design and implementation of paraunary system based on the decomposition of Hermitian unitary matrices.
Abstract: We systematically investigate the factorization of causal finite impulse response (FIR) paraunitary filterbanks with given filter length. Based on the singular value decomposition of the coefficient matrices of the polyphase representation, a fundamental order-one factorization form is first proposed for general paraunitary systems. Then, we develop a new structure for the design and implementation of paraunitary system based on the decomposition of Hermitian unitary matrices. Within this framework, the linear-phase filterbank and pairwise mirror-image symmetry filterbank are revisited. Their structures are special cases of the proposed general structures. Compared with the existing structures, more efficient ones that only use approximately half the number of free parameters are derived. The proposed structures are complete and minimal. Although the factorization theory with or without constraints is discussed in the framework of M-channel filterbanks, the results can be applied to wavelets and multiwavelet systems and could serve as a general theory for paraunitary systems.

Book ChapterDOI
21 May 2001
TL;DR: A new approach to transparent embedding of data into digital images is proposed that provides a high rate of the embedded data and is robust to common and some intentional distortions and can be used both for hidden communication and watermarking.
Abstract: A new approach to transparent embedding of data into digital images is proposed. It provides a high rate of embedded data and is robust to common and some intentional distortions. The developed technique employs properties of the singular value decomposition (SVD) of a digital image. According to these properties, each singular value (SV) specifies the luminance of an SVD image layer, whereas the respective pair of singular vectors specifies image geometry. Therefore slight variations of SVs cannot affect the visual perception of the cover image. The proposed approach is based on embedding a bit of data through slight modifications of SVs of a small block of the segmented cover. The approach is robust because it embeds extra data into low bands of covers in a distributed way. The size of the small blocks is used as an attribute to achieve a tradeoff between the embedded data rate and robustness. An advantage of the approach is that it is blind. Simulation has proved its robustness to JPEG compression up to 40%. The approach can be used both for hidden communication and watermarking.
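One common way to realize "a bit per block via slight SV modification" is quantization of the largest singular value; the scheme below is a hedged sketch of that idea (my simplification, not the chapter's exact embedding rule or quantization step q):

```python
import numpy as np

def embed_bit(block, bit, q=10.0):
    """Encode one bit by quantizing the largest singular value of a block."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    s[0] = np.floor(s[0] / q) * q + (0.75 * q if bit else 0.25 * q)
    return (U * s) @ Vt                      # reconstructed stego block

def extract_bit(block, q=10.0):
    """Blind extraction: read the bit back from the quantization residue."""
    s = np.linalg.svd(block, compute_uv=False)
    return int(s[0] % q > 0.5 * q)
```

Extraction needs no reference image (the scheme is blind), and a singular-value change bounded by q keeps the perturbation of block luminance small.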

Journal ArticleDOI
TL;DR: SVDMAN provides a new means for using microarray data to develop hypotheses for gene associations and provides a measure of confidence in the hypotheses, thus extending current SVD research in the area of global gene expression analysis.
Abstract: We have developed two novel methods for Singular Value Decomposition analysis (SVD) of microarray data. The first is a threshold-based method for obtaining gene groups, and the second is a method for obtaining a measure of confidence in SVD analysis. Gene groups are obtained by identifying elements of the left singular vectors, or gene coefficient vectors, that are greater in magnitude than the threshold W·N^(−1/2), where N is the number of genes and W is a weight factor whose default value is 3. The groups are non-exclusive and may contain genes of opposite (i.e. inversely correlated) regulatory response. The confidence measure is obtained by systematically deleting assays from the data set, interpolating the SVD of the reduced data set to reconstruct the missing assay, and calculating the Pearson correlation between the reconstructed assay and the original data. This confidence measure is applicable when each experimental assay corresponds to a value of a parameter that can be interpolated, such as time, dose or concentration. Algorithms for the grouping method and the confidence measure are available in a software application called SVD Microarray ANalysis (SVDMAN). In addition to calculating the SVD for generic analysis, SVDMAN provides a new means for using microarray data to develop hypotheses for gene associations and provides a measure of confidence in the hypotheses, thus extending current SVD research in the area of global gene expression analysis. Availability: ftp://bpublic.lanl.gov/compbio/software
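The grouping rule translates directly to code (a sketch under the stated threshold, with an illustrative function name; it is not the SVDMAN source):

```python
import numpy as np

def gene_groups(X, W=3.0):
    """Group genes whose left-singular-vector coefficient magnitude exceeds
    W * N**(-1/2), where N is the number of genes (default weight W = 3)."""
    N = X.shape[0]
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    thresh = W / np.sqrt(N)
    return [np.flatnonzero(np.abs(U[:, k]) > thresh) for k in range(U.shape[1])]
```

Groups are non-exclusive because a gene may exceed the threshold in several singular vectors, and inversely correlated genes land in the same group since only the magnitude is tested.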

Journal ArticleDOI
01 Nov 2001
TL;DR: In this article, the singular value decomposition (SVD) was used for the estimation of harmonics in signals in the presence of high noise and the proposed approach results in a linear least squares method.
Abstract: The paper examines singular value decomposition (SVD) for the estimation of harmonics in signals in the presence of high noise. The proposed approach results in a linear least squares method. The methods, developed for locating the frequencies of closely spaced sinusoidal signals, are appropriate tools for the investigation of power system signals containing harmonics and interharmonics differing significantly in their multiplicity. The SVD approach is a numerical algorithm for calculating the linear least squares solution. The methods can also be applied to frequency estimation of heavily distorted periodic signals. To investigate the methods, several experiments have been performed using simulated signals and the waveforms of a frequency converter current. For comparison, similar experiments have been repeated using the FFT with the same number of samples and sampling period. The comparison has proved the superiority of SVD for signals buried in noise. However, the SVD computation is much more complex than the FFT and requires more extensive mathematical manipulations.
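The linear-least-squares formulation can be sketched as follows (my illustrative setup, assuming candidate harmonic frequencies are known; NumPy's `lstsq` solves the system via the SVD internally):

```python
import numpy as np

def harmonic_fit(t, x, freqs):
    """Least-squares cos/sin amplitudes of known candidate frequencies."""
    A = np.column_stack([f(2 * np.pi * fr * t)
                         for fr in freqs for f in (np.cos, np.sin)])
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    return coef   # [cos_f1, sin_f1, cos_f2, sin_f2, ...]
```

For a clean signal 2·cos(2π·50t) + 0.5·sin(2π·150t), the fitted coefficients recover the amplitudes exactly; with noise added, the SVD-based solution degrades gracefully where a plain FFT bin readout would not.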

Journal ArticleDOI
TL;DR: An alternative orthonormalization method is proposed that computes the orthonormal basis from the right singular vectors of a matrix and is typically more stable than classical Gram--Schmidt (GS).
Abstract: First, we consider the problem of orthonormalizing skinny (long) matrices. We propose an alternative orthonormalization method that computes the orthonormal basis from the right singular vectors of a matrix. Its advantages are that (a) all operations are matrix-matrix multiplications and thus cache efficient, (b) only one synchronization point is required in parallel implementations, and (c) it is typically more stable than classical Gram--Schmidt (GS). Second, we consider the problem of orthonormalizing a block of vectors against a previously orthonormal set of vectors and among itself. We solve this problem by alternating iteratively between a phase of GS and a phase of the new method. We provide error analysis and use it to derive bounds on how accurately the two successive orthonormalization phases should be performed to minimize total work performed. Our experiments confirm the favorable numerical behavior of the new method and its effectiveness on modern parallel computers.
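The core idea, an orthonormal basis from the right singular vectors, fits in a few lines (a hedged sketch assuming full column rank; not the paper's blocked, error-analyzed algorithm):

```python
import numpy as np

def svd_orthonormalize(X):
    """Orthonormalize a tall skinny X via the eigendecomposition of its
    small Gram matrix (whose eigenvectors are X's right singular vectors);
    only matrix-matrix products touch the long dimension."""
    w, V = np.linalg.eigh(X.T @ X)   # small k x k problem
    return X @ (V / np.sqrt(w))      # X V S^{-1} = left singular vectors
```

The only operation on the long dimension is `X.T @ X` (one reduction, hence one synchronization point in parallel), which is the cache-efficiency argument of the abstract.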

Journal ArticleDOI
TL;DR: It is shown that the JDSVD can be seen as an accelerated (inexact) Newton scheme, and the method is experimentally compared with some other iterative SVD methods.
Abstract: We discuss a new method for the iterative computation of a portion of the singular values and vectors of a large sparse matrix. Similar to the Jacobi--Davidson method for the eigenvalue problem, we compute in each step a correction by (approximately) solving a correction equation. We give a few variants of this Jacobi--Davidson SVD (JDSVD) method with their theoretical properties. It is shown that the JDSVD can be seen as an accelerated (inexact) Newton scheme. We experimentally compare the method with some other iterative SVD methods.

31 Aug 2001
TL;DR: In this article, the authors reviewed linear reconstruction algorithms using assumed covariance matrices for conductivity and data and the formulation of Tikhonov regularization using the Singular Value Decomposition (SVD) with covariance norms.
Abstract: Linear reconstruction algorithms are reviewed using assumed covariance matrices for the conductivity and data and the formulation of Tikhonov regularization using the singular value decomposition (SVD) with covariance norms. It is shown how iterative reconstruction algorithms, such as Landweber and conjugate gradient, can be used for regularization and analysed in terms of the SVD, and implemented directly for a one-step Newton's method. Where there are known inequality constraints, such as upper and lower bounds, these can be incorporated in iterative methods and have a stabilizing effect on reconstructions.
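The SVD analysis of Tikhonov regularization mentioned above has a compact standard form (a generic sketch with the identity prior, not the review's covariance-weighted formulation): the filter factors s/(s² + λ²) damp the contribution of small singular values.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Tikhonov-regularized solution of A x ~= b via the SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt.T @ ((s / (s ** 2 + lam ** 2)) * (U.T @ b))
```

With λ = 0 this reduces to the ordinary least-squares (pseudoinverse) solution; increasing λ trades data fit for stability, which is the same trade-off the iterative Landweber and conjugate-gradient schemes realize through early stopping.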

Journal ArticleDOI
TL;DR: Empirical examples show that the modified algorithm can be reasonably fast, but its purpose is to save an investigator's effort rather than that of his or her computer, making it more appropriate as a research tool than as an algorithm for established methods.
Abstract: A very general algorithm for orthogonal rotation is identified. It is shown that when an algorithm parameter α is sufficiently large the algorithm converges monotonically to a stationary point of the rotation criterion from any starting value. Because a sufficiently large α is in general hard to find, a modification that does not require it is introduced. Without this requirement the modified algorithm is not only very general, but also very simple. Its implementation involves little more than computing the gradient of the rotation criterion. While the modified algorithm converges monotonically from any starting value, it is not guaranteed to converge to a stationary point. It, however, does so in all of our examples. While motivated by the rotation problem in factor analysis, the algorithms discussed may be used to optimize almost any function of a not necessarily square column-wise orthonormal matrix. A number of these more general applications are considered. Empirical examples show that the modified algorithm can be reasonably fast, but its purpose is to save an investigator's effort rather than that of his or her computer. This makes it more appropriate as a research tool than as an algorithm for established methods.

Journal ArticleDOI
TL;DR: This work considers the problem of computing low-rank approximations of matrices in a factorized form with sparse factors and presents numerical examples arising from some application areas to illustrate the efficiency and accuracy of the proposed algorithms.
Abstract: We consider the problem of computing low-rank approximations of matrices. The novel aspects of our approach are that we require the low-rank approximations to be written in a factorized form with sparse factors, and the degree of sparsity of the factors can be traded off for reduced reconstruction error by certain user-determined parameters. We give a detailed error analysis of our proposed algorithms and compare the computed sparse low-rank approximations with those obtained from singular value decomposition. We present numerical examples arising from some application areas to illustrate the efficiency and accuracy of our algorithms.

Journal ArticleDOI
TL;DR: In this article, the problem of determining the shape of unknown perfectly conducting infinitely long cylinders, starting from the knowledge of the scattered electric far field under the incidence of plane waves with a fixed angle of incidence and varying frequency, was formulated as a nonlinear inverse one by searching for a compact support distribution accounting for the objects contour.
Abstract: This paper deals with the problem of determining the shape of unknown perfectly conducting infinitely long cylinders, starting from the knowledge of the scattered electric far field under the incidence of plane waves with a fixed angle of incidence and varying frequency. The problem is formulated as a nonlinear inverse one by searching for a compact support distribution accounting for the objects' contour. The nonlinear unknown-to-data mapping is then linearized by means of the Kirchhoff approximation, which reduces it to a Fourier transform relationship. Then, the Fourier transform inversion from incomplete data is dealt with by means of the singular value decomposition (SVD) approach, and the features of the reconstructable unknowns are investigated. Finally, numerical results confirm the performed analysis.

Journal ArticleDOI
01 May 2001
TL;DR: It is shown how detection of redundant rules can be introduced in OLS by a simple extension of the algorithm; the performance of rank-revealing reduction methods is discussed, and the use of a less complex method based on the pivoted QR decomposition is advocated.
Abstract: This paper comments on recent publications about the use of orthogonal transforms to order and select rules in a fuzzy rule base. The techniques are well known from linear algebra, and we comment on their usefulness in fuzzy modeling. The application of rank-revealing methods based on singular value decomposition (SVD) to rule reduction gives rather conservative results. They are essentially subset selection methods, and we show that such methods do not produce an "importance ordering", contrary to what has been stated in the literature. The orthogonal least-squares (OLS) method, which evaluates the contribution of the rules to the output, is more attractive for systems modeling. However, it has been shown to sometimes assign high importance to rules that are correlated in the premise. This hampers the generalization capabilities of the resulting model. We discuss the performance of rank-revealing reduction methods and advocate the use of a less complex method based on the pivoted QR decomposition. Further, we show how detection of redundant rules can be introduced in OLS by a simple extension of the algorithm. The methods are applied to a problem known from the literature and compared to results reported by other researchers.
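The pivoted QR decomposition advocated here orders columns (rules) greedily by how much new information each contributes. A minimal Businger–Golub-style sketch of that column ordering, using plain numpy rather than any fuzzy-modeling toolbox:

```python
import numpy as np

def pivoted_qr_order(A, tol=1e-12):
    """Column ordering from a greedy pivoted QR: repeatedly pick the column
    with the largest residual norm, then deflate all columns against it."""
    R = A.astype(float).copy()
    order = []
    for _ in range(min(A.shape)):
        norms = np.linalg.norm(R, axis=0)
        j = int(np.argmax(norms))
        if norms[j] < tol:
            break                      # remaining columns are (numerically) dependent
        order.append(j)
        q = R[:, j] / norms[j]
        R -= np.outer(q, q @ R)        # remove the component along the chosen column
    return order

# The dominant first column is selected first; the rest follow by residual norm.
A = np.array([[10.0, 1.0, 0.0],
              [0.0,  1.0, 0.1],
              [0.0,  0.0, 1.0]])
order = pivoted_qr_order(A)
```

Note this yields a subset-selection ordering, which, as the abstract stresses, is not the same as an importance ordering of the rules.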

Journal ArticleDOI
TL;DR: In this paper, a singular value decomposition (SVD) based algorithm for polarization filtering of triaxial seismic recordings based on the assumption that the particle motion trajectory is essentially 2-D (elliptical polarization) is presented.
Abstract: We present a singular value decomposition (SVD) based algorithm for polarization filtering of triaxial seismic recordings, based on the assumption that the particle motion trajectory is essentially 2-D (elliptical polarization). The filter is the sum of the first two eigenimages of the SVD of the signal matrix. Weighting functions, which depend strictly on the intensity (linearity and planarity) of the polarization, are applied. The efficiency of the filter is tested on synthetic traces and on real data, and found to be superior to solely covariance-based filter algorithms. Although SVD and covariance-based methods take a similar theoretical approach to the solution of the eigenvalue problem, SVD does not require any further rotation along the polarization ellipsoid's principal axes. The algorithm presented here is a robust and fast filter that properly reproduces polarization attributes, amplitude, and phase of the original signal. A major novelty is the enhancement of both elliptical and linear polariz...
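The core of the filter is concrete enough to sketch: take the SVD of the 3 x N signal matrix, keep the first two eigenimages, and scale by a polarization-strength weight. The planarity weight below is one plausible choice built from the singular values; the paper's exact weighting functions may differ.

```python
import numpy as np

def svd_polarization_filter(X):
    """X: 3 x N triaxial signal matrix. Returns the sum of the first two
    eigenimages of the SVD, scaled by a planarity weight (hypothetical form)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    planarity = 1.0 - 2.0 * s[2] / (s[0] + s[1])   # 1 for purely planar motion
    X2 = s[0] * np.outer(U[:, 0], Vt[0]) + s[1] * np.outer(U[:, 1], Vt[1])
    return planarity * X2

# Purely elliptical (rank-2) motion passes through the filter unchanged.
t = np.linspace(0.0, 2.0 * np.pi, 200)
X = np.vstack([np.cos(t), 2.0 * np.sin(t), np.zeros_like(t)])
Y = svd_polarization_filter(X)
```

For noisy 3-D motion, the third singular value grows, the planarity weight shrinks, and the filter attenuates the unpolarized part.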

Proceedings ArticleDOI
01 Jul 2001
TL;DR: A linear method for computing a projective reconstruction from a large number of images is presented and then evaluated, finding that it works with any mixture of line and point correspondence through the constraints these impose on the multilinear tensors.
Abstract: A linear method for computing a projective reconstruction from a large number of images is presented and then evaluated. The method uses planar homographies between views to linearize the resectioning of the cameras. Constraints based on the fundamental matrix, trifocal tensor, or quadrifocal tensor are used to derive relationships between the position vectors of all the cameras at once. The resulting set of equations is solved using an SVD. The algorithm is computationally efficient as it is linear in the number of matched points used. A key feature of the algorithm is that all of the images are processed simultaneously, as in the Sturm-Triggs factorization method, but it differs in not requiring that all points be visible in all views. An additional advantage is that it works with any mixture of line and point correspondences through the constraints these impose on the multilinear tensors. Experiments on both synthetic and real data confirm the method's utility.
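"Solved using an SVD" here refers to the standard recipe for homogeneous linear systems in multi-view geometry: the unit-norm minimizer of ||Ax|| is the right singular vector of the smallest singular value. A minimal sketch of that generic step (the paper's actual constraint matrix is not reproduced):

```python
import numpy as np

def solve_homogeneous(A):
    """Unit-norm minimizer of ||A x||: the right singular vector associated
    with the smallest singular value of A."""
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]

# Toy system with a known one-dimensional nullspace spanned by x0.
x0 = np.array([1.0, 2.0, -1.0])
x0 /= np.linalg.norm(x0)
rng = np.random.default_rng(1)
R = rng.standard_normal((5, 3))
A = R - np.outer(R @ x0, x0)   # project every row orthogonal to x0
x = solve_homogeneous(A)       # recovers x0 up to sign
```

In the reconstruction setting, the stacked camera-position unknowns play the role of `x`, and the tensor constraints supply the rows of `A`.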