
Showing papers on "Circulant matrix published in 2015"


Journal ArticleDOI
TL;DR: A new kernelized correlation filter (KCF) is derived that, unlike other kernel algorithms, has exactly the same complexity as its linear counterpart; a fast multi-channel linear variant, the dual correlation filter (DCF), is also proposed. Both outperform top-ranking trackers such as Struck and TLD on a 50-video benchmark, despite being implemented in a few lines of code.
Abstract: The core component of most modern trackers is a discriminative classifier, tasked with distinguishing between the target and the surrounding environment. To cope with natural image changes, this classifier is typically trained with translated and scaled sample patches. Such sets of samples are riddled with redundancies—any overlapping pixels are constrained to be the same. Based on this simple observation, we propose an analytic model for datasets of thousands of translated patches. By showing that the resulting data matrix is circulant, we can diagonalize it with the discrete Fourier transform, reducing both storage and computation by several orders of magnitude. Interestingly, for linear regression our formulation is equivalent to a correlation filter, used by some of the fastest competitive trackers. For kernel regression, however, we derive a new kernelized correlation filter (KCF), that unlike other kernel algorithms has the exact same complexity as its linear counterpart. Building on it, we also propose a fast multi-channel extension of linear correlation filters, via a linear kernel, which we call dual correlation filter (DCF). Both KCF and DCF outperform top-ranking trackers such as Struck or TLD on a 50 videos benchmark, despite running at hundreds of frames-per-second, and being implemented in a few lines of code (Algorithm 1). To encourage further developments, our tracking framework was made open-source.
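The circulant diagonalization the abstract relies on can be sketched in a few lines (a minimal NumPy illustration of the identity, not the authors' tracking code; the variable names are ours):

```python
import numpy as np

# A circulant matrix C, whose columns are all cyclic shifts of a base
# vector c, is diagonalized by the DFT: its eigenvalues are fft(c) and its
# eigenvectors are the Fourier basis, so products with C cost O(n log n).
c = np.array([1.0, 2.0, 3.0, 4.0])
n = len(c)
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

# Matrix-vector product as circular convolution via the FFT:
v = np.array([0.5, -1.0, 2.0, 0.0])
fast = np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(v)))
assert np.allclose(C @ v, fast)

# Each Fourier vector f_k is an eigenvector with eigenvalue fft(c)[k]:
eigs = np.fft.fft(c)
for k in range(n):
    f_k = np.exp(2j * np.pi * k * np.arange(n) / n)
    assert np.allclose(C @ f_k, eigs[k] * f_k)
```

This is the reduction "by several orders of magnitude" in the abstract: the dense data matrix is never formed, only FFTs of single patches.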

4,994 citations


Proceedings ArticleDOI
07 Dec 2015
TL;DR: This work explores the redundancy of parameters in deep neural networks by replacing the conventional linear projection in fully-connected layers with the circulant projection, which substantially reduces memory footprint and enables the use of the Fast Fourier Transform to speed up the computation.
Abstract: We explore the redundancy of parameters in deep neural networks by replacing the conventional linear projection in fully-connected layers with the circulant projection. The circulant structure substantially reduces memory footprint and enables the use of the Fast Fourier Transform to speed up the computation. Considering a fully-connected neural network layer with d input nodes, and d output nodes, this method improves the time complexity from O(d^2) to O(d log d) and space complexity from O(d^2) to O(d). The space savings are particularly important for modern deep convolutional neural network architectures, where fully-connected layers typically contain more than 90% of the network parameters. We further show that the gradient computation and optimization of the circulant projections can be performed very efficiently. Our experiments on three standard datasets show that the proposed approach achieves this significant gain in storage and efficiency with minimal increase in error rate compared to neural networks with unstructured projections.
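The claimed O(d log d) time and O(d) space follow from computing the circulant projection with FFTs; a hedged sketch of the forward pass (illustrative names, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
w = rng.standard_normal(d)   # one vector defines the whole layer: O(d) storage
x = rng.standard_normal(d)   # layer input

# Dense equivalent of the circulant weight matrix: W[i, j] = w[(i - j) % d]
W = np.array([[w[(i - j) % d] for j in range(d)] for i in range(d)])

# FFT-based projection, O(d log d) instead of the O(d^2) dense product:
y = np.real(np.fft.ifft(np.fft.fft(w) * np.fft.fft(x)))
assert np.allclose(W @ x, y)
```

In a real layer the nonlinearity is then applied to y; the gradient with respect to w is another circular convolution, which is why training also stays at O(d log d) per layer.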

299 citations


Posted Content
TL;DR: The MSGP framework enables the use of Gaussian processes on billions of datapoints without requiring distributed inference or severe assumptions, reducing the standard GP learning and inference complexity to $O(n)$ and the standard test-point prediction complexity to $O(1)$.
Abstract: We introduce a framework and early results for massively scalable Gaussian processes (MSGP), significantly extending the KISS-GP approach of Wilson and Nickisch (2015). The MSGP framework enables the use of Gaussian processes (GPs) on billions of datapoints, without requiring distributed inference, or severe assumptions. In particular, MSGP reduces the standard $O(n^3)$ complexity of GP learning and inference to $O(n)$, and the standard $O(n^2)$ complexity per test point prediction to $O(1)$. MSGP involves 1) decomposing covariance matrices as Kronecker products of Toeplitz matrices approximated by circulant matrices. This multi-level circulant approximation allows one to unify the orthogonal computational benefits of fast Kronecker and Toeplitz approaches, and is significantly faster than either approach in isolation; 2) local kernel interpolation and inducing points to allow for arbitrarily located data inputs, and $O(1)$ test time predictions; 3) exploiting block-Toeplitz Toeplitz-block structure (BTTB), which enables fast inference and learning when multidimensional Kronecker structure is not present; and 4) projections of the input space to flexibly model correlated inputs and high dimensional data. The ability to handle many ($m \approx n$) inducing points allows for near-exact accuracy and large scale kernel learning.
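The Toeplitz-by-circulant approximation in step 1 builds on a classical trick: any n×n symmetric Toeplitz covariance embeds in a 2n×2n circulant, so exact Toeplitz matrix-vector products cost O(n log n). A minimal sketch (our names, not the MSGP code):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
t = rng.standard_normal(n)   # first column of a symmetric Toeplitz matrix T
T = np.array([[t[abs(i - j)] for j in range(n)] for i in range(n)])

# Embed T in a 2n x 2n circulant with first column [t_0..t_{n-1}, 0, t_{n-1}..t_1]:
c = np.concatenate([t, [0.0], t[:0:-1]])

# T @ v is the first half of the circulant product with a zero-padded v:
v = rng.standard_normal(n)
v_pad = np.concatenate([v, np.zeros(n)])
full = np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(v_pad)))
assert np.allclose(T @ v, full[:n])
```

MSGP combines this embedding with Kronecker structure across dimensions, which is where the further savings over either approach in isolation come from.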

90 citations


Posted Content
TL;DR: In this paper, the authors explore the redundancy of parameters in deep neural networks by replacing the conventional linear projection in fully-connected layers with the circulant projection, which substantially reduces memory footprint and enables the use of the Fast Fourier Transform to speed up the computation.
Abstract: We explore the redundancy of parameters in deep neural networks by replacing the conventional linear projection in fully-connected layers with the circulant projection. The circulant structure substantially reduces memory footprint and enables the use of the Fast Fourier Transform to speed up the computation. Considering a fully-connected neural network layer with d input nodes, and d output nodes, this method improves the time complexity from O(d^2) to O(d log d) and space complexity from O(d^2) to O(d). The space savings are particularly important for modern deep convolutional neural network architectures, where fully-connected layers typically contain more than 90% of the network parameters. We further show that the gradient computation and optimization of the circulant projections can be performed very efficiently. Our experiments on three standard datasets show that the proposed approach achieves this significant gain in storage and efficiency with minimal increase in error rate compared to neural networks with unstructured projections.

87 citations


Book ChapterDOI
08 Mar 2015
TL;DR: In this article, the authors provide new methods to look for lightweight MDS matrices, and in particular involutory ones, by proving many new properties and equivalence classes for various MDS matrix constructions such as circulant, Hadamard, Cauchy, and Hadamard-Cauchy.
Abstract: In this article, we provide new methods to look for lightweight MDS matrices, and in particular involutory ones. By proving many new properties and equivalence classes for various MDS matrix constructions such as circulant, Hadamard, Cauchy and Hadamard-Cauchy, we exhibit new search algorithms that greatly reduce the search space and make lightweight MDS matrices of rather high dimension possible to find. We also explain why the choice of the irreducible polynomial might have a significant impact on the lightweightness, and, contrary to classical belief, we show that the Hamming weight has no direct impact. Even though we focused our studies on involutory MDS matrices, we also obtained results for non-involutory MDS matrices. Overall, using Hadamard or Hadamard-Cauchy constructions, we provide the (involutory or non-involutory) MDS matrices with the least possible XOR gates for the classical dimensions \(4 \times 4\), \(8 \times 8\), \(16 \times 16\) and \(32 \times 32\) in \(\mathrm {GF}(2^4)\) and \(\mathrm {GF}(2^8)\). Compared to the best known matrices, some of our new candidates save up to 50% of the XOR gates required for a hardware implementation. Finally, our work indicates that involutory MDS matrices are really interesting building blocks for designers, as they can be implemented with almost the same number of XOR gates as non-involutory MDS matrices, the latter being usually non-lightweight when the inverse matrix is required.

75 citations


Journal ArticleDOI
TL;DR: The proposed scheme achieves better PAPR performance and lower computational complexity than T-OFDM, with equivalent BER performance when the precoding matrices are designed to obtain full frequency diversity.
Abstract: The computational complexity and peak-to-average power ratio (PAPR) of conventional precoded orthogonal frequency division multiplexing (OFDM) systems can be reduced using a T-OFDM precoded system based on the Walsh–Hadamard matrix. The present paper proposes a novel precoding scheme for further reducing the computational complexity and PAPR of T-OFDM. In the proposed scheme, the precoding matrix is combined with an inverse discrete Fourier transform to construct a new transform matrix at the transmitter. Notably, the transform matrix is both unitary and circulant, with each column being a perfect Gaussian integer sequence containing just four non-zero elements of $\{\pm 1,\pm j\}$ . A low-complexity receiver is additionally constructed for the proposed precoding scheme. A closed-form expression is derived for the bit error rate (BER) in T-OFDM and the proposed precoding scheme under frequency-selective fading channels. The simulation results for the BER are shown to be in good agreement with the mathematical derivations. In addition, it is demonstrated that T-OFDM and the proposed scheme have an equivalent BER performance when their precoding matrices are designed in such a way as to obtain full frequency diversity. However, the proposed scheme has a better PAPR performance and a lower computational complexity than T-OFDM.

69 citations


Journal ArticleDOI
TL;DR: A scheme of fast IAA (IAA-F) is proposed for mitigating the computational burden of scanning radar angular superresolution, and numerical results illustrate that the proposed IAA-F offers a time complexity reduction without loss of performance.
Abstract: The angular superresolution technique is of great significance in enhancing the azimuth resolution when the real aperture is constrained. Recently, based on weighted least squares (WLS), the iterative adaptive approach (IAA) has been applied to scanning radar angular superresolution. The resulting estimates present noticeably superior performance compared with the existing approaches. However, the improved performance of IAA comes at the cost of high computational complexity. In this paper, a scheme of fast IAA (IAA-F) is proposed for mitigating the computational burden. First, based on the circulant structure of the steering matrix, the covariance matrix and WLS estimates are rewritten via fast convolution. Second, according to the similar block tridiagonal property shared by the covariance matrix and the Schur complement of its submatrix, the fast matrix inversion works in an improved divide and conquer (D&C) manner, by recursively breaking down the fast-inversion problem into the same subproblem. Numerical results illustrate that the proposed IAA-F offers a time complexity reduction without loss of performance.

65 citations


Journal ArticleDOI
TL;DR: The GMRES method with a block circulant preconditioner is proposed to solve the relevant linear systems, and conclusions about the convergence analysis and spectrum of the preconditioned matrices are drawn when the diffusion coefficients are constant.

50 citations


Journal ArticleDOI
TL;DR: In this paper, the authors considered the class of complex random vectors whose covariance matrix is linearly parameterized by a basis of Hermitian Toeplitz (HT) matrices and derived the maximum compression ratios that preserve all second-order information.
Abstract: The class of complex random vectors whose covariance matrix is linearly parameterized by a basis of Hermitian Toeplitz (HT) matrices is considered, and the maximum compression ratios that preserve all second-order information are derived; the statistics of the uncompressed vector must be recoverable from a set of linearly compressed observations. This kind of vector arises naturally when sampling wide-sense stationary random processes and features a number of applications in signal and array processing. Explicit guidelines to design optimal and nearly optimal schemes operating both in a periodic and nonperiodic fashion are provided by considering two of the most common linear compression schemes, which we classify as dense or sparse. It is seen that the maximum compression ratios depend on the structure of the HT subspace containing the covariance matrix of the uncompressed observations. Compression patterns attaining these maximum ratios are found for the case without structure as well as for the cases with circulant or banded structure. Universal samplers are also proposed to compress unknown HT subspaces.

48 citations


Posted Content
TL;DR: This work develops a new variant of AMP based on a unitary transformation of the original model, called UT-AMP, where the unitary matrix is available for any matrix A, e.g., the conjugate transpose of the left singular matrix of A, or a normalized DFT matrix for any circulant A.
Abstract: Approximate message passing (AMP) and its variants, developed based on loopy belief propagation, are attractive for estimating a vector x from a noisy version of z = Ax, which arises in many applications. For a large A with i.i.d. elements, AMP can be characterized by the state evolution and exhibits fast convergence. However, it has been shown that AMP may easily diverge for a generic A. In this work, we develop a new variant of AMP based on a unitary transformation of the original model (hence the variant is called UT-AMP), where the unitary matrix is available for any matrix A, e.g., the conjugate transpose of the left singular matrix of A, or a normalized DFT (discrete Fourier transform) matrix for any circulant A. We prove that, in the case of Gaussian priors, UT-AMP always converges for any matrix A. It is observed that UT-AMP is much more robust than the original AMP for difficult A and exhibits fast convergence. A special form of UT-AMP with a circulant A was used in our previous work [13] for turbo equalization. This work extends it to a generic A, and provides a theoretical investigation on the convergence.

47 citations


Posted Content
11 Feb 2015
TL;DR: This work proposes to replace the conventional linear projection with the circulant projection, which enables the use of the Fast Fourier Transform to speed up the computation of a fully-connected neural network layer.
Abstract: The basic computation of a fully-connected neural network layer is a linear projection of the input signal followed by a non-linear transformation. The linear projection step consumes the bulk of the processing time and memory footprint. In this work, we propose to replace the conventional linear projection with the circulant projection. The circulant structure enables the use of the Fast Fourier Transform to speed up the computation. Considering a neural network layer with d input nodes, and d output nodes, this method improves the time complexity from O(d^2) to O(d log d) and space complexity from O(d^2) to O(d). We further show that the gradient computation and optimization of the circulant projections can be performed very efficiently. Our experiments on three standard datasets show that the proposed approach achieves this significant gain in efficiency and storage with minimal loss of accuracy compared to neural networks with unstructured projections.

Journal ArticleDOI
TL;DR: The mechanisms behind fractional transport on circulant networks and how this long-range process dynamically induces the small-world property in different structures are analyzed.
Abstract: In this paper, we study fractional random walks on networks defined from the equivalent of the fractional diffusion equation in graphs. We explore this process analytically in circulant networks; in particular, interacting cycles and limit cases such as a ring and a complete graph. From the spectra and the eigenvectors of the Laplacian matrix, we deduce explicit results for different quantities that characterize this dynamical process. We obtain analytical expressions for the fractional transition matrix, the fractional degrees and the average probability of return of the random walker. Also, we discuss the Kemeny constant, which gives the average number of steps necessary to reach any site of the network. Throughout this work, we analyze the mechanisms behind fractional transport on circulant networks and how this long-range process dynamically induces the small-world property in different structures.
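The spectral shortcut used throughout such analyses is that the Laplacian of a circulant network is itself circulant, so its eigenvalues are the DFT of its first column; a small sketch for the ring (cycle) case, with our own illustrative names:

```python
import numpy as np

n = 8
# Laplacian of a ring on n nodes is circulant: first column [2, -1, 0, ..., 0, -1].
col = np.zeros(n)
col[0], col[1], col[-1] = 2.0, -1.0, -1.0
L = np.array([[col[(i - j) % n] for j in range(n)] for i in range(n)])

# Spectrum for free from the DFT of the first column: 2 - 2 cos(2 pi k / n).
eigs_fft = np.sort(np.fft.fft(col).real)
eigs_dense = np.sort(np.linalg.eigvalsh(L))
assert np.allclose(eigs_fft, eigs_dense)
```

Fractional powers of L, which define the fractional diffusion studied in the paper, can then be applied by taking the same fractional power of these eigenvalues in the Fourier domain.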

Journal ArticleDOI
TL;DR: In this article, the authors study two applications of standard Gaussian random multipliers and prove that with a probability close to 1 such a multiplier is expected to numerically stabilize Gaussian elimination with no pivoting as well as block Gaussian elimination.

Journal ArticleDOI
TL;DR: By the simple device of reordering, a circulant preconditioned short recurrence Krylov subspace iterative method of minimum residual type for nonsymmetric (and possibly highly nonnormal) Toeplitz systems is rigorously established.
Abstract: Circulant preconditioning for symmetric Toeplitz linear systems is well established; theoretical guarantees of fast convergence for the conjugate gradient method are descriptive of the convergence seen in computations. This has led to robust and highly efficient solvers based on use of the fast Fourier transform exactly as originally envisaged in [G. Strang, Stud. Appl. Math., 74 (1986), pp. 171--176]. For nonsymmetric systems, the lack of generally descriptive convergence theory for most iterative methods of Krylov type has provided a barrier to such a comprehensive guarantee, though several methods have been proposed and some analysis of performance with the normal equations is available. In this paper, by the simple device of reordering, we rigorously establish a circulant preconditioned short recurrence Krylov subspace iterative method of minimum residual type for nonsymmetric (and possibly highly nonnormal) Toeplitz systems. Convergence estimates similar to those in the symmetric case are established.
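The well-established symmetric baseline the paper builds on can be sketched with Strang's circulant preconditioner applied through FFTs (a minimal illustration using conjugate gradients for an SPD Toeplitz system, not the paper's nonsymmetric minimum-residual method; names are ours):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 64
# Symmetric positive definite Toeplitz system: 4 on the diagonal, -1 next to it.
t = np.zeros(n)
t[0], t[1] = 4.0, -1.0
T = np.array([[t[abs(i - j)] for j in range(n)] for i in range(n)])
b = np.ones(n)

# Strang's preconditioner copies the central diagonals of T into a circulant,
# whose inverse is applied in O(n log n) through its DFT eigenvalues.
c = np.zeros(n)
c[0], c[1], c[-1] = 4.0, -1.0, -1.0
c_eigs = np.fft.fft(c).real

M = LinearOperator((n, n),
                   matvec=lambda r: np.real(np.fft.ifft(np.fft.fft(r) / c_eigs)))
x, info = cg(T, b, M=M)
assert info == 0 and np.allclose(T @ x, b, atol=1e-3)
```

The paper's contribution is a reordering that makes an analogous short-recurrence, circulant-preconditioned iteration rigorous for nonsymmetric Toeplitz systems.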

Proceedings Article
10 Jun 2015
TL;DR: Huang et al. as mentioned in this paper proposed a tensor decomposition algorithm for parameter estimation of convolutional models, which is embarrassingly parallel and consists of simple operations such as fast Fourier transforms and matrix multiplications.
Abstract (Huang, Furong; Anandkumar, Animashree): Tensor methods have emerged as a powerful paradigm for consistent learning of many latent variable models such as topic models, independent component analysis and dictionary learning. Model parameters are estimated via CP decomposition of the observed higher order input moments. However, in many domains, additional invariances such as shift invariances exist, enforced via models such as convolutional dictionary learning. In this paper, we develop novel tensor decomposition algorithms for parameter estimation of convolutional models. Our algorithm is based on the popular alternating least squares method, but with efficient projections onto the space of stacked circulant matrices. Our method is embarrassingly parallel and consists of simple operations such as fast Fourier transforms and matrix multiplications. Our algorithm converges to the dictionary much faster and more accurately compared to the alternating minimization over filters and activation maps.
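For a single block, the "efficient projections onto the space of stacked circulant matrices" reduce to averaging each cyclic diagonal, which is the Frobenius-norm projection onto the circulant subspace. A hedged single-block sketch, not the authors' implementation:

```python
import numpy as np

def project_to_circulant(A):
    """Nearest circulant matrix in Frobenius norm: average each cyclic diagonal."""
    n = A.shape[0]
    c = np.zeros(n)
    for i in range(n):
        for j in range(n):
            c[(i - j) % n] += A[i, j]
    c /= n
    return np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
P = project_to_circulant(A)

# The projection is idempotent: a circulant matrix is left unchanged.
assert np.allclose(project_to_circulant(P), P)
# Pythagoras for an orthogonal projection: ||A - P||^2 + ||P||^2 = ||A||^2.
assert np.isclose(np.linalg.norm(A - P) ** 2 + np.linalg.norm(P) ** 2,
                  np.linalg.norm(A) ** 2)
```

In practice the averaging is done with an FFT rather than the explicit double loop shown here, which is what makes the alternating least squares step cheap.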

Journal ArticleDOI
TL;DR: A new quantum scheme is detailed to encode Fourier coefficients in the computational basis, with success probability $1-\delta$ and accuracy $\epsilon$ for each Fourier coefficient.
Abstract: The conventional Quantum Fourier Transform, with exponential speedup compared to the classical Fast Fourier Transform, has played an important role in quantum computation as a vital part of many quantum algorithms (most prominently, Shor's factoring algorithm). However, situations arise where it is not sufficient to encode the Fourier coefficients within the quantum amplitudes, for example in the implementation of control operations that depend on Fourier coefficients. In this paper, we detail a new quantum algorithm to encode the Fourier coefficients in the computational basis, with success probability $1-\delta$ and desired precision $\epsilon$. Its time complexity depends polynomially on $\log(N)$, where $N$ is the problem size, and linearly on $\log(1/\delta)$ and $1/\epsilon$. We also discuss an application of potential practical importance, namely the simulation of circulant Hamiltonians.

Journal ArticleDOI
TL;DR: Theoretical and experimental results involving numerical solutions of FDEs demonstrate that the proposed k-step preconditioner efficiently accelerates the GMRES solver for non-Hermitian Toeplitz systems.

Journal ArticleDOI
TL;DR: This study focuses on numerically solving these FPDEs via a fully implicit scheme with the shifted Grünwald approximation; the circulant-preconditioned generalized minimal residual method, whose fast convergence is proved theoretically, is incorporated for solving the resultant linear systems.
Abstract: In recent years, considerable literature has proposed the more general class of exponential Lévy processes as the underlying model for prices of financial quantities, which thus better explain many important empirical facts of financial markets. Finite moment log stable, Carr–Geman–Madan–Yor and KoBoL models are chosen from the above-mentioned models as the dynamics of underlying equity prices in this paper. With such models, pricing barrier options, one kind of financial derivative, is transformed into solving specific fractional partial differential equations (FPDEs). This study focuses on numerically solving these FPDEs via a fully implicit scheme with the shifted Grünwald approximation. The circulant-preconditioned generalized minimal residual method, whose fast convergence is proved theoretically, is incorporated for solving the resultant linear systems. Numerical examples are given to demonstrate the effectiveness of the proposed preconditioner and show the accuracy of our method compared with the Fourier cosine expansion method as a benchmark.

Posted Content
TL;DR: This paper proposes a new low-rank tensor model based on the circulant algebra, namely, twist tensor nuclear norm or t-TNN for short, which exploits the horizontal translation relationship between the frames in a video, and endows the t- TNN model with a more powerful ability to reconstruct panning videos than the existing state-of-the-art low- rank models.
Abstract: In this paper, we propose a new low-rank tensor model based on the circulant algebra, namely, twist tensor nuclear norm or t-TNN for short. The twist tensor denotes a 3-way tensor representation to laterally store 2D data slices in order. On one hand, t-TNN convexly relaxes the tensor multi-rank of the twist tensor in the Fourier domain, which allows an efficient computation using FFT. On the other hand, t-TNN is equal to the nuclear norm of the block circulant matricization of the twist tensor in the original domain, which extends the traditional matrix nuclear norm in a block circulant way. We test the t-TNN model on a video completion application that aims to fill missing values, and the experimental results validate its effectiveness, especially when dealing with video recorded by a non-stationary panning camera. The block circulant matricization of the twist tensor can be transformed into a circulant block representation with nuclear norm invariance. This representation, after transformation, exploits the horizontal translation relationship between the frames in a video, and endows the t-TNN model with a more powerful ability to reconstruct panning videos than the existing state-of-the-art low-rank models.

Journal ArticleDOI
TL;DR: Necessary and sufficient conditions are presented for the invertibility of some circulant matrices that depend on three parameters; moreover, the inverse is explicitly computed.

Journal ArticleDOI
TL;DR: In this article, formulas are obtained for the determinants and inverses of the circulant matrix associated with numbers defined by a recursive relation with given initial conditions.
Abstract: We consider the circulant matrix associated with the numbers defined by a recursive relation with given initial conditions. We obtain some formulas for the determinants and inverses of these matrices. Some bounds for their spectral norms are obtained as applications.
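Determinant and inverse formulas of the kind derived here all rest on the DFT eigenvalues of a circulant matrix; a generic sketch (illustrative numbers, not the paper's specific recursive family):

```python
import numpy as np

# For a circulant C with first column c: det C = prod(fft(c)), and C^{-1} is
# the circulant whose eigenvalues are 1 / fft(c).
c = np.array([5.0, 1.0, 2.0, 1.0])
n = len(c)
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

eigs = np.fft.fft(c)
assert np.isclose(np.prod(eigs).real, np.linalg.det(C))

cinv = np.real(np.fft.ifft(1.0 / eigs))   # first column of C^{-1}
Cinv = np.array([[cinv[(i - j) % n] for j in range(n)] for i in range(n)])
assert np.allclose(C @ Cinv, np.eye(n))
```

Closed-form determinant and inverse formulas for a specific family, as in this paper, come from evaluating fft(c) symbolically for that family.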

Journal ArticleDOI
TL;DR: The invertibility of the generalized Lucas skew circulant matrix is discussed, and the determinant and inverse of generalized Lucas skew left circulant type matrices are obtained, respectively.

Journal ArticleDOI
TL;DR: This paper introduces a new type of circulant-like matrices, involutory by construction, called Type-II circulant-like matrices; it constructs MDS matrices suitable for lightweight cryptography for d up to 8 and considers the orthogonal and involutory properties of such matrices.
Abstract: MDS matrices incorporate diffusion layers in block ciphers and hash functions. MDS matrices are in general not sparse and have a large description and thus induce costly implementations both in hardware and software. It is also nontrivial to find MDS matrices which could be used in lightweight cryptography. In the AES MixColumn operation, a circulant MDS matrix is used which is efficient as its elements are of low Hamming weight, but no general constructions and study of MDS matrices from d×d circulant matrices for arbitrary d are available in the literature. In a SAC 2004 paper, Junod et al. constructed a new class of efficient matrices whose submatrices were circulant matrices, and they coined the term circulating-like matrices for this new class of matrices. We call these matrices Type-I circulant-like matrices. In this paper we introduce a new type of circulant-like matrices which are involutory by construction, and we call them Type-II circulant-like matrices. We study the MDS properties of d×d circulant, Type-I and Type-II circulant-like matrices and construct new and efficient MDS matrices which are suitable for lightweight cryptography for d up to 8. We also consider orthogonal and involutory properties of such matrices and study the construction of efficient MDS matrices whose inverses are also efficient. We explore some interesting and useful properties of circulant, Type-I and Type-II circulant-like matrices which are prevalent in many parts of mathematics and computer science.

Proceedings Article
25 Jul 2015
TL;DR: A novel random feature mapping method is proposed that uses a signed Circulant Random Matrix (CRM) instead of an unstructured random matrix to project input data; it is proved that approximating the Gaussian kernel using this mapping method is unbiased and does not increase the variance.
Abstract: Random feature mappings have been successfully used for approximating non-linear kernels to scale up kernel methods. Some work aims at speeding up the feature mappings, but increases the variance of the approximation. In this paper, we propose a novel random feature mapping method that uses a signed Circulant Random Matrix (CRM) instead of an unstructured random matrix to project input data. The signed CRM has linear space complexity, as the whole signed CRM can be recovered from one column of the CRM, and ensures log-linear time complexity to compute the feature mapping using the Fast Fourier Transform (FFT). Theoretically, we prove that approximating the Gaussian kernel using our mapping method is unbiased and does not increase the variance. Experimentally, we demonstrate that our proposed mapping method is time and space efficient while retaining accuracy similar to state-of-the-art random feature mapping methods. Our proposed random feature mapping method can be implemented easily and makes kernel methods scalable and practical for large-scale training and prediction problems.
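The signed-CRM projection can be sketched as follows (a hedged illustration under our own naming, not the paper's code): the dense Gaussian matrix of ordinary random Fourier features is replaced by circ(w) composed with a random ±1 diagonal, so the projection runs through the FFT in O(d log d) with O(d) storage.

```python
import numpy as np

rng = np.random.default_rng(3)
d, sigma = 8, 1.0
w = rng.standard_normal(d) / sigma        # one column defines the circulant
signs = rng.choice([-1.0, 1.0], size=d)   # random +/-1 diagonal
b = rng.uniform(0.0, 2.0 * np.pi, size=d)

def project(x):
    # circ(w) @ (signs * x), computed in O(d log d) with FFTs
    return np.real(np.fft.ifft(np.fft.fft(w) * np.fft.fft(signs * x)))

def features(x):
    # Random Fourier feature map: features(x) @ features(y) approximates the
    # Gaussian kernel exp(-||x - y||^2 / (2 sigma^2)) as d grows.
    return np.sqrt(2.0 / d) * np.cos(project(x) + b)

# Sanity check: the FFT projection matches the dense signed circulant matrix.
x = rng.standard_normal(d)
C = np.array([[w[(i - j) % d] for j in range(d)] for i in range(d)])
assert np.allclose(project(x), C @ (signs * x))
```

The sign flips are what decorrelate the rows of the structured matrix enough for the unbiasedness and variance results claimed in the abstract.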

Journal ArticleDOI
TL;DR: A patch-based image denoising method is developed in this paper by introducing a new type of image self-similarity obtained by cyclic shift, called circulant similarity; it shows very competitive performance with state-of-the-art denoising methods, especially on images corrupted by strong noise.

Journal ArticleDOI
TL;DR: The invertibility of the Tribonacci skew circulant matrix is shown, and the determinant and inverse matrix are presented based on constructing the transformation matrices.

Journal ArticleDOI
TL;DR: In this article, the authors studied norms of circulant and r-circulant matrices involving harmonic Fibonacci and hyperharmonic Fibonacci numbers and obtained inequalities by using matrix norms.
Abstract: In this paper, we study norms of circulant and r-circulant matrices involving harmonic Fibonacci and hyperharmonic Fibonacci numbers. We obtain inequalities by using matrix norms.

Journal ArticleDOI
TL;DR: It is observed that, since the Gram matrix of the dictionary matrix is close to the identity matrix in the case of the proposed modified SBCM, it helps to reconstruct medical images of very good quality.
Abstract: Compressive sensing theory enables faithful reconstruction of signals, sparse in domain Ψ, at a sampling rate lower than the Nyquist criterion, while using a sampling or sensing matrix Φ which satisfies the restricted isometry property. The role played by the sensing matrix Φ and sparsity matrix Ψ is vital for faithful reconstruction. If the sensing matrix is dense, then it takes large storage space and leads to high computational cost. In this paper, effort is made to design a sparse sensing matrix with the least incurred computational cost while maintaining the quality of the reconstructed image. The design approach followed is based on a sparse block circulant matrix (SBCM) with few modifications. The other sparse sensing matrix used consists of 15 ones in each column. The medical images used are acquired from US, MRI and CT modalities. Image quality measurement parameters are used to compare the performance of reconstructed medical images using various sensing matrices. It is observed that, since the Gram matrix of the dictionary matrix...

Proceedings ArticleDOI
01 Dec 2015
TL;DR: Solutions to weighted entropy functionals are considered, and it is shown that all rational solutions of a certain bounded degree can be characterized by them; identification of spectra based on simultaneous covariance and cepstral matching is also considered.
Abstract: Rational functions play a fundamental role in systems engineering for modelling, identification, and control applications. In this paper we extend the framework by Lindquist and Picci for obtaining such models from the circulant trigonometric moment problems, from the one-dimensional to the multidimensional setting in the sense that the spectrum domain is multidimensional. We consider solutions to weighted entropy functionals, and show that all rational solutions of certain bounded degree can be characterized by these. We also consider identification of spectra based on simultaneous covariance and cepstral matching, and apply this theory for image compression. This provides an approximation procedure for moment problems where the moment integral is over a multidimensional domain, and is also a step towards a realization theory for random fields.

Proceedings ArticleDOI
15 Jul 2015
TL;DR: A fast Newton algorithm is developed for computing the solution by utilizing the structure of the Hessian, extending a Toeplitz-plus-Hankel solver to the block case and reducing the computational complexity of the Newton search from O(n^3) to O(n^2), where n corresponds to the number of covariances and cepstral coefficients.
Abstract: The rational covariance extension problem is to parametrize the family of rational spectra of bounded degree that matches a given set of covariances. This article treats a circulant version of this problem, where the underlying process is periodic and we seek a spectrum that also matches a set of given cepstral coefficients. The interest in the circulant problem stems partly from the fact that this problem is a natural approximation of the non-periodic problem, but is also a tool in itself for analysing periodic processes. We develop a fast Newton algorithm for computing the solution utilizing the structure of the Hessian. This is done by extending a current algorithm for Toeplitz-plus-Hankel systems to the block-Toeplitz-plus-block-Hankel case. We use this algorithm to reduce the computational complexity of the Newton search from O(n^3) to O(n^2), where n corresponds to the number of covariances and cepstral coefficients.