Journal Article

Shape from moments - an estimation theory perspective

TL;DR: This paper discusses a set of possible estimation procedures based on the Prony and Pencil methods, relates them to one another, compares them through simulations, and presents an improvement over these methods based on the direct use of the maximum-likelihood estimator, exploiting the above methods as initialization.
Abstract: This paper discusses the problem of recovering a planar polygon from its measured complex moments. These moments correspond to an indicator function defined over the polygon's support. Previous work on this problem gave necessary and sufficient conditions for such a successful recovery process and focused mainly on the case of exact measurements being given. In this paper, we extend these results and treat the same problem in the case where a longer-than-necessary series of noise-corrupted moments is given. Similar to methods found in array processing, system identification, and signal processing, we discuss a set of possible estimation procedures that are based on the Prony and the Pencil methods, relate them one to the other, and compare them through simulations. We then present an improvement over these methods based on the direct use of the maximum-likelihood estimator, exploiting the above methods as initialization. Finally, we show how regularization, and thus the maximum a posteriori probability estimator, could be applied to reflect prior knowledge about the recovered polygon.
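
To make the pencil idea concrete, here is a minimal Python sketch of the noiseless case on moments of the form τ_k = Σ_j a_j z_j^k (the weighted complex moments the paper works with). The vertices, weights, and sizes below are invented for illustration, and the paper's treatment of noisy, longer moment sequences (SVD-based rank reduction, Prony/Pencil variants, ML refinement) is not reproduced here.

```python
# Minimal pencil-method sketch on exact (noise-free) weighted complex moments
# tau_k = sum_j a_j * z_j**k.  All vertices z_j and weights a_j below are made up
# for illustration; they are not taken from the paper.
import numpy as np
from scipy.linalg import eig, hankel

z_true = np.array([1.0 + 0.5j, -0.8 + 1.2j, 0.2 - 1.0j])   # hypothetical "vertices"
a_true = np.array([0.7 - 0.1j, -0.3 + 0.4j, 0.5 + 0.2j])   # hypothetical weights
N = len(z_true)

# Moments tau_0 .. tau_{2N-1}; in the paper these would be measured and noisy.
tau = np.array([np.sum(a_true * z_true**k) for k in range(2 * N)])

# Two shifted square Hankel matrices built from the moment sequence.
H0 = hankel(tau[:N], tau[N - 1:2 * N - 1])   # entries tau_{i+l}
H1 = hankel(tau[1:N + 1], tau[N:2 * N])      # entries tau_{i+l+1}

# The generalized eigenvalues of the pencil (H1, H0) are the vertices z_j.
z_est = eig(H1, H0)[0]
print(np.sort_complex(z_est))
print(np.sort_complex(z_true))
```
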
Citations
Journal Article
TL;DR: Benefiting from the power of multilinear algebra as their mathematical backbone, data analysis techniques using tensor decompositions are shown to have great flexibility in the choice of constraints which match data properties and extract more general latent components in the data than matrix-based methods.
Abstract: The widespread use of multisensor technology and the emergence of big data sets have highlighted the limitations of standard flat-view matrix models and the necessity to move toward more versatile data analysis tools. We show that higher-order tensors (i.e., multiway arrays) enable such a fundamental paradigm shift toward models that are essentially polynomial, the uniqueness of which, unlike the matrix methods, is guaranteed under very mild and natural conditions. Benefiting from the power of multilinear algebra as their mathematical backbone, data analysis techniques using tensor decompositions are shown to have great flexibility in the choice of constraints which match data properties and extract more general latent components in the data than matrix-based methods.

1,250 citations


Cites background from "Shape from moments - an estimation ..."

  • ...Top: Tensorization of a vector or matrix into the so-called quantized format; in scientific computing this facilitates super-compression of large-scale vectors or matrices....


Journal Article
TL;DR: A flexible optimization framework is introduced for nuclear norm minimization of matrices with linear structure, including Hankel, Toeplitz, and moment structures, and applications from diverse fields are catalogued under this framework.
Abstract: We introduce a flexible optimization framework for nuclear norm minimization of matrices with linear structure, including Hankel, Toeplitz, and moment structures, and catalog applications from diverse fields under this framework. We discuss various first-order methods for solving the resulting optimization problem, including alternating direction methods of multipliers, proximal point algorithms, and gradient projection methods. We perform computational experiments to compare these methods on system identification problems and system realization problems. For the system identification problem, the gradient projection method (accelerated by Nesterov's extrapolation techniques) and the proximal point algorithm usually outperform other first-order methods in terms of CPU time on both real and simulated data, for small and large regularization parameters, respectively, while for the system realization problem, the alternating direction method of multipliers, as applied to a certain primal reformulation, usually...
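
As a loose illustration of the structured nuclear-norm framework (not the paper's solvers or code), the sketch below runs a basic ADMM on the toy problem min_x ½‖x − y‖² + λ‖H(x)‖_*, where H(x) is the Hankel matrix built from the sequence x; function names, parameters, and data are all invented.

```python
# Toy ADMM sketch for  min_x  0.5*||x - y||^2 + lam*||H(x)||_*  with H(x) a Hankel matrix.
# Illustrative only: names, parameters and data are made up, not taken from the paper.
import numpy as np
from scipy.linalg import hankel, svd

def hankel_op(x, m):
    """Length-n sequence -> m x (n-m+1) Hankel matrix with entries H[i, j] = x[i + j]."""
    return hankel(x[:m], x[m - 1:])

def antidiag_sum(M):
    """Adjoint of hankel_op: sum the entries of M along each anti-diagonal i + j = k."""
    m, p = M.shape
    out = np.zeros(m + p - 1, dtype=M.dtype)
    for i in range(m):
        out[i:i + p] += M[i, :]
    return out

def hankel_nuclear_denoise(y, lam, rho=1.0, iters=200):
    y = np.asarray(y, dtype=complex)
    n = len(y)
    m = n // 2 + 1
    counts = antidiag_sum(np.ones((m, n - m + 1)))     # how often x_k appears in H(x)
    x = y.copy()
    Z = hankel_op(x, m)
    U = np.zeros_like(Z)                               # scaled dual variable
    for _ in range(iters):
        # Z-update: singular-value soft-thresholding (prox of the nuclear norm).
        W, s, Vt = svd(hankel_op(x, m) + U, full_matrices=False)
        Z = (W * np.maximum(s - lam / rho, 0.0)) @ Vt
        # x-update: entrywise least squares across anti-diagonals.
        x = (y + rho * antidiag_sum(Z - U)) / (1.0 + rho * counts)
        # Dual update.
        U = U + hankel_op(x, m) - Z
    return x

# Example: denoise a noisy sum of two complex exponentials (a rank-2 Hankel signal).
rng = np.random.default_rng(1)
t = np.arange(60)
clean = np.exp(2j * np.pi * 0.12 * t) + 0.5 * np.exp(2j * np.pi * 0.31 * t)
noisy = clean + 0.3 * (rng.standard_normal(60) + 1j * rng.standard_normal(60))
xhat = hankel_nuclear_denoise(noisy, lam=2.0)
print(np.linalg.norm(noisy - clean), np.linalg.norm(xhat - clean))
```
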

492 citations


Cites background from "Shape from moments - an estimation ..."

  • ...Note that the number of vertices is equal to the rank of the Hankel matrix consisting of the moments [16, 23]....


  • ...The problem of determining P given its complex moments has been studied in [16, 38, 49], and arises in many applications such as computer tomography, where X-ray is used to estimate moments of mass distribution, and geophysical inversion, where the goal is to estimate the shape of a region from external gravitational measurements....


Journal Article
TL;DR: This paper shows that many signals with a finite rate of innovation can be sampled and perfectly reconstructed using physically realizable kernels of compact support and a local reconstruction algorithm.
Abstract: Consider the problem of sampling signals which are not bandlimited, but still have a finite number of degrees of freedom per unit of time, such as, for example, nonuniform splines or piecewise polynomials, and call the number of degrees of freedom per unit of time the rate of innovation. Classical sampling theory does not enable a perfect reconstruction of such signals since they are not bandlimited. Recently, it was shown that, by using an adequate sampling kernel and a sampling rate greater than or equal to the rate of innovation, it is possible to reconstruct such signals uniquely. These sampling schemes, however, use kernels with infinite support, and this leads to complex and potentially unstable reconstruction algorithms. In this paper, we show that many signals with a finite rate of innovation can be sampled and perfectly reconstructed using physically realizable kernels of compact support and a local reconstruction algorithm. The class of kernels that we can use is very rich and includes functions satisfying Strang-Fix conditions, exponential splines and functions with rational Fourier transform. This last class of kernels is quite general and includes, for instance, any linear electric circuit. We thus show, with an example, how to estimate a signal of finite rate of innovation at the output of an RC circuit. The case of noisy measurements is also analyzed, and we present a novel algorithm that reduces the effect of noise by oversampling.
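
As a bare-bones illustration of the annihilating-filter (Prony) machinery that underlies such finite-rate-of-innovation recovery, the sketch below recovers K locations from 2K exact moments; the paper's compact-support kernels and local reconstruction algorithm are not shown, and all values are invented.

```python
# Annihilating-filter (Prony) sketch: recover K locations u_k and amplitudes a_k from
# exact "moments" s_m = sum_k a_k * u_k**m, m = 0..2K-1.  Values are made up; the
# paper's sampling kernels and local reconstruction algorithm are not reproduced.
import numpy as np
from scipy.linalg import toeplitz

K = 3
u_true = np.exp(2j * np.pi * np.array([0.13, 0.42, 0.77]))   # locations on the unit circle
a_true = np.array([1.0, 0.5, 2.0])
s = np.array([np.sum(a_true * u_true**m) for m in range(2 * K)])

# Annihilating filter h (monic, length K+1): its convolution with s vanishes.
# Solve the K x K Toeplitz system  T @ h_tail = -s[K:2K],  T[i, j] = s[K-1+i-j].
T = toeplitz(s[K - 1:2 * K - 1], s[K - 1::-1])
h = np.concatenate(([1.0], np.linalg.solve(T, -s[K:2 * K])))

# The locations are the roots of the annihilating polynomial; the amplitudes come from
# a linear (Vandermonde) fit once the locations are known.
u_est = np.roots(h)
V = np.vander(u_est, N=2 * K, increasing=True).T             # V[m, k] = u_k**m
a_est = np.linalg.lstsq(V, s, rcond=None)[0]
print(np.sort_complex(u_est), np.sort_complex(u_true))
```
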

481 citations


Cites background or methods from "Shape from moments - an estimation ..."

  • ...For an insightful review of those techniques we refer to the recent paper [13] and to the book [22]....


  • ...correction coding [3], but also for sampling, interpolation [35], [14] and shape reconstruction [19], [13]....


Journal Article
TL;DR: A new class of tensor decompositions is introduced, where the size is characterized by a set of mode-$n$ ranks, and conditions under which essential uniqueness is guaranteed are derived.
Abstract: In this paper we introduce a new class of tensor decompositions. Intuitively, we decompose a given tensor block into blocks of smaller size, where the size is characterized by a set of mode-$n$ ranks. We study different types of such decompositions. For each type we derive conditions under which essential uniqueness is guaranteed. The parallel factor decomposition and Tucker's decomposition can be considered as special cases in the new framework. The paper sheds new light on fundamental aspects of tensor algebra.

431 citations


Cites background from "Shape from moments - an estimation ..."

  • ...On the other hand, because of the genericity of $C$, we can interpret $(C_r)_{L\times M,3}\cdot(C_r)^{\dagger}_{L\times M,1}$ as $(C_r)_{L\times M,2}\cdot(C_r)^{\dagger}_{L\times M,1}+E_r$, in which $E_r\in\mathbb{K}^{L\times L}$ is a generic perturbation, $1\leq r\leq R$. Perturbation analysis now states that the individual eigenvectors of $T_{I\times J,3}\cdot T^{\dagger}_{I\times J,1}$ do not correspond to those of $T_{I\times J,2}\cdot T^{\dagger}_{I\times J,1}$ [23, 32]....

  • ...We have $T_{I\times J,2}\cdot T^{\dagger}_{I\times J,1}=A\cdot\mathrm{blockdiag}\big((C_1)_{L\times M,2}\cdot(C_1)^{\dagger}_{L\times M,1},\dots,(C_R)_{L\times M,2}\cdot(C_R)^{\dagger}_{L\times M,1}\big)\cdot A^{\dagger}$, where $M = L$....

  • ...(4.7) This means that the columns of $(A^{T})^{\dagger}$ are generalized eigenvectors of the pencil $(T^{T}_{I\times J,1}, T^{T}_{I\times J,2})$ [4, 22]....

  • ...From (2.9) we have $T_{I\times J,1}=A\cdot\mathrm{blockdiag}(c_{11}I_{L_1\times L_1},\dots,c_{1R}I_{L_R\times L_R})\cdot B^{T}$, (4.6) $T_{I\times J,2}=A\cdot\mathrm{blockdiag}(c_{21}I_{L_1\times L_1},\dots,c_{2R}I_{L_R\times L_R})\cdot B^{T}$....

  • ...From this equation it is clear that the column space of any $A_r$ is an invariant subspace of $T_{I\times J,2}\cdot T^{\dagger}_{I\times J,1}$....

Journal Article
TL;DR: In this article, a comprehensive introduction to tensor decompositions is provided from a signal processing perspective, starting from the algebraic foundations, via basic Canonical Polyadic and Tucker decomposition, through to advanced cause-effect and multi-view data analysis schemes.
Abstract: The widespread use of multi-sensor technology and the emergence of big datasets have highlighted the limitations of standard flat-view matrix models and the necessity to move towards more versatile data analysis tools. We show that higher-order tensors (i.e., multiway arrays) enable such a fundamental paradigm shift towards models that are essentially polynomial and whose uniqueness, unlike the matrix methods, is guaranteed under very mild and natural conditions. Benefiting from the power of multilinear algebra as their mathematical backbone, data analysis techniques using tensor decompositions are shown to have great flexibility in the choice of constraints that match data properties, and to find more general latent components in the data than matrix-based methods. A comprehensive introduction to tensor decompositions is provided from a signal processing perspective, starting from the algebraic foundations, via basic Canonical Polyadic and Tucker models, through to advanced cause-effect and multi-view data analysis schemes. We show that tensor decompositions enable natural generalizations of some commonly used signal processing paradigms, such as canonical correlation and subspace techniques, signal separation, linear regression, feature extraction and classification. We also cover computational aspects, and point out how ideas from compressed sensing and scientific computing may be used for addressing the otherwise unmanageable storage and manipulation problems associated with big datasets. The concepts are supported by illustrative real world case studies illuminating the benefits of the tensor framework, as efficient and promising tools for modern signal processing, data analysis and machine learning applications; these benefits also extend to vector/matrix data through tensorization. Keywords: ICA, NMF, CPD, Tucker decomposition, HOSVD, tensor networks, Tensor Train.

369 citations

References
Book
01 Nov 1996

8,608 citations


"Shape from moments - an estimation ..." refers background or methods in this paper

  • ...Then, the solution is given by [10]...


  • ...Numerically, the TLS problem is solved using the singular value decomposition (SVD) [10]....


  • ...The square Vandermonde matrix is nonsingular since the polygon is assumed to be nondegenerate [10], [11], [26]....


  • ...This equation as posed is hard to use for solving for the vertices, given the complex moments, since it is nonlinear: the vertices appear both inside the Vandermonde matrix [10] and are also hidden in the values of the coefficients....


  • ...its eigenvalues are the roots of the polynomial [10]....


Journal Article
TL;DR: Although discussed in the context of direction-of-arrival estimation, ESPRIT can be applied to a wide variety of problems including accurate detection and estimation of sinusoids in noise.
Abstract: An approach to the general problem of signal parameter estimation is described. The algorithm differs from its predecessor in that a total least-squares rather than a standard least-squares criterion is used. Although discussed in the context of direction-of-arrival estimation, ESPRIT can be applied to a wide variety of problems including accurate detection and estimation of sinusoids in noise. It exploits an underlying rotational invariance among signal subspaces induced by an array of sensors with a translational invariance structure. The technique, when applicable, manifests significant performance and computational advantages over previous algorithms such as MEM, Capon's MLM, and MUSIC.
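
To illustrate the rotational-invariance idea behind (TLS-)ESPRIT on a toy problem, the sketch below estimates the frequencies of complex sinusoids in noise from the shift invariance of the signal subspace; the signal parameters and sizes are invented, and this is a bare-bones illustration rather than the algorithm exactly as specified in the paper.

```python
# Toy (TLS-)ESPRIT sketch: estimate sinusoid frequencies from the shift invariance of
# the signal subspace.  All signal parameters are made up for illustration.
import numpy as np
from scipy.linalg import eig, hankel, svd

rng = np.random.default_rng(0)
freqs_true = np.array([0.12, 0.23, 0.31])          # cycles/sample
p, n = len(freqs_true), 200
t = np.arange(n)
x = sum(np.exp(2j * np.pi * f * t) for f in freqs_true)
x = x + 0.5 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Hankel data matrix and its dominant p-dimensional (signal) subspace.
M = 60
Us = svd(hankel(x[:M], x[M - 1:]), full_matrices=False)[0][:, :p]

# Rotational invariance: Us1 @ Phi ~ Us2, solved in the total-least-squares sense via
# the SVD of the stacked matrix [Us1, Us2].
Us1, Us2 = Us[:-1, :], Us[1:, :]
V = svd(np.hstack([Us1, Us2]))[2].conj().T          # right singular vectors as columns
Phi = -V[:p, p:] @ np.linalg.inv(V[p:, p:])         # TLS solution  -V12 @ inv(V22)

# Eigenvalues of Phi lie near exp(2*pi*1j*f); their angles give the frequencies.
freqs_est = np.sort(np.angle(eig(Phi)[0]) / (2 * np.pi))
print(freqs_est, np.sort(freqs_true))
```
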

6,273 citations


"Shape from moments - an estimation ..." refers methods in this paper

  • ...In [17], a relationship between this method and several variants of the ESPRIT method [31], [32] is derived showing comparable performance....


Book
16 Feb 2013
TL;DR: This well written book is enlarged by the following topics: B-splines and their computation, elimination methods for large sparse systems of linear equations, Lanczos algorithm for eigenvalue problems, implicit shift techniques for the LR and QR algorithm, implicit differential equations, differential algebraic systems, new methods for stiff differential equations and preconditioning techniques.
Abstract: This well written book is enlarged by the following topics: $B$-splines and their computation, elimination methods for large sparse systems of linear equations, Lanczos algorithm for eigenvalue problems, implicit shift techniques for the $LR$ and $QR$ algorithm, implicit differential equations, differential algebraic systems, new methods for stiff differential equations, preconditioning techniques and convergence rate of the conjugate gradient algorithm and multigrid methods for boundary value problems. Cf. also the reviews of the German original editions.

6,270 citations

Monograph
26 Sep 2001
TL;DR: 1. The numerical evaluation of expressions; 2. Linear systems of equations; 3. Interpolation and numerical differentiation; 4. Numerical integration; 5. Univariate nonlinear equations; 6. Systems of nonlinear equations.
Abstract: Numerical analysis is an increasingly important link between pure mathematics and its application in science and technology. This textbook provides an introduction to the justification and development of constructive methods that provide sufficiently accurate approximations to the solution of numerical problems, and the analysis of the influence that errors in data, finite-precision calculations, and approximation formulas have on results, problem formulation and the choice of method. It also serves as an introduction to scientific programming in MATLAB, including many simple and difficult, theoretical and computational exercises. A unique feature of this book is the consequent development of interval analysis as a tool for rigorous computation and computer assisted proofs, along with the traditional material.

3,746 citations


"Shape from moments - an estimation ..." refers methods in this paper

  • ...We start by briefly describing the formulation of the shape-from-moments reconstruction problem....


Journal Article
TL;DR: The generalized cross-validation (GCV) method as discussed by the authors is a generalized version of Allen's PRESS, which can be used in subset selection and singular value truncation, and even to choose from among mixtures of these methods.
Abstract: Consider the ridge estimate $\hat{\beta}(\lambda)$ for $\beta$ in the model $y = X\beta + \epsilon$, $\epsilon \sim N(0, \sigma^2 I)$, $\sigma^2$ unknown, where $\hat{\beta}(\lambda) = (X^T X + n\lambda I)^{-1} X^T y$. We study the method of generalized cross-validation (GCV) for choosing a good value for $\lambda$ from the data. The estimate is the minimizer of $V(\lambda)$ given by $V(\lambda) = \frac{\frac{1}{n}\|(I - A(\lambda))y\|^2}{\left[\frac{1}{n}\operatorname{tr}(I - A(\lambda))\right]^2}$, where $A(\lambda) = X(X^T X + n\lambda I)^{-1} X^T$. This estimate is a rotation-invariant version of Allen's PRESS, or ordinary cross-validation. This estimate behaves like a risk improvement estimator, but does not require an estimate of $\sigma^2$, so can be used when $n - p$ is small, or even if $p \geq 2n$ in certain cases. The GCV method can also be used in subset selection and singular value truncation methods for regression, and even to choose from among mixtures of these methods.
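
A small numerical sketch of the GCV rule on synthetic ridge-regression data (sizes and values invented for illustration): evaluate V(λ) on a grid of λ and keep the minimizer.

```python
# GCV sketch for ridge regression on synthetic data: pick lambda minimizing
#   V(lambda) = (1/n)||(I - A(lambda)) y||^2 / [(1/n) tr(I - A(lambda))]^2,
# where A(lambda) = X (X^T X + n*lambda*I)^{-1} X^T.  Data are made up.
import numpy as np

rng = np.random.default_rng(0)
n, p = 80, 15
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:3] = [2.0, -1.0, 0.5]
y = X @ beta + 0.5 * rng.standard_normal(n)

def gcv_score(lmbda):
    A = X @ np.linalg.solve(X.T @ X + n * lmbda * np.eye(p), X.T)   # influence matrix
    resid = y - A @ y
    return (resid @ resid / n) / ((np.trace(np.eye(n) - A) / n) ** 2)

lambdas = np.logspace(-6, 1, 60)
best = lambdas[int(np.argmin([gcv_score(l) for l in lambdas]))]
beta_hat = np.linalg.solve(X.T @ X + n * best * np.eye(p), X.T @ y)
print(best, np.round(beta_hat[:3], 2))
```
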

3,697 citations